
AMD Zen3+ Architecture and Ryzen 6000 "Rembrandt" Mobile Processors Detailed

The CPU-side improvements mostly target efficiency, and on desktop that's not as much of a concern, since AMD is already winning that race comfortably, and with 350W GPUs, saving 50W on a CPU seems like wasted effort.
I know that deep down; but still, I'd love it more for the theoretical idea that one could build a small home server that can more intelligently idle down when no one's home, or even leave a personal PC on but idling for longer stretches. Basically, that extra efficiency would provide a little more savings on almost-always-on systems, where the savings are counted over longer stretches of time rather than for mobility/portability reasons.
 
I know that deep down; but still, I'd love it more for the theoretical idea that one could build a small home server that can more intelligently idle down when no one's home, or even leave a personal PC on but idling for longer stretches. Basically, that extra efficiency would provide a little more savings on almost-always-on systems, where the savings are counted over longer stretches of time rather than for mobility/portability reasons.
Well, you'll potentially see desktop APUs with most of the benefits of Rembrandt, since it's identical silicon; the only real difference is the display panel connected, if the home server isn't headless.

There's way too much power wasted elsewhere in an ATX build with a regular board, slots, DDR5, chipset, fans, etc., but something like a Zotac ZBOX, which is far more tightly integrated, may give you the savings you want.
 
I'm eagerly anticipating real-world IGP testing of a 15-28W R7 6800U or a 35W R7 6800HS.

This is, without a doubt, the sweet spot, and it may be the first time in a very long time that gaming on battery power isn't cut short halfway through a short-haul flight.

I can play Civ 6, Cities: Skylines, Humankind and those sorts of games, so I should be all good to go for a non-dGPU laptop and still be able to entertain myself when required :D
In Civ 6, my 4800H did 4 hours on the iGPU on a plane, power-optimized with some custom tuning, at 30 FPS :D
 
I know that deep down; but still, I'd love it more for the theoretical idea that one could build a small home server that can more intelligently idle down when no one's home, or even leave a personal PC on but idling for longer stretches. Basically, that extra efficiency would provide a little more savings on almost-always-on systems, where the savings are counted over longer stretches of time rather than for mobility/portability reasons.
Without a mobile system with LPDDR RAM and a highly optimized motherboard, those power savings would be negligible compared to current platforms. To make proper use of those optimizations, you need the whole platform designed around them.
 
The number of unsightly black boxes has grown by 100% in this generation.
 
The number of unsightly black boxes has grown by 100% in this generation.

Earlier I was reading the part of the presentation where AMD basically said that the new Pluton security processor is somehow revolutionary because Microsoft is responsible for its firmware updates. I really wasn't sure whether to cry, laugh, or both at the same time. MS, who despite being given ample time to prepare, was still utterly unprepared for P-cores/E-cores. MS, who unilaterally broke what wasn't broken on Ryzen CPUs in multiple ways at the Win 11 launch. MS, who patches printing vulnerabilities by recommending that we disable the Print Spooler service.

In other recent news, other Ryzen users finally pieced together the puzzle and pinned the periodic, ridiculous audio/video/input/everything stutter on the AMD fTPM/PSP. I've been suffering this regularly for years now and never knew what caused it, even on clean 10 and 11 installations. Sure enough, it goes away 100% when fTPM is disabled. I guess we can finally put to rest any illusions that AMD PSP is any better than Intel ME - yeah, NSA backdoors yada yada, but at least ME never lagged the entire PC twice a week.

AMD fTPM Causes Random Stuttering Issue : Windows11 (reddit.com)

 
Should be a nice piece of silicon. Hopefully laptop makers don't gimp it. I was hoping for DDR4; or will it be capable of either?


Nice find.


Nice find.
Only DDR5 and LPDDR5 were ever mentioned. If the folks at AMD are at all clever, they're now paying Samsung (and others) some "market development funds" just to produce more PMICs.
 
12CU RDNA2, 16 ROPs...I like where this is going. What Cezanne should have been; finally the GPU doesn't look like a tiny afterthought on the floorplan. Looking forward to seeing this silicon come to desktop.
Indeed. Hopefully DDR4-compatible.

Should be a good upgrade over the wife's A12-9700P lol
 
We had Vega 11 in Raven Ridge and Picasso, but crippled with 8 ROPs. AMD die-shrunk it, took away 3 CUs, and compensated with a 2GHz clock. Even when OC'd, the GPU domain barely gets lukewarm; they had the space/thermal/power envelope for more. For OEMs and laptops, sure, but on desktop, 7nm Vega doesn't have a bandwidth problem if you know what budget sticks to go for. It was just a matter of saving a buck by recycling most of Renoir, as they felt no need to do any more.
We keep hearing this repeated despite evidence that the 5600G's GPU is faster than the GT 1030, and the GT 1030 is noticeably slower with DDR4 memory than it is with GDDR5. So is AMD somehow immune to bandwidth while NVIDIA isn't, but only on APUs?

AMD went from 11 GPU slices to 8 due to bandwidth restrictions. Even at 720p resolutions, where the ROPs are no longer an issue, the 11 CU parts were no faster than the 8 CU part.
 
MS, who despite being given ample time to prepare, was still utterly unprepared for P-cores/E-cores. MS, who unilaterally broke what wasn't broken on Ryzen CPUs in multiple ways at the Win 11 launch. MS, who patches printing vulnerabilities by recommending that we disable the Print Spooler service.
IKR.
I have no faith in Pluton whatsoever. Microsoft's track record isn't just a joke, it's a malicious one.
 
We keep hearing this repeated despite evidence that the 5600G's GPU is faster than the GT 1030, and the GT 1030 is noticeably slower with DDR4 memory than it is with GDDR5. So is AMD somehow immune to bandwidth while NVIDIA isn't, but only on APUs?

AMD went from 11 GPU slices to 8 due to bandwidth restrictions. Even at 720p resolutions, where the ROPs are no longer an issue, the 11 CU parts were no faster than the 8 CU part.

Both 1030s run off 64-bit buses. That amounts to 16GB/s for the DDR4 and 48GB/s for the GDDR5. You wanna try running any GPU on 16GB/s? The GDDR5 version is still just fine with 48GB/s for its core config; it's not much slower than 7nm Vega, and I wouldn't be surprised if it provides a more consistent experience. And Vega should damn well be faster; it's a bigger core, ROPs aside, and clocks much higher.

48GB/s is 3200CL16 territory, just sad for Renoir and Cezanne. Put on a cheap 4000/4133/4400 Viper [Steel] kit, and you're easily between 60-80GB/s bandwidth. Beyond that you just don't see much effect from bandwidth. That's pretty much where OEM JEDEC DDR5 will start anyway (~70-80GB/s), which is where Ryzen 6000 is making its debut (and with much worse timings, matters a little bit but not too much).

People love throwing around the "bandwidth" argument when it comes to APUs. It doesn't scale infinitely, and it doesn't fix not having enough hardware. Yes, mem OC is king if you only value benchmarks; yet, in-game in the couple of titles I play on the TV with 4650G/5700G, it's always the core OC (especially Vega 7) that makes a big difference in more complex scenes/lighting/effects/foliage - increasing mem clock only ever changes peak or avg FPS. Thus, 768SP + 16ROP + 2GHz + DDR5 sounds great.
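As a quick sanity check of the bandwidth figures above, here's a sketch of the math, assuming the usual simplification that theoretical peak bandwidth = transfer rate × total bus width (real sustained copy numbers, as in AIDA64, land somewhat below this):

```python
# Theoretical peak memory bandwidth: transfer rate (MT/s) * bus width (bytes).
# Real-world sustained bandwidth lands below these ideal figures.

def peak_bandwidth_gbs(mts: int, bus_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s (decimal gigabytes)."""
    return mts * 1e6 * (bus_bits / 8) / 1e9

# GT 1030 variants, both on a 64-bit bus:
print(peak_bandwidth_gbs(2100, 64))   # DDR4 version: ~16.8 GB/s
print(peak_bandwidth_gbs(6000, 64))   # GDDR5 version: 48.0 GB/s

# APU platforms, 128-bit total (dual channel):
print(peak_bandwidth_gbs(3200, 128))  # DDR4-3200: ~51.2 GB/s
print(peak_bandwidth_gbs(4800, 128))  # JEDEC DDR5-4800: ~76.8 GB/s
```

The DDR4-3200 and DDR5-4800 figures line up with the ~48GB/s and ~70-80GB/s ballparks quoted above.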

 
Interested to see successor to 5700G and the new deskmini that supports it. This year maybe? :D
 
Indeed. Hopefully DDR4-compatible.

Should be a good upgrade over the wife's A12-9700P lol
It isn't. These APUs are (LP)DDR5 only, and won't come to AM4. Which is good, as they would suffer quite a bit if limited to (LP)DDR4 bandwidth.

@tabascosauz is correct that bandwidth isn't the be-all, end-all fix for APU gaming, but it's still necessary to some degree. If you want to not hold these iGPUs back, you give them as fast RAM as you can find, and even JEDEC DDR5 will beat most DDR4 there, as GPUs generally aren't latency sensitive. Of course they still need high core clocks and a capable CPU to go with that - the lesson from iGPU OC/tuning on every APU generation up until now has been that any single approach to performance increases can only deliver so much before other factors start holding you back - but that seems to be in place already.

Considering that they're doing 2400MHz on 12 CUs at 35W, we should see some pretty good core clocks on a non-power-limited desktop setup. 3GHz might even be doable. 16 CUs would of course have been nice to have, but that likely didn't make financial sense for a chip that covers everything from bottom-of-the-barrel laptops to high-end dGPU gaming DTRs.
 
Both 1030s run off 64-bit buses. That amounts to 16GB/s for the DDR4 and 48GB/s for the GDDR5. You wanna try running any GPU on 16GB/s? The GDDR5 version is still just fine with 48GB/s for its core config; it's not much slower than 7nm Vega, and I wouldn't be surprised if it provides a more consistent experience. And Vega should damn well be faster; it's a bigger core, ROPs aside, and clocks much higher.

48GB/s is 3200CL16 territory, just sad for Renoir and Cezanne. Put on a cheap 4000/4133/4400 Viper [Steel] kit, and you're easily between 60-80GB/s bandwidth. Beyond that you just don't see much effect from bandwidth. That's pretty much where OEM JEDEC DDR5 will start anyway (~70-80GB/s), which is where Ryzen 6000 is making its debut (and with much worse timings, matters a little bit but not too much).

People love throwing around the "bandwidth" argument when it comes to APUs. It doesn't scale infinitely, and it doesn't fix not having enough hardware. Yes, mem OC is king if you only value benchmarks; yet, in-game in the couple of titles I play on the TV with 4650G/5700G, it's always the core OC (especially Vega 7) that makes a big difference in more complex scenes/lighting/effects/foliage - increasing mem clock only ever changes peak or avg FPS. Thus, 768SP + 16ROP + 2GHz + DDR5 sounds great.

For integrated it's typically a 128-bit memory bus?? Is this due to the DDR4 configuration, or could it be different?

Isn't DDR5 supposed to be quad-channel or something?? Would that allow a 256-bit memory interface on integrated graphics??
 
For integrated it's typically a 128-bit memory bus?? Is this due to the DDR4 configuration, or could it be different?

Isn't DDR5 supposed to be quad-channel or something?? Would that allow a 256-bit memory interface on integrated graphics??

DDR5 has the same overall bus width; the quad-channel thing is just 32x4 as opposed to 64x2. Better to still just think of it as 64x2 for simplicity's sake.
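A tiny sketch of that channel arithmetic, for a standard two-DIMM desktop configuration:

```python
# DDR4 vs DDR5 total memory bus width on a typical two-DIMM platform.
ddr4_total_bits = 2 * 64       # two 64-bit channels
ddr5_total_bits = 2 * 2 * 32   # two DIMMs, each with two 32-bit sub-channels

# Same 128-bit interface either way; DDR5's "quad channel" only splits
# it differently. Bandwidth gains come from higher transfer rates.
print(ddr4_total_bits, ddr5_total_bits)  # 128 128
```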
 
DDR5 has the same overall bus width; the quad-channel thing is just 32x4 as opposed to 64x2. Better to still just think of it as 64x2 for simplicity's sake.
Thanks. I remember, back on the AMD 690 or 790G motherboards from the AM3 era, they put a dedicated RAM chip on the mobo for the integrated graphics. That'd be one way around the bandwidth limit, maybe?
 
These Rembrandts are a very impressive upgrade over Renoir. Much better battery life, and it kills AL between 20-60W. AL only wins at stupid power draw, >70W, which is unfit for a mobile device IMO; AL only makes sense plugged in. Rembrandt's GPU kills the iGPU in AL, with 2x the frame rate at 1080p. A much more balanced APU for a laptop.
 
It seems a very competent product for laptops. The way I see it, it will be better in performance in the 15W class, tie in the 28W class, and only lose to 45W-class designs vs Intel 12th gen, but AMD claims it will have better performance/watt characteristics across the board even vs 12th-gen Intel. Stellar iGPU of course, probably around 1.8X actual performance vs Vega 8. So supposedly around Q3/Q4 we will have the desktop AM5 version, which, depending on the price, also seems a very nice product (although by Q4 we will probably have, at $180-$190, a 13400F (6P+4E cores) with the same or slightly better performance vs a 5GHz-turbo 8-core 6800G, plus a 96EU Arc AIB desktop solution at $? that will have at least the same, but possibly better, performance than a 2.5GHz 12CU RDNA2 iGPU...)
 
It's a bloodbath for those unreleased low-power Alder Lake SKUs.


Curious.
Although, "45W TDP" Ryzen going well into 80W territory, nice trick, AMD. :)))
 
These Rembrandts are a very impressive upgrade over Renoir. Much better battery life, and it kills AL between 20-60W. AL only wins at stupid power draw, >70W, which is unfit for a mobile device IMO; AL only makes sense plugged in. Rembrandt's GPU kills the iGPU in AL, with 2x the frame rate at 1080p. A much more balanced APU for a laptop.
From what I saw, the "110W" AL laptop that LTT tested averaged 80W CPU package power after an initial 110W boost period, whilst the "45W" 6900HS averaged about 70W package power after its initial boost:


Alder Lake is worse, sure, but this test just goes to prove that the stated TDPs are basically meaningless crap when the real TDP difference is about 15%, not the 144% difference that the on-paper specs would make you believe.

More realistically, if the 12900H was locked strictly to 110W and the Ryzen was locked strictly to 45W, AL would be a comfortable 20-30% faster across the board.

Where the 6900HS really seems to shine is at 45W: throwing more power at it provides negligible performance gains. For this reason I'm super excited about the 6000U models. The 15W TDP will likely boost to around 45W, which is about as far as the sweet spot goes in terms of perf/W. For the 6900HS at least, there appears to be very little reason to waste more than 45W on the CPU unless you are actively trying to heat up your laptop and empty the battery!
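The ~15% vs 144% claim above checks out arithmetically; here's the calculation, using the averaged package-power figures quoted from the LTT test (the variable names are just labels for those figures):

```python
# Gap between measured average package power vs on-paper TDPs,
# using the figures quoted above from the LTT test.
intel_avg_w = 80   # "110W" 12900H laptop, average after initial boost
amd_avg_w = 70     # "45W" 6900HS, average after initial boost

measured_gap = (intel_avg_w - amd_avg_w) / amd_avg_w * 100
on_paper_gap = (110 - 45) / 45 * 100

print(f"measured power gap: ~{measured_gap:.0f}%")   # ~14%
print(f"on-paper TDP gap:  ~{on_paper_gap:.0f}%")    # ~144%
```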

 
Whether Intel or AMD, CPUs have been so uninteresting to me for a few years now. Only Alder Lake was interesting.
More cores, more cores, but I don't know where I'd need those cores as a consumer and (CAD) metalworker.

For games, good IPC is what's needed, like the 12100F, and for working with CAD the CPU could be a Celeron from 2011, since it uses GPU compute.
And yes, Solidworks works without any problem on an Intel Celeron 847, 16GB RAM and a GTX 470. :laugh:
 
If the Zhihu review is indicative, then the RDNA2 680M iGPU is not as impressive as I originally thought. The desktop 1650 GDDR6 (which is slower than an RX 570) has around 1.8X higher clocks, 2X the memory bandwidth, and double the memory (4GB vs 2GB) vs the 25W MX450, and the 680M only beats the 2GB card in premium notebook designs with 6400MHz LPDDR5 and 54W TDP settings?
Also, it's only 1.53X faster than the i7-11370H, which is one year old and has a 1.35GHz turbo clock, so not even as fast as the clock difference would suggest?
Anyway, let's wait for more reviews to get a better understanding of the performance situation!
 
If the Zhihu review is indicative, then the RDNA2 680M iGPU is not as impressive as I originally thought. The desktop 1650 GDDR6 (which is slower than an RX 570) has around 1.8X higher clocks, 2X the memory bandwidth, and double the memory (4GB vs 2GB) vs the 25W MX450, and the 680M only beats the 2GB card in premium notebook designs with 6400MHz LPDDR5 and 54W TDP settings?
Also, it's only 1.53X faster than the i7-11370H, which is one year old and has a 1.35GHz turbo clock, so not even as fast as the clock difference would suggest?
Anyway, let's wait for more reviews to get a better understanding of the performance situation!
That article is hilarious for the number of times it mentions raytracing... iGPUs can barely achieve playable framerates as-is, nobody in their right mind is going to try using RT on them.
 