Wednesday, January 6th 2021

AMD Ryzen 5000 Cezanne APU Die Render Leaked

VideoCardz has recently received a render of the upcoming AMD Ryzen 5000 Cezanne APU, which is expected to be unveiled next week. The Zen 3 Cezanne APUs support up to 8 cores and 16 threads, just like the Zen 2 Renoir APUs. Cezanne should also support up to 8 graphics cores and 20 PCIe lanes; it is currently unknown whether these lanes will be PCIe 3.0 or PCIe 4.0. The Cezanne die appears to be roughly 10% larger than Renoir, which comes from the larger Zen 3 core design and a larger 16 MB L3 cache. The new Ryzen 5000H Cezanne series processors are expected to be announced by AMD next week and will power upcoming low- and high-power laptops.
Source: VideoCardz

30 Comments on AMD Ryzen 5000 Cezanne APU Die Render Leaked

#1
Fouquin
The Zen 3 Cezanne APU sports the same 8 cores and 16 threads as Zen 2 Renoir APUs.
Should be "sports 8 cores and 16 threads just as Zen 2 Renoir APUs". The current wording suggests Zen 3 "Cezanne" is just rebadged Zen 2 "Renoir" cores, not that the two are simply the same number of cores. Something that isn't reflected by the following sentences. Just makes things less confusing for people.
#3
Uskompuf
FouquinShould be "sports 8 cores and 16 threads just as Zen 2 Renoir APUs". .......
Thanks
#4
Chrispy_
The graphics component of AMD APUs is terrible. Three years ago it was around 50% faster than the inadequate Intel HD 620 jammed into pretty much everything on sale, and 50% faster than that was still pretty terrible. Take a 2018 game and try to run it on a 2700U at 25 W with dual-channel, maximum-speed RAM and you'd still be dropping frames at minimum settings and 720p.

Renoir gave us even less GPU, but at much higher clocks. The end result was that the 4800U's Vega 8 was faster than the old 2700U by about 25%, but also so damn-near impossible to buy that it might as well not exist. The real laptops that actually went on sale used Vega 6, so actual performance is pretty much in line with the old 2500U and 2700U.

Throughout 2019 and 2020, that level of performance has slipped from "adequate for minimum settings at 720p30" to "totally inadequate slideshow at rock-bottom settings". Games have moved on, other GPUs have moved on, Intel's IGPs have moved on, and Cezanne's crappy Vega graphics are still going to burden AMD ultraportables for another year, despite having already outstayed their welcome.

Rembrandt can't come soon enough, as Renoir already has far too much CPU performance to match its IGP. Cezanne will do nothing for the IGP, making it even more imbalanced.
#5
Cheesecake16
Chrispy_The graphics component of AMD APUs is terrible. .......
Vega actually scales down very well; RDNA1, on the other hand, does not. Just look at the RX 5700 versus the RX 5500 XT: the RX 5700 (180 W TDP) has 63% more CUs than the RX 5500 XT for only 38% more power, whereas with Vega, going from the 8 CUs of the 4800U to the 60 CUs of the Radeon VII, you gain 7.5 times the CUs for 20 times the power. And looking at AnandTech's review of the Xe LP iGPU in Tiger Lake, Vega is faster than Xe at lower resolutions, which tells me mobile Vega is bottlenecked by its lack of cache and memory bandwidth. So despite being based on the almost decade-old GCN architecture, it holds up very well even in today's iGPU market.
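To make the comparison concrete, here is a quick back-of-the-envelope check of those ratios, sketched in Python; the CU counts and board powers are the commonly published spec-sheet figures quoted above, not measurements:

# CU-per-watt scaling from the figures quoted above.
# Board powers and CU counts are published specs, not measured values.
cards = {
    "RX 5500 XT":   {"cus": 22, "watts": 130},  # RDNA1, small die pushed hard on clocks
    "RX 5700":      {"cus": 36, "watts": 180},  # RDNA1, larger die
    "4800U Vega 8": {"cus": 8,  "watts": 15},   # Vega, 15 W mobile APU
    "Radeon VII":   {"cus": 60, "watts": 300},  # Vega, big desktop die
}

def scaling(small: str, big: str) -> None:
    s, b = cards[small], cards[big]
    print(f"{big} vs {small}: {b['cus'] / s['cus']:.2f}x CUs "
          f"for {b['watts'] / s['watts']:.2f}x power")

scaling("RX 5500 XT", "RX 5700")       # ~1.64x CUs for ~1.38x power
scaling("4800U Vega 8", "Radeon VII")  # 7.50x CUs for 20.00x power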
#6
watzupken
Chrispy_The graphics component of AMD APUs is terrible. .......
In my opinion, the iGPU generally gets the least love because as it gets better, it also competes with the CPU for limited die space. The iGPU in Tiger Lake made huge strides in performance, but you can tell it also limited Intel to a miserable 4-core config even for the top-end ultra-low-voltage CPU, despite the move to 10nm++. I feel the main focus should still be on the CPU, and less on the iGPU. Moreover, if you are gaming at 720p or 1080p, it is more likely you will be CPU limited than GPU limited.

Personally, it is nice to have a good iGPU because it lets people who don't game much play some games on their laptop without dishing out extra money for a dedicated graphics solution. But at the end of the day, iGPUs are unlikely to satisfy most gamers' requirements. Even if AMD can bump iGPU performance by 20%, you are mostly still constrained to 720p or 1080p at medium to low settings, because the shared resources just can't keep up with a good old dedicated GPU. More serious gamers will simply get a laptop or desktop with a more capable graphics card.
#7
Minus Infinity
Would be majorly bummed if they don't release Cezanne for desktop usage, although you'll never be able to buy one until Zen 4 is shipping.
#8
AlB80
I forgot my glasses. Do you see VCN 3.x there?
#9
HD64G
From the images I can see that the L3 cache is much bigger.
#10
Dredi
Chrispy_The graphics component of AMD APUs is terrible. .......
It won’t be any faster without faster memory. They would need HBM or a very large cache to improve performance at all, regardless of the number of GPU cores.
#11
evernessince
Cheesecake16Vega actually scales down very well; RDNA1, on the other hand, does not. .......
What you are citing applies to smaller video cards in general.

For example, the 1660 Super consumes 193 W, a decent chunk more than the 5500 XT, and its performance gain over the 5500 XT is not proportional to the increased power draw either.

Typically speaking, the smaller cards carry a higher power overhead, whether it's a lower-binned chip with worse power characteristics, higher frequencies to compensate for the lack of cores, or memory taking a larger share of the power budget relative to the number of cores.

High-end video cards represent the best a company can make. Low-end cards represent whatever they could throw together at a reasonable price (or at least they used to; I'm not so sure about current GPU pricing).
#12
Vya Domus
I wonder when they will add some sort of system-level cache that the GPU can use; it could bring a big improvement to performance and finally allow wider GPUs to be implemented. What a lot of people complaining about Vega don't understand is that a much improved GPU wouldn't really be worth it at the moment.
#13
50eurouser
These APUs are bandwidth starved; there is no point in AMD wasting precious die space on a cache like Big Navi's, or on extra shaders, when the speed bump will come from the transition to DDR5.
I assume dual-channel DDR5-5600 (~90 GB/s) will let APUs reach rough parity with a GDDR5 GTX 1650. Even the middle ground would give at least 80% more bandwidth than DDR4-3200; sources say the base JEDEC speed is DDR5-4800, scaling all the way up to ~8400 MT/s.

DC DDR5-4800: ~77 GB/s
DC DDR5-5600: ~90 GB/s
DC DDR5-6000: ~96 GB/s
DC DDR5-7200: ~115 GB/s
DC DDR5-8400: ~134 GB/s
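For reference, those figures fall straight out of the 128-bit bus of a dual-channel setup: peak bandwidth is just the transfer rate multiplied by 16 bytes per transfer. A minimal sketch in Python:

# Peak theoretical bandwidth for dual-channel (128-bit) DDR5:
# transfers per second x 16 bytes per transfer.
def dual_channel_gbps(mts: int, bus_bytes: int = 16) -> float:
    """Peak bandwidth in GB/s for a transfer rate given in MT/s."""
    return mts * bus_bytes / 1000

for speed in (4800, 5600, 6000, 7200, 8400):
    print(f"DC DDR5-{speed}: ~{dual_channel_gbps(speed):.1f} GB/s")
# DC DDR5-4800: ~76.8 GB/s ... DC DDR5-8400: ~134.4 GB/s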
#14
yeeeeman
Stop whining about iGPUs. Manufacturers are limited by how much performance they can squeeze into a 15 W TDP.
For 45 W parts they could push a bit harder, but there we run into a different limitation, since AMD (and others) are not willing to go slow and wide just to give you more performance.
Again, if you add more GPU performance you then need more memory bandwidth, and so on, so in the end iGPUs will always be crappy.
#15
Chrispy_
DrediIt won’t be any faster without faster memory. They would need HBM or a very large cache to improve performance at all, regardless of the number of GPU cores.
See, that argument is rubbish. We've been on DDR4 for ages, and yet over the course of four years AMD have still doubled IGP performance by increasing clock speeds and power consumption. This is most obvious on desktop parts like the Ryzen 5 3400G with Vega 11 and no power or cooling limitations.

If DDR4 were the sole bottleneck, we'd still be stuck at 2017 performance levels, entirely throttled by DDR4's bandwidth limit; evidently that's not true.
#16
Valantar
Cheesecake16Vega actually scales down very well; RDNA1, on the other hand, does not. .......
Your data is too thin to support your conclusion. The 5700 is among the more effective RDNA implementations (though nowhere near the 5600 XT), while the 5500 XT is pushed quite far in terms of clocks to eke out the maximum performance from a small die. Presenting those as a like-for-like comparison is misleading. For reference, the original 5600 XT clocks (before they decided to boost them and make a mess of the launch) outperformed the 5500 XT by more than 40% while consuming less than 10% more power. So no, your examples do not conclusively demonstrate that Vega scales down better than Navi.

That being said, the iGPU on my Ryzen 5 Pro 4650G is quite something. While I would definitely have preferred RDNA in there, Vega at 1900 MHz stock/2100 MHz OC with a tiny voltage bump is seriously impressive.
DrediIt won’t be any faster without faster memory. They would need HBM or a very large cache to improve performance at all, regardless of the number of GPU cores.
LPDDR5 will (somewhat) fix that for you.
#17
THANATOS
evernessinceWhat you are citing applies to smaller video cards in general. .......
Peak power consumption of the GTX 1660 Ti is only 134 W, as shown in TPU's review; I don't know where you got 193 W.
ValantarLPDDR5 will (somewhat) fix that for you.
In my opinion RDNA2-based IGPs will have a small Infinity Cache. Some older leaks were talking about Van Gogh being bigger than Renoir even though it has only 4 Zen 2 cores; this would explain it.
#18
Valantar
THANATOSIn my opinion RDNA2-based IGPs will have a small Infinity Cache. Some older leaks were talking about Van Gogh being bigger than Renoir even though it has only 4 Zen 2 cores; this would explain it.
That's a possibility, though it's hard to tell given the die area such a cache would need. It might be as simple as a huge L3 cache shared between the CPU and GPU though. That would definitely be interesting.
#19
Punkenjoy
ValantarThat's a possibility, though it's hard to tell given the die area such a cache would need. It might be as simple as a huge L3 cache shared between the CPU and GPU though. That would definitely be interesting.
I was watching a class on CPU design, and one of the main points the teacher made was: "The people working on data movement and availability will be the winners of the next decade."

If AMD can pull that off, it would indeed be incredible. The main difficulty would be getting low latency and a good cache hit rate for both the CPU and the GPU. Generally, GPUs tolerate latency far better than CPUs, so I wonder if there would even be a need for that.

Looking at the cache hit-rate graph [image in the original post], we can see that with 32 MB of cache, the hit rate at 1080p is around the same as the hit rate at 4K with 128 MB. But 32 MB is still a very large chunk of die space for cache. Would it be worth it? (See the rough model sketched below.)

I am pretty sure a Navi 2 APU with 8-16 compute units and 32 MB of cache could run 1080p games at a reasonable level of performance.

But the thing is, would it be a good investment of silicon space? Would it increase sales and profits for the company?

I am not sure about that. I, for sure, would want an APU like that, and I think the lower segments of gaming GPUs (the sub-$250 cards) are set to be replaced by APUs in the near future, but I am not sure AMD is there yet. Maybe once they start making 5nm chips and get more die space.
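As a rough illustration of why even a modest hit rate matters for a bandwidth-starved iGPU, here is a simple blended-bandwidth model in Python; every number in it is an illustrative assumption, not an AMD specification:

# Toy model: requests that hit the on-die cache are served at cache speed,
# misses fall through to DRAM. All bandwidth figures below are assumed
# for illustration, not AMD specs.
def effective_bandwidth(hit_rate: float, cache_gbps: float, dram_gbps: float) -> float:
    """Blended bandwidth seen by the GPU for a given cache hit rate."""
    return hit_rate * cache_gbps + (1 - hit_rate) * dram_gbps

dram = 68.0     # dual-channel LPDDR4X-class bandwidth in GB/s (assumed)
cache = 1000.0  # on-die SRAM bandwidth in GB/s (assumed order of magnitude)

for hit in (0.0, 0.3, 0.5, 0.7):
    print(f"hit rate {hit:.0%}: ~{effective_bandwidth(hit, cache, dram):.0f} GB/s effective")

In this toy model, even a 50% hit rate gives roughly eight times the bandwidth of DRAM alone, which is why a 32 MB cache could plausibly make a small Navi APU viable at 1080p.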
#20
blazed
As a 2500U laptop owner, I think I can give a pretty fair assessment of these APUs. I'll admit, when I first bought it, I was VERY unimpressed. However, improved driver stability and features (sharpening and scaling), coupled with now-standard dual-channel RAM and higher out-of-the-box TDPs on many of these laptops, mean that performance is much better than it used to be and actually looks pretty good on a smaller screen. On top of that, I'm sure the recent node shrinks mean it will get even better. Some things still aren't that great, like emulation, but I'm crossing my fingers for even more experience-enhancing features in future Adrenalin editions.

One thing I did notice in a lot of recent games is that instead of running in slo-mo, the game just switches to 30 Hz. So even if you aren't getting full speed, the effect of rendering fewer frames means you often can't even see a difference in speed anyway.

General computing seems just fine. The one thing to note, though, is that installing things does seem to take a great deal of time on an external HDD. Updating graphics drivers and the BIOS can also be a hassle, and somewhat risky without taking the necessary precautions.

Now, would I recommend this as a gaming PC? No. My desktop provides much better performance. However, if you need a backup computer, something simple for school (that you can maybe even convince your parents you need for 'educational' purposes), or you're just on a tight budget, I'd say it's definitely a great buy, especially if you can get a good deal on it. (I paid $599 CAD; regular was $799, a savings of 25%.)

There is a lot of fun to be had on one of these at a lower price point.
#21
THANATOS
PunkenjoyLooking at the cache hit-rate graph [image in the original post], we can see that with 32 MB of cache, the hit rate at 1080p is around the same as the hit rate at 4K with 128 MB. .......
Van Gogh will have only 4 Zen 2 cores and an 8 CU RDNA2 iGPU, and 32 MB of Infinity Cache is too much for that; Navi 24 with 20-24 CUs will probably have only 32 MB, so I think Van Gogh will have 16 MB at most.
In my opinion APUs won't replace <$250 GPUs in the next two years, or even longer. It's only the beginning of 2021 and we are getting an 8 CU RDNA2 iGPU, while on the desktop we have the RX 6900 XT with 80 CUs. In 2022 Rembrandt will come with only 12 RDNA2 CUs, while the weakest Navi 24 should have 20-24 CUs. I expect a 16 CU iGPU in 2023.
#22
Steevo
THANATOSVan Gogh will have only 4 Zen 2 cores and an 8 CU RDNA2 iGPU, and 32 MB of Infinity Cache is too much for that. .......
We have all waited a while for APUs to replace low-power dedicated cards, but I don't think AMD has the manpower, R&D funding, or the desire to use up any more wafers on another product. I can imagine an APU with a single HBM stack dedicated to the GPU, but it's probably just as cheap to throw more cache at the whole die than to pay for the HBM and interposer, plus the yield loss and manufacturing faults.
#23
Valantar
The main issue with this scenario is power. Low end desktop GPUs alone still consume 75-120W of power. Low end mobile GPUs (not counting Nvidia's MX series and their equivalents) consume 40-65W. Even accounting for the inherent savings of a consolidated power delivery system, improved architectural and process efficiency, smaller PCBs, and HBM over GDDR, that still requires a dramatic jump in power density for these APUs. Of course, cooling 150+W on the desktop isn't a massive problem, but it's still a challenge, and requires a beefy cooler, driving up the total price for end users. It also demands far beefier SoC/iGPU VRMs than exist on any motherboard today, a cost that would also be passed on to anyone else buying these motherboards.

Another issue is of course making this all work. The most realistic scenario for a low-end dGPU-killer APU is MCM with a dGPU chiplet and HBM2. That raises the issue of room within the confines of the AM4 package: at least looking at Matisse and Vermeer, there's no way they'd be able to fit both the dGPU die and HBM in there. Which means that either the I/O die needs to shrink (driving up pricing, as that would mean new silicon) or go away entirely (which won't really work unless the GPU die serves double duty as an I/O hub, DRAM controller, and so on for the CPU, which is again highly unlikely).

A slightly more realistic scenario is including an HBM controller on a beefy mobile APU and creating some mobile and desktop SKUs equipped with HBM, but that would perpetuate the issue of APUs being very large, with not much room for a bigger GPU. 16-20 CUs would be really difficult even on 5nm unless you want a very expensive, very large die.
#24
evernessince
THANATOSPeak power consumption of the GTX 1660 Ti is only 134 W, as shown in TPU's review; I don't know where you got 193 W.
www.anandtech.com/show/15206/the-amd-radeon-rx-5500-xt-review/9

I was talking about total system consumption for both GPUs there.
#25
zlobby
GoldenXYet still Vega.
Alas...