Wednesday, January 6th 2021
AMD Ryzen 5000 Cezanne APU Die Render Leaked
VideoCardz has recently received a render of the upcoming AMD Ryzen 5000 Cezanne APU, which is expected to be unveiled next week. The Zen 3-based Cezanne APUs support up to 8 cores and 16 threads, just like the Zen 2-based Renoir APUs. Cezanne should offer up to 8 graphics cores and 20 PCIe lanes; it is currently unknown whether these lanes will be PCIe 3.0 or PCIe 4.0. The Cezanne die appears to be roughly 10% larger than Renoir, which comes from the larger Zen 3 core design and a larger 16 MB L3 cache. The new Ryzen 5000H Cezanne series processors are expected to be announced by AMD next week and will power upcoming low-power and high-power laptops.
Source: VideoCardz
30 Comments on AMD Ryzen 5000 Cezanne APU Die Render Leaked
Renoir gave us even less GPU, but at much higher clocks. The end result was that the 4800U Vega8 was faster than the old 2700U by about 25% but also so damn-near impossible to buy that it might as well not exist. The real laptops that actually went on sale used Vega6, so actual performance is pretty much in line with the old 2500U and 2700U.
Over 2019 and 2020, that level of performance went from "adequate for minimum settings at 720p30" to "a totally inadequate slideshow at rock-bottom settings". Games have moved on, other GPUs have moved on, Intel's IGPs have moved on, and Cezanne's crappy Vega graphics are still going to burden AMD ultraportables for another year, despite having already outstayed their welcome.
Rembrandt can't come soon enough, as Renoir already has far too much CPU performance to match its IGP. Cezanne will do nothing for the IGP, making it even more imbalanced.
Personally, it is nice to have a good iGPU because it lets people play some games on their laptop without dishing out extra money for a dedicated graphics solution if they don't game much. But at the end of the day, iGPUs are unlikely to satisfy most gamers' requirements. Even if AMD can bump iGPU performance by 20%, you are still mostly constrained to 720p or 1080p at medium to low settings, because the shared resources just can't keep up with a good old dedicated GPU. So more serious gamers will just get a laptop or desktop with a more capable graphics card.
For example, the GTX 1660 Super consumes 193 W, a decent chunk more than the 5500 XT. The performance gain over the 5500 XT is not proportional to the increased power draw either.
Typically speaking, the smaller cards have a higher power overhead, whether it's a lower-binned die with worse power characteristics, higher clocks to compensate for fewer cores, or memory taking up a larger share of the power budget relative to the number of cores.
High-end video cards represent the best a company can make. Low-end cards represent whatever they could throw together at a reasonable price (or at least they used to; not so sure about current GPU pricing).
I assume dual-channel DDR5-5600 (~90 GB/s) will more or less let APUs reach parity with a GDDR5 GTX 1650. The middle ground would likely give at least 80% more bandwidth than DDR4-3200; sources say the baseline (JEDEC) speed is DDR5-4800, and it can go all the way up to ~8000 MT/s. Rough dual-channel numbers (with the math sketched after the list):
DC DDR5-4800: ~77 GB/s
DC DDR5-5600: ~90 GB/s
DC DDR5-6000: ~96 GB/s
DC DDR5-7200: ~115 GB/s
DC DDR5-8400: ~134 GB/s
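A minimal sketch of where those figures come from, assuming a plain 128-bit (2 x 64-bit) dual-channel bus and ignoring real-world efficiency losses:

```python
# Peak theoretical bandwidth for dual-channel DDR5 at various MT/s ratings.
# Assumes a 128-bit (2 x 64-bit) bus; DDR5 splits each DIMM into two 32-bit
# sub-channels, but the total width per DIMM is still 64 bits.

BUS_WIDTH_BYTES = 128 // 8  # 16 bytes moved per transfer across both channels

def dual_channel_bw_gbs(transfer_rate_mts: int) -> float:
    """Peak bandwidth in GB/s for a given transfer rate in MT/s."""
    return transfer_rate_mts * BUS_WIDTH_BYTES / 1000

for rate in (4800, 5600, 6000, 7200, 8400):
    print(f"DDR5-{rate}: ~{dual_channel_bw_gbs(rate):.0f} GB/s")
# -> ~77, ~90, ~96, ~115 and ~134 GB/s respectively
```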
45 W parts should struggle a bit more, though, but there we run into a different limitation, since AMD (and others) are not willing to go slow and wide just to give you more performance.
Again, if you add more GPU performance you then need more memory bandwidth, and so on, so in the end iGPUs will always be crappy.
If DDR4 were the sole bottleneck, we'd still be stuck at 2017 performance levels, entirely throttled by DDR4's bandwidth limit; evidently that's not true.
That being said, the iGPU on my Ryzen 5 Pro 4650G is quite something. While I would definitely have preferred RDNA in there, Vega at 1900 MHz stock / 2100 MHz OC with a tiny voltage bump is nothing to sneeze at. LPDDR5 will (somewhat) fix that for you.
If AMD can pull that off, it will indeed be incredible. The main difficulties would be getting low latency and a good cache hit rate for both the CPU and the GPU. Generally, GPUs tolerate latency far better than CPUs, so I wonder if there would even be a need to do that.
If we look at this graph, we can see that with 32 MB of cache, the hit rate at 1080p is around the same as the hit rate at 4K with 128 MB. But 32 MB is still a very large chunk of die space for cache. Would it be worth it?
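As a rough way to see what that cache would buy you, here is a toy effective-bandwidth model; the hit rates and the cache/DRAM bandwidth figures below are purely illustrative assumptions, not AMD numbers:

```python
# Toy model: effective memory bandwidth an iGPU sees with an on-die cache.
# Every figure here is an illustrative assumption, not a measured number.

def effective_bw(hit_rate: float, cache_bw: float, dram_bw: float) -> float:
    """Weighted average of cache and DRAM bandwidth for a given hit rate (GB/s)."""
    return hit_rate * cache_bw + (1.0 - hit_rate) * dram_bw

DRAM_BW = 90.0    # GB/s, dual-channel DDR5-5600 from the list above
CACHE_BW = 500.0  # GB/s, assumed bandwidth of a small on-die cache

for hit_rate in (0.0, 0.3, 0.5):
    print(f"{hit_rate:.0%} hit rate -> ~{effective_bw(hit_rate, CACHE_BW, DRAM_BW):.0f} GB/s effective")
```

Even a modest hit rate substantially raises the bandwidth the CUs effectively see, which is the whole argument for spending die area on cache.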
I am pretty sure a Navi 2 APU with 8-16 compute units and 32 MB of cache could run 1080p games at a reasonable level of performance.
But the thing is, would it be a good investment of silicon space? Would it increase sales and profits for the company?
I am not sure about that. I, for one, would definitely want such an APU, and I think the lower segments of gaming GPUs (the sub-$250 cards) will be replaced by APUs in the near future, but I am not sure AMD is there yet. Maybe when they start making 5 nm chips and get more die space.
One thing I did notice in a lot of recent games is that instead of running in slow motion, they just switch to 30 Hz. So even if you aren't getting full speed, the effect of rendering fewer frames means you often can't even see a difference in speed anyway.
General computing seems just fine. The one thing to be noted though is that installing things does seem to take a great deal of time on an external HDD. Updating graphics drivers and BIOS can also be a hassle and somewhat risky without taking necessary precautions.
Now, would I recommend this as a gaming PC? No. My desktop provides much better performance. However, if, say, you need a backup computer, you need something simple for school (that you can maybe even convince your parents you need for 'educational' purposes), or you're just on a tight budget, I'd say it's definitely a great buy, especially if you can get a good deal on it. (I paid $599 CAD; regular price was $799, a savings of 25%.)
There is a lot of fun to be had on one of these at a lower price point.
In my opinion, APUs won't replace sub-$250 GPUs in the next 2 years, or even longer. It's now the beginning of 2021, and we will have an 8 CU RDNA 2 IGP while on the desktop we have the RX 6900 XT with 80 CUs. In 2022 Rembrandt will arrive with only 12 RDNA 2 CUs, while the weakest Navi 24 should have 20-24 CUs. I expect a 16 CU IGP in 2023.
Another issue is of course making this all work. The most realistic scenario for a low-end dGPU-killer APU is an MCM design with a dGPU chiplet and HBM2. That raises the issue of room within the confines of the AM4 package: at least looking at Matisse and Vermeer, there's no way they'd be able to fit both the dGPU die and HBM in there. Which means the I/O die would either need to shrink (driving up pricing, as that would mean new silicon) or go away entirely (which won't really work unless the GPU die serves double duty as the I/O hub, DRAM controller, and so on for the CPU, which is again highly unlikely).
A slightly more realistic scenario is including an HBM controller on a beefy mobile APU and creating some mobile and desktop SKUs equipped with HBM, but that would continue the issue of APUs being very large, with not much room for a bigger GPU. 16-20 CUs would be really difficult to do even on 5 nm unless you want a very expensive, very large die.
I'm talking total system consumption for both GPUs here.