
AMD Socket AM5 "Zen 4" Processors to have RDNA2 Integrated Graphics Across the Lineup

That makes sense.

They are supposed to move the I/O die to TSMC 7 nm. That will leave them some room to implement a small RDNA2 GPU.

That would help them get features like Quick Sync, something AMD really needs to improve.

I don't think those would be super powerful, but who knows? With chiplets, they could design a specific SKU with a larger RDNA2 GPU for that purpose.

It would need some Infinity Cache to have enough bandwidth.
 
Cool, but they really need to add something akin to Quick Sync to get general value from it, IMO.
Can someone explain to me the point of Quick Sync (on a CPU, of all things)? I can understand NVENC, and whatever name AMD uses for hardware encoding, for streaming at least, but people who stream already have dedicated GPUs. Another use case I see is video transcoding, but having transcoded some of my H.264 library to H.265 to free up storage, I've found hardware encoders do a horrible job of compressing the output files. They are quick and, importantly, dirty in that regard.
 
Most Intel CPUs since Sandy Bridge have come with an iGPU (with the exception of the F models since the 9000 series). Most AMD Zen CPUs have come without one.

AMD APUs have also been inferior to other AMD CPUs in some way. The 8 cores that were Zen's main selling point did not exist in the 2000- and 3000-series APUs, the 4000-series APUs were officially OEM-only, and even the 5000-series APUs suffer from a smaller cache.

You have no idea what you are talking about, do you?

Compare the Ryzen 7 5700G at 65 W against the Ryzen 7 5800X at 105 W, a 60% higher TDP.

The Ryzen 7 5700G scores 24,406 PassMark points. PassMark - AMD Ryzen 7 5700G - Price performance comparison (cpubenchmark.net)
The Ryzen 7 5800X scores 28,495 PassMark points. PassMark - AMD Ryzen 7 5800X - Price performance comparison (cpubenchmark.net)

The performance difference is about 16.8%.
The first has 20 MB of combined L2+L3 cache.
The second has 36 MB of combined L2+L3 cache.

The second has 80% more cache and a 60% higher TDP, and still gives only ~17% higher performance.
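Those percentages check out; here's a quick sanity check of the quoted numbers (a minimal sketch, with the scores, TDPs and cache sizes taken from the post above):

```python
# Figures quoted in the post: PassMark score, TDP in watts,
# combined L2+L3 cache in MB.
r5700g = {"score": 24406, "tdp": 65, "cache": 20}
r5800x = {"score": 28495, "tdp": 105, "cache": 36}

def pct_increase(a: float, b: float) -> float:
    """Percentage by which b exceeds a."""
    return (b - a) / a * 100

print(f"score: +{pct_increase(r5700g['score'], r5800x['score']):.1f}%")  # +16.8%
print(f"TDP:   +{pct_increase(r5700g['tdp'],   r5800x['tdp']):.1f}%")    # +61.5%
print(f"cache: +{pct_increase(r5700g['cache'], r5800x['cache']):.1f}%")  # +80.0%
```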
 
You're suggesting that AMD is going to have two distinct product lines for the desktop, one monolithic and the other made with chiplets? It could be, and it raises the question: where will they put the iGPU? On one CCD? Both CCDs? The I/O die? A separate chiplet, possibly?
I'm guessing this will be a pretty small GPU built into the I/O hub. Since they are using 7 nm now, it would be pretty easy for them to just build that in, and then they'd have feature parity with Intel. I'd also suspect the bigger on-die GPUs will continue to be monolithic designs on the best node to make the most efficient use of space and power. Maybe they'll make a chiplet GPU, but I kind of doubt it.

Can someone explain to me the point of Quick Sync (on a CPU, of all things)? I can understand NVENC, and whatever name AMD uses for hardware encoding, for streaming at least, but people who stream already have dedicated GPUs. Another use case I see is video transcoding, but having transcoded some of my H.264 library to H.265 to free up storage, I've found hardware encoders do a horrible job of compressing the output files. They are quick and, importantly, dirty in that regard.
Super useful for video editing: not for encoding the final output, but for accelerating the editing process itself.
 
You have no idea what you are talking about, do you?
I might. I switched from a 5600X to a 5600G a couple of weeks ago.
 
Can someone explain to me the point of Quick Sync (on a CPU, of all things)? I can understand NVENC, and whatever name AMD uses for hardware encoding, for streaming at least, but people who stream already have dedicated GPUs. Another use case I see is video transcoding, but having transcoded some of my H.264 library to H.265 to free up storage, I've found hardware encoders do a horrible job of compressing the output files. They are quick and, importantly, dirty in that regard.
You nailed it. It's marketing fluff, Intel's only real product for four years running.
 
Why would it go on the CCDs? :confused: The point of going to chiplets was to eliminate unnecessary BS so that Ryzen, Threadripper and Epyc with different I/O could scale off the same chiplet. The upcoming 3D cache will already be stacked on the CCDs, and the CCDs already struggle thermally, so there's no sense in making that even worse.

It could be either the I/O die or a separate die; both would have their advantages. But judging from how AMD APUs and all desktop Intel chips are designed, probably the I/O die, right next to the I/O, the UMC and the system-agent stuff.
While I agree that packing the iGPU with the cores is the least likely option, it would have one big advantage: it would put the graphics cores close to the large and precious L3 cache. The (optional and precious) 3D V-cache extension would thus benefit both the CPU and GPU parts. AMD would certainly have to develop a distinct, larger variant of silicon only for Ryzens while still producing CCDs without graphics for TR and Epyc. Doing that would cost them millions but hey, they have the funds now, and they have economies of scale.

As for heat density, yes, you're right, but it couldn't possibly be worse than in monolithic Ryzens.
 
Plot twist: the 6- and 8-core models will be the only ones with a GPU and will be monolithic, because yields are stupidly high anyway. They'll be the same silicon going into mobile, but on a different socket. The 12-core goes to mobile as chiplets, but has no GPU, because it's high-end.
 
Quick Sync is Intel-only, though.
 
Plot twist: the 6- and 8-core models will be the only ones with a GPU and will be monolithic, because yields are stupidly high anyway. They'll be the same silicon going into mobile, but on a different socket. The 12-core goes to mobile as chiplets, but has no GPU, because it's high-end.

I don't think that's what this leak is suggesting. Currently all Zen 3 processors belong to the same architecture family (19h), but not to the same model group. 5600X/5800X/5900X/5950X fall under the same model number (21), while 5600G/5700G are different (50).

The leaked table shows 3 separate model groups (40, 60, 70) within the same arch family (19h), but all 3 purportedly have an iGPU. Since AMD is most likely continuing to maintain the same two chiplet and monolithic families, that's what gives rise to the suspicion that they're adding a small iGPU to chiplet CPUs.

The way it's looking, monolithic Zen will continue to have a cache deficit compared to chiplet Zen, so for desktop 6-8 cores is likely to stay with chiplets.
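For anyone curious where those family/model numbers come from: they're decoded from CPUID leaf 1 EAX. Here's a minimal sketch of AMD's decoding scheme (extended fields kick in when the base family is 0xF); the raw value 0xA20F10 is the commonly reported one for Zen 3 Vermeer, so treat that constant as an assumption:

```python
# Decode AMD display family/model/stepping from CPUID leaf 1 EAX.
def decode_cpuid_eax(eax: int) -> tuple[int, int, int]:
    stepping    = eax & 0xF
    base_model  = (eax >> 4) & 0xF
    base_family = (eax >> 8) & 0xF
    ext_model   = (eax >> 16) & 0xF
    ext_family  = (eax >> 20) & 0xFF
    # On AMD, the extended fields only apply when the base family is 0xF.
    family = base_family + ext_family if base_family == 0xF else base_family
    model = (ext_model << 4) | base_model if base_family == 0xF else base_model
    return family, model, stepping

# 0xA20F10: commonly reported leaf 1 EAX for Vermeer (5600X/5800X/...).
fam, model, step = decode_cpuid_eax(0xA20F10)
print(f"family {fam:#x}, model {model:#x}, stepping {step}")  # family 0x19, model 0x21, stepping 0
```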
 
This just confirms what has become clear over the last 12 months: AMD and Nvidia are abandoning the entry-level GPU market. Looking at the absurd prices of cards like the 6600 XT, they don't want to give you an affordable 6500 or 3050. AMD will now push the entry level into their CPUs, and if you want to do serious gaming, buy a premium-priced discrete card. Not sure what Nvidia plans to do, but with Intel soon to have much stronger iGPUs too, Nvidia will be out on a limb.
 
I can totally see two product stacks (and we see three in the leak):

E series (45 W and under, 8 cores with a basic GPU; maybe a single-CCX lineup?)
G series (65-95 W, strong IGP)
X series (65-170 W) - basic GPU, all the cores and performance

My only concern is that 65-95 W is not enough for a strong IGP, and 95-170 W is too much for 16 cores.
 
My only concern is that 65-95 W is not enough for a strong IGP, and 95-170 W is too much for 16 cores.
The 5700G disagrees with you.
 
All CPUs with graphics. I don't know how I feel about it. As long as it is efficient and does not hold the cores back, I'm OK with it.
Well, Intel has done it since Sandy Bridge, if you don't count the recent F SKUs and most Xeons. Also, some individual models (like the i5-2550K) didn't have their iGPUs active, but anyway.
 
Well, Intel has done it since Sandy Bridge, if you don't count the recent F SKUs and most Xeons. Also, some individual models (like the i5-2550K) didn't have their iGPUs active, but anyway.
I get you, but just because Intel has been doing the iGPU thing doesn't mean AMD has to do it as well in the desktop segment. In the mobile segment it's a must, I'd say. The idea that this is great because you have a fallback if your discrete graphics card breaks covers a rare situation; don't fiddle with your discrete card if you don't know what you are doing and you will be OK. I lean towards not having it and using the space for something else instead. The iGPU also raises the price, and that is something I really don't want, unless there is also a line of processors without iGPUs. I'd be OK with that.
 
I know, but it does take space nonetheless. AMD could always use that space for something other than graphics.

Well, how many more cores do you really need on MSDT?

I'm certainly not against having an IGP in a powerful CPU, even just as a backup, but also as the primary display driver. The fact that you don't need a discrete card is a big plus for many use cases.

I think that when the IGP no longer comes at the expense of otherwise essential features, it's always nice to have.
 
Well, how many more cores do you really need on MSDT?

I'm certainly not against having an IGP in a powerful CPU, even just as a backup, but also as the primary display driver. The fact that you don't need a discrete card is a big plus for many use cases.

I think that when the IGP no longer comes at the expense of otherwise essential features, it's always nice to have.
Sure, if you take it that way, but as for me, I have never used an iGPU and was never forced to. Nice to have, for sure, but you can still go without it: less power is used, and there is always room for other stuff, not necessarily cores.
Besides, if AMD is going to use chiplets, and that is the deal here, would all 8-core chiplets have an iGPU? What if you get a 16-core part with two chiplets? Would both have an iGPU?
I'm not sure what to think of it at this point. It is not something bad, just a concern whether one is really necessary and whether AMD has a better way to utilize that space.
 
Less power is used, and there is always room for other stuff, not necessarily cores.
Ironically, in the case of the 5x00X and 5x00G, the G models use less power for most use cases, except at full load when the limiting factor is the power limit. Obviously that is not due to the iGPU but rather the I/O die.
The iGPU is power-gated when not in use, and the only downside is some die-area cost.
 
Ironically, in the case of the 5x00X and 5x00G, the G models use less power for most use cases, except at full load when the limiting factor is the power limit. Obviously that is not due to the iGPU but rather the I/O die.
The iGPU is power-gated when not in use, and the only downside is some die-area cost.
Yes, but that is a power limiter, not a power maximum. Don't you think that's not the same as the 5800X, for instance?
The boost clocks are different as well due to that factor, which means that even though the 5700G is a great product, it is still slower than the 5800X. That is because of the power limit. It's done for a reason, and I'm not saying it is a bad thing. The price aspect was already brought up in an earlier post; I don't want to talk about it again.
 
Sure, if you take it that way, but as for me, I have never used an iGPU and was never forced to. Nice to have, for sure, but you can still go without it: less power is used, and there is always room for other stuff, not necessarily cores.
Besides, if AMD is going to use chiplets, and that is the deal here, would all 8-core chiplets have an iGPU? What if you get a 16-core part with two chiplets? Would both have an iGPU?
I'm not sure what to think of it at this point. It is not something bad, just a concern whether one is really necessary and whether AMD has a better way to utilize that space.

It only makes sense for the iGPU to be part of the I/O die or on its own, because the only things on the CCDs are the common hardware shared between Ryzen/TR/Epyc. If the client IOD receives a small GPU, the chiplets are unaffected, and the server IOD can stay unaffected as well. Only one part, the client IOD, needs to change (which it will anyway, to accommodate DDR5 UMC, USB4, and possible die shrink).

The iGPU isn't the power hog here. Chiplet Ryzens have always been terrible for idle power because the I/O die basically will not go below 10-15 W at reasonable IF speeds. Then with one CCD you have roughly 10 W of phantom power that disappears into the nether without going into the cores or the SoC, and on two CCDs that misc power is about 15 W. On an APU, the misc power is basically negligible.

On the 4000G and 5000G, even with 2000-2200 MHz Infinity Fabric + an OC'd iGPU + DF C-states disabled + Uncore OC mode, idle whole-package power is always in the 5-10 W range. If you run the GPU at stock and slower IF, it's likely regularly below 5 W package power. The iGPU basically sleeps the entire time (since AGESA 1200, iGPU ASIC Power mirrors Package Power more accurately).

AMD and Intel have optimized their iGPUs excellently for idle power. Same deal with Intel -F and -KF: people like repeating the same tired story that the iGPU robs power budget, when in reality there is zero difference unless you're actually using it.
 
It only makes sense for the iGPU to be part of the I/O die or on its own, because the only things on the CCDs are the common hardware shared between Ryzen/TR/Epyc. If the client IOD receives a small GPU, the chiplets are unaffected, and the server IOD can stay unaffected as well. Only one part, the client IOD, needs to change (which it will anyway, to accommodate DDR5 UMC, USB4, and possible die shrink).
If the iGPU is in the I/O die, it needs memory access. DDR5, sure, but that will be slow, so it needs a cache, and that would mean the I/O die would have to carry cache memory for the iGPU. That would cost a lot.
If it's on its own die, it would be small and still require memory, plus a connection to the cores, probably over IF. That makes me think of latency; probably inefficient.

I never said iGPUs are power hogs :) but if the iGPU is on the I/O die, it will require power nonetheless, and I don't think AMD will be able to switch it off when idle.
 
Yes, but that is a power limiter, not a power maximum. Don't you think that's not the same as the 5800X, for instance?
The boost clocks are different as well due to that factor, which means that even though the 5700G is a great product, it is still slower than the 5800X. That is because of the power limit.
What I meant was that my 5600G, for example, idles (well, desktop usage with background stuff, a browser and such) at 13 W, while a 5600X in the same situation idles at 28 W. The same difference of 12-13 W on average holds across the entire range until the power limiter kicks in. This was just an example of why an iGPU would not necessarily be a problem power-consumption-wise.
If the iGPU is in the I/O die, it needs memory access. DDR5, sure, but that will be slow, so it needs a cache, and that would mean the I/O die would have to carry cache memory for the iGPU. That would cost a lot.
Isn't it the other way around? The I/O die has the memory controller, which makes it the logical place to put an iGPU with direct access to it.
I never said iGPUs are power hogs :) but if the iGPU is on the I/O die, it will require power nonetheless, and I don't think AMD will be able to switch it off when idle.
AMD will be able to switch the iGPU off wherever it sits.
 
Yeah, the whole chiplet setup dictates that the iGPU is optional. It would be nuts to have the extra GPU chiplet die in all packages. The AM5 socket simply provides for an iGPU, exactly the same as AM4.
 
If the iGPU is in the I/O die, it needs memory access. DDR5, sure, but that will be slow, so it needs a cache, and that would mean the I/O die would have to carry cache memory for the iGPU. That would cost a lot.
If it's on its own die, it would be small and still require memory, plus a connection to the cores, probably over IF. That makes me think of latency; probably inefficient.

I never said iGPUs are power hogs :) but if the iGPU is on the I/O die, it will require power nonetheless, and I don't think AMD will be able to switch it off when idle.
The iGPU on the I/O die would have its own cache and would not need more unless they wanted to reach a higher performance level. The I/O die is going to be shrunk from GlobalFoundries 12 nm to TSMC 7 nm, so they will have some space. Per the rumors, they aren't adding much to it (DDR5 instead of DDR4, additional USB, and the GPU).

This article is about all Ryzens getting an iGPU, not a high-performance one. They can fit RDNA2 in a mobile chip, so they can fit it in the I/O die without problems.

I wouldn't mind a GCD (GPU complex die), but since they say that every Ryzen will have an iGPU, I doubt it. But let's think about it for fun.

Navi 23, with 32 compute units and 32 MB of Infinity Cache, is 237 mm². A Zen 3 CCD is about 80 mm². So let's imagine a Navi die where you remove all the PCIe and memory hardware, replace it with Infinity Fabric links instead, and downgrade to 16 compute units and 16 MB of Infinity Cache. It should get fairly close to 80 mm² (or could easily match it on TSMC 5 nm) and could probably fit next to an 8-core Zen 4 CCD, giving users good low-end 1080p medium/high-detail performance.
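That die-area arithmetic can be roughed out like this. Only the 237 mm² and ~80 mm² figures come from the post; the per-block area shares are pure guesses for illustration:

```python
# Back-of-the-envelope estimate for the hypothetical cut-down Navi chiplet.
navi23_area = 237.0    # mm², 32 CUs + 32 MB Infinity Cache on 7 nm
zen3_ccd_area = 80.0   # mm², for comparison

# Assumed shares of Navi 23's area (guesses, not measured numbers):
io_phy_share = 0.25    # PCIe + GDDR6 PHYs/controllers to be removed
cu_cache_share = 0.60  # CUs + Infinity Cache, to be halved

kept = navi23_area * (1 - io_phy_share)     # strip the external I/O
kept -= navi23_area * cu_cache_share * 0.5  # halve CUs and cache
# With these guesses the result lands a bit above a CCD (~107 mm²);
# a 5 nm shrink would be what closes the remaining gap.
print(f"estimated GPU chiplet: ~{kept:.0f} mm^2 vs CCD {zen3_ccd_area:.0f} mm^2")
```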

The whole package could have up to 170 W to use, but this GPU could draw about 50-60 W and reach decent clock frequencies if it were worth it. It would still be limited by bandwidth, as the cache hit rate would be low and it could be starved by the DDR5 memory shared with the CPU, so a very high clock is probably not very useful anyway. But it could hit hard and give people access to the low-end GPU market quite easily.
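To put the bandwidth concern in numbers, here's a rough comparison of dual-channel DDR5 shared with the CPU versus the dedicated GDDR6 a card like Navi 23 gets. The DDR5-5200 and 16 Gbps picks are illustrative assumptions, not leak data:

```python
# Peak theoretical bandwidth: transfer rate (MT/s) x bus width (bytes).
def bandwidth_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits // 8) / 1000  # GB/s

ddr5 = bandwidth_gbs(5200, 128)    # dual-channel DDR5-5200, 128-bit total
gddr6 = bandwidth_gbs(16000, 128)  # Navi 23's 128-bit GDDR6 at 16 Gbps
print(f"DDR5 shared: {ddr5:.1f} GB/s, GDDR6 dedicated: {gddr6:.1f} GB/s")
# Roughly a 3x deficit, before the CPU takes its share - hence the
# reliance on Infinity Cache hit rate.
```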

One thing to take into consideration is that AMD wants to be sure its own IP is in as many PCs as possible. They see Intel coming with Xe, which is in almost every Intel CPU. The truth is the GPU war is going to heat up, and they will have to fight back in the low-to-mid range. A dedicated card is probably not the best option, but a small integrated GPU with a small chunk of Infinity Cache could fill that role very easily.
 
I know, but it does take space nonetheless. AMD could always use that space for something other than graphics.

Don't worry. Non-functional GPUs are basically laser-cut, meaning no current flows through them, so they have zero thermal effect.
 