Wednesday, February 10th 2021

AMD Ryzen 7 Pro 5750G Zen 3 Based Desktop APU Spotted with 4.75 GHz Frequency

AMD is slowly preparing the launch of its next-generation Ryzen Pro 5000 series of APUs designed for desktop applications. The biggest difference over the previous-generation Renoir 4000 series is a major improvement in microarchitecture: built on the Zen 3 core, the Cezanne processor lineup is supposed to integrate all of Zen 3's IPC improvements and bring them to the world of APUs. With the level three (L3) cache capacity doubled from 8 MB to 16 MB, the Zen 3 cores are paired with a good amount of cache to improve performance.

Thanks to a user on the Chiphell forums, we have the first details about the AMD Ryzen 7 Pro 5750G APU. The new-generation design brings a big improvement in clock speeds: with a base frequency of 3.8 GHz, the Zen 3 based design now boosts up to 4.75 GHz, representing a 350 MHz increase over the previous-generation Ryzen 7 Pro 4750G APU. For more details, we have to wait for the official announcement.
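As a quick sanity check on that uplift (a sketch; the 4.4 GHz boost figure for the Ryzen 7 Pro 4750G comes from AMD's published specifications, not from this leak):

```python
# Quick check of the reported boost-clock uplift (a sketch; the 4.40 GHz
# boost figure for the Ryzen 7 Pro 4750G is AMD's published spec).
prev_boost_ghz = 4.40   # Ryzen 7 Pro 4750G (official boost)
new_boost_ghz = 4.75    # Ryzen 7 Pro 5750G (reported)

delta_mhz = (new_boost_ghz - prev_boost_ghz) * 1000
print(f"+{delta_mhz:.0f} MHz ({delta_mhz / (prev_boost_ghz * 1000):.1%})")
# -> +350 MHz (8.0%)
```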
Source: Tom's Hardware

56 Comments on AMD Ryzen 7 Pro 5750G Zen 3 Based Desktop APU Spotted with 4.75 GHz Frequency

#26
Vayra86
ValantarHere's hoping Intel responds in kind with 11.9 GHz versions of their 12th gen CPUs :D
TDP over 9000? :wtf:

Oh no wait, they have TVB now, so 11.9 only when cooling allows, the stars in Andromeda have aligned with our solar system, and Swan gets his fat payout :rolleyes:
Posted on Reply
#27
Valantar
Vayra86TDP over 9000? :wtf:

Oh no wait, they have TVB now, only when cooling allows, right?
TVB2 - now with LN2 support: peak frequencies* available for up to several picoseconds at full-pot temperature.

*Peak frequencies subject to availability of local high voltage conversion substation or power plant for power delivery. Risk of severe electric shock, burns, frostbite, or time travel. May interfere with pacemakers. CPU might weld itself to your case if run continuously. Consult a doctor before use.
Posted on Reply
#28
kruk
GoldenXThis only applies to Mesa; the Windows drivers are terrible with old games.
I don't get how this iGPU could differ so much from Polaris, which provides a near-flawless experience for pretty much every 10+ year old game I own (yes, including OpenGL games). Do you have some examples (you can send me a PM if you wish)? I'm really curious, as I was thinking of upgrading to an APU + dGPU. Thanks!
Posted on Reply
#29
TumbleGeorge
DrediThere is no point in either of those things. The GPU is so severely memory bandwidth limited that the CU count increase would benefit only fringe use cases. Expect more when DDR5 is out of the door.
That is not quite accurate. When the Radeon HD 4770 was released, it was enough to play the games of its time at 1080p with high and ultra settings at very comfortable FPS. Yes, it had one of the first GDDR5 chips, but the bandwidth came over a 128-bit bus at a speed equal to, or a little lower than, dual-channel DDR4-3200 (51.2 GB/s). Conclusion: there is no problem making an iGPU with more performance, because the market has faster DDR4 modules - 3600, 3733, 3866, 4000, and up to 4400 - at a fair enough price.
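A quick back-of-envelope check of those bandwidth numbers (a sketch; it assumes a 128-bit bus at 3200 MT/s effective for the HD 4770, two 64-bit channels for DDR4, and ignores real-world efficiency losses):

```python
# Back-of-envelope peak bandwidth check (a sketch; standard bus widths,
# no real-world efficiency losses accounted for).

def bandwidth_gbps(bus_width_bits: int, rate_mtps: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits / 8 * rate_mtps / 1000

print(f"HD 4770 (128-bit GDDR5):  {bandwidth_gbps(128, 3200):.1f} GB/s")
for rate in (3200, 3600, 4000, 4400):
    print(f"Dual-channel DDR4-{rate}: {bandwidth_gbps(2 * 64, rate):.1f} GB/s")
# HD 4770: 51.2 GB/s; DDR4-3200: 51.2; 3600: 57.6; 4000: 64.0; 4400: 70.4
```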
Posted on Reply
#30
Valantar
TumbleGeorgeThat is not quite accurate. When the Radeon HD 4770 was released, it was enough to play the games of its time at 1080p with high and ultra settings at very comfortable FPS. Yes, it had one of the first GDDR5 chips, but the bandwidth came over a 128-bit bus at a speed equal to, or a little lower than, dual-channel DDR4-3200 (51.2 GB/s). Conclusion: there is no problem making an iGPU with more performance, because the market has faster DDR4 modules - 3600, 3733, 3866, 4000, and up to 4400 - at a fair enough price.
A mobile Cezanne Vega 8 iGPU has more than twice the FP32 compute performance of a 4770 though, so it'll still be more bottlenecked than the 4770 unless it also has 2x the bandwidth.
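Roughly quantified (a sketch; it assumes the HD 4770's 640 shaders at 750 MHz and the mobile Cezanne Vega 8's 512 shaders at around a 2.0 GHz boost - sustained clocks will vary):

```python
# Rough FP32 throughput: 2 ops per shader per clock (FMA) x shaders x clock
# (a sketch; nominal/boost clocks, not sustained figures).

def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000

hd4770 = fp32_tflops(640, 0.75)   # ~0.96 TFLOPS
vega8 = fp32_tflops(512, 2.0)     # ~2.05 TFLOPS (mobile Cezanne boost)

print(f"HD 4770:        {hd4770:.2f} TFLOPS")
print(f"Cezanne Vega 8: {vega8:.2f} TFLOPS ({vega8 / hd4770:.1f}x)")
```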
Posted on Reply
#31
GoldenX
krukI don't get how this iGPU could differ so much from Polaris, which provides a near-flawless experience for pretty much every 10+ year old game I own (yes, including OpenGL games). Do you have some examples (you can send me a PM if you wish)? I'm really curious, as I was thinking of upgrading to an APU + dGPU. Thanks!
Most D3D9 era games will have graphical glitches or bad performance under Windows 10, due to changes introduced by D3D12. You would have to use Windows 7, which is not available as an option for this driver.
Posted on Reply
#32
TumbleGeorge
ValantarA mobile Cezanne Vega 8 iGPU has more than twice the FP32 compute performance of a 4770 though, so it'll still be more bottlenecked than the 4770 unless it also has 2x the bandwidth.
But it is not possible to play at 1080p with ultra or high settings on Vega 8. Where is the problem? Are today's games too heavy, or are today's performance numbers for GPUs and/or iGPUs fake?
Posted on Reply
#33
kruk
GoldenXMost D3D9 era games will have graphical glitches or bad performance under Windows 10, due to changes introduced by D3D12. You would have to use Windows 7, which is not available as an option for this driver.
Well, that is weird, as I use Windows 10 and, as I said, have no major problems. I do however keep my install clean, with nothing installed other than the most recent WHQL drivers ...
Posted on Reply
#34
Makaveli
What really fascinates me about this product is

"However, the Zen 3 APU has a stronger FCLK than Ryzen 5000 (Vermeer) processors. The Ryzen 7 Pro 5750G allegedly had its FCLK at 2,300 MHz, and there are rumors that engineering samples can even do 2,500 MHz."

This was posted on THG for this same story.
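For context, those FCLK numbers map to very high DDR4 data rates under AMD's usual 1:1 FCLK:MCLK coupling (a sketch; it assumes the standard coupled mode and DDR's two transfers per memory clock):

```python
# Map FCLK to the DDR4 data rate needed for 1:1 FCLK:MCLK operation
# (a sketch; assumes AMD's usual coupled mode where FCLK equals the
# memory clock, and DDR transferring twice per memory clock).

def ddr_rate_for_fclk(fclk_mhz: int) -> int:
    mclk_mhz = fclk_mhz      # 1:1 coupled mode
    return mclk_mhz * 2      # DDR = two transfers per clock

for fclk in (1900, 2300, 2500):
    print(f"FCLK {fclk} MHz -> DDR4-{ddr_rate_for_fclk(fclk)}")
# FCLK 1900 MHz -> DDR4-3800 (a typical Vermeer ceiling)
# FCLK 2300 MHz -> DDR4-4600
# FCLK 2500 MHz -> DDR4-5000
```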
Posted on Reply
#35
Valantar
TumbleGeorgeBut it is not possible to play at 1080p with ultra or high settings on Vega 8. Where is the problem? Are today's games too heavy, or are today's performance numbers for GPUs and/or iGPUs fake?
Uh, this seems like such an obvious statement that I'm worried I might have misread you, but yes, games are far, far more complex and demanding today than 12 years ago. Resolutions are much higher. Polygon counts are much higher. Lighting, shadows, shading, textures are all massively more complex. Etc., etc. That's why we have the constant pressure for more powerful GPUs. A 4770 isn't able to play today's titles at 1080p Ultra, and a Vega 8 is more than capable of playing 2009-era titles at 1080p ultra (though shared power and memory bandwidth budgets with the CPU might cause it to underperform compared to pure on-paper spec comparisons depending on the game). Of course an iGPU from 2019-2021 doesn't have many driver optimizations for titles from the late 2000s, so bugs or performance issues are somewhat expected.
Posted on Reply
#36
Dredi
TumbleGeorgeThat is not quite accurate. When the Radeon HD 4770 was released, it was enough to play the games of its time at 1080p with high and ultra settings at very comfortable FPS. Yes, it had one of the first GDDR5 chips, but the bandwidth came over a 128-bit bus at a speed equal to, or a little lower than, dual-channel DDR4-3200 (51.2 GB/s). Conclusion: there is no problem making an iGPU with more performance, because the market has faster DDR4 modules - 3600, 3733, 3866, 4000, and up to 4400 - at a fair enough price.
There are no JEDEC-spec DDR4 modules on the market that are over 3200. Anyway, the chip is heavily bandwidth limited, and increasing the CU count would not bring benefits large enough to make it worthwhile to accept a) worse yields, b) a smaller number of total chips, and c) higher leakage current.
Posted on Reply
#37
TumbleGeorge
ValantarResolutions are much higher.
We are only discussing why today's "Vega" 8, which is theoretically faster than the Radeon 4770 and has a memory connection no slower than the 4770's VRAM, fails to play games at similar settings at 1080p. Higher resolutions are not related and are not being discussed. The explanation with general tales does not satisfy me. I still think that shared dual-channel DDR4 memory is fast enough, is not a bottleneck in the system, and could support iGPUs with even higher performance than is currently available.

PS: if I'm not wrong, the 5000G series is able to work with overclocked DDR4, i.e. JEDEC standards are not a limitation.
Posted on Reply
#38
tabascosauz
DrediThere are no JEDEC-spec DDR4 modules on the market that are over 3200. Anyway, the chip is heavily bandwidth limited, and increasing the CU count would not bring benefits large enough to make it worthwhile to accept a) worse yields, b) a smaller number of total chips, and c) higher leakage current.
JEDEC standards? Are you serious? Next are you going to link me Linus' video saying that all RAM should be 3200CL20 1.2V with ECC because normal people are incapable and helpless at verifying memory stability?

You do realize that AMD's best Infinity Fabric and best Unified Memory Controller are both found on Renoir APUs? 2200-2400MHz IF is the norm depending on quality and the VSOC you wanna push. In the span of one generation, AMD's UMC went from a laughingstock to something Intel doesn't even have an answer for (5400+ easy, 6000 validations with the right cooling).

AMD's refusal to throw out Vega, and yields hampering their ability to keep Vega 11 in there, are keeping Renoir and Cezanne from being all that they can be.
Posted on Reply
#39
Valantar
TumbleGeorgeWe are only discussing why today's "Vega" 8, which is theoretically faster than the Radeon 4770 and has a memory connection no slower than the 4770's VRAM, fails to play games at similar settings at 1080p. Higher resolutions are not related and are not being discussed. The explanation with general tales does not satisfy me. I still think that shared dual-channel DDR4 memory is fast enough, is not a bottleneck in the system, and could support iGPUs with even higher performance than is currently available.

PS: if I'm not wrong, the 5000G series is able to work with overclocked DDR4, i.e. JEDEC standards are not a limitation.
Wow, did you even read my post? You can't just pick one thing out of a list of many interrelated factors and say "well this one is wrong so nothing else matters". I mean, do you have any data showing that the 4770 can run games at 1080p Ultra that the Vega 8 can't? Remember, the only games the 4770 can (possibly, though given that 1080p was very high end back then, not all that likely) run at that resolution and those settings are games from ca. 2009. Guess what? The Vega 8 can too! When people say these APUs can't game at 1080p ultra - which is what I'm assuming you're basing this on - they are talking about current games! From the past few years! In other words, games the 4770 likely can't run at all, let alone at a resolution even close to 1080p or settings even close to Ultra. Older games? It obviously depends on the game and how demanding it is, but sure. Absolutely. You understand that the basis for comparison now and ten+ years ago is completely different, right? That games today, regardless of resolution, are massively more demanding (for the reasons I listed in my previous post)?

I mean, come on. At least treat the people you're talking to with sufficient respect to not cherry-pick stuff to an absurd degree like that.
Posted on Reply
#40
Dredi
tabascosauzJEDEC standards? Are you serious?
Most of these go to OEM systems where it’s not realistic to assume overclocked memory.
Posted on Reply
#41
Vendor
Not worth it, I guess, because DDR4 will bottleneck that iGPU massively; it might be good as a CPU though. Can't wait for DDR5 to arrive, which will offer dual channel from a single stick and quad channel from two sticks for the first time in consumer grade. Then I think these have the potential to reach the performance level of a 1060 6 GB or 1070.
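To put rough numbers on the DDR5 expectation (a sketch; it assumes JEDEC-style dual 32-bit subchannels per DDR5 DIMM and a hypothetical DDR5-6400 kit for illustration):

```python
# Peak bandwidth, DDR4 vs DDR5 (a sketch; DDR5 DIMMs expose two 32-bit
# subchannels each, so total data width per DIMM stays at 64 bits and the
# gain over DDR4 comes from the higher data rate).

def bandwidth_gbps(total_bus_bits: int, rate_mtps: int) -> float:
    return total_bus_bits / 8 * rate_mtps / 1000

ddr4 = bandwidth_gbps(2 * 64, 3200)        # two DDR4 DIMMs, dual channel
ddr5 = bandwidth_gbps(2 * 2 * 32, 6400)    # two DDR5 DIMMs, 4 x 32-bit subchannels

print(f"Dual-channel DDR4-3200: {ddr4:.1f} GB/s")   # 51.2 GB/s
print(f"2x DDR5-6400:           {ddr5:.1f} GB/s")   # 102.4 GB/s
```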
Posted on Reply
#42
TumbleGeorge
1080p is the same number of pixels, with the same attributes (position, color, transparency, etc.), in 2009 and today. The picture in a game has the same attributes too; the only new thing with big performance needs from the last 2-3 years is real-time ray tracing. The other new thing is TressFX, but a very small slice of performance is enough for it. There is a small problem with correctly comparing performance between the Radeon HD 4770 and Vega 8: the first card has not been supported for many years, and its drivers are very old and "legacy". I'm sure that if the card were well supported today, its test results would not be so bad. LoL, the Radeon HD 4770 runs Crysis at 1080p (with 24-25 fps :D) (PS: on lowest settings)
VendorCan't wait for DDR5 to arrive, which will offer dual channel from a single stick and quad channel from two sticks for the first time in consumer grade. Then I think these have the potential to reach the performance level of a 1060 6 GB or 1070.
:eek: if this is true, it will be shocking!
Posted on Reply
#43
jonup
ValantarThat's not quite true. I saw significant performance increases on my 4650G moving not only from DDR4-3200 to 3800 but also 1900MHz to 2100MHz. With faster DDR4 (which these should handle easily) or LPDDR4X on mobile, these could probably benefit significantly from higher clocks still.


Some people want to build small, compact, low-power PCs yet still want performance. I have a 4650G in my HTPC, and I'm very happy I didn't have to stick a dGPU in there, as it would have increased noise and power draw even if it would also obviously have increased performance. The 4650G can handle Rocket League and other non-graphically intensive games at good frame rates, runs dead quiet, sips power, and I only have a single fan in the system, so it's pretty much perfect for me. My case can fit an LP GPU, so I could have stuck a 1650S in there, but that would have added two ~60mm fans running constantly, which was a no-go for HTPC use. And a full height GPU would have necessitated a larger case and/or worse CPU cooling. It's all a question of balance.

And obviously in the future, if I can run more stuff natively on the HTPC without moving away from its tiny, semi-passively cooled, single-fan build, that's something I'd want.
So the short answer is half-@$$ed gaming. Fair enough.
Also, performance doesn't come free. So if you want more graphics performance, it will come at a power consumption price, which would mean either reduced CPU performance or a higher power envelope. I would assume most APU customers are like me and are not willing to give up CPU performance for higher GPU performance. And as you said, we are all looking to reduce power consumption, so higher power consumption is also not a good solution. I also go about silence in a different way than you. Idk how you can maintain dead silence at full tilt. My APU build is in a modified (more dampening added) P180 mini mATX case, which is larger than many full ATX cases, with 3 sub-500 rpm fans and a Macho with its stock fan on it. There is no replacement for displacement. :rockout:
Posted on Reply
#44
Valantar
TumbleGeorge1080p is the same number of pixels, with the same attributes (position, color, transparency, etc.), in 2009 and today. The picture in a game has the same attributes too; the only new thing with big performance needs from the last 2-3 years is real-time ray tracing. The other new thing is TressFX, but a very small slice of performance is enough for it. There is a small problem with correctly comparing performance between the Radeon HD 4770 and Vega 8: the first card has not been supported for many years, and its drivers are very old and "legacy". I'm sure that if the card were well supported today, its test results would not be so bad. LoL, the Radeon HD 4770 runs Crysis at 1080p (with 24-25 fps :D) (PS: on lowest settings)
Uh ... wow. Okay. No. Just no. That is not how rendering in games works. Well, it kind of might be in highly simplistic 2D engines. But ... not for any 3D game, not ever. The demands placed on a GPU to render a contemporary game at any given resolution and quality level - let's stick with 1080p60 Ultra here, as that's what we've been talking about - have changed massively since 2009. The number of pixels itself is just a part of the equation. Geometry has increased massively in complexity. Polygon counts for in-game models are through the roof compared to back then. Texture sizes and quality, texture filtering, the quality and implementation of shaders, etc., etc., etc. In 2009 we barely had DirectX 11 support. I mean, are you seriously telling me you think this is no more demanding to render than this, if set to the same resolution? Because it is. Massively so. I mean, seriously, if games didn't become more demanding to render at the same resolution, why do people upgrade their GPUs? Most people don't change their screen resolutions often. So what changes, what causes people to upgrade? Games get more demanding graphics: more complex geometry, lighting, shadows, ambient occlusion, texture filtering, etc., etc., etc. All of which requires more GPU resources to render at the same framerate.

Here's a good primer on how 3D game rendering works. I think you might gain from reading it.
jonupSo the short answer is half-@$$ed gaming. Fair enough.
Also, performance doesn't come free. So if you want more graphics performance, it will come at a power consumption price, which would mean either reduced CPU performance or a higher power envelope. I would assume most APU customers are like me and are not willing to give up CPU performance for higher GPU performance. And as you said, we are all looking to reduce power consumption, so higher power consumption is also not a good solution. I also go about silence in a different way than you. Idk how you can maintain dead silence at full tilt. My APU build is in a modified (more dampening added) P180 mini mATX case, which is larger than many full ATX cases, with 3 sub-500 rpm fans and a Macho with its stock fan on it. There is no replacement for displacement. :rockout:
Everything is a compromise. Please stop pretending otherwise. Your compromise is to put an APU build in a massive case for cooling, which is ... well, wasteful, IMO. My compromise is to make a highly optimised, tiny semi-passive build that has cooling when it needs it, but runs entirely fanlessly when not under load. It's not silent while gaming, but well ... I don't care at that point. If I'm gaming at the TV I either have game audio through speakers or headphones on, and the single 140mm Noctua fan in that PC doesn't get loud enough to be audible in either scenario, even if it's clearly audible at full tilt in a silent room. Your build would be completely and utterly unsuited for my use case, as I guess mine would be for yours. So, sorry, but I prefer my solution to yours. The compromises you've chosen to accept are far, far too significant for me.

And half assed? I wouldn't say so. It's never been intended for balls-to-the-wall performance. That's not the point. It's the best performance possible at that size, noise level and power draw. It's about as optimal as it could be (I guess a 4750G would be better) currently. Could it be faster? Sure, but that would mean more noise and power draw, or more size to dampen the noise. Could it be quieter under load? Sure, but I don't care, as it's not perceptible. Could it be more efficient? Not at that level of performance. It's a very well balanced build, thanks to the great Renoir APUs. Not every PC is a 400W power draw gaming monster. And that's fine.

As for increasing power draw: I agree it wouldn't be ideal, but given that my HTPC never exceeds 110W at the wall while gaming, there's room to scale without it becoming an issue. A better cooler (mine is a hacked-together DIY project I made because I wanted to see if it would work) in the same case could no doubt sustain a 100-120W heat load from the APU under load, so I'd be perfectly fine with that. As long as idle draws don't increase that is, but they don't tend to do that on modern CPUs.
Posted on Reply
#45
tabascosauz
jonupSo the short answer is half-@$$ed gaming. Fair enough.
Also, performance doesn't come free. So if you want more graphics performance, it will come at a power consumption price, which would mean either reduced CPU performance or a higher power envelope. I would assume most APU customers are like me and are not willing to give up CPU performance for higher GPU performance. And as you said, we are all looking to reduce power consumption, so higher power consumption is also not a good solution. I also go about silence in a different way than you. Idk how you can maintain dead silence at full tilt. My APU build is in a modified (more dampening added) P180 mini mATX case, which is larger than many full ATX cases, with 3 sub-500 rpm fans and a Macho with its stock fan on it. There is no replacement for displacement. :rockout:
You...do realize that bigger isn't actually better? This is Ryzen - air cooling always falls around the same temperature range, power density is still a thing, and Renoirs don't even really max out their PPT limit on a regular basis :confused:

My previous APU build that I've broken down into its component parts was in a 12L NCASE M1 and that was already big and at the point where thermals don't really improve anymore. Temperatures don't just magically keep going down the bigger you build and the bigger the cooler you use...

I mean hey you do you, but this really isn't a case of "no replacement for displacement"......a 7.3 actually *does* certain things better than a 3.5EB......
Posted on Reply
#46
watzupken
Chrispy_Every time I see these die layout images, I'm reminded of, and saddened by, how little area a single Vega CU takes up, and yet they removed three of them.

The reduction from Raven Ridge/Picasso to Renoir/Cezanne isn't from Vega10 to Vega8, because very few models have all CUs enabled (yields/defects?). Vega10 is really 10 out of 11 CUs, and the Renoir equivalent is Vega7, because the 4800U/5800U will be so exceptionally rare that they're either unavailable or priced outside any reasonable value for most people.

From Vega10 to Vega7 is a big oof. Bandwidth be damned, there's still plenty for a better IGP at the resolutions and framerates APUs target...
The layout actually shows that the die is already very packed, so you need to consider that if you want more Vega cores, something has got to give. In my opinion, it would be nice to have Vega 11 in there, but considering that performance did not regress when they reduced the Ryzen 7 to 8 CUs, I think there is no loss here. In addition, I feel the iGPU may have hit a ceiling in terms of the graphical quality and resolution it can run, because as you push these two settings up, more memory bandwidth is required. By spamming more CUs in there, you get diminishing returns, effectively wasting die space.
Posted on Reply
#47
R0H1T
Vayra86TDP over 9000? :wtf:
You forgot the obligatory :shadedshu:

Posted on Reply
#48
Chrispy_
watzupkenThe layout actually shows that the die is already very packed, so you need to consider that if you want more Vega cores, something has got to give. In my opinion, it would be nice to have Vega 11 in there, but considering that performance did not regress when they reduced the Ryzen 7 to 8 CUs, I think there is no loss here. In addition, I feel the iGPU may have hit a ceiling in terms of the graphical quality and resolution it can run, because as you push these two settings up, more memory bandwidth is required. By spamming more CUs in there, you get diminishing returns, effectively wasting die space.
The performance per clock did reduce though, significantly.

Vega10 in the 2700U/3700U typically runs at ~950MHz because of power consumption on the hungry old 14nm GloFo process bumping up against the 15-25W TDP of those chips. The reason Vega 8 in Renoir isn't worse is because AMD jacked up the clockspeeds and they typically run the Vega cores at 1400-1500MHz (looking at the Vega7 in the 4700U). So yeah, the reduction of Vega units is almost proportionally matched by the increase in clockspeed. We know from overclocking with faster RAM on the desktop APUs that shared DDR4 memory bandwidth, although not great, is more than enough to provide meaningful, significant GPU performance increases. The whole "AMD have stopped making their IGPs faster because there's no bandwidth" myth has been busted multiple times, from multiple different angles.

I'm not proposing a Vega20 or Vega24 behemoth IGP that dwarfs the rest of the APU, I'm simply suggesting that the die size savings AMD made by chopping off 30% of the graphics cores seems like a big sacrifice of overall performance for relatively low gains in die area. It stinks of Intel's old mantra of "provide the bare minimum IGP you can get away with because without better choices, people won't know better to complain about it". IMO, even the original 11CU design would be enough to make a meaningful difference to APU graphics performance, and it would increase the die area by a paltry 3-4%. It would have made the die squarer, which is actually a better shape for cost in terms of dies per wafer which somewhat counters the cost of the added die area...
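The "clocks roughly make up for the missing CUs" point can be illustrated with a quick throughput estimate (a sketch; it assumes 64 shaders per CU and the typical clocks mentioned above rather than peak boost figures):

```python
# Rough shader throughput: CUs x 64 shaders x 2 ops (FMA) x clock
# (a sketch; typical sustained clocks, not peak boost).

def gflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz / 1000

vega10_picasso = gflops(10, 950)    # 2700U/3700U-era Vega 10
vega7_renoir = gflops(7, 1500)      # 4700U-era Vega 7

print(f"Vega 10 @ 950 MHz:  {vega10_picasso:.0f} GFLOPS")
print(f"Vega 7 @ 1500 MHz:  {vega7_renoir:.0f} GFLOPS")
# ~1216 vs ~1344 GFLOPS - the clock bump roughly offsets the missing CUs
```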
Posted on Reply
#49
Valantar
Chrispy_The performance per clock did reduce though, significantly.

Vega10 in the 2700U/3700U typically runs at ~950MHz because of power consumption on the hungry old 14nm GloFo process bumping up against the 15-25W TDP of those chips. The reason Vega 8 in Renoir isn't worse is because AMD jacked up the clockspeeds and they typically run the Vega cores at 1400-1500MHz (looking at the Vega7 in the 4700U). So yeah, the reduction of Vega units is almost proportionally matched by the increase in clockspeed. We know from overclocking with faster RAM on the desktop APUs that shared DDR4 memory bandwidth, although not great, is more than enough to provide meaningful, significant GPU performance increases. The whole "AMD have stopped making their IGPs faster because there's no bandwidth" myth has been busted multiple times, from multiple different angles.

I'm not proposing a Vega20 or Vega24 behemoth IGP that dwarfs the rest of the APU, I'm simply suggesting that the die size savings AMD made by chopping off 30% of the graphics cores seems like a big sacrifice of overall performance for relatively low gains in die area. It stinks of Intel's old mantra of "provide the bare minimum IGP you can get away with because without better choices, people won't know better to complain about it". IMO, even the original 11CU design would be enough to make a meaningful difference to APU graphics performance, and it would increase the die area by a paltry 3-4%. It would have made the die squarer, which is actually a better shape for cost in terms of dies per wafer which somewhat counters the cost of the added die area...
I don't think we'll see AMD significantly increase iGPU compute power until we get RDNA2 APUs. Even if RDNA2 CUs are larger, they dramatically outperform Vega in gaming performance per teraflop of compute, plus it fits better with future driver development. So they're likely focusing their R&D money on projects with more long-term value for now, with the high frequency Vega 8 just being copied over to new designs until an RDNA replacement is ready.

What I would love to see: a base monolithic APU with ~10CUs and a small Infinity Cache, with an on-die link for a possible secondary gpu die (like we've seen in recent patents). Especially if their MCM GPU tech allows for combining asymmetrical GPUs (acting as one combined unit, not several linked together) that would let them scale up almost indefinitely. An extra, tiny 20CU die for entry level gaming, and a large, 40+CU die (with HBM?) for something more serious? Yes please. Though of course this is firmly in "pure fantasy" territory. Even 10 CUs of RDNA2 at >2GHz with LPDDR5X-6400 or higher would be amazing though. 12 or 14? Yes please.
Posted on Reply
#50
TumbleGeorge
Chrispy_Vega20
I want an AMD APU with an iGPU with performance something like this, next year. Yes, without HBM, but maybe DDR5 in dual channel will be enough for it?
Posted on Reply