Wednesday, May 29th 2024

AMD Ryzen 9000 Zen 5 Single Thread Performance at 5.80 GHz Found 19% Over Zen 4

An AMD Ryzen 9000 "Granite Ridge" desktop processor engineering sample with a maximum boost frequency of 5.80 GHz was found to offer an astonishing 19% higher single-threaded performance than the AMD Ryzen 9 7950X. "Granite Ridge" is the codename for the Socket AM5 desktop processor family that implements the new "Zen 5" CPU microarchitecture. The unnamed "Granite Ridge" processor carries the OPN code 100-0000001290. Its CPU core count is irrelevant, as only single-threaded performance is in question here. The processor boosts up to 5.80 GHz, which means the core handling the single-threaded benchmark workload is achieving this speed. That is 100 MHz higher than the 5.70 GHz boost of the "Zen 4"-based Ryzen 9 7950X.

The single-threaded benchmark in question is the CPU-Z Bench. The mostly blurred-out CPU-Z screenshot that reveals the OPN also mentions a processor TDP of 170 W, which means this engineering sample is either a 12-core or 16-core chip. The chip posts a CPU-Z Bench single-thread score of 910 points, which roughly matches the Intel Core i9-14900K's 908 points. Bear in mind that the i9-14900K boosts one of its P-cores to 6.00 GHz to yield the 908 points that is part of CPU-Z's reference scores. So right off the bat, matching the score at a lower clock suggests that "Zen 5" has a higher IPC than the "Raptor Cove" P-core powering the i9-14900K. Its gaming performance might even end up higher than that of the Ryzen 7000 X3D family.
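Since the claimed IPC edge follows from simple normalization, here is the back-of-the-envelope math as a quick Python sketch, using only the figures reported above (with the usual caveat that a single synthetic benchmark proves little):

```python
# Per-clock comparison of the reported CPU-Z single-thread scores.
zen5_score, zen5_clock = 910, 5.8  # leaked "Granite Ridge" sample, GHz
rpl_score, rpl_clock = 908, 6.0    # Core i9-14900K reference run, GHz

zen5_per_ghz = zen5_score / zen5_clock  # ~156.9 points per GHz
rpl_per_ghz = rpl_score / rpl_clock     # ~151.3 points per GHz

# Matching the score at a lower clock implies more work per cycle.
print(f"Zen 5 per-clock advantage: {zen5_per_ghz / rpl_per_ghz - 1:.1%}")
# -> roughly +3.7% in this one benchmark
```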

Many Thanks to TumbleGeorge for the tip.
Source: Wccftech

132 Comments on AMD Ryzen 9000 Zen 5 Single Thread Performance at 5.80 GHz Found 19% Over Zen 4

#26
Pumper
stimpy88When AMD first released the Zen2 architecture, CPU-Z's author (or Intel) decided that he didn't like the Zen2 out-performing the Intel chip at the time, so a new benchmark version was released, reducing the AMD scores (Intel scores stayed the same) by some 15%. I have never taken the CPU-Z benchmark seriously after that, as it's apparently just an Intel sponsored benchmark.
In that case, we can ignore AMD vs. Intel results in CPU-Z, but AMD vs. AMD is still relevant, and it shows a pretty chunky improvement over Zen 4.
Posted on Reply
#27
Daven
It’s also important to note that this is an engineering sample. Final clocks could be higher.
Posted on Reply
#28
Denver
For those who don't know, clock-to-clock Zen 4 offered a 1% improvement over Zen 3 in CPU-Z, so a 19% gain in such a shallow benchmark points to major design changes.

But wasn't this leak declared a fake?
Posted on Reply
#29
john_
londisteWhich 8-core chip runs at a 230 W power limit? That is what the 7900X and 7950X have.
I am talking about AM4.
With AM5, AMD decided to give users what they were cheering for: some extra performance for much higher power consumption.
Posted on Reply
#30
Daven
DenverBut wasn't this leak declared a fake?
It could very well be fake as all rumors and leaks come from unofficial sources. But who declared this one so?
Posted on Reply
#31
persondb
The CPU-Z benchmark has always been bad; it is essentially a look at a best-case scenario. It runs entirely from the L1 instruction cache and consists of operations that are synthetic and easy for modern CPUs.
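To illustrate the point, here is a rough Python sketch contrasting a cache-friendly tight loop with a memory-bound pointer chase. Interpreter overhead blunts the gap in Python, so the contrast is far starker in native code, but the shape of the argument is the same: a benchmark whose code and data fit in the innermost caches shows a CPU at its absolute best.

```python
import random
import time

def tight_loop(n):
    # Best case: a few locals; the hot code and data fit in the smallest caches.
    x = 1
    for _ in range(n):
        x = (x * 3 + 1) & 0xFFFFFFFF
    return x

def pointer_chase(order):
    # Worst case: every step depends on a load from a large, shuffled array.
    i = 0
    for _ in range(len(order)):
        i = order[i]
    return i

N = 2_000_000
order = list(range(N))
random.shuffle(order)

t0 = time.perf_counter(); tight_loop(N); t1 = time.perf_counter()
t2 = time.perf_counter(); pointer_chase(order); t3 = time.perf_counter()
print(f"tight loop:    {t1 - t0:.3f} s")
print(f"pointer chase: {t3 - t2:.3f} s")
```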
Posted on Reply
#32
Denver
DavenIt could very well be fake as all rumors and leaks come from unofficial sources. But who declared this one so?
"Don't bother. The baidu thread started like this

Op claimed that amd's launching zen5 in august, then october. Said that the source has proofs and can confirm it

Did a 180 turn 1 day later and claimed june launch, july availability (was known long before he made his post)

A user put up a screenshot of alleged zen5 cpuz bench, deleted it after a couple hours

He claimed that he did it just for fun and didn't expect people to repost and take it seriously

Chinese users laughing at wccf reposting his shit

A number of chinese tech forums have already started to warn users against sharing baidu bs and threatened bans or infrator points. Your choice on whether ya wanna believe in them"

"Also, the CPU-Z screenshot doesn't list AVX-VNNI in ISA extensions."

Amd/comments/1d2od9j
Posted on Reply
#33
Noyand
john_At 5.8 GHz it doesn't just equal the 14900K. It equals an overclocked and unstable 14900K.
Are you sure about that? The baseline profile didn't affect ST performance when it was benchmarked; even locking the 14900K to 65 W gives the same result in ST. MT results are where Zen 5 will probably show its gains, but in ST, RPL is still very strong.

Posted on Reply
#34
Daven
NoyandAre you sure about that? The baseline profile didn't affect ST performance when it was benchmarked; even locking the 14900K to 65 W gives the same result in ST. MT results are where Zen 5 will probably show its gains, but in ST, RPL is still very strong.
Different benchmarks, different test beds, and as Denver investigated, this could all be fake.
Denver"Don't bother. The baidu thread started like this

Op claimed that amd's launching zen5 in august, then october. Said that the source has proofs and can confirm it

Did a 180 turn 1 day later and claimed june launch, july availability (was known long before he made his post)

A user put up a screenshot of alleged zen5 cpuz bench, deleted it after a couple hours

He claimed that he did it just for fun and didn't expect people to repost and take it seriously

Chinese users laughing at wccf reposting his shit

A number of chinese tech forums have already started to warn users against sharing baidu bs and threatened bans or infrator points. Your choice on whether ya wanna believe in them"

"Also, the CPU-Z screenshot doesn't list AVX-VNNI in ISA extensions."

Amd/comments/1d2od9j
Thanks for checking into this rumor. Fortunately, we don't have to wait long for at least AMD's official numbers: Lisa Su's keynote is next Monday, and if we are lucky, review samples will follow shortly thereafter.
Posted on Reply
#35
phanbuey
john_At 5.8 GHz it doesn't just equal the 14900K. It equals an overclocked and unstable 14900K.
Also, the CPU-Z benchmark has for years been considered one of the Intel-friendly ones.

While I doubt it, I hope AMD is considering bringing the X3D chips out the same day as the regular ones. They can put a ridiculously high price on them if they want, but it would be stupid not to announce them together with the regular ones. They have to finally start understanding the power of marketing. Zen 5 will enjoy a totally different, much higher level of acceptance if an 8-core 9800X3D annihilates everything in gaming benchmarks with differences of 20-50%. If they fear internal competition, they can start that chip at $550. Zen 4 and AM5 would have had much higher success if the X3D chips were introduced together with the new platform.
They're not bringing the X3D until 2025; internal leaks have already confirmed this, with an announcement in January. Since Intel doesn't have Arrow Lake ready, these will just hang out at $550 until there's a reason to drop them.
Posted on Reply
#36
Shtb
I would call it a fake, wouldn't you?

Moreover, I still remember the story about how the developers of this utility revised their tests after, IIRC, Zen 1 showed better results (and its result was of course downgraded).
Didn't someone from Intel have contact with these developers back then?
Posted on Reply
#37
Bwaze
ShtbI would call it a fake, wouldn't you?

Moreover, I still remember the story about how the developers of this utility revised their tests after, IIRC, Zen 1 showed better results (and its result was of course downgraded).
Didn't someone from Intel have contact with these developers back then?
I think you won't find any concrete proof, but it was pretty obvious back then that Intel simply paid the developer to change the benchmark to be more "representative of real workloads". It was also a time when Intel proclaimed what a real workload is and what isn't, and of course excluded anything Zen was particularly good at.
Posted on Reply
#38
JohH
Questionable source, but about where I expect Zen 5 to land: in the 15-25% range.
Posted on Reply
#39
Blueberries
I've been reading "next generation 10-20% IPC lift" from both sides every year for the last ~15-20 years.

I know better than to believe those claims until I see them. It's usually too good to be true.
Posted on Reply
#40
Makaveli
BlueberriesI've been reading "next generation 10-20% IPC lift" from both sides every year for the last ~15-20 years.

I know better than to believe those claims until I see them. It's usually too good to be true.
Maybe on the Intel side; they were comfortable with single-digit increases for years (Skylake).

I however wouldn't say the same for AMD, which has shown us double-digit IPC increases easily over the last 10 years.

AMD Excavator lineup to Zen 1: roughly 52% IPC increase
  • Zen 1 to Zen+: 3% IPC increase
  • Zen+ to Zen 2: 15% IPC increase
  • Zen 2 to Zen 3: 19% IPC increase
  • Zen 3 to Zen 4: 13% IPC increase
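Those per-generation figures compound multiplicatively; a quick sketch of the arithmetic:

```python
# Compounding the per-generation IPC gains listed above (Zen 1 through Zen 4).
gains = [1.03, 1.15, 1.19, 1.13]  # Zen+, Zen 2, Zen 3, Zen 4

total = 1.0
for g in gains:
    total *= g
print(f"Cumulative gain over Zen 1: {total:.2f}x (~{total - 1:.0%})")
# -> about 1.59x, i.e. ~59% more per-clock performance than Zen 1
```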
Posted on Reply
#41
KarymidoN
thesmokingmanIt's hard to compare from different sources and obviously bring salt. Anyways onto more leaks...
You guys have to consider that a lot of those results are from BEFORE that whole Intel baseline instability situation... remember, if you run the 14900K/S/F today with the baseline preset, you're losing a lot of (multithreaded) performance compared to the release-day reviews.
Posted on Reply
#42
kapone32
MakaveliMaybe on the Intel side; they were comfortable with single-digit increases for years (Skylake).

I however wouldn't say the same for AMD, which has shown us double-digit IPC increases easily over the last 10 years.

AMD Excavator lineup to Zen 1: roughly 52% IPC increase
  • Zen 1 to Zen+: 3% IPC increase
  • Zen+ to Zen 2: 15% IPC increase
  • Zen 2 to Zen 3: 19% IPC increase
  • Zen 3 to Zen 4: 13% IPC increase
Don't forget about the clock speed increase: 1700X @ 4.1 GHz vs. 5800X @ 5.0 GHz or 5950X @ 5.1 GHz. Multi-core CPU enhancements in Windows updates also improved performance, and faster RAM made a discernible difference up to 3600 MHz.
Posted on Reply
#43
watzupken
MakaveliMaybe on the Intel side; they were comfortable with single-digit increases for years (Skylake).

I however wouldn't say the same for AMD, which has shown us double-digit IPC increases easily over the last 10 years.

AMD Excavator lineup to Zen 1: roughly 52% IPC increase
  • Zen 1 to Zen+: 3% IPC increase
  • Zen+ to Zen 2: 15% IPC increase
  • Zen 2 to Zen 3: 19% IPC increase
  • Zen 3 to Zen 4: 13% IPC increase
Intel essentially stagnated from Skylake to Comet Lake, and between Alder Lake and the Raptor Lake refresh there was very little IPC improvement; the general performance gain was mainly due to aggressive clock speeds starting with the 13x00 series. So while AMD was busy improving its CPU architecture to deliver higher performance, Intel was busy tweaking its chips to soak in as much power as possible to deliver high clock speeds.
Posted on Reply
#44
mkppo
The CPU-Z single-core bench has pretty much zero relevance to real-world applications and exercises a tiny part of the CPU. Even if the 19% increase is real in this bench, it's like saying Zen 5 has better L1 latency or something along those lines. And for some reason the news article makes it sound like Zen 5 has only now caught up to 14th gen's IPC after this 19% gain, which is obviously not the case: Zen 4 is already close to 14th gen in IPC in actual workloads.

Now, a 19% increase in a real-world application would be progress, sort of similar to Zen 2 to Zen 3. Based on the architectural changes, it should at least be more than Zen 3 to Zen 4.
Posted on Reply
#45
atomsymbol
john_AMD tried to promote its chips as super efficient. They did that by keeping 12- and 16-core chips at 8-core power consumption levels. Then users online were praising Intel's chips for being 1% faster in single-threaded benchmarks and games while using twice the power. What was AMD expected to do, other than offer users what they wanted? That +1% performance for a +50% power increase.
Intel didn't drag AMD into anything. Users and the tech press did. They are so desperate to keep handing wins to Intel that they made efficiency look like a secondary, unimportant feature.
Note 1: High power consumption is an industry-wide trend, in both desktop CPUs and desktop GPUs, enabled by advances in chip manufacturing and by larger and heavier coolers.

Note 2: Neither AMD nor Intel is FORCING users to run the CPU at 250 Watts, and neither AMD nor Intel nor Nvidia is FORCING gamers to run GPUs at 400 Watts. Running a CPU or GPU at high wattage is an OPTION offered to consumers. Another OPTION is to limit the CPU's max temperature to 75℃ in the BIOS (single-threaded performance stays the same, while multi-threaded performance is reduced). 144 Hz 4K HDR gaming is just an OPTION offered by high-end displays. Complaining about 250 Watt CPU consumption, while multiple fairly obvious options to limit and optimize power consumption and temperatures exist, is a sign of incompetence and misunderstanding on the side of the desktop user.
Posted on Reply
#46
londiste
atomsymbolNote 1: High power consumption is an industry-wide trend, in both desktop CPUs and desktop GPUs, enabled by advances in chip manufacturing and by larger and heavier coolers.
I would phrase this the other way around. High power consumption is an industry-wide trend, caused by a relative lack of advances in chip manufacturing.

For many years there were regular, huge improvements in manufacturing processes that enabled huge increases in transistor budgets and huge efficiency gains. These manufacturing advances have slowed down a lot in recent years, but the industry and consumer expectation is for end-product performance to keep increasing.
atomsymbolNote 2: Neither AMD nor Intel is FORCING users to run the CPU at 250 Watts, and neither AMD nor Intel nor Nvidia is FORCING gamers to run GPUs at 400 Watts. Running a CPU or GPU at high wattage is an OPTION offered to consumers. Another OPTION is to limit the CPU's max temperature to 75℃ in the BIOS (single-threaded performance stays the same, while multi-threaded performance is reduced). 144 Hz 4K HDR gaming is just an OPTION offered by high-end displays. Complaining about 250 Watt CPU consumption, while multiple fairly obvious options to limit and optimize power consumption and temperatures exist, is a sign of incompetence and misunderstanding on the side of the desktop user.
This is about cost to the consumer. If consumers prioritized buying (and paying for) low power consumption and efficiency, the products offered would reflect that. Basically, for a CPU or GPU it means going larger and wider (more cores, more shaders) at lower frequencies. This is exactly what enterprise and data center buyers are doing: they feel the power and cooling requirements more, and the initial investment of buying the thing is relatively smaller. Thus, the products offered there are more efficient.

Nothing stops me or you from buying an RTX 4090 and running it at half its power limit; at 225 W it becomes a very efficient GPU with a surprising amount of its performance intact. The problem: this brings its performance down to, let's say, RTX 4080 level, and an RTX 4080 would be much cheaper to buy.

Although, if I remember correctly, the 4090 is most efficient somewhere around 300 W, where it does not lose as much performance and would still be faster and more efficient than the RTX 4080. More costly, still.
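To put rough numbers on that perf/W intuition, here is a small sketch; the performance-retention figures are hypothetical, picked only to show the shape of the math, not measured values:

```python
# Hypothetical efficiency math for a 450 W card at reduced power limits.
stock_watts, stock_perf = 450, 1.00
scenarios = [(300, 0.93), (225, 0.80)]  # (power limit in W, assumed perf fraction)

stock_eff = stock_perf / stock_watts
for watts, perf in scenarios:
    gain = (perf / watts) / stock_eff
    print(f"{watts} W: {perf:.0%} of stock performance, {gain:.2f}x stock perf/W")
# With these made-up retention numbers: 300 W -> ~1.40x, 225 W -> ~1.60x perf/W.
```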
Posted on Reply
#47
atomsymbol
londisteI would phrase this the other way around. High power consumption is an industry-wide trend, caused by a relative lack of advances in chip manufacturing.

For many years there were regular, huge improvements in manufacturing processes that enabled huge increases in transistor budgets and huge efficiency gains. These manufacturing advances have slowed down a lot in recent years, but the industry and consumer expectation is for end-product performance to keep increasing.
If you mean the 1980s and 1990s, then I mostly agree. After the year 2000 it gets more complicated: AMD's Bulldozer CPUs were a step back compared to the K10 architecture, which wasn't caused by manufacturing but by micro-architecture. Intel only slightly increasing IPC for 10 years is related to micro-architecture and to the absence of a competitive micro-architecture from AMD and ARM. While the size of a silicon atom is indeed a constant, the truth is that the number of transistors on a single chip sold to a consumer has kept increasing exponentially for the past 20 years (albeit with a slightly lower exponent than before 2000), which means that the main obstacle to more performance is a lack of progress in micro-architecture, not a lack of transistors. GAA transistors will provide a lot of extra transistors for CPU micro-architecture designers to use throughout the next decade. But breakthroughs in micro-architecture have a PACING different from the PACING of advances in manufacturing. Huge mistakes in micro-architecture actually do happen sometimes (while mistakes in manufacturing are very tiny compared to mistakes in micro-architectures). Micro-architecture doesn't follow Moore's law.
Posted on Reply
#48
londiste
This is an interesting point. I am not sure if that comes completely down to microarchitecture.

It has been clear for a while that frequencies will no longer increase considerably, which has an effect on how microarchitectures need to evolve. Some, if not most, of the evolution has happened and will have to happen at different levels. Multi-/manycore CPUs and their consequences at the system and software level have been significant and will continue to be.

Purely on microarchitecture, there seem to be two cardinally different directions being attempted: going small and simple, like RISC-V or some ARM designs, or going wide and complex, for which the Apple M series is probably the best mainstream example. I think the problem with simple is that it will eventually have to rely on either clock speed or parallelism to improve. Clock speeds are not expected to improve considerably these days, and parallelism works well with cores of any size or complexity. Plus, of course, ASICs for specific tasks for efficiency improvements.

Interesting times either way :D
Posted on Reply
#49
atomsymbol
londisteThis is an interesting point. I am not sure if that comes completely down to microarchitecture.
Of course performance largely comes down to micro-architecture. For example, Python (or any programming language with arbitrary-precision integers as the default integer type) suffers a fairly large performance slowdown (even if you manage to JIT-compile the Python code into native code) JUST because CPUs don't have native support for accelerating arbitrary-precision integers. The same can be said about the performance hit caused by garbage collection, and about CPUs lacking hardware support for message passing (that is: acceleration of concurrent programming languages).

Just a note: JIT compilation arrives in CPython with version 3.13, although it might be initially disabled by default and might noticeably improve performance only after version 3.14+ (peps.python.org/pep-0744/).
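A rough way to see the arbitrary-precision cost from within CPython itself; the exact timings vary by machine and interpreter build, but the point is that the same source line slows down once the value outgrows a machine word:

```python
import timeit

# Every CPython int is a boxed arbitrary-precision object. Small values fit
# in one internal digit; a 200-digit value needs many digits of work per add.
small = timeit.timeit("x += 1", setup="x = 0", number=2_000_000)
big = timeit.timeit("x += 1", setup="x = 10**200", number=2_000_000)

print(f"small-int adds: {small:.2f} s")
print(f"big-int adds:   {big:.2f} s (~{big / small:.1f}x slower)")
```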
londisteIt has been clear for a while that frequencies will no longer increase considerably which has an effect on how microarchitectures need to evolve. Some - if not most - of the evolution has happened and will have to happen on different levels. Multi-/manycore CPUs and their consenquences in the system and software level has been significant and will go on.

Purely on microarchitecture there seem to be two cardinally different directions being attempted - going small simple like RISC-V or some of ARM, alternatively going wide and complex for which the Apple M is probably the best mainstream example. I think the problem with simple is that it will eventually have to rely on either clock speed or parallelism to improve. Clock speeds are not expected to improve considerably these days and parallelism works well with cores of any size or complexity. Plus of course ASICs for specific tasks for efficiency improvements.

Interesting times either way :D
The RISC-V standard will eventually have an extension "J" for accelerating dynamic programming languages (github.com/riscv/riscv-j-extension), though I have no idea when that will happen; it is taking a long time. With it in place, competition between ARM/x86 and RISC-V might become quite interesting.
Posted on Reply
#50
JWNoctis
londisteNothing stops me or you from buying an RTX 4090 and running it at half its power limit; at 225 W it becomes a very efficient GPU with a surprising amount of its performance intact. The problem: this brings its performance down to, let's say, RTX 4080 level, and an RTX 4080 would be much cheaper to buy.

Although, if I remember correctly, the 4090 is most efficient somewhere around 300 W, where it does not lose as much performance and would still be faster and more efficient than the RTX 4080. More costly, still.
Off topic, but specifically for this card: its professional equivalent based around the same GPU, the RTX 6000 Ada, has a power limit of 300 W. Going by that logic, the best-efficiency range for the consumer-grade RTX 4090 could be slightly lower than 300 W, since it has less of the GPU chip active and only half the VRAM. It stands to reason that professional cards would be rated close to best efficiency for their expected usage pattern, or slightly above, to account for overhead from the rest of the system.

In my own experience, I could run a 4070 Ti Super at 200 W instead of the rated 285 W and lose maybe 10-15% of framerate or compute throughput doing so. The only significantly affected benchmark I've seen so far was FurMark. It also appeared that significant power (on the scale of 50 W for this card) is consumed by the memory bus and VRAM when they run at rated frequency, a consumption that isn't significantly reduced by lowering the board power limit, which amplifies the percentage cut to actual GPU core power under such a scheme.

So...yes, it aligns with expectations.
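Putting the comment's own numbers together (the ~50 W memory figure is the rough estimate from above, so treat the result as approximate):

```python
# How a fixed memory-power floor amplifies the cut to GPU core power.
board_stock, board_limited, memory = 285, 200, 50  # watts

core_stock = board_stock - memory      # ~235 W available to the GPU core
core_limited = board_limited - memory  # ~150 W available to the GPU core

print(f"board power cut: {1 - board_limited / board_stock:.0%}")  # ~30%
print(f"core power cut:  {1 - core_limited / core_stock:.0%}")    # ~36%
# A ~36% core-power cut for a reported 10-15% performance loss.
```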
Posted on Reply