Friday, February 24th 2023

AMD's Reviewers Guide for the Ryzen 9 7950X3D Leaks

AMD's Ryzen 7000-series CPUs with 3D V-Cache are set to launch next week and alongside the launch, there will obviously be reviews of the upcoming CPUs. As with many other companies, AMD prepared a reviewers guide for the media to give them some guidance, as well as expected benchmark numbers based on the test hardware AMD used in-house. Parts of that reviewers guide have now appeared online, courtesy of a site called HD Tecnologia. For those who can't wait until next week's reviews, this gives a glimpse of what to expect, at least based on the games tested by AMD.

AMD put the Ryzen 9 7950X3D up against Intel's Core i9-13900K; both systems were equipped with 32 GB of DDR5-6000 memory and liquid cooling. Tests were done with both AMD's own Radeon RX 7900 XTX and an NVIDIA GeForce RTX 4090 graphics card. We won't go into details of the various benchmarks here, as you can find those below, but according to AMD's figures, its CPU came out on top with a 5.6 percent win over the Intel CPU at 1080p using the Radeon RX 7900 XTX, and by 6 percent using the GeForce RTX 4090. This was across 22 different games, with Horizon Zero Dawn and F1 2021 being the games favouring the AMD CPU the most, and Far Cry 6 and the CPU test in Ashes of the Singularity being the games favouring it the least. TechPowerUp will of course have a review ready for your perusal by the time the new CPUs launch next week, so you'll have to wait until then to see if AMD's own figures hold true or not.
Sources: HD Tecnologia, via VideoCardz

133 Comments on AMD's Reviewers Guide for the Ryzen 9 7950X3D Leaks

#76
wolar
THU31Your logic is flawed.
First of all, when testing CPU performance, you want to remove all other bottlenecks if possible. You want to see the maximum framerate a CPU can produce.
Second of all, there are no 720p or 480p monitors out there. But there are plenty of 1080p displays, with refresh rates including 480 Hz and higher.

You should watch the Hardware Unboxed video on this topic. Most people can't comprehend why CPUs are tested this way.

If you test a CPU with a GPU bottleneck, you have no idea what will happen when you upgrade your GPU. Your framerate might stay exactly the same, because your CPU is maxed out.
But when you test it at 1080p, you will know exactly how much headroom you have.

This is exactly why low-end GPUs like 3050 and 3060, or even RX 6400, are tested with the fastest CPU on the market. You don't want a CPU bottleneck to affect your results.
If you read what I said, I completely understand that, but what I am saying is that testing a 7950X3D and a 4090 at 1080p makes no sense. No one will buy these components and use them to play at 1080p. Most people with this budget will opt for a high-end 4K or 1440p display.

Removing all bottlenecks and testing a CPU at 1080p, when the most likely scenario is to use it at 1440p or 4K, makes as much sense as testing it at 720p, 480p or 6p, as that other guy said.
It literally holds zero meaning.

Also, to continue on why removing all bottlenecks to test one part is meaningless, consider synthetic benchmarks; they do just that. Why don't you buy your CPU based on a synthetic benchmark? Or your GPU? Because it literally doesn't matter; what you care about is your use case.

Sure, you CAN use these parts at 1080p, but what makes more sense is for the 1080p benchmark to be auxiliary to the main benchmarks run on probable scenarios, 1440p or 4K, both with and without ray tracing (and I bet in those configs the gains in performance will be negligible).
Posted on Reply
#77
JustBenching
Assimilator> claims I don't understand what apples to apples means
> goes on to suggest running memory at different speeds because MUH IMC

People like you are why the human race is doomed.
I'm sure people who are factually correct are the reason the human race is doomed, while people like you, bathing in falsehoods, are the road to salvation. Absolutely on point :D

Sure, underclocking one while overclocking the other one is apples to apples
Posted on Reply
#78
efikkan
THU31Seems like the 3D cache was much more beneficial with DDR4. DDR5 almost doubles the bandwidth with similar latency, so the gains are much smaller.
But even more so than that, Zen 4 is quite a bit faster than Zen 3, and this is primarily achieved through front-end improvements, which ultimately means it will be less cache sensitive. So we should expect it to get smaller gains from extra L3 cache (relatively speaking).
AnotherReaderThere are still cases where the additional cache helps tremendously. F1 2021 and Watch Dogs Legion see enormous gains.
Yes, those are edge cases. Cherry-picking edge cases to prove a point isn't a particularly good argument, especially if you want to extrapolate this into general performance. The F1 game series has been known to be an outlier for years, and I find it interesting that they don't even use the latest game in the series.
AnotherReaderKeep in mind that CPUs aren't like GPUs; they are latency engines, i.e. designed to reduce the latency of a task. For them, latency trumps bandwidth, and L3 cache's latency advantage is even greater for Zen 4 because of Zen 4's higher clocks.
Firstly, L3 doesn't work the way most people think;
L3 (in current AMD and Intel architectures) is a spillover cache for L2; you should not think of it like a faster piece of RAM or a slightly slower L2. L3 will only be beneficial when you get cache hits there, and unlike L2, you don't get cache hits there from prefetched blocks etc., as L3 only contains recently discarded blocks from L2. L3 is an LRU-type cache, which means every cache line fetched into L2 will push another out of L3.
You get a hit in L3 when (ordered by likelihood):
- An instruction cache line has been discarded from this core (or another core).
- A data cache line has been discarded from this core, most likely due to branch misprediction.
- A data cache line has been discarded from another core, but this is exceedingly rare compared to the other cases, as data stays in L3 for a very short time, and the chances of multiple threads accessing the same data cache line within a few thousand clock cycles is minuscule.

This is the reason why we see only a handful of applications being sensitive to L3: it mostly has to do with the instruction cache. For those who know low-level optimization, the reason should be immediately clear; highly optimized code is commonly known to be less sensitive to instruction cache, which essentially means better code is less sensitive to L3. Don't get me wrong, extra cache is good. But don't assume software should be designed to "scale with L3 cache", when that's a symptom of bad code.
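
To make that spillover behaviour concrete, here is a minimal, illustrative Python sketch of an exclusive LRU victim cache. It is only a toy model of the idea described above; the class name, line counts and access pattern are invented for the example, and real L3 is set-associative and far more complicated.

```python
# Toy model of a "spillover" (victim) L3: lines enter L3 only when evicted from L2,
# and every fill into L2 can push another line out of L3. Purely illustrative.
from collections import OrderedDict

class VictimCacheModel:
    def __init__(self, l2_lines, l3_lines):
        self.l2 = OrderedDict()   # keys are cache-line addresses, kept in LRU order
        self.l3 = OrderedDict()
        self.l2_lines, self.l3_lines = l2_lines, l3_lines
        self.hits = {"L2": 0, "L3": 0, "DRAM": 0}

    def access(self, line):
        if line in self.l2:                      # L2 hit
            self.l2.move_to_end(line)
            self.hits["L2"] += 1
        elif line in self.l3:                    # L3 hit: only recently evicted lines live here
            del self.l3[line]                    # exclusive cache: promote back into L2
            self.hits["L3"] += 1
            self._fill_l2(line)
        else:                                    # miss everywhere -> DRAM
            self.hits["DRAM"] += 1
            self._fill_l2(line)

    def _fill_l2(self, line):
        if len(self.l2) >= self.l2_lines:        # evict the LRU line from L2 into L3...
            victim, _ = self.l2.popitem(last=False)
            self.l3[victim] = None
            if len(self.l3) > self.l3_lines:     # ...which may push the oldest line out of L3
                self.l3.popitem(last=False)
        self.l2[line] = None

cache = VictimCacheModel(l2_lines=4, l3_lines=16)
for _ in range(100):
    for line in range(12):   # working set of 12 lines: spills out of "L2", gets caught by "L3"
        cache.access(line)
print(cache.hits)            # almost every hit lands in "L3"; only the first pass goes to "DRAM"
```

A working set that spills out of the tiny "L2" keeps getting caught by the "L3" here, while a working set larger than both would fall through to "DRAM" almost every time, which is roughly the difference between the handful of L3-sensitive games and everything else.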

Secondly, regarding latency vs. bandwidth;
Latency is always better when you look at a single instruction or a single block of data, but when looking at real-world performance you have to look at the overall latency and throughput. If, for instance, a thread is stalled waiting for two or more cache lines to be fetched, then slightly higher latency doesn't matter as much as bandwidth. This essentially comes down to the balance between data and how often the pipeline stalls. More bandwidth also means the prefetcher can fetch more data in time, so it might prevent some stalls altogether. This is why CPUs overall are much faster than 20 years ago, even though latencies in general have gradually increased.
This doesn't really apply to L3, though, as the L3 cache works very differently, as described above.

Lastly, when compared to a small generational uplift, like Zen 2 -> Zen 3 or Zen 3 -> Zen 4, the gains from extra L3 are pretty small, and the large gains are mostly down to very specific applications. This is why I keep calling it mostly a gimmick. If you, on the other hand, use one of those applications where you get a 30-40% boost, then by all means go ahead and buy one, but for everyone else, it's mostly something to brag about.
1stn00bThe Intel Core i9-13900K's officially supported memory is DDR5-5600: www.intel.com/content/www/us/en/products/sku/230496/intel-core-i913900k-processor-36m-cache-up-to-5-80-ghz/specifications.html - everything above that is based on your luck in the silicon lottery.
Not to mention that you are likely to have to downgrade that speed over time (or risk system stability).
Posted on Reply
#79
THU31
wolarSure, you CAN use these parts with 1080p, but what makes more sense is that the benchmark for 1080p is auxiliary to the main benchmarks that are run on probable scenarios, 1440p or 4k, both with and without raytracing (and i bet on those configs the gains in performance will be negligible).
All you did was confirm that you do not understand the process.

Example 1:

1080p - you get 60 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, upgrading your GPU would provide NO performance increase in 1440p. ZERO.
In 4K, you would only gain a maximum of 50% extra performance, even if the new GPU was twice as fast.
How would you know this without the 1080p test?

Example 2:
1080p - you get 100 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, the CPU bottleneck happens at 100 FPS. Which means you can get 67% more performance in 1440p after upgrading the GPU, and you can get 150% more performance in 4K.
You know the maximum framerate the CPU can achieve without a GPU bottleneck, which means you know what to expect when you upgrade your GPU.

What's important is to have this data for as many games as possible. Some games don't need a lot of CPU power, some are badly threaded, and some will utilize all 8 cores fully.

If you test in 4K, you're not testing the maximum potential of the CPU. You want to know this if you're planning on keeping your system for more than two years. Most people WILL upgrade their GPU before their CPU.
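
(Purely to illustrate the arithmetic above, here is a tiny Python sketch using the made-up numbers from the two examples; the "ceiling" is just the CPU-bound 1080p result, and nothing here comes from real benchmark data.)

```python
# The CPU-bound (1080p) result is treated as the CPU's frame-rate ceiling; the 1440p and
# 4K results are GPU-limited. Headroom = how much a faster GPU could add before the CPU
# becomes the limit. Numbers are the invented figures from Example 2.
cpu_ceiling_fps = 100                       # Example 2: 1080p, CPU-bound
gpu_limited = {"1440p": 60, "4K": 40}       # current GPU-limited results

for res, fps in gpu_limited.items():
    headroom_pct = (cpu_ceiling_fps / fps - 1.0) * 100
    print(f"{res}: up to ~{headroom_pct:.0f}% more FPS from a GPU upgrade")   # ~67% and ~150%

# With Example 1 (a 60 FPS ceiling) the same formula gives 0% at 1440p and 50% at 4K,
# which is exactly the information a 4K-only test cannot give you.
```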
Seriously, please just go watch the Hardware Unboxed video.
Posted on Reply
#80
TheoneandonlyMrK
wolarIf you read what I said, I completely understand that, but what I am saying is that testing a 7950X3D and a 4090 at 1080p makes no sense. No one will buy these components and use them to play at 1080p. Most people with this budget will opt for a high-end 4K or 1440p display.

Removing all bottlenecks and testing a CPU at 1080p, when the most likely scenario is to use it at 1440p or 4K, makes as much sense as testing it at 720p, 480p or 6p, as that other guy said.
It literally holds zero meaning.

Also, to continue on why removing all bottlenecks to test one part is meaningless, consider synthetic benchmarks; they do just that. Why don't you buy your CPU based on a synthetic benchmark? Or your GPU? Because it literally doesn't matter; what you care about is your use case.

Sure, you CAN use these parts at 1080p, but what makes more sense is for the 1080p benchmark to be auxiliary to the main benchmarks run on probable scenarios, 1440p or 4K, both with and without ray tracing (and I bet in those configs the gains in performance will be negligible).
I think the bit you're not getting is that reviews don't benchmark to give customers an idea of how a part will typically run on their personal 144 Hz 1440p or 4K, 4090-equipped hardware, even in a 4090 review.
It's to see how each GPU, in this case the 4090, compares to a few others, now and in the future.
A subtle difference.
Posted on Reply
#81
wolar
THU31All you did was confirm that you do not understand the process.

Example 1:

1080p - you get 60 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, upgrading your GPU would provide NO performance increase in 1440p. ZERO.
In 4K, you would only gain a maximum of 50% extra performance, even if the new GPU was twice as fast.
How would you know this without the 1080p test?

Example 2:
1080p - you get 100 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, the CPU bottleneck happens at 100 FPS. Which means you can get 67% more performance in 1440p after upgrading the GPU, and you can get 150% more performance in 4K.
You know the maximum framerate the CPU can achieve without a GPU bottleneck, which means you know what to expect when you upgrade your GPU.

What's important is to have this data for as many games as possible. Some games don't need a lot of CPU power, some are badly threaded, and some will utilize all 8 cores fully.

If you test in 4K, you're not testing the maximum potential of the CPU. You want to know this if you're planning on keeping your system for more than two years. Most people WILL upgrade their GPU before their CPU.
Seriously, please just go watch the Hardware Unboxed video.
Mate, I have been watching those guys for years, and more advanced channels/forums as well. You still fail to understand my reasoning, but I won't continue this in the thread; feel free to message me if you need more explanation than already given.
Posted on Reply
#82
THU31
There's nothing for you to explain. You need stuff explained to you, but you're unwilling to listen.

4K testing is for the "here and now"; it only shows you how current hardware behaves.
1080p testing is for both now and the future; it tells you how the CPU will behave in two years or more, when more powerful GPUs are available.
Posted on Reply
#83
JustBenching
wolarIf you read what I said, I completely understand that, but what I am saying is that testing a 7950X3D and a 4090 at 1080p makes no sense. No one will buy these components and use them to play at 1080p.
CPUs should be tested at CPU-bound settings. The resolution is irrelevant. If you are CPU bound at 4K, then you can test them at 4K. Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.
Posted on Reply
#84
mb194dc
fevgatosCPUs should be tested at CPU-bound settings. The resolution is irrelevant. If you are CPU bound at 4K, then you can test them at 4K. Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.
It gives you the information that it doesn't matter what CPU you have for those settings. Which is the point.

It just depends what your use case is. No one should be thinking results at 720p and 1080p are going to translate to massive gains at resolutions they're actually going to use in the real world.

So no need to drop $1k on a CPU when you can get one for $250 and not notice any difference... Save the $750 and spend it on something that will positively impact your use case instead.
Posted on Reply
#85
JustBenching
mb194dcIt gives you the information that it doesn't matter what CPU you have for those settings. Which is the point.

It just depends what your use case is. No one should be thinking results at 720p and 1080p are going to translate to massive gains at resolutions they're actually going to use in the real world.

So no need to drop $1k on a CPU when you can get one for $250 and not notice any difference... Save the $750 and spend it on something that will positively impact your use case instead.
And the 1080p results tell you that as well, so why do you need to test at 4K? If I see a CPU getting 100 FPS at 1080p, then obviously it can get 100 FPS at 4K if the card allows, so what exactly did a 4K CPU test offer me?
Posted on Reply
#86
ymbaja
Minus InfinityWow, what a waste of effort. I expect virtually nothing for productivity software. This seems to be far weaker than the 5800X3D uplifts, despite AMD's hype. They also overhyped and lied about 7900 XT(X) performance.

I couldn't care less about gaming performance with CPUs like Zen 4 and Raptor Lake; they are more than strong enough. For productivity I still think the 13700K is the sweet spot, but I will wait and see if the RL refresh is more than a tweak to clock speeds.
Remember, the 7000 series already doubled the L2 cache over the 5000 series. That's likely why additional L3 doesn't make as much of an improvement.
Posted on Reply
#87
efikkan
fevgatosCPUs should be tested at CPU-bound settings. The resolution is irrelevant. If you are CPU bound at 4K, then you can test them at 4K. Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.
That's sarcasm, right?
CPUs, GPUs, and other hardware should be tested at relevant settings and workloads; anything else is utterly pointless for deciding which one to purchase.

If you want to induce artificial workloads to find theoretical limits, that's fine for a technical discussion, but it should not be confused with which is the better product. How a product behaves under circumstances you don't run into is not going to affect your user experience. Far too many fools have purchased products based on specs or artificial benchmarks.
Posted on Reply
#88
Denver
mb194dcIt gives you the information that it doesn't matter what CPU you have for those settings. Which is the point.

It just depends what your use case is. No one should be thinking results at 720p and 1080p are going to translate to massive gains at resolutions they're actually going to use in the real world.

So no need to drop $1k on a CPU when you can get one for $250 and not notice any difference... Save the $750 and spend it on something that will positively impact your use case instead.
Don't you think that a person who buys a GPU over $1,000 will just want to find out which CPU is best for gaming, regardless of price? Also, having a very strong (the best) CPU ensures that you don't have to change it for a long time and can just upgrade the GPU. Plus, it is likely that the successor to the 4090 will be so powerful that it starts to be limited by weak CPUs even at 4K.

Anyway, I agree with the idea that the ideal is to test at 1080p, 1440p, and 4K.
Posted on Reply
#89
JustBenching
efikkanThat's sarcasm, right?
CPUs, GPUs, and other hardware should be tested at relevant settings and workloads; anything else is utterly pointless for deciding which one to purchase.

If you want to induce artificial workloads to find theoretical limits, that's fine for a technical discussion, but it should not be confused with which is the better product. How a product behaves under circumstances you don't run into is not going to affect your user experience. Far too many fools have purchased products based on specs or artificial benchmarks.
Are we at the point where it's considered sarcasm to test CPUs at... CPU-bound settings? Just wow :roll:

It's so freaking easy to demonstrate the fallacy in your logic, which begs the question of how you can not notice it yourself. CPU A and CPU B both cost 300€.

CPU A gets 100 FPS at 4K and 130 FPS at 1080p.
CPU B gets 100 FPS at 4K and 200 FPS at 1080p.

If you want a CPU just to play games, why the heck would you choose CPU A, since obviously CPU B will last you longer? And how the heck would you have known that unless you tested at CPU-bound settings? I mean, do I really have to explain things like it's kindergarten?
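
(To put the same point in code: a rough sketch where effective frame rate is approximated as min(CPU limit, GPU limit). The CPU numbers are the hypothetical CPU A / CPU B figures above; the GPU ceilings are invented placeholders for today's card and a faster future one.)

```python
# Effective frame rate is roughly capped by whichever of the CPU or GPU runs out first.
def effective_fps(cpu_limit, gpu_limit):
    return min(cpu_limit, gpu_limit)

cpus = {"CPU A": 130, "CPU B": 200}                          # CPU-bound (1080p) results
gpus = {"today's GPU at 4K": 100, "future GPU at 4K": 180}   # invented GPU ceilings

for gpu_name, gpu_cap in gpus.items():
    for cpu_name, cpu_cap in cpus.items():
        print(f"{cpu_name} + {gpu_name}: ~{effective_fps(cpu_cap, gpu_cap)} FPS")
# Both CPUs land at ~100 FPS on today's card, but only CPU B keeps scaling once the GPU
# limit moves up - which is what the 1080p (CPU-bound) test reveals in advance.
```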
Posted on Reply
#90
Punkenjoy
One of the main problems with CPU testing is not the resolution, but the kind of games reviewers use. It's almost always FPS or third-person action games, and few, if any, of them are actually CPU bound most of the time.

Just build a very large base in Valheim and you will be CPU limited at 4K. Build a large town in many colony/city-builder games and you will be CPU bound at 4K. Even today, in MMORPGs, in areas with a lot of people or in raids, you can be CPU bound with a modern CPU. In some cases it's laziness or a lack of time to produce a proper save (like building a large base in Valheim, or a large city or factory in other types of games); in others, it's just very hard to test (like MMORPGs).

But most of the time, the data extracted from non-GPU-limited scenarios can be extrapolated to a degree to those kinds of games, so it's still worthwhile since it's the easiest thing to do.
Posted on Reply
#91
BoboOOZ
wolarMate, I have been watching those guys for years, and more advanced channels/forums as well. You still fail to understand my reasoning, but I won't continue this in the thread; feel free to message me if you need more explanation than already given.
Try to understand it harder, then.

Actually, they should be testing at 480p to find out the maximum framerate the CPU can give. They don't do that because they would get even more 'it's not realistic' comments. The scientific method is hard to grasp sometimes.
Posted on Reply
#92
efikkan
fevgatosAre we at the point where it's considered sarcasm to test CPUs at....CPU bound settings? Just wow :roll:
Any sense of rationality is completely gone from your comment, just look at the previous one;
fevgatosTesting CPUs on non CPU bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.
This is not only nonsensical, it's actually blatantly false. Benchmarking real workloads at realistic settings absolutely gives you a lot of information, as it tells the reader how the contending products will perform in real life.
fevgatosIt's so freaking easy to demonstrate the fallacy in your logic, which begs the question of how you can not notice it yourself. CPU A and CPU B both cost 300€.

CPU A gets 100 FPS at 4K and 130 FPS at 1080p.
CPU B gets 100 FPS at 4K and 200 FPS at 1080p.

If you want a CPU just to play games, why the heck would you choose CPU A, since obviously CPU B will last you longer? And how the heck would you have known that unless you tested at CPU-bound settings? I mean, do I really have to explain things like it's kindergarten?
Your example is nonsense, as you wouldn't see that kind of performance discrepancy unless you are comparing a low-end CPU to a high-end CPU when benchmarking a large sample of games. And trying to use a cherry-picked game or two to showcase a "bottleneck" is ridiculous, as there is no reason to assume your selection has the characteristics of future games coming down the line.

The best approach is to take a large selection of games at realistic settings and look at the overall trend, eliminating the outliers; that will give you a better prediction of which is the better long-term investment.
Posted on Reply
#93
JustBenching
efikkanAny sense of rationality is completely gone from your comment, just look at the previous one;


This is not only nonsensical, it's actually blatantly false. Benchmarking real workloads at realistic settings absolutely gives you a lot of information, as it tells the reader how the contending products will perform in real life.
Any sense of rationality is completely gone from YOUR comment. A GPU-bound test gives you ABSOLUTELY no information about how the CPU performs. If you wanted to see how the GPU performs, then go watch the GPU review; why the heck are you watching the CPU one?

I'm sorry, but there's no point arguing anymore, you are just wrong. If I had followed your advice and only looked at 4K results, I would have bought a 7100 or a G4560 instead of an 8700K, since they performed exactly the same at 4K with the top GPU of the time. And then I upgraded to a 3090. If the graph below doesn't make you realize how wrong you are, nothing will, so you are just going to get ignored.

Posted on Reply
#94
Jism
HxxI'm not impressed with these results from a processor that costs $120+ more than the current top dog at its current retail price. Realistically, performance may actually be less, since AMD likely cherry-picked some of those results. Regardless, I think the real performance gains will matter on the lower SKUs, as they will be cheaper and more directly competing on price. I'm looking forward to the 7800X3D in April.
Cache is expensive, more expensive than just adding more cores to the party.

It's designed for a certain type of workload; games seem to benefit the most from additional L3 cache.

Other than that, in regular apps it won't be any better than a flagship, most likely due to the latter's higher sustained clocks.
Posted on Reply
#95
Space Lynx
I'm getting excited for W1zzard's review of the 7900X3D in a few days...

Posted on Reply
#97
Godrilla
wEeViLzwhy only 1080p?
For marketing purposes.
Also, if anyone is wondering why CPUs are tested at 1080p and not at lower resolutions, it's because 1080p is the middle ground: it's the minimum resolution the majority of gamers currently play at, and the performance can still be extrapolated for future GPU upgrade paths, especially at higher resolutions. If CPU A can hit 240 FPS at 1080p with GPU B in title C, then that CPU should be able to hit 240 FPS at a higher resolution with successors to GPU B in the same title C, IMO. It's not practical to test at lower resolutions, although I've seen it done. I bet a majority of readers here who are interested in the Zen 4 3D CPUs play at 1440p and 4K. A niche group for a niche product. The ones playing at 1440p or 4K couldn't care less about the delta gain at resolutions lower than 1080p. Many just want to know what the 0.1% lows and frame variance graph look like.
Lastly, if you invest in AM5 and have a CL30 DDR5-6000 kit, you may not have to upgrade the memory, possibly even on a Zen 6 3D upgrade path. Extrapolating from how the 5800X3D with CL14 DDR4-3600 compares to current CPUs with DDR5 memory, it remains very competitive. I believe we will see the same competitiveness, without constantly upgrading RAM, with future AM5 CPUs.
Posted on Reply
#98
robb
efikkanAny sense of rationality is completely gone from your comment, just look at the previous one;


This is not only nonsensical, it's actually blatantly false. Benchmarking real workloads at realistic settings absolutely gives you a lot of information, as it tells the reader how the contending products will perform in real life.


Your example is nonsense, as you wouldn't see that kind of performance discrepancy unless you are comparing a low-end CPU to a high-end CPU when benchmarking a large sample of games. And trying to use a cherry-picked game or two to showcase a "bottleneck" is ridiculous, as there is no reason to assume your selection has the characteristics of future games coming down the line.

The best approach is to take a large selection of games at realistic settings and look at the overall trend, eliminating the outliers; that will give you a better prediction of which is the better long-term investment.
You clearly lack any comprehension of what the hell a CPU review is all about.
Posted on Reply
#99
chrcoluk
Too much manipulation there; no wonder most reviewers all bench the same few games.

Also why did they disable VBS?
Posted on Reply
#100
stimpy88
AMD, Zen 5 better be a huge jump, or you're done playing in the big boy league.
chrcolukAlso why did they disable VBS?
Because it slows the system down by 5-10%.
Posted on Reply