Friday, February 24th 2023

AMD's Reviewers Guide for the Ryzen 9 7950X3D Leaks
AMD's Ryzen 7000-series CPUs with 3D V-Cache are set to launch next week, and alongside the launch there will obviously be reviews of the upcoming CPUs. As with many other companies, AMD prepared a reviewers guide for the media to give them some guidance, as well as expected benchmark numbers based on the test hardware AMD used in-house. Parts of that reviewers guide have now appeared online, courtesy of a site called HD Tecnologia. For those who can't wait until next week's reviews, this gives a glimpse of what to expect, at least based on the games tested by AMD.
AMD put the Ryzen 9 7950X3D up against Intel's Core i9-13900K, with both systems equipped with 32 GB of DDR5-6000 memory and liquid cooling. Tests were done with both AMD's own Radeon RX 7900 XTX and an NVIDIA GeForce RTX 4090 graphics card. We won't go into details of the various benchmarks here, as you can find those below, but according to AMD's figures, AMD came out on top with a 5.6 percent win over the Intel CPU at 1080p using the Radeon RX 7900 XTX and a 6 percent win using the GeForce RTX 4090. This was across 22 different games, with Horizon Zero Dawn and F1 2021 favouring the AMD CPU the most, and Far Cry 6 and the CPU test in Ashes of the Singularity favouring it the least. TechPowerUp will of course have a review ready for your perusal by the time the new CPUs launch next week, so you'll have to wait until then to see if AMD's own figures hold true or not.
Sources:
HD Tecnologia, via VideoCardz
133 Comments on AMD's Reviewers Guide for the Ryzen 9 7950X3D Leaks
Removing all bottlenecks and testing a CPU at 1080p, when the most likely scenario is to use it for 1440p and 4K, makes as much sense as testing it at 720p, 480p or 6p, as that other guy said.
It literally holds zero meaning.
Also, to continue on why removing all bottlenecks to test one part is meaningless, take synthetic benchmarks into consideration: they do just that. Why don't you go and buy your CPU based on a synthetic benchmark? Or your GPU? Because it literally doesn't matter; what you care about is your use case.
Sure, you CAN use these parts at 1080p, but what makes more sense is that the 1080p benchmark is auxiliary to the main benchmarks run in probable scenarios, 1440p or 4K, both with and without ray tracing (and I bet on those configs the gains in performance will be negligible).
Sure, underclocking one while overclocking the other one is apples to apples
L3 (in current AMD and Intel architectures) is a spillover cache for L2; you should not think of it as a faster piece of RAM or a slightly slower L2. L3 is only beneficial when you get cache hits there, and unlike L2, you don't get hits there from prefetched blocks etc., as L3 only contains blocks recently discarded from L2. L3 is an LRU-type cache, which means every cache line fetched into L2 will eventually push another one out of L3.
You get a hit in L3 when (ordered by likelihood):
- An instruction cache line has been discarded from this core (or another core).
- A data cache line has been discarded from this core, most likely due to branch misprediction.
- A data cache line has been discarded from another core, but this is exceedingly rare compared to the other cases, as data stays in L3 for a very short time, and the chance of multiple threads accessing the same data cache line within a few thousand clock cycles is minuscule.
This is the reason why we only see a handful of applications that are sensitive to L3, as it mostly has to do with the instruction cache. For those who know low-level optimization, the reason should be immediately clear: highly optimized code is known to be less sensitive to instruction cache, which essentially means better code is less sensitive to L3. Don't get me wrong, extra cache is good. But don't assume software should be designed to "scale with L3 cache", when that's a symptom of bad code.
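Purely as illustration, here's a minimal victim-cache sketch in Python (hypothetical sizes and behaviour, not a model of AMD's or Intel's actual implementation): L3 fills only from L2 evictions, and an L3 hit promotes the line back into L2.
```python
from collections import OrderedDict

class VictimL3:
    """Toy model of an L3 victim cache: it is filled only by lines evicted
    from L2, and evicts its own least-recently-used line when full.
    Sizes are in cache lines, not bytes."""

    def __init__(self, l2_lines, l3_lines):
        self.l2 = OrderedDict()   # address -> None, kept in LRU order
        self.l3 = OrderedDict()
        self.l2_lines = l2_lines
        self.l3_lines = l3_lines

    def access(self, addr):
        if addr in self.l2:                  # L2 hit
            self.l2.move_to_end(addr)
            return "L2 hit"
        if addr in self.l3:                  # L3 hit: promote back into L2
            del self.l3[addr]
            self._fill_l2(addr)
            return "L3 hit"
        self._fill_l2(addr)                  # miss: fetch from memory into L2
        return "miss"

    def _fill_l2(self, addr):
        self.l2[addr] = None
        if len(self.l2) > self.l2_lines:
            victim, _ = self.l2.popitem(last=False)   # evict LRU line from L2...
            self.l3[victim] = None                    # ...into L3 (victim cache)
            if len(self.l3) > self.l3_lines:
                self.l3.popitem(last=False)           # L3 evicts its own LRU line

cache = VictimL3(l2_lines=4, l3_lines=8)
for a in [1, 2, 3, 4, 5, 1]:   # line 1 falls out of L2, then hits in L3
    print(a, cache.access(a))
```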
Secondly, regarding latency vs. bandwidth:
Latency is always better when you look at a single instruction or a single block of data, but when looking at real-world performance you have to consider overall latency and throughput. If, for instance, a thread is stalled waiting for two or more cache lines to be fetched, then slightly higher latency doesn't matter as much as bandwidth. This essentially comes down to the balance between data and how often the pipeline stalls. More bandwidth also means the prefetcher can fetch more data in time, so it might prevent some stalls altogether. This is why CPUs overall are much faster than 20 years ago, even though latencies in general have gradually increased.
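Here's a back-of-the-envelope sketch of that latency vs. bandwidth trade-off (all latency and bandwidth figures are made up for illustration, not measurements of any real memory subsystem):
```python
# Rough model: the time to service N outstanding 64-byte cache-line fetches is
# roughly one latency (the fetches overlap) plus the transfer time limited by
# bandwidth. All numbers are illustrative only.

LINE_BYTES = 64

def fetch_time_ns(n_lines, latency_ns, bandwidth_gbs):
    # 1 GB/s moves roughly 1 byte per nanosecond
    transfer_ns = n_lines * LINE_BYTES / bandwidth_gbs
    return latency_ns + transfer_ns

# One line in flight: the lower-latency option wins.
print(fetch_time_ns(1, latency_ns=80, bandwidth_gbs=50))    # ~81 ns
print(fetch_time_ns(1, latency_ns=90, bandwidth_gbs=100))   # ~91 ns

# Thirty-two lines in flight: the higher-bandwidth option wins
# despite its higher latency.
print(fetch_time_ns(32, latency_ns=80, bandwidth_gbs=50))   # ~121 ns
print(fetch_time_ns(32, latency_ns=90, bandwidth_gbs=100))  # ~110 ns
```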
But this doesn't really apply to L3, as the L3 cache works very differently, as described above.
Lastly, when compared to a small generational uplift, like Zen 2 -> Zen 3 or Zen 3 -> Zen 4, the gains from extra L3 are pretty small, and the large gains are mostly down to very specific applications. This is why I keep calling it mostly a gimmick. If, on the other hand, you use one of those applications where you get a 30-40% boost, then by all means go ahead and buy one, but for everyone else it's mostly something to brag about. Not to mention that you are likely to downgrade that speed over time (or risk system stability).
Example 1:
1080p - you get 60 FPS
1440p - you get 60 FPS
4K - you get 40 FPS
In this situation, upgrading your GPU would provide NO performance increase in 1440p. ZERO.
In 4K, you would only gain a maximum of 50% extra performance, even if the new GPU was twice as fast.
How would you know this without the 1080p test?
Example 2:
1080p - you get 100 FPS
1440p - you get 60 FPS
4K - you get 40 FPS
In this situation, the CPU bottleneck happens at 100 FPS. Which means you can get 67% more performance in 1440p after upgrading the GPU, and you can get 150% more performance in 4K.
You know the maximum framerate the CPU can achieve without a GPU bottleneck, which means you know what to expect when you upgrade your GPU.
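If it helps, here's the same arithmetic as a tiny sketch (the FPS figures are the hypothetical ones from the two examples, not real benchmark data):
```python
# The FPS at a CPU-bound (low) resolution approximates the CPU's ceiling,
# which tells you how much headroom a faster GPU could unlock at higher
# resolutions. Numbers mirror the hypothetical examples above.

def gpu_upgrade_headroom(cpu_bound_fps, current_fps):
    """Maximum % gain a much faster GPU could give before the CPU caps you."""
    return (cpu_bound_fps / current_fps - 1) * 100

# Example 1: CPU tops out at 60 FPS (seen at 1080p)
print(gpu_upgrade_headroom(60, 60))   # 1440p:   0% -> already CPU-limited
print(gpu_upgrade_headroom(60, 40))   # 4K:     50% at most

# Example 2: CPU tops out at 100 FPS
print(gpu_upgrade_headroom(100, 60))  # 1440p: ~67%
print(gpu_upgrade_headroom(100, 40))  # 4K:    150%
```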
What's important is to have this data for as many games as possible. Some games don't need a lot of CPU power, some are badly threaded, and some will utilize all 8 cores fully.
If you test in 4K, you're not testing the maximum potential of the CPU. You want to know this if you're planning on keeping your system for more than two years. Most people WILL upgrade their GPU before their CPU.
Seriously, please just go watch the Hardware Unboxed video.
It's to see how each GPU, in this case the 4090, compares to a few others, now and in the future.
A subtle difference.
4K testing is for the "here and now"; it only shows you how current hardware behaves.
1080p testing is for both now and the future; it tells you how the CPU will behave in two years or more, when more powerful GPUs are available.
It just depends what your use case is. No one should be thinking results at 720p and 1080p are going to translate to massive gains at resolutions they're actually going to use in the real world.
So no need to drop $1k on a CPU when you can get one for $250 and not notice any difference... Save the $750 and spend it on something that will positively impact your use case instead.
CPUs, GPUs, and other hardware should be tested at relevant settings and workloads; anything else is utterly pointless for deciding which one to purchase.
If you want to induce artificial workloads to find theoretical limits then that's fine for a technical discussion, but this should not be confused with what is a better product. How a product behaves under circumstances you don't run into is not going to affect your user experience. Far too many fools have purchased products based on specs or artificial benchmarks.
Anyway, I agree with the idea that the ideal is to test at 1080p, 1440p and 4K.
It's so freaking easy to demonstrate the fallacy in your logic that it begs the question of how you didn't notice it yourself. CPU A and CPU B both cost 300€.
CPU A gets 100 fps at 4k and 130 fps at 1080p.
CPU B gets 100 fps at 4k and 200 fps at 1080p.
If you want a CPU just to play games, why the heck would you choose CPU A, since obviously CPU B will last you longer, and how the heck would you have known that unless you tested in CPU bound settings? I mean, do I really have to explain things like it's kindergarten?
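To put that in a toy model (assuming, purely for illustration, ~40% more GPU performance per generation):
```python
# GPU-limited FPS is assumed to grow with each GPU upgrade; each CPU caps the
# result at its own CPU-bound (1080p) ceiling. The numbers are the
# hypothetical ones from the CPU A / CPU B example above.

cpu_ceiling = {"CPU A": 130, "CPU B": 200}   # FPS measured in a CPU-bound test
gpu_limited_fps = 100                        # what today's GPU manages at 4K

for upgrade in range(4):                     # today + three future GPU upgrades
    gpu_fps = gpu_limited_fps * (1.4 ** upgrade)   # assumed ~40% per generation
    result = {name: min(ceiling, gpu_fps) for name, ceiling in cpu_ceiling.items()}
    print(f"GPU upgrade {upgrade}: " +
          ", ".join(f"{n}: {fps:.0f} FPS" for n, fps in result.items()))
# CPU A stops scaling after the first upgrade; CPU B keeps scaling for two more.
```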
Just build a very large base in Valheim and you will be CPU limited at 4K. Build a large town in a lot of colony/city-builder games and you will be CPU bound at 4K. Even today, in MMORPGs, in areas with a lot of people or in raids, you can be CPU bound with a modern CPU. In some cases it's laziness or lack of time to produce a proper save (like building a large base in Valheim or a large city or factory in other types of games); in others, it's just very hard to test (like MMORPGs).
But most of the time, the data extracted from non-GPU-limited scenarios can be extrapolated to a degree to those kinds of games, so it's still worth it if it's the easiest thing to do.
Actually, they should be testing at 480p to find out the maximum framerate the CPU can give. They don't do that because they would get even more 'it's not realistic' comments. The scientific method is hard to grasp sometimes.
The best approach is a large selection of games at realistic settings; look at the overall trend, eliminating the outliers, and that will give you a better prediction of what is the better long-term investment.
I'm sorry, but there's no point arguing anymore; you are just wrong. If I had followed your advice of only looking at 4K results, I would have bought a 7100 or a G4560 instead of an 8700K, since they performed exactly the same at 4K with the top GPU of the time. And then I upgraded to a 3090. If the graph below doesn't make you realize how wrong you are, nothing will, so you are just going to get ignored.
It's designed for a certain type of workload; games seem to benefit the most from additional L3 cache.
Other than that, in regular apps it won't be any better than the flagship, most likely due to the flagship's higher sustained clocks.
Also, if anyone is wondering why CPUs are tested at 1080p and not at lower resolutions: 1080p is the middle ground, the minimum resolution the majority of gamers currently play at, and the performance can be extrapolated to future GPU upgrade paths, especially at higher resolutions. If CPU A can hit 240 FPS at 1080p with GPU B in title C, then that CPU should be able to hit 240 FPS at a higher resolution with successors to GPU B in the same title C, imo. It's not practical to test at lower resolutions, although I've seen it done. I bet a majority of readers here who are interested in the Zen 4 3D CPUs play at 1440p and 4K. Niche group for a niche product. The ones playing at 1440p or 4K couldn't care less about the delta gain at resolutions lower than 1080p. Many just want to know what the 0.1% lows and frame-variance graphs look like.
Lastly, if you invest in AM5 and have CL30 DDR5-6000 kits, you may not have to upgrade the memory even through a Zen 6 3D upgrade path. Extrapolating from the 5800X3D with CL14 DDR4-3600 compared to the performance of current CPUs with DDR5 memory, it comes out very competitive. I believe we will see the same with future AM5 CPUs, without constantly upgrading RAM.
Also, why did they disable VBS?