
AMD's Reviewers Guide for the Ryzen 9 7950X3D Leaks

why only 1080p?
For marketing purposes.
Also, if anyone is wondering why CPUs are tested at 1080p and not at lower resolutions: 1080p is the middle ground, roughly the minimum resolution the majority of gamers currently play at, and the performance can be extrapolated to future GPU upgrade paths, especially at higher resolutions. If CPU A can hit 240 FPS at 1080p with GPU B in title C, then that CPU should be able to hit 240 Hz at a higher resolution with successors to GPU B in the same title, imo. It's not practical to test at lower resolutions, although I've seen it done. I bet the majority of readers here who are interested in the Zen 4 3D CPUs play at 1440p and 4K; a niche group for a niche product. The ones playing at 1440p or 4K couldn't care less about the delta at resolutions lower than 1080p. Many just want to know what the 0.1% lows and frame-variance graphs look like.
Lastly, if you invest in AM5 and have a CL30 DDR5-6000 kit, you may not have to upgrade the memory again, possibly even through a Zen 6 3D upgrade path. Extrapolating from the 5800X3D with CL14 DDR4-3600, which remains very competitive against current CPUs with DDR5 memory, I believe we will see the same kind of longevity, without constantly upgrading RAM, with future AM5 CPUs.
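A minimal sketch of that extrapolation logic, assuming the 1080p result really is CPU-bound; the FPS figures and the 2x speedup are hypothetical:

```python
# Rough sketch of the extrapolation above: the CPU-bound 1080p result acts as a
# ceiling on what any future GPU can deliver at higher resolutions.
# All numbers (and the 2x GPU speedup) are hypothetical.

def projected_fps(cpu_ceiling_1080p: float, current_fps: float, gpu_speedup: float) -> float:
    """FPS you might expect after a GPU upgrade, capped by the CPU-bound ceiling."""
    return min(cpu_ceiling_1080p, current_fps * gpu_speedup)

# CPU A hits 240 FPS at 1080p with GPU B; at 4K, GPU B only manages 120 FPS.
# With a successor roughly twice as fast, the same CPU should reach ~240 FPS at 4K.
print(projected_fps(cpu_ceiling_1080p=240, current_fps=120, gpu_speedup=2.0))  # 240.0
```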
 
Any sense of rationality is completely gone from your comment; just look at the previous one:


This is not only nonsensical, it's actually blatantly false. It absolutely gives you a lot of information to benchmark real workloads at realistic settings, as this tells the reader how the contending products will perform in real life.


Your example is nonsense, as you wouldn't see that kind of performance discrepancy unless you are comparing a low-end CPU to a high-end CPU when benchmarking a large sample of games. And trying to use a cherry-picked game or two to showcase a "bottleneck" is ridiculous, as there is no reason to assume your selection has the characteristics of future games coming down the line.

The best approach is a large selection of games at realistic settings; look at the overall trend, eliminating the outliers, and that will give you a better prediction of which product is the better long-term investment.
You clearly lack any comprehension of what the hell a CPU review is all about.
 
Too much manipulation there, no wonder most of the reviewers all bench the same few games.

Also why did they disable VBS?
 
AMD, Zen 5 better be a huge jump, or you're done playing in the big boy league.

Also why did they disable VBS?
Because it slows the system down by 5-10%.
 
All you did was confirm that you do not understand the process.

Example 1:

1080p - you get 60 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, upgrading your GPU would provide NO performance increase in 1440p. ZERO.
In 4K, you would only gain a maximum of 50% extra performance, even if the new GPU was twice as fast.
How would you know this without the 1080p test?

Example 2:
1080p - you get 100 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, the CPU bottleneck happens at 100 FPS. Which means you can get 67% more performance in 1440p after upgrading the GPU, and you can get 150% more performance in 4K.
You know the maximum framerate the CPU can achieve without a GPU bottleneck, which means you know what to expect when you upgrade your GPU.
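To make the arithmetic in both examples explicit, here is a minimal sketch; the FPS figures are the hypothetical ones from the examples above, not measured data:

```python
# Sketch of the arithmetic in both examples: the 1080p (CPU-bound) result is the
# CPU's FPS ceiling, and the higher-resolution result is the current GPU-bound FPS.
# The figures are the hypothetical ones from the examples, not measurements.

def gpu_upgrade_headroom(cpu_ceiling_fps: float, current_fps: float) -> float:
    """Maximum possible FPS gain (as a fraction) from a faster GPU,
    assuming the CPU-bound ceiling stays put."""
    return max(cpu_ceiling_fps / current_fps - 1.0, 0.0)

# Example 1: the CPU tops out at 60 FPS (already reached at 1440p).
print(gpu_upgrade_headroom(60, 60))   # 0.0   -> zero gain possible at 1440p
print(gpu_upgrade_headroom(60, 40))   # 0.5   -> at most +50% at 4K

# Example 2: the CPU tops out at 100 FPS.
print(gpu_upgrade_headroom(100, 60))  # ~0.67 -> up to +67% at 1440p
print(gpu_upgrade_headroom(100, 40))  # 1.5   -> up to +150% at 4K
```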

What's important is to have this data for as many games as possible. Some games don't need a lot of CPU power, some are badly threaded, and some will utilize all 8 cores fully.

If you test in 4K, you're not testing the maximum potential of the CPU. You want to know this if you're planning on keeping your system for more than two years. Most people WILL upgrade their GPU before their CPU.
Seriously, please just go watch the Hardware Unboxed video.
I’m quoting you but my point is valid for everyone. A lot of arguing about testing resolution and you are … both right.
Testing at 1080p is still relevant to keep the GPU (mostly) out of the equation, BUT testing at higher resolutions is important to give customers an idea of what to expect when they buy the CPU. Showing only 1080p results is wrong (or it's a marketing move to highlight something you are advertising).
 
I would love to see a non-X 3D CPU, to see how the performance and the market would cope with that. If a CPU like an AMD Ryzen 7 7800 non-X 3D were a strong CPU with a controlled, lower TDP than the X variant, we might have a solid max-100 W CPU that hits hard.
 
I would love to see a non-X 3D CPU, to see how the performance and the market would cope with that. If a CPU like an AMD Ryzen 7 7800 non-X 3D were a strong CPU with a controlled, lower TDP than the X variant, we might have a solid max-100 W CPU that hits hard.
Which kind of application do you mean? Because if we are speaking about gaming, basically every CPU stays below 100 W.
 
Maybe it won't be so bad. I was feeling kinda blah yesterday. I watched a video this morning that was fairly enlightening.
 
Which kind of application do you mean? Because if we are speaking about gaming, basically every CPU stays below 100 W.

Can you share? In @W1zzard's review of the AMD Ryzen 9 7950X, it has an average of 125 W, and the Ryzen 7 7700X an average of 80 W.

So think about it: the Ryzen 7700 non-X averages 57.4 W in games, so if a well-tuned AMD Ryzen 7 non-X 3D did about 65 W on average, it would be a killer gaming CPU, because we saw with the AMD Ryzen 7 5800X3D how much FPS gain the extra cache brought in gaming while not using an insane number of watts.

This would make a lot of customers happy in countries where power still costs real money. You have to think outside the United States of America; they do complain about things, but they get them in bigger sizes than other countries, so things are still cheaper there than in the rest of the world.
 
I don't understand what you're trying to say. You want a 7800 non-X non-3D? Why? There's already the 7700, which is exactly the same as the 7700X, just with slightly lower boost clocks and voltages.

There is a point where every additional 100 MHz raises the voltage and power consumption significantly. That's why the X models are so inefficient at stock settings. There's no room for a 7800 non-X, because it would have to be faster than a 7700X while being more efficient. That's not possible, because they would have to use top quality chiplets, which are reserved for server CPUs.

The 7700 is already redundant for JUST gaming workloads. You either choose the 7600 for maximum value, or the 7800X3D for maximum performance.
 
I don't understand what you're trying to say. You want a 7800 non-X non-3D? Why? There's already the 7700, which is exactly the same as the 7700X, just with slightly lower boost clocks and voltages.

There is a point where every additional 100 MHz raises the voltage and power consumption significantly. That's why the X models are so inefficient at stock settings. There's no room for a 7800 non-X, because it would have to be faster than a 7700X while being more efficient. That's not possible, because they would have to use top quality chiplets, which are reserved for server CPUs.

The 7700 is already redundant for JUST gaming workloads. You either choose the 7600 for maximum value, or the 7800X3D for maximum performance.

I was speaking of a Ryzen 7 7800 non-X 3D; it would make sense if the performance uplift is there gaming-wise.

Between the 7700 and the 7700X there is about a 5% difference in gaming, which isn't much, but ITX systems could also benefit from the lower power usage and heat. I guess the market might be small, though, and people want a 100 °C CPU that needs an AIO to keep cool.
 
The platform BIOS, AGESA version, Chipset Driver, and the Windows version will have an impact on the benchmarks.

I have been testing beta BIOS 0921, AGESA 1005c, Chipset V5, and Windows 11 Pro 22H2 with a 7950X chip.

[Screenshot: AGESA 1005c]

My system was fully stable; I had no issues with EXPO 6000 or manual 6400 memory profiles with AGESA 1005c. The all-core average active clock under load was 55.0x with AGESA 1005c & Chipset V5 at default settings. I tested several benchmarks and games and got the same results.

Default settings, EXPO under load.
[Screenshot: load multiplier 55.0x]

Default settings, EXPO Idle.
[Screenshot: idle multiplier 57.5x]

I understand Asus has been working on official BIOS & firmware for the new X3D CPUs and has released a new BIOS 0922 on the support site; I haven't tested the 0922 BIOS. I rolled back to 0805, AGESA 1003, and Chipset V4, which seems to work best for my non-X3D chip.

Is the new 7950X3D's base clock lowered by 300 MHz to 4.2 GHz, with boost clocks at 5.1 GHz all-core and 5.7 GHz single-core? :confused:

The review will be interesting, looking forward to results. :)
 
Looking at the list of games, I have to say they suck. The ones that are interesting/good show little to no performance gains. Why would someone rush out to buy a gaming CPU that offers no benefit?
 
Can you share? In @W1zzard's review of the AMD Ryzen 9 7950X, it has an average of 125 W, and the Ryzen 7 7700X an average of 80 W.

So think about it: the Ryzen 7700 non-X averages 57.4 W in games, so if a well-tuned AMD Ryzen 7 non-X 3D did about 65 W on average, it would be a killer gaming CPU, because we saw with the AMD Ryzen 7 5800X3D how much FPS gain the extra cache brought in gaming while not using an insane number of watts.

This would make a lot of customers happy in countries where power still costs real money. You have to think outside the United States of America; they do complain about things, but they get them in bigger sizes than other countries, so things are still cheaper there than in the rest of the world.
The 12900K averages less than 57 W in games.
 
Testing at 1080p is still relevant to keep the GPU (mostly) out of the equation,
Yes, so you are comparing the CPUs. Another way to look at the situation is that, in a game, the CPU is not nearly as limited by the operation of the GPU as the GPU is limited by the operation of the CPU. So by testing at 1080p you're going to be able to compare the CPUs' relative performance far better than the GPUs'.
BUT testing at higher resolutions is important to give customers an idea of what to expect when they buy the CPU.
Isn't that what other sets of synthetic CPU tests are for?
 
Going by that logic, why not test these at 720p or 480p? What he said makes sense: those resolutions are obsolete, same as 1080p.
I doubt anyone spending that amount of money, a 7950X3D with a 4090, will be using 1080p.
These tests are downright useless and have zero REAL value. But then again, if you were shown real-case tests, the tests for the 99% of people and not the 0.0001% weirdo who will run this setup, you wouldn't even care to upgrade, because in reality the difference is minimal; that's also true for new-generation CPUs vs previous ones.
Testing at 1080p could be indicative of what we will get at 4K/ultrawide when paired with an RTX 5090/6090 later.

For the best gaming performance per buck you might want to upgrade the GPU more often than the CPU, which is especially true for Intel, where the motherboard needs to be upgraded too. So this info is very useful.
 
Yes, so you are comparing the CPUs. Another way to look at the situation is that, in a game, the CPU is not nearly as limited by the operation of the GPU as the GPU is limited by the operation of the CPU. So by testing at 1080p you're going to be able to compare the CPUs' relative performance far better than the GPUs'.

Isn't that what other sets of synthetic CPU tests are for?
Synthetics are good for comparing CPUs in that specific synthetic workload. They're not really useful for gaming. For example, Zen 1 was very good at many synthetic workloads but was behind in gaming.

There are a few good reasons to test a CPU in a CPU-limited scenario:

1) See how the CPU would do in CPU-bound games that are harder to test, like MMOs
2) See how well the CPU could do in the future
3) See what is the best of the best (because for many, that matters)


Depending on what you do, it may matter or not. If you mostly play competitive shooters, for example, just get whatever mid-range CPU from the brand you prefer and stop worrying about this futile debate. If you play MMOs or heavily CPU-intensive games like Factorio or Valheim (when focusing on base construction), get the best CPU your budget allows.

This debate is way overstated. We have always needed, and will always need, to test CPUs in non-limited scenarios. But in the end, it's not as important as people think it is, unless there is a huge disparity between options (like in the Bulldozer era). For most people, any modern CPU that isn't low-end will be more than enough for quite some time. That doesn't mean it's worthless to see what the best is and all that; this is an enthusiast site, after all.
 
Can you share? In @W1zzard's review of the AMD Ryzen 9 7950X, it has an average of 125 W, and the Ryzen 7 7700X an average of 80 W.
I don't know what you are referring to.
In the 7950X review, the gaming average is this:

[Screenshot: 7950X review, gaming power consumption chart]

87 W.

The 12900K averages less than 57 W in games.
It depends on the game, but my 13900K is around 80 W.
 
which is exactly the same as the 7700X, just with slightly lower boost clocks and voltages.
With a few BIOS tweaks and a slight OC it is the exact same SKU, with the same max FCLK.
 
The pi$$ing contest is still going strong: 200 FPS vs 205 FPS at 1080p.
Time to change the CPU.... /s
 
A 2-DIMM motherboard is required if you want to increase the memory clock in 1:2 mode.
It would be interesting to look for DDR5 modules with Hynix chips in shops and on eBay etc...
For example, a Hynix DDR5-4800 8GBx2 kit (green PCB) works at DDR5-6000.
Add a heatsink if you want to OC to DDR5-7500 or higher with a cheap DDR5-4800 Hynix chip.
If you're looking for a cheaper option, you may want to wait for information on Micron and Samsung's 2nd generation (DDR5-5600 chip).

Example
7950X + 4-DIMM M/B + DDR5-6000 CL30 (2x16GB), 1:1 mode *A 4-DIMM M/B is limited to DDR5-7000 (due to signal reflection)
7950X + 2-DIMM M/B + DDR5-8600 CL42 (2x16GB), 1:2 mode

Hynix M-die (DDR5-4800 / DDR5-7000+ Manual OC) *Release 2020/10
Hynix A-die (DDR5-5600 / DDR5-8000+ Manual OC) *Release 2022/11
Micron (DDR5-4800 / DDR5-5200+ Manual OC) *Release 2021/11
Micron (DDR5-5600 / unknown) *There is no information yet as it has just been released. *Release 2023/02
Samsung M-die (DDR5-4800 / DDR5-6000+ Manual OC) *Release 2021/12
Samsung D-die (DDR5-5600 / unknown) *There is no information yet as it has just been released. *Release 2023/02
*Release times may vary by country.
*Very high clocks may not run due to IMC and motherboard variations.
*OEM memory may have PMIC voltage locked.
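
As a rough illustration of how the two example configurations above translate into clocks, here is a minimal sketch, assuming the usual AM5 relationship (memory clock is half the DDR5 transfer rate; UCLK runs at the memory clock in 1:1 mode and at half of it in 1:2 mode); exact behaviour can vary with BIOS/AGESA:

```python
# Rough sketch, assuming the usual AM5 relationship: MEMCLK = DDR5 rate / 2,
# and UCLK = MEMCLK in 1:1 mode or MEMCLK / 2 in 1:2 mode.
# Exact behaviour can vary with BIOS/AGESA; this is only an illustration.

def ddr5_clocks(transfer_rate_mts: int, mode: str) -> dict:
    """Approximate MEMCLK/UCLK in MHz for a DDR5 transfer rate and ratio mode."""
    memclk = transfer_rate_mts / 2            # DDR: two transfers per clock
    uclk = memclk if mode == "1:1" else memclk / 2
    return {"MEMCLK_MHz": memclk, "UCLK_MHz": uclk}

print(ddr5_clocks(6000, "1:1"))  # {'MEMCLK_MHz': 3000.0, 'UCLK_MHz': 3000.0}
print(ddr5_clocks(8600, "1:2"))  # {'MEMCLK_MHz': 4300.0, 'UCLK_MHz': 2150.0}
```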


[Image: Hynix DDR5 chip]
 
I don't know what you are referring to.
In the 7950X review, the gaming average is this:

[Screenshot: 7950X review, gaming power consumption chart]

87 W.
I was looking at the application power consumption.

But I am still waiting to see whether the 3D V-Cache brings only a gaming performance uplift, because if so, non-X variants would still make sense for normal people and OEMs if they can come in at a good price relative to the X variant.

People are going to spend $2k minimum and then use 1080p?

Let's see the 4K results with RT or higher.

Yeah, that hasn't really moved for some years now with 1080p gaming. I tried 4K, but it takes too much money, and day to day 1080p is too small, so I went back to 1440p; it's still much easier to drive than 4K and looks better to my eyes than 1080p.
 
Testing at 1080p could be indicative of what we will get at 4K/ultrawide when paired with an RTX 5090/6090 later.

For the best gaming performance per buck you might want to upgrade the GPU more often than the CPU, which is especially true for Intel, where the motherboard needs to be upgraded too. So this info is very useful.

4K is 4x the pixels of 1080p, so one generational uplift is much smaller than the gap between 4K and 1080p FPS. An RTX 3090 averages 88 FPS across the TechPowerUp suite of games in 4K; at 1080p it gets 181 FPS - 105% more. And a 2080 Ti goes from 59 FPS in 4K to 130 FPS at 1080p - 120% more.

With an RTX 4090 you get 144 FPS in 4K, but only 74% more at 1080p - though the absurdly high 251 FPS is here limited by system latency, not CPU performance - so we can be sure the actual headroom at 1080p is more than 100% over 4K.

The generational uplift from the RTX 3090 to the 4090 only gives you 63% in 4K and 40% at 1080p, and going from the 2080 Ti to the 3090 only gave you a 49% uplift in 4K and 39% at 1080p.

So you really have to skip a GPU generation and still be using the same CPU four years later, and even then it's not the same as comparing 4K numbers to 1080p.

720p is of course even more absurd; that's like planning on using the same CPU for more than 8 years.

This "let's get GPU bottleneck out if equation" numbers are very good for selling CPUs, but the theoretical increases at low resolution have very little effect on what you'll get from that product.

But it's good for sales, good for creating FOMO in buyers.

Because showing the actual FPS they'll get from a cheap $250 CPU and a very expensive $700 one in 4K, perhaps with a more down-to-earth GPU that the majority of gamers use, shows a very, very different picture.
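
To make the arithmetic above easy to check, here is a minimal sketch using the average-FPS figures quoted in this post (as cited from TechPowerUp's game suite, not re-measured):

```python
# Sketch of the percentage arithmetic above, using the average-FPS figures quoted
# in this post (TechPowerUp game-suite averages as cited, not re-measured).

def pct_more(a: float, b: float) -> float:
    """How much more FPS 'a' delivers than 'b', in percent."""
    return (a / b - 1.0) * 100.0

fps = {
    "RTX 2080 Ti": {"4K": 59, "1080p": 130},
    "RTX 3090":    {"4K": 88, "1080p": 181},
    "RTX 4090":    {"4K": 144, "1080p": 251},
}

# Resolution scaling per card (how much faster 1080p is than 4K on the same GPU):
for card, res in fps.items():
    print(card, f"+{pct_more(res['1080p'], res['4K']):.0f}% at 1080p vs 4K")

# Generational uplift at each resolution:
print(f"3090 -> 4090: +{pct_more(144, 88):.0f}% at 4K, +{pct_more(251, 181):.0f}% at 1080p")
print(f"2080 Ti -> 3090: +{pct_more(88, 59):.0f}% at 4K, +{pct_more(181, 130):.0f}% at 1080p")
```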
 