
AMD's Reviewers Guide for the Ryzen 9 7950X3D Leaks

4K is 4x the pixels of 1080p, so one generation of GPU uplift is much smaller than the jump you see when comparing 4K FPS to 1080p FPS. An RTX 3090 averages 88 FPS across the TechPowerUp suite of games at 4K and 181 FPS at 1080p - 105% more. A 2080 Ti goes from 59 FPS at 4K to 130 FPS at 1080p - 120% more.

With an RTX 4090 you get 144 FPS at 4K, but only 74% more at 1080p - and that absurdly high 251 FPS is limited by system latency rather than CPU performance, so we can be sure the real gap between 1080p and 4K is still more than 100%.

A generational uplift from the RTX 3090 to the 4090 only gives you 63% at 4K and 40% at 1080p, and going from the 2080 Ti to the 3090 only gave you 49% at 4K and 39% at 1080p.
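For anyone who wants to check the arithmetic, here's a minimal Python sketch that recomputes those percentages from the average FPS figures quoted above (the inputs are just the numbers cited in this post, not fresh measurements; small rounding differences aside, it reproduces the same uplifts):

```python
# Recompute the uplift percentages from the average FPS numbers quoted above.
fps = {
    "RTX 2080 Ti": {"4K": 59,  "1080p": 130},
    "RTX 3090":    {"4K": 88,  "1080p": 181},
    "RTX 4090":    {"4K": 144, "1080p": 251},
}

def pct_more(a, b):
    """How much higher a is than b, in percent."""
    return (a / b - 1) * 100

# Resolution scaling per card: 1080p vs 4K
for card, v in fps.items():
    print(f"{card}: 1080p is {pct_more(v['1080p'], v['4K']):.0f}% faster than 4K")

# Generational uplift at each resolution
for new, old in [("RTX 3090", "RTX 2080 Ti"), ("RTX 4090", "RTX 3090")]:
    for res in ("4K", "1080p"):
        print(f"{old} -> {new} at {res}: +{pct_more(fps[new][res], fps[old][res]):.0f}%")
```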

So you really have to skip a GPU generation and keep the same CPU for 4 years, and even then it's not the same as comparing 4K numbers to 1080p numbers.

720p is of course even more absurd - that's like planning to use the same CPU for more than 8 years.

This "let's get GPU bottleneck out if equation" numbers are very good for selling CPUs, but the theoretical increases at low resolution have very little effect on what you'll get from that product.

But it's good for sales, good for creating FOMO in buyers.

Showing the actual FPS people will get from a cheap $250 CPU versus a very expensive $700 one at 4K, and perhaps with a more down-to-earth GPU like the ones the majority of gamers actually use, paints a very, very different picture.
With all of that said, my 12900K with max-tuned DDR5 at 7200 CL30 is bottlenecking my 4090 at the settings I actually play games with, i.e. 4K DLSS Quality + RT. It majorly bottlenecks in Hogwarts Legacy, for example, and it gets really close in some areas of Cyberpunk.
 
Unfortunately, all CPU reviews for gaming are flawed, because a massive sector of games with hundreds of thousands of concurrent players gets totally ignored. Until Civ 6 turn time gets tested, Stellaris/HOI4 and other grand strategy simulation rates get tested, Cities: Skylines simulation rate gets tested, and so on, there is a huge hole in the data.

What is the point in testing older AAA games that get at most 20-30% of the concurrent players that Civ 6 does, for example? And what is the point in testing Civ 6 FPS? The damn game is basically a complicated board game on the PC. 60 FPS is absolutely fine, and the turn time is what makes the bigger difference to playability. Where is the Stellaris or HOI4 data? All Paradox grand strategy games run on the same engine, so testing one would indicate performance for all of them, and there are over 100K concurrent players for those titles on Steam alone. Sure, Hogwarts might be at 300K concurrent players at the moment, but give it a month and those Paradox games will still be at 100K or so while Hogwarts will have dropped massively.

Unless a game is there because it has a specific engine you want tested or because it uses an API you want to test, the games tested should surely cover as broad an array of game types as possible to be considered complete.
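To make the distinction concrete, here is a toy Python sketch of what a simulation-rate / turn-time benchmark measures, as opposed to FPS. Nothing here touches Civ 6 or any Paradox engine; simulate_turn is a made-up CPU-bound stand-in for the AI/economy work a real game would do - the point is only that the reported metric is seconds per turn (or turns per second), not frames per second.

```python
# Toy turn-time benchmark: measure wall-clock time per simulated "turn",
# not frames per second. The workload below is purely illustrative.
import time

def simulate_turn(n_units=200_000):
    # Placeholder CPU-bound work standing in for AI pathfinding, economy ticks, etc.
    total = 0
    for i in range(n_units):
        total += (i * 31 + 7) % 97
    return total

def benchmark_turn_time(turns=20):
    times = []
    for _ in range(turns):
        start = time.perf_counter()
        simulate_turn()
        times.append(time.perf_counter() - start)
    avg = sum(times) / len(times)
    print(f"average turn time: {avg * 1000:.1f} ms  ({1 / avg:.1f} turns/s)")

if __name__ == "__main__":
    benchmark_turn_time()
```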
 
Until Civ 6 turn time gets tested,

Can't remember who does this, but it's either PCGamer, Guru3D, or GamersNexus. I know one of those tests Civ 6 turn time; I remember coming across it in CPU reviews before.
 
With all of that said, my 12900K with max-tuned DDR5 at 7200 CL30 is bottlenecking my 4090 at the settings I actually play games with, i.e. 4K DLSS Quality + RT. It majorly bottlenecks in Hogwarts Legacy, for example, and it gets really close in some areas of Cyberpunk.
You cannot use Hogwarts (and CP2077) as a benchmark for bottleneck: they are just poorly optimized games.
 
Can't remember who does this, but it's either PCGamer, Guru3D, or GamersNexus. I know one of those tests Civ 6 turn time; I remember coming across it in CPU reviews before.

GN used to but they stopped in recent reviews.

HUB does Factorio, but I'm not sure if they include it in the final average performance comparison.
 
You cannot use Hogwarts (and CP2077) as a benchmark for bottleneck: they are just poorly optimized games.
CP2077 is anything but a poorly optimized game. That's so far from the truth that whoever says something like that has no idea what they're talking about.
 
CP2077 is anything but a poorly optimized game. That's so far from the truth that whoever says something like that has no idea what they're talking about.

Indeed. It scales really well across different hardware, and it has great multi-threading support. You can get over 200 FPS at 1080p on a 4090, and DDR5 has a significant impact as well. The game is very demanding, but it looks amazing.

Good optimization does not mean low system requirements. It means being able to utilize the hardware properly and scale accordingly.

Hogwarts Legacy definitely has optimization problems, especially on the CPU side (it doesn't even fully utilize 4 cores). VRAM usage is also too high. The graphics don't justify the requirements.
Forspoken is even worse. That's a completely messed up game, on both platforms.
 
A generational uplift from the RTX 3090 to the 4090 only gives you 63% at 4K and 40% at 1080p, and going from the 2080 Ti to the 3090 only gave you 49% at 4K and 39% at 1080p.

So by that math, one generational uplift is roughly from 1080p to 1440p.

Many, like myself, run a 3440x1440 ultrawide. From https://www.gpucheck.com/gpu/nvidia-geforce-rtx-4090/, we know ultrawide framerates are closer to 1440p than to 4K. Some others I know just pair a high-end card with 1440p, due to crappy Windows DPI scaling, etc. For all those cases, the 1080p numbers here are a good indicator of what to expect when the GPU is upgraded to an RTX 5090, which makes perfect sense: GPU every 2 years, CPU every 4 years.
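A quick pixel-count comparison (plain arithmetic, no benchmark data) backs up the point that 3440x1440 sits much closer to 1440p than to 4K:

```python
# Compare pixel counts of common resolutions relative to 1080p.
resolutions = {
    "1080p (1920x1080)":     1920 * 1080,
    "1440p (2560x1440)":     2560 * 1440,
    "ultrawide (3440x1440)": 3440 * 1440,
    "4K (3840x2160)":        3840 * 2160,
}

base = resolutions["1080p (1920x1080)"]
for name, px in resolutions.items():
    print(f"{name}: {px / 1e6:.1f} MP, {px / base:.2f}x the pixels of 1080p")
```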

So you really have to skip a GPU generation and keep the same CPU for 4 years

And what's wrong with that? GPU every 4 years, CPU every 8 years. That also works for many. So 1080p could indicate 4K performance on an RTX 6090.

See? Just because you don't need the numbers, doesn't mean others don't. Many really don't upgrade CPUs that often.
 