
RTX 4090 & 53 Games: Ryzen 7 5800X vs Core i9-12900K

Because this one review shows Intel ahead of the 5800X in performance, and out of the woodwork come all these accounts with low post history having a meltdown in the comments about memory speed, and "W1zzard did it wrong", and "he should have used this config", and "he'll never show AMD in a good light", and blah blah blah. One managed to get banned for being a dick over it.

That's fanboy behavior. Just because your favorite CPU didn't do as well as the competition doesn't mean the world is conspiring against AMD or some such nonsense.
Dunno why, but it's always a top-of-the-line Intel CPU crushing some mid-range AMD part (especially past gen).
It would be fair to have an update with the 7950X vs the 13900K, and to add some previous-gen Intel chip like the 10600K/11600K.
Right now it looks like a good Intel advertisement. Don't forget that many of your site visitors are not so familiar with hardware. What will they remember some time later? Intel > AMD by approx 1.2x. That's it.
See this guy? This is what happens when you don't read.
That's because people no longer read these days. It's pretty obvious from the conclusion that these tests are meant to show how even a slightly old CPU is going to massively bottleneck this GPU.
It's also odd that many of these accounts have few to no posts in their history and little to no activity until an AMD article comes out; then all of a sudden they're out in full force...

If you want to hold out until Zen 4 3D, why not try out a 5800X3D?
It's really the best choice. None of these newer CPUs will matter unless you're pushing over 144 Hz at 1080p or some such nonsense. If you're going for 1080p at 240 Hz, then yeah, the newer ones will make a difference, but for 99% of people the 5800X3D is the perfect gaming chip.
 

spaceballs-time.gif
 
If you had to pick one game that was a commercial success and that everybody knows, what would you choose?


Well, the DX9 bit was kind of a joke, but since you asked, how about the classic Half-Life 2? Maybe the 4090 can break 1000 FPS. :)
Edit: Were Black Mesa, Anno, or Borderlands 1/2 done in DX9?
 
Because this one review shows Intel ahead of the 5800X in performance, and out of the woodwork come all these accounts with low post history having a meltdown in the comments about memory speed, and "W1zzard did it wrong", and "he should have used this config", and "he'll never show AMD in a good light", and blah blah blah. One managed to get banned for being a dick over it.
But he said that my heart is broken (I like AMD CPUs over Intel CPUs), but it's not, and I answered why. I did not read all of the comments >.>

And what is wrong with pointing out that Zen 1/2/3 memory should run in dual-rank mode, or as 4 single-rank sticks, to achieve better results in some memory-demanding games?
It's just a good reminder.
 
Because this one review shows Intel ahead of the 5800X in performance, and out of the woodwork come all these accounts with low post history having a meltdown in the comments about memory speed, and "W1zzard did it wrong", and "he should have used this config", and "he'll never show AMD in a good light", and blah blah blah. One managed to get banned for being a dick over it.

That's fanboy behavior. Just because your favorite CPU didn't do as well as the competition doesn't mean the world is conspiring against AMD or some such nonsense.

See this guy? This is what happens when you don't read.

It's also odd that many of these accounts have few to no posts in their history and little to no activity until an AMD article comes out; then all of a sudden they're out in full force...


It's really the best choice. None of these newer CPUs will matter unless you're pushing over 144 Hz at 1080p or some such nonsense. If you're going for 1080p at 240 Hz, then yeah, the newer ones will make a difference, but for 99% of people the 5800X3D is the perfect gaming chip.
Seriously, what is wrong with 'lower-ranked' users' opinions? I really hate it when people count the number of stars below one's name and judge the validity of one's response by it.


But he said that my heart is broken (I like AMD CPUs over Intel CPUs), but it's not, and I answered why.
And what is wrong with pointing out in the comments that Zen 1/2/3 memory should run in dual-rank mode, or as 4 single-rank sticks, to achieve better results in some memory-demanding games?
Every CPU has its own sweet spot when it comes down to memory configuration, i.e. rank, slots, speed, timings, etc.
I for one believe it's one's own duty to research the best config for a given CPU, if only because Intel and AMD won't publish such information. Naturally, vendors will only cite compatibility stuff, so thanks to guys like @W1zzard for trying to help the wicked get something that will serve us well.

I'm far from even thinking @W1zzard would 'lease' his integrity by showing bias toward any vendor there is, even if he has his own personal beliefs. The moment I catch wind of something like that happening here would be the moment I put the entire TPU domain on my blacklist.
 
Every CPU has its own sweet spot when it comes down to memory configuration, i.e. rank, slots, speed, timings, etc.
The review itself is not about how Intel is cooler than AMD. But it has the 5800X in it, so I think it's valid to point out in the comments what you can do to run the 5800X even better by using these sweet spots.
 
Yeah, it seems people think this is "AMD vs Intel at a similar config", whereas the test is "the current GPU test system that I have right now, a decent but slightly aged config, vs the 12900K", to find out how much of a difference an upgrade can bring (added this to the test setup page, too, so people can stop freaking out)
I feel like most people are complaining about the price difference between the two CPUs. It's nice to have clarification.
12900K was running at default settings, so yes
Thanks
If you had to pick one game that was a commercial success and that everybody knows, what would you choose?
Does it have to be a commercial success? How about one that's well optimized, like the old Batman Arkham games? One of those ran really well on a 20-thread i7-6950X.
IF on Zen 3 can do 1800 MHz on all chips, 1900 on some, 2000 on very few.
AMD says that for Zen 4, 2000 MHz is the sweet spot.
I'm not aware of anyone who has ever gotten 3000 MHz IF on Zen 3 or Zen 4.
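For anyone sanity-checking those numbers: DDR4 is double data rate, so the actual memory clock is half the transfer rate, and Zen wants FCLK locked 1:1 to that memory clock. A quick sketch of the arithmetic (my own illustration, using the 1800 MHz "works on all Zen 3" limit above):

```python
# Sketch only: memory clock = DDR transfer rate / 2, and Zen runs best with
# Infinity Fabric (FCLK) at 1:1 with that memory clock.
def fclk_needed_for_1to1(ddr_rate_mts: int) -> float:
    """Memory clock in MHz (= FCLK needed for 1:1) for a DDR rate in MT/s."""
    return ddr_rate_mts / 2

ALL_ZEN3_FCLK = 1800  # the "every Zen 3 chip can do this" limit quoted above
for ddr_rate in (3600, 3800, 4000):
    fclk = fclk_needed_for_1to1(ddr_rate)
    mode = "1:1" if fclk <= ALL_ZEN3_FCLK else "desync, with a latency penalty"
    print(f"DDR4-{ddr_rate}: needs FCLK {fclk:.0f} MHz -> {mode}")
```

So DDR4-3600 is the fastest rate that every Zen 3 sample can run 1:1, which is why it keeps coming up as the safe recommendation in this thread.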
I run my Infinity Fabric at 1600 MHz, I never get errors, and most of my Cinebench scores are higher than everyone else's running 1800 MHz ¯\_(ツ)_/¯


Oh, I wanted to ask if you're going to be adding in performance per watt, and maybe performance per square millimeter too, and/or a combination of both, now that Zen 4 is all on the same node for CCDs and IODs and Raptor Lake is monolithic, to get an aggregate of performance.
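For what it's worth, the kind of aggregate being asked for is simple to compute once the inputs exist. A sketch with entirely made-up placeholder numbers (the real inputs would be TPU's measured relative performance, gaming power draw, and published die sizes):

```python
# Illustration only: every number below is a placeholder, not TPU data.
cpus = {
    # name: (relative_perf, gaming_power_w, die_area_mm2)
    "CPU A": (100.0, 85.0, 190.0),
    "CPU B": (106.0, 125.0, 257.0),
}
for name, (perf, watts, mm2) in cpus.items():
    print(f"{name}: {perf / watts:.2f} perf/W, "
          f"{perf / mm2:.3f} perf/mm^2, "
          f"{1000 * perf / (watts * mm2):.2f} perf/(W*mm^2) x1000")
```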
 
Pretty soon we'll have to buy video cards depending on which engines or games we like, if we don't already.

Agreed. If you are a World of Warcraft player like I am, then almost nothing beats the 5800X3D right now. Just look at the numbers that Intel put in their own marketing slide. Intel would be the LAST company to exaggerate the performance of an AMD CPU, yet their own marketing slide clearly shows the 5800X3D dominating both the top-end 12-series and 13-series Intel CPUs when it comes to World of Warcraft.

Intel13seriesWoW.png

Looking at the 53 games that were reviewed in the TPU article, I'm seeing ~50 games that I don't play or care about. I enjoy GTA 5, and I occasionally play Far Cry 5 and Battlefield 5. Ironically, GTA 5 also seems to be faster on AMD CPUs. So, zero reason for me to go Intel.
 
Reading the comments has been quite painful. So many people did not understand the point of this test.

The original 4090 review was done with the 5800X. This re-test is supposed to minimize the CPU bottleneck. It is not supposed to be a comparison between various CPUs, it is supposed to utilize one of the fastest processors currently available.

People complained about the platform used in the original review because the 4090 was heavily bottlenecked. It still is even with a 12900K and it still will be with Raptor Lake and Zen 4 X3D, but we can see the extra performance that can be gained using the fastest CPUs on the market.
 
I appreciate the effort that W1zzard puts into these reviews and providing this data. My takeaway: if you replace a 'good gaming CPU' with a 'great gaming CPU', even at 4K average FPS you'll see strong benefits in some games with an RTX 4090. That's really good information.

I will definitely read TPU’s Raptor Lake reviews tomorrow. The only data point I’ll need to go elsewhere for is Flight Simulator as I play a lot of sim type games.
 
No 5800X3D? Why even bother with the 5800X? Seems like a huge time sink for no reason...
Sorry to bash all your work, W1zzard. I know it was a handful, and I shouldn't be so quick to complain. So, thanks for all your work; go with the 5800X3D in the next one and use appropriate RAM speeds for each (the sweet spot for each architecture).
 
Should I test the 5800X3D or faster RAM speeds?
I'd prefer the 5800X3D first. It lets people see whether they need to upgrade to next gen or not.
 
It would have been interesting to compare with a 5900X or 5950X, since they have more cores (which can definitely be beneficial in some games), even though the 5800X only has one CCX!
Also, the best RAM optimization for Zen 2 and Zen 3 is 3733 MHz CL14, and you can also get a small boost when using 4 sticks instead of 2.
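A rough way to see why that "slower" 3733 CL14 spec wins: first-word latency is roughly the CAS count divided by the actual memory clock (half the DDR rate). A back-of-envelope comparison, with my own numbers:

```python
# Back-of-envelope first-word latency in nanoseconds: CAS / memory clock,
# where the memory clock is half the DDR transfer rate. Lower is better for games.
def first_word_ns(ddr_rate_mts: int, cas: int) -> float:
    return cas / (ddr_rate_mts / 2) * 1000

for ddr_rate, cas in ((3733, 14), (3600, 16), (3200, 14), (4000, 20)):
    print(f"DDR4-{ddr_rate} CL{cas}: {first_word_ns(ddr_rate, cas):.2f} ns")
# 3733 CL14 ≈ 7.50 ns vs 4000 CL20 = 10.00 ns: the lower-clocked kit wins on latency.
```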

But I'm surprised that the 4090 is still being bottlenecked at 4K... I wonder if it's because most games are still optimized for Intel CPUs (since Intel has the biggest market share and the longest history in desktop gaming CPUs).

Can't wait to see the reviews and benchmarks for the 4090 Ti and RDNA 3 GPUs!

PS: Good job, I'm sure it was a lot of work and hours...!!!
 
The original 4090 review was done with the 5800X. This re-test is supposed to minimize the CPU bottleneck. It is not supposed to be a comparison between various CPUs, it is supposed to utilize one of the fastest processors currently available.
Its not "fastest", it just "a random one, I have in my desk, which is faster than 5800X".
I mean dont say "fastest" or else people gonna complain about why 12900K was picked as "fastest" CPU, but not 5800X3D or 7xxxX. Wait they already do... (and this thread changed wildly into "which CPU is the fastest, and how u shout tune it")
 
Would love to see more online games being tested, since FPS matters much more in that case.
3840-2160.png
 
Seeing the comments of people arguing about this on Facebook has been hilarious.
People really didn't understand in the slightest that this was done because TPU has done a lot of testing on the 5800X system.


6 pages of comments to read up on, but I'm all for the X3D results. From my own experience with all of this, having four ranks of memory is critical. Either 4x8GB or 2x16GB is the bare minimum, with 3600 MHz CL16 being the lowest you'd want to go on such a performance-oriented CPU (but understandable since it's also a common speed and easy to purchase, and going above 3600 varies a lot between RAM kits and IMCs)

The below isn't shitting on W1zz, as his test setup was made before a lot of this was known, let alone commonly known.
Once he committed to his test setup, he couldn't change it.
That said, anything involving modern comparisons that doesn't need to be compatible with all the previous benchmarks should include an updated memory setup - Zen 3/Zen 3D really prefers four ranks and low latency over anything else. The 3D chips largely alleviate the latency issue, but the extra ranks still help.


My thoughts:
1. Was this only two 8 GB sticks? That's 100% going to lower performance on the Zen 3 system.
We've known this for a long, long time; even Zen 2 had lower results with 2x SR sticks, and TPU has covered it in the past.
1666241377773.png

Even the 5600X takes anywhere from 5-15% performance hits with two ranks, with otherwise equal memory

GN's YouTube video (since I opened it yesterday and commented on another thread already) shows this super clearly:
Their "stock" setup is 3200 CL14 4x8 - the key to reading the chart is that every 2x8 result is at the bottom.
When dual-rank CL18 can beat single-rank CL14, you know two single-rank sticks have gotta be avoided.
1666241469199.png


How much would that 6.5% 12900K-vs-5800X difference shrink, when GN found four ranks of memory gave a 9.5% performance gain? (Rough arithmetic after this list.)

2. 4000 CL20 is great for high MHz, but the loose timings are definitely an issue.
Look at the GN graph: 3200 CL14 was superior to 3800 CL18 on average - benchmarks love the MHz, games love the latency (hence the X3D's big advantage)

3. I like these charts less than the old style. Needs more colours.

4. FPS values. Knowing it's 20% faster is nice, but knowing whether that's 200 FPS vs 240 FPS or 50 FPS vs 60 FPS matters
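Rough arithmetic on the point 1 question above, purely on paper and assuming GN's 9.5% four-rank gain carried over to this test suite wholesale (it almost certainly wouldn't, since much of this test is GPU-bound):

```python
# Naive on-paper estimate only; assumes GN's four-rank gain transfers wholesale.
gap_12900k = 1.065       # 12900K's lead over the review's 2x8GB 5800X setup
four_rank_gain = 1.095   # GN's measured gain from four ranks on Zen 3
relative = gap_12900k / four_rank_gain
print(f"12900K vs a four-rank 5800X: {(relative - 1) * 100:+.1f}%")  # about -2.7%
```

On those naive numbers the gap wouldn't just shrink, it would nominally flip, which is exactly why the memory config matters so much here.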




If you want to test the 5800X3D, please use DDR4-3733 or 3200 CL14. Thanks.
4x8 or 2x16 GB for sure; 3200 CL14 or 3600 CL16 seem to be the best choices for a 'common' setup that any end user could achieve without manual tweaking

:roll: :roll:Oh, you sweet summer child...
No, he was partly correct - lower timings and quad ranks (4x SR or 2x DR) will definitely help results

Damn, I wish I had more RAM modules available. I could do my own 3200 MHz testing and compare SR vs DR, but I can't match the scale of W1zz or test combinations of timings and MHz values.
Not even sure what game benchmark I could use to demonstrate it; just doing 4x SR vs 2x DR at the same speeds should be enough to show it's worth the time.

Final edit: Since GN had Shadow of the Tomb Raider show larger differences, I'll test 2x SR, 4x SR, 2x DR and 2x DR + 2x SR at 3200 CL16 with that. It's not going to be massively definitive, but it's at least repeatable for consistency and verification by others.

I can't do much with my 5800X system, as it's only got two memory slots, but I can do SR vs DR at the same speed.
 
from my own experience with all of this, having four ranks of memory is critical. Either 4x8GB or 2x16GB is the bare minimum

Pretty sure I urged W1zzard to use a quad-rank config back when the new testing PC with the 5800X came out, LOL
 
I understand W1zzard's objective with this review: to take the original review's setup and compare it against a more powerful processor.

IMO, the problem isn't this review but rather the original one, where he should have used the most powerful processor he had available, from either Intel or AMD, in order to maximize the performance of the GPU being tested.
 
Pretty sure I urged W1zzard to use a quad-rank config back when the new testing PC with the 5800X came out, LOL
I think I did too, but there were genuine arguments to be made for getting the IF as high as possible.
I just want the X3D to be set up with a new config, since it's a new test setup anyway.


And it's totally fair to have both setups on 32GB of RAM, too.
 
I just want the X3D to be set up with a new config, since it's a new test setup anyway.
5800X3D will use our typical CPU review config


2x 16 GB 3600 CL14 IF 1:1. Many of my Zen 3 CPUs can't do 1900 or 1883, so out of fairness I'm using 1800 on all
 
Well. This certainly makes it harder.

1666255147991.png


*flips desk*

Even the free demo locks you out if you change hardware, such as your RAM amount...

5800X3D will use our typical CPU review config


2x 16 GB 3600 CL14 IF 1:1. Many of my Zen 3 CPUs can't do 1900 or 1883, so out of fairness I'm using 1800 on all
That's a pretty good RAM config and should give near the best performance these chips can do.
TR had a good benchmark that's now useless since I've been locked out of running it; I'm going to need a moment to scream at useless DRM.
 
TR had a good benchmark
Any slightly busy in-game location will be a better and more realistic benchmark, even if you stand completely still
 
Any slightly busy in-game location will be a better and more realistic benchmark, even if you stand completely still
Yeah, I wanted something easily repeatable with a built-in benchmark. I don't have your benchmark suite available (or the time to download it on a 50 Mb connection lol)


I uh, am running 80 GB of RAM with 6 ranks right now, though?

I can just install an entire OS and my games directly onto that, I guess...
1666259888963.png


Okay! Finished fighting with dumb in-game benchmarks and just used CapFrameX on a game I already play: Deep Rock Galactic.
It's Unreal Engine 4, DX12, and has all the fancy features under the sun except ray tracing, because it's got awesome lighting already.

This was run at 4K with all settings on Ultra, DLSS on Auto.

This was done with shitty RAM timings so I could benchmark all these sets of RAM at 3800 1:1:1:
2x8GB
2x32GB
and 2x32GB + 2x8GB
1666262866858.png


The results will not shock you, although I couldn't rename the results and had to MS Paint the RAM values over the top
1666263233466.png



TL;DR: average FPS is the same - GPU-limited by a 3090.
0.2% lows see an 8% improvement with four memory ranks
1% lows see a 3.6% improvement


Can't control how it sorts these, annoyingly (or rename them)
1666263670571.png


And another stat from the same results:
1666263838344.png

Will it make it beat a 12900K? No.
Will it narrow that gap? Definitely. And this is at CL18, for W1zzard's sake; CL16 or CL14 would absolutely be faster than my results if the gains scale with faster RAM




Viewed as percentages, you can see the dips drop 29% off the average FPS, vs 12%
1666264307488.png
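For anyone wanting to reproduce the 1%/0.2% numbers above: they're the usual percentile-of-frame-times idea. A minimal sketch of that math (the common approach, not necessarily CapFrameX's exact algorithm):

```python
# Minimal sketch: report the average of the slowest 1% / 0.2% of frame times as FPS.
def percentile_low_fps(frametimes_ms, pct):
    worst = sorted(frametimes_ms, reverse=True)   # slowest frames first
    n = max(1, int(len(worst) * pct / 100))       # e.g. the worst 1% of frames
    return 1000.0 / (sum(worst[:n]) / n)

frames = [16.7] * 980 + [33.3] * 20               # mostly 60 FPS, a few stutters
print(f"avg:      {1000 * len(frames) / sum(frames):.1f} FPS")
print(f"1% low:   {percentile_low_fps(frames, 1):.1f} FPS")
print(f"0.2% low: {percentile_low_fps(frames, 0.2):.1f} FPS")
```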
 
Pretty solid article (new member here, long-time poster at AT forums).

One thing caught my eye: W1zzard wrote that the 12900K has 15-20% higher IPC than Zen 3. Could you tell me what he is basing this on? It's common knowledge (from the AT review, ComputerBase aggregate charts, and SPEC2017 single-thread results) that Golden Cove has between 10 and 11% higher average IPC in desktop apps than Zen 3. The cumulative performance jump is greater, as Golden Cove can clock much higher (stock and even overclocked), but 15-20% higher IPC is simply not factual.
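To put numbers on that: single-thread performance is roughly IPC times clock, so a ~10.5% IPC edge plus Golden Cove's clock advantage lands at 20%+ combined, which is probably where the article's figure actually comes from. Quick arithmetic (the boost clocks are my assumption for illustration, 5.2 GHz for the 12900K vs 4.7 GHz for the 5800X):

```python
# ST performance ≈ IPC ratio x clock ratio; the clocks here are rated max
# boosts (an assumption for illustration), not measured effective clocks.
ipc_ratio = 1.105        # the ~10-11% Golden Cove IPC edge cited above
clock_ratio = 5.2 / 4.7  # 12900K vs 5800X rated max boost
print(f"combined: {(ipc_ratio * clock_ratio - 1) * 100:.1f}% faster")  # ~22.3%
```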

The 5800X3D vs 12900K would be great to see, along with Ryzen 7000 and Raptor Lake. Zen 4 and Golden/Raptor Cove have basically neck-and-neck integer IPC (according to SPEC2017 single-thread results at the same clock). The only differentiator in games is the lower latency of the Intel parts due to their monolithic design, hence they are slightly faster. I expect Raptor Lake will be at best ~5% faster than the 7950X when paired with a 4090 in 4K gaming, and more or less even with the 12900K(S). Also, I expect Zen 4 X3D parts to make a bigger leap and reign supreme in games with next-gen GPUs, whenever they launch.

I'm looking forward to techpowerup's 13th gen review.
 