
RTX 4090 & 53 Games: Core i9-13900K vs Ryzen 7 5800X3D

The 7600X beats the X3D and Raptor Lake according to this video, in the highs, in the lows, everything, as long as you pair it with DDR5 6000 CAS 30... how come I didn't realize this when I bought my 13600K on launch day... I'm so fucking confused right now. Ugh, my head hurts.
If you game only and do nothing more, 7600X is marginally better. Nothing to worry too much about. You will, however, not be able to upgrade from 13600K.
You made no mistake with this choice. The 13600K is a much better all-rounder, beating the 7600X by a lot in multi-core workloads while still providing great performance in games.
True that.
Here you can read a review whose in-game testing methodology is based on finding the most CPU-heavy scenarios (not built-in benchmarks, like so many reviewers use) to show the differences between CPUs, and as you will see, the 13600K performs solidly better than the 7600X in games.
Doubtful. HUB's recent tests give the edge to the 7600X in gaming.
 
If you game only and do nothing more, 7600X is marginally better. Nothing to worry too much about. You will, however, not be able to upgrade from 13600K.

I don't want to upgrade after this. The industry is going crazy these days, this is it for me for several years. lol
 
In that review, the 13900K performs 13% better than the 5800X3D (100/88.6). The 13900K review is 15 days old; nothing changed.
I'd just like to know how we went from 13% with an RTX 3080 to 6.2% with an RTX 4090 in a completely CPU-bound scenario.
What do you mean nothing changed?

1. Different games were benchmarked. Games are not a monolith; the 13900K being 13% faster in 12 games doesn't mean it'll be 13% faster in 53 games. It's pretty clear that the test suite used in the 13900K review favored it over the 5800X3D (a quick sketch of the effect follows below).
2. The entire GPU & architecture changed. This can change the performance even in similar games.
3. A different Nvidia driver was used. See #2.
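To put rough numbers on point 1: the headline figure is just an average of per-game relative results, so it moves with whatever games are in the suite. A quick Python sketch with made-up per-game ratios, purely to show the effect:

# Made-up per-game performance ratios (13900K / 5800X3D), only to illustrate
# how the averaged gap depends on which games are in the suite.
cpu_heavy_suite = [1.20, 1.15, 1.10, 1.12]              # titles that favour raw clocks
broad_suite = cpu_heavy_suite + [1.00, 0.98, 1.02] * 6  # plus titles where the X3D's cache catches up

def average(ratios):
    return sum(ratios) / len(ratios)

print(f"{(average(cpu_heavy_suite) - 1) * 100:.1f}%")   # ~14% gap with the small suite
print(f"{(average(broad_suite) - 1) * 100:.1f}%")       # ~2.6% gap with the broader suite

Same hardware in both cases; only the game selection changes the headline percentage.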
 
Doubtful. HUB's recent tests give the edge to the 7600X in gaming.
This is a very reliable source in my country with a long history. HU doesn't even disclose the in-game locations where they test, and for some titles they're running built-in benchmarks. That's why their results are flattened and the differences between processors are small.
 

The 7700X with 6000 MHz CL30 memory beats the 13700K, same with a 4090.

In games like CS:GO and Valorant, Intel doesn't stand a chance.

So when you link a Cyberpunk result where the 13600K beats all the Ryzens, it's a little bit misleading.

It's more about which game you play; pick the best processor for that.
 
Well, I kind of regret getting Raptor Lake now after watching this video... looks like the 7600X beats it easily across the board as long as you slot in some DDR5 6000 CAS 30 RAM. I think even the lows are better by 20% with the 7600X over the 5800X3D and 13600K... turns out the 7600X was the sleeper winner all along, you just have to use really high-end RAM.

Fuck. I don't know what to do now. I should have never betrayed my love for AMD after all... :cry:

The 7600X beats the X3D and Raptor Lake according to this video, in the highs, in the lows, everything, as long as you pair it with DDR5 6000 CAS 30... how come I didn't realize this when I bought my 13600K on launch day... I'm so fucking confused right now. Ugh, my head hurts.

I wonder: would a 7950X paired with very low-latency CL30 DDR5-6000 beat a 13900K with the best DDR5-7800 as well?
 
Hardware Unboxed just seem to have higher Zen 4 results than the majority of reviews, even when comparing the same games at the same resolutions. Maybe that's in-game benchmark vs. in-game play, I'm not sure. The general launch-review consensus is that a DDR5 13600K competes with the 7700X.

 
Hardware Unboxed just seem to have higher Zen 4 results than the majority of reviews, even when comparing the same games at the same resolutions. Maybe that's in-game benchmark vs. in-game play, I'm not sure. The general launch-review consensus is that a DDR5 13600K competes with the 7700X.

That's why I'm asking for a 7700X vs 13700K benchmark with the right RAM and a 4090.

Let's see the truth.
 
Intel and motherboard manufacturers have been pushing this whole DDR5 8000-9000+ speed thing, as in the higher, the better.

I want to know if low-latency CL30 DDR5-6000 really makes up for DDR5-9000+ speed in real-world gaming usage.
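For the latency half of that question you can do the math yourself: first-word latency is just the CAS count times the cycle time. A quick sketch (the CL48 for DDR5-9000 is a hypothetical placeholder, since kits that fast aren't standardized):

# first-word latency in ns = CL * 2000 / data rate in MT/s
# (data rate / 2 is the real clock in MHz, and 1000 / MHz gives ns per cycle)
def first_word_latency_ns(cl, data_rate_mts):
    return cl * 2000 / data_rate_mts

print(first_word_latency_ns(30, 6000))  # DDR5-6000 CL30 -> 10.0 ns
print(first_word_latency_ns(48, 9000))  # DDR5-9000 CL48 (hypothetical) -> ~10.7 ns

So the faster kit mostly buys bandwidth, not lower latency, unless its timings tighten as well; which of the two games care about more is exactly what a head-to-head test would have to show.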
 
Thank you for finally having all the games using their proper APIs in these charts.
 
What do you mean nothing changed?

1. Different games were benchmarked. Games are not a monolith; the 13900K being 13% faster in 12 games doesn't mean it'll be 13% faster in 53 games. It's pretty clear that the test suite used in the 13900K review favored it over the 5800X3D.
2. The entire GPU & architecture changed. This can change the performance even in similar games.
3. A different Nvidia driver was used. See #2.
Nothing changed about the platform. As I've been saying since yesterday, they went from an RTX 3080 to an RTX 4090, which is a more powerful GPU. That's what changed, and that's what should have caused a bigger gap.
Again, some of the additional 43 games show an even bigger gap (I listed 5 of them), even though the initial test suite favored the 13900K.

The difference shouldn't be 13%, that's obvious, but it can't be half of that with the most powerful GPU on the market. I don't know which result is incorrect, but they can't coexist.
 
This is a very reliable source in my country with a long history. HU doesn't even disclose the in-game locations where they test, and for some titles they're running built-in benchmarks. That's why their results are flattened and the differences between processors are small.
It's more about which game you play; pick the best processor for that.
Hardware Unboxed just seem to have higher Zen 4 results than the majority of reviews.
At the end of the day, folks, Raptor Lake CPUs have a slight edge in gaming with the current imperfect measurements (different RAM and all...), but this is nothing revolutionary to lose your head over or argue for/against apologetically. The difference between the top three CPUs is 1.5%-9%, which is mostly negligible and depends on RAM and the selection of games.

The average difference seen below is not necessarily relevant for each individual user; it depends on their system and the games they play. Plus, AMD is now working with Windows and game developers to address reports that the 7900X and 7950X are a bit slower when both chiplets are used. Expect some improvements in the gaming performance of those top CPUs in the months ahead.
[Attached chart: Performance, Intel ADL/RPL vs Zen 4 vs Zen 3 3D]
 
Nothing changed about the platform. As I've been saying since yesterday, they went from an RTX 3080 to an RTX 4090, which is a more powerful GPU. That's what changed, and that's what should have caused a bigger gap.
Again, some of the additional 43 games show an even bigger gap (I listed 5 of them), even though the initial test suite favored the 13900K.

The difference shouldn't be 13%, that's obvious, but it can't be half of that with the most powerful GPU on the market. I don't know which result is incorrect, but they can't coexist.
But something did change about the platform. I listed both: 3080 to 4090 and different drivers. That can change results even in previously tested games, as the new card may introduce limitations the old one did not. Raw power is not everything when you're limited by the CPU; architecture and driver overhead/optimization can play large roles once you start hitting CPU limitations in a game.

The 5800X3D is also an anomaly among CPUs. It's not simply a brute-force GHz and IPC CPU; it relies on its cache to do most of the heavy lifting. Many games still favor brute force, and those show up where the 13900K is clearly ahead in the 1080p graph.
 
In that review, the 13900K performs 13% better than the 5800X3D (100/88.6). The 13900K review is 15 days old; nothing changed.
I'd just like to know how we went from 13% with an RTX 3080 to 6.2% with an RTX 4090 in a completely CPU-bound scenario.
It has nothing to do with the CPU. It's the 4090. It's a trend I noticed in all the reviews I've seen.
 
Nothing changed about the platform. As I've been saying since yesterday, they went from an RTX 3080 to an RTX 4090, which is a more powerful GPU. That's what changed, and that's what should have caused a bigger gap.
Again, some of the additional 43 games show an even bigger gap (I listed 5 of them), even though the initial test suite favored the 13900K.

The difference shouldn't be 13%, that's obvious, but it can't be half of that with the most powerful GPU on the market. I don't know which result is incorrect, but they can't coexist.
NVIDIA GRD (Game Ready Driver) issues with the hardware scheduler and driver overhead, perhaps?
 
What do you mean nothing changed?

1. Different games were benchmarked. Games are not a monolith; the 13900K being 13% faster in 12 games doesn't mean it'll be 13% faster in 53 games. It's pretty clear that the test suite used in the 13900K review favored it over the 5800X3D.
2. The entire GPU & architecture changed. This can change the performance even in similar games.
3. A different Nvidia driver was used. See #2.
And Windows 10, not 11. Since Alder Lake, it's been known that Windows 11 is a must.
The 13900K review used Windows 11, but now Windows 10 was used, and I don't understand why.
 
Thank you for finally having all the games using their proper APIs in these charts.
I thought of you before publishing the article and made sure Days Gone is DX11 ;)

The 13900K review used Windows 11, but now Windows 10 was used, and I don't understand why.
Because the vast majority of gamers are on Windows 10. I guess I could do a "50 Games Windows 10 vs Windows 11, RTX 4090 + 13900K" article next... or maybe something with the 7700X, 7600X or 13700K. I guess I could start a poll.
 
It's such a great time right now; there are so many good options for hardware that you can't go wrong. It all depends on preferences, needs and your pocketbook.

Although from my perspective, if someone is still gaming at 1080p, they're probably on a lower-end GPU. I wonder how scaling would work for something like a 3060-class GPU instead of the 4090 at those resolutions. Things run so fast with an xx80-class GPU or better that 1080p runs are basically for ranking, not practicality. And as we know, at 4K or close to it, everything is GPU-bound, so you can get a lot of runway out of a CPU a couple of generations old.
 
Because the vast majority of gamers are on Windows 10. I guess I could do a "50 Games Windows 10 vs Windows 11, RTX 4090 + 13900K" article next... or maybe something with the 7700X, 7600X or 13700K. I guess I could start a poll.
The migration from W10 to W11 is free.
The recommendation for Alder Lake and Raptor Lake owners is to migrate to W11; I guess they did. A fair comparison would be a review with W11, because it does not disadvantage either platform, whereas W10 can create problems for the Intel platform because it cannot efficiently manage P and E cores.
 
Because the vast majority of gamers are on Windows 10. I guess I could do a "50 Games Windows 10 vs Windows 11, RTX 4090 + 13900K" article next... or maybe something with the 7700X, 7600X or 13700K. I guess I could start a poll.

Do a poll! :toast:

I'm curious what people want.
 
The migration from W10 to W11 is free.
The recommendation for Alder Lake and Raptor Lake owners is to migrate to W11; I guess they did.
A ton of people have not used the upgrade offer, or adoption rates wouldn't be as low as they are

Do a poll! :toast:

I'm curious what people want.
 
A ton of people have not used the upgrade offer, or adoption rates wouldn't be as low as they are
In a review, it is extremely important not to disadvantage any platform. W11 does not disadvantage the 5800X3D, but the tests are inconclusive for the 13900K if W10 is used, because W10 cannot effectively divide the load between the P and E cores. W10 has no concept of P and E cores, and it is very likely that some "P" loads end up assigned to the E cores. Honestly, it's like testing modern processors with Cinebench R15 or older.
P.S. I'm using W10 on my i5-12500 system because that processor doesn't have E cores. If it did, I'd migrate to W11.
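If someone is stuck on W10 with a P+E chip in the meantime, the usual band-aid is to pin the game to the P-core threads by hand so nothing lands on the E cores. A rough sketch with psutil, assuming logical CPUs 0-15 are the 13900K's eight hyperthreaded P cores (check the enumeration on your own board first):

import psutil

P_CORE_THREADS = list(range(16))  # assumption: logical CPUs 0-15 are the P-core threads

def pin_to_p_cores(exe_name):
    # Find the running game by executable name and restrict its CPU affinity.
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == exe_name:
            proc.cpu_affinity(P_CORE_THREADS)
            print(f"Pinned {exe_name} (pid {proc.pid}) to the P-core threads")

pin_to_p_cores("game.exe")  # placeholder name; use the actual executable

It's a workaround, not a fix; on W11 the scheduler does this balancing itself.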
 
I've been greatly appreciating these comparison articles, as they help to see where things lie. I know if I had an AM4 setup here, I would buy a 5800X3D when on sale just because of how much of a swing it gives in gaming. I'm hoping we see a Zen 4 X3D next year and that Intel's MTL competes, because this has been a great time for customer choice in CPUs.
 
Jumping straight into the action, and starting with the highest 4K Ultra HD resolution, which is the native stomping ground for the GeForce RTX 4090, we're seeing that the Ryzen 7 5800X3D is matching the Core i9-13900K "Raptor Lake" very well. Averaged across all 53 games, the i9-13900K is a negligible 1.3% faster than the 5800X3D.

This is what I was expecting to see in the previous articles (although they didn't focus on the same things).
With the usual CPU-review performance-per-watt and power-efficiency graphs thrown in, it does make the new Intels seem like a "why the f*ck would anyone want this for gaming?"


The X3D's 0.1% lows come out ahead according to the reviews that cover them, which isn't a surprise, since the Intel systems have time-based and thermal throttles to worry about: if they hit the PL1 time limit and drop to 125 W, they may start to show small FPS drops and microstutter that the X3D doesn't at its lower wattages.
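As I understand it, that time limit works by the firmware tracking a running (exponentially weighted) average of package power against PL1; once the average catches up, the chip drops out of boost. A toy model, with the PL1/PL2/Tau numbers as illustrative placeholders rather than any board's actual defaults:

# Toy model of the turbo time window (numbers are illustrative, not board defaults).
PL1, PL2, TAU = 125.0, 253.0, 56.0  # long-term watts, short-term watts, time constant in seconds

def boost_seconds(draw_watts, dt=1.0):
    # Integrate an exponentially weighted average of package power;
    # boost ends once that average reaches PL1.
    ewma, t = 0.0, 0.0
    while ewma < PL1:
        ewma += (min(draw_watts, PL2) - ewma) * (dt / TAU)
        t += dt
    return t

print(boost_seconds(253.0))  # ~38 s at sustained PL2 draw before the clamp to 125 W

That's roughly the point where the small FPS drops and microstutter would start showing up on the Intel side, while the X3D never gets near those power levels in the first place.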
 
Thanks for the amazing review.

About the bottleneck between the 5800X3D and the RTX 4090, here's my experience.

I'll leave you two captures from the Modern Warfare II performance test at 4K.

This one is taken with my new 5800X3D and RTX 4090.

https://bit.ly/3NCqBkd

This other one is from before I changed the CPU, when I had a 5600X paired with the same RTX 4090.

https://bit.ly/3T4wbNm

Both were taken playing at 4k. You can draw your own conclusions.

I have a 5600X at 4K 120 Hz, so I must not buy the 5800X3D... So tempting at $329.
Please read my post #150.

Best regards
 