
NVIDIA GeForce RTX 3080 with AMD Ryzen 3900XT vs. Intel Core i9-10900K

When games start using more cores, AMD will be on top. With the game consoles all being AMD systems, games will be optimized for multicore, which will favor AMD desktops. The days of single-core performance are numbered. The Zen 4000 chips, with their 15% IPC improvement, will take the gaming crown, I believe. We will know soon.
 
Nice review. Eagerly awaiting the same review but with the RTX 3090 instead. As already mentioned, 1% or 0.1% lows are very important to me and many other people and would make your reviews much more informative.

So out of 23 modern games tested, the 10900K (PCIe 3.0) is only 2%, or 1.6 FPS on average, faster than the 3900XT (PCIe 4.0) at 4K.

Next-gen Zen 3 is only 3 weeks out. It's safe to assume that AMD will now take the gaming crown from Intel after over a decade. I don't know the exact specifics of Zen 3 (like whether IPC will increase), but even just a modest +200 MHz bump over Zen 2 without any other changes should do the trick.

You also have to factor in that both next-gen consoles are based on 8-core AMD Zen CPUs. 99% of PC games are just console ports. It's also safe to assume that new PC games will be better optimized for AMD CPUs going forward.

I'm fairly confident that once Intel releases their next-gen platform with PCIe 4.0, things will be even again, or possibly 1-2% faster. It's good that both companies are leapfrogging each other now.
 
People, just keep in mind that
AMD Ryzen 3900XT vs. Intel Core i9-10900K
comes down to almost a 200€ price difference where I live... so one has to pay around 145% of the price to get an almost 10% performance gain.
 
~1 month for 4900x ;o just wait.
 
@Wizzard
I think there's a mistake on the `Test setup` page where it lists the memory configuration for the 9900K. It says:
Thermaltake TOUGHRAM, 16 GB DDR4
@ 4000 MHz 19-23-23-42
but I think it should be:
Thermaltake TOUGHRAM, 16 GB DDR4
@ 3733 MHz 19-23-23-42
 
People, just keep in mind that
AMD Ryzen 3900XT vs. Intel Core i9-10900K
comes down to almost a 200€ price difference where I live... so one has to pay around 145% of the price to get an almost 10% performance gain.
With such a high price, I'd suggest you order from another country, like Germany.

It's only a 40€ difference in Sweden, tax included, or 108 %.

That 10 % performance gain is pretty much only in benchmarks, as I doubt the vast majority would run this card at 1080p.
 
You mean these four kits?
The ones that are like 5x as expensive as the typical 3733 CL19 kits?
As per this:
The cheapest possible one is CL17 at 75€, meaning that in the worst case it's less than 2x as expensive, and in the best case 50% more expensive.
Not a big deal; like he said, those chasing the last few FPS wouldn't mind spending 50-70€ more to get there. And I totally agree.
 
The PC - just another gaming console. No other use for it.

No one cares about benchmark scores in applications; only games matter. If a CPU is bad at games, then it's no good at all?
And GPUs are not only for gaming, but no one is doing GPU acceleration benchmarks. Because games are the only thing that matters.
 
When games start using more cores, AMD will be on top. With the game consoles all being AMD systems, games will be optimized for multicore, which will favor AMD desktops. The days of single-core performance are numbered. The Zen 4000 chips, with their 15% IPC improvement, will take the gaming crown, I believe. We will know soon.

I've been hearing about the multi-threaded revolution on desktops "very soon!" for close to 20 years now. I will wait for that in the same way I will/did wait for the "VR revolution!" or "PhysX revolution!" or "BIG NAVI!" or any other "just you wait! (insert rnd#-of-years) for the (whatever corporate goal/marketing-nonsense/technological pie-in-the-sky)!!!". By wait, I mean I will ignore the corpo-marketing speak. The sun rises and sets, still, on single-threaded performance, as of 2020, as far as consumer desktops are concerned. For better, but mostly for the worse. Which means Intel is still king. For better, again, mostly for the worse.

...
..
.
 
I've been hearing about the multi-threaded revolution on desktops "very soon!" for close to 20 years now. I will wait for that in the same way I will/did wait for the "VR revolution!" or "PhysX revolution!" or "BIG NAVI!" or any other "just you wait! (insert rnd#-of-years) for the (whatever corporate goal/marketing-nonsense/technological pie-in-the-sky)!!!". By wait, I mean I will ignore the corpo-marketing speak. The sun rises and sets, still, on single-threaded performance, as of 2020, as far as consumer desktops are concerned. For better, but mostly for the worse. Which means Intel is still king. For better, again, mostly for the worse.

...
..
.
So how does it feel to spend $1000 on a graphics card and run it at 1080p?
 
When games start using more cores, AMD will be on top. With the game consoles all being AMD systems, games will be optimized for multicore, which will favor AMD desktops. The days of single-core performance are numbered. The Zen 4000 chips, with their 15% IPC improvement, will take the gaming crown, I believe. We will know soon.

People have been spouting this shit ever since Ryzen 1st gen came out. 3 years later, even AMD's own 4c/8t is completely destroying 1st gen and the refresh in gaming. There's no magic AMD fine wine; we've already seen the maximum of these Ryzen CPUs. Hardware Canucks made a Ryzen 1st and 2nd gen gaming benchmark with the 3080, and the 1st gen is bottlenecking at 4K - in some cases by more than 10%.
 
Just a couple of things to think about.

A few months ago I saw an article that mentioned MATLAB, or a similar program, ran slower on AMD than on Intel for one simple reason: the program was compiled with a math library that only supported Intel's special instruction set - I think it was SSE2 - with no AMD equivalent, so running the program on a Zen chip meant that a non-optimized routine was used. It would be so easy to test for the CPU, figure out which instruction set to use, and then branch to the proper subroutines as needed. Yes, it adds to the file size of the program, yet it can only improve performance, instead of penalizing you for buying a CPU the software publisher didn't consider.
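Just to illustrate the kind of runtime dispatch I mean (a minimal sketch of my own, not anything from that library - the function names `sum_avx2` and `sum_scalar` are made up, and it uses GCC/Clang builtins):

```cpp
// Sketch: detect a CPU feature at runtime and branch to the matching routine.
#include <cstdio>

static double sum_scalar(const double* x, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += x[i];
    return s;
}

// AVX2 code generation enabled for just this function (GCC/Clang attribute).
__attribute__((target("avx2")))
static double sum_avx2(const double* x, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += x[i];   // compiler may auto-vectorize with AVX2
    return s;
}

int main() {
    double data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    __builtin_cpu_init();
    // Branch on the feature flag, not on the CPU vendor string.
    double result = __builtin_cpu_supports("avx2") ? sum_avx2(data, 8)
                                                   : sum_scalar(data, 8);
    std::printf("sum = %f\n", result);
    return 0;
}
```

The point is simply that the check costs nothing at runtime and works on any vendor that reports the feature.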

Take a look at Micro$oft Flight Simulator 2020 - it is built on the DirectX 11 platform, which is a shame. That means one core of a CPU gets hammered, and it can only utilize four cores (or two cores and two threads) total. And it looks as if even then they did a poor job of optimizing it, as it only puts a 15% to 20% load on the cores used. While DirectX 12 helps spread the love, it still is not designed for these high core counts; you can see that in the white papers Micro$oft released when they introduced DirectX 12.

How many people do nothing but play a single game, and nothing else, at that time? Take me, for instance: I will record gameplay, have a browser window open reading comments on another video, play a game, and have my email program running in the background. No, I don't stream, yet I could while playing the game and recording it. If the only thing you want to do is game, then get a console. If you want a darn good simulator, then get a decent PC. If you want to do more than just game, all at the same time, then you truly need a PC. Just because a lot of publications (online and/or in print) do not publish GPU acceleration results does not mean people are not interested in them, or cannot find those numbers elsewhere. Even Adobe is making better use of the GPU nowadays, even if it is lagging behind some other popular - and free - video editing software in doing so.
 
It's AVX-512, and no, that isn't the reason why Ryzens fall behind Intel's Cores - it is basically solely related to memory controller latency; the chiplet design doesn't help here (for the record, Intel's very own HEDT platform also falls behind their mainstream parts when it comes to CPU-bottlenecked gaming performance).
 
Just a couple of things to think about.
Interesting first post. Let's analyze, shall we?

A few months ago I saw an article that mentioned MATLAB, or a similar program, ran slower on AMD than on Intel for one simple reason: the program was compiled with a math library that only supported Intel's special instruction set - I think it was SSE2 - with no AMD equivalent, so running the program on a Zen chip meant that a non-optimized routine was used.
AMD has included SSE2 in their CPUs since 2003 with the release of the Athlon 64 series of CPUs. Whatever article you read must be VERY old, as AMD has utilized SSE2 for 17 years. Hardly relevant today, nor worth mentioning in a thread about the GeForce RTX 3080.
Please review: https://en.wikipedia.org/wiki/SSE2

Take a look at Micro$oft Flight Simulator 2020 - it is built on the DirectX 11 platform, which is a shame.
True, MSFS 2020 should be DirectX 12, which has been available for more than 5 years. However, Microsoft knows full well that much of its market for MSFS 2020 still runs Windows 7, which is limited to DirectX 11, and they likely do not want to alienate that very lucrative part of the market even if they don't officially support it. It was a very interesting choice.

That means one core of a CPU gets hammered, and it can only utilize four cores (or two cores and two threads) total.
Rubbish. DirectX11 is highly configurable and can utilize any and all resources available to the system it's running on. Any limits that are observed are limits imposed by the developers of a game, not DirectX11.
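For what it's worth, the mechanism D3D11 exposes for this is deferred contexts: worker threads record command lists that the immediate context replays. A bare-bones sketch of my own (not from the game or the review; error checking omitted, Windows only, link with d3d11.lib):

```cpp
// Illustrative D3D11 deferred-context sketch: record on a worker thread,
// submit on the main thread's immediate context.
#include <d3d11.h>
#include <thread>

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* immediate = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, nullptr, &immediate);

    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    ID3D11CommandList* cmdList = nullptr;
    std::thread worker([&] {
        // Record state changes / draw calls here on the deferred context...
        deferred->FinishCommandList(FALSE, &cmdList);
    });
    worker.join();

    // Main thread submits the recorded work.
    immediate->ExecuteCommandList(cmdList, FALSE);

    cmdList->Release();
    deferred->Release();
    immediate->Release();
    device->Release();
    return 0;
}
```

How much that actually scales in practice depends on the driver and the engine, but the API itself isn't capped at four cores.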

Welcome to TPU! Even though I'm being critical of your statement, that does not mean you are not welcome. Your comment didn't seem trollish, was well stated and seemingly well thought out even if it was incorrect about some points.
 
Wait, what?
I wasn't aware that MSFS 2020 would run on Windows 7.
 
Wait, what?
I wasn't aware that MSFS 2020 would run on Windows 7.
It should. I haven't tried it personally yet, but then why else would it be artificially limited to DX11? They're not advertising Windows 7 compatibility, but it's implied by the exclusive use of DX11.
 
Whatever article you read must be VERY old, as AMD has utilized SSE2 for 17 years.
It was a recent finding in CPU reviews that MATLAB would skip any SIMD code paths unless it detected an Intel CPU.

It was fixed by MathWorks a couple of months later.
 
It was a recent finding in CPU reviews that MATLAB would skip any SIMD code paths unless it detected an Intel CPU.
Ah, now see that is a developer limitation, not a problem or limitation AMD created. Crappy move IMO.

However, we're a bit off topic; let's return the thread to its normally scheduled RTX 3080 discussion...
 
Ah, now see that is a developer limitation, not a problem or limitation AMD created. Crappy move IMO.

However, we're a bit off topic; let's return the thread to its normally scheduled RTX 3080 discussion...

"Individual developers may not be aware that the Intel MKL doesn’t execute AVX2 code on non-Intel CPUs. "
It's a problem for any program written using Intel's supplied libraries, but yeah, it's not really relevant to gaming.
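As an aside, the widely reported workaround at the time (for older MKL builds; newer versions reportedly removed it) was to set the undocumented MKL_DEBUG_CPU_TYPE=5 environment variable before the library loads, forcing the AVX2 path on AMD. A hypothetical little launcher, just to show the idea - the variable name is the reported one, the rest is my own illustration:

```cpp
// Hypothetical launcher: set the reported MKL override, then exec the real
// program so the variable is in place before MKL initializes.
// POSIX only; error checking omitted.
#include <cstdlib>
#include <unistd.h>

int main(int argc, char** argv) {
    if (argc < 2) return 1;                  // usage: ./launcher <program> [args...]
    setenv("MKL_DEBUG_CPU_TYPE", "5", 1);    // reported workaround for older MKL builds
    execvp(argv[1], argv + 1);               // replace this process with the target
    return 1;                                // only reached if exec failed
}
```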
 
Everyone talks about CPU comparisons and no one talks about the motherboard chipset.
Only when two motherboards use an identical chipset can we compare them regarding FPS performance.
Therefore, this is a comparison of entire platforms (Intel vs. AMD).
The only significant message, in my eyes, is that after a decade of competition it is now obvious that the chipset performance gap is not a significant factor if all you care about is gaming performance.
 
Don't you plug your GPU into the slot that uses the CPU's PCIe lanes, not the chipset's? :confused:

Yup. Even storage is done that way these days (primary NVMe slot).
 