# NVIDIA GeForce RTX 3080 with AMD Ryzen 3900XT vs. Intel Core i9-10900K



## W1zzard (Sep 16, 2020)

What's the best processor for NVIDIA's new GeForce RTX 3080? The AMD Ryzen 3900 XT or Intel Core i9-10900K? We dedicate a whole performance review to that question, with 23 games, each tested at 1080p Full HD, 1440p, and 4K Ultra HD.



----------



## HABAR (Sep 16, 2020)

That's awesome!
Difference between 3900XT and 10900K is small, however GPU's performance is amazing!


----------



## Raendor (Sep 16, 2020)

Lol grandpa Skylake ftw.  Brilliant architecture from stagnating company.


----------



## Mussels (Sep 16, 2020)

What we needed to know, Intel has roughly a 10% lead (it obviously varies between the different CPUs) but the higher the res, the less it matters.


----------



## Mats (Sep 16, 2020)

Raendor said:


> Lol grandpa Skylake ftw.  Brilliant architecture from stagnating company.


It's funny that people still think Ryzen is new compared to Skylake. Yeah, it's a whopping 18 months younger.


----------



## Charcharo (Sep 16, 2020)

Nothing surprising here. When Turing released, it had an improved scheduler which worked better with Zen 1 and Zen+ (and Core!) CPUs than what Pascal had. Nvidia likely tweaked it further with Ampere too.

It is not a simple "X GPU is faster than Y GPU which is bottlenecked by A CPU, therefore bottleneck!1" and it never was. I do not understand why people think like that.


----------



## Metroid (Sep 16, 2020)

Intel 1% faster than Ryzen; I'm pretty happy with my Ryzen. I guess W1zzard will have to get used to AMD CPUs after November this year, ehhe


----------



## Anymal (Sep 16, 2020)

What about mid tier cpus on 4k?


----------



## Rob94hawk (Sep 16, 2020)

So looking forward to the 3090 review next!


----------



## yeeeeman (Sep 16, 2020)

Hope that every single idiot mouth is now shut. Sick and tired of the million topics asking how badly PCIe 3.0 will bottleneck things. Intel users with PCIe 3.0 are just fine and actually still have the upper hand in gaming.


----------



## geon2k2 (Sep 16, 2020)

I would love to see a more advanced performance review than just plain average fps.
In 2020, such reviews should include, at a minimum, the minimum fps, but preferably frametime consistency graphs, even if this means testing a lower number of games.

Nobody selling a kidney to buy such hardware cares about average fps only, but about the overall gaming experience, and stuttering, which is not visible in average fps, is a very big part of that experience.

I'd take 60 fps with smooth, consistent frames over 90 fps with lots of micro stutters any day.


----------



## dragontamer5788 (Sep 16, 2020)

I do wonder what Anno 1800 does differently to be framelocked on Ryzen?

Otherwise, both CPUs scale down into 1080p and up to 4k. The 3080 GPU looks incredibly strong.


----------



## Selaya (Sep 16, 2020)

3733 19-23-23-42? Is that the infamous bad bin B-die? 
I wonder whether using something that can run 3733 at 14-15-15-something would change anything tho.


----------



## W1zzard (Sep 16, 2020)

Selaya said:


> 3733 at 14-15-15-something would change anything tho


Maybe 1%, not worth the trouble, and cost


----------



## Selaya (Sep 16, 2020)

According to Gamers Nexus, Ryzens at the very least do scale somewhat with better CL timings (so does Intel tho, iirc). I mean, it'd all be an academic discussion at this point, basically, but if you do own a B-die kit that can run 3733 C14, it'd be an interesting academic insight at the very least.


----------



## Mistral (Sep 16, 2020)

Damn, Anno and a couple others really don't like Ryzen...


----------



## HenrySomeone (Sep 16, 2020)

I love how the "we only cater to AMD fanboys these days" Hardware Unboxed used a Ryzen test platform despite it being almost 10% slower even at 1440p, lmao!


----------



## dicktracy (Sep 16, 2020)

I’ve said this before. Zen2 is too slow for Ampere. Probably too slow for Big Navi as well. Now at 1440p and sometimes 4k. XD


----------



## Mats (Sep 16, 2020)

Mistral said:


> Damn, Anno and a couple others really don't like Ryzen...


That's not the only weird thing. Look what happens when running 1440p, the fps doesn't change with either CPU..


----------



## HenrySomeone (Sep 16, 2020)

Yup, and all those expecting miracles with Zen 3 are in for a bitter disappointment as well. I mean yes, it will probably (finally!) come close, but Skylake will still be on top - definitely at least the 8700K from 3 years ago, but in many games the 7700K just as well, and probably even the 6700K from over 5 years ago. No matter how you spin it, that's NOT impressive (at all, to be honest).


----------



## mahoney (Sep 16, 2020)

The mighty PCIe 4.0, the most useless schtick ever created. And people were going "but it's surely gonna be faster on the AMD system cause of PCIe 4.0"
just yikes



HenrySomeone said:


> I love how the "we only cater to AMD fanboys these days" Hardware Unboxed used a Ryzen test platform despite it being almost 10% slower even at 1440p, lmao!


That's cause most of his Patreons are AMD fanboys. I mean, it boggles my mind why you'd go with a slower CPU for a GPU benchmark. Guru3D similar case - they've replaced their gaming benchmark PC with a Ryzen system just for this. I can't stop laughing

edit: it's not Guru3D, who's the other British guy doing reviews?


----------



## W1zzard (Sep 16, 2020)

Mats said:


> That's not the only weird thing. Look what happens when running 1440p, the fps doesn't change with either CPU..


CPU limited, explained in more detail in the conclusion of the RTX 3080 FE review
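The effect can be sketched with a toy model (the millisecond figures are made up for illustration, not measurements from the review): each frame costs the slower of the CPU's and the GPU's per-frame time, so once the GPU's time at a given resolution exceeds the CPU's, every CPU lands on the same fps.

```python
# Toy model of a CPU limit: a frame can't finish faster than its slowest stage.
# All millisecond figures below are illustrative, not measured.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when the CPU and GPU stages overlap."""
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms_by_res = {"1080p": 4.0, "1440p": 7.0, "4K": 15.0}

for cpu, cpu_ms in [("faster CPU", 6.0), ("slower CPU", 7.0)]:
    for res, gpu_ms in gpu_ms_by_res.items():
        print(f"{cpu} @ {res}: {fps(cpu_ms, gpu_ms):5.1f} fps")
```

At 1080p the two hypothetical CPUs differ, but from 1440p up the GPU term dominates and both print identical numbers - exactly the flat lines in the charts.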


----------



## Selaya (Sep 16, 2020)

HenrySomeone said:


> Yup, and all those expecting miracles with Zen 3 are in for a bitter disappointment as well. I mean yes, it will probably (finally!) come close, but Skylake will still be on top - definitely at least the 8700K from 3 years ago, but in many games the 7700K just as well, and probably even the 6700K from over 5 years ago. No matter how you spin it, that's NOT impressive (at all, to be honest).


Despite some marketing shenanigans AMD seems to be pulling on the gaming front, deep down I'm pretty sure they don't really care. In the midrange market they already beat Intel (okay, Intel mainly beat themselves - make non-Z boards useful for gaming plsplspls) on a pure value basis, which is by far the largest part of the market. Besides that, AMD seems to have mostly been focusing on workstation performance - the 3950X singlehandedly obsoleted all of Intel's HEDT offerings after all, and the Threadrippers are just left uncontested.


----------



## Mats (Sep 16, 2020)

HenrySomeone said:


> Yup, and all those expecting miracles with Zen 3 are in for a bitter disappointment as well. I mean yes, it will probably (finally!) come close, but Skylake will still be on top - definitely at least the 8700K from 3 years ago, but in many games the 7700K just as well, and probably even the 6700K from over 5 years ago. No matter how you spin it, that's NOT impressive (at all, to be honest).


Intel is superior in most games, no surprise there, but 4 cores doesn't really cut it anymore.



W1zzard said:


> CPU limited, explained in more detail in the conclusion of the RTX 3080 FE review


Of course, thanks.


----------



## mahoney (Sep 16, 2020)

Hey* W1zzard*
Any chance you could do a similar benchmark but with the 1st gen and refresh cpu's included?


----------



## phill (Sep 16, 2020)

Thank you @W1zzard again for such a great review.  I know that Ryzen does better at 1440p/4K, and it's surprising to see it drop in some games, but for me, as long as it's smooth, playable and more so enjoyable, I will always like my Ryzen CPUs.

That said, I've a few Intel CPUs as well, so to be honest, I'm not sure it will matter at all what I play on!  Everything just runs well


----------



## W1zzard (Sep 16, 2020)

mahoney said:


> Hey* W1zzard*
> Any chance you could do a similar benchmark but with the 1st gen and refresh cpu's included?


No plans for doing it in that much detail; I think you can extrapolate most of what you want to know from my CPU reviews


----------



## simlariver (Sep 16, 2020)

We need to test with better memory; I was under the impression that decent memory could make a difference on Ryzen. A high-end CPU and GPU can justify spending $50 more on decent RAM with sub-16 timings.


----------



## HenrySomeone (Sep 16, 2020)

Mats said:


> Intel is superior in most games, no surprise there, but 4 cores doesn't really cut it anymore.


Yeah going forward / as a today's buy, sure, but when the aforementioned were new, they were undoubtedly the better gaming-oriented purchases and you can still get by with either if you're not looking for the ultra-high refresh rates. It's also hilarious that after all the years of "moar coars" marketing, AMD's most competitive cpu (in its segment of course) is no other than the 3300X - yes, a quad core, lol!


----------



## coozie78 (Sep 16, 2020)

Mistral said:


> Damn, Anno and a couple others really don't like Ryzen...


It's something us Reds just have to live with. 

Was really expecting Intel to pull out more of a lead at higher resolutions with the GPU restriction lessened, though, still, more than excellent work W1zzard-cheers mate.


----------



## Mats (Sep 16, 2020)

HenrySomeone said:


> Yeah going forward / as a today's buy, sure, but when the aforementioned were new, they were undoubtedly the better gaming-oriented purchases and you can still get by with either if you're not looking for the ultra-high refresh rates. It's also hilarious that after all the years of "moar coars" marketing, AMD's most competitive cpu (in its segment of course) is no other than the 3300X - yes, a quad core, lol!


For gaming, yes. There are other uses for a CPU tho. Looking at 12-16 cores and up, Intel is the one that's NOT impressive in work scenarios in terms of bang per buck.
I don't think the 3950X has ever been advertised as the _ultimate gaming CPU_.


----------



## Chomiq (Sep 16, 2020)

No surprise here; if it were any different, I doubt Nvidia would have passed on using an AMD platform to show their performance numbers.


----------



## HenrySomeone (Sep 16, 2020)

Mats said:


> For gaming, yes. There are other uses for a CPU tho. Looking at 12 -16 cores and up, Intel is the one that's NOT impressive in work scenarios in terms of bang per buck.
> I don't think the 3950X ever been advertised as the _ultimate gaming CPU_.



Hmmm, I seem to remember them (re)naming the level 3 cache on that very one as "gaming cache", however that's beside the point; I've never claimed that Ryzens, especially the higher core-count ones, don't have their valid uses (although the 3950X is in a weird spot - a "production"-focused CPU on a desktop platform). The 3960X and 3970X in particular are good chips for their purposes, however also not exactly cheap anymore either...



coozie78 said:


> Was really expecting Intel to *pull out more of a lead at higher resolutions with the GPU restriction lessened*, though, still, more than excellent work W1zzard-cheers mate.


----------



## TheEmptyCrazyHead (Sep 16, 2020)

I am curious: what's the CPU load on both AMD and Intel, and how does it spread across the cores? What's the difference going to be if they are operating at the same frequency? I guess things will improve for AMD at low resolutions with the ports from the new consoles being better multithread-optimized.


----------



## Mats (Sep 16, 2020)

HenrySomeone said:


> Hmmm, I seem to remember themselves (re)naming the level 3 cache on that very one as "gaming cache"  however that's besides the point; I've never claimed that Ryzens, especially higher core-count ones don't have their valid uses (although 3950x is in a weird spot - a "production"-focused cpu on a desktop platform), 3960x and 3070x in particular are good chips for their purposes, however also not exactly cheap anymore either...


It was a direct response to this:


> It's also hilarious that after all the years of "moar coars" marketing...


AMD still have a long way to go, but they have improved since the first Ryzen. The fact is that neither AMD nor Intel can beat Skylake in gaming with their respective latest architectures, for different reasons.

Threadripper may look expensive, until you look at Intel's offerings.


----------



## HenrySomeone (Sep 16, 2020)

I guess things will improve for amd at low resolutions? What kind of bizarro logic is that?


----------



## W1zzard (Sep 16, 2020)

simlariver said:


> can justify spending 50$ more on decent ram with sub 16 timings.


You mean these four kits?

[Geizhals price comparison: RAM with memory clock from 3733 MHz, CAS latency (CL) 14/15 - geizhals.eu]

Those are like 5x as expensive as the typical 3733 CL19 kits?


----------



## CrAsHnBuRnXp (Sep 16, 2020)

Another thing to consider: since this 3000 series card is a PCIe 4.0 card, Intel is still beating AMD while on PCIe 3.0. What will the card's performance look like with an Intel CPU that supports PCIe 4.0 and a motherboard to go along with it?


----------



## HenrySomeone (Sep 16, 2020)

Going by W1zzard's testing at least, it would still beat them even on PCIe 2.0, though that also (most likely) means performance won't be noticeably better when they go to 4.0.


----------



## dragontamer5788 (Sep 16, 2020)

CrAsHnBuRnXp said:


> Another thing to consider that since this 3000 series card is a PCIe 4.0 card, intel is still beating AMD on PCIe 3.0. What will the cards performance look like with an Intel CPU that supports PCIe 4.0 and a motherboard to go along with it?



Bottlenecks don't work like that. For now, PCIe 3.0 x8 (~8 GB/s) seems to be the inflection point, even at 4K resolutions.

Seems like PCIe 4.0's main benefit will be SSDs, not the GPU (unless you're doing scientific compute, where the improved bandwidth may be beneficial)
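The ~8 GB/s figure lines up with the raw link math. A quick sketch of effective PCIe throughput, accounting only for line encoding (protocol overhead would shave a bit more off):

```python
# Effective PCIe throughput: transfer rate (GT/s) times encoding efficiency
# (8b/10b for gen 1-2, 128b/130b for gen 3+), divided by 8 bits per byte.
# Packet/protocol overhead is ignored, so real-world figures are a bit lower.
GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0}

def lane_gb_per_s(gen: int) -> float:
    efficiency = 8 / 10 if gen <= 2 else 128 / 130
    return GT_PER_S[gen] * efficiency / 8

for gen in (2, 3, 4):
    for lanes in (8, 16):
        print(f"PCIe {gen}.0 x{lanes}: {lane_gb_per_s(gen) * lanes:4.1f} GB/s")
```

PCIe 3.0 x8 works out to roughly 7.9 GB/s - the inflection point mentioned above - while 4.0 x16 is about 31.5 GB/s, far beyond what current games stream over the bus.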


----------



## Bubster (Sep 16, 2020)

Intel gen 8 and 9 have still got it at gaming, and AMD Ryzen 7s are better for productivity... let's see what AMD Zen 4 is gonna be like?


----------



## Mats (Sep 16, 2020)

Bubster said:


> Intel gen 8 and 9 still got at gaming and amd zen 7's  are better for productivity...let's see what AMD Zen 4 is gonna be like?


Zen 3 launch is only 3 weeks away tho.


----------



## DemonicRyzen666 (Sep 16, 2020)

Mats said:


> That's not the only weird thing. Look what happens when running 1440p, the fps doesn't change with either CPU..
> View attachment 168879



W1zzard always has great reviews.

This picture really shows an interesting problem with Zen 2: either the ALU/AGU or the FPU could be maxing out. Then, when people push to 5.0 GHz on extreme cooling, there is almost no gain. Sure, they need to lower Infinity Fabric clocks when running cold, but that really shouldn't be a problem at around 1500 MHz, and they still get the same performance. I really don't think Infinity Fabric can be that big of a bottleneck.


----------



## lexluthermiester (Sep 16, 2020)

W1zzard said:


> No plans for doing it in that much detail; I think you can extrapolate most of what you want to know from my CPU reviews


Question, would you be interested in doing a run of benchmark to show the difference with a PCIe 2.0 based CPU? It would be interesting to see what the differences are...


----------



## Mats (Sep 16, 2020)

lexluthermiester said:


> Question, would you be interested in doing a run of benchmark to show the difference with a PCIe 2.0 based CPU? It would be interesting to see what the differences are...


You mean like this? Edit: Or you mean an actual PCIe 2 CPU?


----------



## lexluthermiester (Sep 16, 2020)

Mats said:


> You mean like this? Edit: Or you mean an actual PCIe 2 CPU?
> View attachment 168905


Yeah, I saw that article and it is very interesting for sure, but a real world test with actual platform parts would be more informative.


----------



## jonup (Sep 16, 2020)

dragontamer5788 said:


> Seems like PCIe 4.0's main benefit will be SSDs, not the GPU (unless you're doing scientific compute, where the improved bandwidth may be beneficial)



People keep repeating this like it is true. PCIe 3.0 x4 for storage is not bottlenecking system performance. I'm not saying there aren't scenarios that can benefit from it, just like there are scenarios where GPUs benefit from 4.0 x16. But in the scenarios where it doesn't bottleneck the GPUs, it won't bottleneck the NVMes either.


----------



## Mats (Sep 16, 2020)

jonup said:


> People keep repeating this like it is true.


I thought even Nvidia said so.


----------



## Selaya (Sep 16, 2020)

DemonicRyzen666 said:


> W1zzard always has great reviews.
> 
> This really pictures shows a interesting problem with Zen2. That either ALU/ALG and FPU could be maxing out. Then when people push to 5.0ghz on extreme there is almost no gain. Sure they need to lower Infinity clocks on cold, but that really shouldn't be probably to be around 1500mhz and still have the same performance. I really don't think infinity fabric can be that big of a bottleneck.


No. It's an entirely memory-related bottleneck, because games (or rather, their fps) rely on large chunks of data being fetched in a timely fashion. Ryzens, with their chiplet design with IF and shit, are just inherently disadvantaged compared to Intel's monolithic approach. That being said, you can bottleneck your Intel CPU with bad memory just as well as your AMD.


----------



## Mats (Sep 16, 2020)

lexluthermiester said:


> Yeah, I saw that article and it is very interesting for sure, but a real world test with actual platform parts would be more informative.


I'm afraid they're too old (9 years) to be considered for review.


----------



## dragontamer5788 (Sep 16, 2020)

jonup said:


> People keep repeating this like it is true. PCI-e 3.0 x4 for storage is not bottlenecking the system performance.



I sometimes edit videos, and absolutely see my streaming I/O speeds max out under those circumstances. Faster storage means faster editing, especially with lossless or low-compression formats: HuffYUV, UTVideo, 4K, 8K videos, etc. It depends how keen you are about compression losses between editing steps.
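A back-of-the-envelope sketch of why that happens, assuming one uncompressed 4K stream at 10-bit 4:2:2 (20 bits per pixel) and 60 fps - assumed numbers, not any particular project:

```python
# Sustained read rate needed to scrub one uncompressed video stream.
def stream_gb_per_s(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    # pixels/frame * bits/pixel * frames/s, converted from bits to gigabytes
    return width * height * bits_per_pixel * fps / 8 / 1e9

rate = stream_gb_per_s(3840, 2160, 20, 60)
print(f"{rate:.2f} GB/s per stream")  # ~1.24 GB/s
```

One stream is already about a third of a PCIe 3.0 x4 SSD's ~3.9 GB/s ceiling, so a multi-stream timeline can saturate it - which is where a 4.0 drive actually earns its keep.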



DemonicRyzen666 said:


> This really pictures shows a interesting problem with Zen2. That either ALU/ALG and FPU could be maxing out. Then when people push to 5.0ghz on extreme there is almost no gain. Sure they need to lower Infinity clocks on cold, but that really shouldn't be probably to be around 1500mhz and still have the same performance. I really don't think infinity fabric can be that big of a bottleneck.



The opposite. If the ALUs / FPUs were maxing out, then overclocking would *help*.

What we're seeing here is that overclocked processors do NOT improve FPS. This means the bottleneck is elsewhere (probably RAM latency, if I were to take a guess).


----------



## lexluthermiester (Sep 16, 2020)

Mats said:


> I'm afraid they're too old (9 years) to be considered for review.


Age is not the relevant factor. The comparative real world performance data is.


----------



## Foxiol (Sep 16, 2020)

Thank you very much for doing this guys. As a happy owner of a 3900X I'm glad you also did the GPU review with an AMD CPU for once.


----------



## Mats (Sep 16, 2020)

lexluthermiester said:


> Age is not the relevant factor. The comparative real world performance data is.


Yeah, but I'm not sure it would say much without also running benchmarks on a PCIe 3 system from the same era, since CPUs have improved a lot since then.
Running a 2700K on both a Z68 and a Z77 board (or just limiting the PCIe) would give a better picture IMO.

Age is a relevant factor, since reviewing takes time; it's all about priorities. Though I'm sure we'll see an Ampere + Sandy Bridge review somewhere soon.


----------



## SLK (Sep 16, 2020)

W1zzard, just a suggestion: CPU-limited games should not be included in GPU benchmarks. It kinda skews the overall percentage and doesn't fully reflect the GPU's full potential.


----------



## GhostRyder (Sep 16, 2020)

Very interesting findings; it makes it a pretty close call between the two top CPUs. Though the findings on PCIe 4.0 are not too surprising; it's probably more likely to be beneficial for newer SSDs.


----------



## r9 (Sep 16, 2020)

This question was asked every single time there was a new PCIe standard, and it never mattered, as the bus is always ahead in terms of bandwidth. So, big shocker: same thing this time around.



yeeeeman said:


> Hope that every single idiot mouth is now shut. Sick and tired about the million topics asking how bad pcie 3.0 will bottleneck things. Intel users with pcie 3.0 are just fine and actually still have the upper hand in gaming.


----------



## QUANTUMPHYSICS (Sep 16, 2020)

The only difference I'm focused on is Microsoft Flight Simulator 2020. As far as I can tell, 1440p and 4K still can't hit 60 fps on the 3080.

That game must be poorly optimized.


----------



## DemonicRyzen666 (Sep 16, 2020)

dragontamer5788 said:


> I sometimes edit videos, and absolutely see my streaming I/O speeds max out under those circumstances. Faster storage means faster editing, especially with lossless or low-compression formats: HuffYUV, UTVideo, 4k, 8k videos, etc. etc. Depends how keen you are about compression losses between editing steps.
> 
> 
> 
> ...



I don't think so, because the 4000G series, which is Zen 2 on a monolithic die with 1/4 the L3 cache, has less latency than the 3000 series. The 4000G series is still slower than Zen 2 chiplets.

[Apparent AMD Ryzen 5 4650G benchmarks put it close to the 3600 - www.pcgamesn.com]

[Ryzen 7 Pro 4750G Review: Renoir Ushers in a New Era for 7nm Desktop APUs - www.tomshardware.com]
It could be the Load/Store can't keep the cores fed.


----------



## Vecix6 (Sep 16, 2020)

Nice article, but confusing graphs with too much information.


----------



## xrror (Sep 16, 2020)

As an AMD fan myself... Well I mean, it'd probably be closer if Ryzen could also run 5Ghz all core... but it can't.

Credit where credit is due, no matter what else you might try and ding Intel for - Intel does know how to build a good clockin' processor.


----------



## F-man4 (Sep 16, 2020)

AMD NO!
In Far Cry 5 FullHD, 3900XT made RTX3080 worse than Intel CPU + RTX2070


----------



## Mats (Sep 16, 2020)

F-man4 said:


> AMD NO!
> In Far Cry 5 FullHD, 3900XT made RTX3080 worse than Intel CPU + RTX2070


That's because it's limited by the CPU, not the GPU. Look what happens when it gets bumped to 1440p: nothing, with either CPU.


----------



## mahoney (Sep 16, 2020)

F-man4 said:


> AMD NO!
> In Far Cry 5 FullHD, 3900XT made RTX3080 worse than Intel CPU + RTX2070


Same with Divinity. I really hope the Ryzen 4000 series is amazing in gaming, or else I'm fecked. Got the 3700X thinking I was gonna be good for at least 3-4 years, oh well


----------



## hurakura (Sep 17, 2020)

The PC - just another gaming console. No other use for it.


----------



## Mats (Sep 17, 2020)

mahoney said:


> Got the 3700x thinking i was gonna be good for at least 3-4 years oh well


If you're going to run it with anything less than 4k then yeah, I see your point, but if that's the case you might just as well buy the 3070 instead.


----------



## lexluthermiester (Sep 17, 2020)

hurakura said:


> The PC - just another gaming console. No other use for it.


An opinion not shared by anyone but you.


----------



## Prima.Vera (Sep 17, 2020)

I've always wondered: what is the performance at a non-standard resolution, such as 3440x1440?
There are a lot of people out there with 21:9 monitors and that 5MP resolution.


----------



## lexluthermiester (Sep 17, 2020)

Prima.Vera said:


> I was always wondering. What is the performance on a non-standard resolution, such as 3440x1440?
> There are a lots of citizens out there with 21:9 monitors and 5MP resolution.


In practice there might be some variations in a few benchmarks, but the performance scaling from one GPU to the next is going to be similar, if not identical, for a given resolution.


----------



## Selaya (Sep 17, 2020)

DemonicRyzen666 said:


> I don't think so, Because 4,000G series which is Zen 2 on monolithic die, with 1/4 the L3 cache has less latency vs 3,000 series. The 4000G series is still slower than Zen 2 chiplets
> 
> 
> 
> ...


Renoir APUs have way less L3 cache than Matisse CPUs, there's your bottleneck


----------



## Crackong (Sep 17, 2020)

F-man4 said:


> AMD NO!
> In Far Cry 5 FullHD, 3900XT made RTX3080 worse than Intel CPU + RTX2070



In those benchmarks the 2080 Ti is neck and neck with the 3080.

I would blame optimization instead of the CPU for those games...


----------



## mechtech (Sep 17, 2020)

"*which your high refresh-rate monitor can benefit from."*

If only we all did...................still rocking 60Hz here.  Maybe one of these days 120Hz+ screens will drop the 'gaming' moniker and price.


----------



## Stry (Sep 17, 2020)

Where do you get that brace for the GPU from?


----------



## W1zzard (Sep 17, 2020)

Stry said:


> Where do you get that brace for the GPU from?


It comes with the darkFlash DLX22 case


----------



## Berfs1 (Sep 17, 2020)

Dang I thought Ryzen was supposed to be better because I am a sheep that always listens to everyone



Vecix6 said:


> Nice article but confusing graphs with too many information


Then look at this: https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-amd-3900-xt-vs-intel-10900k/27.html

This tells you all you need to know. They aren't confusing, just a lot to take in.


----------



## wolf32v (Sep 17, 2020)

When games start using more cores, AMD will be on top. With the game consoles being all-AMD systems, games will be optimized for multicore, which will favor AMD desktops. The days of single-core performance are numbered. The Zen 4000 chips with their 15% IPC improvements will take the gaming crown, I believe. We will know soon.


----------



## Seyumi (Sep 17, 2020)

Nice review. Eagerly awaiting the same review but with the RTX 3090 instead. As already mentioned, 1% or 0.1% lows are very important to me and many other people and would make your reviews much more informative.

So out of 23 modern games tested, the 10900K (PCIe 3.0) is only 2%, or 1.6 FPS on average, faster than the 3900XT (PCIe 4.0) at 4K.

Next-gen Zen 3 is only 3 weeks out. It's safe to assume that AMD will now take the gaming crown from Intel after over a decade. I don't know the exact specifics of Zen 3 (like if IPC will increase), but even just a modest +200 MHz bump over Zen 2 without any changes should do the trick.

You also have to factor in that both next-gen consoles are based on 8-core AMD Zen CPUs, and 99% of PC games are just console ports. It's also safe to assume that new PC games will be better optimized for AMD CPUs going forward.

I'm fairly confident once Intel releases their next-gen platform with PCIe 4.0, things will be even again, or possibly 1-2% faster. It's good that both companies are leapfrogging each other now.


----------



## PanicLake (Sep 17, 2020)

People, just keep in mind that
*AMD Ryzen 3900XT vs. Intel Core i9-10900K*
have almost a 200€ price difference where I live... so one has to pay around 145% of the price for an almost 10% performance gain.
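Put as plain value arithmetic (using the rough ratios above, not exact prices):

```python
# Hypothetical value check: ~45% higher price for ~10% more fps
# (and that gain shows up mostly at 1080p).
price_ratio = 1.45  # 10900K price / 3900XT price, local market
perf_ratio = 1.10   # 10900K fps / 3900XT fps, best case

value = perf_ratio / price_ratio
print(f"fps per euro relative to the 3900XT: {value:.2f}x")  # 0.76x
```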


----------



## Calmmo (Sep 17, 2020)

~1 month for 4900x ;o just wait.


----------



## Deleted member 24505 (Sep 17, 2020)

hurakura said:


> The PC - just another gaming console. No other use for it.



Lol, no other use? You must be a console-only user.


----------



## mahirzukic2 (Sep 17, 2020)

@W1zzard
I think you have a mistake on the `Test setup` page where it lists the memory configuration for the 9900K. It says:


> Thermaltake TOUGHRAM, 16 GB DDR4
> @ 4000 MHz 19-23-23-42


but I think it should be:


> Thermaltake TOUGHRAM, 16 GB DDR4
> @ 3733 MHz 19-23-23-42


----------



## ZILZAL (Sep 17, 2020)

wolf32v said:


> The zen 4000 chips with there 15% ipc improvements will take the Gaming Crown I believe. We will know soon.


Maybe true, until the next Intel CPU comes out!


----------



## Mats (Sep 17, 2020)

PanicLake said:


> People, just keep in mind that
> *AMD Ryzen 3900XT vs. Intel Core i9-10900K*
> have almost 200€ price difference where I live... so one have to pay around 145% the price to have have almost 10% performance gain.


With such a high price, I'd suggest you order from another country, like Germany.

[Geizhals price comparison: Intel Core i9-10900K, 10C/20T, 3.70-5.30GHz, boxed ohne Kühler (BX8070110900K) - geizhals.eu]

It's only a 40€ difference in Sweden, tax included, or 108%.

That 10% performance gain is pretty much only in benchmarks, as I doubt the vast majority would run this card at 1080p.


----------



## mahirzukic2 (Sep 17, 2020)

W1zzard said:


> You mean these four kits?
> 
> 
> 
> ...


As per this:

[Geizhals price comparison: Arbeitsspeicher (RAM) Speicher mit Speichertakt ab 3733MHz - geizhals.eu]

The cheapest possible one is CL17 at 75€, meaning that in the worst case it's less than 2x as expensive, and in the best case 50% more expensive.
Not a big deal; like he said, those chasing the last few FPS wouldn't mind spending 50-70€ more to get there. And I totally agree.


----------



## hurakura (Sep 17, 2020)

> The PC - just another gaming console. No other use for it.

No one cares about benchmark scores in applications, only games matter. If a CPU is bad at games then it's no good at all?
And GPUs are not only for gaming, but no one is doing GPU acceleration benchmarks. Because games are the only thing that matters.


----------



## W1zzard (Sep 17, 2020)

mahirzukic2 said:


> CL17


CL17 will be within margin of error from my CL19


----------



## AddSub (Sep 18, 2020)

wolf32v said:


> When games start going for using more cores AMD will be on top. With the game consoles all AMD systems the games will be optimized for multicore. Which will favor the AMD desktops better. The day of single core performance are numbered. The zen 4000 chips with there 15% ipc improvements will take the Gaming Crown I believe. We will know soon.



I've been hearing about the multi threaded revolution on desktops "very soon!" for close to 20 years now. I will wait for _that_ in the same way I will/did wait for "VR revolution!" or "PhysX revolution!" or "BIG NAVI!" or any other "just you wait! (insert rnd#-of-years) for the (whatever corporate goal/marketing-nonsense/technological pie-in-the sky)!!!". By wait, I mean I will ignore the corpo-marketing speak. Sun rises and sets, still, on the single threaded performance, as of 2020, as far as consumer desktops are concerned. For better, but mostly for the worse. Which means, Intel is still king. For better, again, mostly for the worse.

...
..
.


----------



## Redwoodz (Sep 18, 2020)

AddSub said:


> I've been hearing about the multi threaded revolution on desktops "very soon!" for close to 20 years now. I will wait for _that_ in the same way I will/did wait for "VR revolution!" or "PhysX revolution!" or "BIG NAVI!" or any other "just you wait! (insert rnd#-of-years) for the (whatever corporate goal/marketing-nonsense/technological pie-in-the sky)!!!". By wait, I mean I will ignore the corpo-marketing speak. Sun rises and sets, still, on the single threaded performance, as of 2020, as far as consumer desktops are concerned. For better, but mostly for the worse. Which means, Intel is still king. For better, again, mostly for the worse.
> 
> ...
> ..
> .


  So how does it feel to spend $1000 on a graphics card and run it  at 1080p?


----------



## mahoney (Sep 18, 2020)

wolf32v said:


> *When games start going for using more cores AMD will be on top. With the game consoles all AMD systems the games will be optimized for multicore.* Which will favor the AMD desktops better. The day of single core performance are numbered. The zen 4000 chips with there 15% ipc improvements will take the Gaming Crown I believe. We will know soon.



People have been spouting this ever since Ryzen 1st gen came out. Three years later, even AMD's own 4c/8t parts are completely destroying 1st gen and its refresh in gaming. There's no magic AMD fine wine; we've already seen the maximum from those Ryzen CPUs. Hardware Canucks did a Ryzen 1st/2nd gen game benchmark with the 3080, and 1st gen is bottlenecking even at 4K - in some cases by more than a 10% difference.


----------



## awev (Sep 19, 2020)

Just a couple things to think about.

A few months ago I saw an article that mentioned MATLAB, or a similar program, ran slower on AMD than on Intel for one simple reason: the program was compiled with a math library that only supported Intel's special instruction set - I think it was SSE2 - with no AMD equivalent, so running the program on a Zen chip meant that a non-optimized routine was used. It would be so easy to test for the CPU, figure out which instruction set to use, and then branch to the proper subroutines as needed. Yes, it adds to the program's file size, yet it can only improve performance, instead of penalizing anyone who bought a CPU the software publisher didn't consider.

Take a look at Micro$oft Flight Simulator 2020 - it is built on the DirectX 11 platform, which is a shame.  That means one core of the CPU gets hammered, and it can only utilize four cores (or two cores and two threads) total.  And it looks as if even then they did a poor job of optimizing it, as it only puts a 15% to 20% load on the cores it does use.  While DirectX 12 helps spread the love, it still is not designed for today's high core counts - you can see that in the white papers Micro$oft released when they introduced DirectX 12.

How many people do nothing but play a single game, and nothing else, at that time? Take me for instance: I will record gameplay, have a browser window open reading comments on another video, play a game, and have my email program running in the background.  No, I don't stream, yet I could while playing the game and recording it.  If the only thing you want to do is game, then get a console.  If you want a darn good simulator, get a decent PC.  If you want to do more than just game, all at the same time, then you truly need a PC.  Just because a lot of publications (online and/or in print) do not publish GPU-acceleration benchmarks does not mean people are not interested in them, or cannot find those numbers elsewhere.  Even Adobe is making better use of the GPU nowadays, even if they are lagging behind some other popular - and free - video editing software in doing so.


----------



## Selaya (Sep 19, 2020)

It's AVX512, and no, that isn't the reason why Ryzen falls behind Intel's Cores - it is basically solely related to memory-controller latency; the chiplet design doesn't help here. (For the record, Intel's very own HEDT platform also falls behind their mainstream parts when it comes to CPU-bottlenecked gaming performance.)


----------



## lexluthermiester (Sep 19, 2020)

awev said:


> Just a couple things to think about.


Interesting first post. Let's analyze shall we?



awev said:


> A few months ago I saw an article that mentioned MATLAB, or a similar program, ran slower on AMD than on Intel for one simple reason: the program was compiled with a math library that only supported Intel's special instruction set - I think it was SSE2 - with no AMD equivalent, so running the program on a Zen chip meant that a non-optimized routine was used.


AMD has included SSE2 in its CPUs since 2003, with the release of the Athlon 64 series. Whatever article you read must be VERY old, as AMD has supported SSE2 for 17 years. Hardly relevant today, nor worth mentioning in a thread about the GeForce RTX 3080.
Please review: https://en.wikipedia.org/wiki/SSE2



awev said:


> Take a look at Micro$oft Flight Simulator 2020 - it is built on the DirectX 11 platform, what is a shame.


True, MSFS2020 should be DirectX 12, which has been available for more than 5 years. However, Microsoft knows full well that much of its market for MSFS2020 still runs Windows 7, which is limited to DirectX 11, and they likely do not want to alienate that very lucrative part of the market even if they don't officially support it. It was a very interesting choice.



awev said:


> That means one core of a CPU gets hammered, and *it can only utilize four cores* (or two cores and two threads) total.


Rubbish. DirectX11 is highly configurable and can utilize any and all resources available to the system it's running on. Any limits that are observed are limits imposed by the developers of a game, not DirectX11.

Welcome to TPU! Even though I'm being critical of your statement, that does not mean you are not welcome. Your comment didn't seem trollish, was well stated and seemingly well thought out even if it was incorrect about some points.


----------



## Selaya (Sep 19, 2020)

Wait what
I wasn't aware that MSFS 2020 would run on 7


----------



## lexluthermiester (Sep 19, 2020)

Selaya said:


> Wait what
> I wasn't aware that MSFS 2020 would run on 7


It should. I haven't tried it personally yet, but then why else would it be artificially limited to DX11? They're not advertising Windows 7 compatibility, but it's implied by the exclusive use of DX11.


----------



## biffzinker (Sep 19, 2020)

lexluthermiester said:


> Whatever arcticle you read must be VERY old as AMD has utilized SSE2 for 17 years.


It was a recent find in CPU reviews that MATLAB would skip its optimized SIMD code paths unless it detected an Intel CPU.

How to Bypass Matlab's 'Cripple AMD CPU' Function | ExtremeTech (www.extremetech.com)
"If you run Matlab on an AMD processor, you aren't getting all the performance you're entitled to. Matlab refuses to ..."

It was fixed by MathWorks a couple of months later:

Crippled No Longer: Matlab Now Runs on AMD CPUs at Full Speed - ExtremeTech (www.extremetech.com)
"MathWorks has issued a major update for Matlab. AMD CPUs now run at full speed and use AVX2 by default if you install the 2020a update."

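For completeness, the bypass that first ExtremeTech article documents was a single MKL environment variable; this is the commonly cited form, assuming an MKL build from before Intel removed the knob in 2020:

```shell
# Tell Intel MKL to use its fast (AVX2) code path regardless of the CPU
# vendor string. Only effective for MKL builds prior to 2020 Update 1;
# Intel removed the variable afterwards, and MathWorks' R2020a update
# made it unnecessary for MATLAB anyway.
export MKL_DEBUG_CPU_TYPE=5
# ...then launch the MKL-linked application (MATLAB, MKL-built NumPy, etc.)
```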

----------



## lexluthermiester (Sep 19, 2020)

biffzinker said:


> It was a recent find in CPU reviews that MATLAB would skip its optimized SIMD code paths unless it detected an Intel CPU.
> 
> 
> 
> ...


Ah, now see, that is a developer limitation, not a problem or limitation AMD created. Crappy move IMO.

However, we're a bit off topic; let's return the thread to its regularly scheduled RTX 3080 discussion...


----------



## Mussels (Sep 19, 2020)

lexluthermiester said:


> Ah, now see that is a developer limitation, not a problem or limitation AMD created. Crappy move IMO.
> 
> However, we're a bit off topic, let's return the tread to it's normally scheduled RTX3080 discussion...



"Individual developers may not be aware that the Intel MKL doesn't execute AVX2 code on non-Intel CPUs."
It's a problem for any program written using Intel's supplied libraries, but yeah, it's not really relevant to gaming.


----------



## kiriakost (Sep 19, 2020)

Everyone talks about CPU comparisons, and no one talks about the motherboard chipset.
Only when two motherboards use an identical chipset can we compare them on FPS performance.
Therefore this is really a comparison of entire platforms (Intel vs. AMD).
The only significant takeaway, in my eyes, is that after a decade of competition it is now obvious that the chipset performance gap is not a significant factor if all you care about is gaming performance.


----------



## Selaya (Sep 19, 2020)

Don't you plug your GPU in the slot that uses the CPU's and not the chipset's PCIe lanes


----------



## Mussels (Sep 19, 2020)

Selaya said:


> Don't you plug your GPU in the slot that uses the CPU's and not the chipset's PCIe lanes



Yup. Even storage is done that way these days (primary NVME slot)


----------



## awev (Sep 19, 2020)

Thank you @lexluthermiester for the feedback, and glad you found my first post interesting.  As @biffzinker shows, I can remember things, just not accurately.    I would have offered a citation if I could have remembered where I saw it - too many news feeds, not enough time to study and memorize it all.

The reason I mentioned code being compiled to prefer one chip over another is that one of the test games looks frame-locked at about 52 FPS (Anno 1800) on AMD CPUs.  @Selaya suggests it comes down to the I/O chiplet - more to the point, the Infinity Fabric path to memory.  While I don't disagree - yes, AMD made the mistake of going with a 12nm I/O die alongside the 7nm compute chiplets - I don't think that is the only thing.  So, are we testing whether a game is frame-locked by the CPU or by the GPU?  Nice that when you are testing a GPU you are also testing the limitations of a game based on the CPU and, by extension, the coding and compilation of said game.

As to my statements about DirectX 11 & 12, here is some of the info: DirectX 12 - what is the big deal?  People have tested different games, checked Task Manager (and other software), and reported that DirectX 12 only utilizes a few cores/threads and ignores about half or more of them, even in games released in the last couple of years.  DirectX 12 was developed at a time when four cores and eight threads were considered high end; nowadays that is only enough to run the OS and maybe M$ Office (just kidding).  I am looking to replace my video editing software because it does not properly use all of the resources available to it - both CPU cores/threads and GPU acceleration - so this is not limited to just games.

BIOS and chipsets can have an impact, just like the CPU can.  Just look at the article from Tom's Hardware where they test boot/reboot/shutdown times.  How do the BIOS and I/O controllers affect GPUs?  I am not sure, yet it might be interesting to find out.  Same thing with the OSes: how do they affect the frame rates?

The RTX 3080 is the consumer GPU king this week - that is, until the 3090 comes out next week.  And Intel still has a lead over AMD in gaming, even while Intel is still on PCIe Gen 3 on the desktop as opposed to AMD's PCIe Gen 4.

While I tend to prefer AMD (better value for the price, I think), I do have WinTel in the house as well.  I am not a fanboy of one or the other.

P.S.: When do I get to play games on a 120"  10K monitor with a 360 Hz refresh rate, with a CPU and GPU able to drive it?


----------



## Selaya (Sep 19, 2020)

Selaya said:


> Don't you plug your GPU in the slot that uses the CPU's and not the chipset's PCIe lanes


Just realised the typo, that post made no sense at all 
Obviously meant





> Don't plug your GPU in the slots that uses the chipset's PCIe lanes (use the one with direct CPU lanes, _duh_)


but I think you guys got the gist anyways 

As for the memory latency, that is probably an inherent disadvantage of the chiplet vs monolithic design - whether the I/O die is 7nm or not _probably_ isn't the dealbreaker.


----------



## kiriakost (Sep 19, 2020)

awev said:


> Same thing with the OSes: how do they affect the frame rates?



There is a simple answer and a complicated one; the simple one is free of charge.
With Windows 7, Microsoft thought to run another 30 services acting as a sort of *artificial intelligence* - the operating system's automatic troubleshooter.
An expert server administrator may disable half of them, which means less hardware activity wasted on pointless tasks.
In Windows 10 the number of Microsoft system services has increased further, and most of them cannot be disabled.
Additionally, Microsoft .NET is a pile of software layers; instead of working like a well-oiled machine, there is a plague of half-working security patches forcing the OS to behave like a turtle.
These days it makes you wonder (IT experts only) how an operating system - a prisoner, full of ties and handcuffs, all in the name of security and safety - is still free enough to deal with gaming applications when the user wants to play a game.

If Microsoft were a serious software maker, they would adopt as standard a new user profile named *Gaming mode*, in which all unnecessary system services are terminated.


----------



## AddSub (Sep 19, 2020)

kiriakost said:


> In Windows 10 the number of Microsoft system services has increased further, and most of them cannot be disabled.
> Additionally, Microsoft .NET is a pile of software layers; instead of working like a well-oiled machine, there is a plague of half-working security patches forcing the OS to behave like a turtle.



Do you remember back in the very early 2000s when Microsoft was pushing .NET as a clean, pure, rapid, easy-to-deploy framework to replace "DLL hell!!!"? Oh my! 20 years later and "DLL hell!!!" seems like paradise. .NET inter-framework version incompatibilities are infamous. The framework itself is massive, buggy, and rarely used outside of niche corporate deployments. The JIT performance Microsoft promised would be indistinguishable from native is anything but. I can LITERALLY tell when an application is .NET based: the beautiful feeling of clicking on an executable and staring at the desktop while nothing at all happens, all the while CPU and disk activity spike to near-max levels. Ahhh, a .NET framework application at work... trying to start itself... give me back "DLL hell" any day.


...
..
.


----------



## lexluthermiester (Sep 19, 2020)

awev said:


> P.S.: When do I get to play games on a 120" 10K monitor with a 360 Hz refresh, with a CPU and GPU able to drive it?


Unlikely in this decade. Maybe the next.


----------



## witkazy (Sep 20, 2020)

Redwoodz said:


> So how does it feel to spend $1000 on a graphics card and run it  at 1080p?


  Exactly.


----------



## xrror (Sep 21, 2020)

lexluthermiester said:


> It should. I haven't tried it personally yet, but then why else would it be artificially limited to DX11? They're not advertising Windows 7 compatibility, but it's implied by the exclusive use of DX11.


Technically Windows 8.1 is still officially supported (extended support runs until January 10, 2023), I think? Though also "technically", installing 8.1 on newer hardware (there's some arbitrary pre-Ryzen cutoff) isn't supported by Microsoft either, so... I'm not sure what they're targeting?

Is there some weird advantage to running the flight sims on Windows Server or something? I've no idea... you'd think MS would want to promote/push DX12 and, by extension, Win10 outright. I love Win7 too, but by this point... come on.


----------



## bxcounter (Sep 21, 2020)

https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-amd-3900-xt-vs-intel-10900k/15.html

Far Cry 5 is super sensitive to system memory latency and bandwidth, so it's good to know my 1080 Ti beats W1zzard's 3080 
This is with 4100 MT/s at 15-16-16-35 1T.


----------



## John Naylor (Sep 22, 2020)

It's not that gaming performance at low resolutions doesn't matter... the reality is that as you go up in resolution, the CPU stops being the bottleneck and the GPU becomes the bottleneck.  That doesn't erase the CPU limitation, and that's a significant distinction.  Going back a few years, the prevailing wisdom was "faster speed / lower CAS RAM doesn't improve gaming".   After a while that changed to "faster RAM doesn't matter - as long as your testing was limited to average fps".   It was then shown to have a significant impact on minimum fps.  But here again, faster RAM's impact was not being seen while the graphics card was the bottleneck.   When you put x80s in SLI, that bottleneck was removed and faster RAM / lower CAS showed a significant impact.

===============================

Memory Scaling on Core i7 - Is DDR3-1066 Really the Best Choice? (www.anandtech.com)

22.3% (SLI) increase in minimum frame rates w/ C6 instead of C8 in Far Cry 2
18% (single card) / 5% (SLI) increase in minimum frame rates w/ C6 instead of C8 in Dawn of War
15% (single card) / 5% (SLI) increase in minimum frame rates w/ C6 instead of C8 in World in Conflict

Also see http://www.bit-tech.net/hardware/memory/2011/01/11/the-best-memory-for-sandy-bridge/1

Memory Scaling on Haswell CPU, IGP and dGPU: DDR3-1333 to DDR3-3000 Tested with G.Skill (www.anandtech.com)

Memory Performance: 16GB DDR3-1333 to DDR3-2400 on Ivy Bridge IGP with G.Skill (www.anandtech.com)

Arma 3 CPU vs RAM performance comparison, 1600-2133 = up to 15% FPS gain (forums.bistudio.com)
"The performance gains are much bigger than i expected but i triple checked the results and there is well under 1fps difference between most runs at the same settings so if somebody else would like to give this a go to confirm or disprove my results it would be interesting Arma 3 stratis 1600-2133..."

===============================

Today it's more accepted that faster speed / lower CAS does affect things... but the effect won't show where the bottleneck lies elsewhere.   In this article, today, that bottleneck at 4K is the 3080... the line where that bottleneck kicks in will move substantially when you install a 4080 or 5080.  Right now the CPU limitations can't be seen at higher resolutions because the GPU is not allowing us to see them.   Car geeks know this... your sports car's drive train may be in no danger from your current engine, but increase the engine size, add a supercharger, and the HP / torque increase will likely overstress something in the drive train.

Second, unless you never do anything else with your PC, other uses must be considered.  Unfortunately, "I'm choosing this CPU because it's almost as good at gaming and it kicks tail in Cinebench" is not an uncommon train of thought.  I don't know anyone who runs Cinebench on a daily or weekly basis.   It's like choosing one house over another in Nome, Alaska because it has an air conditioner.

Even if it's primarily a gaming box, after gaming the next consideration is application performance, and where those are close or immaterial, "other factors".  If you game at 4K, the relevant questions are:

a)  Am I the guy / gal who keeps a CPU / GPU / MoBo / RAM combo for 3-5 years, or am I more likely to upgrade the GPU at some point down the line?
b)  After gaming, what do I have on my box that benefits either way from the CPU choice?   For me that's mainly AutoCAD and office suites, which decidedly favor Intel.  The guy I'd send my drawings to if I ever needed one rendered... he'd want AMD.  Then again, given that I've never needed one in the 35 years since I started my business, I could just as easily run the render overnight, in which case, whether it takes 2, 4, or 8 hours, I won't see it for at least 12.
c)  If you're still betwixt and between by the time you get done with that list, it's time to look at the other factors.   Power consumption is insignificant, but Intel has a substantial advantage in temperatures, the more important part of which is the fan noise associated with that 20°C temperature difference.

If you are decidedly in favor of one brand or the other and can't point to a specific application that has a marked advantage on that CPU, you're doing it wrong.   Often there will be a note on the requested build-component lists we receive saying "I chose an AMD CPU because it has more cores".   When I ask "Do you have any apps that benefit from more cores?", the most common and also the most disturbing answer is "I don't know".

Looking at how the market is responding to the choice, there's this.

The 3900 XT is selling at $476 ... 95% of its original MSRP
The 10900KF is selling at $551 ... 103% of its original MSRP

What does that tell us, and what questions does it pose?

a)  Demand for the 10900KF exceeds supply by a wee bit.
b)  Supply of the 3900 XT is a wee bit over current demand.
c)  Doesn't the current $75 price difference render the performance differences moot?

Again, no definitive answer; it depends.  If it's just because you're impatient and want the new build with the new 3080 toy yesterday, then no.   I wouldn't build a new box today under any circumstances other than "I can't work".  PSU prices are up 50-90% ... case prices are up as much as 100% ...  My youngest was building a new box some 2-3 months back (3800X), and the only things he was keeping were the SSHD and the graphics card.  So in order to have a spare PC in the house, I bought him a new SSHD ($85), and he left his in the old box with a working Windows 10 Pro install and all hardware drivers.  Son No. 2 wanted one last week ... it was $254.  Today it's $150.

Vendors can't make money selling what's not on their shelves, and they can't sell what is sitting on their shelves when demand is low ... so they price products to meet demand.   When supply catches up with demand, prices will stabilize - not just CPUs but everything else.  The consumer is the ultimate arbiter of product pricing; manufacturers can only charge what consumers are willing to pay.

So ... when asked "What CPU do I want to pair with my 3080 today?", the answer is neither.    The reason the price difference doesn't matter is that (forgetting the CPU choice for a moment), with PSU, case, MoBo, storage, and other prices in the stratosphere right now, I'm not willing to pay a $150-$175 premium for "I want it now" (SSD +$0, MoBo +$10, SSHD +$65, PSU +$60, case +$40).  In addition, waiting also allows the 3080 drivers to mature: fewer headaches, fewer bugs, fewer failed designs, plus the benefit of production-line improvements upping your chances in the silicon lottery.  The $75 CPU savings we'd see today with a 3900 XT would be eaten up more than twice over.

Somewhere between Thanksgiving and New Year's, prices will stabilize, we will know the winners and losers brand- and model-wise, and the price situation will be different than it is today.


----------



## W1zzard (Sep 23, 2020)

bxcounter said:


> so it's good to know my 1080 Ti beats W1zzard's 3080


Congrats, but I'm not using the integrated benchmark, try some gameplay


----------



## bxcounter (Sep 23, 2020)

W1zzard said:


> Congrats, but I'm not using the integrated benchmark, try some gameplay


If you are willing to share the save file or script you used to test SOTTR, I'm willing to test it out.

Thanks

PS: I did test 1440p and 4K, and the average fps for the 1080 Ti is spot-on in line with your results.


----------



## Kendragon (Sep 25, 2020)

I am sorry, but I have to say something about the RAM used on the 3900X. I had a 3900X and a 9900K; the Ryzen shined with DDR4-3600 at CL16, and the 10900K isn't that much faster than the 9900K. I was actually able to match or beat my 9900K, both with the same RTX 2080 Ti. Also, good luck finding a 10900K for MSRP or even near it. You could buy two 3900Xs for the price of one 10900K. But I can throw a wrench in it if someone would like to send me an RTX 3080 to test on my Threadripper 3960X vs. the 2080 Ti I have. I went to Threadripper mainly for the PCIe lanes. No SATA SSDs or spinning hard drives in my computer: I have two 2TB Aorus NVMe 4.0 drives, two 1TB Aorus NVMe 4.0 drives, and one Samsung 960 Pro NVMe 3.0. I used to have a 7820X that I missed dearly when I went to an 8700K, then the 9900K, then the 3900X, and now Threadripper 3.

Let us not forget that AMD CPUs used to beat Intel CPUs in everything (Athlon 64 days). I have confidence they will do so again with Zen 3. See, not a fanboy at all - I have had AMD/ATI and NVIDIA GPUs, and Intel and AMD CPUs; the Ryzen 3900X was my first step back to AMD since the Athlon 64.


----------



## terrorfrog (Sep 26, 2020)

FYI, AMD didn't become better. They never were.
At some point they were close enough to Intel but offered a better price for their CPUs.

The Ryzens didn't become faster or better overnight at all; AMD just made a new form of multi-socket layout (which is indeed better than a traditional multi-socket), put it into one package, and offered it for cheap.

Yet they are only either an edge-case buy or a budget one, not the technically better option.
Not being certified for anything, plus a bad board and BIOS ecosystem and lingering stability issues, still keeps Ryzen out of the ideal use cases (workstations and such).

So the only really reasonable use case is either being on a very tight budget, or having a workstation load but being able to live with stability issues.

There's a reason why there is not a single serious workstation or server board out there aside from EPYC.
Sad but true, and a good reason why Intel is still not in panic mode.


----------



## Redwoodz (Sep 28, 2020)

terrorfrog said:


> FYI, AMD didn't become better. They never were.
> At some point they were close enough to Intel but offered a better price for their CPUs.
> 
> The Ryzens didn't become faster or better overnight at all; AMD just made a new form of multi-socket layout (which is indeed better than a traditional multi-socket)
> ...


You are delusional. You don't even make sense. "Aside from EPYC" is like saying "aside from Xeon".

You have Threadripper for desktops. Dell and HP are both selling AMD-equipped servers.

Clock for clock, AMD has HIGHER IPC and LOWER power draw than Intel. Intel's process allows for higher clocks while AMD's allows for higher density.


And, thanks to AMD, this 10900K doesn't cost $1500 like its predecessors.
That is why it's the BETTER option for many.


----------

