
What is the point of 9800X3D in 4k? Isn't 9950X better at the same price?



Cache vs Cores vs Software - that includes the Operating System and kernel

Won't read further as there are plenty of benchmark tables already posted
 
I've read instances of people claiming to have maxed out their cache. Is that more likely to be an "issue" if you're strictly running at 4K or above?
 
I mean, those are literally the settings I play at lol. I use both Balanced and Quality, whichever gets me to around 150 fps; 150+ fps feels amazing.

Point is this: it depends on what you're trying to do... a lot of these "in the real world" arguments are missing the point. If you game at high fps, get the 9800X3D; if you do production work, get the 9950X.

If you want big numbers and want to overclock a 16-core chip for funsies, then get the 9950X; totally valid reason. If you want to run as close to 230 G-Synced fps as possible, get the 9800X3D. There's not really a wrong answer.

The whole "My XXXX CPU is still good when I run fully GPU bound" -- that's great and valid, but not everyone likes to game like that -- my 10850k is probably still good if I run fully GPU bound as 30fps locked 8K, that doesn't change anything. The 3D cache can net you 30% in games, will you use it? Depends on your settings - if you're gonna DLAA w/ Max pathtracing at 4k then get whatever you want because it doesnt matter at that point.

Just don't gaslight yourself into thinking there's no difference between these CPUs, because you're NEVER gonna turn on DLSS Balanced at 4K... or because more powerful cards aren't going to come out in 12-18 months and put more stress on the CPU.
Well, in reality though, with my old 12900K the prime bottleneck in games (a major one, mind you) is my 4090. Even a 5090 would be one. When we're talking about AAA games (KCD2, Cyberpunk, Alan Wake, TLOU, etc.), the CPU can push over twice the framerate my 4090 can deliver at 4K with DLSS Quality.


Heavier games in the future will be heavier for the GPU as well.
 
So I am building a new PC around an RTX 5090, and I was looking for a CPU.
The 9950X is the same price as the 9800X3D right now.
What's the point of the 9800X3D if I will use my PC only at 4K+?
Isn't it better to go with the X and have 8 extra cores for the same price?
I'd go extra cores any day :rolleyes::):rockout:
 
I've long argued: what's the point of really anything above an overclocked 5600X/7500F (maybe an 8-core if you wish) for gaming? Everything's limited by console capability.

This will change soon (I think the PS6 will be 12c @ ≥~3.85 GHz), and I think it will come with CPU memory bandwidth roughly equivalent to 10 Gbps on a 128-bit bus (out of 32 Gbps GDDR7 on a 256-bit bus, ~5 Gbps-worth for the CPU and the other ~27 Gbps for the GPU).
Using ~7 cores like a current console, this would be similar to an x3d; using 12c, more similar to a non-x3d. Still better than what AMD's current memory controller is capable of, but not maxing out the bandwidth limitations.
If you look at the construction of the current 3nm Zen 5c Turin CCD etc., its lane transfer rate is up to 32 Gbps, which is what makes me think this. Anyway, think ~32GB @ ~10 Gbps, comparatively, on a desktop CPU.
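
To put rough numbers on that equivalence, here's a back-of-napkin sketch (all figures here are my own speculation, not confirmed specs):

```python
# Back-of-napkin math for the bandwidth equivalence above.
# All numbers are guesses/assumptions, not confirmed specs.

def bandwidth_gbs(bus_width_bits: int, rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * rate_gbps / 8

# Hypothetical PS6: 256-bit GDDR7 at 32 Gbps, ~5 Gbps-worth carved out for the CPU
ps6_cpu_slice = bandwidth_gbs(256, 5)    # 160.0 GB/s
# Desktop equivalent: 128-bit (dual-channel) DDR5 at ~10 Gbps per pin
desktop_equiv = bandwidth_gbs(128, 10)   # 160.0 GB/s

print(f"PS6 CPU slice: {ps6_cpu_slice:.0f} GB/s, desktop @ 10 Gbps/128-bit: {desktop_equiv:.0f} GB/s")
```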

There are two chains of thought when comparing this to available products:
1. An overclocked 8-core x3d (to say ~5.8 GHz) would not be dissimilar to 12c @ 3.85 GHz in aggregate throughput (8 × 5.8 ≈ 46.4 core-GHz vs. 12 × 3.85 ≈ 46.2). That said, the PS6 could be clocked higher (and/or the 9800X3D artificially constrained) so it forces an upgrade (on purpose).
2. The parallelism of more threads is more efficient when it can be used, and 16 cores don't need to be clocked as high to compete, but they do become bandwidth-limited because AMD's current memory controller speed sucks.

I don't think either thought process is right or wrong; there are pros and cons to both... but the answer is that neither is future-proof. I would simply wait for Zen 6. There are many reasons, but let's go over a few:

1. AMD will almost certainly bump up their memory controller speed with Zen 6 (4nm IOD?). The why is simple: DDR5-7200 has gotten extremely cheap, and it's an easy win to show a (much-needed) generational performance upgrade. For all intents and purposes, this bandwidth is the difference between a vanilla and an x3d 8-core CPU. It's actually less than that at current clocks; I'd say ~7000 MT/s to make up the difference, give or take. The current memory controller can't really do over ~6200 MT/s reliably. While one could argue the current design is an artificial constraint to sell x3d CPUs, in reality it saves a ton of cost by using an older (6/7nm) process and makes packaging multiple CCDs on an MCM affordable... so I'm cool with it. Also, until recently, ~6000 MT/s was the common-sense option wrt affordable RAM. This has changed somewhat recently, with the newer chips becoming less expensive and now aimed at a minimum of 7200 MT/s (CAS 34) for less than $100 per 32GB.
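
For scale, the peak-bandwidth arithmetic behind those memory speeds looks like this (a quick sketch; real-world throughput is lower):

```python
# Peak dual-channel DDR5 bandwidth: MT/s x 8 bytes/transfer x 2 channels.
# Simple peak-rate arithmetic only; sustained real-world throughput is lower.

def ddr5_bandwidth_gbs(mts: int, channels: int = 2) -> float:
    return mts * 8 * channels / 1000  # 64-bit channel = 8 bytes per transfer

for speed in (6000, 6200, 7000, 7200, 8000):
    print(f"DDR5-{speed}: {ddr5_bandwidth_gbs(speed):.1f} GB/s peak")
# DDR5-6000 -> 96.0 GB/s; DDR5-7200 -> 115.2 GB/s; DDR5-8000 -> 128.0 GB/s
```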

2. Because of this likely change, for AMD to continue selling x3d CPUs that actually make any sense, they need to increase the CCD to 12 cores, where you could see the pure-RAM-bandwidth limitation shifting from ~7000 MT/s to >10500 MT/s without v-cache. Given that (current) v-cache adds something not dissimilar to ~1000 MT/s of bandwidth, their memory controller kinda has to clock to at least 6667 MT/s, and the limit will likely be decided by how long they plan to keep the design (and therefore scale v-cache designs) into the future, and whether they plan on ramping up the v-cache amount etc. to accentuate the spread. My *guess* is it will do at least 7200 MT/s, maybe rated up to 8000 MT/s, but that's just a guess. I'll be plenty happy with ~C30/6750, but if it can scale to 8000 MT/s+, that would be cool. Just don't expect them to let the two overlap.

So what is the correct answer?

Buy what makes you happy and what you think will give you the most benefit right now in your current use cases, as both have their limitations (cores/speed vs. bandwidth) going forward.

As for that future? What should you be on the lookout for?
Probably a vanilla 12c Zen 6, but with RAM clocked as high/tight as you can get it. An X3D if you wish, but I don't think you'll 'need' it... though you never know; some games may demand more bandwidth (but fewer threads) than the pure Zen 6 memory controller can feed.
Potentially even a lower-end model (8-10 cores?), but nobody knows yet how PS6 dev capabilities will be set up (how many threads games can use), nor the split between the PS6's and a potential desktop part's clock capability.
Once we know those things (i.e., how to match a PS6's CPU as cheaply as possible without any limiting factors), that will be the time to buy.

Until then, go w/ your gut and you do you.
 
I'm more interested in this extra smoothness people are talking about, despite the benchmarks already available on this website. It's really hard to believe there's any difference at all at 4K in triple-A titles, but I guess I will find out soon :). Also taking this opportunity to use this comparison for content (my bet is a placebo effect).

 
I'm more interested in this extra smoothness people are talking about, despite the benchmarks already available on this website. It's really hard to believe there's any difference at all at 4K in triple-A titles, but I guess I will find out soon :). Also taking this opportunity to use this comparison for content (my bet is a placebo effect).

It'll certainly be interesting to see if things like traversal stutter are reduced with "thread agnosticism".
 
I'm more interested in this extra smoothness people are talking about, despite the benchmarks already available on this website. It's really hard to believe there's any difference at all at 4K in triple-A titles, but I guess I will find out soon :). Also taking this opportunity to use this comparison for content (my bet is a placebo effect).

I doubt anybody could perceive the difference, but using TPU's data from different reviews (the 9800X3D and 5090 reviews, so not really apples to apples): the 9950X achieves an average minimum of 127 FPS @1080p with a 4090, while the 9800X3D with a 5090 FE achieves 120 @4K. Assuming these results were obtained under the same circumstances, and assuming the 9950X could keep pushing the same minimum FPS @4K, there should be no performance loss in going with the 9950X.

But usually there is a performance loss when upping the resolution. In the 9800X3D review, the Ryzen 2700 goes from 62 FPS @1080p to 56 @4K on the same GPU that can push 78 FPS @4K with a 7800X3D. The 3600 goes from 80 to 65.
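Putting percentages on that (just arithmetic over the TPU figures quoted above):

```python
# Resolution-scaling loss from the TPU minimum-FPS numbers quoted above.
cases = {
    "Ryzen 2700": (62, 56),  # (min FPS @1080p, min FPS @4K), same GPU
    "Ryzen 3600": (80, 65),
}
for cpu, (fps_1080p, fps_4k) in cases.items():
    loss = (fps_1080p - fps_4k) / fps_1080p * 100
    print(f"{cpu}: {fps_1080p} -> {fps_4k} min FPS, ~{loss:.0f}% loss going 1080p -> 4K")
# Ryzen 2700: ~10% loss; Ryzen 3600: ~19% loss
```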

If the OP intends to buy a factory-OC 5090, the gap will increase a little more. Again, I don't believe the difference would be perceptible, but I also understand that it would feel less horrible knowing that your $7k GPU is being fully utilized.
 
But, but... 9950X will compile shaders faster.

@Toss, look at your use case: do you do video rendering or anything else that will benefit from having 16 cores? If not, then go for the 9800X3D.
 
It is a mistake to consider these limited tests of a few minutes as representative of overall in-game performance and experience. You can do a test in area X, measure a 5% difference, and say "meh", but later on, in a denser location or with more simulations going on simultaneously, the hit goes up to 20%, and you'll certainly notice it.

Not to mention simulators or poorly optimized games that simply run much better on CPUs with 3D V-cache regardless of resolution.

Either X3D is divinely good, or Arrow Lake is just very bad:

In general, it's a bit of both: X3D really is divinely good for gaming, and the U9 285K unfortunately has some performance regressions vs. the 13900K and Raptor Lake in general when it comes to games, unless tweaked to the high heavens (and in general, a tweaked Z790 platform will keep up).
 
I mean, it's 2-3 FPS better on average at 4K, which means in the long run it will last longer.
99 vs. 101.4 fps is the very definition of a negligible differential. And in the long run, games will become more highly threaded, meaning the 9950X is the better choice.

To answer the OP's question (as if they're even still reading replies): you won't notice any difference whatsoever between the two CPUs in 4K gaming. You will, though, notice a dramatic difference in certain multithreaded apps.
 
But, but... 9950X will compile shaders faster.

@Toss, look at your use case: do you do video rendering or anything else that will benefit from having 16 cores? If not, then go for the 9800X3D.
Yes, I do sometimes; 16 cores is future-proofing a bit.
99 vs. 101.4 fps is the very definition of a negligible differential. And in the long run, games will become more highly threaded, meaning the 9950X is the better choice.

To answer the OP's question (as if they're even still reading replies): you won't notice any difference whatsoever between the two CPUs in 4K gaming. You will, though, notice a dramatic difference in certain multithreaded apps.
That's why I am leaning toward the X: I play at 4K anyway, and multithreading will get better with time, so I will never hit a bottleneck.

I think we need some tests of 0.1% lows, or even 0.01% lows, between CPUs to find the most stable one and see whether X3D benefits stability.
The Intel 14900K especially was very good at 0.1% lows in CoD and R6 Siege back in the day; after Intel switched to TSMC, they lost that edge.
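
For anyone who wants to measure this themselves, here's a rough sketch of one common way 1% / 0.1% lows are computed from a frametime log (definitions vary between capture tools; this one averages the slowest frames):

```python
# Rough sketch: average FPS and 1% / 0.1% lows from frame times in milliseconds.
# "Lows" here = average FPS over the slowest 1% / 0.1% of frames (one common definition).

def lows(frametimes_ms: list[float]) -> dict[str, float]:
    n = len(frametimes_ms)
    slowest = sorted(frametimes_ms, reverse=True)  # longest (worst) frames first

    def avg_fps(frames: list[float]) -> float:
        return 1000 * len(frames) / sum(frames)

    return {
        "avg FPS":  avg_fps(frametimes_ms),
        "1% low":   avg_fps(slowest[:max(1, n // 100)]),
        "0.1% low": avg_fps(slowest[:max(1, n // 1000)]),
    }

# Example: mostly 7 ms frames (~143 fps) with occasional 25 ms stutters.
sample = [7.0] * 990 + [25.0] * 10
print(lows(sample))  # 1% low sits near 40 fps despite a ~139 fps average
```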
 
So... yeah, overall there's a significant difference even in single-player games, one that screenshots and end-result numbers can't convey, even at 4K. I am enlightened :). Below is my video of the difference I saw in 4K.

 
So... yeah, overall there's a significant difference even in single-player games, one that screenshots and end-result numbers can't convey, even at 4K. I am enlightened :). Below is my video of the difference I saw in 4K.

In FF, the 3D has much worse 0.1% lows (except at the scene where the 265K hits those stairs). Wtf?
 
In FF, the 3D has much worse 0.1% lows (except at the scene where the 265K hits those stairs). Wtf?
In FF7 I saw massive gains of 10°C in gaming temps lol
 
In FF, the 3D has much worse 0.1% lows (except at the scene where the 265K hits those stairs). Wtf?
Update the 3D driver (not the chipset drivers) and then flash the new BIOS. My 0.1% lows are so much better all around now than on the initial build.
 
So... yeah, overall there's a significant difference even in single-player games, one that screenshots and end-result numbers can't convey, even at 4K. I am enlightened :). Below is my video of the difference I saw in 4K.

Looks like the 1% lows are even healthier on Intel LOL
Also try Dota 2, CS2, OW2, LoL, and Valorant. That's where the CPU matters the most.
 
In FF7 I saw massive gains of 10°C in gaming temps lol
Your fps in PoE is also very low with both configs. I'm getting similar fps to your 3D with a stock 12900K. What GPU do you have?
 
Probably Windows 11 24H2 with VBS turned on.
 
Looks like the 1% lows are even healthier on Intel LOL
Also try Dota 2, CS2, OW2, LoL, and Valorant. That's where the CPU matters the most.
I've tried. In Overwatch you hit the 600 fps cap with most CPUs. In Dota 2, a 12900K already gets ~300 fps in huge teamfight moments (else you're looking at 400-500); the 3D is faster, but it's kinda pointless at that point. In Valorant, again, I'm sitting at over 500 fps with the 12900K; the 3D is still faster, but it's kinda pointless.
 
Your fps in PoE is also very low with both configs. I'm getting similar fps to your 3D with a stock 12900K. What GPU do you have?
Are you using DLAA and 4K too? A 5090. Not sure, could be the settings; it's 4am here, too late to do anything else now lol.
 
I've tried. In Overwatch you hit the 600 fps cap with most CPUs. In Dota 2, a 12900K already gets ~300 fps in huge teamfight moments (else you're looking at 400-500); the 3D is faster, but it's kinda pointless at that point. In Valorant, again, I'm sitting at over 500 fps with the 12900K; the 3D is still faster, but it's kinda pointless.
Yes, but it's not about averages, it's about the 0.1% lows.
Averages are more than enough for 4K.
 