
AMD Ryzen 9 9950X3D

Only 200 9950X3Ds @ Denver’s Microcenter.

5070 Ti - Zotac AMP Extreme - $999.99
5080 - Zotac AMP Extreme - $1,499.99
Stateside prices without tax
 
Well no, not really. Instead of one 8-core CCX, there are two 8-core CCXs spaced about 1 cm apart. Heat density increases in such a situation, so the temps being only about 10 °C apart is rather remarkable.


Even accounting for that, it's still remarkable that the 9800X3D and the 9950X3D are only a few degrees apart.
That 1 cm of separation is enough to improve thermal density a lot in such a constrained space.
Even so, it's also 2x the area for a ~42% TDP increase (120 W to 170 W), so heat density still goes down.
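A quick sanity check on that ratio, as a rough sketch (assuming two identical CCDs, so 2x the heat-source area, and using the stock 120 W / 170 W TDPs as a proxy for CCD power):

```python
# Back-of-the-envelope heat-density comparison: 9950X3D vs 9800X3D.
# Assumption: two identical CCDs means 2x the heat-source area.
tdp_9800x3d = 120.0  # W, stock TDP
tdp_9950x3d = 170.0  # W, stock TDP

power_ratio = tdp_9950x3d / tdp_9800x3d   # ~1.42, the ~42% TDP increase
area_ratio = 2.0                          # two CCDs vs one
density_ratio = power_ratio / area_ratio  # relative W per unit area

print(f"power ratio:   {power_ratio:.2f}x")
print(f"density ratio: {density_ratio:.2f}x")  # ~0.71x, i.e. density goes down
```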
 
The Test Setup page doesn't elaborate on the settings used for PBO Max, as far as I could find.

I would like to understand this more: what if CO was applied without increasing the power limits?
Would leaving the power limits alone still allow better boost clocks (or general performance) in that scenario, without increasing heat?

(probably an off-topic question - daily lunchtime musings)
Also, what if PBO & CO were applied only to match the core clocks to each other instead of trying to maximize each core's frequency potential?
Would that help smooth out potential gameplay issues?

(edit: back to the topic) I found the PBO settings here, although (just my personal opinion) I think the overclock settings should be listed under Test Setup.

There are two PBO overclocking settings where I'm curious how the values were decided.
  1. "Then, I removed all the power and current limits"
To my knowledge, removing all power limits has never really been the smart way to use PBO, because it ends up creating more heat, defeating the ability to maximize performance. Typically, what I have seen in the past on these forums is a careful balancing act between the power limits to reach the best performance with PBO.
  2. "I also set +200 MHz with a scalar of x10"
I've never known increasing the scalar to help much. How was this value determined when deciding to use it in the testing?
 
That 1 cm of separation is enough to improve thermal density a lot in such a constrained space.
That depends on the cooler. I'm not disputing that the heat gets spread out, only that there will be a significant increase in heat density in the limited space under an IHS of the same size.
 
Well no, not really. Instead of one 8-core CCX, there are two 8-core CCXs spaced about 1 cm apart. Heat density increases in such a situation, so the temps being only about 10 °C apart is rather remarkable.


Even accounting for that, it's still remarkable that the 9800X3D and the 9950X3D are only a few degrees apart.
I suspect it's an indicator that you can increase the 9800X3D's efficiency even more (with minimal performance losses) by decreasing the power limit closer to 130 W. But like you said before, binning may also come into play. I wonder if disabling the 2nd CCD and readjusting the power limit to match the 9800X3D would prove the 9950X3D has better-binned cores?
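For anyone who wants to approximate that experiment in software before touching the BIOS, here's a rough Linux-only sketch. The assumption that CCD1 maps to hardware threads 16-31 needs checking against lscpu on your own system, and the proper test would still be disabling the CCD in firmware:

```python
# Rough software stand-in for disabling CCD1 on Linux: take its hardware
# threads offline via sysfs (requires root).
# ASSUMPTION: threads 16-31 belong to CCD1 - verify with lscpu -e or
# /sys/devices/system/cpu/cpu*/topology/ before running.
CCD1_THREADS = range(16, 32)

for cpu in CCD1_THREADS:
    with open(f"/sys/devices/system/cpu/cpu{cpu}/online", "w") as f:
        f.write("0")  # write "1" to bring the thread back online
```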
 
...because it won't push the frame rates gamers want
Yes it will, and it does. In the rare games where it doesn't, it doesn't matter, for all the reasons I outlined above.

Edit: I posted in the wrong thread, lol; this post belongs in the 9950X thread, my bad.
 
$700 for a 16-core top-tier CPU? Seriously? If that's too expensive for you, stay on an older platform. This is effectively AMD's "Extreme Edition" CPU at $300 less than Intel had been charging for nearly 20 years, and with higher overall performance.
Yeah, I don't think a CPU on a mainstream platform should be priced as high as a mid-to-high-end GPU. I think the Ryzen 9 7950X3D has a fair price for the 'Extreme Edition,' as you call it, but I understand why AMD is pricing it so high: there's no real competition, and even the competition keeps prices high despite offering lower performance.
 
Yeah, I don't think a CPU on a mainstream platform should be priced as high as a mid-to-high-end GPU. I think the Ryzen 9 7950X3D has a fair price for the 'Extreme Edition,' as you call it, but I understand why AMD is pricing it so high: there's no real competition, and even the competition keeps prices high despite offering lower performance.
Compared to Threadripper pricing it's a deal, but a disappointment when it comes to motherboard expansion slots on AM5. (Sorry, I had to gripe about this again :shadedshu:)
 
Yeah, I don't think a CPU on a mainstream platform should be priced as high as a mid-to-high-end GPU. I think the Ryzen 9 7950X3D has a fair price for the 'Extreme Edition,' as you call it, but I understand why AMD is pricing it so high: there's no real competition, and even the competition keeps prices high despite offering lower performance.
Dude, $700 for this CPU is a deal.
 
It was proven the moment AMD released the 5950X... 8+8 works, 8 3D + 8 3D would work the exact same.
That's an oversimplification, not proof. No benchmarks, no proof; I'm sure you know that by now. You can't compare CPU load during AAA gameplay with just any other workload. I do believe the AMD engineer who said it's not worth it.

If a hypothetical CPU had unified V-Cache across both CCDs, it would be an improvement, but then again, few games are made for that many cores, for now. Not that this would be an argument against such a product.

The problem is that resources are reported to Windows as available, but it is not aware of where they are physically present in the processor. Each CCD is effectively a different node, and there are penalties for accessing data on the adjacent node, which is what causes the loss of performance and efficiency seen with these chips under Windows, and arguably why Linux is so much faster.
That's what I meant about two CCDs. How does Linux solve this? How does it circumvent the problem when the needed data is in the L3 of the other CCD?
I'm certain Linux gives better results overall, but how often are demanding, recent, native Linux game titles used when benchmarking? I've never seen it at Phoronix, although I haven't really looked around that much. I haven't seen any Linux game testing at L1tech with a 7950X3D or 9950X3D that shows a substantial lead over Windows.
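For the curious, here's a crude way to put a number on that cross-CCD penalty (a Linux-only sketch; the core IDs are assumptions about the CCD mapping, and Python's interpreter overhead dominates the absolute figures, so only the same-CCD vs cross-CCD delta means anything):

```python
# Crude core-to-core ping-pong probe: bounce a shared flag between two pinned
# processes and time the round trips.
import multiprocessing as mp
import os
import time

ROUNDS = 50_000

def pong(flag, core):
    os.sched_setaffinity(0, {core})  # pin the responder
    while True:
        while flag.value != 1:
            pass
        flag.value = 0

def measure(core_a, core_b):
    os.sched_setaffinity(0, {core_a})  # pin the initiator
    flag = mp.Value("i", 0, lock=False)
    worker = mp.Process(target=pong, args=(flag, core_b), daemon=True)
    worker.start()
    t0 = time.perf_counter()
    for _ in range(ROUNDS):
        flag.value = 1
        while flag.value != 0:
            pass
    elapsed = time.perf_counter() - t0
    worker.terminate()
    print(f"cores {core_a} <-> {core_b}: {elapsed / ROUNDS * 1e6:.1f} us/round trip")

if __name__ == "__main__":
    measure(0, 1)   # ASSUMED same-CCD pair - check lscpu -e first
    measure(0, 16)  # ASSUMED cross-CCD pair
```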
 
I would like to understand this more: what if CO was applied without increasing the power limits?
PBO limits were set properly, of course: 400+, but not as high as 34856459864. In the past, I noticed some issues with that.

  • "I also set +200 MHz with a scalar of x10"
I've never known increasing scalar to help much. How is was this value determined when deciding to use it in the testing?
These are the max values available, no doubt, with more manual tuning you can eke out another 1% of performance. Buy a CPU, show me your Cinebench results and settings
 
PBO limits were set properly, of course: 400+, but not as high as 34856459864. In the past, I noticed some issues with that.
LOL, yeah, I learned my lesson the hard way a long time ago with that 34856459864 setting, but luckily the CPU still works.

These are the max values available. No doubt with more manual tuning you can eke out another 1% of performance. Buy a CPU and show me your Cinebench results and settings.
It's mighty tempting, but I tapped out on Optane, so I'll have to wait a bit (probably a year or two) to get back to you on that. :toast:
 
Do I?
I really don't need it, but I WANT it more than a 5090.
 
Here is my list of pros and cons:

Gaming-class desktop processor for $699

Pros:
X3D cache optimized for better gaming performance
X3D cache located under the CCD for better cooling and therefore higher clocks
Extra CCD for workstation-class productivity
Simultaneous multithreading (SMT)
All P-cores for the highest performance per thread
AVX-512 instructions at full speed
ECC support (requires motherboard support)

Cons:
Some applications/games might not use the correct CCD (see the quick check below)
OS-level configuration required to get full potential (Game Bar)

For this type of processor, I don't think power, price, and efficiency are major considerations when purchasing.
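If you want to check whether a game actually landed on the V-Cache CCD, here's a minimal sketch using psutil; the process name is just an example, and the assumption that hardware threads 0-15 belong to CCD0 should be verified for your system:

```python
# Quick check of which cores a running game is allowed on.
# Requires: pip install psutil
import psutil

CCD0 = set(range(16))    # ASSUMED thread IDs for the V-Cache CCD
GAME = "eldenring.exe"   # hypothetical process name - substitute your game

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME:
        affinity = set(proc.cpu_affinity())
        print(f"affinity: {sorted(affinity)}")
        print("pinned to V-Cache CCD" if affinity <= CCD0 else "spans both CCDs")
```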
 
Having to flip a switch and reboot before you can game is crazy... but the copium regarding it is even crazier.
 
@W1zzard I appreciate your review. There was something that caught my attention.
(attached chart: Elden Ring RT, 3840x2160, minimum FPS)

How can the Intel 285K's performance be so much worse in Elden Ring? Is Easy Anti-Cheat still causing issues for the CPU?
 
Double CCD reduces the thermal density. The 9950X3D is a 170 W part with 2 CCDs + IOD, whereas the 9800X3D is a 120 W part with 1 CCD + IOD. Assuming ~20 W for the IOD, the 9800X3D has 100 W for its sole CCD, whereas the 9950X3D has 75 W per CCD.
I don't think so. Dual CCD does not make it run cooler. It was the same when the 5800X3D was new: people were saying it was an insane chip to cool, meanwhile I could run mine semi-passively, even at the socket limit for that part. Shit cooling is my guess.
 
I don't think so. Dual CCD does not make it run cooler. It was the same when the 5800X3D was new: people were saying it was an insane chip to cool, meanwhile I could run mine semi-passively, even at the socket limit for that part. Shit cooling is my guess.
A great example is comparing a 5800x to a 5950x. Both have the exact same TDP, and the dual-CCD model is way cooler.
(attached chart: 5800X vs 5950X temperature comparison)


The difference between a 9800x3D and a 9950x3D is likely due to the latter having a higher TDP. At the same TDP the delta in temperature should be even bigger.
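Back-of-the-envelope numbers for that comparison (assuming ~20 W for the IOD; 142 W is the stock PPT for AM4's 105 W TDP parts):

```python
# Per-CCD power at the same package limit, assuming a fixed ~20 W IOD budget.
IOD_W = 20  # assumption

def watts_per_ccd(ppt, ccds):
    return (ppt - IOD_W) / ccds

print(f"5800X (1 CCD):  {watts_per_ccd(142, 1):.0f} W per CCD")  # ~122 W
print(f"5950X (2 CCDs): {watts_per_ccd(142, 2):.0f} W per CCD")  # ~61 W
```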
 
143 W is just getting started; you can double that on a 5950X easily. My 5800X3D is easier to cool than my 5600X.
 
A great example is comparing a 5800x to a 5950x. Both have the exact same TDP, and the dual-CCD model is way cooler.
(attached chart: 5800X vs 5950X temperature comparison)

The difference between a 9800x3D and a 9950x3D is likely due to the latter having a higher TDP. At the same TDP the delta in temperature should be even bigger.
143 W is just getting started; you can double that on a 5950X easily. My 5800X3D is easier to cool than my 5600X.
I've always found it fascinating how the physics plays out in situations like these.
 