
AMD Ryzen 9 7950X Posts Significantly Higher Gaming Performance with a CCD Disabled

Joined
Jan 3, 2021
Messages
3,491 (2.46/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
With CCD-2 disabled, CCD-1 has all of the processor's power budget—up to 230 W—to itself, giving it much higher boost residency across its 8 cores.
Regarding the concentration of heat, wouldn't it be better to disable half of the cores in each CCD? Also the full L3 would remain active, and even if it's not really acting as one 64MB block, it still seems better to have two 32MB blocks than only one.
 
Joined
Apr 19, 2018
Messages
1,227 (0.51/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11
I'll put money on this being a Windows bug, or some kind of AGESA issue.
 
Joined
Jan 14, 2019
Messages
12,337 (5.76/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
So is the 7700X not single-CCD? If it is, shouldn't it be outperforming the 7950X in benchmarks?

When you disable one CCD on the 7950X, is the available L3 cache 32 MB, or is it still 64 MB?

One CCD with access to 64 MB of L3 should be faster in games??
Yes, but the 7950X has a higher max. boost clock, so supposedly a more aggressive boosting algorithm, not to mention a higher default TDP as well.

Regarding the concentration of heat, wouldn't it be better to disable half of the cores in each CCD? Also the full L3 would remain active, and even if it's not really acting as one 64MB block, it still seems better to have two 32MB blocks than only one.
That's a nice theory, but you're also gaining core-to-core latency. It needs to be tested, I guess.
 
Joined
Oct 16, 2018
Messages
966 (0.43/day)
Location
Uttar Pradesh, India
Processor AMD R7 1700X @ 4100Mhz
Motherboard MSI B450M MORTAR MAX (MS-7B89)
Cooling Phanteks PH-TC14PE
Memory Crucial Technology 16GB DR (DDR4-3600) - C9BLM:045M:E BL16G36C16U4W.M16FE1 X2 @ CL14
Video Card(s) XFX RX480 GTR 8GB @ 1408Mhz (AMD Auto OC)
Storage Samsung SSD 850 EVO 250GB
Display(s) Acer KG271 1080p @ 81Hz
Power Supply SuperFlower Leadex II 750W 80+ Gold
Keyboard Redragon Devarajas RGB
Software Microsoft Windows 10 (10.0) Professional 64-bit
Benchmark Scores https://valid.x86.fr/mvvj3a
Yes, but the 7950X has a higher max. boost clock, so supposedly a more aggressive boosting algorithm, not to mention a higher default TDP as well.
Yes, that is pretty obvious from just looking at the specs. The 7700X is unlocked, so you can get it to boost higher and to a higher TDP, provided you can keep it cool, using PBO etc.
In single-threaded loads you are not going to use up all your TDP!

So to clear things up: why are we not seeing an overclocked 7700X beating the 7950X in games? A 30% increase in performance is pretty high.

AMD could just cut the 7950X in half, make two CPUs, and double their sales instead of selling the 7700X. :p

OK, maybe not double their sales, but an 8-core/16-thread CPU with that performance would outsell most CPUs on the market today if priced right? Yes/No?
 
Joined
May 10, 2020
Messages
738 (0.44/day)
Processor Intel i7 13900K
Motherboard Asus ROG Strix Z690-E Gaming
Cooling Arctic Freezer II 360
Memory 32 Gb Kingston Fury Renegade 6400 C32
Video Card(s) PNY RTX 4080 XLR8 OC
Storage 1 TB Samsung 970 EVO + 1 TB Samsung 970 EVO Plus + 2 TB Samsung 870
Display(s) Asus TUF Gaming VG27AQL1A + Samsung C24RG50
Case Corsair 5000D Airflow
Power Supply EVGA G6 850W
Mouse Razer Basilisk
Keyboard Razer Huntsman Elite
Benchmark Scores 3dMark TimeSpy - 26698 Cinebench R23 2258/40751
The 7950X is not a gaming CPU, even if AMD marketing is pushing it that way (because it is easier to sell to enthusiasts).
You don't need 16 cores to play a game, and with two CCDs of different quality, these are the results.
I experienced something similar with the 3900X and 5900X, but it was less noticeable there because even though CCD2 was worse, the boost was quite similar.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
So is the 7700X not single-CCD? If it is, shouldn't it be outperforming the 7950X in benchmarks?

When you disable one CCD on the 7950X, is the available L3 cache 32 MB, or is it still 64 MB?

One CCD with access to 64 MB of L3 should be faster in games??

Disabled, it'll still be 64MB, and the remaining CCD is binned higher as well. The L3 cache is shared; it's labeled 2x32MB because half of it sits on the other CCD, but even if you disable the cores, you aren't disabling the shared cache.

I think what's happening is that you've got less thread contention and more thermal headroom, primarily, but the same amount of cache. Also, outside of thread contention, where scheduling dynamically changes thread assignments, the second CCD is going to cause some latency; AMD has minimized it, but you can't eliminate it outright. The same goes for E-cores, which is why some have preferred disabling them on Intel.

What is a little strange is why Intel/AMD aren't just including the full shared L3 cache across the lineup, regardless of cut-down core counts. I wonder what the larger L3 cache costs to manufacture relative to additional cores, and how that trade-off balances out.
 

Soupladel

New Member
Joined
Oct 18, 2022
Messages
2 (0.00/day)
It's not the boost, it's the latency penalty between CCDs. My 7950X is locked at 5.5 on all cores in all games and exhibits the same behavior when you set the affinity to just one CCD. For example, in Riftbreaker the difference is ~30 fps. Other games, like Cyberpunk and Battlefield 2042, respond well with affinity set to physical cores only. In most games I saw an uplift in average and 1% lows from manually setting the affinity. The issue is old, but for some reason it's accentuated on this platform. Windows doesn't care, AMD doesn't care, developers don't care either... so don't cripple your CPU, just use Process Lasso :rolleyes:
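For anyone who wants to script what Process Lasso is doing here, a minimal sketch using the Linux stdlib call `os.sched_setaffinity` (on Windows, affinity tools use `SetProcessAffinityMask` instead). The mapping of CCD0 to logical CPUs 0-15 is just an assumption matching the typical 7950X layout with SMT on; check your own topology first.

```python
# Sketch: restrict the current process to one CCD's logical CPUs, the
# same effect Process Lasso or Task Manager affinity achieves on Windows.
# Linux stdlib analogue; the CPU numbering is an assumption -- on a
# 7950X with SMT enabled, CCD0 is typically logical CPUs 0-15.
import os

FIRST_CCD = set(range(16))  # assumed: logical CPUs 0-15 = CCD0

def pin_self_to_first_ccd():
    """Clamp this process's affinity to CCD0, or leave it alone if
    none of those CPUs exist on this machine."""
    available = os.sched_getaffinity(0)           # 0 = calling process
    target = (FIRST_CCD & available) or available
    os.sched_setaffinity(0, target)
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    print("now restricted to CPUs:", sorted(pin_self_to_first_ccd()))
```

Pinning the game's process this way keeps the scheduler from bouncing its threads across the inter-CCD link without disabling any hardware.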

I have noticed the 5.5 GHz cap, and it only seems to have started happening after they introduced a BIOS for my board (Crosshair X670E Hero) that included the ComboAM5PI 1.0.0.3 patch A. Prior to this, all my cores were capable of boosting to 5.75, even on the slower CCD, when gaming.
 
Joined
Jan 3, 2021
Messages
3,491 (2.46/day)
Disabled, it'll still be 64MB, and the remaining CCD is binned higher as well. The L3 cache is shared; it's labeled 2x32MB because half of it sits on the other CCD, but even if you disable the cores, you aren't disabling the shared cache.

I think what's happening is that you've got less thread contention and more thermal headroom, primarily, but the same amount of cache. Also, outside of thread contention, where scheduling dynamically changes thread assignments, the second CCD is going to cause some latency; AMD has minimized it, but you can't eliminate it outright. The same goes for E-cores, which is why some have preferred disabling them on Intel.

What is a little strange is why Intel/AMD aren't just including the full shared L3 cache across the lineup, regardless of cut-down core counts. I wonder what the larger L3 cache costs to manufacture relative to additional cores, and how that trade-off balances out.
Manufacturing defects in cores are common. Defects in L3 are also common, as its surface area is similar to that of the cores. I assume that an L3 slice with a defect or two is still usable and remains active at 99% capacity - but if that isn't the case, then there may be some technical reason to disable the core next to it.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.65/day)
Location
Ex-usa | slava the trolls
AMD needs to release a new chipset driver to fix this, and tell M$ to get their act together with the OS power management plans.

You are not supposed to lose gaming performance because someone in that ecosystem doesn't know what they're doing.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I'm wondering if AMD could configure the second CCD to be parked and only enabled for AVX-512 workloads, with or without the other CCD, and whether there would be a latency advantage to offering an option for that. It seems like there very much could be, given it's not a monolithic die: even under perfect circumstances you'll have slight latency between CCDs, though AMD has minimized it a good deal over earlier Ryzen chips and should continue to refine and improve it further.
 

DrGrossman

New Member
Joined
Oct 18, 2022
Messages
4 (0.01/day)
I have noticed the 5.5 GHz cap, and it only seems to have started happening after they introduced a BIOS for my board (Crosshair X670E Hero) that included the ComboAM5PI 1.0.0.3 patch A. Prior to this, all my cores were capable of boosting to 5.75, even on the slower CCD, when gaming.
Yes, weird behavior. It only happens in gaming. On the desktop it boosts properly with loads that exceed 10%, but in games that barely use 10% it's just capped at 5.5. (X670E AORUS MASTER - AGESA 1.0.0.3 A)
 

Soupladel

New Member
Joined
Oct 18, 2022
Messages
2 (0.00/day)
Yes, weird behavior. It only happens in gaming. On the desktop it boosts properly with loads that exceed 10%, but in games that barely use 10% it's just capped at 5.5. (X670E AORUS MASTER - AGESA 1.0.0.3 A)
Only Core 0 of mine boosts correctly regardless of the activity or load; the other cores will not go above 5.5 no matter what I do.
 
Joined
Jan 3, 2021
Messages
3,491 (2.46/day)
What is a little strange is why Intel/AMD aren't just including the full shared L3 cache across the lineup regardless of cut down core count parts.
Actually, AMD does. I checked all parts with Zen 3 and Zen 4 chiplets, and the only exceptions that have L3 halved seem to be the Epyc 7513 (8 chiplets by 4 cores) and the weirder case of Epyc 7453 (4 chiplets by 7 cores). Also two Zen 4 Epycs, purportedly.

It seems like there very much could be, given it's not a monolithic die: even under perfect circumstances you'll have slight latency between CCDs, though AMD has minimized it a good deal over earlier Ryzen chips and should continue to refine and improve it further.
Sadly, this doesn't seem to be true. AnandTech did their measurements and found the inter-CCD latency to be 75-80 ns on the 7950X, which is only about 5 ns less than on the 5950X. As it stands, it doesn't make sense to ask the other CCD for data; that's slower than a RAM access.

Note: what is important, and affects performance, is the latency to access slices of L3, which are located at various distances from the core that wants the data. It's impossible to reliably measure that directly, but there's an indirect way: measure core-to-core latency, since each core is paired with a slice of L3 (at least on x86; E-core clusters are an exception).
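For reference, core-to-core latency measurements like these use a "ping-pong": two threads, each pinned to one core, bounce a shared cache line back and forth, and half the round-trip time approximates the handoff latency. The sketch below only shows the structure of that method; in Python the GIL and interpreter overhead dominate the numbers, so treat it as an illustration, not a measurement. Real tools do this in native code with atomics.

```python
# Ping-pong sketch: two threads, each pinned to one core, take turns
# flipping a shared flag. Half the round-trip time approximates the
# core-to-core handoff latency. NOTE: in CPython the GIL serializes the
# threads, so the result mostly measures GIL switching, not the cache
# hierarchy -- this only illustrates the method.
import os
import sys
import threading
import time

sys.setswitchinterval(1e-4)  # shrink the GIL handoff delay so this finishes fast
ITERS = 2000

def ping_pong(core_a, core_b):
    """Return the mean one-way handoff time in seconds between two pinned threads."""
    flag = [0]  # shared object the two threads take turns flipping

    def worker(core, my_turn, next_turn):
        try:
            os.sched_setaffinity(0, {core})  # pin this thread (Linux only)
        except (AttributeError, OSError):
            pass  # unsupported platform or missing core: run unpinned
        for _ in range(ITERS):
            while flag[0] != my_turn:  # spin until the other side hands over
                pass
            flag[0] = next_turn

    ta = threading.Thread(target=worker, args=(core_a, 0, 1))
    tb = threading.Thread(target=worker, args=(core_b, 1, 0))
    start = time.perf_counter()
    ta.start(); tb.start()
    ta.join(); tb.join()
    return (time.perf_counter() - start) / (2 * ITERS)

if __name__ == "__main__":
    # Cores 0 and 16 sitting on different CCDs is an assumption about
    # the usual 7950X numbering.
    print(f"cores 0<->1:  {ping_pong(0, 1) * 1e9:.0f} ns")
    print(f"cores 0<->16: {ping_pong(0, 16) * 1e9:.0f} ns")
```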
 

Silvinjo

New Member
Joined
Oct 4, 2022
Messages
8 (0.01/day)
AMD needs to release a new chipset driver to fix this, and tell M$ to get their act together with the OS power management plans.

You are not supposed to lose gaming performance because someone in that ecosystem doesn't know what they're doing.
This is a Microsoft problem, not an AGESA or chipset one.
 
Joined
Jun 14, 2019
Messages
4 (0.00/day)
Interesting. Maybe AMD should take advantage of whatever backend changes Microsoft made for Alder Lake's performance and efficiency cores, and use them so that the faster CCD is treated as performance cores and the slower one as efficiency cores. Then lightly threaded stuff would run only on the faster CCD, while the slower one handled background tasks. The vast majority of games barely scale to 8 cores and already get most of their performance at 6 cores, both with SMT enabled. Of course, this assumes the changes MS made can be used that way. But Alder Lake has no issues with games using efficiency cores by mistake, so the system seems to work, at least after some initial fixes.
 
Joined
Jul 15, 2020
Messages
1,021 (0.64/day)
System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.02mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F29e Intel baseline
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200Mhz C40 @4000MHz
Video Card(s) Gigabyte 970GTX Mini | NV 1080TI FE (cap at 50%, 800mV)
Storage 2*SN850 1TB, 230S 4TB, 840EVO 128GB, WD green 2TB HDD, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21` FHD W2261VP | Lenovo 27` 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid, stock 3*14 fans+ 2*12 front&buttom+ out 1*8 (on expansion slot)
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2325-2281 MC: i5-2400=i9 13900k SC | i9-13900k=37240-35500
So basically a 7700X with better binning.
This is beyond undervolting to up your performance; it's undercoring and under-CCDing..

So to clear things up: why are we not seeing an overclocked 7700X beating the 7950X in games? A 30% increase in performance is pretty high.
Better CCD binning on the 7950X.
It appears that having more than one CCD hurts FPS, which is countered by the higher frequencies of the 5900X/5950X/7900X/7950X.
Might be why the 5800X3D is so much faster in some games, besides the extra cache: just one CCD.
If AMD successfully engineers the 7800X3D without the frequency limit, it will dominate gaming easily.
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
any plans for a tpu review/bench of that?
The plan I'd have is for certain select companies to address this in their software.
 
Joined
Aug 27, 2021
Messages
129 (0.11/day)
I noticed @W1zzard tested on Win 11 - even on Zen 3, the Win 11 scheduler is far less accommodating of two CCDs. I've been saying for a long time that it not only treats a 2-CCD part like a big.LITTLE CPU (CCD2 acting as "little"), but also regularly disrespects the CCX hierarchy by juggling load from CCD1's preferred cores all the way onto Windows' designated CCD2 background core, which inevitably incurs inter-CCD performance penalties. Windows 10 at the very least still kept loads within CCDs. I wouldn't be surprised if the 7950X isn't the only CPU suffering this way; GamersNexus' review, with the 7950X's abysmal showing in a few games, seems to suggest that.

You can avoid some scheduler behaviours by disabling CPPC Preferred Cores on 1-CCD CPUs, but on a 2-CCD part it doesn't do much to stop Windows from picking some CCD2 core.
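To actually watch this juggling happen, you can sample which logical CPU a thread is being run on. A minimal Linux sketch reading `/proc/self/stat` is below (on Windows the analogue would be `GetCurrentProcessorNumber()`); treating CPUs 16 and up as the second CCD is an assumption matching the usual 7950X numbering.

```python
# Sketch: observe which logical CPU the scheduler keeps running us on.
# Linux-only (reads /proc/self/stat); on Windows the analogue would be
# GetCurrentProcessorNumber(). "CPU >= 16 means second CCD" is an
# assumption matching the usual 7950X numbering with SMT on.
import time

def current_cpu():
    """Return the logical CPU this process last executed on
    (field 39, 'processor', of /proc/self/stat)."""
    with open("/proc/self/stat") as f:
        data = f.read()
    # comm (field 2) may contain spaces, so split after the closing ')'
    fields = data[data.rindex(")") + 2:].split()
    return int(fields[36])  # field 39 overall, offset past pid and comm

def sample_cpus(n=50, interval=0.01):
    """Sample the current CPU n times; return the set of CPUs seen."""
    seen = set()
    for _ in range(n):
        seen.add(current_cpu())
        time.sleep(interval)  # give the scheduler a chance to migrate us
    return seen

if __name__ == "__main__":
    cpus = sample_cpus()
    print("ran on CPUs:", sorted(cpus))
    print("touched second CCD:", any(c >= 16 for c in cpus))
```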



The 5800X3D result is not the same.

Uncore in CPU-Z is the Fabric clock (FCLK) on Ryzens.

L3 runs on its own clock, which usually (but not always, especially for the X3D) mirrors the core clocks. It shares neither a clock domain nor a voltage domain with the Fabric.


I am guessing the test was performed under Win11 22H2. If I change the system to Win10 and do not turn off the CCD, can I achieve the performance mentioned in the article (which Win11 22H2 reaches after turning off the CCD)?
 

SL2

Joined
Jan 27, 2006
Messages
2,447 (0.36/day)
CapFrameX have more interesting updates, this time with a 5900X.

"Windows 11 22H2 can cause performance issues on PCs with Ryzen CPUs. This is a comparison of feature update 22H2 vs reinstallation OS (including 22H2)."


Better CCD binning on the 7950X.
It appears that having more than one CCD hurts FPS, which is countered by the higher frequencies of the 5900X/5950X/7900X/7950X.
I'm not so sure about that, since limiting power draw even down to 65 W (see below), which should affect clock frequency much more than binning IMO, doesn't make much of a difference in 1080p gaming benchmarks compared to what's happening in the OP.

Besides, the CPU doesn't run anywhere near max power draw in games to begin with; TPU measured 87 W. That's why I doubt that the fully enabled 7950X is power limited in games, compared to running with one CCD disabled.

A 5.6% higher clock speed (comparing the 7700X and 7950X) won't translate into the 16% higher FPS seen in the OP.
[attached: benchmark charts referenced above]
 
Joined
Aug 27, 2021
Messages
129 (0.11/day)
CapFrameX have more interesting updates, this time with a 5900X.

"Windows 11 22H2 can cause performance issues on PCs with Ryzen CPUs. This is a comparison of feature update 22H2 vs reinstallation OS (including 22H2)."



I'm not so sure about that, since limiting power draw even down to 65 W (see below), which should affect clock frequency much more than binning IMO, doesn't make much of a difference in 1080p gaming benchmarks compared to what's happening in the OP.

Besides, the CPU doesn't run anywhere near max power draw in games to begin with; TPU measured 87 W. That's why I doubt that the fully enabled 7950X is power limited in games, compared to running with one CCD disabled.

A 5.6% higher clock speed (comparing the 7700X and 7950X) won't translate into the 16% higher FPS seen in the OP.
I guess the original article's test was done under Win11 22H2. If I change the system to Win10 without turning off the CCD, can I achieve the 10% performance improvement mentioned in the article (which the Win11 22H2 7950X reaches after turning off the CCD)?

I would like to know if the performance boost from disabling the CCD also happens on Win10 (7950X).
 

SL2

Joined
Jan 27, 2006
Messages
2,447 (0.36/day)
I guess the original article's test was done under Win11 22H2. If I change the system to Win10 without turning off the CCD, can I achieve the 10% performance improvement mentioned in the article (which the Win11 22H2 7950X reaches after turning off the CCD)?

I would like to know if the performance boost from disabling the CCD also happens on Win10 (7950X).
I dunno; since they're blaming the upgrade alone and not 22H2 itself, you might as well reinstall W11 22H2.

Recently I've seen AMD claiming that there's no issue here, but I still doubt that...
 