
Ryzen 7950X3D with One CCD Disabled

Joined
Nov 15, 2020
Messages
913 (0.62/day)
System Name 1. Glasshouse 2. Odin OneEye
Processor 1. Ryzen 9 5900X (manual PBO) 2. Ryzen 9 7900X
Motherboard 1. MSI x570 Tomahawk wifi 2. Gigabyte Aorus Extreme 670E
Cooling 1. Noctua NH D15 Chromax Black 2. Custom Loop 3x360mm (60mm) rads & T30 fans/Aquacomputer NEXT w/b
Memory 1. G Skill Neo 16GBx4 (3600MHz 16/16/16/36) 2. Kingston Fury 16GBx2 DDR5 CL36
Video Card(s) 1. Asus Strix Vega 64 2. Powercolor Liquid Devil 7900XTX
Storage 1. Corsair Force MP600 (1TB) & Sabrent Rocket 4 (2TB) 2. Kingston 3000 (1TB) and Hynix p41 (2TB)
Display(s) 1. Samsung U28E590 10bit 4K@60Hz 2. LG C2 42 inch 10bit 4K@120Hz
Case 1. Corsair Crystal 570X White 2. Cooler Master HAF 700 EVO
Audio Device(s) 1. Creative Speakers 2. Built in LG monitor speakers
Power Supply 1. Corsair RM850x 2. Superflower Titanium 1600W
Mouse 1. Microsoft IntelliMouse Pro (grey) 2. Microsoft IntelliMouse Pro (black)
Keyboard Leopold High End Mechanical
Software Windows 11
The 7900X3D is out, right? I guess people are assuming it performs the same, or thereabouts, in gaming as the 7950X3D, given the lack of reviews. I'm still interested to know if that's true.
 
Joined
Apr 24, 2020
Messages
2,701 (1.62/day)
The 7900X3D is out, right? I guess people are assuming it performs the same, or thereabouts, in gaming as the 7950X3D, given the lack of reviews. I'm still interested to know if that's true.

6-core CCX vs. 8-core CCX.

I mean, it's a 12-core part, but only 6 cores have the 3D cache? So Windows Game Mode will force it down to 6 cores, right?

I'd expect most games to be fine on just 6 cores. But some games (Factorio) want more cores.
 

tanaka_007

New Member
Joined
Mar 29, 2022
Messages
12 (0.01/day)
The higher the number of threads, the easier it is for code to overflow the L3.
Most games scale the number of threads they create with the number of CPU cores, which may not be appropriate for a 2-CCD part.
In the 7950X3D's case, that is because the threads get pushed onto 8 cores.
In CCD0 mode (CCD1 disabled), the game will only issue threads for 8 cores (probably 16 threads) from the beginning.
If game applications support this in the future, there is a chance performance will improve further.

Here's an example on a 5900X; sometimes things like this happen.
[attached screenshots: 48.jpg, 56.jpg]
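On Linux, that "CCD0 mode" behaviour can be approximated from userspace without touching the UEFI, by capping a process's CPU affinity at launch. A rough sketch with two loud assumptions: `launch_on_ccd0` is a made-up helper, and the idea that logical CPUs 0-15 belong to CCD0 must be verified against `lscpu -e` on your own system.

```python
import os
import subprocess

# Assumption: logical CPUs 0-15 (8 cores x 2 SMT threads) are CCD0.
# Intersect with our allowed set so the mask stays valid on smaller CPUs too.
CCD0_CPUS = set(range(16)) & os.sched_getaffinity(0)

def launch_on_ccd0(cmd):
    """Start `cmd` and confine it to CCD0's logical CPUs (Linux-only).

    Threads the process spawns later inherit this mask, which is why
    pinning right after launch is enough to emulate "CCD0 mode".
    """
    proc = subprocess.Popen(cmd)
    os.sched_setaffinity(proc.pid, CCD0_CPUS)
    return proc
```

Windows has no `sched_setaffinity`, but `start /affinity` or third-party tools do the same job there.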
 
Joined
Jan 5, 2011
Messages
23 (0.00/day)
This is why I'm hoping someone tests a partially bloated Windows install, which will really start messing up the band-aids AMD is currently using to make things work properly on their dual-chiplet designs. If multiple programs are all trying to get CPU time, cores will stop being parked and the scheduler will have to carefully pick and choose which cores get loaded with what... AMD is literally taking a sledgehammer to a very delicate problem right now, and sterile test environments aren't going to show real-world performance numbers once things start getting loaded... I would imagine most people think they'd be capable of running 20-30 tabs, Discord, Twitch, and a game at the same time on a 12/16-core CPU without issue.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,780 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
In the UEFI/BIOS, is there an option to turn off the 7950X3D's cache?
Yes, there are several options. You can turn off cores individually, or change the CPPC2 Preferred Cores policy; it's all in the review.
 
Joined
Jan 14, 2019
Messages
12,337 (5.78/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
This is why I'm hoping someone tests a partially bloated Windows install, which will really start messing up the band-aids AMD is currently using to make things work properly on their dual-chiplet designs. If multiple programs are all trying to get CPU time, cores will stop being parked and the scheduler will have to carefully pick and choose which cores get loaded with what... AMD is literally taking a sledgehammer to a very delicate problem right now, and sterile test environments aren't going to show real-world performance numbers once things start getting loaded...
Parking and unparking cores doesn't take as much time and resources as you think. The scheduler's problem of choosing which cores to use for what is a thing on Intel as well as AMD. It's not platform-dependent.

I would imagine most people think they'd be capable of running 20-30 tabs, Discord, Twitch, and a game at the same time on a 12/16-core CPU without issue.
That is exactly the idea, yes. Apart from the fact that you don't even need 12/16 cores to do that.

Personally, I think the healthy approach to all this is to think about dual-CCD AMD CPUs the same way you think about heterogeneous Intel ones. Those have P+E cores, while AMD has 6+6 or 8+8 "normal" cores.
 
Joined
Jan 5, 2011
Messages
23 (0.00/day)
Parking and unparking cores doesn't take as much time and resources as you think. The scheduler's problem of choosing which cores to use for what is a thing on Intel as well as AMD. It's not platform-dependent.
The whole reason they look at stuff like this is to make sure it doesn't take as much time as you think. As for my personal experience with it, I've seen it cause thread contention, oscillations back and forth between cores, cores turning on and off spastically, and too much work being forced onto one core, pushing it to 100% utilization when there are other cores available to share the workload. This can be observed causing stutters in the 0.1% and 1% frame times. The more programs you have running, the more Windows has to deal with multiple things that need CPU time, and it needs to be capable of making the correct choices without sacrificing the performance of other programs - as a good thread scheduler should.

If all you're doing is running A game, then it won't have much of an issue with this. But most gamers don't run JUST a game and then turn off their PC... they literally live on their computers.

Not sure what your second quote was about. If it doesn't address the issue because it's not tested, then it doesn't take it into account. Intel and AMD do things differently; Intel has the Thread Director, and AMD has whatever they put together in probably the last six months after they realized they had a problem. Just because they both have asymmetrical designs doesn't mean their implementations are just as good as each other, and it's the job of reviewers to find these cracks, test them, and point things out when there is a problem.
 
Joined
Jan 14, 2019
Messages
12,337 (5.78/day)
Location
Midlands, UK
The whole reason they look at stuff like this is to make sure it doesn't take as much time as you think. As for my personal experience with it, I've seen it cause thread contention, oscillations back and forth between cores, cores turning on and off spastically, and too much work being forced onto one core, pushing it to 100% utilization when there are other cores available to share the workload. This can be observed causing stutters in the 0.1% and 1% frame times. The more programs you have running, the more Windows has to deal with multiple things that need CPU time, and it needs to be capable of making the correct choices without sacrificing the performance of other programs - as a good thread scheduler should.
I've seen those things happen on both Intel and AMD platforms. Like I said, it's a Windows scheduler thing, not a CPU architecture thing. (Not that they're real problems anyway, but that's beside the point now.)

Not sure what your second quote was about. If it doesn't address the issue because it's not tested, then it doesn't take it into account. Intel and AMD do things differently; Intel has the Thread Director, and AMD has whatever they put together in probably the last six months after they realized they had a problem. Just because they both have asymmetrical designs doesn't mean their implementations are just as good as each other, and it's the job of reviewers to find these cracks, test them, and point things out when there is a problem.
That's exactly my point: they're both asymmetrical designs. People tend to treat 12 and 16-core AMD CPUs as 12 and 16-core CPUs, when in fact, they're more like 6+6 and 8+8-core parts. Sure, Intel and AMD's approaches are fundamentally different, but that doesn't mean that one is better than the other, in my opinion.

If you find out how to build a working monolithic, heterogeneous 16-core CPU die with shared cache, let AMD and Intel know. Until then, E-cores and chiplets are the way, whether we like it or not.
 

Hxx

Joined
Dec 5, 2013
Messages
303 (0.08/day)
I've seen those things happen on both Intel and AMD platforms. Like I said, it's a Windows scheduler thing, not a CPU architecture thing. (Not that they're real problems anyway, but that's beside the point now.)


That's exactly my point: they're both asymmetrical designs. People tend to treat 12 and 16-core AMD CPUs as 12 and 16-core CPUs, when in fact, they're more like 6+6 and 8+8-core parts. Sure, Intel and AMD's approaches are fundamentally different, but that doesn't mean that one is better than the other, in my opinion.

If you find out how to build a working monolithic, heterogeneous 16-core CPU die with shared cache, let AMD and Intel know. Until then, E-cores and chiplets are the way, whether we like it or not.
or wait for the 7800x3d and problem solved lmao provided that 8 cores is all u need
 
Joined
Jan 14, 2019
Messages
12,337 (5.78/day)
Location
Midlands, UK
or wait for the 7800x3d and problem solved lmao provided that 8 cores is all u need
And if you need more, then inter-core latency and core parking are the least of your issues.
 
Joined
Jan 5, 2011
Messages
23 (0.00/day)
I've seen those things happen on both Intel and AMD platforms. Like I said, it's a Windows scheduler thing, not a CPU architecture thing. (Not that they're real problems anyway, but that's besides the point now.)
Sure, but just because they both happen doesn't mean they happen at the same frequency or severity. That's part of the whole point: how good a solution is isn't binary, it's not on/off, there are all kinds of in-betweens. Intel should be scrutinized just as hard as AMD, and I'm not giving them a free pass, but when someone has a better solution, you talk about it, just as much as you talk about how a solution isn't working... then you try to get it fixed. It's the whole reason stuff is benchmarked and looked at in depth, instead of going 'welp, they seem good enough, they must be equal!'

That's exactly my point: they're both asymmetrical designs. People tend to treat 12 and 16-core AMD CPUs as 12 and 16-core CPUs, when in fact, they're more like 6+6 and 8+8-core parts. Sure, Intel and AMD's approaches are fundamentally different, but that doesn't mean that one is better than the other, in my opinion.
It's not your point; you're implying they have to be identical because they're both solutions. I'm pointing out that they need to be looked at in depth to see the differences, how well they do something, and whether or not one solution is better than the other. That's what all benchmarking and reviewing is about. It has nothing to do with 'opinion'; it's figuring out which is objectively, scientifically better or worse than the other, not what you think.

Part of the problem is that this isn't being tested at all yet. I still haven't seen anyone specifically trying to break the thread scheduler or attempting to find shortcomings in it. Every review has been a sterile test environment. There are no gamers who literally turn on their computer, play a game, then shut it off and never install anything on it. I would question their sanity if that were the case - Dexter-level methodicalness.
 
Joined
Jan 14, 2019
Messages
12,337 (5.78/day)
Location
Midlands, UK
Sure, but just because they both happen doesn't mean they happen at the same frequency or severity. That's part of the whole point: how good a solution is isn't binary, it's not on/off, there are all kinds of in-betweens. Intel should be scrutinized just as hard as AMD, and I'm not giving them a free pass, but when someone has a better solution, you talk about it, just as much as you talk about how a solution isn't working... then you try to get it fixed. It's the whole reason stuff is benchmarked and looked at in depth, instead of going 'welp, they seem good enough, they must be equal!'
I still think what you're describing is a non-issue.

It's not your point; you're implying they have to be identical because they're both solutions. I'm pointing out that they need to be looked at in depth to see the differences, how well they do something, and whether or not one solution is better than the other. That's what all benchmarking and reviewing is about. It has nothing to do with 'opinion'; it's figuring out which is objectively, scientifically better or worse than the other, not what you think.
I'm not implying anything. What I'm saying is that Intel's E-cores and AMD's chiplet design are two different solutions for the same premise: people wanting more cores. Whichever is better for you is opinion. There's no such thing as objectively better or worse, as people use their PCs for different things, and expect different outcomes. If an objectively better architecture existed, then it would be enough to run just a single benchmark to see which one it is, which is clearly not the case.

Part of the problem is that this isn't being tested at all yet. I still haven't seen anyone specifically trying to break the thread scheduler or attempting to find shortcomings in it.
Why would anyone want to do that? Is that something that you do on your own PC?

Every review has been a sterile test environment.
Because every home environment is different, and there is no way to test for every scenario. A sterile, to-the-point software environment is the best way to see objectively measurable differences.
 
Joined
Nov 11, 2016
Messages
3,398 (1.16/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Looks like the extra V-cache is running out of steam at higher resolutions.
Fastest at 720p:
[chart: relative gaming performance, 1280x720]

But it drops down at 4K:
[charts: relative gaming performance and minimum FPS, 3840x2160]


So yeah, being the fastest gaming CPU at 720p doesn't make it the fastest gaming CPU :/ Luckily, TPU is the only tech outlet that tests CPUs at resolutions higher than 1080p.
 

Hxx

Joined
Dec 5, 2013
Messages
303 (0.08/day)
And if you need more, then inter-core latency and core parking are the least of your issues.
Yeah, the only issue I see with these new AMD chips is the price.
Everything else looks great tbh - power, performance, etc. - but it's gonna be rough dropping $450 on an 8-core chip in 2023 when Intel is so much cheaper, and I think many folks interested in this technology will cave and instead grab the 12-core or 16-core variant for longevity purposes.
 

Castillan

New Member
Joined
Mar 3, 2023
Messages
5 (0.01/day)
And if you need more, then inter-core latency and core parking are the least of your issues.
Is inter-core latency even a significant thing on dual-CCD CPUs? It may be if you're running multi-threaded and thrashing the exact same memory cache lines across multiple dies with very fine-grained memory barriers, but most (well-optimised) software is written to avoid doing such things, even on heterogeneous architectures. As an experiment, I tried 3 configurations on my 7950X:
1. CCD1 disabled (i.e. 8C/16T, like a 7700X)
2. CCD0/CCD1 both enabled, but with the last 4 cores on each disabled (4C/8T + 4C/8T)
3. CCD0/CCD1 both enabled, but with SMT disabled (8C/8T + 8C/8T)

For productivity and gaming, 1) and 2) performed nearly identically. 2) was very slightly worse, but I put that down to CCD1 generally clocking lower. 3) was superior in every test I ran. For light-to-medium loads, 3) is generally slightly faster than the full 7950X config (8C/16T + 8C/16T), which I put down to the cores clocking slightly higher, as they tended to run cooler without SMT loading them up. For productivity workloads, 3) generally ran about as fast as a full 7900X. For gaming, 3) outpaced everything, even if only slightly. PBO curves could also be tuned better in the non-SMT config.

I more or less came to the conclusion that unless I absolutely needed the extra speed for certain very CPU-bound productivity tasks, it was generally better to just leave SMT disabled full-time. On my Asus X670E Crosshair Hero, I keep the BIOS settings saved to a USB stick, and it's a snap to switch as needed.
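Config 3 (SMT off) can also be approximated per-process on Linux without a BIOS trip, by pinning a process to one logical CPU per physical core. A sketch under the usual caveats: `one_thread_per_core` is a name I invented, and it relies on the standard sysfs topology files being present.

```python
import os

def one_thread_per_core():
    """Pick one logical CPU per physical core (Linux sysfs topology).

    Pinning a process to this set gives it an SMT-off view of the CPU,
    while other processes can still use the sibling threads.
    """
    seen, picks = set(), []
    for cpu in sorted(os.sched_getaffinity(0)):
        base = f"/sys/devices/system/cpu/cpu{cpu}/topology/"
        with open(base + "physical_package_id") as f:
            pkg = f.read().strip()
        with open(base + "core_id") as f:
            core = f.read().strip()
        if (pkg, core) not in seen:   # first sibling of each core wins
            seen.add((pkg, core))
            picks.append(cpu)
    return picks

# e.g. os.sched_setaffinity(game_pid, one_thread_per_core())
```

Unlike the BIOS toggle this doesn't change clocks or thermals, so it won't reproduce the slight frequency gains described above; it only removes sibling-thread contention for that one process.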
 
Joined
Jan 5, 2011
Messages
23 (0.00/day)
I still think what you're describing is a non-issue.
You're downplaying the possibility of it being an issue and of it being looked into. I didn't say all hardware benchmarks should be replaced with semi-bloated Windows installs; I said it should be looked into in a special section. There are no sane gamers who only have a video game on their computer, turn it on, play the game, turn it off. Core parking only happens when a core is completely idle, so nothing is trying to run on it. It's the same as it basically being off.

Whichever is better for you is opinion.
The whole point of everything I suggested is that this shouldn't be opinion; it should be factually checked, scrutinized, and tested... both Intel and AMD. As far as maturity of the solutions goes, Intel's Thread Director has been out since the 12th-gen parts, and they did a remarkably good job, as it's a custom-built solution for exactly this. AMD's relies on Windows components they bandaged into a package. Both should be looked at more in depth; however, objectively looking at things right now, we can see they aren't the same in their current form.

Why would anyone want to do that? Is that something that you do on your own PC?
Yes, I run more than one program on my PC. At no point did I say this should replace normal testing; I said it should be done in addition to normal testing. It's weird in the tech sphere to find someone so entrenched in not looking at possible issues instead of exploring what might be a horrible misstep. If I didn't assume negligence before subterfuge, I would surmise an ulterior motive at this point.

I'll say once again that no one buys an 8+ core chip wanting to use only 8 cores, or expecting 8-core performance or worse when they do. If that is what is to be expected, then they should know about it.

I more or less came to the conclusion that unless I absolutely needed the extra speed for certain very CPU-bound productivity tasks, it was generally better to just leave SMT disabled full-time.
I've had SMT/HT disabled on my chips for a long time. It basically comes down to SMT being beneficial only if you're extremely CPU-bound - so four cores or fewer, or sometimes even six cores now. In those cases you need all the help you can get. It will help in programs that are very bandwidth-happy (overall throughput through your CPU), but that's generally not something that helps gamers, as it usually hurts the 0.1% and 1% lows. SMT off is generally just break-even when it doesn't do anything, assuming you aren't extremely CPU-constrained. I currently have a 5900X, for instance.
 
Joined
Jan 14, 2019
Messages
12,337 (5.78/day)
Location
Midlands, UK
You're downplaying the possibility of it being an issue and of it being looked into. I didn't say all hardware benchmarks should be replaced with semi-bloated Windows installs; I said it should be looked into in a special section. There are no sane gamers who only have a video game on their computer, turn it on, play the game, turn it off. Core parking only happens when a core is completely idle, so nothing is trying to run on it. It's the same as it basically being off.
You still haven't explained why this is a problem.

The whole point of everything I suggested is that this shouldn't be opinion; it should be factually checked, scrutinized, and tested... both Intel and AMD. As far as maturity of the solutions goes, Intel's Thread Director has been out since the 12th-gen parts, and they did a remarkably good job, as it's a custom-built solution for exactly this. AMD's relies on Windows components they bandaged into a package. Both should be looked at more in depth; however, objectively looking at things right now, we can see they aren't the same in their current form.
Again: I never said they're the same. What I said is, they're both (different) solutions for giving people more cores. You seem to be going in circles here without comprehending what I said.

Yes, I run more than one program on my PC. At no point did I say this should replace normal testing; I said it should be done in addition to normal testing. It's weird in the tech sphere to find someone so entrenched in not looking at possible issues instead of exploring what might be a horrible misstep. If I didn't assume negligence before subterfuge, I would surmise an ulterior motive at this point.
There definitely must be some kind of conspiracy behind review sites not testing an aspect of modern CPU architectures that almost no one ever cares about.
 
Joined
Dec 25, 2020
Messages
6,644 (4.67/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
The higher the number of threads, the easier it is for code to overflow the L3.
Most games scale the number of threads they create with the number of CPU cores, which may not be appropriate for a 2-CCD part.
In the 7950X3D's case, that is because the threads get pushed onto 8 cores.
In CCD0 mode (CCD1 disabled), the game will only issue threads for 8 cores (probably 16 threads) from the beginning.
If game applications support this in the future, there is a chance performance will improve further.

Here's an example on a 5900X; sometimes things like this happen.
[attached screenshots]

I am willing to write this off as anecdotal; Android emulation is horrible and extremely inconsistent. The game I like to play, NieR Re[in]carnation, flat out overloads all Android emulators; the only one that can run it more or less decently is NoxPlayer, and it still freezes every now and then. Frame rates are... horrible, just unspeakably horrible, and they are the exact same on my laptop's Ryzen 5 5600H and on my old PC's 5950X. I often have to restart the emulator because the Android kernel crashes, or the game repeatedly closes because of shader compilation errors... or it decides to run at single-digit frame rates for whatever reason.

I also would not hold my breath for any game engine to be specifically optimized for this CPU's hybrid cache architecture. They're barely optimized for regular CPUs as it is. Gamers should purchase either Raptor Lake (capable of hardware-assisted thread scheduling), the 7700X, or the 7800X3D to avoid scheduling woes.

Is inter-core latency even a significant thing on dual-CCD CPUs?

Yes, specifically when one CCX attempts to access cache on another CCX. The operating system's scheduler and most well-written applications will attempt to avoid this at all costs, but there are benchmarks specifically designed to test this scenario. As you found out with your testing, it is exceptionally difficult for real-world applications to run into this problem, though. However, that assumes CCDs with identical cache: it doesn't matter as much there, because the OS can allocate threads and each chiplet's cache evenly, whereas in the X3D's scenario one CCD has 3x the L3 of the other. That's where some games and apps might attempt to fetch data across, because that memory simply is not available physically on the local die. An eventual X3D with two stacked-cache CCDs would behave exactly like your regular 7950X does.
 

Castillan

New Member
Joined
Mar 3, 2023
Messages
5 (0.01/day)
Yes, specifically when one CCX attempts to access cache from another CCX.
Just to be clear, you're talking only about when memory barriers are being set, right? In regular memory-access operation, the L1-L3 cache on each CCD operates pretty much independently of the cache on a different CCD. The only time thread 2 (T2) on one CCD would try to access something in a cache line on a different CCD would be when a memory barrier is raised and a dirty cache line belonging to thread 1 (T1), running on a different CCD, has to be flushed; T2 then has to wait until that cache line is replicated locally before proceeding with its memory operation. E.g.: https://github.com/nviennot/core-to-core-latency/blob/main/src/bench/msg_passing.rs (not my code, by the way).

That means there can be two things going on here: either the inter-core memory-barrier sync, or simply a thread getting scheduled onto a core of a different CCD with a cold cache and having to go to main memory for what it needs. For the games that perform noticeably worse on a 7950X than on a 7700X, I'm curious what they're doing to create such a significant difference.
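The msg_passing benchmark linked above is Rust, but the ping-pong pattern itself is easy to imitate: two processes pinned to chosen cores bounce a shared flag, and the round trip is averaged. The sketch below is mine, not that repo's code; Python's interpreter overhead dwarfs the actual cache-line transfer, so only the relative gap between a same-CCD pair and a cross-CCD pair means anything, and the core IDs you pass must come from your own `lscpu -e` output.

```python
import os
import time
import multiprocessing as mp

ROUNDS = 1000

def _pong(flag, core):
    os.sched_setaffinity(0, {core})       # pin the replier (Linux-only)
    for _ in range(ROUNDS):
        while flag.value != 1:            # spin until ping arrives...
            pass
        flag.value = 0                    # ...then pong back

def ping_pong(core_a, core_b):
    """Average round-trip time of a flag bounced between two pinned cores."""
    flag = mp.Value('i', 0, lock=False)   # one shared int, no lock needed
    child = mp.Process(target=_pong, args=(flag, core_b))
    child.start()
    os.sched_setaffinity(0, {core_a})     # pin the pinger
    start = time.perf_counter()
    for _ in range(ROUNDS):
        flag.value = 1
        while flag.value != 0:            # spin until pong comes back
            pass
    elapsed = time.perf_counter() - start
    child.join()
    return elapsed / ROUNDS               # seconds per round trip
```

Comparing a pair of cores on the same CCD against a pair straddling the CCDs is the interesting experiment; which logical CPU numbers those are depends on your topology.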
 
Joined
Sep 23, 2008
Messages
311 (0.05/day)
Location
Richmond, VA
Processor i7-14700k
Motherboard MSI Z790 Carbon Wifi
Cooling DeepCool LS720
Memory 32gb GSkill DDR5-6400 CL32 Trident Z5
Video Card(s) Intel ARC A770 LE
Storage 990 Pro 1tb, 980 Pro 512gb, WD black 4tb
Display(s) 3 x HP EliteDisplay E273
Case Corsair 5000D Airflow
Power Supply Corsair RM850x
Mouse Logitec MK520
Keyboard Logitec MK520
Software Win 11 Pro 64bit
Benchmark Scores Cinebench R23 Multi 35805
*Hugs 13700K tighter* Thanks intel.
 

Adelgary

New Member
Joined
Apr 6, 2023
Messages
3 (0.01/day)
When you tested the 7950X3D with one CCD disabled, did you do it with a fresh Windows install without the chipset drivers/scheduling software? Or can the same results be achieved on an "official" 7950X3D software setup by simply rebooting into the UEFI to disable the CCD when desired?

I'm wondering if I can "have my cake and eat it too" by getting the 7950X3D and still getting the 7800X3D's better performance by disabling a CCD on a game-by-game basis, without needing to stick to one config or the other. Or does having AMD's scheduling software installed cause "core parking fail" or other issues, like it does when using the 7800X3D on a system that previously had the 7950X3D installed?
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,780 (3.71/day)
When you tested the 7950X3D with one CCD disabled, did you do it with a fresh Windows install without the chipset drivers/scheduling software? Or can the same results be achieved on an "official" 7950X3D software setup by simply rebooting into the UEFI to disable the CCD when desired?
I did not, so the AMD scheduling software was still installed and could have consumed a little bit of CPU time that was then not available to the games; that's why the results are slightly lower than a pure 7800X3D... at least that's my theory right now.

I'm wondering if I can "have my cake and eat it too" by getting the 7950X3D and still getting the 7800X3D's better performance by disabling a CCD on a game-by-game basis, without needing to stick to one config or the other. Or does having AMD's scheduling software installed cause "core parking fail" or other issues, like it does when using the 7800X3D on a system that previously had the 7950X3D installed?
It seems nobody knows that yet... and that's exactly the problem: you're not just going to be able to turn off one CCD, reboot, and start gaming. You'll also have to get rid of the AMD drivers, which right now seems to require an OS reinstall. Maybe dual-boot could be an option, until you boot the wrong partition and Windows Update installs some drivers for you ;)

We're talking about single-digit percentages here... is it really worth worrying that much? Or just buy a 13700K/13900K.
 

Adelgary

New Member
Joined
Apr 6, 2023
Messages
3 (0.01/day)
I did not, so the AMD scheduling software was still installed and could have consumed a little bit of CPU time that was then not available to the games; that's why the results are slightly lower than a pure 7800X3D... at least that's my theory right now.


It seems nobody knows that yet... and that's exactly the problem: you're not just going to be able to turn off one CCD, reboot, and start gaming. You'll also have to get rid of the AMD drivers, which right now seems to require an OS reinstall. Maybe dual-boot could be an option, until you boot the wrong partition and Windows Update installs some drivers for you ;)

We're talking about single-digit percentages here... is it really worth worrying that much? Or just buy a 13700K/13900K.
Thank you so much for the quick reply and the amazing work on the reviews!

I'm relieved to learn that you did this test without a fresh install, but when I compared the numbers from this review and the 7800X3D review, I didn't notice it being slightly slower as you say; maybe a couple of FPS here or there, and in some cases the 7950X3D with a CCD disabled was considerably ahead. I didn't compare every chart, but in the ones I did, the "simulated 7800X3D" pretty much matched or beat the real 7800X3D, to my delight.

And I wonder if the same can be achieved using something like Process Lasso, without needing to reboot to disable a CCD.
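For what it's worth, that style of tool is built on the OS affinity APIs: it confines a game's threads to one CCD without parking or disabling anything, so no reboot is involved. A hedged Linux sketch of the idea (`confine_to_vcache_ccd` is a made-up name, and "logical CPUs 0-15 are the cache die" is an assumption to verify against `lscpu -e`, not a fact):

```python
import os

def confine_to_vcache_ccd(pid, vcache_cpus=range(16)):
    """Restrict an already-running process (e.g. a game) to the V-cache CCD.

    Unlike disabling a CCD in the UEFI, this only steers the scheduler;
    the frequency CCD's cores stay online for everything else. Which
    logical CPUs sit on the stacked-cache die varies - check `lscpu -e`.
    """
    allowed = set(vcache_cpus) & os.sched_getaffinity(pid)
    os.sched_setaffinity(pid, allowed)
```

On Windows the equivalent is the `SetProcessAffinityMask` API, which tools in that category typically wrap; whether that avoids the driver-related issues W1zzard describes is exactly the open question.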
 