
Intel Core i9-13900K

Joined
Jun 14, 2020
Messages
3,554 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Because that 6P + E core CPU is in the exact same boat: you get two sets of cores, and when the preferred cores are ignored, performance takes a hit

People will happily show me that this happens on AMD, yet massively deny that it could ever, ever happen on Intel.
The discussion started over whether or not the 2nd CCD is more useful than the E-cores. And it's not. They both do the same thing, boost MT performance, and are useless for everything else
 
Joined
Jun 6, 2022
Messages
622 (0.66/day)
I think the discussion is entering embarrassing territory. With or without E-cores, AMD immediately cut prices when it saw that the 13600K humiliates the entire 7000 series in games and comes very close to the 7700X in multithreading. A review of the 13900K with E-cores disabled versus the 7950X with one CCD could settle the dispute, although it's a waste of time, because the second CCD doesn't help at all in gaming. AMD saw it too, which is why they rushed the launch of the X3D series.

I found this review. In multithreading, two CCDs (16c/32t) destroy one CCD (8c/16t). In games... hmmm... I can't give a verdict. The differences are marginal and the victories are shared. It was discussed, and the verdict was unanimous: the second CCD helps gaming about as much as salt in the eyes.
Below you have an x950X owner who is waiting for a boost from software optimizations for the second CCD in games.
 

Joined
Jun 14, 2020
Messages
3,554 (2.14/day)
is very close to the 7700X in multithreading.
The 13600K annihilates the 7700X in multithreading. Did you mean the 12600K, or...?
 
Joined
Jan 14, 2019
Messages
12,681 (5.82/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Those from AMD also saw it, that's why they rushed the launch of the X3D series.
Hasn't this always been the case with AMD?

I mean, if you remember Bulldozer/Piledriver with its 8 INT / 4 FP core config, AMD said it would age very nicely... which it eventually did, but by that time, it was already obsolete.

If you need a CPU for gaming in the present, buy a single-CCD one, or an Intel with as many P-cores as you can get. End of. :)

There's no point paying extra for a second CCD that you end up disabling to squeeze a bit more gaming performance out of it.
 
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H150i Elite LCD XT White
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K FreeSync/GSync DP, LG 32UL950 32in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
Other reviewers are seeing higher results in CB23 with power limits removed.


In the real world, Blender 3D's free download has a higher install base than the pricey Cinema 4D R25 ($719 subscription billed annually, or $3,495 perpetual license).

From https://rog.asus.com/motherboards/rog-crosshair/rog-crosshair-x670e-hero-model/helpdesk_bios/
BIOS 0604 is older than 25th Sep 2022.

BIOS 0611 (updates AGESA to ComboAM5PI 1.0.0.2): includes improved system performance. Dated 26th Sep 2022.
BIOS 0705 (updates AGESA to ComboAM5PI 1.0.0.3 patch A): includes improved system performance. Dated 11th Oct 2022.
BIOS 0805 (updates AGESA to ComboAM5PI 1.0.0.3 patch A + D): includes improved system performance. Dated 15th Nov 2022.

For the 21st October 2022 benchmark release, Techpowerup didn't test ASUS X670E Crosshair Hero with BIOS 0705 (11th of Oct 2022).

My retail ASUS X670E Crosshair Hero motherboard shipped with BIOS 0705.


Cinebench R23 doesn't use AVX-512. It needs to be updated for AMD Zen 4's AVX-512, i.e. enable AVX-512 in Intel Embree.

Most current game simulation envelopes are designed around the 6 to 7 Zen 2 cores available on the PS5/Xbox Series X/Series S consoles. Higher PC CPU SKUs add extra performance on top of the console's core experience.

Extra CPU cores are useful for multitasking (e.g. FPS Monitor, system monitor, stream encoding, multi-tabs web browser, and other background tasks) while gaming.
 
Joined
Jun 6, 2022
Messages
622 (0.66/day)
A multi-tab web browser loads RAM, not the processor. If you load too many tasks in the background, and they are active, the limitations appear on the buses (especially the CPU-RAM interface); the number of processor cores is not the problem. Whoever buys a 16-core processor with the idea of using 8 for gaming and 8 for rendering (or something else) will have the shock of his life when he finds out that the 8 gaming cores give him the Celeron experience. Eight cores are now enough for gaming with the most powerful video cards, with which you play at 1440p or above, and the difference between 8 cores and 6 cores is zero.
E-cores are Intel's answer to the second CCD and to the more efficient node AMD uses even when only one CCD is active. They bring a boost in multithreading, and something extra, which I will compare to a vehicle fleet.
7950X vs 13900K:
AMD has 16 trucks and 16 trailers (16c/32t)
Intel has 8 trucks, 8 trailers and 16 cars (8c/16t + 16c)
With heavy loads, AMD wins in efficiency, but when you need a pack of cigarettes (or a bottle of milk; you understand, small things: web, OSD, Discord, etc.), AMD sends a truck with a trailer, while Intel only sends a car.
 
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
...the 8 cores for gaming will give him the Celeron experience.
I ran CPU-Z's benchmark on 16 of 32 threads, ran a game, and it didn't run like a Celeron!

I didn't say anything about running 3D rendering apps while gaming. Furthermore, there's Process Lasso, a tool that automates per-program CPU thread affinity profiles, e.g. limiting a non-gaming program to 8 of the 16 CPU cores.
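Process Lasso itself is a Windows GUI tool; as a hedged illustration of the same idea, Python's standard library exposes affinity directly on Linux via `os.sched_setaffinity`. The "first half of the cores" split below is a made-up example, not a real CCD or P/E-core map:

```python
import os

def pin_to_cores(cores):
    """Restrict the calling process (pid 0 = self) to the given logical CPUs."""
    os.sched_setaffinity(0, set(cores))
    return os.sched_getaffinity(0)

# Take the first half of the CPUs this process may currently run on;
# on a 7950X you could instead pass the first CCD's cores (e.g. 0-15).
current = sorted(os.sched_getaffinity(0))
subset = set(current[: max(1, len(current) // 2)])

allowed = pin_to_cores(subset)
print(allowed == subset)  # the kernel now schedules us only on those cores
```

The same call, issued per process, is essentially what an affinity manager automates.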

Since I run two 4K monitors on each of my two gaming PCs, opening new websites can place a load on the CPU, not just the memory.

The normal gaming benchmarks don't show gaming, OBS video encoding, MS Edge uploading an mkv to YouTube, FPS Monitor, Discord, Skype, and an ARGB lighting service (with ARGB RAM and six ARGB fans) all running at the same time. I don't use Process Lasso for those concurrent tasks.

As for the trucks vs cars analogy: each Intel Gracemont E-core has three 128-bit FPU/vector hardware units (with 256-bit AVX2 software compatibility). For the second sub-NUMA node, Intel supplied an extra 16 E-cores on the i9-13900K, hence a total of 6,144 bits.

Zen 4 has four 256-bit FPU/vector hardware units per core (with AVX-512 software compatibility). For the second sub-NUMA node, AMD supplied an extra eight Zen 4 CPU cores on the Ryzen 9 7950X, hence a total of 8,192 bits.
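Those two totals can be sanity-checked with back-of-envelope arithmetic (the per-core unit counts are as claimed in this post, not independently verified here):

```python
# 16 extra E-cores, each with three 128-bit FPU/vector units
ecore_bits = 16 * 3 * 128
# 8 extra Zen 4 cores, each with four 256-bit FPU/vector units
zen4_bits = 8 * 4 * 256

print(ecore_bits, zen4_bits)  # 6144 8192
```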

I don't need a kids' level trucks vs cars analogy.
 
Joined
Jun 6, 2022
Messages
622 (0.66/day)
Try a CPU-intensive render plus a game, not 10 seconds of a benchmark. For heavy multitasking you have Xeon or Threadripper, with at least four memory channels, not two. What do you think happens to the cores intended for gaming when they have to share the memory bus with the ones doing encoding or rendering? Correct: stuttering in the game.
 
Joined
Mar 21, 2016
Messages
2,508 (0.78/day)
Zen 4 has four 256-bit FPU/Vector hardware units (with AVX-512 software compatibility).

AVX-512: Intel's former sweetheart that it divorced itself from. Let's not talk about that.
 
Joined
Jun 6, 2022
Messages
622 (0.66/day)
I don't need a kids' level trucks vs cars analogy.
Is this better? Focus on the 50 W limit. In light tasks, Intel is definitely more efficient, and even faster. And it's even more efficient at running Discord, HWiNFO, and similar background applications. The cigarette-pack analogy is not exactly for children.
 

Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
AVX-512 Intel's former sweetheart that it divorced itself from. Let's not talk about that.
Intel Sapphire Rapids has AVX-512. Unlike Arm, Intel didn't properly plan its big.LITTLE-style instruction-set design, i.e. Intel superglued together two CPU types with different instruction sets.
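One way to see the "superglued" problem: on Linux, /proc/cpuinfo lists ISA flags per logical CPU, so a hybrid part whose P-cores advertised AVX-512 while its E-cores didn't would show two distinct flag sets. A minimal parser sketch (the sample dump below is made up for illustration):

```python
def distinct_flag_sets(cpuinfo_text):
    """Collect the distinct ISA flag combinations in cpuinfo-style text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found.add(frozenset(line.split(":", 1)[1].split()))
    return found

# Hypothetical two-CPU dump: one core with AVX-512, one without.
sample = (
    "flags\t\t: fpu sse avx avx2 avx512f\n"
    "flags\t\t: fpu sse avx avx2\n"
)
print(len(distinct_flag_sets(sample)))  # 2 -> heterogeneous ISA support
```

On a homogeneous CPU the same parser, fed the real `open("/proc/cpuinfo").read()`, would return a single flag set.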


Is this better? Focus on the 50 W limit. In light tasks, Intel is definitely more efficient, and even faster. And it's even more efficient at running Discord, HWiNFO, and similar background applications. The cigarette-pack analogy is not exactly for children.
That's a useless benchmark premise for desktop usage; it lacks the attached numbers and the graph's best-case direction.

Refer to https://www.techpowerup.com/review/intel-core-i9-13900k/22.html for Techpowerup's Application Power Consumption benchmarks.

Cinebench R23 doesn't use AVX-512.
Photoshop has poor multi-threading scaling.

Without a fully disclosed benchmark source, it's useless.

For Unreal Engine 4.26, from https://www.pugetsystems.com/labs/articles/intel-core-i9-13900ks-content-creation-review/



Cinebench R23 MT after 10 minute run


Blender 3D has a wider audience than the paid Cinema 4D R23 ($3,495.00 USD perpetual, or $719.00 billed annually).




(4-1b) Blender 3.3 Classroom: Compute

Blender's Classroom scene has a higher complexity than BMW27 (TechPowerUp's https://www.techpowerup.com/review/intel-core-i9-13900k/6.html only used BMW27).



For PS3 emulation, Techpowerup's https://www.techpowerup.com/review/intel-core-i9-13900k/ is using an old BIOS 0604 for ASUS X670E Crosshair Hero.

For the 21st October 2022 benchmark release, Techpowerup didn't retest ASUS X670E Crosshair Hero with BIOS 0705.

The first thing I did when I obtained my retail ASUS X670E Crosshair Hero (shipped with BIOS 0705, 11th Oct 2022) and the TUF X670E Plus Wifi was to update the BIOSes.

From https://rog.asus.com/motherboards/rog-crosshair/rog-crosshair-x670e-hero-model/helpdesk_bios/
BIOS 0604 is older than 25th Sep 2022.

BIOS 0611 (updates AGESA to ComboAM5PI 1.0.0.2): includes improved system performance. Dated 26th Sep 2022.
BIOS 0705 (updates AGESA to ComboAM5PI 1.0.0.3 patch A): includes improved system performance. Dated 11th Oct 2022. My retail ASUS X670E Crosshair Hero shipped with BIOS 0705.
BIOS 0805 (updates AGESA to ComboAM5PI 1.0.0.3 patch A + D): includes improved system performance. Dated 15th Nov 2022.


The benchmark video dated 12th Nov 2022 used an ASRock B650 Livemixer with BIOS version 1.11 AS03, dated 4th November 2022.

Ryzen 9 7900X vs Core i9-13900K running RPCS3's Uncharted: Drake's Fortune: the Ryzen 9 7900X is faster than the Core i9-13900K.

https://www.asrock.com/mb/AMD/B650 LiveMixer/Specification.asp#BIOS
ASRock's BIOS 1.09 has AGESA version 1.0.0.3 A.

For the power efficiency argument, I'm aware of https://www.techpowerup.com/review/...er-lake-tested-at-various-power-limits/2.html

From https://www.techpowerup.com/review/intel-core-i9-13900k/22.html (using the old BIOS 0604 for ASUS X670E Crosshair Hero )



From https://www.techpowerup.com/review/amd-ryzen-9-7950x/24.html (using the old BIOS 0604 for ASUS X670E Crosshair Hero)


Your graph is incomplete, and your post lacks proper reference links.
 
Joined
Jun 14, 2020
Messages
3,554 (2.14/day)
Ryzen 9 7900X vs Core i9 13900K running RPCS3's Uncharted Drake's Fortune. Ryzen 9 7900X is faster when compared to Core i9 13900K.
For the 50-watt argument, AMD's 8C/16T Ryzen 9 7940HS and 16C/32T Ryzen 9 7045HX cover these debates. Results at a 50 W TDP target can change with undervolting.
I'm aware of https://www.techpowerup.com/review/...er-lake-tested-at-various-power-limits/2.html
Just checked the video you posted. You are clearly unbiased, as usual.


The 7900X lost in most games in RPCS3, and fun fact, it's USING AVX-512! If Zen 4 is losing even with AVX-512 in emulation, holy cow.
 
Joined
Jan 14, 2019
Messages
12,681 (5.82/day)
Location
Midlands, UK
Is this better? Focus on the 50 W limit. In light tasks, Intel is definitely more efficient, and even faster. And it's even more efficient at running Discord, HWiNFO, and similar background applications. The cigarette-pack analogy is not exactly for children.
In my opinion, that's not because Intel's cores are better, but because chiplet AMD CPUs have a higher idle power consumption (I have my theories on why), which affects efficiency in light workloads as well.

I'm not exactly sure if Intel is faster, either, though it's pretty hard to measure with these applications.
 
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
The 7900X lost in most games in RPCS3, and fun fact, it's USING AVX-512!

Different PS3 games have different SPU usage levels; e.g. the 1st-party Uncharted: Drake's Fortune has extensive Cell SPU usage compared to 2nd-party games like ModNation Racers.

For Uncharted: Drake's Fortune on PS3:
The studio leveraged the PS3's Cell chip extensively, in particular the six available SPU satellite processors. The original engine targeted a 30fps update, based on a single processing thread consisting of game logic followed by a command buffer set-up (basically generating the instructions for the GPU). Most of the engine systems were hived off to the SPUs, with the main processor - Cell's PPU - running the majority of the actual gameplay code. - https://www.eurogamer.net/digitalfoundry-2015-the-challenge-of-remastering-uncharted


1st game: 7900X leads by 14%
2nd game: 13900K leads by 12%
3rd game: 7900X leads by 13%
4th game: 13900K leads by 9%
5th game: 13900K leads by 6%

They are about even.

The reason I focused on Uncharted: Drake's Fortune is its extensive use of all six SPUs. The win depends on the context.
 

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.90/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Sasmsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
The discussion started over whether or not the 2nd CCD is more useful than the E-cores. And it's not. They both do the same thing, boost MT performance, and are useless for everything else
This!
I figured out a better way to explain it

Ryzen 3600x vs 5600

Note that the inferior 3600X has higher clock speeds, the same cores, same threads, same cache, and a higher TDP.


The key here is that the 3600X did fairly well when it was new, but then its performance started to tank in newer titles.
It still does pretty well in older DX9 titles, but not in DX11/DX12 titles, where multi-threaded rendering became the norm.




Their single-threaded performance isn't that different, nor is their purely multi-threaded performance in a task that isn't latency-sensitive (this will come back up later).



Somehow, an 11-13% difference in ST and MT performance jumps to 20% in game performance?


The reason for the performance loss is simple: it's a 3C+3C design vs a 6C design.
The moment a game needs more than 3 high-performance threads, you take that loss, either from using SMT or from the higher latency of reaching the second CCX.

The performance loss from using SMT threads is well known to be greater than the loss from a split-CCX design here, hence the 3300X being slower with its 4C/8T design despite its raving success at launch: it was fantastic for games that needed fewer than 4 threads, with four cores on the one die.

Any time you see a CPU with extra chiplets, be they AMD's dual/triple CCX or Intel's E-cores, gaming performance only gives a damn about the performance of a single set of cores.
Games friggin hate being spread out due to the latency difference, and that's when all the cores have equal performance and clock speeds, let alone when the E-cores are much slower.
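The latency argument can be probed with a crude sketch: pin two processes to different logical CPUs and time message round trips between them. This measures pipe and scheduler overhead on top of raw core-to-core latency, so treat the absolute numbers as rough; the interesting comparison is a same-CCX pair versus a cross-CCD pair (e.g. cores 0 and 16 on a 7950X). Assumes Linux (fork start method, `sched_setaffinity`):

```python
import os
import time
import multiprocessing as mp

def echo(conn, core):
    os.sched_setaffinity(0, {core})  # pin the child to one logical CPU
    while True:
        msg = conn.recv()
        if msg is None:
            break
        conn.send(msg)

def ping_pong_ns(core_a, core_b, rounds=2000):
    """Average round-trip time (ns) between processes pinned to two cores."""
    old = os.sched_getaffinity(0)
    os.sched_setaffinity(0, {core_a})
    ctx = mp.get_context("fork")  # assumes Linux
    parent, child = ctx.Pipe()
    worker = ctx.Process(target=echo, args=(child, core_b))
    worker.start()
    t0 = time.perf_counter_ns()
    for _ in range(rounds):
        parent.send(1)
        parent.recv()
    elapsed = time.perf_counter_ns() - t0
    parent.send(None)  # tell the child to exit
    worker.join()
    os.sched_setaffinity(0, old)  # restore original affinity
    return elapsed // rounds

cores = sorted(os.sched_getaffinity(0))
if len(cores) >= 2:
    rtt = ping_pong_ns(cores[0], cores[1])
else:
    rtt = 1  # single-CPU environment: nothing to compare
print(rtt > 0)
```

Running it twice, once with both cores on one CCX and once across clusters, is the experiment the paragraph above is describing.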


As a summary:
ST performance: 11% higher
MT performance: 12.8% higher
Gaming: 20.2% higher
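A quick subtraction on those summary numbers (the percentages quoted from the charts in this post) shows how much of the gaming gap isn't explained by raw single-thread speed:

```python
st_gap, mt_gap, gaming_gap = 11.0, 12.8, 20.2  # % deltas, 3600X vs 5600

# Portion of the gaming delta not accounted for by ST throughput alone
unexplained = gaming_gap - st_gap
print(round(unexplained, 1))  # 9.2 -> the roughly 10-point topology penalty
```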


If you lose ~10% performance by having a game's load spread between two sets of identical cores, what's going to happen when a game needs to spread its load over cores with a 60% speed difference?



Zen 2 to Zen 3 showed us that when the time comes that games shift from needing 6 cores to 7, we're gonna see a loss on systems with a second identical cluster, and a much, MUCH bigger one on these unbalanced E-core designs.

That makes the "future-proofing" of 6-core CPUs a negative: the 12600K, 13600K, 5900X and even the new 7900X are all going to suffer when that shift happens, with the Intel side taking a bigger hit due to the performance imbalance between the primary cores and the secondary ones.


I'd love to see someone use Process Lasso on a 12th or 13th gen Intel system, locking the CPU to 4 P-cores plus the E-cores and comparing gaming results when that spread is artificially forced. Do we get the 60% loss? Somewhere in between? Frametime microstutters?

MSFS would be a great example, as it's extremely multi-threaded and cripples a lot of modern systems. See the 10-20% above, and then realise this is at ultra settings; lower settings would space this out much farther.




I just... have this loathing for E-cores and extra CCXs and how they're touted as being for gamers.
If Intel won't sell 8-core CPUs without making you pay money, wattage and heat (and the associated cooling) for gaming-useless E-cores, that's a goddamned negative.
AMD realised it with CPUs like the 3300X, 5600/X, 5800X3D and then the 7700/X: max out a single-chip design for the gamers, instead of making them pay for HEDT workstation CPUs.
 

Joined
Jun 6, 2022
Messages
622 (0.66/day)
That's a useless benchmark premise for desktop usage; it lacks the attached numbers and the graph's best-case direction.

Your graph is not complete and your post lacks proper reference links.
You just have to pay attention to the 50 W limit.
Sources: TPU reviews of the 7950X and 13900K.
I don't understand why you insist on proving the importance of the second CCD in gaming when the difference between the 7700X and 7950X is ZERO.

In my opinion, that's not because Intel's cores are better, but because chiplet AMD CPUs have a higher idle power consumption (I have my theories on why), which affects efficiency in light workloads as well.

I'm not exactly sure if Intel is faster, either, though it's pretty hard to measure with these applications.
It really doesn't matter. It is up to each user to decide whether he needs only trucks with trailers or not. The discussion was rekindled because someone decreed that the second CCD is vital in gaming.

Regarding heavy loads with gaming in parallel: even antiviruses have a Gaming Mode so as not to disturb the game. A 7950X has enough cores to run without feeling the antivirus, but the processor's communication paths to the rest of the components are the same as on an entry-level 7600.
 
Last edited:
Joined
Jan 14, 2019
Messages
12,681 (5.82/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
You just have to pay attention to the 50W limit.
Sources: TPU reviews on 7950X and 13900K.
I don't understand why you insist on proving the importance of the second CCD in gaming when the differences between 7700X and 7950X are ZERO?
View attachment 280895

It really doesn't matter. It is up to each user to decide whether he only needs trucks with trailers or not. The discussion was rekindled because someone decreed that the second CCD is vital in gaming.

Regarding heavy load with gaming in parallel: even antiviruses have a Gaming Mode so as not to disturb the game. A 7950X has enough cores to run a game without feeling that antivirus, but the processor's communication paths to the rest of the components are the same as on an entry-level 7600.
Let's just say that I bought a 5950X at launch with the exact same mindset: that it would be useful in gaming when antivirus, or Windows update, or something else kicks in. I sold it a couple months later because I realized that it was pointless. Then I had an 11700 (non-K) in a SFF system, and I would still have it in my main rig if I hadn't reshuffled my PCs due to a friend wanting a mini gaming rig for his daughter (so now it's in my HTPC). My main PC has a 7700X now, and I couldn't be happier.

This is why I understand people who think that the second CCD is useful for background tasks - and also why I'm saying that it's not. It is not possible to peg 8 modern CPU cores at 100% usage in any game, so when Windows Update actually does kick in, you don't feel it: the lightly loaded threads have enough unused CPU time to handle it, and there's no added latency from communication between CCDs.

If you only have 8 trucks with trailers, and you need to deliver a letter, there's plenty of space for it on one of those trailers, even if they're loaded with something else as well.
 
Last edited:
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H150i Elite LCD XT White
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K FreeSync/GSync DP, LG 32UL950 32in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
This!
I figured out a better way to explain it

Ryzen 3600x vs 5600
View attachment 280876
Note that the inferior 3600x has higher clock speeds, same cores, same threads, same cache, and a higher TDP


The key here is that the 3600x did fairly well when it was new, but then its performance started to tank in newer titles
It still does pretty well in older DX9 titles, but not in DX11/DX12 titles where multi threaded rendering became the norm




Their single-threaded performance isn't that different, nor is their pure multi-threaded performance in a task that isn't latency-sensitive (this will come back up later)
View attachment 280879View attachment 280880


Somehow, an 11% difference in ST and MT performance jumps to 20% in game performance?
View attachment 280882

The reason for the performance loss is simple - it's a 3C+3C design vs a 6C design.
The moment a game needs more than 3 high-performance threads, you take that loss either from using SMT or from the latency penalty of hopping to the second CCX.

The performance loss from using SMT threads is well known to be greater than that from a split-CCX design here too - hence the 3300X being slower today with its 4C/8T design despite its raving success at launch, because it was fantastic for games that needed <4 threads, with all four cores on the one die

Any time you see a CPU with extra core clusters, be it an AMD dual/triple-CCX design or Intel's E-cores, gaming performance only gives a damn about the performance of a single set of cores.
Games friggin hate being spread out due to the latency difference - and that's when all the cores have equal performance and clock speeds, whereas the E-cores are much slower.


As a summary:
ST performance: 11% higher
MT performance: 12.8% higher
Gaming: 20.2% higher


If you lose ~10% performance by having a game's load split between two sets of identical cores, what's going to happen when a game needs to spread its load over cores with a 60% speed difference?
View attachment 280886
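That imbalance argument can be sketched with a back-of-the-envelope model (entirely my own toy, not a simulation of any real scheduler): split a frame's work evenly across threads and let the slowest core gate the frame.

```python
def frame_time(work_units: float, core_speeds: list[float]) -> float:
    """Naive even split: the frame finishes when the slowest core finishes."""
    per_thread = work_units / len(core_speeds)
    return max(per_thread / speed for speed in core_speeds)

# 6 units of work on 6 equal fast cores vs spilling a 7th thread onto a
# hypothetical 0.6x-speed core (roughly the P-core/E-core gap cited above):
six_fast = frame_time(6.0, [1.0] * 6)          # 1.00
spill    = frame_time(6.0, [1.0] * 6 + [0.6])  # ~1.43 - the slow core gates it
```

Under this naive split, adding the slower core makes the frame ~43% longer even though total throughput went up - which is exactly the stutter scenario when a latency-sensitive thread lands on a slower core.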

Zen 2 to Zen 3 showed us that when the time comes for games to shift from needing 6 cores to 7, we're going to see a loss on systems with a second identical cluster, and a much, MUCH bigger one on these unbalanced E-core designs

That makes the "future" of 6-core dies a negative - the 12600K, 13600K, 5900X and even the new 7900X are all going to suffer when that shift happens, with the Intel side taking the bigger hit due to the performance imbalance between the primary cores and the secondary ones


I'd love to see someone use Process Lasso on a 12th- or 13th-gen Intel system, locking a game to 4 P-cores plus the E-cores and comparing gaming results when that spread is artificially forced - do we get the 60% loss? Somewhere in between? Frametime microstutters?
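For a quick-and-dirty version of that experiment without Process Lasso, Windows' built-in `start /affinity` takes a hexadecimal mask of logical CPUs. A tiny helper to build the mask - the assumption that the P-cores occupy the first logical CPU indices in hyperthread pairs matches typical 12th/13th-gen layouts, but check it against your own system's core numbering:

```python
def affinity_mask(logical_cpus) -> int:
    """Build a CPU affinity bitmask: one bit per logical CPU index."""
    mask = 0
    for cpu in logical_cpus:
        mask |= 1 << cpu
    return mask

# 4 P-cores with HT = logical CPUs 0-7 -> mask 0xFF
print(f"start /affinity {affinity_mask(range(8)):X} game.exe")
```

Pinning to 4 P-cores *plus* all E-cores (the forced-spread case above) would just mean extending the index list to also cover the E-core logical CPUs.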

MSFS would be a great example, as it's extremely multi-threaded and cripples a lot of modern systems - see the 10-20% above, then realise that's at ultra settings; lower settings would spread the gap out much further


I just... have this loathing for E-cores and extra CCXs and how they're touted as being for gamers.
Intel won't sell you an 8-core CPU without making you pay money, wattage and heat (and the associated cooling) for gaming-useless E-cores - that's a goddamned negative.
AMD realised it with CPUs like the 3300X, 5600/X, 5800X3D and then the 7700/X - max out a single-chiplet design for the gamers, instead of making them pay for HEDT workstation CPUs
Zen 2 vs Zen 3 also involves a load-store difference, not just a 4-core-CCX vs 6-core-CCX difference.

A Zen 3 core can do up to three loads and two stores per clock cycle.
A Zen 2 core can do up to two loads and one store per clock cycle.

On the CPU-to-CPU core latency subject:


Four sub-NUMA nodes for Ryzen 9 3950X



Two sub-NUMA nodes for Ryzen 9 5950X.



Two sub-NUMA nodes for Ryzen 9 7950X. The CCD in the 1st sub-NUMA node is usually the fastest - refer to the ASUS AM5 BIOS's SP score for a silicon-quality estimate.





There are multiple sub-NUMA nodes among the E-Cores. The 1st sub-NUMA node is the fastest.

References
1. https://www.anandtech.com/show/1621...e-review-5950x-5900x-5800x-and-5700x-tested/5
2. https://www.anandtech.com/show/1758...ryzen-5-7600x-review-retaking-the-high-end/10
3. https://www.anandtech.com/show/17601/intel-core-i9-13900k-and-i5-13600k-review/5

On the six-core-CCX "fine wine" issue: mainstream games are designed around the current game console generation's two quad-core Zen 2 CCXs, with one CPU core reserved for the OS.

----
Ryzen 7 5800X boosts up to 4.7 GHz.
Ryzen 9 5950X boosts up to 4.9 GHz.
AMD didn't release a Ryzen 7 5800X whose CCD has the 5950X's 4.9 GHz max-boost silicon.

You just have to pay attention to the 50W limit.
Sources: TPU reviews on 7950X and 13900K.
I don't understand why you insist on proving the importance of the second CCD in gaming when the differences between 7700X and 7950X are ZERO?
View attachment 280895

It really doesn't matter. It is up to each user to decide whether he only needs trucks with trailers or not. The discussion was rekindled because someone decreed that the second CCD is vital in gaming.

Regarding heavy load with gaming in parallel, even antiviruses have Gaming Mode so as not to disturb the game. A 7950X has enough cores to run without feeling this antivirus, but the communication paths of the processor with the rest of the components are the same as with an entry 7600.
Single-threaded benchmarks run on the 13900K's P-cores, so they don't reflect a multitasking split where a game holds the 1st sub-NUMA node while other desktop programs run alongside it.

*Context*
Super Pi benchmark
From https://www.techpowerup.com/review/intel-core-i9-13900k/5.html

"Released in 1995, it only supports x86 floating-point instructions"

Super Pi scales poorly across multiple threads.

It's better to run 16 copies of SuperPi.

In the real world, the x87 floating-point instruction set is obsolete for both the consoles' AVX-era games and modern productivity apps.

In 64-bit versions of Windows, x87 is deprecated in user mode and prohibited entirely in kernel mode.
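The "run 16 copies" idea is easy to script. A minimal sketch using a stand-in single-threaded workload (not SuperPi itself - you'd launch the real binary as separate processes the same way):

```python
import multiprocessing as mp
import time

def work(n: int) -> float:
    """Stand-in single-threaded compute loop (Basel-series partial sum)."""
    total = 0.0
    for i in range(1, n + 1):
        total += 1.0 / (i * i)
    return total

def run_copies(copies: int, n: int = 200_000) -> float:
    """Wall-clock seconds to finish `copies` instances in parallel."""
    start = time.perf_counter()
    with mp.Pool(processes=copies) as pool:
        pool.map(work, [n] * copies)
    return time.perf_counter() - start

if __name__ == "__main__":
    t1, t16 = run_copies(1), run_copies(16)
    # Scaling factor: how much MT throughput you get from purely ST work.
    print(f"1 copy: {t1:.3f}s, 16 copies: {t16:.3f}s, scaling {16 * t1 / t16:.1f}x")
```

Comparing the 1-copy and 16-copy times gives an embarrassingly-parallel scaling figure, which sidesteps Super Pi's lack of internal multithreading.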

-----------
From https://www.techpowerup.com/review/intel-core-i9-13900k/5.html
Y-Cruncher's version wasn't shown.



VS

Test Setup
Processor: AMD Ryzen 9 7900X
Motherboard: retail ASUS TUF Gaming X670E Plus Wifi purchased in November 2022.
BIOS version: 0821
Cooling: Corsair H115i RGB Platinum RGB AIO
Memory: Two G.Skill Trident Z5 Neo RGB DDR5-6000 32GB (2x16GB) F5-6000J3038F16GX2-TZ5NR AMD EXPO modules.

ZenTimings_Screenshot2.png


Y-Cruncher7900X_2_5billionD.png

With 2,500,000,000 digits, my Ryzen 9 7900X's total compute time is 52.713 seconds.

This was with Google Chrome's 8 tabs still open and the Lightning ARGB service still active.

1674728965463.png


--------------------

Test setup for stock Ryzen 9 7950X with ASUS Crosshair X670e HERO
Processor: AMD Ryzen 9 7950X stock settings (PBO disabled).
Motherboard: retail ASUS Crosshair X670e HERO purchased in November 2022.
BIOS version: 0805
Cooling: Corsair H115i Elite Capellix RGB AIO
Memory: Two G.Skill Trident Z5 Neo RGB DDR5-6000 32GB (2x16GB) F5-6000J3038F16GX2-TZ5NR. Only EXPO II mode.

Ryzen_7950_DDR5_6000MT_CL30_Zen_Timings.png


Ryzen_7950_DDR5_6000MT_CL30_Y-Cruncher.png

With 2,500,000,000 digits, my stock Ryzen 9 7950X's total compute time is 54.636 seconds.

--------------------

Test setup for Ryzen 9 7950X Auto PBO with ASUS Crosshair X670e HERO
Processor: AMD Ryzen 9 7950X with auto PBO enabled.
Motherboard: retail ASUS Crosshair X670e HERO purchased in November 2022.
BIOS version: 0805
Cooling: Corsair H115i Elite Capellix RGB AIO
Memory: Two G.Skill Trident Z5 Neo RGB DDR5-6000 32GB (2x16GB) F5-6000J3038F16GX2-TZ5NR with similar tighter memory timings from my TUF X670E Plus WiFi

Ryzen_7950_DDR5_6000MT_CL30_Zen_Timings_Custom1.png


Ryzen_7950_DDR5_6000MT_CL30_PBO_Y-Cruncher_Custom1.png

With 2,500,000,000 digits, my Ryzen 9 7950X's total compute time is 48.213 seconds.
 
Last edited:
Joined
Jun 14, 2020
Messages
3,554 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
This!
I figured out a better way to explain it

Ryzen 3600x vs 5600
View attachment 280876
Note that the inferior 3600x has higher clock speeds, same cores, same threads, same cache, and a higher TDP


The key here is that the 3600x did fairly well when it was new, but then its performance started to tank in newer titles
It still does pretty well in older DX9 titles, but not in DX11/DX12 titles where multi threaded rendering became the norm




Their single-threaded performance isn't that different, nor is their pure multi-threaded performance in a task that isn't latency-sensitive (this will come back up later)
View attachment 280879View attachment 280880


Somehow, an 11% difference in ST and MT performance jumps to 20% in game performance?
View attachment 280882

The reason for the performance loss is simple - it's a 3C+3C design vs a 6C design.
The moment a game needs more than 3 high-performance threads, you take that loss either from using SMT or from the latency penalty of hopping to the second CCX.

The performance loss from using SMT threads is well known to be greater than that from a split-CCX design here too - hence the 3300X being slower today with its 4C/8T design despite its raving success at launch, because it was fantastic for games that needed <4 threads, with all four cores on the one die

Any time you see a CPU with extra core clusters, be it an AMD dual/triple-CCX design or Intel's E-cores, gaming performance only gives a damn about the performance of a single set of cores.
Games friggin hate being spread out due to the latency difference - and that's when all the cores have equal performance and clock speeds, whereas the E-cores are much slower.


As a summary:
ST performance: 11% higher
MT performance: 12.8% higher
Gaming: 20.2% higher


If you lose ~10% performance by having a game's load split between two sets of identical cores, what's going to happen when a game needs to spread its load over cores with a 60% speed difference?
View attachment 280886


Zen 2 to Zen 3 showed us that when the time comes for games to shift from needing 6 cores to 7, we're going to see a loss on systems with a second identical cluster, and a much, MUCH bigger one on these unbalanced E-core designs

That makes the "future" of 6-core dies a negative - the 12600K, 13600K, 5900X and even the new 7900X are all going to suffer when that shift happens, with the Intel side taking the bigger hit due to the performance imbalance between the primary cores and the secondary ones


I'd love to see someone use Process Lasso on a 12th- or 13th-gen Intel system, locking a game to 4 P-cores plus the E-cores and comparing gaming results when that spread is artificially forced - do we get the 60% loss? Somewhere in between? Frametime microstutters?

MSFS would be a great example, as it's extremely multi-threaded and cripples a lot of modern systems - see the 10-20% above, then realise that's at ultra settings; lower settings would spread the gap out much further
View attachment 280887



I just... have this loathing for E-cores and extra CCXs and how they're touted as being for gamers.
Intel won't sell you an 8-core CPU without making you pay money, wattage and heat (and the associated cooling) for gaming-useless E-cores - that's a goddamned negative.
AMD realised it with CPUs like the 3300X, 5600/X, 5800X3D and then the 7700/X - max out a single-chiplet design for the gamers, instead of making them pay for HEDT workstation CPUs
I'm not sure if you agree with me or not that the 2nd CCD is as useful (or useless) as E-cores.

I don't think either is fundamentally bad for gaming. It's usually the OS or the game itself misusing them that creates trouble. And vice versa: a game that utilizes them properly gets an effective performance boost. Now, I'm strictly talking about the E-cores, because I don't have a dual-CCD Zen 4 to test. For example, in Spider-Man - which basically decompresses assets on the fly - the E-cores get heavily utilized to do exactly that, and turning them off drops performance a bit. But the biggest hit from turning off E-cores happens in Cyberpunk with RT on in heavy scenes. I tested it thoroughly: in that area, which is the heaviest in the game, no CPU besides a 13900K or a 12900K with E-cores ON can hold 100 fps. The 3D drops to below 50.

Watch the first 25 seconds, that area is a cpu hog

 
Joined
Jan 14, 2019
Messages
12,681 (5.82/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
I'm not sure if you agree with me or not that the 2nd CCD is as useful (or useless) as E-cores.

I don't think either is fundamentally bad for gaming. It's usually the OS or the game itself misusing them that creates trouble. And vice versa: a game that utilizes them properly gets an effective performance boost. Now, I'm strictly talking about the E-cores, because I don't have a dual-CCD Zen 4 to test.
The trouble with that statement in AMD terms is the core hierarchy. From what I've seen, the two preferred cores are (usually) on two different CCDs, with the rest ranked all over the place. This way, the OS has no choice but to utilize both CCDs for the same game, inevitably resulting in a latency penalty when the cores communicate. It could be solved by making the hierarchy #1 to #8 on CCD1 and #9 to #16 on CCD2, but AMD doesn't do it that way for some reason.

On Intel, I suppose the P-cores are higher up in the hierarchy than the E-cores, so the OS knows to use P-cores first. Though contrary to you, I don't have a heterogeneous Intel architecture to test (and I've only had one dual-CCD AMD processor so far), so take this part of what I said with a grain of salt. :ohwell:
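The hierarchy point can be illustrated with a toy model (the SP-style scores are invented, and "CCD-first" is a hypothetical policy, not what Windows or AMD's CPPC driver actually implements):

```python
# (core_id, ccd, quality) - invented rankings where the single best core
# sits on a different CCD than most of the good silicon.
CORES = [
    (0, 0, 120), (1, 0, 119), (2, 0, 118), (3, 0, 117),
    (4, 0, 116), (5, 0, 115), (6, 0, 114), (7, 0, 113),
    (8, 1, 121), (9, 1, 112), (10, 1, 111), (11, 1, 110),
    (12, 1, 109), (13, 1, 108), (14, 1, 107), (15, 1, 106),
]

def pick_by_quality(cores, n):
    """Global preferred-core ranking: happily straddles both CCDs."""
    return sorted(cores, key=lambda c: -c[2])[:n]

def pick_ccd_first(cores, n):
    """Exhaust the best core's CCD before touching the other one."""
    best_ccd = max(cores, key=lambda c: c[2])[1]
    ranked = sorted(cores, key=lambda c: (c[1] != best_ccd, -c[2]))
    return ranked[:n]

def ccds_used(picked):
    return {ccd for _, ccd, _ in picked}

print(ccds_used(pick_by_quality(CORES, 8)))  # spans both CCDs -> cross-CCD latency
print(ccds_used(pick_ccd_first(CORES, 8)))   # stays on one CCD -> no penalty
```

Quality-ranked picking lands an 8-thread game on both CCDs (the latency penalty described above), while a CCD-first policy keeps it on one CCD at the cost of slightly worse average silicon.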
 
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H150i Elite LCD XT White
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K FreeSync/GSync DP, LG 32UL950 32in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
The trouble with that statement in AMD terms is the core hierarchy. From what I've seen, the two preferred cores are (usually) on two different CCDs, with the rest ranked all over the place. This way, the OS has no choice but to utilize both CCDs for the same game, inevitably resulting in a latency penalty when the cores communicate. It could be solved by making the hierarchy #1 to #8 on CCD1 and #9 to #16 on CCD2, but AMD doesn't do it that way for some reason.

On Intel, I suppose the P-cores are higher up in the hierarchy than the E-cores, so the OS knows to use P-cores first. Though contrary to you, I don't have a heterogeneous Intel architecture to test (and I've only had one dual-CCD AMD processor so far), so take this part of what I said with a grain of salt. :ohwell:

For my Ryzen 9 7900X's CPU SP (silicon quality) scores

PXL_20230105_070623935.jpg


Core 0 to Core 5 = CCD 0
Core 6 to Core 11 = CCD 1

The best CPU cores are in the 1st sub-NUMA node, i.e. two cores at 120 and one at 119. The fastest silicon is on the 1st CCD, while the 2nd CCD's silicon quality leaves it acting like AMD's "fat" E-cores.

No core on the 2nd CCD has a CPU SP rating that matches the 1st CCD's 118/119/120 ratings.

1674731090580.png

In Windows 11's Task Manager, my CPU loads usually stay on the 1st CCD.


My other Ryzen 9 7950X setup has slightly better silicon quality for the 1st CCD.

PXL_20230126_113201051.jpg


No CPU core in the second CCD matched 1st CCD's CPU SP rating.

My Zen 4 CPUs are retail units from Nov 2022.
 
Last edited:
Joined
Jan 14, 2019
Messages
12,681 (5.82/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
For my Ryzen 9 7900X's CPU SP (silicon quality) scores

View attachment 280919

Core 0 to Core 5 = CCD 0
Core 6 to Core 11 = CCD 1

The best CPU cores are in the 1st sub-NUMA node, i.e. two cores at 120 and one at 119. The fastest silicon is on the 1st CCD, while the 2nd CCD's silicon quality leaves it acting like AMD's "fat" E-cores.

No core on the 2nd CCD has a CPU SP rating that matches the 1st CCD's 118/119/120 ratings.

View attachment 280911
In Windows 11's Task Manager, my CPU loads usually stay on the 1st CCD.


My other Ryzen 9 7950X setup has slightly better silicon quality for the 1st CCD.

View attachment 280918

No CPU core in the second CCD matched 1st CCD's CPU SP rating.

My Zen 4 CPUs are retail units from Nov 2022.
That's cool, but it's not what you see in the 7950X review (link - look at the "Clock speed mismatch" part). Maybe you were just lucky, and the preferred cores on your CPU are actually both on CCD1.
 
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H150i Elite LCD XT White
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K FreeSync/GSync DP, LG 32UL950 32in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
That's cool, but it's not what you see in the 7950X review (link - look at the "Clock speed mismatch" part). Maybe you were just lucky, and the preferred cores on your CPU are actually both on CCD1.
On ASUS's ROG Crosshair X670E Hero with BIOS version 0805, my 7950X's CCDs also show clock-speed differences.

PXL_20230126_113201051.jpg


Ryzen Master doesn't show individual CPU SP scores.
----------
For my 7900X


Master7900X.png


PXL_20230105_070623935.jpg


My Ryzen Master shows a "gold star" for CCD0 Core 02, which corresponds to the ASUS BIOS's Core 1 with 5693 MHz and a 120 SP score, and a "gold star" for CCD1 Core 10, which corresponds to the BIOS's Core 9 with 5400 MHz and a 112 SP score.

Core 10 is the best-quality core on CCD1, yet its SP rating is lower than even CCD0's worst score of 118.

From https://www.techpowerup.com/review/amd-ryzen-9-7950x/26.html

W1zzard: In pure stock settings, we noticed that the boosting behavior among the two CCDs is vastly different, with cores on the second CCD boosting anywhere between 100 to 250 MHz lower than their counterparts from the first CCD. This isn't a case of power budget running out and the processor spreading its boost budget lower on the second CCD, as our testing shows, where we applied a lightly-parallelized workload to specific cores in both CCDs, and noticed that even well within the power/thermal limits, the second CCD simply isn't boosting as high as the first one, including the cores AMD marked as "preferred cores" in that CCD. We've reproduced this CCD boosting disparity on even our 7900X sample. Older-gen 5000-series chips such as the 5950X don't exhibit this.

-----

For the Zen 4 generation, it's effectively AMD's P-cores and "fat" E-cores sorted by silicon quality - like supergluing a better-than-7700X CCD0 to a 7700 non-X CCD1. This configuration may be reusing Windows 11's hybrid-scheduler work from Intel's Alder Lake.

Both my 7900X and 7950X are retail SKUs from late Nov 2022.
 
Last edited:
Joined
Jun 6, 2022
Messages
622 (0.66/day)
While the seller praises his cow in scientific terms, the buyer asks simply: OK, OK, but how much milk does it give?
For the big battle here, another fresh review (latest updates, latest BIOS, latest drivers) - and what do we see? Does "2 CCDs" win in games with the most powerful video card of the moment? No! Not even at 1080p, a resolution that's a joke if you own an RTX 4090. You use that card at 4K, where the results are even more dramatic, because even a hexacore enters the competition at the top. If the 7700X received the best silicon and ran at the 7950X's frequencies, it would demolish it in games (see 7950X stock versus 7700X PBO). For AMD, gluing on a second CCD and selling the chip for almost 40% more: priceless. They are forced by Intel to resort to X3D because they have no other solution for gaming at the moment. The gaming crown brings a lot of money on this platform, and the 7950X is not the answer.

13900KS.jpg
 
Last edited:
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H150i Elite LCD XT White
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K FreeSync/GSync DP, LG 32UL950 32in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
While the seller praises his cow in scientific terms, the buyer asks simply: OK, OK, but how much milk does it give?
For the big battle here, another fresh review (latest updates, latest BIOS, latest drivers) - and what do we see? Does "2 CCDs" win in games with the most powerful video card of the moment? No! Not even at 1080p, a resolution that's a joke if you own an RTX 4090. You use that card at 4K, where the results are even more dramatic, because even a hexacore enters the competition at the top. If the 7700X received the best silicon and ran at the 7950X's frequencies, it would demolish it in games (see 7950X stock versus 7700X PBO). For AMD, gluing on a second CCD and selling the chip for almost 40% more: priceless. They are forced by Intel to resort to X3D because they have no other solution for gaming at the moment. The gaming crown brings a lot of money on this platform, and the 7950X is not the answer.

View attachment 280997
Game list
Cyberpunk 2077 (not a competitive game for high fps 1080p)
Far Cry 6 (not a competitive game for high fps 1080p)
F1 2021
Hitman 3 (not a competitive game for high fps 1080p)
MS Flight Simulator (not a competitive game for high fps 1080p)
Red Dead Redemption 2 (not a competitive game for high fps 1080p)
Warhammer 3 (not a competitive game for high fps 1080p)
Watch Dogs (not a competitive game for high fps 1080p)

Not one of them is an Unreal Engine 4-based game.
Tom's Hardware also used the old Blender 2.8.

-----------------------------------------------------------------------------

From https://www.tomshardware.com/news/intel-core-i9-13900ks-cpu-review
BrUgpYKoYWudpQ7rKa274-1200-80.png


VS

From https://hothardware.com/reviews/intel-core-i9-13900ks-6ghz-cpu-review?page=4

f1 1 core i9 13900ks results


HotHardware's 7900X is slightly faster than its 7700X, while Tom's Hardware shows the opposite.


---------------------------------------------------------



From https://www.tomshardware.com/news/intel-core-i9-13900ks-cpu-review


7ubpE9Rmohxv2BKeMSWLse-1200-80.png

VS

From https://www.techspot.com/review/2552-intel-core-i9-13900k/



TechSpot's 7900X matches its 7700X, while Tom's Hardware shows the opposite.







Includes Horizon Zero Dawn (Sony's 1st party title) and A Plague Tale: Requiem (UE4).

HZD-p.png


A multiplayer competitive game title for high fps 1080p example

CSGO-p.png
 
Last edited:
Top