
Intel Core i9-13900K

Joined
Jun 6, 2022
Messages
622 (0.66/day)
Not one game is an Unreal Engine 4-based game.

I found one. The 7950X destroys the 7700X at 1080p and the 7600X at 4K. The second CCD... priceless in gaming, yes! :rockout: :rockout: :rockout: :rockout: :rockout:
Note: look at the i5-12600, a hexa-core without E-cores. I apologize for being ironic, but how long will you insist that the second CCD brings benefits in gaming?


1080p.jpg
4k.jpg
 
Last edited:
Joined
Jan 14, 2019
Messages
12,658 (5.82/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
With BIOS 0805 on ASUS's ROG Crosshair X670E Hero, my 7950X's CCDs also show clock speed differences.

View attachment 280969

Ryzen Master doesn't show individual CPU SP scores.
----------
For my 7900X


View attachment 280971

View attachment 280970

My Ryzen Master shows a "gold star" for CCD0 Core 02, which corresponds to ASUS BIOS's Core 1 with 5693 MHz and a 120 SP score, and a "gold star" for CCD1 Core 10, which corresponds to ASUS BIOS's Core 9 with 5400 MHz and a 112 SP score.

Core 10 is the best-quality core on CCD1, yet its 112 SP rating is lower than CCD0's worst core's 118 SP score.

From https://www.techpowerup.com/review/amd-ryzen-9-7950x/26.html

W1zzard: In pure stock settings, we noticed that the boosting behavior among the two CCDs is vastly different, with cores on the second CCD boosting anywhere between 100 to 250 MHz lower than their counterparts from the first CCD. This isn't a case of the power budget running out and the processor spreading its boost budget lower on the second CCD, as our testing shows, where we applied a lightly-parallelized workload to specific cores in both CCDs, and noticed that even well within the power/thermal limits, the second CCD simply isn't boosting as high as the first one, including the cores AMD marked as "preferred cores" in that CCD. We've reproduced this CCD boosting disparity even on our 7900X sample. Older-gen 5000-series chips such as the 5950X don't exhibit this.
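
If anyone wants to reproduce this boost gap on their own chip, here's a rough sketch that polls per-core clocks under load (assumptions, not facts from this thread: Linux with the psutil Python package installed, and logical CPUs 0-7 mapping to CCD0 and 8-15 to CCD1 on a 7950X; verify your own mapping with lscpu -e first):

Code:
# Minimal sketch: poll per-core clocks to spot the CCD0 vs CCD1 boost gap.
# Assumptions (not from this thread): Linux, psutil installed, and a 7950X
# where logical CPUs 0-7 belong to CCD0 and 8-15 to CCD1 -- check your own
# mapping with `lscpu -e` or /sys/devices/system/cpu/cpu*/topology first.
import time
import psutil

CCD0 = range(0, 8)    # assumed CCD0 logical CPUs
CCD1 = range(8, 16)   # assumed CCD1 logical CPUs

for _ in range(10):                       # ten samples, one per second
    freqs = psutil.cpu_freq(percpu=True)  # per-CPU current/min/max MHz
    ccd0 = [freqs[i].current for i in CCD0 if i < len(freqs)]
    ccd1 = [freqs[i].current for i in CCD1 if i < len(freqs)]
    if ccd0 and ccd1:
        print(f"CCD0 max: {max(ccd0):.0f} MHz | "
              f"CCD1 max: {max(ccd1):.0f} MHz | "
              f"gap: {max(ccd0) - max(ccd1):.0f} MHz")
    time.sleep(1)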

-----

For the Zen 4 generation, it's effectively AMD's P-core and "fat" E-core split based on silicon quality. It's like supergluing a 7700X-grade CCD0 to a 7700 non-X-grade CCD1. This configuration may reuse Windows 11's multithreading scheduler work done for Intel Alder Lake.

Both my 7900X and 7950X are retail SKUs from late Nov 2022.
Having two different quality CCDs acting as P-cores and E-cores is fine, but then why aren't the two preferred cores on the same CCD?

I found one. The 7950X destroys the 7700X at 1080p and the 7600X at 4K. The second CCD... priceless in gaming, yes! :rockout: :rockout: :rockout: :rockout: :rockout:
Note: look at the i5-12600, a hexa-core without E-cores. I apologize for being ironic, but how long will you insist that the second CCD brings benefits in gaming?
"Destroys" isn't the right term when the two results are within margin of error of each other (it's not even a one-FPS difference).

Other than that, I agree. Nobody needs more than 8 cores purely for gaming. Even 6 is enough in most cases (if not all).
 
Joined
Aug 10, 2021
Messages
166 (0.13/day)
System Name Main
Processor 5900X
Motherboard Asrock 570X Taichi
Memory 32GB
Video Card(s) 6800XT
Display(s) Odyssey C49G95T - 5120 x 1440
Includes Horizon Zero Dawn (Sony's 1st party title) and A Plague Tale: Requiem (UE4).
great post, but a slight correction: I'm pretty sure A Plague Tale: Requiem uses an in-house engine, not UE4
Didn't know Toms had such shit benchmarks though
 
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H150i Elite LCD XT White
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K FreeSync/GSync DP, LG 32UL950 32in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
I found one. The 7950X destroys the 7700X at 1080p and the 7600X at 4K. The second CCD... priceless in gaming, yes! :rockout: :rockout: :rockout: :rockout: :rockout:
Note: look at the i5-12600, a hexa-core without E-cores. I apologize for being ironic, but how long will you insist that the second CCD brings benefits in gaming?


View attachment 281014View attachment 281015
1. Your cited benchmark did NOT demonstrate concurrent game video recording while running the gaming benchmark!

2. I already posted my remarks about console-centric game design: most mainstream games impose a soft limit of about 6 to 8 CPU cores on the PC.

Your "but how long will you insist that the second CCD brings benefits in gaming?" assertion about me is FALSE.

great post, but slight correction, I'm pretty sure Plagues Tale: Requiem is in-house engine, not UE4
Didn't know Toms had such shit benchmarks though
My bad, A Plague Tale: Innocence runs on Unreal Engine 4 using deferred rendering.

I found one. The 7950X destroys the 7700X at 1080p and the 7600X at 4K. The second CCD... priceless in gaming, yes! :rockout: :rockout: :rockout: :rockout: :rockout:
Note: look at the i5-12600, a hexa-core without E-cores. I apologize for being ironic, but how long will you insist that the second CCD brings benefits in gaming?


View attachment 281014View attachment 281015
intel-raptor-lake-gears-tactics.png


Gears Tactics used Unreal Engine 4.

---
TechPowerUp's Ryzen 9 7950X setup's Y-Cruncher result is slower than my Ryzen 9 7950X setup's, as shown in https://www.techpowerup.com/forums/threads/intel-core-i9-13900k.300016/post-4937624

From https://www.techpowerup.com/review/intel-core-i9-13900k/5.html
Y-Cruncher's version wasn't shown.


TechPowerUp used a very old BIOS (0604) for the ASUS Crosshair X670E Hero.

The different Y-Cruncher results could indicate memory latency and BIOS configuration differences.

VS

From
https://rog.asus.com/motherboards/rog-crosshair/rog-crosshair-x670e-hero-model/helpdesk_bios/
BIOS 0604 predates 25th Sep 2022.

BIOS 0611 (updates AGESA to ComboAM5PI 1.0.0.2) includes improved system performance. Dated 26th Sep 2022.
BIOS 0705 (updates AGESA to ComboAM5PI 1.0.0.3 patch A) includes improved system performance. Dated 11th Oct 2022.
BIOS 0805 (updates AGESA to ComboAM5PI 1.0.0.3 patch A + D) includes improved system performance. Dated 15th Nov 2022.


My test setup for the stock Ryzen 9 7950X with ASUS Crosshair X670E Hero:
Processor: AMD Ryzen 9 7950X at stock settings.
Motherboard: Retail ASUS Crosshair X670E Hero, purchased in November 2022, shipped with BIOS 0705.
BIOS version: 0805 (I updated the BIOS after delivery).

Cooling: Corsair H115i Elite Capellix RGB AIO
Memory: G.Skill Trident Z5 Neo RGB DDR5-6000 32 GB kit (2x 16 GB), F5-6000J3038F16GX2-TZ5NR. Only the EXPO II profile enabled in the ASUS BIOS.

Ryzen_7950_DDR5_6000MT_CL30_Zen_Timings.png


Ryzen_7950_DDR5_6000MT_CL30_Y-Cruncher.png

With 2,500,000,000 digits, my stock Ryzen 9 7950X's total compute time is 54.636 seconds.



My test setup for the Ryzen 9 7950X with Auto PBO on the ASUS Crosshair X670E Hero and tightened memory timings (which I copied from Reddit):
Processor: AMD Ryzen 9 7950X with Auto PBO enabled.
Motherboard: Retail ASUS Crosshair X670E Hero, purchased in November 2022.
BIOS version: 0805
Cooling: Corsair H115i Elite Capellix RGB AIO
Memory: G.Skill Trident Z5 Neo RGB DDR5-6000 32 GB kit (2x 16 GB), F5-6000J3038F16GX2-TZ5NR, with the same tightened memory timings as on my ASUS TUF X670E Plus WiFi.

Ryzen_7950_DDR5_6000MT_CL30_Zen_Timings_Custom1.png


Ryzen_7950_DDR5_6000MT_CL30_PBO_Y-Cruncher_Custom1.png

With 2,500,000,000 digits, my Ryzen 9 7950X's total compute time is 48.213 seconds.
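
To put the two runs above side by side, the arithmetic is simply (nothing beyond the two times already posted):

Code:
# Compare the two Y-Cruncher 2.5-billion-digit runs posted above.
stock_s = 54.636   # stock settings, EXPO II only
tuned_s = 48.213   # Auto PBO + tightened memory timings

time_saved = stock_s - tuned_s
print(f"Time saved: {time_saved:.3f} s")                          # ~6.4 s
print(f"Run time reduced by {time_saved / stock_s * 100:.1f} %")  # ~11.8 %
print(f"Speedup: {stock_s / tuned_s:.3f}x")                       # ~1.13x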

I fully disclosed my memory timings and memory module SKU, and I would rather see the gaming benchmarks updated with my memory timings and BIOS revision.

I would probably need to reinstall TechPowerUp's game collection and figure out the game benchmark methods. I have Far Cry 6, Forza Horizon 5, Watch Dogs Legion, Doom Eternal, Elden Ring, Cyberpunk 2077, and Borderlands 3.
 
Last edited:
Joined
Jun 14, 2020
Messages
3,554 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
1. Your cited benchmark did NOT demonstrate concurrent game video recording while running the gaming benchmark!

2. I already posted my remarks about console-centric game design: most mainstream games impose a soft limit of about 6 to 8 CPU cores on the PC.

Your "but how long will you insist that the second CCD brings benefits in gaming?" assertion about me is FALSE.


My bad, A Plague Tale: Innocence runs on Unreal Engine 4 using deferred rendering.


View attachment 281038

Gears Tactics used Unreal Engine 4.

---
TechPowerUp's Ryzen 9 7950X setup's Y-Cruncher result is slower than my Ryzen 9 7950X setup's, as shown in https://www.techpowerup.com/forums/threads/intel-core-i9-13900k.300016/post-4937624

From https://www.techpowerup.com/review/intel-core-i9-13900k/5.html
Y-Cruncher's version wasn't shown.


TechPowerUp used a very old BIOS (0604) for the ASUS Crosshair X670E Hero.

The different Y-Cruncher results could indicate memory latency and BIOS configuration differences.

VS

From
https://rog.asus.com/motherboards/rog-crosshair/rog-crosshair-x670e-hero-model/helpdesk_bios/
BIOS 0604 predates 25th Sep 2022.

BIOS 0611 (updates AGESA to ComboAM5PI 1.0.0.2) includes improved system performance. Dated 26th Sep 2022.
BIOS 0705 (updates AGESA to ComboAM5PI 1.0.0.3 patch A) includes improved system performance. Dated 11th Oct 2022.
BIOS 0805 (updates AGESA to ComboAM5PI 1.0.0.3 patch A + D) includes improved system performance. Dated 15th Nov 2022.


My test setup for the stock Ryzen 9 7950X with ASUS Crosshair X670E Hero:
Processor: AMD Ryzen 9 7950X at stock settings.
Motherboard: Retail ASUS Crosshair X670E Hero, purchased in November 2022, shipped with BIOS 0705.
BIOS version: 0805 (I updated the BIOS after delivery).

Cooling: Corsair H115i Elite Capellix RGB AIO
Memory: G.Skill Trident Z5 Neo RGB DDR5-6000 32 GB kit (2x 16 GB), F5-6000J3038F16GX2-TZ5NR. Only the EXPO II profile enabled in the ASUS BIOS.

Ryzen_7950_DDR5_6000MT_CL30_Zen_Timings.png


Ryzen_7950_DDR5_6000MT_CL30_Y-Cruncher.png

With 2,500,000,000 digits, my stock Ryzen 9 7950X's total compute time is 54.636 seconds.



My test setup for the Ryzen 9 7950X with Auto PBO on the ASUS Crosshair X670E Hero and tightened memory timings (which I copied from Reddit):
Processor: AMD Ryzen 9 7950X with Auto PBO enabled.
Motherboard: Retail ASUS Crosshair X670E Hero, purchased in November 2022.
BIOS version: 0805
Cooling: Corsair H115i Elite Capellix RGB AIO
Memory: G.Skill Trident Z5 Neo RGB DDR5-6000 32 GB kit (2x 16 GB), F5-6000J3038F16GX2-TZ5NR, with the same tightened memory timings as on my ASUS TUF X670E Plus WiFi.

Ryzen_7950_DDR5_6000MT_CL30_Zen_Timings_Custom1.png


Ryzen_7950_DDR5_6000MT_CL30_PBO_Y-Cruncher_Custom1.png

With 2,500,000,000 digits, my Ryzen 9 7950X's total compute time is 48.213 seconds.

I fully disclosed my memory timings and memory module SKU, and I would rather see the gaming benchmarks updated with my memory timings and BIOS revision.

I would probably need to reinstall TechPowerUp's game collection and figure out the game benchmark methods. I have Far Cry 6, Forza Horizon 5, Watch Dogs Legion, Doom Eternal, Elden Ring, Cyberpunk 2077, and Borderlands 3.
Their Y-Cruncher numbers are wrong for every CPU, probably thermal throttling. The 12900K gets around 65 seconds at stock, the 13900K around 52 at stock. I have them both and have tested them.
 
Joined
Jun 6, 2022
Messages
622 (0.66/day)
"Destroys" isn't the right term when the two results are within margin of error away from each other (it's not even a one FPS difference).
The differences are zero and there was no need for quotation marks to highlight the irony.
 
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H150i Elite LCD XT White
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K FreeSync/GSync DP, LG 32UL950 32in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
Their Y-Cruncher numbers are wrong for every CPU, probably thermal throttling. The 12900K gets around 65 seconds at stock, the 13900K around 52 at stock. I have them both and have tested them.
The memory modules' latencies (not just the five primary 36-36-36-76 2T numbers) and the BIOS configuration/build can influence Y-Cruncher's results.

From https://www.techpowerup.com/review/intel-core-i9-13900k/4.html
Their water cooling is an "Arctic Liquid Freezer II 420 mm" AIO, while I'm using a Corsair H115i Elite Capellix RGB 280 mm AIO.

TechPowerUp's 36-36-36-76 DDR5-6000 on the ASUS ROG Crosshair X670E Hero could be the CORSAIR CMT32GX5M2D6000C36, a known DDR5-6000 36-36-36-76 kit on the ASUS ROG Crosshair X670E Hero's QVL.

I'm using the G.Skill Trident Z5 Neo RGB DDR5-6000 32 GB kit (2x 16 GB), F5-6000J3038F16GX2-TZ5NR, and its QVL lists both the ASUS ROG Crosshair X670E Hero and the ASUS TUF X670E Plus WiFi.

My retail ASUS ROG Crosshair X670E Hero motherboard was shipped with BIOS 0705.

ROG Maximus Z690 Hero has 2 BIOS releases after Techpowerup's BIOS 2004.
ROG Maximus Z790 Hero has 2 BIOS releases after Techpowerup's BIOS 0604.

With DDR5's relaxed default memory timings, I prefer tightening memory timings over CPU overclocking. I run OCCT's AVX2 memory test after each memory timing change.
My tightened memory timings worked on both the ASUS ROG Crosshair X670E Hero and the ASUS TUF X670E Plus WiFi motherboards.

For a ZenTimings-like app on Intel, there's https://community.hwbot.org/topic/209738-z690-bios-and-tools/ and, for MSI Z690, https://www.igorslab.de/en/the-last...unify-x-test-with-adaptive-oc-and-teardown/3/
 
Last edited:
Joined
Jun 6, 2022
Messages
622 (0.66/day)
View attachment 281038

Gears Tactics used Unreal Engine 4.
I don't see the comparison with the 7700X. I only see two CCDs versus two older-generation CCDs, and the 6P+8E beats everything. That 13600K ruins your calculations.

1. Your cited benchmark did NOT demonstrate concurrent game video recording while running the gaming benchmark!
The video card takes care of the recording. The impact on the processor (CPU usage) is minor. Even better would be a video capture card, because the first method affects the video card's performance. If you use another method, one that dramatically increases CPU usage, 16 cores will not help you at all, because they use the same communication paths as 2, 4, 6 or 8 cores. And yes, you will see stuttering when all the programs argue (for example) over the memory controller, because for at least two of them the access time is critical.
That said: either we focus on the game, or we leave it alone. And anyway, how much time do we spend recording? 0.1% of the total, or 0.0001%?
It's embarrassing to justify a 40% higher processor cost because, when the planets align, it gets 1% more than the cheap processor.
We're talking about games, right?
 
Last edited:
Joined
Jan 14, 2019
Messages
12,658 (5.82/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
The differences are zero and there was no need for quotation marks to highlight the irony.
Irony and sarcasm don't always come across in writing. :ohwell:

I get what you meant now.
 

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.91/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Let's just say that I bought a 5950X at launch with the exact same mindset: that it would be useful in gaming when antivirus, or Windows update, or something else kicks in. I sold it a couple months later because I realized that it was pointless. Then I had an 11700 (non-K) in a SFF system, and I would still have it in my main rig if I hadn't reshuffled my PCs due to a friend wanting a mini gaming rig for his daughter (so now it's in my HTPC). My main PC has a 7700X now, and I couldn't be happier.

This is why I understand those people who think that the second CCD is useful for background tasks - but this is why I'm saying that it's not. It is not possible to peg 8 modern CPU cores to 100% usage in any game, so when Windows update actually does kick in, you don't feel it because the lightly loaded threads have enough unused CPU time to handle it, not to mention there is no added latency due to the communication between CCDs.

If you only have 8 trucks with trailers, and you need to deliver a letter, there's plenty of space for it on one of those trailers, even if they're loaded with something else as well.
It's not about pegging those cores - it's that one of two situations occurs:

1. Tasks that need fast access to each other are delayed by being spread over different CCXs/P-E cores, despite low usage.

2. Under high usage, those extra cores, when used, still eat into your cache, DRAM and other system resources - such as power limits; the 5800X and 5950X have the same stock PPT limit and both reach it. You may not max out all the threads, but when those threads drop clocks because something fired up on the other cores, you STILL lose performance.

Intel users are mitigating some of this with all-core overclocking and locking things in place, but when they do, they throw power efficiency right out the window (or throw performance out, by all-core underclocking).
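
On the AMD side, one blunt workaround is pinning the game to the first CCD so its threads never hop across the fabric. A rough sketch (assumes the psutil Python package and that logical CPUs 0-15 are CCD0 on a 16-core part with SMT; the PID is a placeholder and you should verify your own core mapping first):

Code:
# Rough sketch: pin an already-running game to CCD0 only.
# Assumptions (not from this thread): psutil installed, 16-core Ryzen with SMT
# where logical CPUs 0-15 are CCD0 -- verify the mapping on your own system.
import psutil

GAME_PID = 12345              # placeholder: the game's process ID
CCD0_CPUS = list(range(16))   # assumed CCD0 logical CPUs (cores 0-7 + SMT siblings)

game = psutil.Process(GAME_PID)
game.cpu_affinity(CCD0_CPUS)  # restrict the process to CCD0
print("Affinity now:", game.cpu_affinity())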

1. Your cited benchmark did NOT demonstrate concurrent game video recording while running the gaming benchmark!
hows that useful to anyone?

Gamers dont record while they game, streamers do
Streamers arent dumb enough to use CPU encoded recording, they'd use GPU accelerated decoding or an external hardware recorder

6P+8E beats everything
You mean that the 6P is fast, and the 8E doesnt do anything for gaming (as we're all saying)
 
Joined
Jun 14, 2020
Messages
3,554 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
You mean that the 6P is fast, and the 8E doesnt do anything for gaming (as we're all saying)
That is in fact not true. My 13900K with e-cores off is considerably slower in, for example, Cyberpunk. In the Tom's Diner area, e-cores on give me around 15-20 FPS more; with e-cores off it drops to around 80.
 
Joined
Jan 5, 2006
Messages
18,584 (2.68/day)
System Name AlderLake
Processor Intel i7 12700K P-Cores @ 5Ghz
Motherboard Gigabyte Z690 Aorus Master
Cooling Noctua NH-U12A 2 fans + Thermal Grizzly Kryonaut Extreme + 5 case fans
Memory 32GB DDR5 Corsair Dominator Platinum RGB 6000MT/s CL36
Video Card(s) MSI RTX 2070 Super Gaming X Trio
Storage Samsung 980 Pro 1TB + 970 Evo 500GB + 850 Pro 512GB + 860 Evo 1TB x2
Display(s) 23.8" Dell S2417DG 165Hz G-Sync 1440p
Case Be quiet! Silent Base 600 - Window
Audio Device(s) Panasonic SA-PMX94 / Realtek onboard + B&O speaker system / Harman Kardon Go + Play / Logitech G533
Power Supply Seasonic Focus Plus Gold 750W
Mouse Logitech MX Anywhere 2 Laser wireless
Keyboard RAPOO E9270P Black 5GHz wireless
Software Windows 11
Benchmark Scores Cinebench R23 (Single Core) 1936 @ stock Cinebench R23 (Multi Core) 23006 @ stock
Joined
Jun 6, 2022
Messages
622 (0.66/day)
They said there is no problem if the processor jumps to 100 degrees. It also works at higher temperatures for short periods; it does not trip protection the moment it reaches 100 degrees.
An example

1111.jpg
222.jpg


--------------------
13400 review
The differences between DDR5 and DDR4 are practically ZERO in applications and minimal in gaming. But logically, when you buy a 4090, you don't look at an i5 or R5. With weaker video cards, the difference is still zero.
It is a great advantage for those who don't want to give up their old memory. For a little over $300/€300 you can have a processor and a motherboard.
Power consumption is not bad either.
I would have liked a review of the 13500. With higher frequencies and four extra E-cores, I think it outperforms even the 7600X with PBO and costs about the same, but you can use cheaper motherboards with DDR4.
multi.jpg

single.jpg

1080p.jpg
 

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.91/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
But logically, when you buy a 4090, you don't look at an i5 or R5
which is silly since the gaming performance leaps from the mid range CPU's and up are minimal, as your own charts show

E-cores arent for gamers, they're for CPU benchmarks
 
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H150i Elite LCD XT White
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K FreeSync/GSync DP, LG 32UL950 32in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
hows that useful to anyone?

Gamers dont record while they game, streamers do
Streamers arent dumb enough to use CPU encoded recording, they'd use GPU accelerated decoding or an external hardware recorder
It's a GPU-accelerated encoder; GPU-accelerated decoding is for video playback, not video encoding.

GPU-accelerated encoding still uses GPU-side resources such as I/O bandwidth and memory. It depends on which side, CPU or GPU, has the spare resources available.

OBS has both software and hardware encoding modes. I sometimes stream my games.

I don't see the comparison with the 7700X. I only see two CCDs versus two older-generation CCDs, and the 6P+8E beats everything. That 13600K ruins your calculations.


The video card takes care of the recording. The impact on the processor (CPU usage) is minor. Even better would be a video capture card, because the first method affects the video card's performance. If you use another method, one that dramatically increases CPU usage, 16 cores will not help you at all, because they use the same communication paths as 2, 4, 6 or 8 cores. And yes, you will see stuttering when all the programs argue (for example) over the memory controller, because for at least two of them the access time is critical.
That said: either we focus on the game, or we leave it alone. And anyway, how much time do we spend recording? 0.1% of the total, or 0.0001%?
It's embarrassing to justify a 40% higher processor cost because, when the planets align, it gets 1% more than the cheap processor.
We're talking about games, right?
6P (12 threads) + 8E (8 threads) has 26 hardware threads while 7700X has 16 hardware threads. Software multithreading via context switching has higher overheads.

Each E-core has three 128-bit AVX units, hence 8 E-cores deliver 3,072 bits of vector width in addition to the first group of 6 P-cores.

The 7700X's two additional fat cores deliver 2,048 bits of vector width (Zen 4 has four 256-bit AVX units per core, with double-pumped AVX-512 and a 32-register programming model). Zen 4 has six FPU units per core. Against Intel's 6P (12 threads) + 8E (8 threads) SKU, AMD would need an SKU with a six-core CCD paired with a four-core CCD.

The 7900X has 24 hardware threads. Its six additional fat cores deliver 6,144 bits of vector width on top of the first CCD's six fat cores.
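
To spell out the vector-width arithmetic above (a quick sketch; the per-core unit counts are the figures I'm assuming in this post, not vendor-confirmed numbers):

Code:
# Aggregate SIMD width using the per-core figures assumed above
# (this post's assumptions, not vendor-confirmed numbers).
def total_vector_bits(cores, units_per_core, unit_width_bits):
    return cores * units_per_core * unit_width_bits

print(total_vector_bits(8, 3, 128))   # 8 E-cores x 3 x 128-bit      = 3072 bits
print(total_vector_bits(2, 4, 256))   # 7700X's 2 extra Zen 4 cores  = 2048 bits
print(total_vector_bits(6, 4, 256))   # 7900X's 6 extra Zen 4 cores  = 6144 bits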

AMD has an SKU gap between 7700X and 7900X.

Cinebench R23 doesn't use AVX-512, hence it's friendly for Intel's E-Cores. I use freebie Blender instead of Cinema 4D.

PS: Intel's Golden Cove and Raptor Cove cores have three ports assigned to 256-bit vector units. When available, Golden Cove's 512-bit AVX-512 mode is also a double-pump method, like Ice Lake's.

For the Raptor Lake generation, Intel dropped AVX-512 and dedicated PCIe 5.0 x4 lanes for a discrete NVMe drive, i.e. Intel is using trickle-upgrade tactics. I don't plan to purchase multiple PCIe 5.0 motherboard generations; my next motherboard purchase will be a PCIe 6.0-generation board.
 
Last edited:

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.91/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
I typod encoder to decoder

I can livestream 4K 120FPS off a 3090 without any FPS drops - why would anyone want to stream in lower res and bitrate from CPU encoding?
 
Joined
Nov 3, 2011
Messages
697 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H150i Elite LCD XT White
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K FreeSync/GSync DP, LG 32UL950 32in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
I typod encoder to decoder

I can livestream 4K 120FPS off a 3090 without any FPS drops - why would anyone want to stream in lower res and bitrate from CPU encoding?

Using OBS's software x264 encoding (high quality, medium file size mode), the 7900X easily encodes 4K 60 Hz in real time. I usually record to NVMe for post-production video editing. Why the GPU encoding argument when the topic is about the CPU?
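
If anyone wants to compare the two encoder paths outside of OBS, here's a rough sketch driving ffmpeg from Python (file names are placeholders, and the flags assume an ffmpeg build with libx264 and NVENC support):

Code:
# Rough sketch: the same source encoded once on the CPU (x264) and once on
# the GPU (NVENC). File names are placeholders; requires an ffmpeg build
# with libx264 and h264_nvenc enabled.
import subprocess

SRC = "gameplay_4k60.mkv"  # placeholder capture file

# CPU encode: roughly comparable to OBS's "high quality, medium file size"
# (x264 medium preset with CRF-style rate control).
subprocess.run(["ffmpeg", "-y", "-i", SRC,
                "-c:v", "libx264", "-preset", "medium", "-crf", "20",
                "-c:a", "copy", "cpu_x264.mp4"], check=True)

# GPU encode: offloads the work to the NVENC block instead of CPU cores.
subprocess.run(["ffmpeg", "-y", "-i", SRC,
                "-c:v", "h264_nvenc", "-preset", "p5", "-cq", "20",
                "-c:a", "copy", "gpu_nvenc.mp4"], check=True)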

That is in fact not true. My 13900K with e-cores off is considerably slower in, for example, Cyberpunk. In the Tom's Diner area, e-cores on give me around 15-20 FPS more; with e-cores off it drops to around 80.
Not all gaming workloads need fat CPU cores, e.g. E-cores are still suitable for DSP audio work, since PC audio is not off-loaded to a dedicated DSP like on the PS5 or Xbox Series S/X.

E-cores can handle the background tasks and soft DSP audio, and reduce potential context switching.

They said there is no problem if the processor jumps to 100 degrees. It also works at higher temperatures for short periods; it does not trip protection the moment it reaches 100 degrees.
An example

View attachment 283020View attachment 283022

--------------------
13400 review
The differences between DDR5 and DDR4 are practically ZERO in applications and minimal in gaming. But logically, when you buy a 4090, you don't look at an i5 or R5. With weaker video cards, the difference is still zero.
It is a great advantage for those who don't want to give up their old memory. For a little over $300/€300 you can have a processor and a motherboard.
Power consumption is not bad either.
I would have liked a review of the 13500. With higher frequencies and four extra E-cores, I think it outperforms even the 7600X with PBO and costs about the same, but you can use cheaper motherboards with DDR4.
View attachment 283023
View attachment 283024
View attachment 283025
For amateur 3D work with Unreal Engine 4/5 and DAZ 3D, would you actually purchase and use Cinema 4D R23/R25 over freebie Blender 3.x?

Purchasing the Cinema 4D R25 workstation application (USD $719.00 billed annually, or USD $3,495.00 for a perpetual license) is not something you pair with Core i5-level SKUs.

The main reason I purchased the ASUS X670E motherboards is their potential to use ECC UDIMM memory modules from the entry-level workstation market segment.

I'm using freebie Cakewalk by BandLab, freebie Blender 3.x, freebie OBS Studio, freebie DAZ Studio 4.21 (art content is paid), CyberLink PowerDirector 16, Pinnacle Studio 25U, and Unreal Engine.
 
Last edited:

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.91/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Not all gaming workloads need fat CPU cores, e.g. E-cores are still suitable for DSP audio work, since PC audio is not off-loaded to a dedicated DSP like on the PS5 or Xbox Series S/X.
sure but they're slower and less power efficient than the P cores at it, so why are we wanting that?
TPU's reviews on the E-cores show they're only efficient at single threaded tasks, anything else they're slower and less efficient than even ryzen 3000 cores
 
Joined
Jun 14, 2020
Messages
3,554 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
sure but they're slower and less power efficient than the P cores at it, so why are we wanting that?
TPU's reviews on the E-cores show they're only efficient at single threaded tasks, anything else they're slower and less efficient than even ryzen 3000 cores
Didnt we go through that already? For the same reason we want 2 ccds. Multithreaded performance
 
Joined
Jun 6, 2022
Messages
622 (0.66/day)
which is silly since the gaming performance leaps from the mid range CPU's and up are minimal, as your own charts show

E-cores arent for gamers, they're for CPU benchmarks
If you are willing to spend ~$2,000 on a video card, you will not care about the $200-250 extra you pay for a processor. The most efficient are the Pentium, i3 or R3, not the i9 or R9. The most balanced for gaming are the i5 or R5.
If these E-cores raise scores in benchmarks, they certainly also raise performance in applications. Even in some games.

6P (12 threads) + 8E (8 threads) has 26 hardware threads while 7700X has 16 hardware threads. Software multithreading via context switching has higher overheads.

Each E-core has three 128-bit AVX units, hence 8 E-cores deliver 3,072 bits of vector width in addition to the first group of 6 P-cores.

The 7700X's two additional fat cores deliver 2,048 bits of vector width (Zen 4 has four 256-bit AVX units per core, with double-pumped AVX-512 and a 32-register programming model). Zen 4 has six FPU units per core. Against Intel's 6P (12 threads) + 8E (8 threads) SKU, AMD would need an SKU with a six-core CCD paired with a four-core CCD.

The 7900X has 24 hardware threads. Its six additional fat cores deliver 6,144 bits of vector width on top of the first CCD's six fat cores.

AMD has an SKU gap between 7700X and 7900X.

Cinebench R23 doesn't use AVX-512, hence it's friendly for Intel's E-Cores. I use freebie Blender instead of Cinema 4D.

PS: Intel's Golden Cove and Raptor Cove cores have three ports assigned to 256-bit vector units. When available, Golden Cove's 512-bit AVX-512 mode is also a double-pump method, like Ice Lake's.

For the Raptor Lake generation, Intel dropped AVX-512 and dedicated PCIe 5.0 x4 lanes for a discrete NVMe drive, i.e. Intel is using trickle-upgrade tactics. I don't plan to purchase multiple PCIe 5.0 motherboard generations; my next motherboard purchase will be a PCIe 6.0-generation board.
I still don't understand what you're trying to prove.
I said that those E-cores have their importance even in gaming, and the reviews prove it. In applications that don't use the processor intensively, they also have their importance. As surely as 1+1 = 2, when the 7950X consumes less than the 13900K in CPU-killer applications but consumes more in others, the so-called "single-threaded" ones, these E-cores clearly play a role. That role translates into taking over tasks from the P-cores and increasing energy efficiency. Even in applications such as Premiere, where processor usage varies from 5-100%, the verdict of those who tested it was: same shit.
What is the connection between Cinebench and Cyberpunk?

PS: 6P + 8E = 20 hardware threads. Not 26.
 
Last edited:
Joined
Feb 20, 2020
Messages
9,340 (5.26/day)
Location
Louisiana
System Name Ghetto Rigs z490|x99|Acer 17 Nitro 7840hs/ 5600c40-2x16/ 4060/ 1tb acer stock m.2/ 4tb sn850x
Processor 10900k w/Optimus Foundation | 5930k w/Black Noctua D15
Motherboard z490 Maximus XII Apex | x99 Sabertooth
Cooling oCool D5 res-combo/280 GTX/ Optimus Foundation/ gpu water block | Blk D15
Memory Trident-Z Royal 4000c16 2x16gb | Trident-Z 3200c14 4x8gb
Video Card(s) Titan Xp-water | evga 980ti gaming-w/ air
Storage 970evo+500gb & sn850x 4tb | 860 pro 256gb | Acer m.2 1tb/ sn850x 4tb| Many2.5" sata's ssd 3.5hdd's
Display(s) 1-AOC G2460PG 24"G-Sync 144Hz/ 2nd 1-ASUS VG248QE 24"/ 3rd LG 43" series
Case D450 | Cherry Entertainment center on Test bench
Audio Device(s) Built in Realtek x2 with 2-Insignia 2.0 sound bars & 1-LG sound bar
Power Supply EVGA 1000P2 with APC AX1500 | 850P2 with CyberPower-GX1325U
Mouse Redragon 901 Perdition x3
Keyboard G710+x3
Software Win-7 pro x3 and win-10 & 11pro x3
Benchmark Scores Are in the benchmark section
Joined
Jun 6, 2022
Messages
622 (0.66/day)
Everything works for me, but the latest BIOS version for my board dates from November 18, 2022.
Someone even claims that the memory controller is also blocked. Look for CPU VDDQ in the BIOS if you want to undervolt the controller.
 
Joined
Jan 29, 2021
Messages
1,879 (1.31/day)
Location
Alaska USA
Everything works for me, but the latest BIOS version for my board dates from November 18, 2022.
Someone even claims that the memory controller is also blocked. Look for CPU VDDQ in the BIOS if you want to undervolt the controller.
Obviously you didn't download the latest bios from Reddit.
 
Joined
Jun 6, 2022
Messages
622 (0.66/day)
People were complaining long before the release of the latest version for my board (November, which added 13th-gen support).
It seems to be a problem with XTU; on Gigabyte there are no undervolting problems from the BIOS for either the memory or the processor. Offset is still in the options, but hidden for Override and Adaptive; it must be set to Auto.
For RAM, I think there is confusion with the System Agent voltage. It helps with memory overclocking (XMP is overclocking), but it is not critical. I don't have access to it and I don't see what I could reduce - it runs under 1 V with the memory at 3600 MHz.
The voltage that powers the controller is called CPU VDDQ and I have full access to it; I can manually set it to whatever I want. On the new i5-13500 I set 1.2 V (default: 1.35 V with XMP, 1.2 V without). With the 12500 it worked at 1.1 V, but the memory was clocked at 3200 MHz.
13500.jpg


E-cores arent for gamers, they're for CPU benchmarks
There is a review on this topic: E-cores hurt in some games and help in others. I think where they cause trouble it's mostly down to the game, and partly to Windows and the Intel drivers. It will take some time until the "dance" between E- and P-cores reaches 100% efficiency.
For now, synthetic tests indicate that the processor loses performance even in single-threaded tests when the E-cores are turned off.
 