
AMD Ryzen 9 9950X

Joined
Mar 11, 2008
Messages
938 (0.15/day)
Location
Hungary / Budapest
System Name Kincsem
Processor AMD Ryzen 9 9950X
Motherboard ASUS ProArt X870E-CREATOR WIFI
Cooling Be Quiet Dark Rock Pro 5
Memory Kingston Fury KF560C32RSK2-96 (2×48GB 6GHz)
Video Card(s) Sapphire AMD RX 7900 XT Pulse
Storage Samsung 970PRO 500GB + Samsung 980PRO 2TB + FURY Renegade 2TB+ Adata 2TB + WD Ultrastar HC550 16TB
Display(s) Acer QHD 27"@144Hz 1ms + UHD 27"@60Hz
Case Cooler Master CM 690 III
Power Supply Seasonic 1300W 80+ Gold Prime
Mouse Logitech G502 Hero
Keyboard HyperX Alloy Elite RGB
Software Windows 10-64
Benchmark Scores https://valid.x86.fr/ilvewh https://valid.x86.fr/4d8n02 X570 https://www.techpowerup.com/gpuz/g46uc
I want this,
Since the Threadripper is way above my paygrade :D
Can someone explain how many usable lanes an X870E has?

Why is this core parking thing a problem?
You can set which cores your stuff runs on in the basic Windows Task Manager.
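Under the hood, that "Set affinity" dialog just hands the kernel a bitmask of allowed logical CPUs (the same kind of value APIs like SetProcessAffinityMask accept). A minimal sketch of the mask math, with illustrative core indices:

```python
# Build the CPU affinity bitmask that Task Manager sets under the hood:
# bit i set => the process may run on logical CPU i.
def affinity_mask(cpus):
    """Return the bitmask selecting the given logical CPU indices."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

# Pin to the first CCD of a 16-core/32-thread part (logical CPUs 0-15).
ccd0 = affinity_mask(range(16))
print(hex(ccd0))  # 0xffff
```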
 
Joined
Apr 14, 2018
Messages
655 (0.27/day)
The entire point is a consistent topology, which means a lot more than just winning benchmarks (the 9950X proves it can't win them anyway). Without having to worry about whether resources are being allocated correctly, you no longer need "drivers" or manual affinity, and the chip stops being a hail Mary. There is a side effect, though: it would also dramatically improve the tasks that are already sped up by X3D, and you'd be taking home more performance than AMD is willing to sell you at a 9950X3D's price point. That, not their old excuse that games didn't benefit, is the real reason they canned the idea way back in Zen 3. Even though plenty of people, myself included, would literally part with $800-$1K for one.

The 7900X3D's and 7950X3D's problems were never their resources, but rather their topology, which causes the processor's vast resources to go underutilized unless software is specifically written with them in mind (and in general, it never will be). The closest analogy I can think of is precious ore deep within a mine, out of easy reach for all but the most skilled of miners. Zen 5 still lacks a hardware thread scheduler like Intel's Thread Director, which further compounds the problem with a standard+3D approach. It doesn't work well in practice; proof of that is the 7800X3D just smoking the 7900X3D in practically everything that makes use of ~8 cores and the cache, such as games. Even in productivity applications, if you balance the resources available in either of these chips against the relative performance, the 7800X3D comes out far more resource-efficient (it does more work per core, thread, and MB of cache) than the 7900X3D ever will. And that's why the 79X3D sold poorly.

Sincerely, I would take a dual X3D part even if it had a full GHz of a clock hit vs. the standard model. It's just better.



No need, just play around with the free version if you must. The 5700X3D's topology is contiguous and you have only one CCD/CCX with full access to your processor's resources. It will not improve your performance under any circumstances.

Then buy a 9800X3D. Dual-CCD 3D V-cache is just going to take performance hits jumping cores through the fabric. Games will see little to no advantage, being limited to current console thread counts, which is where the V-cache does its heavy lifting.

It would literally provide no benefit.
 
Joined
Feb 15, 2019
Messages
1,658 (0.79/day)
System Name Personal Gaming Rig
Processor Ryzen 7800X3D
Motherboard MSI X670E Carbon
Cooling MO-RA 3 420
Memory 32GB 6000MHz
Video Card(s) RTX 4090 ICHILL FROSTBITE ULTRA
Storage 4x 2TB Nvme
Display(s) Samsung G8 OLED
Case Silverstone FT04
What if it is just 3% faster than 7800X3D?
Then I will have to wait for Zen6 to give me 6%

 
Joined
May 3, 2018
Messages
2,881 (1.20/day)
The 9000 series is by far the best advertising campaign I have seen so far, for the 7000 series models.
Intel must be pinching themselves in disbelief at what they are seeing. AMD had a chance to bury them and have managed a quadruple-barrel shotgun blast to their own face with 4 highly disappointing CPUs for average users. Linux power users are happy though. I thought for sure it would be a lay-down misère to replace my 5800X with a Zen 5, but now I can't wait to see what Arrow Lake brings to the table.

Powerful cores being constrained by the interconnect and packaging. For how long is AMD going to milk the PCB copper routing? It's high time for higher-performance and more energy-efficient solutions.
"But wait for Zen 6", they'll say now, "it's a beauty".
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,171 (1.27/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte X650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
Anyway, early adoption of AMD products is always a bad idea, they never have working firmware, drivers, or software at least 6 months to a year into a product's lifecycle, be it Ryzen or Radeon.
Been a while since I mained a current-gen AMD video card, but I can certainly attest to my AM4 platform being somewhat buggy and feeling like a beta product when I bought it. Took a solid 12-18 months for it to settle into being totally stable and bug-free. I mean, just look at BIOSes for example: there is the launch BIOS and 16 subsequent BIOS releases. Some of course brought new AGESA versions and with that CPU compatibility, and naturally that's good and I've reaped the rewards with the GOAT 5800X3D, but in the first 18 months there were several just for memory compatibility and stability issues; hell, it wouldn't even work properly with a memory kit on the QVL...

I was quite hyped up by the gaming claims of Zen 5, but unless the 9800X3D pulls a rabbit out of a hat and can distance itself from the 7800X3D by more than the few expected % given Zen 4 vs 5, the wait will continue.
 
Joined
Sep 9, 2022
Messages
99 (0.12/day)
They'll never really go away; the 7900X3D's and 7950X3D's hybrid architecture is inherently flawed due to resource imbalance, and this problem is particularly nasty on the 7900X3D. With these processors you either get a full X3D or a full standard Ryzen 5/7 experience in one package, but you don't get to make the best use of both. That's why a dual-X3D processor is so badly needed this generation. I really hope AMD delivers that.

Dual X3D (3D cache on both CCDs) might even exacerbate the latency issues because on top of the inter-CCD latencies, you potentially get inter-3D-cache latencies as well.
The scheduling on multi CCD is a complex affair already. Adding another layer by distributing the cache is likely to do more harm than good. If a core on CCD1 needs some data from the 3D cache of CCD2, well, Houston we have a problem :D .

The cache logic would have to be super-optimized to make sure that everything stays as low latency as possible. AMD would basically need a hardware scheduler (thread director like Intel) to achieve the best results and maybe even include AI for the best cache and execution branch predictions. So, in the example above, maybe the core on CCD1 would "realize" that it would be faster to do its own calculations than pulling data from CCD2's 3D cache.
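That recompute-vs-fetch decision can be sketched as a toy cost model. Every latency below is invented purely for illustration, not a measured Zen figure:

```python
# Toy model: should a core on CCD1 recompute a value, or fetch it from
# CCD2's 3D cache over the fabric? All latencies are illustrative.
LOCAL_L3_NS = 10    # hit in the core's own (stacked) L3
CROSS_CCD_NS = 80   # hop over Infinity Fabric to the other CCD's cache
DRAM_NS = 110       # miss everything and go to DRAM

def best_source(recompute_ns, remote_hit_prob):
    """Pick the cheaper option: recompute locally, or try the remote
    cache (which falls back to DRAM on a miss)."""
    remote_expected = (remote_hit_prob * CROSS_CCD_NS
                       + (1 - remote_hit_prob) * DRAM_NS)
    return "recompute" if recompute_ns < remote_expected else "fetch_remote"

print(best_source(recompute_ns=50, remote_hit_prob=0.9))   # recompute
print(best_source(recompute_ns=200, remote_hit_prob=0.9))  # fetch_remote
```

The point of the sketch: the hardware (or scheduler) would need exactly this kind of latency estimate, per access, to make the "realize it's faster to do its own calculations" call.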

I'm sure this would be a nightmare to tune correctly, especially when considering that AMD is already struggling aplenty with the scheduling on the multi CCD parts. We know from some benchmark database entries that AMD apparently did have a 7950X with dual 3D cache as a test balloon at some point but it never made it into a finished product. If you ask me, they most likely dumped it because they could not (yet) overcome the inter-CCD and potential inter-cache latencies in such a configuration.
Besides, there is also the cost factor. Cache SRAM is pretty expensive. It consumes a lot of space on the chip. Dual 3D cache would drive up the costs big time.

Nope. The real solution for people who believe in "MOAR COARS" in gaming (for whatever mysterious reason at this point in time) is a 16 core single CCD which might actually be coming with Zen 6. Maybe then we'll also have more use cases (in gaming) for CPUs with more than six to eight cores...
 
Joined
Nov 6, 2008
Messages
10 (0.00/day)
At least for y-cruncher, it's being held back by memory bandwidth. Compare the single threaded speedup over Zen 4 to the minimal multithreaded speedup.

View attachment 358935


View attachment 358936
Y cruncher’s author goes into detail.
One issue with Anandtech’s (and probably TPU’s) results is that they’re not using the current version of the program with Zen 5 optimizations. His observed single thread improvement is 93% compared to Anand’s 63%. 590s vs. 1139s.
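Assuming the 1139 s / 590 s figures are the old-build vs Zen 5-optimized single-thread times, the 93% checks out:

```python
# Verify the quoted single-thread improvement: 1139 s on the old
# y-cruncher build vs 590 s on the Zen 5-optimized one.
old_s, new_s = 1139, 590
improvement = (old_s / new_s - 1) * 100
print(f"{improvement:.0f}% faster")  # 93% faster
```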
 
Joined
Sep 3, 2019
Messages
3,507 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
The more I look at review results across the internet, on Linux and in server applications, the more I stand behind my thought from a week ago (at the 9600X/9700X launch) that AMD designed these non-3DVcache chiplets for servers, on EPYC packages. For desktop they just repackage them without any fuss. These will be selling by the thousands/millions in EPYC to big corporations with server needs. 50~80% server/AI performance gains is crazy and they will make a sh1t ton of money from that market.
Do you think they care that much about the DIY market? Or does anyone have the delusion that AMD/Intel make their big profits from gamers? That's just change for soda...
We can cry, curse, be dissatisfied, disappointed or indifferent all we want... the facts are facts.

And I believe that AMD chose to do this now after 2 generations of 3DVcache CPUs. So everyone has seen what the 3DVcache can do. Let the push towards the 3DVcache CPUs begin now...
Like another thing I said a week ago.
More distinct segmentation between..
1. General purpose CPUs
2. Server applications
3. Gaming

I'm starting to believe that they know exactly what they are doing.
This will definitely clear up after the X3D parts.
And I will not be surprised at all if there's going to be more than one 8core X3D SKU.
 
Joined
Nov 6, 2008
Messages
10 (0.00/day)
Looking at individual results for things tested by many outlets, performance seems to be about the same on Windows and Linux. The biggest difference seems to come from testing a lot of things that benefit from the improvements made in Zen 5.
That's not to say that there aren't issues on Windows, as they exist, but I wouldn't expect the average performance in common workloads to change that much.
Very few tests were using the same software with the same configuration, mostly Blender, Cinebench, SPEC, etc. There was one with a big difference. Both Anand and Phoronix ran SVT AV1 with the same video, target resolution, and preset. Anand saw a 1.5% improvement while Phoronix saw 23%. Attributing it to the OS doesn’t make much sense. I’m inclined to believe that Phoronix may have used the current GCC with Zen 5 optimizations and Anandtech didn’t.
 
Joined
May 24, 2023
Messages
939 (1.70/day)
... I stand behind my thought a week ago (on 9600X/9700X launch) that AMD designed these non 3DVcache chiplets for servers on EPYC packages.
Even the 3D cache chiplets were originally intended only for servers, but in the process of testing them AMD realised they are also excellent for running games.

More distinct segmentation between..
1. General purpose CPUs
2. Server applications
3. Gaming
By general-purpose CPUs for PC, do you mean the G models, such as the 8700G?
 
Joined
May 3, 2018
Messages
2,881 (1.20/day)
Y cruncher’s author goes into detail.
One issue with Anandtech’s (and probably TPU’s) results is that they’re not using the current version of the program with Zen 5 optimizations. His observed single thread improvement is 93% compared to Anand’s 63%. 590s vs. 1139s.
That article is a must-read - it's not just about AVX-512. I urge everyone to take a look. TLDR the one main summary point is as follows:

The biggest achilles heel of Zen5 is the memory bandwidth and the limited adoption of AVX512 itself. There simply isn't enough memory bandwidth to feed such an overpowered vector unit. And the amount of people who use AVX512 is rounding error from zero - thus leaving the new capability largely unutilized.
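To put rough numbers on "not enough memory bandwidth": a back-of-the-envelope comparison of DDR5 supply vs AVX-512 demand. The per-core load rate and clock below are illustrative assumptions, not measured Zen 5 figures:

```python
# Rough feeds-and-speeds check (illustrative assumptions, not measured):
# can dual-channel DDR5 keep sixteen cores' AVX-512 units streaming?
GB = 1e9

# Supply: DDR5-6000, two 64-bit channels.
dram_bw = 6000e6 * 2 * 8 / GB  # ~96 GB/s

# Demand: assume each core can issue two 512-bit (64-byte) loads per
# cycle at a ~5 GHz clock -- an upper bound, not a sustained figure.
cores, loads_per_cycle, clock = 16, 2, 5.0e9
demand_bw = cores * loads_per_cycle * 64 * clock / GB  # ~10240 GB/s

print(f"supply {dram_bw:.0f} GB/s, demand {demand_bw:.0f} GB/s, "
      f"ratio {demand_bw / dram_bw:.0f}x")
```

Even if the demand estimate is off by an order of magnitude, the gap is large enough that streaming AVX-512 workloads stall on DRAM, which is exactly the cache-size/bandwidth argument above.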

Memory bandwidth will not be easy to solve. We can expect Zen 6 to improve things here with the new I/O and packaging improvements, but the bottleneck of AM5 and dual-channel DDR5 will remain. Perhaps with a future platform with bigger caches (dual-CCD X3D?) and 4-8 channels of CAMM memory we will see some light at the end of the tunnel.


So another two years to wait, folks, and all may be right.
 
Joined
Nov 6, 2008
Messages
10 (0.00/day)
Memory bandwidth will not be easy to solve. We can expect Zen6 to improve things here with the new I/O and packaging improvements. But the bottleneck on AM5 and dual-channel DDR5 will remain. Perhaps a future platform with bigger caches (dual CCD X3D?) and 4-8 channels of CAMM memory will we see some light at the end of the tunnel.

So another two years to wait, folks, and all may be right.
Actually we might know a lot sooner than that. Strix Point Halo should give some indication of how increased bandwidth impacts performance. Hopefully it can use regular DDR5, and someone makes a board with regular DIMM slots so we can see how it performs with low-latency RAM.
 
Joined
Mar 31, 2012
Messages
860 (0.19/day)
Location
NL
System Name SIGSEGV
Processor INTEL i7-7700K | AMD Ryzen 2700X | AMD Ryzen 9 9950X
Motherboard QUANTA | ASUS Crosshair VII Hero | MSI MEG ACE X670E
Cooling Air cooling 4 heatpipes | Corsair H115i | Noctua NF-A14 IndustrialPPC Fan 3000RPM | Arctic P14 MAX
Memory Micron 16 Gb DDR4 2400 | GSkill Ripjaws 32Gb DDR4 3400(OC) CL14@1.38v | Fury Beast 64 Gb CL30
Video Card(s) Nvidia 1060 6GB | Gigabyte 1080Ti Aorus | TUF 4090 OC
Storage 1TB 7200/256 SSD PCIE | ~ TB | 970 Evo | WD Black SN850X 2TB
Display(s) 15,5" / 27" /34"
Case Black & Grey | Phanteks P400S | O11 EVO XL
Audio Device(s) Realtek
Power Supply Li Battery | Seasonic Focus Gold 750W | FSP Hydro TI 1000
Mouse g402
Keyboard Leopold|Ducky
Software LinuxMint
Benchmark Scores i dont care about scores

Can somebody enlighten me on why higher clocks (OC) translate to better energy efficiency than stock?
I don't care about gaming performance as long as this CPU can serve my work, but is it worth the 135EUR difference (compared to the 7950X3D)?
 
Joined
Sep 3, 2019
Messages
3,507 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
By general-purpose CPUs for PC, do you mean the G models, such as the 8700G?
I mean the non-X3Ds.

That article is a must-read - it's not just about AVX-512. I urge everyone to take a look. TLDR the one main summary point is as follows:

The biggest achilles heel of Zen5 is the memory bandwidth and the limited adoption of AVX512 itself. There simply isn't enough memory bandwidth to feed such an overpowered vector unit. And the amount of people who use AVX512 is rounding error from zero - thus leaving the new capability largely unutilized.

Memory bandwidth will not be easy to solve. We can expect Zen 6 to improve things here with the new I/O and packaging improvements, but the bottleneck of AM5 and dual-channel DDR5 will remain. Perhaps with a future platform with bigger caches (dual-CCD X3D?) and 4-8 channels of CAMM memory we will see some light at the end of the tunnel.
Missing the point... and the purpose of these chiplets. Their improvements were never meant for desktops.
AMD probably doesn't care a bit about AVX performance on desktops. Apps were stripped of AVX a while ago... (BTW, why did that happen?)
Its current CPUs already have really good productivity performance that doesn't need much better I/O, or at least it's not that important... And the low-bandwidth, high-latency issue that mostly affects gaming has been solved with the X3D parts. After 2 generations of X3D parts on both existing platforms (AM4/AM5) we got a good taste, and now every(?) gamer sleeps and wakes with X3D on their mind.
In the EPYC market it will thrive, and for gaming there is the extra cache. Only this gen, the non-X3D versions are taking the hit and "falling" for team red and its "cause"...

Do I start to sound like a broken record?

The more I look at review results across the internet, on Linux and in server applications, the more I stand behind my thought from a week ago (at the 9600X/9700X launch) that AMD designed these non-3DVcache chiplets for servers, on EPYC packages. For desktop they just repackage them without any fuss. These will be selling by the thousands/millions in EPYC to big corporations with server needs. 50~80% server/AI performance gains is crazy and they will make a sh1t ton of money from that market.
Do you think they care that much about the DIY market? Or does anyone have the delusion that AMD/Intel make their big profits from gamers? That's just change for soda...
We can cry, curse, be dissatisfied, disappointed or indifferent all we want... the facts are facts.

And I believe that AMD chose to do this now after 2 generations of 3DVcache CPUs. So everyone has seen what the 3DVcache can do. Let the push towards the 3DVcache CPUs begin now...
Like another thing I said a week ago.
More distinct segmentation between..
1. General purpose CPUs
2. Server applications
3. Gaming

I'm starting to believe that they know exactly what they are doing.
This will definitely clear up after the X3D parts.
And I will not be surprised at all if there's going to be more than one 8core X3D SKU.
 
Joined
Nov 6, 2008
Messages
10 (0.00/day)

Can somebody enlighten me on why higher clocks (OC) translate to better energy efficiency than stock?
I don't care about gaming performance as long as this CPU can serve my work, but is it worth the 135EUR difference (compared to the 7950X3D)?
It’s undervolted by 10 units, so it may be reaching a slightly higher clock speed and is definitely at a slightly lower voltage. The times are so close that you can’t really tell if it’s run to run variation or an actual increase.
 
Joined
Apr 14, 2022
Messages
745 (0.78/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Acer Nitro XV271UM3B IPS 180Hz
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Keyboard Asus ROG Falchion
Software Windows 11 64bit
Y cruncher’s author goes into detail.

Reading the above, it's actually clear that AMD just created a CPU that's far from a gaming or general-use part.
These instructions are so specialized for specific workloads and apps that they make the 9950X more of an EPYC CPU than a Ryzen.
I'm fine with that, but it should have been marketed differently.

LOL
But how good is AMD's implementation? Let's look at AIDA64's dumps for Granite Ridge:

AVX512VL_VP2INTERSE :VP2INTERSECTD k1+1, xmm, xmm L: [diff. reg. set] T: 0.23ns= 1.00c
AVX512VL_VP2INTERSE :VP2INTERSECTD k1+1, ymm, ymm L: [diff. reg. set] T: 0.23ns= 1.00c
AVX512_VP2INTERSECT :VP2INTERSECTD k1+1, zmm, zmm L: [diff. reg. set] T: 0.23ns= 1.00c
AVX512VL_VP2INTERSE :VP2INTERSECTQ k1+1, xmm, xmm L: [diff. reg. set] T: 0.23ns= 1.00c
AVX512VL_VP2INTERSE :VP2INTERSECTQ k1+1, ymm, ymm L: [diff. reg. set] T: 0.23ns= 1.00c
AVX512_VP2INTERSECT :VP2INTERSECTQ k1+1, zmm, zmm L: [diff. reg. set] T: 0.23ns= 1.00c
Yes, that's right. 1 cycle throughput. ONE cycle. I can't... I just can't...
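For anyone unfamiliar with the instruction, here is a scalar sketch of what VP2INTERSECTD computes: the pair of match masks (the "k1, k1+1" register pair in the dump above) that Zen 5 apparently produces at one-cycle throughput:

```python
# Scalar emulation of VP2INTERSECTD's semantics: given two lane vectors,
# produce two bitmasks marking which lanes of each vector have a match
# anywhere in the other (written to the k1 / k1+1 mask register pair).
def vp2intersect(a, b):
    sa, sb = set(a), set(b)
    k1 = sum(1 << i for i, x in enumerate(a) if x in sb)
    k2 = sum(1 << j for j, y in enumerate(b) if y in sa)
    return k1, k2

a = [1, 2, 3, 4]
b = [3, 5, 1, 7]
k1, k2 = vp2intersect(a, b)
print(bin(k1), bin(k2))  # 0b101 0b101
```

Doing that all-pairs comparison across 16 dword lanes in a single cycle is what makes the AIDA64 numbers remarkable.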



Intel was so bad at this that they dropped the instruction. And now AMD finally appears and shows them how it's done - 2 years too late.
 
Joined
Aug 8, 2024
Messages
22 (0.20/day)
So the 9950x averages 3-4% faster in applications/productivity than the 7950x, while running 15W less (135W vs 150W).

I don't get why there is a large performance regression (15-18%) in some apps, because even the slightly lower clocks can't account for that.

Must be a bottleneck somewhere, or some applications/games requiring a patch to better utilize Zen 5.
 
Joined
Jul 30, 2019
Messages
3,276 (1.68/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
That's extremely subjective. Because this CPU trades blows with the former 7950X, and the performance difference is in the range of the statistical error, and the natural system-to-system performance deviation.

So, extremely pathetic. Maybe it needs faster than DDR5-6000?

View attachment 358916
View attachment 358917

View attachment 358918

View attachment 358919

View attachment 358920

View attachment 358921

View attachment 358923
If they release the Lisa Su edition it should come with a better IMC.
 
Joined
Oct 1, 2006
Messages
4,931 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
AMD probably doesn't care a bit about AVX performance on desktops. Apps were stripped of AVX a while ago... (BTW, why did that happen?)
Huh? What do you mean by that? Unless you mean specifically AVX-512.
AVX and AVX2 are not uncommon in apps and games these days.
Some games like Monster Hunter World will crash with an error if your CPU does not support AVX.

Intel was so bad at this that they dropped the instruction. And now AMD finally appears and shows them how it's done - 2 years too late.
AVX-512 was disabled / fused off in 12th gen because of the E-cores. If a thread running AVX-512 gets moved from a P-core to an E-core, it will crash.
Golden Cove / 12th gen P-cores actually made substantial improvements to AVX-512, so it doesn't suffer from the enormous throttling of older Intel CPUs.
We are not living in the Skylake/Cascade Lake era anymore; significant architecture improvements have been made since.

Also, Intel's work on AVX10.2 indicates that they want 512-bit AVX back on their Desktop CPUs in some form down the line.
https://www.techpowerup.com/311660/...-next-gen-e-cores-to-get-avx-512-capabilities
 
Joined
Jul 30, 2019
Messages
3,276 (1.68/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
Does anyone know about this?

The new 9950X and 9900X need to be treated as X3D parts prior to review...
Steve from GN said that AMD communicated this 5 days after sending the CPUs to reviewers.

View attachment 359062

View attachment 359063

This claims to fix the scheduler (at least for 3DVcache parts) without Lasso.
Windows should be running the "amd3dvcacheSvc" service, or else scheduling will be all over the place.
And the thing is that a simple chipset driver installation is not sufficient if Windows has seen a different CPU before the 3DVcache part.
It needs a good driver/registry cleaning.


View attachment 359065
OMG! Did inter-CCD communication get so much worse that they had to do that and just park half the cores? It kind of seems like 8 cores are basically just fine for gaming, and Intel's P-core/E-core approach was the winning strategy, foiled by the oxidation and over-voltage issues?

I remember 3950x kind of had issues but 5950x seemed fine and I didn't hear complaints about 7950x either.

Somehow I think this could be more easily fixed in the OS with some expansion to the PE format, so an application can set flags that say
- "Hey pin me to fast cores only"
- "Hey pin me to highest cache cores only"
- "Hey just keep me in the same CCD"
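Those hypothetical flags could map to affinity sets along these lines. The flag names and the core numbering (V-cache CCD as cores 0-7, as on a 7950X3D) are made up for illustration:

```python
# Sketch of the hypothetical PE scheduling hints proposed above: map a
# flag an executable could carry to a set of logical CPUs. Core layout
# assumes a 7950X3D-style part: CCD0 (cores 0-7) carries the 3D cache,
# CCD1 (cores 8-15) clocks higher. Flag names are invented.
CACHE_CORES = set(range(0, 8))   # V-cache CCD
FAST_CORES = set(range(8, 16))   # frequency-optimized CCD

def cores_for_hint(hint, current_ccd=0):
    if hint == "PIN_FAST":
        return FAST_CORES
    if hint == "PIN_CACHE":
        return CACHE_CORES
    if hint == "STAY_ON_CCD":  # just never migrate across the fabric
        return CACHE_CORES if current_ccd == 0 else FAST_CORES
    return CACHE_CORES | FAST_CORES  # no hint: run anywhere

print(sorted(cores_for_hint("PIN_CACHE")))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The OS would still need a fallback policy for unflagged binaries, which is essentially what the core-parking driver is trying to guess at today.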
 
Joined
Jan 6, 2013
Messages
81 (0.02/day)
These CPUs really just seem like wastes of sand. What apps are using AVX-512 to really benefit from this architecture?
 
Joined
May 24, 2023
Messages
939 (1.70/day)
I don't like @W1zzard reviews anymore, too biased and in contrast with most other reviews out there!!
Your post is just about your own personal feelings. If you do not add any concrete information (how is it biased? in what exactly does it differ from other reviews, and which reviews?), it is completely pointless. As pointless as if you informed us that you woke up with a headache.
 