
Potential Ryzen 7000-series CPU Specs and Pricing Leak, Ryzen 9 7950X Expected to hit 5.7 GHz

Joined
Jun 14, 2020
Messages
3,457 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
You didn't say it was better, but you did say "You don't need SPEC, there are hundreds of other benchmarks", in other words saying that those benchmarks are a reasonable replacement for SPEC. This is what I have argued against - none of the benchmarks you mentioned are, no single benchmark can ever be. Did I make a silly analogy about it? Yes, because IMO what you said was silly, and deserved a silly response. A single benchmark will never be representative of anything beyond itself - at best it can show a rough estimate of something more general, but with a ton of caveats. As for using a collection of various single benchmarks: sure, that's possible - but I sure do not have the time to research and put together a representative suite of freely available and unbiased benchmark applications that can come even remotely close to emulating what SPEC delivers. Do you?

The point being: I'm leaning on SPEC because it's a trustworthy, somewhat representative (outside of gaming) CPU test suite, and is the closest we get to an industry standard. And, crucially, because we have a suite of high quality reviews using it. I do not rely on things like CB as, well, the results are pretty much useless. Which chip is the fastest and/or most efficient shows us ... well, which chip is the fastest and most efficient in Cinebench. Not generally. And the point here was something somewhat generalizable, no? Heck, even GeekBench is superior to CB in that regard - at least it runs a variety of workloads.

... I have explained that, at length? If you didn't grasp that, here's a brief summary: because we have absolutely zero hope of approaching the level of control, normalization and test reliability that good professional reviewers operate at.

And, once again, you take a result from a single benchmark and present it as if it is a general truth. I mean, come on: you even link to the source showing how that is for a single, specific workload - and a relatively low intensity, low threaded one at that. Which I have acknowledged, at quite some length, is a strength of ADL.



Have you been paying attention at all? Whatsoever? I'm not interested in best case scenarios. I'm interested in actually representative results, that can tell us something resembling truth about these CPUs. I mean, the fact that you're framing it this way in the first place says quite a bit about your approach to benchmarks: you're looking to choose sides, rather than looking for knowledge. That's really, really not how you want to approach this.

And, again, unless it wasn't clear: there is no single workload that gives a representative benchmark score for a CPU. None. Even something relatively diverse with many workloads like SPEC (or GeekBench) is an approximation at best. But a single benchmark only demonstrates how the CPU performs in that specific benchmark, and might give a hint as to how it would perform in very similar workloads (i.e. 7zip gives an indication of compression performance, CB gives an indication of tiled renderer performance, etc.) - but dependent on the quirks of that particular software.

This is why I'm not interested in jumping on this testing bandwagon: because testing in any real, meaningful way would require time, software and equipment that likely none of us have. You seem to have either a woefully lacking understanding of the requirements for actually reliable testing, or your standards for what you accept as trustworthy are just far too low. Either way: this needs fixing.

Sapphire Rapids has been delayed ... what is it, four times now? Due to hardware errors, security errors, etc.? Yeah, that's not exactly a good place to start for a high performance comparison. When it comes out, it won't be competing against Zen3, it'll be competing against Zen4 - EPYC Genoa.

As for your fabulations about what a 16c SR CPU will perform like at 130W or whatever - have fun with that. I'll trust actual benchmarks when the actual product reaches the market. From what leaks I've seen so far - which, again, aren't trustworthy, but they're all we have to go by - SR is a perfectly okay server CPU, but nothing special, and nowhere near the efficiency of Milan, let alone Genoa.

And, crucially, SR will be a mesh fabric rather than a ring bus, and will have larger caches all around, so it'll behave quite differently from MSDT ADL. Unlike AMD, Intel doesn't use identical core designs across their server and consumer lineups - and the differences often lead to quite interesting differences in performance scaling, efficiency, and performance in various specific workloads.
Do you understand what a best case scenario is and what it's used for? If Zen 3 loses in the best case scenario, then no further testing needs to be done. For example, CBR23 is a best case scenario for Golden Cove, so if they lose in CBR23 they will lose in everything else.

Regarding SR, you are missing the point. It doesn't matter at all what it will be competing against; the argument I made was that 16 GC cores would wipe the 5950X off the face of the Earth in terms of efficiency, the same way 8 GC cores wipe the 5800X. So when SR will be released and what it will be facing when it does is completely irrelevant to the point I'm making.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I'd like to know whether a 7950X would be sufficient to run a single PC capable of 10-player co-op off of it at a practical frame rate of 60 FPS+ or not. That would be very impressive. You only need 4 cores so it should be fine.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Do you understand what a best case scenario is and what it's used for? If Zen 3 loses in the best case scenario, then no further testing needs to be done. For example, CBR23 is a best case scenario for Golden Cove, so if they lose in CBR23 they will lose in everything else.
... and you still don't get the fact that you simply can't know that a workload is a "best case scenario" for any given architecture or implementation of that architecture until you've done extensive testing across a wide variety of workloads. CB23 is absolutely not a "best case scenario" for GC - it's a benchmark it does well in. That's it. There are other benchmarks where it has a much more significant lead - and benchmarks where it falls significantly behind. Again: you're desperate for simplification; you seem dead set on wanting a single test that can somehow give a representative overview. As I apparently have to repeat myself until this sticks: this does not exist, and never will.

As for efficiency, I have linked to quite a few tests in which Zen3 is already either faster, more efficient, or both, when compared to ADL. I mean, just look at TPU's 12900K review? At stock, the 12900K loses to or ties with the 5950X in the following tests: Corona, Keyshot, V-Ray, UE4 game dev, Google Tesseract OCR, VMWare Workstation, 7-zip, AES & SHA3 encryption, H.264 & H.265 encoding. Now, we don't have power measurements for each of these tests, sadly. But we do know the stock power limits, as well as the per-core peak power of both CPUs. So, unless the 12900K has some kind of debilitating bottleneck that causes it to essentially sit idle, it is using slightly less power (in very light, low threaded workloads), as much power (in workloads that strike that balance of a few threads, but not very heavy ones), or more power (in anything instruction dense or anything lightweight above ~3 active cores) than the 5950X. Some of these - rendering, compression, encryption and encoding, at least - are relatively instruction dense nT workloads, where the 12900K will be using more power than the 144W-limited 5950X. Yet it still loses. So, that kind of disproves your "It's more efficient at everything", no?
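To make the shape of that argument concrete, here's a minimal sketch of the score-per-watt comparison being made. The scores are placeholders, not figures from the review; only the stock power limits are real:

```python
# Hypothetical sketch of an iso-workload efficiency comparison.
# Scores are invented placeholders, not measurements from TPU's review.

def points_per_watt(score: float, package_power_w: float) -> float:
    """Benchmark score divided by sustained package power."""
    return score / package_power_w

# An instruction-dense nT workload where both chips sit at their stock
# power limits (PPT 144W for the 5950X, PL2 241W for the 12900K):
print(f"5950X:  {points_per_watt(25000, 144):.0f} pts/W")
print(f"12900K: {points_per_watt(24000, 241):.0f} pts/W")
# If the score is a tie or a loss while the power limit is higher,
# the efficiency conclusion follows even without per-test power logging.
```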

Would a low-clocked 16c ADL chip have better efficiency than the 12900K in these tests? That depends on the test, how well it utilizes E cores, and what clocks that chip could sustain at your proposed power levels - including crucial details about the specific silicon implementation that render speculation on this utterly pointless. Still, it is highly unlikely that this would represent a massive, earth-shattering efficiency improvement.
Regarding SR, you are missing the point. It doesn't matter at all what it will be competing against; the argument I made was that 16 GC cores would wipe the 5950X off the face of the Earth in terms of efficiency, the same way 8 GC cores wipe the 5800X. So when SR will be released and what it will be facing when it does is completely irrelevant to the point I'm making.
And you entirely missed the point that the GC cores in SR aren't the same as the GC cores in ADL, and due to the different implementations their performance will vary quite a bit. And, once again: you have absolutely no basis for claiming that a theoretical 16 P core CPU will be more efficient than the 5950X. None.

Heck, look at TPU's 12900K testing at various power limits. Sure, it shines in low threaded workloads even with a 50W limit, demonstrating fantastic efficiency in those tasks, and great ST performance. But in anything multi threaded? In rendering tasks, at 125W it barely beats the 5800X, despite having 3x the threads and 8 E cores to help pull those loads. The same goes for chemistry and physics simulation, AI upscaling, game dev, 7-zip decompression, and all three video encoding tests. It even loses to the 5800X in encryption workloads. Sure, the 5800X pulls a bit more power (138W vs. 125W max), but ... yeah. That amazing scaling you're talking about doesn't exist. ADL scales extremely well in light, low threaded tasks, and otherwise scales fine in everything else. In MT/nT tests where it didn't already win by a ton, it loses a lot of performance as you reduce its power limits.
 
Joined
May 24, 2007
Messages
1,116 (0.17/day)
Location
Florida
System Name Blackwidow/
Processor Ryzen 5950x / Threadripper 3960x
Motherboard Asus x570 Crosshair viii impact/ Asus Zenith ii Extreme
Cooling Ek 240Aio/Custom watercooling
Memory 32gb ddr4 3600MHZ Crucial Ballistix / 32gb ddr4 3600MHZ G.Skill TridentZ Royal
Video Card(s) MSI RX 6900xt/ XFX 6800xt
Storage WD SN850 1TB boot / Samsung 970 evo+ 1tb boot, 6tb WD SN750
Display(s) Sony A80J / Dual LG 27gl850
Case Cooler Master NR200P/ 011 Dynamic XL
Audio Device(s) On board/ Soundblaster ZXR
Power Supply Corsair SF750w/ Seasonic Prime Titanium 1000w
Mouse Razer Viper Ultimate wireless/ Logitech G Pro X Superlight
Keyboard Logitech G915 TKL/ Logitech G915 Wireless
Software Win 10 Pro
Zen 2 and 3 turned out well eventually, but had a bumpy ride with BIOS/firmware issues for several months (I believe it was 4+ months for Zen 3).
After maturity, they've been great though. My system which was built nearly one year ago has had zero crashes (if I recall correctly), and I run my computers for many months without reboot.


With the current level of inflation, we (as consumers) should be happy if we see prices anywhere close to this. And if we do, and AMD can supply enough chips, then they should move a huge volume of products.


Achieving something like this would require very good engineering on top of an unusually well performing node.
Do you remember the Zen 2 rumors? At some point the >5 GHz hype was extreme, yet it turned out to be nonsense from a YouTube channel. So we'll see whether the details of this article are true or not.


IPC is just the average instructions per clock. There are many changes to CPUs which can improve IPC, yet it varies from workload to workload (sometimes even from application to application) whether these improvements translate into increased performance. Typically, increases in execution units, SIMD, etc. have little impact on games but a massive impact on video or software rendering, while improvements to the prefetcher, cache, etc. typically have more impact on games - yet both of these impact IPC.
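As a toy illustration of why the same "IPC uplift" lands so differently per workload (all figures below are invented, purely to show the relationship):

```python
# Toy model: throughput = IPC x clock. An architectural change that
# raises average IPC in one workload may do almost nothing in another.
# All numbers are invented for illustration.

def giga_instructions_per_s(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz

encoder_old = giga_instructions_per_s(ipc=2.0, clock_ghz=5.0)
encoder_new = giga_instructions_per_s(ipc=2.4, clock_ghz=5.0)  # wider SIMD helps
game_old = giga_instructions_per_s(ipc=1.2, clock_ghz=5.0)
game_new = giga_instructions_per_s(ipc=1.25, clock_ghz=5.0)    # cache-bound, barely moves

print(f"encoder: +{encoder_new / encoder_old - 1:.0%}")  # +20%
print(f"game:    +{game_new / game_old - 1:.0%}")        # +4%
```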

I believe Zen 4 will also increase L2 cache, so a matchup here will be quite interesting.

But as for the 5800X3D being an "insane gaming chip", that's more than a little exaggerated. There are some games where the gains are very large, but for most of them the gains are marginal at realistic resolutions. We don't know whether this kind of boost from increased L3 will continue with future games, but we do know that this kind of behavior in software is caused by instruction cache misses, and any good programmer could tell you that misses in the instruction cache are primarily due to software bloat. So my point is that designing a CPU with loads of L3 is a double-edged sword; it will "regain" some performance lost to bad code, but it may also "encourage" bad software design.

I'm more interested in what AMD may use this stacking technology for in the future. If it's just to add more L3 cache, then it's almost a gimmick in the consumer space. But if this someday leads to a modular CPU design where you can have e.g. 8 cores, but you can choose between a "base" version for gaming or one with extra SIMD for multimedia etc., but seamlessly integrated through multi-layer chiplets, then I'm for it.
Are you for real? Didn't AMD show clock speeds themselves? I also don't recall Zen 3 ever having an issue at launch, but maybe I was too busy enjoying my launch purchases of all the un-obtanium back then between the consoles, CPUs and GPUs. The 5800X3D is a beast of a gaming chip; compare it to its predecessor (Zen 2) and its running mate (the 5800X).
 
Joined
Jun 14, 2020
Messages
3,457 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
So, that kind of disproves your "It's more efficient at everything", no?
No it doesn't, because my claim is that core for core, GC is more efficient than Zen 3. You can't disprove that claim by comparing 16 Zen 3 cores with 8+8.

Heck, look at TPU's 12900K testing at various power limits. Sure, it shines in low threaded workloads even with a 50W limit, demonstrating fantastic efficiency in those tasks, and great ST performance. But in anything multi threaded? In rendering tasks, it barely beats the 5800X, despite having 3x the threads and 8 E cores to help pull those loads. The same goes for chemistry and physics simulation, AI upscaling, game dev, 7-zip decompression, and all three video encoding tests. It even loses to the 5800X in encryption workloads. Sure, the 5800X pulls a bit more power (138W vs. 125W max), but ... yeah. That amazing scaling you're talking about doesn't exist. ADL scales extremely well in light, low threaded tasks, and otherwise scales fine in everything else. In MT/nT tests where it didn't already win by a ton, it loses a lot of performance as you reduce its power limits.
Yeah, that review is obviously flawed. I don't know what he did wrong, but he did something. It's obvious from the results themselves; check the CBR23 numbers. The 12600K ties the 12900K in CBR23 at the same power consumption.

And I know it's wrong because I have the freaking CPU. At stock with a 125W power limit it scores 24k+ in CBR23. Actually, you can even compare it with TechSpot's 12700 review: at 65W it scores over 16k, while TPU has the 12900K at 18k at 125W. With fewer cores, mind you. Obviously flawed numbers.

Are you for real? Didn't AMD show clock speeds themselves? I also don't recall Zen 3 ever having an issue at launch, but maybe I was too busy enjoying my launch purchases of all the un-obtanium back then between the consoles, CPUs and GPUs. The 5800X3D is a beast of a gaming chip; compare it to its predecessor (Zen 2) and its running mate (the 5800X).
Actually, Zen 3 had lots of problems; some of them are fixed and some of them won't ever be. X570 specifically had some problems with SSD reads, USB disconnects, fTPM stuttering...
 
Joined
Mar 6, 2017
Messages
3,330 (1.18/day)
Location
North East Ohio, USA
System Name My Ryzen 7 7700X Super Computer
Processor AMD Ryzen 7 7700X
Motherboard Gigabyte B650 Aorus Elite AX
Cooling DeepCool AK620 with Arctic Silver 5
Memory 2x16GB G.Skill Trident Z5 NEO DDR5 EXPO (CL30)
Video Card(s) XFX AMD Radeon RX 7900 GRE
Storage Samsung 980 EVO 1 TB NVMe SSD (System Drive), Samsung 970 EVO 500 GB NVMe SSD (Game Drive)
Display(s) Acer Nitro XV272U (DisplayPort) and Acer Nitro XV270U (DisplayPort)
Case Lian Li LANCOOL II MESH C
Audio Device(s) On-Board Sound / Sony WH-XB910N Bluetooth Headphones
Power Supply MSI A850GF
Mouse Logitech M705
Keyboard Steelseries
Software Windows 11 Pro 64-bit
Benchmark Scores https://valid.x86.fr/liwjs3
OK seriously, do you get a paycheck from Pat Gelsinger?
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
No it doesn't, because my claim is that core for core, GC is more efficient than Zen 3. You can't disprove that claim by comparing 16 Zen 3 cores with 8+8.
Sorry, but you're being wildly inconsistent here. Now you're saying your claim is that the GC core is more efficient than the Zen3 core. Which we have conclusive evidence showing that it is not, through Anandtech's per-core power testing. Despite the 12900K being pushed stupidly high, and responding poorly to instruction dense workloads, it is still less efficient in lighter workloads such as most SPEC workloads, consuming 6-7W more than the peak power draw of any single Zen3 core, while barely outperforming it.

As I have written about at length above, there is a strong argument to be made for Alder Lake, the implemented CPUs, being more efficient at lighter (less instruction dense), low threaded workloads than Zen3 CPUs, but - repeating myself a lot here - this is not due to an advantage in core efficiency, but due to lower uncore power draw. AMD's through-package Infinity Fabric gives their CPUs a higher base power draw regardless of the number of cores under load (though not at idle) than Intel's monolithic CPUs, meaning that despite having a less efficient core, they win in chip efficiency comparisons in these workloads because the chip is more than just cores.
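If it helps, here's a toy model of that core-versus-uncore point. The wattages are invented placeholders chosen only to show how the crossover works, not measured values for either chip:

```python
# Toy model: package_power = loaded_cores * per_core_power + uncore_power.
# All wattages below are invented for illustration.

def package_power(n_cores: int, per_core_w: float, uncore_w: float) -> float:
    return n_cores * per_core_w + uncore_w

# Chip A: more efficient core, power-hungry through-package fabric (Zen3-like).
# Chip B: hungrier core, frugal monolithic uncore (ADL-like).
for n in (1, 2, 4, 8):
    a = package_power(n, per_core_w=6.0, uncore_w=15.0)
    b = package_power(n, per_core_w=9.0, uncore_w=5.0)
    print(f"{n} cores loaded: A = {a:5.1f} W, B = {b:5.1f} W")

# B wins below ~3-4 loaded cores, A wins above - even though A has the
# more efficient core. Chip efficiency and core efficiency are not the
# same thing, which is exactly why package power alone can't settle this.
```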

I don't need any of TPU's data to disprove your statement that the GC core is more efficient than Zen3, because Anandtech's testing shows conclusively that it is the other way around, and that Zen3 scales extremely well at lower clocks (~<6.5W/core @3.775GHz for the 5950X; average ~2.6W (SPECint) to ~1.9W (SPECfp) @ 2.45GHz or higher for the EPYC 7763). Can you show me even a single GC core implementation that can demonstrate similarly low per-core power draws? Even in the same ballpark?
Yeah that review is obviously flawed. I dont know what he did wrong, but he did something. Its obvious from the results themselves, check the cbr23 numbers. The 12600k ties the 12900k in cbr23 at samr power consumption

And i know its wrong cause i have the freaking cpu. At stock with 125w power limit it scores 24k+ in cbr23. Actually you can even compare it with techspots 12700 review, at 65w it scores over 16k while tpu has the 12900k at 18k / 125w. With less cores mind you. Obviously flawed numbers
Far too many variables in play here - differences in motherboard, BIOS revision, subsequent Intel microcode updates, and more. Until someone can deliver data of comparable quality that shows the review to be erroneous, I'll trust the review, thanks. You're very welcome to try and do so, but that'll require more than stating "my chip does X".
Actually, Zen 3 had lots of problems; some of them are fixed and some of them won't ever be. X570 specifically had some problems with SSD reads, USB disconnects, fTPM stuttering...
"Lots of problems" is quite a stretch. fTPM stuttering is relatively rare, and fixed; USB disconnects were B550-only and were fixed long ago, and AFAIK that SSD read speed thing only applied to chipset-connected SSDs (i.e. not CPU-connected ones, as are the majority) and was also fixed.

It's kind of funny, really. Whenever someone brings some nuance to your simplistic arguments and conclusions, you always try to shift the goal posts to suit your liking. The 12900K is more efficient at 125W than the 5950X! No, it's the GC core that's more efficient! No, we can't do comparisons with existing benchmarks - but we can run our own tests(?). No, we can't trust per-core power draw numbers from seasoned reviewers, because look at this benchmark result I got! It's almost as if, oh, I don't know, you have a vested interest in a certain party coming out as conclusively better in this comparison?

Seriously though: I understand that you spent a lot of money on your CPU. And it's a great CPU! It's not even a terrible power hog if tuned sensibly, or in lighter workloads. But ... you need to leave that desperate defensiveness behind. It is perfectly okay that the thing you have bought is not conclusively and unequivocally the best. If that's the standard you live by, either you'll go through life deluding yourself, or you'll be consistently sad, angry and disappointed - because the world doesn't work that way.

ADL is great. Zen3 is great. ADL is slightly faster; Zen3 is slightly more efficient in heavy or highly threaded loads - generally. There are significant caveats and exceptions to both of those overall trends. Neither is a bad choice. Period. And it's okay for there to be multiple good choices out there - in fact, I'd say it's great! Your desperate need for your chosen brand to be the best is ... well, both leading you to make really bad conclusions in how you're looking at test and performance data, and probably not making you feel very good either. I would really recommend you take a step back and reconsider how you're looking at these things.
 
Joined
Jun 14, 2020
Messages
3,457 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Sorry, but you're being wildly inconsistent here. Now you're saying your claim is that the GC core is more efficient than the Zen3 core. Which we have conclusive evidence showing that it is not, through Anandtech's per-core power testing. Despite the 12900K being pushed stupidly high, and responding poorly to instruction dense workloads, it is still less efficient in lighter workloads such as most SPEC workloads, consuming 6-7W more than the peak power draw of any single Zen3 core, while barely outperforming it.

As I have written about at length above, there is a strong argument to be made for Alder Lake, the implemented CPUs, being more efficient at lighter (less instruction dense), low threaded workloads than Zen3 CPUs, but - repeating myself a lot here - this is not due to an advantage in core efficiency, but due to lower uncore power draw. AMD's through-package Infinity Fabric gives their CPUs a higher base power draw regardless of the number of cores under load (though not at idle) than Intel's monolithic CPUs, meaning that despite having a less efficient core, they win in chip efficiency comparisons in these workloads because the chip is more than just cores.

I don't need any of TPU's data to disprove your statement that the GC core is more efficient than Zen3, because Anandtech's testing shows conclusively that it is the other way around, and that Zen3 scales extremely well at lower clocks (~<6.5W/core @3.775GHz for the 5950X; average ~2.6W (SPECint) to ~1.9W (SPECfp) @ 2.45GHz or higher for the EPYC 7763). Can you show me even a single GC core implementation that can demonstrate similarly low per-core power draws? Even in the same ballpark?

Far too many variables in play here - differences in motherboard, BIOS revision, subsequent Intel microcode updates, and more. Until someone can deliver data of comparable quality that shows the review to be erroneous, I'll trust the review, thanks. You're very welcome to try and do so, but that'll require more than stating "my chip does X".

"Lots of problems" is quite a stretch. fTPM stuttering is relatively rare, and fixed; USB disconnects were B550-only and were fixed long ago, and AFAIK that SSD read speed thing only applied to chipset-connected SSDs (i.e. not CPU-connected ones, as are the majority) and was also fixed.

It's kind of funny, really. Whenever someone brings some nuance to your simplistic arguments and conclusions, you always try to shift the goal posts to suit your liking. The 12900K is more efficient at 125W than the 5950X! No, it's the GC core that's more efficient! No, we can't do comparisons with existing benchmarks - but we can run our own tests(?). No, we can't trust per-core power draw numbers from seasoned reviewers, because look at this benchmark result I got! It's almost as if, oh, I don't know, you have a vested interest in a certain party coming out as conclusively better in this comparison?

Seriously though: I understand that you spent a lot of money on your CPU. And it's a great CPU! It's not even a terrible power hog if tuned sensibly, or in lighter workloads. But ... you need to leave that desperate defensiveness behind. It is perfectly okay that the thing you have bought is not conclusively and unequivocally the best. If that's the standard you live by, either you'll go through life deluding yourself, or you'll be consistently sad, angry and disappointed - because the world doesn't work that way.

ADL is great. Zen3 is great. ADL is slightly faster; Zen3 is slightly more efficient in heavy or highly threaded loads - generally. There are significant caveats and exceptions to both of those overall trends. Neither is a bad choice. Period. And it's okay for there to be multiple good choices out there - in fact, I'd say it's great! Your desperate need for your chosen brand to be the best is ... well, both leading you to make really bad conclusions in how you're looking at test and performance data, and probably not making you feel very good either. I would really recommend you take a step back and reconsider how you're looking at these things.
The TPU review is absolutely wrong, and you don't need any other data; their own data proves it. The 12600K cannot be more efficient than the 12900K: worse bin, fewer P cores and half the E cores. Also, TechSpot's review tested a 12700, and at 65W it scores more than the 12900K at 100W. It's painfully obvious that the TPU review is wrong. I mean, even the 5600X is more efficient at the same wattage, LOL.

Personally, I tested 3 12900Ks on 4 different mobos and they all came back with the same results: 23500 to 24500 at 125W. Nowhere near close to TPU's numbers.

I never changed my argument; I've said repeatedly that the E cores are inefficient at most wattages you would run a desktop CPU at, and that Golden Cove cores are vastly more efficient than Zen 3 cores at the same wattage. That's my argument and it has never changed. I don't care if ADL is the best; if it wasn't, I would have bought something else. Anyways, there is a thread for people posting their numbers at the same wattage; I'll be back in 3 days and I'll post some numbers. If Zen 3 even gets close to 8 GC cores in efficiency I'll throw my computer out the window.
 
Joined
Oct 21, 2005
Messages
7,061 (1.01/day)
Location
USA
System Name Computer of Theseus
Processor Intel i9-12900KS: 50x Pcore multi @ 1.18Vcore (target 1.275V -100mv offset)
Motherboard EVGA Z690 Classified
Cooling Noctua NH-D15S, 2xThermalRight TY-143, 4xNoctua NF-A12x25,3xNF-A12x15, 2xAquacomputer Splitty9Active
Memory G-Skill Trident Z5 (32GB) DDR5-6000 C36 F5-6000J3636F16GX2-TZ5RK
Video Card(s) ASUS PROART RTX 4070 Ti-Super OC 16GB, 2670MHz, 0.93V
Storage 1x Samsung 970 Pro 512GB NVMe (OS), 2x Samsung 970 Evo Plus 2TB (data), ASUS BW-16D1HT (BluRay)
Display(s) Dell S3220DGF 32" 2560x1440 165Hz Primary, Dell P2017H 19.5" 1600x900 Secondary, Ergotron LX arms.
Case Lian Li O11 Air Mini
Audio Device(s) Audiotechnica ATR2100X-USB, El Gato Wave XLR Mic Preamp, ATH M50X Headphones, Behringer 302USB Mixer
Power Supply Super Flower Leadex Platinum SE 1000W 80+ Platinum White, MODDIY 12VHPWR Cable
Mouse Zowie EC3-C
Keyboard Vortex Multix 87 Winter TKL (Gateron G Pro Yellow)
Software Win 10 LTSC 21H2
I've been thinking about doing a 7700X AM5 upgrade to my i5 8600K.
 
Joined
Apr 16, 2019
Messages
632 (0.31/day)
I've been thinking about doing a 7700X AM5 upgrade to my i5 8600K.
The 13700(K) will likely be considerably more potent. Honestly, as it looks right now, only the 7950X will have some merit, unless you're willing to play the waiting game of what might eventually get released on the AM5 platform. But if you want your performance now...
 
Joined
Jan 29, 2021
Messages
1,852 (1.33/day)
Location
Alaska USA
Yeah the US "there might be sales tax, but we won't tell you until the second before you're paying" thing is incredibly shady and misleading.
There are 50 US states and each state has its own individual sales tax - not to mention that some states, such as the one I live in, have no sales tax at all.
 
Joined
Feb 1, 2019
Messages
3,590 (1.69/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
The clocks are very impressive, but I hope it's not at the cost of power efficiency.
 
Joined
Jun 13, 2020
Messages
12 (0.01/day)
Not going to upgrade anytime soon. This goes too fast and as soon as one gets used to a new system, a new architecture comes around, a new platform etc.
This is becoming too much I think. I'll stick to my 5900X and 3070 Ti for the time being.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
The TPU review is absolutely wrong, and you don't need any other data; their own data proves it. The 12600K cannot be more efficient than the 12900K: worse bin, fewer P cores and half the E cores. Also, TechSpot's review tested a 12700, and at 65W it scores more than the 12900K at 100W. It's painfully obvious that the TPU review is wrong. I mean, even the 5600X is more efficient at the same wattage, LOL.
Many possible explanations for this - for example, it could be indicative of the low power limit interfering with the boost algorithms, causing the CPU to be stuck in boost/throttle loops, which always kill performance. If this was the case, it would be quite reasonable for Intel to have fixed this afterwards, which would explain your different results.

Oh, btw, have you heard of this new invention called a link? I have linked to literally every single source I've referred to in this discussion. It would be nice if you could do others the same courtesy as is being done to you. It's not my job to corroborate your statements. Post your sources.
I never changed my argument; I've said repeatedly that the E cores are inefficient at most wattages you would run a desktop CPU at, and that Golden Cove cores are vastly more efficient than Zen 3 cores at the same wattage. That's my argument and it has never changed.
But this is the thing: as you keep reformulating this core argument, you keep changing what you are arguing, because this core argument does not make logical sense. How? Simple: the only interpretation of "Golden Cove cores are more efficient than Zen3 cores at the same wattage" that makes logical sense is if you're looking at per-core power, not package power. Yet the only power number you care about - consistently, regardless of what other data is provided - is package power. Package power includes other significant power draws than the cores, and crucially these draws differ between chips and architectures, introducing a variable that ruins your data - you literally can't get per-core power from package power, as there's other stuff mixed in there.

There are two possible logically congruent variants of your argument:
- That The GC core is more efficient than the Zen3 core, on a core-for-core, core power only basis, at the same wattage
- That ADL as implemented, as a full chip, including cores and uncore, is more efficient than Zen3 at the same wattage

The first claim has been quite conclusively disproven by AnandTech's per-core power testing. The GC core in instruction dense workloads can scale to insane power levels, and even in lighter workloads needs notably more power than the highest power a Zen3 core ever reaches in order to eke out a small win.

The second point is crucially more complex, as the answer differs wildly across power levels as the effects of uncore power vs. core power scale, and of course carries with it the problem of an uneven playing field, where every ADL chip is operating at a significant downclock from its stock configuration, which privileges it over the more frugal at stock Zen3 CPUs. And, as has been discussed at massive length above: there is no conclusive, simple answer to this. ADL does indeed have an advantage at relatively light, low threaded workloads. It does not if the workload is instruction dense, or if the number of fully loaded cores exceeds ~4. Though again, due to how different workloads execute differently on different architectures, even these are oversimplified generalizations. The real answer: it's damn complicated, and they each have their strengths and weaknesses.
I don't care if ADL is the best; if it wasn't, I would have bought something else. Anyways, there is a thread for people posting their numbers at the same wattage; I'll be back in 3 days and I'll post some numbers. If Zen 3 even gets close to 8 GC cores in efficiency I'll throw my computer out the window.
It's been a while since I've seen someone contradict themselves so explicitly within the span of three sentences. Well done, I guess? "I don't care!/If I'm wrong I'll throw my PC out the window!" aren't ... quite in line with each other, now, are they?

There are 50 US states and each state has its own individual sales tax - not to mention that some states, such as the one I live in, have no sales tax at all.
I'm well aware of that, but I don't see how that makes it logical for stores to not bake sales tax into their listed prices. A store is generally only located in one state, right? I can't imagine there are many stores straddling a state border. So they should be quite capable of pricing things with what people will actually be paying, as is done literally everywhere else. And for online retailers it would be as simple as having a pop-up asking which state you're in either when you first enter the site, or when you first add something to your cart.
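For what it's worth, the state lookup itself is trivial. A minimal sketch, with made-up rates and ignoring county/city surtaxes for simplicity:

```python
# Minimal sketch of the state-lookup idea described above. The rates are
# illustrative placeholders, not the actual rates for any state.

STATE_SALES_TAX = {"AK": 0.0, "NY": 0.08875, "CA": 0.0725}

def listed_price(base_price: float, state: str) -> float:
    """Tax-inclusive price to display once the shopper picks a state."""
    return round(base_price * (1 + STATE_SALES_TAX[state]), 2)

print(listed_price(299.00, "AK"))  # 299.0  - no sales tax
print(listed_price(299.00, "NY"))  # 325.54 - tax baked into the listing
```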
 
Joined
Jan 29, 2021
Messages
1,852 (1.33/day)
Location
Alaska USA
I'm well aware of that, but I don't see how that makes it logical for stores to not bake sales tax into their listed prices. A store is generally only located in one state, right? I can't imagine there are many stores straddling a state border. So they should be quite capable of pricing things with what people will actually be paying, as is done literally everywhere else. And for online retailers it would be as simple as having a pop-up asking which state you're in either when you first enter the site, or when you first add something to your cart.
A store located in New York, for example, where sales tax is high, can only charge a customer the sales tax of where said customer lives if the sale is done online. So no matter what store I order from, and no matter where said store is located, I pay no sales tax because the state I live in has none.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Not going to upgrade anytime soon. This goes too fast and as soon as one gets used to a new system, a new architecture comes around, a new platform etc.
This is becoming too much I think. I'll stick to my 5900X and 3070 Ti for the time being.
Upgrading every generation makes no sense anyway - it just makes progress feel slower by chopping it up into tiny bits, while costing tons of money. That's a great PC you've got, and it'll be great for many years still, so no reason to upgrade for a while still.
The clocks are very impressive, but I hope it's not at the cost of power efficiency.
Given the increase in base clock it seems efficiency is maintained at least to some degree, though they're definitely pushing these hard. The chips should all do base clock continuously at TDP, which looks decent (from 3.4GHz @ 105W to 4.5GHz @170W), but bumping TDP from 105W to 170W and PPT from 144W to 230W is still quite a lot. PPT/TDC/EDC tuning will likely be even more useful for Zen4 than it is for Zen3 currently, and no doubt there'll be notable gains by setting lower power limits simply as the chips are scaling much higher in power than previously.
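(For reference, AMD's AM4 rule of thumb is that PPT is roughly 1.35x TDP, which is where both of those PPT figures come from. A quick check:)

```python
# AM4 rule of thumb: PPT (socket power limit) ~= 1.35 x TDP.
# Checking it against the figures quoted above.

def ppt_from_tdp(tdp_w: float) -> float:
    return tdp_w * 1.35

print(ppt_from_tdp(105))  # 141.75 -> the ~144W PPT quoted for Zen3
print(ppt_from_tdp(170))  # 229.5  -> the 230W PPT quoted for Zen4
```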

A store located in New York, for example, where sales tax is high, can only charge a customer the sales tax of where said customer lives if the sale is done online. So no matter what store I order from, and no matter where said store is located, I pay no sales tax because the state I live in has none.
Yes, exactly. Like I said: this is easily solved.
And for online retailers it would be as simple as having a pop-up asking which state you're in either when you first enter the site, or when you first add something to your cart.
Through this, they could easily adjust the listed price to match with the reality of what the customer will be paying. This really isn't complicated at all.
 
Joined
Jun 10, 2021
Messages
19 (0.02/day)
System Name KAAN
Processor AMD 5950X B2
Motherboard Asus Crosshair VIII Formula
Cooling ARCTIC Liquid Freezer II 280
Memory G.SKILL 4000C16 @3666C14 - 4x16GB - Samsung B-Die
Video Card(s) MSI GeForce RTX 3080 SUPRIM X 10G
Storage Kingston KC3000 2TB
Display(s) ASUS ROG Swift PG279Q 27"
Case Phanteks ECLIPSE P600s
Audio Device(s) Audeze Mobius
Power Supply Corsair HX750i
Mouse Logitech G604 LIGHTSPEED
Keyboard Logitech G815
Software Windows 11 (VBS)
I'm really excited for Zen4, but I have a 5900X that I built in March 2021, and rebuilding my whole system now, after a year and a half, may not be justified by the assumed performance increase of a Zen4 platform ... I would have liked a 5900X3D, but nope :(
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I hope they introduce some low power E variants - perhaps they'll do that alongside 3D stacked cache models!? You'll already be paying a bit more for stacked cache, so it may as well be binned for more friendly power at the same time.
 
Joined
Jun 14, 2020
Messages
3,457 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Many possible explanations for this - for example, it could be indicative of the low power limit interfering with the boost algorithms, causing the CPU to be stuck in boost/throttle loops, which always kill performance. If this was the case, it would be quite reasonable for Intel to have fixed this afterwards, which would explain your different results.

Oh, btw, have you heard of this new invention called a link? I have linked to literally every single source I've referred to in this discussion. It would be nice if you could do others the same courtesy as is being done to you. It's not my job to corroborate your statements. Post your sources.
You are right; here is the link from TechSpot's 12700 review. With a 65W power limit it outscores TPU's 12900K at 100W. That's simply preposterous.


Also, here is a 12900K at 125W from Igor's Lab.



But this is the thing: as you keep reformulating this core argument, you keep changing what you are arguing, because this core argument does not make logical sense. How? Simple: the only interpretation of "Golden Cove cores are more efficient than Zen3 cores at the same wattage" that makes logical sense is if you're looking at per-core power, not package power. Yet the only power number you care about - consistently, regardless of what other data is provided - is package power. Package power includes other significant power draws than the cores, and crucially these draws differ between chips and architectures, introducing a variable that ruins your data - you literally can't get per-core power from package power, as there's other stuff mixed in there.

There are two possible logically congruent variants of your argument:
- That The GC core is more efficient than the Zen3 core, on a core-for-core, core power only basis, at the same wattage
- That ADL as implemented, as a full chip, including cores and uncore, is more efficient than Zen3 at the same wattage

The first claim has been quite conclusively disproven by AnandTech's per-core power testing. The GC core in instruction dense workloads can scale to insane power levels, and even in lighter workloads needs notably more power than the highest power a Zen3 core ever reaches in order to eke out a small win.

The second point is crucially more complex, as the answer differs wildly across power levels as the effects of uncore power vs. core power scale, and of course carries with it the problem of an uneven playing field, where every ADL chip is operating at a significant downclock from its stock configuration, which privileges it over the more frugal at stock Zen3 CPUs. And, as has been discussed at massive length above: there is no conclusive, simple answer to this. ADL does indeed have an advantage at relatively light, low threaded workloads. It does not if the workload is instruction dense, or if the number of fully loaded cores exceeds ~4. Though again, due to how different workloads execute differently on different architectures, even these are oversimplified generalizations. The real answer: it's damn complicated, and they each have their strengths and weaknesses.
I'm talking about package power. AnandTech hasn't disproven anything; even if they are just checking core power instead of package power, they haven't done so normalized, have they?

It's been a while since I've seen someone contradict themselves so explicitly within the span of three sentences. Well done, I guess? "I don't care!/If I'm wrong I'll throw my PC out the window!" aren't ... quite in line with each other, now, are they?
I'm just trying to tell you I'm pretty confident it is the case. And I'm confident because I tested, repeatedly. I've seen a tuned-to-the-max 5800X score 16k in CBR23 at 150W, while 8 GC cores need... 65W to match that. Yes, CBR is a good scenario for Alder Lake, but the difference is ridiculously big.
 
Joined
Apr 14, 2022
Messages
745 (0.78/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Acer Nitro XV271UM3B IPS 180Hz
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Keyboard Asus ROG Falchion
Software Windows 11 64bit
Why does the 12400F use the same amount of power as the 5600X?
Both are 6/12, both consume about the same watts, and both score similar numbers. It appears that the 5600X is slightly more efficient - practically no difference - than the 12400.

So does the 12700/12900 have so much better binning that it is twice as efficient as the Ryzens?

It appears to me that the GC cores have similar efficiency to Zen 3, but they are just clocked way higher in order to be faster in apps/benchmarks.

 
Joined
Jun 14, 2020
Messages
3,457 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Why does the 12400F use the same amount of power as the 5600X?
Both are 6/12, both consume about the same watts, and both score similar numbers. It appears that the 5600X is slightly more efficient - practically no difference - than the 12400.

So does the 12700/12900 have so much better binning that it is twice as efficient as the Ryzens?

It appears to me that the GC cores have similar efficiency to Zen 3, but they are just clocked way higher in order to be faster in apps/benchmarks.

The 12400 is a different die from the rest of the lineup and yes, it is pretty much the worst binned Alder Lake. The 12900KS is the best bin and should be the most efficient of them all, but I haven't tested it. According to Igor's Lab though, it requires 124mV less than the 12900K for the same clocks, so yeah, that one will knock efficiency out of the park; we are talking about numbers that Zen 5 might not even be able to match.

Also, TPU's review measures power from the wall, which is not really indicative. When you are testing low wattage parts like that, a 5 or 10W discrepancy from the motherboard makes a huge difference. TPU uses the Maximus Hero for the 12400; just the RGB and the actual screen on that motherboard throw the numbers off. You can check Igor's Lab's review, which tests only CPU power: the 12400 is way more efficient than the 5600X.

It's up to 65% (that's HUGE) more efficient in lightly threaded workloads and around 20-25% more efficient in multithreaded workloads.
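To show why the measurement point matters so much at these wattages, here's a quick sketch - the numbers are invented, not taken from either review:

```python
# Why wall measurements skew low-power comparisons: a toy example.
# All figures are invented placeholders, not review data.

def wall_efficiency(score: float, cpu_w: float, platform_w: float) -> float:
    """Score per watt when measuring at the wall (CPU + board overhead)."""
    return score / (cpu_w + platform_w)

score = 12000
# Same CPU-only draw, two boards with different overhead (VRMs, RGB, screen):
print(f"{wall_efficiency(score, cpu_w=65, platform_w=10):.0f} pts/W")  # modest board
print(f"{wall_efficiency(score, cpu_w=65, platform_w=25):.0f} pts/W")  # flagship board
# A 15W board-level difference shifts measured "efficiency" by roughly
# 15-20% on a 65W part - enough to flip a close comparison either way.
```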



Intel's stock settings push the 12900K way, way past its efficiency point. They are trying to make it compete with the 5950X in MT performance, which it has no business doing imo. In all fairness, AMD's stock settings - as shown by the Zen 4 leaks - will also be out of the park; the only reason they didn't push the wattage with Zen 3 is that they didn't need to. Intel wasn't competing in MT performance with Comet Lake, so AMD decided to play the efficiency card. Now that Intel is pushing them, AMD is also raising the stock wattage.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I'm talking about package power.
Then please, for the love of all that's good in this world, stop going on about "core efficiency". Package power is only indirectly indicative of core efficiency, and to extract core efficiency from package power you must be able to reliably remove uncore power from package power. Without doing so, there is no way whatsoever of knowing how much power the cores are consuming.
AnandTech hasn't disproven anything; even if they are just checking core power instead of package power, they haven't done so normalized, have they?
Normalized for what? Your arbitrary power limits? They're running the chips as configured by Intel, allowing it to boost as high as it wants and the workload demands. And they demonstrated a wide range of power behaviours at these stock settings - in instruction dense POV-Ray, they saw a 71W increase over idle, which they estimate to be a 55-60W increase in core power. On the other hand, in the less instruction dense SPEC workloads they estimated core power at 25-30W. At (at least roughly) the same clocks. At which point it delivered marginally better performance than the 5950X, the cores in which peak at 20.6W in POV-Ray and, similarly to ADL, likely consume a bit less across the SPEC suite.

That demonstrates that, as configured from the factory, at very similar clock speeds, Zen3 is more efficient than ADL, as ADL beats it by ~5-16% while consuming notably more than 5-16% more power. Lowering the power limit will not change ADL's efficiency in this test, because the CPU is nowhere near hitting any reasonable power limit - even a 50W limit would likely deliver roughly the same performance in SPEC, and it will boost just as opportunistically within this limit unless it is also frequency limited.
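Putting the quoted figures side by side makes the arithmetic explicit. The core-power numbers are AnandTech's estimates (and the Zen3 figure is its POV-Ray peak, which if anything flatters ADL here), so treat the result as rough:

```python
# The quoted AnandTech-based figures, side by side. Core-power numbers
# are that article's estimates; the Zen3 value is its POV-Ray per-core
# peak, which is conservative toward ADL in this comparison.

adl_core_w = (25 + 30) / 2   # est. 12900K P-core power across SPEC
zen3_core_w = 20.6           # 5950X per-core peak (POV-Ray)
perf_ratio = 1.10            # midpoint of the ~5-16% ADL performance lead

power_ratio = adl_core_w / zen3_core_w
print(f"power: +{power_ratio - 1:.0%}, perf: +{perf_ratio - 1:.0%}")
print(f"ADL perf/W relative to Zen3: {perf_ratio / power_ratio:.2f}")
# ~+33% power for ~+10% performance -> ~0.82x the perf/W of the Zen3 core.
```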
I'm just trying to tell you I'm pretty confident it is the case.
You're so confident that you're heavily emotionally invested in the outcome, yes, I see that. Doesn't change what I said above.
And I'm confident because I tested, repeatedly. I've seen a tuned-to-the-max 5800X score 16k in CBR23 at 150W, while 8 GC cores need... 65W to match that. Yes, CBR is a good scenario for Alder Lake, but the difference is ridiculously big.
But, even assuming that the numbers you're constantly citing out of nowhere are accurate, to repeat myself for the n'th time: this comparison is deeply, deeply flawed. Heck, this is far more problematic than the purportedly "fundamentally flawed" testing you've been complaining about. Why? 'Cause you're comparing one clocked-to-the-rafters, pushed to extremes tuning of one chip, with a heavily power limited, and thus also clock limited, tuning of another. How does a 5800X perform at 65W? How do each of them perform across a range of sensible wattages? How do they perform outside of the one application that you love to cite because it can be run in one click and performs favorably on your platform of choice?

The 12400 is a different die from the rest of the lineup and yes, it is pretty much the worst binned Alder Lake. The 12900KS is the best bin and should be the most efficient of them all, but I haven't tested it. According to Igor's Lab though, it requires 124mV less than the 12900K for the same clocks, so yeah, that one will knock efficiency out of the park; we are talking about numbers that Zen 5 might not even be able to match.

Also, TPU's review measures power from the wall, which is not really indicative. When you are testing low wattage parts like that, a 5 or 10W discrepancy from the motherboard makes a huge difference. TPU uses the Maximus Hero for the 12400; just the RGB and the actual screen on that motherboard throw the numbers off. You can check Igor's Lab's review, which tests only CPU power: the 12400 is way more efficient than the 5600X.

It's up to 65% (that's HUGE) more efficient in lightly threaded workloads and around 20-25% more efficient in multithreaded workloads.



Intel's stock settings push the 12900K way, way past its efficiency point. They are trying to make it compete with the 5950X in MT performance, which it has no business doing imo. In all fairness, AMD's stock settings - as shown by the Zen 4 leaks - will also be out of the park; the only reason they didn't push the wattage with Zen 3 is that they didn't need to. Intel wasn't competing in MT performance with Comet Lake, so AMD decided to play the efficiency card. Now that Intel is pushing them, AMD is also raising the stock wattage.
There is something rather iffy going on with those Igor's Lab benchmarks - at least in terms of AMD power consumption. He reports his 5950X consuming 217W, which is ... well, crazy. That's the power consumption of a 5950X tuned to the extreme, with zero regard for efficiency, and it is certainly not reflective of anything resembling stock behaviour. If Igor didn't do that manually, then he should throw his AM4 testing motherboard out that window you're talking about and pick one that isn't garbage. A stock 5950X doesn't exceed 144W whatsoever - though if measuring at the EPS12V cable you'd also need to include VRM conversion losses in that sum - but that would be roughly equal across all platforms.

Edit: looking at Igor's test setup, the motherboard is configured with "PBO: Auto". In other words, it's running a motherboard-dependent auto OC. That is not a stock-v-stock comparison. And that is some pretty bad test methodology. This essentially ruins any and all efficiency comparisons based on these numbers, as the motherboard is clearly overriding all power limits and pushing the chips far beyond stock power and voltage.


It's also kind of telling that you're very set on being as generous as possible with Intel, repeating that the 12400 is "the worst possible bin" of ADL, etc, yet when coming to AMD, you consistently compare against the 5800X - by far the most power hungry bin of Zen3, by a massive margin. Remember, it has the same power limits as the 5900X and 5950X, with 50% and 100% more cores respectively, while clocking only a bit higher at base. Again: look at Anandtech's per-core power draw testing. The 5800X consumes notably more power per core in an 8-core load than both of those CPUs, while also clocking lower. So, for Intel, you're ever so generous, while for AMD you're consistently comparing against the worst bins. One might almost say that this is, oh, I don't know, a bit biased?

You're also wrong about your 12400/12900K binning statements - they're not the same die, so they're not comparable bins at all. They're different silicon implementations of the same architecture, and each represents a bin of its implementation. It's entirely possible that the 12400 is a low grade bin of its silicon, but unless you've got detailed clock and power scaling data for several examples of both chips, you can't make comparisons like that.

There's also the complexity of boost algorithms and thermal/power protection systems to take into account, which can throw off simple "more power = faster" assumptions. For example, my 5800X (from testing I did way back when) runs faster in Cinebench when limited to 110 W PPT than when let loose at 142 W PPT - and significantly so, by about 1000 points. Why? I can't say for sure, as I have neither the tools, skills nor time to pinpoint this, but if I were to guess, I'd say the higher power limit leads to higher boost power, meaning higher thermals, more leakage, and subsequently lower clocks to protect the chip. Zen 3 has a quite aggressive chip protection system that constantly monitors power, current, voltage, clock frequency and more, adjusting it all on the fly, which makes tuning complex, non-linear, and highly dependent on cooling.
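To put rough numbers on how counterintuitive that gets - the scores below are assumed round figures, and only the roughly 1000-point gap reflects my actual testing:

```python
# Illustration of the non-linearity, with assumed round scores; only the
# ~1000-point gap between the two runs reflects actual testing.
runs = {
    "142 W PPT": (15_000, 142),   # (CB score, package power limit)
    "110 W PPT": (16_000, 110),
}
for label, (score, watts) in runs.items():
    print(f"{label}: {score / watts:.0f} pts/W")
# 142 W PPT: 106 pts/W
# 110 W PPT: 145 pts/W -> faster AND ~38% more efficient at the lower limit
```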
 
Normalized for what? Your arbitrary power limits? They're running the chips as configured by Intel, allowing it to boost as high as it wants and the workload demands.
Normalized for either consumption or performance. Great for them that they ran it as configured by Intel, but that's not my argument at all.

But, even assuming that the numbers you're constantly citing out of nowhere are accurate, to repeat myself for the nth time: this comparison is deeply, deeply flawed. Heck, this is far more problematic than the purportedly "fundamentally flawed" testing you've been complaining about. Why? 'Cause you're comparing one clocked-to-the-rafters, pushed-to-extremes tuning of one chip with a heavily power-limited, and thus also clock-limited, tuning of another. How does a 5800X perform at 65 W? How does each of them perform across a range of sensible wattages? How do they perform outside of the one application that you love to cite because it can be run in one click and performs favorably on your platform of choice?
You think a comparison normalized for performance is deeply flawed? I mean, come on, you cannot possibly believe that. I don't believe you believe that. I said it before: normalized for consumption, 8 GC cores are around 20-25% more efficient; normalized for performance, the difference is over 100%. So yeah, the 5800X at 65 W can get up to 13-14k.
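If it helps, here's the distinction spelled out with made-up numbers - none of these are measurements; they just show why the two normalizations can give wildly different percentages for the same pair of chips:

```python
# The two normalizations, with invented numbers (not measurements).
# Iso-power: fix the wattage, compare scores.
a_score, b_score, watts = 15_000, 12_000, 65
print(f"iso-power: {a_score / b_score - 1:.0%} ahead")                    # 25%

# Iso-performance: fix the score, compare the power needed to reach it.
# Assume the slower chip needs ~140 W for 15k, because the last few
# hundred MHz cost disproportionate voltage/power on the V/f curve.
a_watts, b_watts = 65, 140
print(f"iso-performance: {b_watts / a_watts - 1:.0%} more power needed")  # 115%
```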

Again, performance-normalized, the difference will still be huge. You can put the 5800X at 50 W for all I care; 8 GC cores will probably match that performance at 30 W. I mean, 2 days left - I'll be back and I can test it ;)

Outside of that one application, Zen 3 is even more comedically bad. I've tested gaming performance (granted, only one game): 8 GC cores at 25 W (yes, power-limited to 25) match a 5800X hitting 90+ W in Far Cry 6. They both scored around 110 fps, if I remember correctly, at 720p ultra + RT.

It's also kind of telling that you're very set on being as generous as possible with Intel, repeating that the 12400 is "the worst possible bin" of ADL, etc., yet when it comes to AMD, you consistently compare against the 5800X - by far the most power-hungry bin of Zen 3, by a massive margin. Remember, it has the same power limits as the 5900X and 5950X, which have 50% and 100% more cores respectively, while clocking only a bit higher at base. Again: look at AnandTech's per-core power draw testing. The 5800X consumes notably more power per core in an 8-core load than both of those CPUs, while also clocking lower. So, for Intel, you're ever so generous, while for AMD you're consistently comparing against the worst bins. One might almost say that this is, oh, I don't know, a bit biased?
I've no idea what you're talking about. I'm comparing core- and power-normalized, so it doesn't matter which Zen SKU the comparison is done with. The 5950X with one CCD will perform pretty similarly to the 5800X at the same wattages, no? So your criticism is completely unwarranted.

And yes, I've tested a 12900K with only 6 GC cores active at 65 W; it scored way more than the 12400 does, so it's pretty apparent the 12400 is a horrible bin. I think I got a 14k score, but again, I don't remember off the top of my head.


There is something rather iffy going on with those Igor's Lab benchmarks - at least in terms of AMD power consumption. He reports his 5950X consuming 217 W, which is ... well, crazy. That's the power consumption of a 5950X tuned to the extreme, with zero regard for efficiency, and it is certainly not reflective of anything resembling stock behaviour. If Igor didn't do that manually, then he should throw his AM4 test motherboard out that window you're talking about and pick one that isn't garbage. A stock 5950X doesn't exceed its 142 W PPT whatsoever - and though measuring at the EPS12V cable would add VRM conversion losses to that sum, those would be roughly equal across all platforms.

Edit: looking at Igor's test setup, the motherboard is configured with "PBO: Auto". In other words, it's running a motherboard-dependent auto OC. That is not a stock-vs-stock comparison, and it is some pretty bad test methodology. It essentially ruins any and all efficiency comparisons based on these numbers, as the motherboard is clearly overriding all power limits and pushing the chips far beyond stock power and voltage.
But I'm not using Igor's Lab for efficiency comparisons. I'm using it to show you that a 12900K at 125 W matches or outperforms a 5900X even in heavy MT workloads. Which is the exact opposite of what TPU showed, where a 12900K at 125 W is matched by the 12600K and loses to a 65 W 12700. If you still can't admit that the TPU results are absolutely, completely, hilariously flawed, I don't know what else to tell you, man...
 
If you still can't admit that the TPU results are absolutely, completely, hilariously flawed, I don't know what else to tell you, man...
And if you think that, I say we get the resident TechPowerUp benchmark king in this thread to give his side of the story. Hey @W1zzard, let's see you give some input on this.
 
And if you think that, I say we get the resident TechPowerUp benchmark king in this thread to give his side of the story. Hey @W1zzard, let's see you give some input on this.
Go ahead, I hope he replies. I guarantee you 100% the benchmarks are flawed. It could be a BIOS thing or something else, but it's most definitely, without a shadow of a doubt, flawed. I'm not the only one saying it - there's a thread on Tom's Hardware also making fun of those benchmarks, and even in the discussion of that very benchmark there were people doubting the results. That's because they just don't make any sense; the 12600K can't be more efficient than the 12900K at the same wattage, it's hilariously obvious. The flaw is so monumental - imagine if you clocked the 5600X to 125 W and it suddenly matched the 5950X. Well, that's what you're looking at with those numbers...

I've tested three 12900Ks on four motherboards at 125 W; all scored pretty much the same in CB R23, between 23,500 and 24,500. TPU scored 18k, lol.
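Just to quantify that gap, using the score range I stated above against the TPU figure:

```python
# My 125 W CB R23 range vs. the TPU figure, as percentages.
tpu = 18_000
for mine in (23_500, 24_500):
    print(f"{(mine - tpu) / tpu:.0%} above TPU")   # 31% and 36%
```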
 