
Intel Core i9-12900K

Joined
Mar 8, 2018
Messages
30 (0.01/day)
Location
Italy
System Name HAL9000
Processor Intel Core I7 2600K
Motherboard ASUS P8Z68-V Pro
Cooling Scythe Mugen 3
Memory Corsair Vengeance DDR3 1600 4x4GB
Video Card(s) ASUS Geforce GTX560Ti DirectCU II
Storage Seagate Barracuda 750GB
Display(s) ASUS VW248H
Case Cooler Master HAF 912 Plus
Audio Device(s) Logitech S220
Power Supply Seasonic M12II 620 EVO
Mouse Logitech G300
Keyboard Logitech K200
Software Windows 7 Professional 64bit
For 3200 MHz sticks, JEDEC is 22-22-22

I have them in this laptop I'm typing from.
My HyperX Fury kit has a 3200 JEDEC profile with 19-21-21 timings.

Edit: according to the HX432C18FB2K2/16 data sheet, the 3200 JEDEC profile is 18-21-21.
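(Putting those CL numbers in absolute terms makes them easier to compare: first-word latency in nanoseconds is just 2000 × CL / (MT/s). A quick sketch; the CL16 entry is a hypothetical enthusiast XMP kit added purely as a reference point:)

```python
# First-word CAS latency: t = CL / f_clock, with f_clock = (MT/s) / 2,
# so t[ns] = 2000 * CL / (MT/s).
def cas_ns(mt_s: int, cl: int) -> float:
    return 2000 * cl / mt_s

profiles = [
    ("DDR4-3200 JEDEC, CL22", 3200, 22),
    ("DDR4-3200 JEDEC, CL18 (HX432C18FB2K2/16)", 3200, 18),
    ("DDR4-3200 XMP, CL16 (hypothetical enthusiast kit)", 3200, 16),
]
for name, speed, cl in profiles:
    print(f"{name}: {cas_ns(speed, cl):.2f} ns")  # 13.75, 11.25 and 10.00 ns
```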
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
My HyperX Fury kit has a 3200 JEDEC profile with 19-21-21 timings.

Edit: according to the HX432C18FB2K2/16 data sheet, the 3200 JEDEC profile is 18-21-21.
Hm, that's weird. I also see the datasheet lists it as "JEDEC/PnP", which might allude to it not being an actual JEDEC spec, but interpreting what exactly that means is going to be guesswork either way. Also odd to see that that profile matches the first XMP profile, at the same voltage - I guess some subtimings might be different, but that seems oddly redundant. Even more odd is the second XMP profile at 2933 - I don't think I've ever seen an XMP profile lower than a JEDEC profile.
You aren't making any sense here. OK, so MS has known about these chips for at least 3+ years, right? So Apple, having made the chips & OS, has had at least 5+ years (on top of their experience with Axx chips & iOS) to make the M1 on desktops a real winner! Heck, there were rumors as far back as 2016-17 that these chips were coming, not to mention they are optimizing for essentially a single closed platform.

Do you have any idea about the literal gazillion different combinations of hardware & software (applications) that Win11 has to work on? You think Intel, MS or both combined can replicate this in a lab? I've been essentially beta testing Windows (releases) for 10+ years now & your posts just show how easy you make it sound ~ except it's not :rolleyes:

No it's not, stop making things up. You seem to be on a crusade to somehow make it look like this would be child's play if MS (or Intel) had done it properly! You can have all the money in the world and it won't mean a thing; it takes time ~ that's the bottom line, there's no magic pixie dust you can sprinkle to make everything work the way it should :shadedshu:
I never said it doesn't take time. I specifically argued for why MS has had time. And as someone else pointed out, they've also had Lakefield as a test bed for this, extending that time quite a bit back. Also, the amount of hardware combinations isn't especially relevant - we're talking about scheduler optimizations here. The main change is that the scheduler goes from being aware of "real" and SMT threads, i.e. high and low performance threads + preferred/faster cores (but all being the same in principle), to a further differentiation between high and low power cores. Anandtech covers this in their review, in their W10 vs. W11 testing.

Also, whether or not Apple has had more time (which they undoubtedly have) doesn't fundamentally affect whether MS has had sufficient time to make this work. And the test results show that for the most part they have. There are indeed applications where the E cores are left un(der)utilized or some other oddity, but for the most part it works as advertised. That alone speaks to the success of the scheduler changes in question. A further note here: MS already claims that this will work fine on W10 as well, without scheduler optimizations, with the major difference being run-to-run variance as the scheduler might at times shuffle threads around in a less optimal manner, as it only treats the E cores as low performance rather than low power.
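(For anyone who wants to poke at the P-core/E-core split themselves: the effect of steering work onto one cluster or the other can be crudely emulated by hand with CPU affinity. This is blunt manual pinning, not what the Thread Director or the scheduler actually does, and the core numbering below is an assumption about a hypothetical 8P+8E part - verify your own topology before using it:)

```python
import psutil

# Assumed topology for a hypothetical 8P+8E hybrid chip: logical CPUs 0-15 are
# the P-cores (with SMT), 16-23 are the E-cores. Check your own system first.
P_CORES = list(range(0, 16))
E_CORES = list(range(16, 24))

def pin_to_e_cores(pid: int) -> None:
    """Confine a background/low-priority process to the E-core cluster."""
    psutil.Process(pid).cpu_affinity(E_CORES)

def pin_to_p_cores(pid: int) -> None:
    """Give a latency-sensitive process the P-core cluster."""
    psutil.Process(pid).cpu_affinity(P_CORES)

if __name__ == "__main__":
    me = psutil.Process()
    pin_to_e_cores(me.pid)        # treat the current process as "background"
    print(me.cpu_affinity())      # -> [16, 17, ..., 23]
```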

But again: I never said this doesn't take time. It obviously does. If you read what I said that should really be abundantly clear.
Imagine the crazy thing: I actually use my PC too, I just don't leave Cinebench running while I play games.
Ah, yes, the old caricatured straw man argument. Running out of actual responses? Did I say I did? Or did I say that I don't think a modern PC should necessitate you spending time managing your background processes to any significant degree?
BOINC, HandBrake, Oracle VM, 7 zip...
BOINC is an opportunistic "compute sharing" app. Yes, there are people who set up PCs purely for that, but for >99.9999% of users it's not a main workload - and it's a prioritized workload for even fewer, as the core concept is for it to use leftover, unused resources.

Handbrake is for the most part something run rarely and for relatively short periods of time. Most people don't transcode their entire media library weekly.

How many people run several VMs with heavy CPU loads at the same time on a consumer platform? Again, these people are better served with server/workstation hardware, and ST performance is likely to be of little importance to them. Or if it is, they'll run those on a separate PC.

How many long-term compression/decompression operations do you run? Yes, some people have to regularly compress and decompress massive filesets, but even then it's an intermittent and most likely "offline" workload, i.e. something either run in the background while doing other things or run overnight etc. For the most part, compression/decompression is something done intermittently and in relatively short bursts, where the performance difference between a 32c64t CPU and a 16c24t (or 16c32t) at higher clocks will be of negligible importance compared to the other tasks the PC is used for.
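(And when one of those bursts does happen, compression is one of the places where extra threads pay off directly. A minimal sketch of driving 7-Zip from a script with multithreading enabled; it assumes a 7z binary on the PATH, and the archive/folder names are placeholders:)

```python
import subprocess

# -mx=9: maximum compression level, -mmt=on: use all available threads.
# Assumes "7z" is on PATH; archive and folder names below are placeholders.
def compress(archive: str, source: str) -> None:
    subprocess.run(["7z", "a", "-mx=9", "-mmt=on", archive, source], check=True)

compress("backup.7z", "my_folder")
```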
Um, FX 8350 existed (8C/8T), so did i7 920 (6C/12T) and Phenom X6 1055T (6C/6T).
Yep. And the FX-series at best performed on par with 4c4t i5s, and was beat soundly by 4c8t i7s (often at much lower power draws). The i7-920 was 4c8t. There were 8c and 6c Nehalem Xeons, but no Core chips above 4c8t - and those were even arguably a proto-HEDT platform due to their 130W TDPs and triple-channel memory. And the Phenom X6 was also quite slow compared to contemporary Core chips, rendering its core advantage rather irrelevant - but some people did indeed prioritize it for running more MT tasks, as AMD delivered better value in those cases.
We have problems with your statements. First of all, it's AWARD that develops the BIOS with help from AMD; the CPU package is mostly made by TSMC, or previously GlobalFoundries, with minimal help from AMD; many things are also not exclusively AMD's business. Just making a TR platform when they already have the Zen architecture likely doesn't take an army of senior engineers. Same goes for anything else. You said it takes millions of dollars, and that's true, but you seem to imply that it takes hundreds of millions, which is most likely not true.
Wait, "we" have problems? Who is "we"? And secondly, where on earth are you getting "hundreds of millions" from? I have said no such thing, so please get that idea out of your head. It's purely your own invention.

I'm saying that spending millions of R&D dollars on low-volume products, even in high-margin segments, is often a poor financial choice, and the size of the applicable market is extremely important in whether or not this happens. And as the real-world advantages of HEDT platforms have shrunk dramatically since the launch of Zen1 (8c16t) and then Zen2 (16c32t) for MSDT, making these products is bound to drop ever lower on the list of priorities. The launch of the TR-W series underscores this, as these are essentially identical to Epyc chips, cutting the need for new packaging and developing new trace layouts for a 4000+ pin socket, while also addressing the remaining profitable part of the HEDT market: high end professional workstation users who can make use of tons of threads and/or memory bandwidth.

Also, are you actually implying that AMD using external companies to develop parts of their solutions makes it noticeably cheaper? Because that's nonsense. It makes it easier and lets them have a lower number of specialized staff (which might not have tasks at all times). This is a cost savings, but not one that actually goes into the equation for R&D for a product like this - it still takes the same number of people the same time - plus, of course, these external companies need consultants and supervisors from AMD to ensure that they're sticking to specifications and following the guidelines correctly.
I literally told you that beyond making new arch, you just scale it for different product lines and SKUs. It's not that hard to make TR, when they make Ryzen and EPYC already.
And I told you that this is wrong, for reasons detailed above. The chiplet approach makes this a lot cheaper and simpler, but it does not make developing a whole new platform based on these chiplets cheap or easy.
Still easier to cool at stock speeds with air cooler than 12900K.
Lol, no. Both are manageable, but pushing it. Also, though this is sadly not followed up in the article, Anandtech's review indicates that thermal readings for the 12900K are erroneous:
Don’t trust thermal software just yet, it says 100C but it’s not
I'm very consistent, you are sloshing around from one argument to another.
As demonstrated above, you are clearly not. You keep shifting between several points of reference and ignoring the differences between them.
I said it's doable with phase change cooler.
And a 12900K can't be OC'd to hell and back with a phase change cooler? Again, for some reason you're insisting on unequal comparisons to try and make your points. You see how that undermines them, right?
It's quite likely that the Thread ... Director? is sub-optimal and will be improved in future generations
Sure as hell you did. What else is "thread director" supposed to mean? The OS scheduler? Or the CPU's own thread management logic?
I can't believe I have to spell this out, but here goes: "Sub-optimal" means not perfect. "Not perfect" is not synonymous with "has significant issues". It means that there is room for improvement. So, what I am saying is: it's quite likely that the Thread Director (or whatever it's called) can be improved, yet we have no evidence of it failing significantly. The outliers we can see in reviews are more likely to be fixable in software, OS or microcode updates than to require hardware changes.
I literally used search function in web browser.
So, for a page with 99% of its content in pictured graphs, you do a word search. 10/10 for effort.
Couldn't you pick any less straightforward link? I just needed a riddle today.
Seriously, if that's too hard for you to parse - I was specifically referencing final scores, not part scores - I can't help you there.
Anyway, that's just a single benchmark and it's super synthetic.
I'm sorry, but you're making it obvious that you have zero idea what you're saying here. SPEC is not a single benchmark, it's a collection of dozens of benchmarks. And it's not "super synthetic", it is purely based on real-world applications. You can read all about the details for every single sub-test here. There is some overlap between the applications used as the basis for SPEC subtests and AT's benchmark suite as well, for example POVray - though of course the specific workloads are different.
Basically like Passmark and Passmark's scores rarely translate to real world performance or performance even in other synthetic tasks.
Again: no. Not even close. See above. We could always discuss the validity of SPEC as a benchmark, as it clearly doesn't cover every conceivable (or even common) use case of a PC, but that's not what you're doing here.
This is the link that you should have used:

New i9 vs. old 10900K, percent faster:
In Agisoft 41%
In 3Dpm -AVX 0%
In 3Dpm +AVX 503%
In yCruncher 250m Pi 30%
In yCruncher 2.5b Pi 66%
In Corona 39%
In Crysis 1%
In Cinebench R23 MT 69% (where did you get 30% performance difference here?)
Did I mention a specific Cinebench score? Where? Are you confusing me with someone else?

Also, the tests you posted above average out to a 91% advantage for the 12900K, so ... if anything, that further undermines your postulation that an overclocked 10900K would be faster? I really don't see what you're getting at. Even in the cases where they are tied, you could OC both CPUs roughly equally, and the differences would be negligible.
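(A quick sanity check of the listed deltas, using only the numbers quoted above: the plain arithmetic mean of those speedups comes out around +94%, in the same ballpark as the ~91% figure, while the geometric mean - which is far less sensitive to the 3DPM +AVX outlier - lands near +60%, roughly matching the figure that comes up just below:)

```python
from math import prod

# Per-benchmark advantage of the 12900K over the 10900K, in percent, as listed above.
deltas = {"Agisoft": 41, "3DPM -AVX": 0, "3DPM +AVX": 503, "yCruncher 250m": 30,
          "yCruncher 2.5b": 66, "Corona": 39, "Crysis": 1, "Cinebench R23 MT": 69}
ratios = [1 + d / 100 for d in deltas.values()]

arith = sum(ratios) / len(ratios)          # simple average of the speedups
geo = prod(ratios) ** (1 / len(ratios))    # geometric mean, less outlier-sensitive
print(f"arithmetic mean: +{(arith - 1) * 100:.0f}%")   # ~ +94%
print(f"geometric mean:  +{(geo - 1) * 100:.0f}%")     # ~ +60%
```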
And I'm too lazy to calculate the rest. So you were full of shit and could argue well,
Nice to see you're so invested in having a constructive and civil discussion.
I literally had to provide an argument to myself to realize that I'm full of shit too. Fail. The 10900K is more like 60% behind the i9 12900K; no reasonable overclock will close a gap like that.
.... so, what I was saying all along was actually correct? Oh, no, of course, we were somehow both wrong, it's just that your new conclusion somehow aligns with what I've been saying all along. I mean, come on, man. Chill out. Being wrong is fine. It happens to all of us, all the time.
Not at all, it's just you who can't follow simple reasoning. If you buy 12900K, you have strong single core perf, but weaker multicore perf.
No, you're the one failing to grasp that this weaker MT perf only applies in a relatively limited number of use cases. You tried to address this above, yet you listed four workloads. There are clearly more, but very few of them are likely to be significant time expenditures for even an enthusiast user. Hence my point of the ST perf being more important, and the relative importance of the lower MT thus being lower, and the 12900K providing better overall performance even if it does lose out in applications that scale well above 16 cores.
If you buy a 3970X you get weaker single-core perf and strong multi-core perf. If you want the best of both, you overclock the 3970X to make it balanced. Simple. Except that I found out that the 12900K's advantage is much bigger than I thought, the 3970X is actually more antiquated than I thought, and it's Zen, meaning it doesn't overclock that well.
So ... you agree with what I've been saying, then? Because that's exactly what I've been arguing. (Also, the 3970X is Zen2, not Zen, so it OC's better than TR 1000 and 2000, but it still doesn't OC well due to the massive power requirements of OCing such a massive core and the difficulty of powering and cooling such a dense arch).
Intel could have just launched it on the HEDT platform; those guys don't care about power usage and heat output as much, and that would surely mean cheaper LGA 1700 motherboards. It would have been more interesting as a 16P/32E part.
But they didn't. Which tells us what? I'd say several things:
- The HEDT market is much smaller than MSDT, and less important
- This is reinforced by increasing MSDT core counts, which have taken away one of the main advantages of HEDT over the past decade
- Launching this on HEDT would leave Intel without a real competitor to the 5950X, making them look bad
- HEDT today, to the degree that it exists, only makes sense in either very heavily threaded applications or memory-bound applications

I don't doubt that Intel has some kind of plans for a new generation of HEDT at some point, but it clearly isn't a priority for them.
No Alder Lake is sold where I live. The i5 12600K only makes sense for a wealthy buyer; the i5 12400 will deliver most of the performance at a much lower cost. The Ryzen 5600X has been a complete and utter failure as a value chip since day one, but Lisa said it's the "best value" chip on the market while ignoring the 10400 and 11400, and people bought it in droves.
This makes it seem like you're arguing not on the basis of the actual merits of the products, but rather based on some specific regional distribution deficiencies. Which is of course a valid point in and of itself, but not a valid argument as to the general viability of these chips, nor how they compare in performance or efficiency to other alternatives. You might have less trouble being understood if you make the premises for your positions clearer? I also completely agree that the 5600X is expensive for what it is and that AMD needs to start prioritizing lower end products (preferably yesterday) but at least it has come down in price somewhat in many regions - but the 11400 and 11600 series that Intel launched as a response are indeed better value. Quite the reversal from a few years ago! I'm also looking forward to seeing lower end Zen3 and ADL chips, as we finally have a truly competitive CPU market.
 
Joined
Mar 8, 2018
Messages
30 (0.01/day)
Location
Italy
System Name HAL9000
Processor Intel Core I7 2600K
Motherboard ASUS P8Z68-V Pro
Cooling Scythe Mugen 3
Memory Corsair Vengeance DDR3 1600 4x4GB
Video Card(s) ASUS Geforce GTX560Ti DirectCU II
Storage Seagate Barracuda 750GB
Display(s) ASUS VW248H
Case Cooler Master HAF 912 Plus
Audio Device(s) Logitech S220
Power Supply Seasonic M12II 620 EVO
Mouse Logitech G300
Keyboard Logitech K200
Software Windows 7 Professional 64bit
Hm, that's weird. I also see the datasheet lists it as "JEDEC/PnP", which might allude to it not being an actual JEDEC spec, but interpreting what exactly that means is going to be guesswork either way. Also odd to see that that profile matches the first XMP profile, at the same voltage - I guess some subtimings might be different, but that seems oddly redundant. Even more odd is the second XMP profile at 2933 - I don't think I've ever seen an XMP profile lower than a JEDEC profile.
I searched a lot for a 3200 JEDEC kit for the Ryzen 2400G. I found one and bought it in 2018, but never installed it (I bought a new home and all the PC parts are still in a box).
I found this on the internet; it seems to be the single-channel version of the same memory.
 

Attachments

  • Kingston 3200.png
Joined
Dec 12, 2012
Messages
777 (0.18/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
This is progress, you cannot take that away from them. But man, they are years behind AMD when it comes to efficiency.

And for me the days of chasing the highest performance are long gone. All I care about is efficiency. I have no interest in running a CPU at 200+ watts and a GPU at 300+ watts. No way.

Maybe the 10 nm process can mature a bit before the 13th gen and they can do what AMD does, which is to have top-quality silicon in flagship models.
The 5800X, 5900X and 5950X basically have the same power consumption, with the 5950X being ~70% faster, and much cooler because it has two dies instead of one. This is what I want to see from Intel, amazing performance at 125 watts.
 
Joined
Mar 8, 2018
Messages
30 (0.01/day)
Location
Italy
System Name HAL9000
Processor Intel Core I7 2600K
Motherboard ASUS P8Z68-V Pro
Cooling Scythe Mugen 3
Memory Corsair Vengeance DDR3 1600 4x4GB
Video Card(s) ASUS Geforce GTX560Ti DirectCU II
Storage Seagate Barracuda 750GB
Display(s) ASUS VW248H
Case Cooler Master HAF 912 Plus
Audio Device(s) Logitech S220
Power Supply Seasonic M12II 620 EVO
Mouse Logitech G300
Keyboard Logitech K200
Software Windows 7 Professional 64bit
This is progress, you cannot take that away from them. But man, they are years behind AMD when it comes to efficiency.

And for me the days of chasing the highest performance are long gone. All I care about is efficiency. I have no interest in running a CPU at 200+ watts and a GPU at 300+ watts. No way.
Same here; my time for triple-A gaming is gone. Now I look for efficiency and integration, and for that AMD is still on top for me.
The real star of Alder Lake seems to be the Gracemont cores.
I would really like to see a 4/8 E-core SoC with cheap socketed motherboards, like AMD did for Jaguar on the AM1 platform.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I would really like to see a 4/8 E-core SoC with cheap socketed motherboards, like AMD did for Jaguar on the AM1 platform.
That would be really interesting. It could get away with a very small (and thus efficient) ring bus as well - just the IMC, PCIe, and two core clusters. Knowing Intel though, this would be a soldered-only product, and likely a Xeon of some sort. But if they made a DIY/consumer version of this, it would be really cool.
 
Joined
Mar 8, 2018
Messages
30 (0.01/day)
Location
Italy
System Name HAL9000
Processor Intel Core I7 2600K
Motherboard ASUS P8Z68-V Pro
Cooling Scythe Mugen 3
Memory Corsair Vengeance DDR3 1600 4x4GB
Video Card(s) ASUS Geforce GTX560Ti DirectCU II
Storage Seagate Barracuda 750GB
Display(s) ASUS VW248H
Case Cooler Master HAF 912 Plus
Audio Device(s) Logitech S220
Power Supply Seasonic M12II 620 EVO
Mouse Logitech G300
Keyboard Logitech K200
Software Windows 7 Professional 64bit
That would be really interesting. It could get away with a very small (and thus efficient) ring bus as well - just the IMC, PCIe, and two core clusters. Knowing Intel though, this would be a soldered-only product, and likely a Xeon of some sort. But if they made a DIY/consumer version of this, it would be really cool.
I'm sure it will be a BGA platform, and I'm not a big fan of those. I rushed to buy my last laptop with the 4th-gen Core series (now stuck with a DirectX 11-only iGPU) before everything went to BGA CPUs.
I'm still a big fan of all the Atom-type CPUs, even the first potato generation, but I never bought any of them; I opted for AMD for the better GPU.
 
Joined
Jan 14, 2019
Messages
12,582 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
This is progress, you cannot take that away from them. But man, they are years behind AMD when it comes to efficiency.

And for me the days of chasing the highest performance are long gone. All I care about is efficiency. I have no interest in running a CPU at 200+ watts and a GPU at 300+ watts. No way.

Maybe the 10 nm process can mature a bit before the 13th gen and they can do what AMD does, which is to have top-quality silicon in flagship models.
The 5800X, 5900X and 5950X basically have the same power consumption, with the 5950X being ~70% faster, and much cooler because it has two dies instead of one. This is what I want to see from Intel, amazing performance at 125 watts.
As much as I agree with you, there are a few key points about efficiency that need to be cleared up (not necessarily for you, but for anyone else on this forum).
  1. AMD's TDP figures have nothing to do with power consumption. While Zen 3 is still ahead of Intel in terms of efficiency, their 65 W CPUs have a power limit of 88 W, and their 105 W ones a limit of 142 W. It's easy to be ahead when your TDP numbers mean nothing (for the consumer at least). With the ASUS Core Optimiser enabled in the BIOS, my (sold) 5950X consumed somewhere around 180-190 W in all-core Cinebench and ran slightly above 80 °C with a 240 mm AIO. It was crazy fast, but still... Just saying.
    • My Core i7-11700 at stock scores the same as an R5 3600. One could say that's a terrible result, considering we're putting an 8-core chip against an older 6-core. My argument is that the i7 achieves this score while being limited to 65 W at 2.8 GHz, while the 3600 maxes out its 88 W limit at around 4 GHz. The point is, Rocket Lake's efficiency at 14 nm can exceed that of Zen 2 at 7 nm with the proper settings.
  2. Intel's larger chips (compared to AMD's) are easier to cool when configured to consume the same power as Zen 2/3, due to the larger area to dissipate the heat (and better contact with the IHS, maybe?).
  3. Intel does have well-performing chips at 125 W. They achieve this by limiting long-term power consumption (PL1) to that value. At least this has been the case before Alder Lake. Sure, performance at this level doesn't match Zen 3, but it's plenty for gaming and everyday use.
    • Intel has claimed back the gaming crown by ignoring their own TDP number and configuring the 12900K to sit far past its efficiency sweet spot with a ridiculously high PL1 - at a place where no chip should sit, in my opinion. I'd be curious to see how it performs when it's actually limited to 125 W.
With all that said, I think this is an exciting time in CPU history, with both Intel and AMD pumping out some very interesting architectures - and very compelling ones from two very different points of view. :)
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
This is progress, you cannot take that away from them. But man, they are years behind AMD when it comes to efficiency.

And for me the days of chasing the highest performance are long gone. All I care about is efficiency. I have no interest in running a CPU at 200+ watts and a GPU at 300+ watts. No way.

Maybe the 10 nm process can mature a bit before the 13th gen and they can do what AMD does, which is to have top-quality silicon in flagship models.
The 5800X, 5900X and 5950X basically have the same power consumption, with the 5950X being ~70% faster, and much cooler because it has two dies instead of one. This is what I want to see from Intel, amazing performance at 125 watts.
I think you don't understand what efficiency is. The 12900K is pushed to the absolute limit, made to consume 240 watts, way outside its efficiency sweet spot. That doesn't make the CPU inefficient, it just makes the stock configuration inefficient. Likewise, if you try to push a 5950X to 240 watts, it will be inefficient as well. Thankfully the 12900K is an unlocked chip, meaning you can tinker with it.

I've said it 50 times, and I'll repeat it once more: Alder Lake CPUs are extremely efficient in 99.9% of productivity or entertainment workloads. They are mainly inefficient in rendering because of that huge power limit under full load. Thankfully it takes around 5 seconds to lower that power limit to whatever you feel comfortable with. At around 150 W you should only be losing 10% performance at most in those full-load, 100%-pegged Cinebench scenarios.
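(That ballpark is plausible even under a crude scaling model - a sketch, not measured data: assume MT performance scales roughly with frequency and package power roughly with f·V², with voltage dropping alongside frequency, i.e. power ∝ f³. Real chips shed voltage even faster at the top of the curve, so this estimate is pessimistic about how much performance is kept:)

```python
# Rough estimate of MT performance retained when lowering the package power limit.
# Assumption (not measured data): perf ~ f and power ~ f * V^2 with V ~ f,
# so power ~ f^3. This is a pessimistic lower bound near the top of the V/f curve.
def perf_retained(pl_new_w: float, pl_ref_w: float, exponent: float = 3.0) -> float:
    return (pl_new_w / pl_ref_w) ** (1.0 / exponent)

for pl in (241, 190, 150, 125):
    pct = perf_retained(pl, 241) * 100
    print(f"PL1 = {pl:>3} W -> ~{pct:.0f}% of the 241 W MT performance")
# 241 W -> 100%, 190 W -> ~92%, 150 W -> ~85%, 125 W -> ~80% under these assumptions.
```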

The upside is that if you want that 10 or 15%, you CAN push it to 240 or 300 watts and get that last drop of performance, which you cannot do with Zen 3 since those chips hit a wall pretty early. Being able to do that is a positive, not a negative.
 
Joined
Dec 12, 2012
Messages
777 (0.18/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
Then why are reviewers showing performance at max power consumption? I tried to find results at 125 W, but I was unsuccessful.

Just look at the review on this site. The 12900K consumes over 100 W more than any AMD CPU.

I cannot estimate performance at a lower TDP, because how could I? Intel wants to show max performance, and that means nothing to me.

And I do not care that you can push AMD CPUs out of spec. The reviews show stock settings at ~125 W real power consumption.
 
Joined
Feb 1, 2019
Messages
3,667 (1.70/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
I know, I never said that XMP wasn't OC after all. What generation of Ryzen was that, btw? With XMP currently, as long as you're using reasonably specced DIMMs on a platform with decent memory support, it's a >99% chance of working. I couldn't get my old 3200c16 kit working reliably above 2933 on my previous Ryzen 5 1600X build, but that was solely down to it having a crappy first-gen DDR4 IMC, which 1st (and to some degree 2nd) gen Ryzen was famous for. On every generation since you've been able to run 3200-3600 XMP kits reliably on the vast majority of CPUs. But I agree that I should have added "as long as you're not running a platform with a known poor IMC" to the statement you quoted. With Intel at least since Skylake and with AMD since the 3000-series, XMP at 3600 and below is nearly guaranteed stable. Obviously not 100% - there are always outliers - but as close as makes no difference. And, of course, if memory stability is that important to you, you really should be running ECC DIMMs in the first place.
It's a 2600X on a B450 board. I had been looking into moving to a newer-gen Ryzen, but the used market is horrible right now, probably due to the problems in the brand-new market. The BIOS after the one I'm using added memory compatibility fixes for Zen+, but since Proxmox is now stable, I decided not to rock the boat.

Also, it's a 4-DIMM setup, and when it was stable on Windows it was only 2 DIMMs (I should have mentioned that), so take that into account. The official spec sheets for Zen+ and the original Zen show a huge drop in supported memory speed with 4 DIMMs; if I remember right, original Zen only officially supports 1866 MHz with 4 DIMMs?

My current 9900K handles the same RAM at XMP speeds that my 8600K couldn't manage; I suspect i5s might have lower-binned IMCs than i7s and i9s.
 
Joined
Jan 14, 2019
Messages
12,582 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Then why are reviewers showing performance at max power consumption? I tried to find results at 125 W, but I was unsuccessful.
Because 1. that's the new Intel default - or at least that's what the tested motherboards default to, and 2. reviews tend to focus on the best case scenario, that is: one that's not restricted by cooling or power.

Just look at the review on this site. The 12900K consumes over 100 W more than any AMD CPU.
That takes a few clicks in the BIOS to change.

I cannot estimate performance at a lower TDP, because how could I? Intel wants to show max performance, and that means nothing to me.
That's true, and I sympathise with you on this. Although, by looking at review data, you can expect pretty good performance at humanly acceptable power consumption levels, too. By lowering the power target by 50% on an average chip (regardless of manufacturer), you don't get a decrease in performance anywhere near that 50% value.

By comparison, if I set the power limit on my 2070 to 125 W instead of the default 175 (71%), I get about 5-7% lower performance. That is nowhere near noticeable without an FPS counter on screen.

Edit: If your main goal is gaming, you most likely won't notice any change by lowering the power target on a 12900K, as games don't push it anywhere near its power limit anyway.

And I do not care that you can push AMD CPUs out of spec. The reviews show stock settings at ~125 W real power consumption.
Did you read what I wrote above? AMD CPUs do not have a 125 W real power consumption. Their 105 W TDP CPUs have a default power target of 142 W. With PBO and the various "core optimiser" features on certain motherboards, the 5950X easily pulls 180-190 W. TDP doesn't mean power with AMD.
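(For reference, on AM4 the default socket power limit, PPT, is 1.35x the advertised TDP, which is exactly where the 88 W and 142 W figures in this thread come from:)

```python
# AMD AM4 default package power tracking (PPT) limit is 1.35x the advertised TDP.
PPT_FACTOR = 1.35

for tdp_w in (65, 105):
    print(f"TDP {tdp_w:>3} W -> PPT {tdp_w * PPT_FACTOR:.0f} W")
# 65 W -> 88 W, 105 W -> 142 W, matching the limits discussed above.
```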
 
Joined
May 8, 2021
Messages
1,978 (1.49/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Or did I say that I don't think a modern PC should necessitate you spending time managing your background processes to any significant degree?
Doesn't matter to me anyway. I clearly said that if you just leave stuff open in the background, that's not a problem; problems arise if you try to do something really stupid like running Photoshop and playing a game at the same time. Common sense says you shouldn't be doing something like that, and that's why people don't. 2 E cores are enough for background gunk, unless you are trying to achieve something that you clearly shouldn't.

BOINC is an opportunistic "compute sharing" app. Yes, there are people who set up PCs purely for that, but for >99.9999% of users it's not a main workload - and it's a prioritized workload for even fewer, as the core concept is for it to use leftover, unused resources.

Handbrake is for the most part something run rarely and for relatively short periods of time. Most people don't transcode their entire media library weekly.

How many people run several VMs with heavy CPU loads at the same time on a consumer platform? Again, these people are better served with server/workstation hardware, and ST performance is likely to be of little importance to them. Or if it is, they'll run those on a separate PC.

How many long-term compression/decompression operations do you run? Yes, some people have to regularly compress and decompress massive filesets, but even then it's an intermittent and most likely "offline" workload, i.e. something either run in the background while doing other things or run overnight etc. For the most part, compression/decompression is something done intermittently and in relatively short bursts, where the performance difference between a 32c64t CPU and a 16c24t (or 16c32t) at higher clocks will be of negligible importance compared to the other tasks the PC is used for.
These are workloads that scale well and that I have run for considerable amounts of time. They are far more useful benchmarks than what Anandtech actually tests. I like BOINC and contribute from time to time. From CPU loads alone I have accumulated over 1 million points, most of which were achieved with very modest chips like the FX 6300, Athlon X4 845 or even the Turion X2 TL-60. It took me months to achieve that, and my purchase of the i5 10400F so far only makes up 10% of that total, despite it being the fastest chip I have. For me that's a very meaningful workload.

Next is Handbrake. You argue that it's only needed for short bursts of time, but I personally found that it's useful for converting whole seasons of shows, and if you want that done with high quality and a good compression ratio, it can take days. Want to do this for several shows? It can take weeks. Obviously it would be a good trade-off to just use the GPU, but then you can't achieve quality or compression as good, and even then it may take half a day to transcode a whole show. So if someone does this stuff with any frequency, they should think about getting a chip with stronger MT performance, or just buy a fast graphics card, or consider a Quadro (now RTX A series).

The next load is VMs. For me VMs are cool for testing out operating systems, but besides that, some BOINC projects require running BOINC in VMs, and even projects that aren't exclusively VM-only sometimes give more work to Linux than to Windows. And then you need RAM and cores, and you can expect to keep some cores permanently pegged at 100% utilization. A CPU with more cores (not threads) allows you to also use your computer, instead of leaving it working as a server. Better yet, you have enough cores to run BOINC and to run BOINC in a VM.

And then we have 7zip. I will be brief: if you download files from the internet, you will most likely need it very often, and often for big files. Some games from Steam are compressed and have to be decompressed. You may also use NTFS compression on an SSD.

All in all, depending on the user, MT tasks and their performance can be very important, and to them certainly not a rare need. And if they need to do it professionally, then a faster chip is most likely a financial no-brainer. I personally found that the most demanding tasks are well multi-threaded and even then take ages to complete. Just like I thought in 2014 that multi-threaded performance was very important, maybe even at the cost of single-threaded performance, so I think today, but today that's even more obvious.


Yep. And the FX-series at best performed on par with 4c4t i5s, and was beat soundly by 4c8t i7s (often at much lower power draws). The i7-920 was 4c8t. There were 8c and 6c Nehalem Xeons, but no Core chips above 4c8t - and those were even arguably a proto-HEDT platform due to their 130W TDPs and triple-channel memory. And the Phenom X6 was also quite slow compared to contemporary Core chips, rendering its core advantage rather irrelevant - but some people did indeed prioritize it for running more MT tasks, as AMD delivered better value in those cases.
FX chips were very close to i7s in multi-threaded performance, and since they were a lot cheaper, literally 2 or maybe even 3 times cheaper, they were no-brainer chips for anyone seriously interested in those workloads. They were also easy to overclock; as long as you had the cooling, 5 GHz was nothing to them. At the nearly 100 USD prices of FX 8320 chips, an i7 was a complete no-go.

lol, I made a mistake about the i7 920, but yeah, there were 6C/12T chips available. Maybe the i7 960 was the lowest-end hexa-core. Still, those were somewhat affordable if you needed something like that.

Phenom II X6 chips were great high-core-count chips, and lower-end models like the 1055T were really affordable. If you overclocked one of those, you could have had an exceptional-value rendering rig for cheap. They sure did cost a lot less than 4C/8T i7 parts and were seriously competitive against them. Obviously, the later-released FX chips were even better value.

Anyway, my point was that things like high core count chips existed back then and were quite affordable.


I'm saying that spending millions of R&D dollars on low-volume products, even in high-margin segments, is often a poor financial choice, and the size of the applicable market is extremely important in whether or not this happens. And as the real-world advantages of HEDT platforms have shrunk dramatically since the launch of Zen1 (8c16t) and then Zen2 (16c32t) for MSDT, making these products is bound to drop ever lower on the list of priorities. The launch of the TR-W series underscores this, as these are essentially identical to Epyc chips, cutting the need for new packaging and developing new trace layouts for a 4000+ pin socket, while also addressing the remaining profitable part of the HEDT market: high end professional workstation users who can make use of tons of threads and/or memory bandwidth.
And that's still better than making the 5950X or 5900X. Consumer platforms are made to be cheaper and to cover only a small range of power requirements, so if they make a 5950X, say it's compatible with the AM4 socket and that any board supports it, and then some guy runs it on the cheapest A520 board, it will most likely throttle badly. If they want to avoid lawsuits, they'd better limit their CPU range or make motherboard makers only produce more expensive boards, but that's something they can't really do, since AM4 is supposed to be a cheap, affordable and flexible platform. The wattage marketing from the FX era is seemingly not done anymore, even if it would make perfect sense. Intel really suffers from that, with shit-tier i9 K chips making H510 VRMs burn. I'm surprised they still don't have lawsuits to deal with, considering this is a blatant case of advertising something that can't happen. Anyway, those are the reasons not to make HEDT chips compatible with mainstream sockets. Intel managed that in the Sandy, Ivy and Haswell era, and that was great for consumers. All this bullshit with pushing HEDT chips onto the consumer platform does nothing good for anyone except Intel and AMD.

And a 12900K can't be OC'd to hell and back with a phase change cooler? Again, for some reason you're insisting on unequal comparisons to try and make your points. You see how that undermines them, right?
Barely; it's already running obscenely hot and has its clocks cranked to the moon. There's very little potential. I wouldn't overclock it, as it has two types of cores with different voltages, many frequency and voltage stages, plus many power settings in the BIOS. The 12900K is hardly tweakable unless you spend an obscene amount of time on it and then spend weeks if not months stability testing it in various loads. That's stupid and makes no sense; might as well just leave it as it is. The 3970X is not much better than the i9, but at least it has the same type of cores, and the potential to benefit from raised power limits (whatever they are called on the AMD side). The i9 12900K has them set better, therefore less potential for gains.

I'm sorry, but you're making it obvious that you have zero idea what you're saying here. SPEC is not a single benchmark, it's a collection of dozens of benchmarks. And it's not "super synthetic", it is purely based on real-world applications. You can read all about the details for every single sub-test here. There is some overlap between the applications used as the basis for SPEC subtests and AT's benchmark suite as well, for example POVray - though of course the specific workloads are different.
Strong disagree; most of the tasks are super niche and quite synthetic. I wouldn't consider it a realistic test suite. I consider practical testing to be testing with the most common, widely used software. Anything else may still be practical, but due to its niche nature, it can't honestly be said to be so.

.... so, what I was saying all along was actually correct?
To some extent yes


No, you're the one failing to grasp that this weaker MT perf only applies in a relatively limited number of use cases. You tried to address this above, yet you listed four workloads. There are clearly more, but very few of them are likely to be significant time expenditures for even an enthusiast user. Hence my point of the ST perf being more important, and the relative importance of the lower MT thus being lower, and the 12900K providing better overall performance even if it does lose out in applications that scale well above 16 cores.
That's going to depend on the person.

So ... you agree with what I've been saying, then? Because that's exactly what I've been arguing. (Also, the 3970X is Zen2, not Zen, so it OC's better than TR 1000 and 2000, but it still doesn't OC well due to the massive power requirements of OCing such a massive core and the difficulty of powering and cooling such a dense arch).
Maybe. By Zen I mean Zen as an architecture family, not Zen 1. At this point, I'm not sure if Zen 2 is really that dense; the new Intel chips might be denser.

But they didn't. Which tells us what? I'd say several things:
- The HEDT market is much smaller than MSDT, and less important
- This is reinforced by increasing MSDT core counts, which have taken away one of the main advantages of HEDT over the past decade
- Launching this on HEDT would leave Intel without a real competitor to the 5950X, making them look bad
- HEDT today, to the degree that it exists, only makes sense in either very heavily threaded applications or memory-bound applications

I don't doubt that Intel has some kind of plans for a new generation of HEDT at some point, but it clearly isn't a priority for them.
Some of the points you make could be solved with marketing, like making people see an Intel HEDT platform as a 5950X+Threadripper competitor. And the main reason HEDT is losing ground is Intel pushing HEDT parts into the mainstream segment (where they arguably don't belong). It's not that HEDT is not important, it's just how Intel does business.


This makes it seem like you're arguing not on the basis of the actual merits of the products, but rather based on some specific regional distribution deficiencies. Which is of course a valid point in and of itself, but not a valid argument as to the general viability of these chips, nor how they compare in performance or efficiency to other alternatives. You might have less trouble being understood if you make the premises for your positions clearer? I also completely agree that the 5600X is expensive for what it is and that AMD needs to start prioritizing lower end products (preferably yesterday) but at least it has come down in price somewhat in many regions - but the 11400 and 11600 series that Intel launched as a response are indeed better value. Quite the reversal from a few years ago! I'm also looking forward to seeing lower end Zen3 and ADL chips, as we finally have a truly competitive CPU market.
Speaking of regional deals, pretty much since the start of the C19 lockdown in my region (Lithuania) there has been a great shortage of Athlons, quad-core Ryzens, Ryzen APUs in general, and Celerons. The Lithuanian market is now seemingly flooded with i5 10400Fs and i3 10100Fs. Anything Ryzen has a Ryzen tax, seemingly making Intel more competitive here, but in terms of sales, Ryzen is winning despite inflated prices and only having like 2-3 different SKUs available per store. Idiots still think it's better value than Intel. Ironically, the 5950X is seemingly the mainstream chip, as it sells the best. Yet at the same time brand-new Pentium 4 chips are being sold, and Pentium 4s outsell i3 10100Fs and i5 11400Fs. That happened in one store, but it's still incredibly fucked up. In another store, the best-selling chip is the 2600X, with the 5950X second. That second store doesn't have Pentium 4s, but they have refurbished Core 2 Duos, which don't sell well at all there. In Lithuania most computers sold are local prebuilts or laptops, but DIY builders are going bonkers for some reason.
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Then why are reviewers showing performance at max power consumption? I tried to find results at 125 W, but I was unsuccessful.

Just look at the review on this site. The 12900K consumes over 100 W more than any AMD CPU.

I cannot estimate performance at a lower TDP, because how could I? Intel wants to show max performance, and that means nothing to me.

And I do not care that you can push AMD CPUs out of spec. The reviews show stock settings at ~125 W real power consumption.
Igor's Lab has tested with a 125 W PL as well. That results in an insanely efficient 12900K, although I personally wouldn't limit it that low. Around 150-170 W should be the sweet spot for those heavily demanding workloads.

 
Joined
Dec 12, 2012
Messages
777 (0.18/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
Igor's Lab has tested with a 125 W PL as well. That results in an insanely efficient 12900K, although I personally wouldn't limit it that low. Around 150-170 W should be the sweet spot for those heavily demanding workloads.


So this time AMD CPUs are pushed way out of spec. They are faster than the 12900K, but their power consumption is also very high.

I dislike this very much. I feel it should be mandatory to test all CPUs in two ways, one observing power limits, the other ignoring them.
 
Joined
Jan 27, 2015
Messages
1,746 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
So this time AMD CPUs are pushed way out of spec. They are faster than the 12900K, but their power consumption is also very high.

I dislike this very much. I feel it should be mandatory to test all CPUs in two ways, one observing power limits, the other ignoring them.

K chips are meant to be configured, enthusiast chips.

You can run it flat out for performance, which is what they're really for, or if efficiency is your thing you can do that too.

12900K matching an M1 Max for efficiency :

[attached benchmark screenshot]


12900K power limited to 125W and 241W :

[attached benchmark screenshot]


5950X PBO2 (power unlocked) - drawing 238W peak score 28621 :

[attached benchmark screenshot]
 
Joined
Dec 12, 2012
Messages
777 (0.18/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
K chips are meant to be configured, enthusiast chips.

You can run it flat out for performance, which is what they're really for, or if efficiency is your thing you can do that too.

12900K matching an M1 Max for efficiency :
These results are pretty crazy.

This is why I stand by my comment about unifying reviews. These factory overclocks are distorting the actual results, in my view. In the past, reviews showed one default setting, and then you could overclock that any way you wanted.
But now you get these official boost modes; some reviews use them, some do not, and it is a mess. Same with reviewing 65 W CPUs on high-end boards while ignoring power limits. Those CPUs will get nowhere near as good performance on the entry-level boards they are meant to be used with.

For now I will wait to see AMD's response. They have to lower the prices of their CPUs, but they will also introduce the 3D cache models early next year I think. We are looking at some exciting competition.
When I upgrade, I will definitely want high multi-threaded performance because I want to use CPU encoding when streaming. As good as NVENC is, CPU encoding offers better quality with those demanding presets.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,965 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
But now you get these official boost modes; some reviews use them, some do not, and it is a mess
Intel is 100% crystal clear on what the default is. If some reviewers choose to underclock or overclock the CPU then it's their own fault? Changing the power limits is just like OC, only playing with a different dial
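(On the "different dial" point: on Linux the same dial is even exposed at runtime through the powercap/RAPL interface, no BIOS trip required. A minimal sketch; it assumes the intel_rapl driver is loaded, needs root, the exact sysfs paths can differ between systems, and board firmware may still enforce its own limits:)

```python
from pathlib import Path

# Package-level RAPL power limits via the Linux powercap interface (root required).
# constraint_0 is usually the long-term limit (PL1), constraint_1 the short-term (PL2).
# The path below is typical but not universal; firmware limits may still apply.
PKG = Path("/sys/class/powercap/intel-rapl:0")

def read_limit_w(constraint: int) -> float:
    return int((PKG / f"constraint_{constraint}_power_limit_uw").read_text()) / 1e6

def write_limit_w(constraint: int, watts: float) -> None:
    (PKG / f"constraint_{constraint}_power_limit_uw").write_text(str(int(watts * 1e6)))

print("PL1:", read_limit_w(0), "W | PL2:", read_limit_w(1), "W")
# write_limit_w(0, 125)  # e.g. cap long-term package power at 125 W
```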
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Doesn't matter to me anyway. I clearly said that if you just leave stuff open in the background, that's not a problem; problems arise if you try to do something really stupid like running Photoshop and playing a game at the same time. Common sense says you shouldn't be doing something like that, and that's why people don't. 2 E cores are enough for background gunk, unless you are trying to achieve something that you clearly shouldn't.
And a) I never brought that up as a plausible scenario, so please put your straw man away, and b) I explained how even with a completely average setup and workload, including a relatively normal amount of common background applications, even 2 E cores can be low enough to cause intermittent issues. Which, given their very small die area requirements, makes four a good baseline. Removing two more gives you room for slightly more than half of another P core. So, a 2P4E die will be much smaller than a 4P2E die in terms of area spent on CPU cores. A bit simplified (the 4 E cores look slightly larger than a P core), but let's say 1 E core w/cache is X area; 1 P core w/cache is 4X area. That makes the 2P4E layout 12X, while the 4P2E layout is 18X - 50% larger.
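(The area arithmetic in that last step, spelled out under the same rough 1 P core ≈ 4 E cores assumption:)

```python
# Rough area units: 1 E-core (with cache) = 1X, 1 P-core (with cache) = 4X.
E_AREA, P_AREA = 1, 4

def cluster_area(p_cores: int, e_cores: int) -> int:
    return p_cores * P_AREA + e_cores * E_AREA

a_2p4e = cluster_area(2, 4)   # 12X
a_4p2e = cluster_area(4, 2)   # 18X
print(f"2P+4E = {a_2p4e}X, 4P+2E = {a_4p2e}X "
      f"({(a_4p2e / a_2p4e - 1) * 100:.0f}% larger)")   # 50% larger
```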

As was brought up above, there are questions regarding the latency characteristics of a layout like this, but latency benchmarks indicate that things might not be as bad as some might fear.
These are workloads that scale well and that I have run for a considerable amount of time.
Yes, I never said there weren't. I just said they are relatively few and relatively rare, especially in an end-user usage scenario.
They are far more useful benchmarks than what Anandtech actually tests.
That's your opinion, and as seen below, an opinion that seems rather uninformed.
I like BOINC and contribute from time to time. From CPU loads alone I have accumulated over 1 million points, most of which were achieved with very modest chips like the FX 6300, Athlon X4 845 or even a Turion X2 TL-60. It took me months to achieve that, and my purchase of an i5 10400F so far only makes up 10% of all that effort, despite it being the fastest chip I have. For me that's a very meaningful workload.
Cool for you, I guess? As I said: niche workload, with niche hardware, for niche users. No mainstream or mass-market applicability.
Next is Handbrake. You argue that you only need it for short bursts of time, but I personally found that it's useful for converting whole seasons of shows, and if you want that done with high quality and a good compression ratio, it can take days. Want to do this for several shows? It can take weeks. Obviously it would be a good trade-off to just use the GPU, but then you can't achieve quality or compression as good, and even then it may take half a day to transcode a whole show. So if someone does this stuff with a certain frequency, they should think about getting a chip with stronger MT performance, or just buy a fast graphics card, or consider a Quadro (now RTX A series).
*clears throat* Apparently I have to repeat myself:
Most people don't transcode their entire media library weekly.
Which is essentially what you're positing here. And, as you bring up yourself, if this is a relevant workload for you, buy an Intel CPU with QuickSync, an AMD APU or GPU with VCN, or an Nvidia GPU with NVENC. You'll get many times the performance for less power draw, and even a lower cost than one of these CPUs (in a less insane GPU market, that is).

And again: niche workload for niche users. Having this as an occasional workload is common; having this as a common workload (in large quantities) is not.
The next load is VMs. For me, VMs are cool for trying out operating systems, but besides that, some BOINC projects require running BOINC in a VM, and even projects that aren't exclusively VM-only sometimes give more work to Linux than to Windows. And then you need RAM and cores, and you can expect to keep some cores permanently pegged at 100% utilization. A CPU with more cores (not threads) also allows you to keep using your computer, instead of leaving it working as a server.
Wait, you have 100% CPU utilization in your VMs from trying out OSes? That sounds wrong. You seem to be contradicting yourself somewhat here. And again: if your workload is "I run many VMs with heavy multi-core workloads", you're well and truly into high end workstation tasks. That is indeed a good spot for HEDT (or even higher end) hardware, but ... this isn't common. Not even close.
Better yet, you have enough cores to run BOINC and to run BOINC in a VM.
A niche within a niche! Even better!
And then we have 7-Zip. I will be brief: if you download files from the internet, you will most likely need it very often, and often for big files. Some games from Steam are compressed and have to be decompressed. You may also use NTFS compression on an SSD.
I have never, ever, ever heard of anyone needing a HEDT CPU for decompressing their Steam downloads. I mean, for this to be relevant you would need to spend far more time downloading your games than actually playing them. Any run-of-the-mill CPU can handle this just fine. Steam decompresses on the fly, and your internet bandwidth is always going to bottleneck you more than your CPU's decompression rate (unless you're Linus Tech Tips and use a local 10G cache for all your Steam downloads). The same goes for whatever other large-scale compressed downloads even an enthusiast user is likely to do as well.
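As a rough sanity check on that (a back-of-the-envelope sketch with assumed, illustrative numbers, not a benchmark):

```python
# Assumed numbers: a fast-ish home connection vs. a deliberately conservative
# per-core decompression rate for a modern desktop CPU.
download_mbit_s = 300                   # assumed internet connection, Mbit/s
download_mb_s = download_mbit_s / 8     # ~37.5 MB/s of compressed data arriving
decompress_mb_s = 200                   # assumed single-core decompression throughput, MB/s

# If one core can unpack data several times faster than it arrives,
# the download link, not the CPU, is the bottleneck.
print(decompress_mb_s / download_mb_s)  # ~5.3x headroom on a single core
```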
All in all, depending on the user, MT tasks and their performance can be very important, and to those users, certainly not a rare need.
Yes, I have said the whole time this depends on the use case. But you're completely missing the point here: actually seeing a benefit from a massively MT CPU requires you to spend a lot of time on these tasks every day, especially when accounting for the high core count CPU being slower for all other tasks. Let's say you use your PC for both work and fun, and your work includes running an MT workload that scales perfectly with added cores and threads. Let's say this workload takes 2h/day on a 3970X. Let's say that workload is a Cinema4D render, which the TR performs well in overall. Going from the relative Cinebench R20 scores, the same job would take 54% more time on the 12900K, or slightly over 3h. That's a significant difference, and the choice of the HEDT CPU would likely be warranted overall, as it would eat into either work hours or possibly free time.
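Spelling that estimate out (a quick sketch built on the quoted "54% more time" figure; the 2h job length is just the example above):

```python
# Quick sketch of the render-time estimate above; all numbers are illustrative.
hours_on_3970x = 2.0              # example daily render time on the 3970X
slowdown_factor = 1.54            # 12900K takes ~54% longer in this workload
hours_on_12900k = hours_on_3970x * slowdown_factor
print(round(hours_on_12900k, 2))  # ~3.08 h, i.e. roughly an extra hour per day
```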

But then let's consider the scenario at hand:
-how many people use a single PC both for rendering workloads this frequent and for their free time?
-how many people render things this large this frequently at all?
-how many people with these needs wouldn't just get a second PC, including the redundancy and stability this would bring? (Or hire time on a render farm?)
-how many people would care if their end-of-the-day render took an extra hour, when they would likely be doing something else (eating dinner or whatever)?
-how many people with this specialized a use-case wouldn't just set the render to run when they go to bed, which renders any <8h-ish render time acceptable?

This is of course not the only use case, and there are other similar ones (compiling etc. - where similar questions would typically be relevant), but ultimately: running these workloads frequently enough, and with sufficiently large jobs, for this to be a significant time saving and to make up for the worse performance in general usage? That's quite a stretch. You're looking at a very small group of users.

(You can also see from the linked comparison above that the 12900K significantly outperforms the 3970X in several of your proposed workloads, such as Handbrake video transcoding.)
And if they have a need to do it professionally, then a faster chip is most likely a financial no-brainer.
Yes, and in that case they have a workstation workload, and are relatively likely to buy a workstation to do so. That's expensive, but at that point you need the reliability and likely want a service agreement. And at that point HEDT is likely the budget/DIY option, with pre-made workstations (TR-X or Xeon) being the main choice. This of course depends on whether you're a freelancer or working for a larger company etc, but for the vast majority of freelancers anything above a 5950X would be silly levels of overkill.
I personally found that the most demanding tasks are well multi-threaded and even then take ages to complete. Just like I thought in 2014 that multi-threaded performance was very important, maybe even at the cost of single-threaded performance, so I think today - and today that's even more obvious.
But you're treating all MT performance as if it scales perfectly. It does not. There are many, many real-world applications that fail to scale meaningfully above a relatively low core count, while those that scale massively are overall quite few.
FX chips were very close to i7s in multi-threaded performance, and since they were a lot cheaper, literally 2 or maybe even 3 times cheaper, they were no-brainer chips for anyone seriously interested in those workloads.
... again:
I already said that. You're arguing as if I'm making black-and-white distinctions here, thereby overlooking huge portions of what I'm saying. Please take the time to actually read what I'm saying before responding.
They were also easy to overclock. As long as you had cooling, 5 GHz was nothing to them; at the roughly 100 USD price of FX 8320 chips, the i7 was a complete no-go.
But even at those clock speeds they underperformed. That's an FX-8350 at 4.8GHz roughly matching an i7-3770K (stock!) in a workload that scales very well with cores and threads (video encoding), at nearly 3x the power consumption. Is that really a good value proposition?
lol I made a mistake about i7 920, but yeah there were 6C/12T chips available. Maybe i7 960 was the lowest end hexa core. Still, those were somewhat affordable if you needed something like that.
The lowest end hex core was the i7-970; there were also the 980, 980X and 990X. And these were the precursor to Intel's HEDT lineup, which launched a year later.
Phenom II X6 chips were great high core count chips, and lower end models like the 1055T were really affordable. If you overclocked one of those, you could have had an exceptional-value rendering rig for cheap. They sure did cost a lot less than the 4C/8T i7 parts and were seriously competitive against them. Obviously, the later released FX chips were even better value.
Sure, they were good for those very MT-heavy tasks. They were also quite terrible for everything else. Again: niche parts for niche use cases.
Anyway, my point was that things like high core count chips existed back then and were quite affordable.
And quite bad for the real-world use cases of most users. I still fail to see the overall relevance here, and how this somehow affects whether a TR 3970X is a better choice overall than a 12900K or 5950X for a large segment of users. Intel's HEDT customer base mainly came from the absence of high-performing many-core alternatives. There were budget many-core alternatives that beat their low core count MSDT parts, but their HEDT parts drastically outperformed these again - at a higher cost, of course. Horses for courses, and all that.
And that's still better than making a 5950X or 5900X. Consumer platforms are made to be cheaper and cover only a small range of power requirements; if they make a 5950X, say that it's compatible with the AM4 socket and that any board supports it, and then some guy runs it on the cheapest A520 board, most likely it will throttle badly.
That's nonsense. Any AM4 board needs to be able to run any AM4 chip (of a compatible generation) at stock speeds, unless the motherboard maker has really messed up their design (in which case they risk being sanctioned by AMD for not being compliant with the platform spec). A low end board might not allow you to sustain the 144W boost indefinitely, but the spec only guarantees 3.4GHz, which any board should be able to deliver (and if it doesn't, that is grounds for a warranty repair). If you're not able to understand what the spec sheet is telling you and get the wrong impression, that is on you, not AMD. You could always blame the motherboard manufacturer for making a weak VRM, but then that also reflects on you for being dumb enough to pair a $750 CPU with a likely $100-ish motherboard for what must then be a relatively heavy MT workload.
If they want to avoid lawsuits, then they had better limit their CPU range or make motherboard makers produce only more expensive boards, but that's something they can't really do, since AM4 is supposed to be a cheap, affordable and flexible platform.
Wait, lawsuits? What lawsuits? Given that this platform has been out for a year (and much longer than that if you count 16-core Zen2), those ought to have shown up by now if this was an actual problem. Looks to me like you're making up scenarios that don't exist in the real world.
Wattage marketing from the FX era is seemingly not done anymore, even though it would make perfect sense.
Because CPUs today have high boost clocks to get more performance out of the chip at stock. A high delta between base and boost clock means a high power delta as well, and since TDP (or its equivalents) - to the degree that it relates to power draw at all (it doesn't really; that's not how TDP is defined, but it tends to be equal to the separate rating for guaranteed max power draw at sustained base clock) - relates to base clock and not boost, this becomes more complicated overall. Having two separate ratings is a much better idea - one for base, one for boost. Intel is onto something here, though I really don't like how they're making "PL1=PL2=XW" the default for K-series SKUs. If you were to mandate a single W rating for CPUs today you'd be forcing one of two things: either leaving performance on the table due to lower boost clocks, or forcing motherboard prices up, as you'd force every motherboard to be able to maintain the full boost clock of even the highest end chip on the platform. Both of these are bad ideas.
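To illustrate the two-rating idea, here's a minimal sketch of a PL1/PL2-style model loosely following Intel's documented turbo behaviour; the tau value and the simple fixed time window are assumptions for illustration (real silicon uses a moving average of power):

```python
# Minimal sketch of a dual power rating: PL1 (sustained) and PL2 (short-term
# boost) with a tau window. Numbers and the fixed window are illustrative.
def allowed_package_power(seconds_under_load: float,
                          pl1_w: float = 125.0,  # sustained ("base") rating
                          pl2_w: float = 241.0,  # short-term boost rating
                          tau_s: float = 56.0) -> float:
    """Package power limit in effect at a given point in a sustained load."""
    return pl2_w if seconds_under_load < tau_s else pl1_w

for t in (1, 30, 56, 300):
    print(t, allowed_package_power(t))  # 241 W early in the load, 125 W once tau expires
```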
Intel really suffers from that, with shit-tier i9 K chips making H510 VRMs burn. I'm surprised that they still don't have lawsuits to deal with, considering that this is a blatant case of advertising something that can't happen. Anyway, those are the reasons not to make HEDT chips compatible with mainstream sockets.
Yes, and have I ever argued for that? No. A 141W 5950X is not an HEDT CPU, nor is a 125W Intel CPU. Their 240W boost on these chips is quite insane, and I think configuring them this way out of the box is rather desperate, but there's also an argument for the sheer idiocy of pairing a K-SKU chip with a H510(ish) board. If you think you're gaming the system by buying a dirt-cheap motherboard for your high-end CPU and then pelting that CPU with sustained high-power MT workloads, you're only fooling yourself, as you're buying equipment fundamentally unsuited for the task at hand.

I still think the power throttling we've seen on B560 boards (and below) is unacceptable, but that's on Intel for not mandating strong enough VRMs and power profiles, not on the CPUs themselves - CPUs are flexible and configurable in their power and boost behaviour.
Intel in the Sandy, Ivy and Haswell era managed to do that. That was great for consumers. All this bullshit with pushing HEDT chips onto the consumer platform does nothing good for anyone, except Intel and AMD.
Except that that era was extremely hostile to consumers, limiting them to too-low core counts and forcing them into buying overpriced and unnecessary motherboards for the "privilege" of having more than four cores. I entirely agree that most consumers don't need 12 or 16 cores, but ... so what? It doesn't harm anyone that these chips are available on mainstream platforms. Quite the opposite.
Barely; it's already running obscenely hot and has clocks cranked to the moon. There's very little potential. I wouldn't overclock it, as it has two types of cores with different voltages, many frequency and voltage stages, plus many power settings in the BIOS. The 12900K is hardly tweakable unless you spend an obscene amount of time on it and then spend weeks if not months stability testing it in various loads. That's stupid and makes no sense. Might as well just leave it as it is. The 3970X is not much better than the i9, but at least it has the same type of cores, and the potential to benefit from raised power limits (whatever they are called on the AMD side). The i9 12900K has them set better, therefore less potential for gains.
The E cores can't be OC'd at all, so ... you don't seem to have even read about the CPU you're discussing? And yes, this runs hot and consumes tons of power, but so does a TR 3970X. There isn't anything significant left in the tank for either of these.
Strong disagree, most tasks are super niche and quite synthetic. I wouldn't consider it a realistic test suite. I consider practical testing to be testing with the most common, widely used software. Anything else may still be practical, but due to the nature of being niche, can't honestly be said to be so.
So ... video encoding, code compilation, 3D rendering, 3D rendering with RT, image manipulation are more niche workloads than "running several VMs at 100% CPU"? You're joking, right? Yes, SPEC CPU is also mainly geared towards scientific computation and workstation tasks, but it still represents an overall good mix of ST and MT workloads and is a decent gauge for a platform's mixed use performance - especially as it's open, controllable, and can even be compiled by the person running the workload to avoid hidden biases from the developer's side (unlike similar but closed workloads like GeekBench). Is it perfect? Of course not. What it is is possibly the best, and certainly the most controllable pre-packaged benchmark suite available, and the most widely comparable across different operating systems, architectures and so on. It has clear weaknesses - it's a poor indicator of gaming performance, for example, as there are few highly latency-sensitive workloads in it. But it is neither "super niche" nor synthetic in any way. A benchmark based on real-world applications and real-world workloads literally cannot be synthetic, as the definition of a synthetic benchmark is that it is neither of those things.
To some extent yes
Thank you. Took a while, but we got there.
That's going to depend on the person.
And I've never said that it doesn't. I've argued for what is broadly, generally applicable vs. what is limited and niche - and my issue with your arguments is that you are presenting niche points as if they have broad, general applicability.
Maybe. By Zen I mean Zen as architecture family, not Zen 1. At this point, I'm not sure if Zen 2 is really that dense. New Intel chips might be denser.
Comparable, at least. But Zen cores are (at least judging by die shots) much smaller than ADL P cores, which makes for increased thermal density.
Some points you make can be solved with marketing, like making people see the Intel HEDT platform as a 5950X and Threadripper competitor. And the main reason why HEDT is losing ground is Intel pushing HEDT parts into the mainstream segment (where they arguably don't belong). It's not that HEDT is not important, it's just how business is done by Intel.
But ... marketing doesn't solve that. It would be an attempt at alleviating that. But if HEDT customers have been moving to MSDT platforms because those platforms fulfill their needs, no amount of marketing is going to convince them to move to a more expensive platform that doesn't deliver tangible benefits to their workflow. And the main reason why HEDT is losing ground is not what you're saying, but rather that AMD's move to first 8 then 16 cores completely undercut the USP of Intel's HEDT lineup, and suddenly we have MSDT parts now that do 90% of what HEDT used to, and a lot of it better (due to higher clocks and newer architectures), and the remaining 10% (lots of PCIe, lots of memory bandwidth) are very niche needs. Arguing for some artificial segregation into MSDT and HEDT along some arbitrary core count (what would you want? 6? 8? 10?) is essentially not tenable today, as modern workloads can scale decently to 8-10 cores, especially when accounting for multitasking, while not going overboard on cores keeps prices "moderate" including platform costs. I still think we'll find far better value in a couple of years once things have settled down a bit, but 16-core MSDT flagships are clearly here to stay. If anything, the current AMD and Intel ranges demonstrate that these products work very well in terms of actual performance in actual workloads for the people who want/need them as well as in terms of what the MSDT platforms can handle (even on relatively affordable motherboards - any $200 AM4 motherboard can run a 5950X at 100% all day every day).
Speaking about regional deals, pretty much since the C19 lockdowns started in my region (Lithuania) there has been a great shortage of Athlons, quad core Ryzens, Ryzen APUs in general, and Celerons. The Lithuanian market is now seemingly flooded with i5 10400Fs and i3 10100Fs. Anything Ryzen has a Ryzen tax, seemingly making Intel more competitive here, but in terms of sales, Ryzen is winning, despite having inflated prices and only having like 2-3 different SKUs available per store. Idiots still think that it's better value than Intel. Ironically, the 5950X is seemingly a mainstream chip, as it sells the best. Yet at the same time brand new Pentium 4 chips are being sold. Pentium 4s outsell i3 10100Fs and i5 11400Fs. That happened in one store, but it's still incredibly fucked up. In another store, the most sold chip is the 2600X, while the second is the 5950X. That second store doesn't have Pentium 4s, but they have refurbished Core 2 Duos. They don't sell well at all there. In Lithuania most computers sold are local prebuilts or laptops, but DIY builders are going bonkers for some reason.
Low end chips have generally been in short supply globally for years - Intel has been prioritizing their higher priced parts since their shortage started back in ... 2018? And AMD is doing the same under the current shortage. Intel has been very smart at changing this slightly to target the $150-200 market with their 400 i5s, which will hopefully push AMD to competing better in those ranges - if the rumored Zen3 price cuts come true those 5000G chips could become excellent value propositions pretty soon.

That sounds like a pretty ... interesting market though. At least it demonstrates the power of image and public perception. The turnaround in these things in the past few years has been downright mind-boggling, from people viewing AMD at best as the value option to now being (for many) the de-facto choice due to a perception of great performance and low pricing, which ... well, isn't true any more :p Public perception is never accurate, but this turnaround just shows how slow it can be to turn, how much momentum and inertia matters in these things, and how corporations know to cash out when they get the opportunity.
 
Joined
Jan 27, 2015
Messages
1,746 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
Intel is 100% crystal clear on what the default is. If some reviewers choose to underclock or overclock the CPU then it's their own fault? Changing the power limits is just like OC, only playing with a different dial

It may be the default within the chip - which doesn't last beyond power on, but their data sheet says PL1 / PL2 / PL3 / PL4 should be set based on capabilities of the platform (VRMs) and cooling solution.

 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,965 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
It may be the default within the chip - which doesn't last beyond power on, but their data sheet says PL1 / PL2 / PL3 / PL4 should be set based on capabilities of the platform (VRMs) and cooling solution.

Yeah that same document only talks about 125 W and never mentions the new defaults

Intel marketing in their presentation was 100% clear PL1=PL2=241 W, I posted their presentation slide a few days ago

I suspect what happened is that someone in marketing really wanted to win Cinebench R23 (which heats up the CPU first and usually runs at PL1 without turbo on Intel), so they pushed for that change last minute
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Yeah that same document only talks about 125 W and never mentions the new defaults

Intel marketing in their presentation was 100% clear PL1=PL2=241 W, I posted their presentation slide a few days ago

I suspect what happened is that someone in marketing really wanted to win Cinebench R23 (which heats up the CPU first and usually runs at PL1 without turbo on Intel), so they pushed for that change last minute
Ugh, marketing people should never be allowed near a spec sheet.
 
Joined
May 8, 2021
Messages
1,978 (1.49/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Wait, you have 100% CPU utilization in your VMs from trying out OSes? That sounds wrong. You seem to be contradicting yourself somewhat here. And again: if your workload is "I run many VMs with heavy multi-core workloads", you're well and truly into high end workstation tasks. That is indeed a good spot for HEDT (or even higher end) hardware, but ... this isn't common. Not even close.

A niche within a niche! Even better!
Well, I used to. At some points I ran 3 machines all day, 2 out of 3 with native Windows BOINC loads and one with a Linux VM and BOINC loads in both Linux and Windows. I don't do that anymore, but when you start out in crunching, you quickly realize how a generally decent everyday CPU suddenly becomes relatively inadequate. And soon you start to want Opterons or Xeons, and then you realize what rabbit hole you've ended up in.

I have never, ever, ever heard of anyone needing a HEDT CPU for decompressing their Steam downloads. I mean, for this to be relevant you would need to spend far more time downloading your games than actually playing them. Any run-of-the-mill CPU can handle this just fine. Steam decompresses on the fly, and your internet bandwidth is always going to bottleneck you more than your CPU's decompression rate (unless you're Linus Tech Tips and use a local 10G cache for all your Steam downloads). The same goes for whatever other large-scale compressed downloads even an enthusiast user is likely to do as well.
That's just one type of decompressing.


But even at those clock speeds they underperformed. That's an FX-8350 at 4.8GHz roughly matching an i7-3770K (stock!) in a workload that scales very well with cores and threads (video encoding), at nearly 3x the power consumption. Is that really a good value proposition?
Or did it? The overclocked FX 83xx was slower at the first pass, but faster at the second pass. Don't forget that FX octa core chips cost nearly 3 times less than the i7. And that was close to an ideal situation for the i7, as that workload clearly benefited from HT; some workloads take a negative performance impact from using HT. And FX has real cores, therefore the performance of an overclocked FX should have been far more predictable than with the i7. But power consumption... :D yeah, that was rough. But still, even at stock speeds, FX was close to the i7 and beat it in the second pass benchmark. Also, FX chips were massively overvolted from the factory; 0.3V undervolts were achievable on nearly any chip. Despite it being the more power hungry chip, AMD did it no favours by setting the voltage so unreasonably high.


Sure, they were good for those very MT-heavy tasks. They were also quite terrible for everything else. Again: niche parts for niche use cases.
The Phenom II X6 was decent. It had the single core performance of first gen FX chips, which roughly translates to somewhere in between a Core 2 Quad and the first gen Core i series. It was closer to the i7 in that regard than the 3970X is to the 5950X today. And the Phenom II X6 1055T sold for nearly half the price of an i7, so the value proposition was great.


That's nonsense. Any AM4 board needs to be able to run any AM4 chip (of a compatible generation) at stock speeds, unless the motherboard maker has really messed up their design (in which case they risk being sanctioned by AMD for not being compliant with the platform spec). A low end board might not allow you to sustain the 144W boost indefinitely, but the spec only guarantees 3.4GHz, which any board should be able to deliver (and if it doesn't, that is grounds for a warranty repair). If you're not able to understand what the spec sheet is telling you and get the wrong impression, that is on you, not AMD. You could always blame the motherboard manufacturer for making a weak VRM, but then that also reflects on you for being dumb enough to pair a $750 CPU with a likely $100-ish motherboard for what must then be a relatively heavy MT workload.

Seems very sketchy; boards clearly overheated, but I'm not sure if it's just boost that got cut or whether they even dropped below base speed. In the 3900X video the CPU clearly throttled below base clock, and that's a fail by any definition. On the Intel side it's even worse:

The Gigabyte B560 D2V board throttled the i9 way below base speed, and some boards were just so-so. On the Intel Z490 side:

The ASRock Phantom Gaming 4 was only technically not throttling. I guess it's a pass, but in a hotter climate it would be a fail. And that's not really a cheap board; there are many H410 boards with even worse VRMs, which are a complete gamble as to whether they would work with an i9 or not. I wouldn't have much confidence that they would.

All in all, there are plenty of shit claims from motherboard manufacturers. They mostly don't get flak for that because the media only uses mid-tier or high-end boards, but if the media cared about low-end stuff, board makers would face lawsuits. I guess that's an improvement from the AM3+ era, when certain MSI boards melted or caught fire and many low-end boards throttled straight to 800 MHz.


Yes, and have I ever argued for that? No. A 141W 5950X is not an HEDT CPU, nor is a 125W Intel CPU. Their 240W boost on these chips is quite insane, and I think configuring them this way out of the box is rather desperate, but there's also an argument for the sheer idiocy of pairing a K-SKU chip with a H510(ish) board. If you think you're gaming the system by buying a dirt-cheap motherboard for your high-end CPU and then pelting that CPU with sustained high-power MT workloads, you're only fooling yourself, as you're buying equipment fundamentally unsuited for the task at hand.
I don't see anything bad about putting an i9 K on an H510 board. After all, manufacturers claim that they are compatible. If you are fine with fewer features, a lower end chipset and so on, you may as well not pay for a fancier board. Also, some people upgrade an old system which had an i3 to an i7 later. Today that would be a throttlefest (with an i9). I don't see anything unreasonable about upgrading the CPU later, and I don't think that those people deserve to have their VRMs burning.

I still think the power throttling we've seen on B560 boards (and below) is unacceptable, but that's on Intel for not mandating strong enough VRMs and power profiles, not on the CPUs themselves - CPUs are flexible and configurable in their power and boost behaviour.
It was literally the same on B460, it's just that HWUB didn't test as many boards and didn't have a video with a sensational title. In fact it may have been even worse in terms of board quality.

Except that that era was extremely hostile to consumers, limiting them to too-low core counts and forcing them into buying overpriced and unnecessary motherboards for the "privilege" of having more than four cores. I entirely agree that most consumers don't need 12 or 16 cores, but ... so what? It doesn't harm anyone that these chips are available on mainstream platforms. Quite the opposite.
On the other hand, you could have bought a non-K i5 or i7 and seen it last for nearly a decade with excellent performance. It was unprecedented stagnation, but it wasn't entirely good or bad. Even Core 2 Quad or Phenom II X4 users saw their chips last a lot longer than expected. Game makers made games runnable on that hardware too. Now the core race has restarted, and I don't think that we will see chips with a usable lifespan as long as Sandy, Ivy or Haswell. You may say that's good. Maybe for servers and HEDT it is, but for the average consumer that means more unnecessary upgrading.


But ... marketing doesn't solve that. It would be an attempt at alleviating that. But if HEDT customers have been moving to MSDT platforms because those platforms fulfill their needs, no amount of marketing is going to convince them to move to a more expensive platform that doesn't deliver tangible benefits to their workflow. And the main reason why HEDT is losing ground is not what you're saying, but rather that AMD's move to first 8 then 16 cores completely undercut the USP of Intel's HEDT lineup, and suddenly we have MSDT parts now that do 90% of what HEDT used to, and a lot of it better (due to higher clocks and newer architectures), and the remaining 10% (lots of PCIe, lots of memory bandwidth) are very niche needs. Arguing for some artificial segregation into MSDT and HEDT along some arbitrary core count (what would you want? 6? 8? 10?) is essentially not tenable today, as modern workloads can scale decently to 8-10 cores, especially when accounting for multitasking, while not going overboard on cores keeps prices "moderate" including platform costs. I still think we'll find far better value in a couple of years once things have settled down a bit, but 16-core MSDT flagships are clearly here to stay. If anything, the current AMD and Intel ranges demonstrate that these products work very well in terms of actual performance in actual workloads for the people who want/need them as well as in terms of what the MSDT platforms can handle (even on relatively affordable motherboards - any $200 AM4 motherboard can run a 5950X at 100% all day every day).
Well, I made a point about VRMs, more RAM channels, more PCIe lanes and so on. HEDT boards were clearly made for professional use, and those who migrated to mainstream are essentially not getting the full experience, just the performance. Is that really good? Or is it just some people pinching pennies and buying on performance only?

Low end chips have generally been in short supply globally for years - Intel has been prioritizing their higher priced parts since their shortage started back in ... 2018? And AMD is doing the same under the current shortage.
Not at all, there used to be Ryzen 3 3100s, Ryzen 3200G-3400Gs, and various Athlons. On the Intel side, Celerons and Pentiums were always available without issues; now they have become unobtainium, well, except the Pentium 4 :D. Budget CPUs are nearly wiped out as a concept, along with GPUs. They don't really exist anymore, but they did in 2018.

Intel has been very smart at changing this slightly to target the $150-200 market with their 400 i5s, which will hopefully push AMD to competing better in those ranges - if the rumored Zen3 price cuts come true those 5000G chips could become excellent value propositions pretty soon.
Maybe, but AMD has fanboys; never underestimate fanboys and their appetite for being ripped off.

That sounds like a pretty ... interesting market though. At least it demonstrates the power of image and public perception.
Or is it? I find my country's society mind-boggling at times. I was reading comments in a phone store about various phones and found out that the S20 FE is a "budget" phone and that the A52 is basically a poverty phone. Those people were talking about Z Flips and Folds as if they were somewhat expensive but normal, while the average wage in this country is less than half of what the latest Z Flip costs. And yet this exact same country loves to bitch and whine about how everything is bad, how everyone is poor or close to poverty. I really don't understand Lithuanians. It makes me think that buying a 5950X may be far more common than I would like to admit and that those two stores may be a reasonable reflection of society.

The turnaround in these things in the past few years has been downright mind-boggling, from people viewing AMD at best as the value option to now being (for many) the de-facto choice due to a perception of great performance and low pricing, which ... well, isn't true any more :p Public perception is never accurate, but this turnaround just shows how slow it can be to turn, how much momentum and inertia matters in these things, and how corporations know to cash out when they get the opportunity.
I generally dislike public perception, but you have to admit that whatever AMD did with marketing was genius. People drank AMD's kool-aid about those barely functional, buggy excuses for CPUs and hyped Zen 1 to the moon, despite it being really similar to how FX launched: a focus on more cores, poor single threaded performance, worse power consumption than Intel. I genuinely thought that Ryzen would be FX v2, but nah. Somehow buying several kits of RAM to find one compatible, having the computer turn on once in 10 tries and it being overall inferior to Intel suddenly became acceptable, and not only that, but desirable. Later gens were better, but it was the first gen that built most of Ryzen's reputation. And people quickly became militant against the idea that Intel was still better; some of them would burn you at the stake for saying such heresy. And now people are surprised that Intel is good again, as if Intel hadn't been the clear market leader for half a century.
 