Doesn't matter to me anyway. I clearly said that if you just leave stuff open in the background, that's not a problem; problems arise if you try to do something really stupid like running Photoshop and playing a game at the same time. Common sense says you shouldn't be doing something like that, and that's why people don't. 2 E cores are enough for background gunk, unless you are trying to achieve something that you clearly shouldn't.
And a) I never brought that up as a plausible scenario, so please put your straw man away, and b) I explained how even with a completely average setup and workload, including a relatively normal amount of common background applications, 2 E cores can still be too few and cause intermittent issues. Which, given their very small die area requirements, makes four a good baseline. Removing two of those only frees up room for slightly more than half of another P core, so a 2P4E die will be much smaller than a 4P2E die in terms of area spent on CPU cores. A bit simplified (the 4 E cores look slightly larger than a P core), but let's say 1 E core w/cache is X area and 1 P core w/cache is 4X area. That makes the 2P4E layout 12X, while the 4P2E layout is 18X - 50% larger.
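For clarity, here's that area arithmetic as a minimal sketch (the X unit is the rough approximation above, not a measured die figure):

```python
# Rough core-area comparison using the approximation above:
# 1 E core (w/ cache) ~= X, 1 P core (w/ cache) ~= 4X.
E_AREA = 1.0  # one "X"
P_AREA = 4.0  # ~4X

def layout_area(p_cores: int, e_cores: int) -> float:
    """Total CPU-core area of a layout, in units of X."""
    return p_cores * P_AREA + e_cores * E_AREA

area_2p4e = layout_area(2, 4)  # 2*4X + 4*1X = 12X
area_4p2e = layout_area(4, 2)  # 4*4X + 2*1X = 18X
print(f"4P2E is {area_4p2e / area_2p4e - 1:.0%} larger than 2P4E")  # -> 50% larger
```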
As was brought up above, there are questions regarding the latency characteristics of a layout like this, but latency benchmarks indicate that things might not be as bad as some might fear.
These are workloads that scale well, and ones I have done for a considerable amount of time.
Yes, I never said there weren't. I just said they are relatively few and relatively rare, especially in an end-user usage scenario.
They are far more useful benchmarks than what Anandtech actually tests.
That's your opinion, and as seen below, an opinion that seems rather uninformed.
I like BOINC and contribute from time to time. From CPU loads alone I have accumulated over 1 million points, most of which were achieved with very modest chips like the FX 6300, Athlon X4 845 or even a Turion X2 TL-60. It took me months to achieve that, and my purchase of an i5 10400F so far only makes up 10% of all that effort, despite it being the fastest chip I have. For me that's a very meaningful workload.
Cool for you, I guess? As I said: niche workload, with niche hardware, for niche users. No mainstream or mass-market applicability.
Next is Handbrake. You argue that you only need it for short bursts of time, but I personally found it useful for converting whole seasons of shows, and if you want that done with high quality and a good compression ratio, it can take days. Want to do this for several shows? It can take weeks. Obviously it would be a good trade-off to just use the GPU, but then you can't achieve quality or compression as good, and even then it may take half a day to transcode a whole show. So if someone does this stuff with any frequency, they should think about getting a chip with stronger MT performance, or just buy a fast graphics card, or consider a Quadro (now RTX A series).
*clears throat* Apparently I have to repeat myself:
Most people don't transcode their entire media library weekly.
Which is essentially what you're positing here. And, as you bring up yourself, if this is a relevant workload for you, buy an Intel CPU with QuickSync, an AMD APU or GPU with VCN, or an Nvidia GPU with NVENC. You'll get many times the performance for less power draw, and even a lower cost than one of these CPUs (in a less insane GPU market, that is).
And again: niche workload for niche users. Having this as an occasional workload is common; having this as a common workload (in large quantities) is not.
The next load is VMs. For me VMs are cool for trying out operating systems, but besides that, some BOINC projects require you to run BOINC in VMs, and even projects that aren't exclusively VM-only sometimes give more work to Linux than to Windows. And then you need RAM and cores, and you can expect to keep some cores permanently pegged at 100% utilization. A CPU with more cores (not threads) allows you to also use your computer, instead of leaving it working as a server.
Wait, you have 100% CPU utilization in your VMs from trying out OSes? That sounds wrong. You seem to be contradicting yourself somewhat here. And again: if your workload is "I run many VMs with heavy multi-core workloads", you're well and truly into high end workstation tasks. That is indeed a good spot for HEDT (or even higher end) hardware, but ... this isn't common. Not even close.
Better yet, you have enough cores to run BOINC and to run BOINC in a VM.
A niche within a niche! Even better!
And then we have 7zip. I will be brief: if you download files from the internet, you will most likely need it very often, and often for big files. Some games from Steam are compressed and have to be decompressed. You may also use NTFS compression on an SSD.
I have never, ever, ever heard of anyone needing a HEDT CPU for decompressing their Steam downloads. I mean, for this to be relevant you would need to spend far more time downloading your games than actually playing them. Any run-of-the-mill CPU can handle this just fine. Steam decompresses on the fly, and your internet bandwidth is always going to bottleneck you more than your CPU's decompression rate (unless you're Linus Tech Tips and use a local 10G cache for all your Steam downloads). The same goes for whatever other large-scale compressed downloads even an enthusiast user is likely to do.
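As a rough back-of-the-envelope sketch of that bottleneck point (both throughput figures here are illustrative assumptions, not measurements):

```python
# Which side bottlenecks a compressed download? Purely illustrative numbers.
download_mbit_s = 1000                # assume a fast 1 Gbit/s connection
download_mb_s = download_mbit_s / 8   # ~125 MB/s of compressed data arriving
decompress_mb_s = 500                 # assumed decompression throughput of a mid-range CPU

bottleneck = "network" if decompress_mb_s > download_mb_s else "CPU"
print(f"Bottleneck: {bottleneck}")    # -> network, under these assumptions
```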
All in all, depending on the user, MT tasks and their performance can be very important and, to them, certainly not a rare need.
Yes, I have said the whole time this depends on the use case. But you're completely missing the point here: actually seeing a benefit from a massively MT CPU requires you to spend a lot of time on these tasks every day, especially when accounting for the high core count CPU being slower for all other tasks. Let's say you use your PC for both work and fun, and your work includes running an MT workload that scales perfectly with added cores and threads. Let's say this workload takes 2h/day on a 3970X.
Let's say that workload is a Cinema4D render, which the TR performs well in overall. Going from the relative Cinebench R20 scores, the same job would take 54% more time on the 12900K, or slightly over 3h. That's a significant difference, and the choice of the HEDT CPU would likely be warranted overall, as the extra hour would eat into either work hours or free time.
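Spelling out that scaling (the 54% figure is the relative Cinebench R20 result referenced above; the 2h/day is the assumed workload from the example):

```python
# Scale a fixed render job by relative multi-threaded performance (illustrative).
hours_on_3970x = 2.0     # assumed daily render time on the TR 3970X
slowdown_12900k = 1.54   # 54% more time, per the relative Cinebench R20 scores

hours_on_12900k = hours_on_3970x * slowdown_12900k
print(f"{hours_on_12900k:.2f} h/day on the 12900K")                # ~3.08 h, slightly over 3 h
print(f"extra: {hours_on_12900k - hours_on_3970x:.2f} h per day")  # ~1.08 h/day
```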
But then let's consider the scenario at hand:
-how many people use a single PC for both rendering workloads this frequent and on their free time?
-how many people render things this large this frequently at all?
-how many people with these needs wouldn't just get a second PC, including the redundancy and stability this would bring? (Or hire time on a render farm?)
-how many people would care if their end-of-the-day render took an extra hour, when they would likely be doing something else (eating dinner or whatever)?
-how many people with this specialized a use-case wouldn't just set the render to run when they go to bed, which renders any <8h-ish render time acceptable?
This is of course not the only use case, and there are other similar ones (compiling etc. - where similar questions would typically be relevant), but ultimately, running these workloads frequently enough and with sufficiently large workloads for this to represent significant time savings, and to make up for the worse performance in general usage? That's quite a stretch. You're looking at a very small group of users.
(You can also see from the linked comparison above that the 12900K significantly outperforms the 3970X in several of your proposed workloads, such as Handbrake video transcoding.)
And if they have a need to do it professionally, then a faster chip is most likely a financial no-brainer.
Yes, and in that case they have a workstation workload, and are relatively likely to buy a workstation to do so. That's expensive, but at that point you need the reliability and likely want a service agreement. And at that point HEDT is likely the budget/DIY option, with pre-made workstations (TR-X or Xeon) being the main choice. This of course depends on whether you're a freelancer or working for a larger company etc, but for the vast majority of freelancers anything above a 5950X would be silly levels of overkill.
I personally found that the most demanding tasks are well multithreaded and even then take ages to complete. Just like I thought in 2014 that multithreaded performance was very important, maybe even at the cost of single threaded performance, so I think today, and today that's even more obvious.
But you're treating all MT performance as if it scales perfectly. It does not. There are many, many real-world applications that fail to scale meaningfully above a relatively low core count, while those that scale massively are overall quite few.
FX chips were very close to i7s in multithreaded performance, and since they were a lot cheaper, literally 2 or maybe even 3 times cheaper, they were no-brainer chips for anyone seriously interested in those workloads.
... again: I already said that. You're arguing as if I'm making black-and-white distinctions here, thereby overlooking huge portions of what I'm saying. Please take the time to actually read what I'm saying before responding.
They were also easy to overclock. As long as you had cooling, 5GHz was nothing to them, and at the roughly 100 USD price of FX 8320 chips, an i7 was a complete no-go.
But even at those clock speeds they underperformed. That's an FX-8350 at 4.8GHz roughly matching an i7-3770K (stock!) in a workload that scales very well with cores and threads (video encoding), at nearly 3x the power consumption. Is that really a good value proposition?
lol I made a mistake about i7 920, but yeah there were 6C/12T chips available. Maybe i7 960 was the lowest end hexa core. Still, those were somewhat affordable if you needed something like that.
Lowest end hex core was the i7-970, there were also 980, 980X and 990X. And these were the precursor to Intel's HEDT lineup, which launched a year later.
Phenom II X6 chips were great high core count chips, and lower end models like the 1055T were really affordable. If you overclocked one of those, you could have had an exceptional-value rendering rig for cheap. They sure did cost a lot less than i7 4C/8T parts and were seriously competitive against them. Obviously, the later-released FX chips were even better value.
Sure, they were good for those very MT-heavy tasks. They were also quite terrible for everything else. Again: niche parts for niche use cases.
Anyway, my point was that things like high core count chips existed back then and were quite affordable.
And quite bad for the real-world use cases of most users. I still fail to see the overall relevance here, and how this somehow affects whether a TR 3970X is a better choice overall than a 12900K or 5950X for a large segment of users. Intel's HEDT customer base mainly came from the absence of high-performing many-core alternatives. There were budget many-core alternatives that beat their low core count MSDT parts, but their HEDT parts drastically outperformed those in turn - at a higher cost, of course. Horses for courses, and all that.
And that's still better than making a 5950X or 5900X. Consumer platforms are made to be cheaper and to cover only a small range of power requirements, so if they make a 5950X, say that it's compatible with the AM4 socket and that any board supports it, and then some guy runs it on the cheapest A520 board, most likely it will throttle badly.
That's nonsense. Any AM4 board needs to be able to run any AM4 chip (of a compatible generation) at stock speeds, unless the motherboard maker has really messed up their design (in which case they risk being sanctioned by AMD for not being compliant with the platform spec). A low end board might not allow you to sustain the 144W boost indefinitely, but the spec only guarantees 3.4GHz, which any board should be able to deliver (and if it doesn't, that is grounds for a warranty repair). If you're not able to understand what the spec sheet is telling you and get the wrong impression, that is on you, not AMD. You could always blame the motherboard manufacturer for making a weak VRM, but then that also reflects on you for being dumb enough to pair a $750 CPU with a likely $100-ish motherboard for what must then be a relatively heavy MT workload.
If they want to avoid lawsuits, then they'd better limit their CPU range or make motherboard makers produce only more expensive boards, but that's something they can't really do, since AM4 is supposed to be a cheap, affordable and flexible platform.
Wait, lawsuits? What lawsuits? Given that this platform has been out for a year (and much longer than that if you count 16-core Zen2), those ought to have shown up by now if this was an actual problem. Looks to me like you're making up scenarios that don't exist in the real world.
Wattage marketing from the FX era is seemingly not done anymore, even though it would make perfect sense.
Because CPUs today have high boost clocks to get more performance out of the chip at stock. A high delta between base and boost clock means a high power delta as well. And since TDP (or its equivalents), to the degree that it relates to power draw at all (it doesn't really - that's not how TDP is defined, though it tends to equal the separate rating for guaranteed maximum power draw at sustained base clock), relates to base clock and not boost, a single number becomes less meaningful overall. Having two separate ratings is a much better idea - one for base, one for boost. Intel is onto something here, though I really don't like how they're making "PL1=PL2=XW" the default for K-series SKUs. If you were to mandate a single W rating for CPUs today you'd be forcing one of two things: either leaving performance on the table due to lower boost clocks, or forcing motherboard prices up, as you'd force every motherboard to be able to maintain the full boost clock of even the highest end chip on the platform. Both of these are bad ideas.
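As a toy illustration of the two-rating idea (PL1/PL2 naming follows Intel's convention; the wattages and the simplistic budget logic below are illustrative assumptions, not any vendor's actual algorithm):

```python
# Toy model of a dual power rating: sustained limit (PL1) vs. short-term boost limit (PL2).
# Numbers and policy are illustrative only.
PL1 = 125.0  # watts, sustained ("base") limit
PL2 = 241.0  # watts, short-term ("boost") limit
TAU = 56.0   # seconds of boost budget before falling back to PL1

def allowed_power(avg_power: float, seconds_boosting: float) -> float:
    """Power the CPU may draw right now under this toy policy."""
    if avg_power <= PL1 and seconds_boosting < TAU:
        return PL2   # budget left: allow the boost limit
    return PL1       # budget spent: hold the sustained limit

print(allowed_power(avg_power=110.0, seconds_boosting=10.0))  # 241.0 (boosting)
print(allowed_power(avg_power=130.0, seconds_boosting=60.0))  # 125.0 (sustained)
```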
Intel really suffers from that, with shit tier i9 K chips making H510 VRMs burn. I'm surprised that they still don't have lawsuits to deal with, considering that this is a blatant case of advertising something that can't happen. Anyway, those are the reasons not to make HEDT chips compatible with mainstream sockets.
Yes, and have I ever argued for that? No. A 141W 5950X is not an HEDT CPU, nor is a 125W Intel CPU. Their 240W boost on these chips is quite insane, and I think configuring them this way out of the box is rather desperate, but there's also an argument for the sheer idiocy of pairing a K-SKU chip with a H510(ish) board. If you think you're gaming the system by buying a dirt-cheap motherboard for your high-end CPU and then pelting that CPU with sustained high-power MT workloads, you're only fooling yourself, as you're buying equipment fundamentally unsuited for the task at hand.
I still think the power throttling we've seen on B560 boards (and below) is unacceptable, but that's on Intel for not mandating strong enough VRMs and power profiles, not on the CPUs themselves - CPUs are flexible and configurable in their power and boost behaviour.
Intel in the Sandy, Ivy and Haswell era managed to do that. That was great for consumers. All this bullshit with pushing HEDT chips onto the consumer platform does nothing good for anyone except Intel and AMD.
Except that that era was extremely hostile to consumers, limiting them to too-low core counts and forcing them into buying overpriced and unnecessary motherboards for the "privilege" of having more than four cores. I entirely agree that most consumers don't need 12 or 16 cores, but ... so what? It doesn't harm anyone that these chips are available on mainstream platforms. Quite the opposite.
Barely; it's already running obscenely hot and has clocks cranked to the moon. There's very little potential. I wouldn't overclock it, as it has two types of cores with different voltages, many frequency and voltage stages, plus many power settings in the BIOS. The 12900K is hardly tweakable unless you spend an obscene amount of time on it and then spend weeks if not months stability testing it in various loads. That's stupid and makes no sense. Might as well just leave it as it is. The 3970X is not much better than the i9, but at least it has the same type of cores, and potential to benefit from raised power limits (whatever they are called on AMD's side). The i9 12900K has them set better, therefore less potential for gains.
The E cores can't be OC'd at all, so ... you don't seem to have even read about the CPU you're discussing? And yes, this runs hot and consumes tons of power, but so does a TR 3970X. There isn't anything significant left in the tank for either of these.
Strong disagree; most tasks are super niche and quite synthetic. I wouldn't consider it a realistic test suite. I consider practical testing to be testing with the most common, widely used software. Anything else may still be practical, but due to the nature of being niche, can't honestly be said to be so.
So ... video encoding, code compilation, 3D rendering, 3D rendering with RT, and image manipulation are more niche workloads than "running several VMs at 100% CPU"? You're joking, right? Yes, SPEC CPU is also mainly geared towards scientific computation and workstation tasks, but it still represents an overall good mix of ST and MT workloads and is a decent gauge for a platform's mixed use performance - especially as it's open, controllable, and can even be compiled by the person running the workload to avoid hidden biases from the developer's side (unlike similar but closed workloads like GeekBench). Is it perfect? Of course not. What it is is possibly the best, and certainly the most controllable, pre-packaged benchmark suite available, and the most widely comparable across different operating systems, architectures and so on. It has clear weaknesses - it's a poor indicator of gaming performance, for example, as there are few highly latency-sensitive workloads in it. But it is neither "super niche" nor synthetic in any way. A benchmark based on real-world applications and real-world workloads literally cannot be synthetic, as the definition of a synthetic benchmark is that it is neither of those things.
Thank you. Took a while, but we got there.
That's going to depend on the person.
And I've never said that it doesn't. I've argued for what is broadly, generally applicable vs. what is limited and niche - and my issue with your arguments is that you are presenting niche points as if they have broad, general applicability.
Maybe. By Zen I mean Zen as an architecture family, not Zen 1. At this point, I'm not sure if Zen 2 is really that dense. New Intel chips might be denser.
Comparable, at least. But Zen cores are (at least judging by die shots) much smaller than ADL P cores, which makes for increased thermal density.
Some points you make can be solved with marketing, like making people see Intel's HEDT platform as a 5950X/Threadripper competitor. And the main reason why HEDT is losing ground is that Intel is pushing HEDT parts into the mainstream segment (where they arguably don't belong). It's not that HEDT is not important; it's just how business is done by Intel.
But ... marketing doesn't solve that. It would be an attempt at alleviating it. But if HEDT customers have been moving to MSDT platforms because those platforms fulfill their needs, no amount of marketing is going to convince them to move to a more expensive platform that doesn't deliver tangible benefits to their workflow. And the main reason why HEDT is losing ground is not what you're saying, but rather that AMD's move to first 8 and then 16 cores completely undercut the USP of Intel's HEDT lineup. Suddenly we have MSDT parts that do 90% of what HEDT used to, and a lot of it better (due to higher clocks and newer architectures), while the remaining 10% (lots of PCIe, lots of memory bandwidth) are very niche needs. Arguing for some artificial segregation into MSDT and HEDT along some arbitrary core count (what would you want? 6? 8? 10?) is essentially not tenable today, as modern workloads can scale decently to 8-10 cores, especially when accounting for multitasking, while not going overboard on cores keeps prices "moderate", including platform costs. I still think we'll find far better value in a couple of years once things have settled down a bit, but 16-core MSDT flagships are clearly here to stay. If anything, the current AMD and Intel ranges demonstrate that these products work very well, both in terms of actual performance in actual workloads for the people who want/need them and in terms of what the MSDT platforms can handle (even on relatively affordable motherboards - any $200 AM4 motherboard can run a 5950X at 100% all day every day).
Speaking about regional deals: pretty much since the C19 lockdowns started in my region (Lithuania) there has been a great shortage of Athlons, quad core Ryzens, Ryzen APUs in general, and Celerons. The Lithuanian market is now seemingly flooded with i5 10400Fs and i3 10100Fs. Anything Ryzen has a Ryzen tax, seemingly making Intel more competitive here, but in terms of sales Ryzen is winning, despite having inflated prices and only having like 2-3 different SKUs available per store. Idiots still think that it's better value than Intel. Ironically, the 5950X is seemingly a mainstream chip, as it sells the best. Yet at the same time brand new Pentium 4 chips are sold. Pentium 4s outsell i3 10100Fs and i5 11400Fs. That happened in one store, but it's still incredibly fucked up. In another store, the most sold chip is the 2600X, while the second is the 5950X. That second store doesn't have Pentium 4s, but they have refurbished Core 2 Duos. They don't sell well at all there. In Lithuania most computers sold are local prebuilts or laptops, but DIY builders are going bonkers for some reason.
Low end chips have generally been in short supply globally for years - Intel has been prioritizing their higher priced parts since their shortage started back in ... 2018? And AMD is doing the same under the current shortage. Intel has been very smart about changing this slightly to target the $150-200 market with their 400 i5s, which will hopefully push AMD to compete better in those ranges - if the rumored Zen3 price cuts come true, those 5000G chips could become excellent value propositions pretty soon.
That sounds like a pretty ... interesting market, though. At least it demonstrates the power of image and public perception. The turnaround in these things in the past few years has been downright mind-boggling, from people viewing AMD at best as the value option to now being (for many) the de-facto choice due to a perception of great performance and low pricing, which ... well, isn't true any more.
Public perception is never accurate, but this turnaround just shows how slow it can be to turn, how much momentum and inertia matter in these things, and how corporations know to cash out when they get the opportunity.