There have been references in early official documentation, drivers, etc. to an "Ice Lake-X", though it never materialized. That's most likely because the Ice Lake-SP/X core couldn't reach decent clock speeds (as seen with the Xeon W-3300 family), in fact lower than its predecessor Cascade Lake-SP/X, making it fairly uninteresting for the workstation/HEDT market. Ice Lake has worked well for servers, though.
X699 will be based on Sapphire Rapids, which is in the same Golden Cove family as Alder Lake. Hopefully it will boost >4.5 GHz reliably.
Just goes to show that high end MSDT is taking over where HEDT used to have its niche. The space between "server/datacenter chip" and "16c24t high clocking new arch MSDT chip" is pretty tiny, both in relevant applications and customers. There are a few, but a fraction of the old HEDT market back when MSDT capped out at 4-6 cores.
Really?
Kingston, Corsair, Crucial and most of the rest have 3200 kits. These are big sellers and usually at great prices.
At JEDEC speeds? That's weird. I've literally never seen a consumer-facing kit at those speeds. Taking a quick look at Corsair DDR4-3200 kits (any capacity, any series) on a Swedish price comparison site doesn't give a single result that isn't c16 on the first page of results (48 different kits) when using the default sorting (popularity). Of course there will also be c14 kits for the higher end stuff.

Looking at one of the biggest Swedish PC retailers (inet.se), all DDR4-3200, sorted by popularity (i.e. sales), the first result that isn't c16 is the 9th one, at c14. Out of 157 listed results, the only ones at JEDEC speeds were SODIMMs, with the rest being c16 (by far the most), c14 (quite a few), and a single odd G.Skill TridentZ at c15. Of course this is just one retailer, but it confirms my previous experiences at least.
My current home development machine (5900X, Asus ProArt B550-Creator, Crucial 32 GB CT2K16G4DFD832A) runs 3200 MHz at CL22 flawlessly. CL20 would be better, but that's what I could find in stock at the time. But running overclocked memory on a work computer would be beyond stupid; I've seen how much file corruption and how many compilation failures it causes over time. An overclock isn't 100% stable just because it passes a few hours of stress tests.
XMP is generally 100% stable though, unless you buy something truly stupidly fast. Of course you should always do thorough testing on anything mission critical, and running JEDEC for that is perfectly fine - but then you have to work to actually find those DIMMs in the first place.
There is one flaw in your reasoning:
While the E cores are theoretically capable of handling a lot of lighter loads, games are super sensitive to timing issues. So even though most games only have 1-2 demanding threads and multiple light threads, the light threads may still be timing sensitive. Depending on which thread it is, delays may cause audio glitches, networking issues, IO lag, etc. Any user application should probably "only" run on P cores to ensure responsiveness and reliable performance. Remember that the E cores share L2, which means the worst-case latency can be quite substantial.
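For what it's worth, if a developer really wants a given thread kept off the E cores, the affinity APIs make that fairly easy. Here's a minimal Linux sketch using pthread_setaffinity_np; the assumption that logical CPUs 0-15 map to the P-core threads matches a 12900K's usual enumeration, but you'd want to verify with lscpu on your own box:

```cpp
// Minimal sketch (Linux): keep a latency-sensitive worker (audio, netcode, IO)
// off the E cores by pinning it to the P-core logical CPUs.
// Assumption: logical CPUs 0-15 are the P-core threads (typical 12900K layout);
// check lscpu before relying on this on another system.
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>

void latency_sensitive_worker() {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 16; ++cpu)   // assumed P-core logical CPUs 0-15
        CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    // ... timing-sensitive work goes here (audio mixing, net tick, etc.) ...
    std::puts("worker pinned to assumed P-core CPUs 0-15");
}

int main() {
    std::thread t(latency_sensitive_worker);
    t.join();
    return 0;
}
```

Games and audio middleware can do the same internally, which is presumably what Thread Director and the scheduler hints are meant to make unnecessary.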
You have a point here, though it depends on whether the less intensive threads for the game are latency-sensitive or not. They don't necessarily have to be - though audio processing definitely is, and tends to be one such thing.
Core-to-core latencies for E-to-E transfers aren't that bad though, at a ~15 ns penalty compared to P-to-P, P-to-E or E-to-P. Memory latency also isn't that bad, at just an extra 8 ns or so. The biggest regression is the L2 latency (kind of strangely?), which is nearly doubled. I guess we'll see how this plays out when mobile ADL comes around - from current testing it could go either way. There's definitely the potential for a latency bottleneck there though, you're right about that.
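If anyone wants to reproduce those kinds of numbers themselves, a simple atomic ping-pong between two pinned threads gives a rough core-to-core figure. This is just a sketch (Linux), and the CPU IDs are assumptions - pick one P-core and one E-core logical CPU from lscpu on your own machine:

```cpp
// Rough core-to-core latency ping-pong: two threads pinned to chosen logical
// CPUs bounce a flag back and forth; half the round-trip time approximates
// the one-way core-to-core latency. CPU numbers 0 and 16 are assumptions
// (a P-core thread and an E core on a typical 12900K enumeration).
#include <pthread.h>
#include <sched.h>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

static std::atomic<int> flag{0};
constexpr int kIters = 1000000;

static void pin_to(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    std::thread pong([] {
        pin_to(16);                       // assumed E core
        for (int i = 0; i < kIters; ++i) {
            while (flag.load(std::memory_order_acquire) != 1) {}
            flag.store(0, std::memory_order_release);
        }
    });

    pin_to(0);                            // assumed P core
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        flag.store(1, std::memory_order_release);
        while (flag.load(std::memory_order_acquire) != 0) {}
    }
    auto t1 = std::chrono::steady_clock::now();
    pong.join();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
    std::printf("~%.1f ns per one-way hop\n", ns / kIters / 2.0);
}
```

Results will jump around depending on boost behaviour and what else is running, so it's only good for ballpark comparisons between core pairs.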
This is what I mean. So to reach Intel's/AMD's maximum supported RAM speed of 3200 MHz, you need XMP/DOCP; you don't have a choice. Or you could go with your DIMMs' standard speeds of 2400-2666 MHz, which is also advised against.
You could always track down one of the somewhat rare JEDEC kits - they are out there. OEM PCs also use them pretty much exclusively (high end stuff like Alienware might splurge on XMP). System integrators using off-the-shelf parts generally use XMP kits though, as they don't get the volume savings of buying thousands of JEDEC kits.
Why not just clamp down on background junk then? Seems cheaper and easier to do that than to try to buy your way out of it in the form of CPU cores. I personally would be more than fine with 2 E cores.
...because I want to
use my PC rather than spend my time managing background processes? One of the main advantages of a modern multi-core system is that it can handle these things without sacrificing too much performance. I don't run a test bench for benchmarking, I run a general-purpose PC that gets used for everything from writing my dissertation to gaming to photo and video editing to all kinds of other stuff. Keeping the background services for the various applications used for all of this running makes for a much smoother user experience.
I still want a 4P/2E chip. You won't convince me that it isn't the best budget setup.
We'll see if they make one. I have my doubts.
No, I just showed you why I don't care about server archs and why they have no place in the HEDT market yet, and on top of that, I clearly said that the TR 3970X is my go-to choice for an HEDT chip right now, not anything Intel.
And that has literally the same meaning. You need big MT performance for big tasks. Only consumers care excessively about single-threaded stuff. A prosumer may be better served by a 3970X rather than a 5950X: more lanes, more RAM, HEDT benefits, etc.
But there aren't that many relevant tasks that scale well past 8 cores, let alone 16. That's what we've seen in Threadripper reviews since they launched: if you have the workload for it they're great, but those workloads are quite limited, and outside of those you're left with rather mediocre performance, often beaten by MSDT parts.
An ultimate workhorse can be an expensive toy. Some people use Threadrippers for work, while others buy them purely for fun. Nothing contradictory about that.
Did you even read what I wrote, the sentences you just quoted? I literally said that they
can be the same, but that they aren't
necessarily so.
You presented them as if they were
necessarily the same, which is not true. I never said anything like those being opposite.
Really? All those Xeon bros with Sandy, Ivy and Haswell-E chips are not that small a group, and the whole reason to get those platforms was mostly to not buy a 2600K, 3770K or 4770K. Typical K chips are cool, but Xeons were next level. Nothing changes with Threadripper.
Except it does: back then, those chips were the only way to get >4 cores and >8 threads, which had
meaningful performance gains in common real-world tasks. There's also a reason so many of those people still use the same hardware: they keep up decently with current MSDT platforms, even if they are slower. The issue is that the main argument - that the increase in core count is useful - is essentially gone unless you have a very select set of workloads.
Their only job is just to put more cores on mainstream stuff. They develop an architecture and then scale it to different users. The same Zen works for an Athlon buyer and for an Epyc buyer. There aren't millions of dollars in expenditures specifically for HEDT anywhere. And unlike Athlon or Ryzen buyers, Threadripper buyers can and are willing to pay a high profit margin, making HEDT chip development far more attractive to AMD than Athlon or Ryzen development. Those people also don't need a stock cooler or much tech support, which makes it even cheaper for AMD to make them.
Wait, do you think developing a new CPU package, new chipset, new platform, BIOS configuration, and everything else is free? A quick Google search tells me a senior hardware or software engineer at AMD earns on average ~$130 000. That means tasking eight engineers with this for a year is a million-dollar cost in salaries alone, before accounting for all the other costs involved in R&D. And even a "hobby" niche platform like TR isn't developed in a year by eight engineers.
Maybe two PCs is a decent idea then, but anyway, those multithreaded tasks aren't so rare in benchmarks. I personally would like to play around with a 3970X far more in BOINC and WCG. The 3970X's single-core performance is decent.
Well, that places you in the "I want expensive tools to use as toys" group. It's not the smallest group out there, but it doesn't form a viable market for something with multi-million dollar R&D costs.
But the fact that it's impossible to cool adequately doesn't mean anything, right? And the fact that it doesn't beat the 5950X decisively is also fine, right? Premium or not, I wouldn't want a computer that fries my legs just to beat the 5950X by a small percentage.
Wait, and an overclocked 3970X is easy to cool?
I mean, you can at least try to be consistent with your arguments.
Well, you literally said here that it's in hardware, so sure, software can't fix that, and you clearly say here that it may be fixed after a few gens. Cool, I will care about those gens then; no need to care about the experimental 12900K.
What? I didn't say that. I said there is zero indication of there being significant issues with the hardware part of the scheduler. Do you have any data to suggest otherwise?
You posted a link with 11900K benchmarks, not 10900K, making all your points here invalid. The 11900K is inferior to the 10900K due to 2 cores being chopped off for tiny IPC gains. 2C/4T can make a difference. They more or less close 20% of the gap with the 12900K, and then you only need 10% more performance, which you can get by simply raising PLs to 12900K levels; you might not even need to overclock the 10900K to match the 12900K.
The 10900K is listed in the overall result comparison at the bottom of the page, which is where I got my numbers from. The 11900K is indeed slower in the INT test (faster in FP), but I used the 10900K results for what I wrote here. The differences between the 10900K and 11900K are overall minor. Please at least look properly at the links before responding.
You seem to still apply the value argument and the stability argument to literally the maximum e-peen computer imaginable. If you have tons of cash, you can just have others set it up for you, particularly well-insulated phase-change cooling.
Wait, weren't you the one arguing that the 12900K makes no sense? You're wildly inconsistent here - on the one hand you're arguing that some people don't care about value and want extreme toys (which would imply things like exotic cooling and high OCs, no?), and on the other you're arguing for the practical value of high thread count workstation CPUs, and on the third (yes, you seem to be sprouting new hands at will) you're arguing for some weird combination of the two, as if there is a significant market of people running high-end, massively overclocked workstation parts
both for prestige
and serious work. The way you're twisting and turning to make your logic work is rather confusing, and speaks to a weak basis for the argument.
Maybe, but my point was about the maximum computer that money can buy. Value be damned. A 5950X or 12900K is not enough. Gotta OC that HEDT chip for maximum performance.
But that depends
entirely on your use case. And there are many, many scenarios in which a highly overclocked (say, with a chiller) 12900K will outperform an equally OC'd 3970X. And at those use cases and budget levels, the people in question are likely to have access to both, or to pick according to which workloads/benchmarks interest them.
And, to be clear, I'm not even arguing that the 12900K is especially good! I'm just forced into defending it due to your overblown dismissal of it and the weird and inconsistent arguments used to achieve this. As I've said earlier in the thread, I think the 12900K is a decent competitor, hitting where it ought to for a chip launching a year after its competition. It's impressive in some aspects (ST performance in certain tasks, E-core performance, MT in tasks that can make use of the E cores), but downright bad in others (overall power consumption, efficiency of the new P-core arch, etc.). It's very much a mixed bag, and given that it's a hyper-expensive i9 chip, it's also only for the relatively wealthy and especially interested. The i5-12600K makes far more sense in most ways, and is excellent value compared to most options on the market today - but you can find Ryzen 5 5600Xes sold at sufficiently lower prices for those to be equally appealing depending on your region. The issue here is that you're presenting things in a
far too black-and-white manner, which is what has led us into this weird discussion where we're suddenly talking about HEDT CPUs and Athlon 64s in successive paragraphs. So maybe, just maybe, try to inject a bit of nuance into your opinions and/or how they are presented? Because your current black-and-white arguments just miss the mark.
XMP is still out of spec, bear that in mind; the DIMMs are factory tested, but not the rest of the system that goes with them.
I had to downclock my 3200 CL14 kit to 3000 MHz on my 8600K because its IMC couldn't handle 3200 MHz.
On my Ryzen system my 3000 CL16 kit worked fine on Windows for years, but after I installed Proxmox (Linux), the RAM stopped working properly and was unstable. Sure enough, Google's stress test yielded errors until I downclocked it to 2800 MHz, which is still out of spec for the CPU.
Always stress test RAM using OS-based testing (not MemTest86, which is only good at finding hardware defects); don't assume XMP is stable.
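To illustrate the point about testing under the OS rather than from a boot stick, even something as crude as the loop below (allocate a big chunk, write a pattern, read it back, repeat) can catch the kind of instability described above. It's only a sketch under loose assumptions - dedicated tools like memtester, Prime95 blend or y-cruncher hammer the memory subsystem far harder - and the 4 GiB buffer size and pass count are arbitrary placeholders (shrink the buffer if you have less free RAM):

```cpp
// Bare-bones OS-level RAM soak sketch: fill a large buffer with a per-pass
// pattern, then verify it. Not a replacement for proper stress tools,
// just a demonstration of testing memory from inside the running OS.
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t words = (4ull << 30) / sizeof(std::uint64_t);  // ~4 GiB
    std::vector<std::uint64_t> buf(words);

    for (int pass = 0; pass < 10; ++pass) {
        const std::uint64_t pattern =
            0xA5A5A5A5A5A5A5A5ull ^ static_cast<std::uint64_t>(pass);
        for (std::size_t i = 0; i < words; ++i)
            buf[i] = pattern ^ i;                       // write phase
        for (std::size_t i = 0; i < words; ++i)
            if (buf[i] != (pattern ^ i)) {              // verify phase
                std::printf("mismatch at word %zu on pass %d\n", i, pass);
                return 1;
            }
        std::printf("pass %d clean\n", pass);
    }
    return 0;
}
```

Running something like this (or better, the real tools) alongside normal background load is a much closer match to how XMP instability actually shows up than a pre-boot memory test.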
I know, I never said that XMP wasn't OC after all. What generation of Ryzen was that, btw? With XMP currently, as long as you're using reasonably specced DIMMs on a platform with decent memory support, it's a >99% chance of working. I couldn't get my old 3200c16 kit working reliably above 2933 on my previous Ryzen 5 1600X build, but that was solely down to it having a crappy first-gen DDR4 IMC, which 1st (and to some degree 2nd) gen Ryzen was famous for. On every generation since you've been able to run 3200-3600 XMP kits reliably on the vast majority of CPUs. But I agree that I should have added "as long as you're not running a platform with a known poor IMC" to the statement you quoted. With Intel at least since Skylake and with AMD since the 3000-series, XMP at 3600 and below is nearly guaranteed stable. Obviously not 100% - there are always outliers - but as close as makes no difference. And, of course, if memory stability is
that important to you, you really should be running ECC DIMMs in the first place.