Nah, this is a new architecture. They eventually have to release the product, knowing full well that they could get more out of it. If they wait too long, they miss their launch window and sales seasons. They very likely got Arrow Lake to a mostly stable place and put it on the market. I don't doubt they have things to work on, and I would not be surprised if they get more performance out of the product. That said, I wouldn't buy this product on such hopes.
By the time they get the first engineering samples, it's already too late for a major redesign, as that would add years to the development cycle. And by the time they chose the stepping to send out as qualification samples (several months ago), they had already set the clock speeds and knew the performance characteristics exactly. Adding an extra stepping or two before release wouldn't change anything fundamental, just very minor tweaks or bugfixes.
And to reiterate my main point: software isn't going to change much. What we see in Linux tests (e.g. from Phoronix) is probably the best case we can get. It's a small step forward, but mostly this architecture is laying the groundwork for future generations, which buyers of Arrow Lake will see no benefit from.
I imagine a world where Intel only released processors with the P-cores. They had real gains.
There is Xeon W, which isn't that expensive considering the pricing of "high-end" mainstream motherboards these days.
But Intel (and AMD) is missing out on a huge opportunity by not having a proper HEDT platform any more, a platform like:
- CPUs starting at 8 P-cores, decent (but consistent) clock speeds, 250 W TDP (without crazy boost), smaller socket with "standard" cooler support.
- 4-channel unregistered ECC/non-ECC memory
- ~64 CPU PCIe lanes
- Great motherboards at ~$400.
A platform like that would sell great compared to the increasingly useless and unbalanced mainstream platforms from either vendor.
Eight threads is not enough for modern games ported from CPUs with potentially 16 threads, which means games will inevitably drop back to the E-cores and suddenly you'll have screwy 1% lows.
<snip>
Whoever at Intel thought a high-performance enthusiast chip could run off only 8 P-core threads should have been fired yesterday.
In many cases SMT actually makes gaming worse: multiple threads competing for the same core create more latency and inconsistency than a split between P-cores and E-cores, and a hybrid architecture would in fact perform better if Windows weren't running an antiquated kernel.
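To make the "competing for a core" point concrete, here's a rough sketch (my own illustration, not anything from Intel or Windows; the core number is just a placeholder) of keeping a latency-critical thread on one logical CPU so it never fights an SMT sibling for the same physical core:

```cpp
#include <pthread.h>
#include <sched.h>   // cpu_set_t, CPU_ZERO, CPU_SET (glibc extensions)

// Restrict a thread to a single logical CPU. Which logical CPUs share a
// physical core is machine-specific; see
// /sys/devices/system/cpu/cpuN/topology/thread_siblings_list.
static void pin_to_core(pthread_t t, int core_id)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core_id, &set);
    pthread_setaffinity_np(t, sizeof(set), &set);
}

int main()
{
    pin_to_core(pthread_self(), 2);  // core 2 is a made-up example
    // ... game / render loop would run here ...
}
```

It's only a sketch (compile with -pthread), but it shows why two hot threads squeezed onto the same physical core trip over each other, while two threads on separate P-cores don't.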
Even those games which may use more than 8 threads wouldn't have all of them synchronized, as that scales terribly. Usually you might see 2-3 threads completely pegged and synchronized, and the rest as async worker threads.

One thing of note is how much more consistent games could be with better software. Some years ago I was working on a rendering engine and testing it on Linux with the standard kernel and a "semi-realtime" kernel, and the difference was astounding. While the realtime kernel probably lost ~0.5% average FPS, it was silky smooth and nearly all signs of microstutter disappeared. Applications were smoother too. The disadvantage: much higher idle load, which is probably why we don't see this shipped as standard. But at the very least, it goes to show that things could be so much better than the jerky, stuttery mess we know as Windows today.
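For what it's worth, you can get a taste of that behaviour even on a stock kernel by asking for realtime scheduling on the frame-critical thread. A minimal sketch (the priority value is arbitrary, and it needs CAP_SYS_NICE or an rtprio rlimit to succeed):

```cpp
#include <pthread.h>
#include <sched.h>
#include <cstdio>

// Move a thread to SCHED_FIFO so it is no longer time-sliced against ordinary
// background tasks. Priority 50 is just an example value.
static bool make_realtime(pthread_t t)
{
    sched_param p{};
    p.sched_priority = 50;
    return pthread_setschedparam(t, SCHED_FIFO, &p) == 0;
}

int main()
{
    if (!make_realtime(pthread_self()))
        std::fprintf(stderr, "could not switch to SCHED_FIFO (missing privileges?)\n");
    // ... render loop would run here ...
}
```

That's not the same thing as a realtime kernel, which also limits how long the kernel itself can hold the CPU, but it illustrates where the frame-pacing gains come from.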
Agreed with everything up until this bit. Why is this still a point of contention? Low-res CPU benchmarks are an easily controlled testing environment for CPU performance. Nothing more, nothing less.
And while they may not be a big deal for single-player games at higher resolutions, those performance deltas absolutely can matter for online play, particularly MMOs.
We've had this silly argument for probably two decades.
When you run benchmarks with unrealistic hardware/software configurations, you are not eliminating the GPU as a factor; you are in fact introducing artificial bottlenecks which real-world users will never run into. Only people who don't know how software scales would think this is a good idea. How a CPU performs in workloads far removed from anything you will ever actually use it for shouldn't determine your purchasing decisions. And it's not going to tell you anything useful about which CPU will offer better longevity in gaming performance either. On the contrary, when future games increase their computational load, the CPU with more computational power is going to pull ahead of the weaker CPU with lots of L3 cache.
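For anyone following along, the model both sides are arguing over is roughly that frame time is bounded by whichever of the CPU or GPU takes longer per frame. A toy illustration with made-up numbers:

```cpp
#include <algorithm>
#include <cstdio>

int main()
{
    const double cpu_ms      = 6.0;   // CPU work per frame (roughly resolution-independent)
    const double gpu_4k_ms   = 14.0;  // GPU work per frame at 4K
    const double gpu_720p_ms = 3.0;   // GPU work per frame at 720p

    // Frame rate is limited by the slower of the two per frame.
    std::printf("4K:   ~%.0f fps (GPU-bound)\n", 1000.0 / std::max(cpu_ms, gpu_4k_ms));
    std::printf("720p: ~%.0f fps (CPU-bound)\n", 1000.0 / std::max(cpu_ms, gpu_720p_ms));
}
```

Dropping the resolution shrinks the GPU term until the CPU term dominates, which is why low-res numbers separate CPUs so cleanly, and also why those gaps only show up in practice once the GPU gets fast enough, or the game's CPU load grows enough, for the CPU to become the limit.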