Tuesday, October 8th 2024
Intel's Core Ultra 9 285K Performance Claims Leaked, Doesn't Beat i9-14900K at Gaming
The Chinese tech press is abuzz with slides allegedly from Intel's pre-launch press deck for the Core Ultra 2-series "Arrow Lake-S." The most sensational of these are Intel's first-party performance claims for the top Core Ultra 9 285K model. There's good news and bad news. Good news first: Intel claims to have made a big leap in energy efficiency with "Arrow Lake," and the 285K should offer gaming performance comparable to the current Core i9-14900K at around 80 W lower processor power draw. But therein lies the bad news—despite claimed IPC gains for the "Lion Cove" P-core, and rumored clock speeds on par with the "Raptor Cove" P-cores of the i9-14900K, the 285K is barely any faster than its predecessor in absolute terms.
In its first-party testing, averaged across 12 game tests (whose titles we made out using Google's optical translation), Intel normalized the results to the performance of the i9-14900K. The 285K beats the i9-14900K in only four games: Warhammer 40K: Space Marine 2, Age of Mythology: Retold, Civilization VI: Gathering Storm, and F1 23. It's on par with the i9-14900K in Red Dead Redemption 2, Total War: Pharaoh, Metro Exodus, Cyberpunk 2077, Black Myth: Wukong, and Rainbow Six Siege. It's slower than the i9-14900K in Far Cry 6, FF XIV, F1 24, and Red Dead Redemption 2. Averaged across this bench, the Core Ultra 9 285K ends up roughly on par with the Core i9-14900K in gaming.
Intel also compared the 285K to AMD's Ryzen 9 9950X, and interestingly, even the Ryzen 9 7950X3D. The Ryzen 9 7950X3D isn't AMD's fastest gaming processor (that would be the 7800X3D), but Intel chose it so it could compare the 285K across both gaming and productivity workloads. The 285K is shown to be significantly slower than the 7950X3D in Far Cry 6 and Cyberpunk 2077. It's on par in Assassin's Creed Shadows and Civilization VI: Gathering Storm, and only gets ahead in Rainbow Six Siege. Then there's the all-important comparison with the current AMD flagship, the Ryzen 9 9950X "Zen 5." The 9950X is shown to be on par with or beating the 285K in 8 out of 12 game tests. And the 9950X is the regular version of "Zen 5," without the 3D V-Cache.
All is not doom and gloom for the Core Ultra 9 285K: the significant IPC gains Intel made with the "Skymont" E-cores mean that the 285K pulls significantly ahead of the 7950X3D in multithreaded productivity workloads, as shown with Geekbench 4.3, Cinebench 2024, and POV-Ray.
Sources:
VideoCardz, Wxnod (Twitter)
114 Comments on Intel's Core Ultra 9 285K Performance Claims Leaked, Doesn't Beat i9-14900K at Gaming
If multithreading is to provide significant gains in gaming performance in the future, there would have to be different kinds of changes than we've seen so far. As latency quickly adds up when trying to synchronize an increasing number of threads, efforts to reduce latency or even "guarantee" deadlines would be required: firstly, a much faster OS scheduler, and probably some semi-"RT"-like features so threads are undisturbed by other tasks; secondly, graphics drivers etc. would need to behave more like in an RT system; and thirdly, possibly hardware changes to streamline communication.
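The point about synchronization latency adding up can be sketched with a toy Amdahl-style model. This is purely illustrative (the function and the numbers are my own assumptions, not measurements): each extra thread contributes a fixed synchronization cost, so effective speedup rises, peaks, and then collapses as thread count grows.

```python
def effective_speedup(work_us: float, threads: int, sync_us_per_thread: float) -> float:
    """Toy model: the work splits evenly across threads, but every thread
    adds a fixed synchronization cost, so total frame time is the
    parallel portion plus the accumulated sync latency."""
    parallel_us = work_us / threads
    sync_us = sync_us_per_thread * threads
    return work_us / (parallel_us + sync_us)

# Assume 1 ms of per-frame CPU work and 5 us of sync cost per thread:
for n in (2, 8, 32, 128):
    print(f"{n:3d} threads -> {effective_speedup(1000, n, 5):.2f}x")
```

With these assumed numbers the speedup peaks around 8 threads (~6x) and falls back below 2x at 128 threads, which is the "latency quickly adds up" argument in miniature.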
But while multithreading often gets the most attention, optimizing for ILP is much more important for performance scaling, whether it's for gaming or user-interactive applications. For smaller work chunks that need to be synchronized, multithreading can only get you so far before overhead or latency bottlenecks it, but modern CPUs are also increasingly superscalar, which means the relative performance gains from writing clean, efficient code are larger than ever. And while CPU front ends are increasingly advanced—e.g., Meteor Lake improves branch misprediction recovery—the gains from saturating the pipeline are even greater. The bigger problem here is the software practices that are popular today, especially how OOP, abstraction, and generalization are employed. It is remarkable how much having dense logic affects CPU performance. At some point I would expect compilers and potentially ISAs to evolve in order to scale with wider CPU architectures, hopefully in a better way than Itanium. :)
If anything, I'm hoping for more consistent performance. Pushing clock speeds too far leads to very unstable clock speeds, and at least for some of us that may be more annoying than slightly lower but more consistent performance. That is at least my impression from comparing Raptor Lake (i5-13600K) to Comet Lake (i7-10700K) at work; a purely anecdotal and subjective impression, even though Raptor Lake has clearly higher peak performance. Overhype serves no one.
Unlike most, I'm not that disappointed with Zen 5, and I'm very curious to see how it performs in upcoming Threadripper models.
With HT removed, it will be slower in MT workloads. ST performance doesn't matter much these days.
The only advantages are native support for DDR5-6400 memory (raised from 5600 MT/s) and, I think, a dedicated PCIe 5.0 NVMe port, if someone plans to upgrade now.
Oh, and the theoretical absence of the Vmin Shift hardware bug.
Maybe in the future only MT workloads will matter, and then we will all have slower 100+ core chips, but I doubt it.
The 285K is NOT appealing. It has a great Cinebench score, but runs games like Cyberpunk 2077 like shit.
FnA, that's a nail in the coffin pre-release!!! OUCH! :nutkick:
They must be out of their minds to think that this is acceptable on any level.
We get the same from the Nvidia apologisers, saying AMD uses 30w at idle with their Intel thermonuclear reactors, it's pretty fucking funny :laugh:
- AMD wildly over-promised on performance
- pricing
It has become AMD's habit to set prices too high just long enough to get crucified in initial reviews, then almost immediately drop those prices after the damage is done. It's an unforced error, and it's sad to watch if you're remotely interested in healthy competition. In this case, AMD went a step further, magnifying its error by arguing with reviewers about their benchmark results. And circumstances magnified the error even more: AMD's flailing over Zen 5 took pressure off Intel, which was in the process of immolating its reputation via the ongoing Raptor Lake degradation drama.
But sure, I agree; Zen 5 isn't bad. The product itself doesn't deserve much criticism.
This is the weakest post of the day, lol. Well, look again: it clearly says AVERAGE (in Chinese, but no need to know that anyway) FPS (which I bet you can read). That's for games, which means including a high-end graphics card, and you know it'll be a 4090 (and not AMD or Arc), and you know how much power those draw.
It doesn't say 150-200 W anywhere. I already showed you the comparison with TPU's review. You're just making shit up, but since you mentioned "CPU intensive tasks" in the same sentence TWICE, I'd suggest you take a nap before replying. What we want here is irrelevant. You know it must be the whole system.
These are all rumors and leaks. All they do is hint at what we will see in the coming days when real reviews are released. This debate is not going to conclude until we have real numbers from review sites.
As for your initial point: I tried to explain that it must be system power, as in there's nothing strange about that power draw. TPU doesn't show the resolution in power draw tests for CPUs either. It is, however, included in game efficiency, so I'd guess it's the same resolution in the power draw tests.
Finally, this is LEAKED INFO. There are most likely footnotes about all the settings and specifications, but they're not posted here. Maybe, but like I said, this is not CPU power draw only.
Again, what we would want from a portion of a leaked presentation under NDA at the time is irrelevant. It wasn't for the public eye to begin with, and it's not complete.
My ugly calculation says it runs at 65 W average in games, right between 12600K and 12700K. We'll see in two weeks how close it is. :roll:
I guess we'll get more info in a few hours.
www.techpowerup.com/327227/intel-arrow-lake-leak-confirms-october-10-announcement-date-for-core-ultra-200-cpus
________________________________________________________________________________________________________________
Edit: I guess we didn't have to wait that long.
videocardz.com/newz/intel-core-ultra-200s-arrow-lake-s-desktop-processors-announced-lion-cove-skymont-xe-lpg-npu-and-lga-1851
Hotspot not in the center, of course.
It's like saying that the new Ford has better fuel economy than the F-150 Raptor. But I don't have a Raptor, so why would I care? I have a Fiesta, so how does it compare to that? TPU shows max power consumption. Here, we don't even know if it's that or something else. It's just a random number thrown onto a presentation slide. Let's settle with that. Like all leaked info on any product from any company, it's just as useless.
This isn't any new "Ford"/Intel, it's their newest desktop CPU compared with the previous desktop CPU. We weren't talking about max power consumption to begin with, it was average like I said.
You asked for resolution in power draw tests, and it's not present in TPU reviews either. But again, it's not hard to figure out what it is.
You could have led with that. Cheers
But as you probably know, over time per-core performance has become more and more unpredictable. An i9-14900KS (stock) wouldn't run at 6.2 GHz sustained in all kinds of workloads, and the more load there is on other cores, the lower it will boost. This has become so unpredictable that the rated clock speeds are almost useless at this point. It started to get bad with Coffee Lake, but with Alder/Raptor Lake the variance in single-core performance has gotten pretty extreme. (And I'm talking about desktop K-SKUs; low-TDP SKUs and laptops are even worse.) How noticeable this is to the end user depends on the workload and the user. So if Arrow Lake manages to reduce this variance while not advancing peak performance much further, I would still consider it an improvement. If anything, with current products this might be an overlooked advantage for AMD.
*) By "mixed workloads" I mean typical "prosumer" use: running multiple applications at once, typically not at "high load" most of the time. The vast majority of benchmarks run one application at a time, and thus only measure peak performance. When did AMD over-promise on performance for Zen 5? (I must have missed it.)
The big deal-breaker for "prosumers" with Zen 5 is the chipset/motherboard offerings. With too many lanes tied up by USB4, lanes shared between some M.2 slots and the GPU, and only four lanes to the chipset, combined with "premium" motherboards that don't even maximize the platform's IO features, it becomes almost laughable. While offering very affordable and efficient 12 and even 16 cores, with beautiful AVX-512 support, the platform looks very appealing until you start looking at long-term usability. For those who don't replace their machine every 2-3 years, memory bandwidth and PCIe lanes quickly become the bottleneck. If they can't offer lanes for a GPU + 3-4 SSDs + a 10G NIC + 6-8 SATA devices without significant downgrades in performance, it's really a fail. Intel (mainly W680 motherboards) seems to have an edge here, but even there, flexibility for expansion should be the primary focus when picking a motherboard, and it's not easy.
But I have hopes for Threadripper though, to finally unleash the Zen 5 cores.
I would suspect they used system load because they did not want to admit that the 14900K and co. use 150 W or more in gaming.
It will also obviously vary a lot depending on the testing setup and what was tested, so it's roughly consistent with Techspot's numbers. They might have manipulated the game list somewhat so that the 14900K looks less like the power consumption monster it is.
We will know for certain when reviewers actually test this claim.
But if it's consistent with Techspot test then it would be 7800X3D at 477W vs 285K at ~529W.
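For what it's worth, those two whole-system figures work out to about an 11% gap. A trivial check, using only the numbers quoted in the post above:

```python
# Whole-system gaming power figures as quoted in the comparison above
p_7800x3d_w = 477
p_285k_w = 529

delta_w = p_285k_w - p_7800x3d_w          # absolute difference in watts
delta_pct = 100 * delta_w / p_7800x3d_w   # relative to the 7800X3D system

print(f"{delta_w} W difference, {delta_pct:.1f}% higher system draw")
# -> 52 W difference, 10.9% higher system draw
```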