
Intel's Core Ultra 9 285K Performance Claims Leaked, Doesn't Beat i9-14900K at Gaming

Desktop gains are slowing down generation to generation as the industry focuses on data centers and AI.
This is quite annoying: just as interesting architectural improvements that may offer significant performance uplifts are on the horizon, the main focus is diverted towards "AI" gimmicks (yes, I know there are real uses for it). All of these companies (incl. Nvidia) are likely to pay a price for jumping on the bandwagon once someone is successful in creating tiny specialized ASICs for the various "AI" markets.

It's Over. AMD win....
How come?
Intel and AMD are practically dead even in gaming (1440p/4K) with their respective current generations; Intel has faster cores while AMD makes up for it with loads of L3 cache. AMD is far more energy efficient (which may save you on cooling too), but since the benefits of 3D V-Cache are really hit and miss, no one knows for sure whether it will continue to scale with future games. In practice, though, it's pretty much on par as long as you select one of the higher SKUs from either vendor, especially if you're buying a mid-range GPU anyway. It's not like the old Bulldozer days, or even the Zen 1 days, where you missed out on a lot of performance.

And when it comes to applications, as usual it depends on your use case. Most buyers shouldn't base their purchasing decisions on "average/aggregated performance", and especially not from synthetic benchmarks.

Isn't there a latency advantage by removing HT?

That's important for gamers, and sound engineers.
Game engines don't scale indefinitely with faster CPUs. If you look at individual games, you'll see that in some of them the engine itself is already the bottleneck on many current CPUs, so we shouldn't expect faster CPUs to scale significantly further in those titles. Eventually we will probably see some games get patched and new games arrive. This is fairly similar to the Skylake-family years; for a while there was a "plateau" in many games with CPUs boosting to ~4.5 GHz.
 
Come on Intel, I was hoping for better things here so that the 9950X price goes down and I can grab one for cheap to replace the 7950X3D.

Still hoping..
 
Come on Intel, I was hoping for better things here so that the 9950X price goes down and I can grab one for cheap to replace the 7950X3D.

Still hoping..
Prices will go down anyway, simply because people aren't buying Zen 5 in general.
 
so that the 9950X price goes down and I can grab one for cheap to replace the 7950X3D.
What's up with these every gen upgrades? Are y'all getting paid for calculating power? Hard pressed to find a non-business task that 7950X3D is not awesome at.
 
Did it do this at 80w though? Because that would be pretty wild.

Edit: ah 80w lower. That’s still pretty crazy power efficiency over what they’ve had.
Indeed. From TPU review, average gaming consumption of 14900K is 144w (the highest of all tested CPUs). 80w less is a huge efficiency leap, if true.
 
I hope the 285K can be cooled in my current setup. I went with 13600K last gen because I didn't think I could adequately cool the higher tier CPUs under full load (unless I undervolted & underclocked).

I'm curious to see if 8000-9600 MT/s CUDIMMs can help. I also wonder how the NPU will end up being used, whether there is any overclocking headroom, and whether the NPU itself is overclockable. It is very interesting to me that all the E-cores of the K SKUs are clocked at 4.6 GHz; I wonder why they all clock the same this time, when last gen there was a 10% E-core clock increase going from i5 to i9.

Hopefully there is a bit more useful info at the reveal on the 10th.
 
What's up with these every gen upgrades? Are y'all getting paid for calculating power? Hard pressed to find a non-business task that 7950X3D is not awesome at.
Oh, the 7950X3D is awesome for sure; over the 5950X I saw anywhere between 20-40% on most of my workloads. The 9950X seems to be a significant step up for me though; have a look at NumPy. I also have some encrypt/decrypt stuff which Zen 5 absolutely rips. (A rough sketch of the kind of NumPy run I mean is below.)

Seeing how these X3Ds have stagnated in price and even gone up a bit, I don't think I'll end up spending much on this upgrade anyway for a sizeable performance gain.
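For anyone curious what kind of NumPy run I mean, here's a minimal sketch; the matrix size and repeat count are arbitrary choices of mine, not any standard benchmark:

```python
# Times a large matrix multiply, the kind of BLAS/vector-heavy workload
# where a wider SIMD datapath tends to show up. Sizes are arbitrary.
import time
import numpy as np

n = 4096
a = np.random.rand(n, n)
b = np.random.rand(n, n)

np.dot(a, b)  # warm-up so thread pools and caches are primed

runs = 5
start = time.perf_counter()
for _ in range(runs):
    np.dot(a, b)
elapsed = (time.perf_counter() - start) / runs

# 2*n^3 floating-point operations per matrix multiply
gflops = 2 * n**3 / elapsed / 1e9
print(f"avg {elapsed:.3f} s per matmul, ~{gflops:.1f} GFLOP/s")
```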
 
have a look at NumPy. I also have some encrypt/decrypt stuff which Zen 5 absolutely rips.
That is business. Even if you're just hobbying them.
 
That is business. Even if you're just hobbying them.
I guess.

I don't care for gaming performance, as all of these higher-end CPUs will give me the exact same performance in the games I play at 1440p wide. Now if I played some of those simulation games where X3D destroys everything, maybe I'd get a few more frames even at this resolution. But I don't play them these days, and even if I did, the difference would be inconsequential.
 
Weird, Arrow Lake has a three-node advantage over Raptor Lake. It should be doing far better than just using less power.
 
Weird, Arrow Lake has a three-node advantage over Raptor Lake. It should be doing far better than just using less power.
Unless the nodes aren't all that different, which really does seem to be the case.

Density improvements are getting smaller and smaller, chips aren't clocking much faster on new nodes, and efficiency gains have been small as well. Unless backside power delivery and GAA are big improvements for TSMC and Intel, we can expect more of the same going forward.
 
Maybe we've just plateaued in performance? With how much the GPU market seems to have stagnated, this wouldn't be a surprise.
 
GPU market seems to have stagnated
It's a monopoly. There's no reason for the stagnation to go away.
The CPU market is competitive; it's just that both AMD and Intel failed to impress this time. Shit happens.
 
Weird, Arrow Lake has a three-node advantage over Raptor Lake. It should be doing far better than just using less power.
It's closer to a two node advantage, and Intel's processes were tuned for higher clocks. That 500 MHz reduction in clock speed is probably undoing most of the IPC gains. In addition, the memory latency is unlikely to be any better. On the other hand, the larger L2 cache should have helped in gaming. In any case, let's wait for the reviews.
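For a rough sense of that trade-off, here's a back-of-the-envelope sketch. It models single-thread performance simply as IPC times clock; the boost clocks and IPC uplift below are illustrative assumptions, not confirmed specs:

```python
# Rough single-thread estimate: performance ~ IPC * clock.
# The clock and IPC figures below are illustrative, not confirmed specs.
old_clock = 6.0   # GHz, previous-gen peak boost (example)
new_clock = 5.5   # GHz, ~500 MHz lower peak boost (example)
ipc_gain = 0.09   # hypothetical ~9% IPC uplift

relative_perf = (new_clock / old_clock) * (1 + ipc_gain)
print(f"relative single-thread performance: {relative_perf:.3f}")
# ~0.999 -- an ~8% clock deficit roughly cancels a ~9% IPC gain,
# which is why gaming numbers can end up a wash.
```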
 
So if these have a substantial power drop from "14th" gen...and every AIB is coming out with an XOC Z890 board with huge VRMs...I wonder why that is.

I think a fair amount of people would be ok with this launch if they actually bring back meaningful overclocking. That way people that just go for "out-of-box" experience will get a more efficient and cooler CPU, but people who wanted to OC anyway can still get a real improvement with their oversized cooling. 13th gen and 13th-gen-Version2 ("14th" gen) were already so close to maxed out (or even over-volted from the factory apparently) that there really wasn't much to gain from OCing...if anything...unless you had a huge cooling setup and even then, it wasn't wildly different. I'm interested to see what they find in the reviews.
 
Looks like, with GPUs limiting performance in games nowadays, CPUs collectively agreed at their annual board meeting: "we're done pretending to care". Arrow Lake/Zen 5 - boring. X3D is likely to stand out, but it caters to a niche enthusiast market or those willing to splurge for the very best (a minority), that too with its appeal largely limited to a few CPU-bound titles.

I have no complaints. Gamers are already in a good position with plenty of great options available from both previous and current generations.

It's good to see Intel is efficiency-focused, with lower power consumption for similar or marginally better performance. Let's hope they get this one right.
 
I really don't dig the naming; drop the Ultra thing and just call it the Intel u9 285K. I didn't like what AMD did with their mobile naming either: Ryzen AI 9 HX 370. I mean, WTF is that?
 
It's good to see Intel is efficiency-focused, with lower power consumption for similar or marginally better performance. Let's hope they get this one right.

If they have to do it by removing CPU features like HT (fewer threads), then no thanks.
 
"All is not doom and gloom for the Core Ultra 9 285K, the significant IPC gains Intel made for the "Skymont" E-cores means that the 285K gets significantly ahead of the 7950X3D in multithreaded productivity workloads, as shown with Geekbench 4.3, Cinebench 2024, and POV-Ray."

Missed that part huh? Oh that's right, gaming is the only metric that matters in a CPU's performance.
Did you miss the part where the leaked benchmarks show the Zen 5 X3D parts stomping on their Zen 4 X3D equivalents? It looks like the 9950X3D will be stronger than the 285K in multi-threading for the most part, and then has gaming up its sleeve.

Having said that, I'm keen on the 265K this gen as I'm not impressed with the X870 motherboards. But I have two PCs and may get an X3D for one and a 265K for the other.

Indeed. From TPU review, average gaming consumption of 14900K is 144w (the highest of all tested CPUs). 80w less is a huge efficiency leap, if true.
So you think the 285K is only ~64 W average? Intel would be screaming that from the rooftops if Arrow Lake really cut power by ~55% at similar performance. It'd be nice if it's even 25%+ for similar performance. But at least AMD delivered a solid 6-9% at 40% lower TDP with Zen 5 (excluding the 9950X).
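For reference, the arithmetic behind that skepticism, taking the TPU 144 W figure quoted above and assuming the leaked 80 W reduction applies directly to it:

```python
# Average gaming power arithmetic; figures come from the thread
# (TPU's 144 W average for the 14900K, the leak's claimed 80 W drop).
old_power = 144   # W, 14900K average gaming power per the TPU review
delta = 80        # W, claimed reduction from the leak
new_power = old_power - delta            # 64 W
reduction = delta / old_power            # ~0.56 -> ~56% less power
perf_per_watt = old_power / new_power    # ~2.25x at equal performance
print(f"{new_power} W, {reduction:.0%} less power, {perf_per_watt:.2f}x perf/W")
```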

Arrow Lake/Zen 5 - boring. X3D is likely to stand out, but it caters to a niche enthusiast market or those willing to splurge for the very best (a minority), that too with its appeal largely limited to a few CPU-bound titles.
Leaks are showing Zen 5 X3D to have impressive gains outside of gaming, making it no longer a one-trick pony: more cache, higher clocks, full OCing support. It might be the X3D version that finally interests me, as I'm only a casual gamer and productivity is more important. Being able to get bonus fps and still have strong multi-core performance for apps is a win-win, especially if they keep the TDP the same as the Zen 4 X3D versions.
 
Leaks are showing Zen 5 X3D to have impressive gains outside of gaming, making it no longer a one-trick pony: more cache, higher clocks, full OCing support. It might be the X3D version that finally interests me, as I'm only a casual gamer and productivity is more important. Being able to get bonus fps and still have strong multi-core performance for apps is a win-win, especially if they keep the TDP the same as the Zen 4 X3D versions.

If the above leaks turn out to be true and it manages that without sacrificing efficiency, it'll defo be one to keep an eye on!

Just wondering... any feedback on whether the same applies to the 9900X3D/9950X3D? Or is there any speculation suggesting that AMD has overcome the latency issues with more than one CCD? Perhaps 3D V-Cache available on both CCDs?

I really don't dig the naming; drop the Ultra thing and just call it the Intel u9 285K. I didn't like what AMD did with their mobile naming either: Ryzen AI 9 HX 370. I mean, WTF is that?

Yep, the Intel naming scheme is annoying. As always, it'll take time getting used to. To me, "ULTRA" sounds Transformers-ishly childish. Then again, I felt the same way about "SUPER" for graphics cards, but now it seems totally normal. Let's just hope none of these companies think it's cool to add "POWER RANGER" to the mix. That will be the end of tech (EMP incoming...)
 
Man 2024 the year every company decided they dgaf about gamers....
 
Game engines don't scale indefinitely with faster CPUs. If you look at individual games, you'll see that in some of them the engine itself is already the bottleneck on many current CPUs, so we shouldn't expect faster CPUs to scale significantly further in those titles. Eventually we will probably see some games get patched and new games arrive. This is fairly similar to the Skylake-family years; for a while there was a "plateau" in many games with CPUs boosting to ~4.5 GHz.

TBH, yes, I agree. I think we are reaching a clock-rate ceiling like we did with NetBurst. We can try to increase IPC even more, but I don't think that will be the primary shift.

With coding languages and game engines being more advanced than they were, I actually think we will start to see a shift in gaming where the parallelism improves, whether in the engine, the software stack, or the underlying APIs. Honestly, probably all of them.

If that begins happening more and more, we will also likely continue to see the shift toward more cores (a rough scaling sketch is below).

That's from a gaming perspective, but software parallelism as a whole is evolving at a pretty crazy rate.
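As a rough illustration of why the parallel fraction of each frame matters so much, here's a quick Amdahl's-law sketch; the serial fractions are made-up examples, not measurements from any real engine:

```python
# Amdahl's law: speedup = 1 / (serial + (1 - serial) / cores).
# Shows why a game loop with even a modest serial portion stops
# scaling long before core counts do. Fractions are illustrative.
def speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for serial in (0.40, 0.20, 0.05):
    scaling = [round(speedup(serial, c), 2) for c in (4, 8, 16, 32)]
    print(f"{int(serial * 100)}% serial: {scaling}")

# A 40% serial loop tops out around 2.5x no matter the core count;
# only heavily parallelised engines (~5% serial) keep gaining past 8 cores.
```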
 
So you think the 285K is only ~64 W average? Intel would be screaming that from the rooftops if Arrow Lake really cut power by ~55% at similar performance. It'd be nice if it's even 25%+ for similar performance. But at least AMD delivered a solid 6-9% at 40% lower TDP with Zen 5 (excluding the 9950X).
That's why I wrote if true...
 
So I guess Arrow Lake is going to be like Zen 5: only slight improvements here and there, with the main focus seemingly on improving efficiency. I'm hoping now that the 9000X3D chips bring a good improvement, because if they also only improve slightly over the 7000X3D chips, it just makes it clear that this gen from both sides is unexceptional and mediocre.

Hopefully this will reduce expectations and hype about Arrow Lake's performance increases, though. Zen 5 had such a bad reception at launch due to overhype, which led to massive disappointment when people saw it did not improve as much as they were expecting.
 