Intel Core Ultra 9 285K

The review from Phoronix shows great performance across many productive tasks, including compiling, video encoding, as well as server/web developer related stuff like PHP, databases, nginx, Python, etc.
Overall it's a decent step up from Raptor Lake in most productive tasks, and where it usually falls short of Zen 4/5 is either something which scales well with 12 or 16 cores, or something that uses AVX-512. Intel was sorely mistaken to leave the latter out, not only because it would have boosted a lot of workloads, but also because it drags down the overall impression, since most reviews aggregate scores.
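For anyone curious which side of that AVX-512 divide their own machine falls on, here is a minimal sketch (Linux-only, purely illustrative) that just reads the CPU flags:

```python
# Minimal sketch (Linux-only): check whether the CPU advertises any AVX-512
# extension by reading the flags line from /proc/cpuinfo.
def avx512_supported(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return any(flag.startswith("avx512") for flag in flags)
    return False

if __name__ == "__main__":
    print("AVX-512 available:", avx512_supported())
```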
It would be interesting to see a comparison with upcoming Threadrippers and Xeon W, to see if many of these workloads scale better on high-end workstations, but few reviewers do this.

But like with any product, power users need to consider which benchmarks are applicable to their workload; my workload is different from yours, so Zen 5 might be better for some and Arrow Lake for others. But I expect that if Threadripper and Xeon W are thrown into the mix, they would eat more into Zen 5's wins than Arrow Lake's.

As for the 15 pages of whining about gaming performance: we are talking about a 2-4% gap at 1440p/4K with a high-end GPU (less with a lesser GPU), and the results are skewed a bit by outliers, so practically speaking it's a draw. If this matters to you, you had better be playing one of those games where there is a significant difference.
And since this CPU is only named "285K", isn't there a good chance for a higher clocked model down the line? ("295K"?)

Beyond the "AI" nonsense, Arrow Lake is an architectural improvement, and where it falls short is mostly due to lower clock speeds (or it's synthetic stuff). For most use cases this is a good trade-off, but whether that applies to you is subjective. I would apply a liberal amount of salt to the typically harsh and sensational reviews on YouTube.
Technically, I find Arrow Lake mostly interesting for what the changes indicate is coming down the line: deeper and wider execution.
 
Wow, this is seriously the worst CPU launch I've seen since Bulldozer. The only difference is Arrow Lake isn't a juice guzzler.
 
or something that uses AVX-512. Intel was sorely mistaken to leave the latter out, not only because it would have boosted a lot of workloads, but also because it drags down the overall impression, since most reviews aggregate scores.
It would be interesting to see a comparison with upcoming Threadrippers and Xeon W, to see if many of these workloads scale better on high-end workstations, but few reviewers do this.
A couple of years ago everyone was whining that AVX-512 should be dumped, Torvalds included.
 
@W1zzard Thank you for this awesome and wholesome test!
Especially on the (re-)tested power consumption, given Asus' cheating here on the ROG Maximus Z890 Hero: powering a bunch of vCore phases exclusively off the 24-pin ATX plug instead of EPS12V. How was that not deliberate on their part, to make ARL look more efficient than it really is?!
It really was fully intentional, no? It's mechanical, physical power-routing after all – That doesn't just happen 'by accident' …

It's quite strange, yet not really surprising, that a good part of reviewers got shipped said m/b for their reviews … Shame upon him who thinks evil of it!
 
The review from Phoronix shows great performance across many productive tasks, including compiling, video encoding, as well as server/web developer related stuff like PHP, databases, nginx, Python, etc.
Overall it's a decent step up from Raptor Lake in most productive tasks, and where it usually falls short of Zen 4/5 is either something which scales well with 12 or 16 cores, or something that uses AVX-512. Intel was sorely mistaken to leave the latter out, not only because it would have boosted a lot of workloads, but also because it drags down the overall impression, since most reviews aggregate scores.
It would be interesting to see a comparison with upcoming Threadrippers and Xeon W, to see if many of these workloads scale better on high-end workstations, but few reviewers do this.

But like with any product, power users need to consider which benchmarks are applicable to their workload; my workload is different from yours, so Zen 5 might be better for some and Arrow Lake for others. But I expect that if Threadripper and Xeon W are thrown into the mix, they would eat more into Zen 5's wins than Arrow Lake's.

As for the 15 pages of whining about gaming performance: we are talking about a 2-4% gap at 1440p/4K with a high-end GPU (less with a lesser GPU), and the results are skewed a bit by outliers, so practically speaking it's a draw. If this matters to you, you had better be playing one of those games where there is a significant difference.
And since this CPU is only named "285K", isn't there a good chance for a higher clocked model down the line? ("295K"?)

Beyond the "AI" nonsense, Arrow Lake is an architectural improvement, and where it falls short is mostly due to lower clock speeds (or it's synthetic stuff). For most use cases this is a good trade-off, but whether that applies to you is subjective. I would apply a liberal amount of salt to the typically harsh and sensational reviews on YouTube.
Technically, I find Arrow Lake mostly interesting for what the changes indicate is coming down the line: deeper and wider execution.

Phoronix showed the 285K up 10 percent, while the 9950X is up 35 percent. Little context here. Those are not good results for Intel.

Did you even read the reviews? Gaming is not within 2 percent; it is losing by 13 percent. It can't justify a high price.
 
Beyond the "AI" nonsense, Arrow Lake is an architectural improvement, and where it falls short is mostly due to lower clock speeds (or it's synthetic stuff). For most use cases this is a good trade-off, but whether that applies to you is subjective. I would apply a liberal amount of salt to the typically harsh and sensational reviews on YouTube.
Technically, I find Arrow Lake mostly interesting for what the changes indicate is coming down the line: deeper and wider execution.
There is a huge node jump, similar to what happened from Zen to Zen 2, and since Zen 3 is on the same node as Zen 2, it is also similar to the jump from Zen to Zen 3. What Intel managed to do with this epic node jump is better E-cores.
 
@W1zzard Thank you for this awesome and wholesome test!
Especially on the (re-)tested power consumption, given Asus' cheating here on the ROG Maximus Z890 Hero: powering a bunch of vCore phases exclusively off the 24-pin ATX plug instead of EPS12V. How was that not deliberate on their part, to make ARL look more efficient than it really is?!
It really was fully intentional, no? It's mechanical, physical power-routing after all – That doesn't just happen 'by accident' …

It's quite strange, yet not really surprising, that a good part of reviewers got shipped said m/b for their reviews … Shame upon him who thinks evil of it!
GN specifically set their tests up to try to account for this.
 
Then you just don't need an x8 slot whatsoever? From what you said now, all you need are x4 links for NVMes and nothing else.
AMD's X670(E) and X870E can also do tons of downstream links, with x12 4.0 and x8 3.0, and copying data among devices on these links would mean full bandwidth as well, since it's not going through the CPU either.

Do you often move vast amounts of data across 4+ drives at once? I'm curious about that use case.
Have you ever tried to install or remove an M.2 drive on a motherboard, while that board is in a case?
Have you ever cursed those stupid tiny screws and heatsinks?
Have you ever wondered why it's necessary to remove your GPU to access an M.2 slot?

An add-in M.2 PCIe card has none of these issues.

Now, if motherboard M.2 slots were located on the back of the board where they should be instead of on the front side where add-in cards plug in and get in the way, none of this would be a problem. But motherboard manufacturers have consistently demonstrated they are intellectually bankrupt and incapable of implementing ideas that will make it easier for people to build and maintain their PCs.

And before you say "but M.2 drives need heatsinks": no, they don't if they're located in a sane place. But most board manufacturers put them near heat-producing board components like the GPU and chipset, which, surprise surprise, means those drives get cooked. In short, M.2 heatsinks are a solution to a problem that board manufacturers themselves created because they are idiots.
 
Have you ever tried to install or remove an M.2 drive on a motherboard, while that board is in a case?
Have you ever cursed those stupid tiny screws and heatsinks?
Have you ever wondered why it's necessary to remove your GPU to access an M.2 slot?


An add-in M.2 PCIe card has none of these issues.

Now, if motherboard M.2 slots were located on the back of the board where they should be instead of on the front side where add-in cards plug in and get in the way, none of this would be a problem. But motherboard manufacturers have consistently demonstrated they are intellectually bankrupt and incapable of implementing ideas that will make it easier for people to build and maintain their PCs.

And before you say "but M.2 drives need heatsinks": no, they don't if they're located in a sane place. But most board manufacturers put them near heat-producing board components like the GPU and chipset, which, surprise surprise, means those drives get cooked. In short, M.2 heatsinks are a solution to a problem that board manufacturers themselves created because they are idiots.
Yes, yes, yes and absolutely yes.

I think M.2 via PCIe card really should be the way to go, but a kind of middle ground, as you said, would be at the back of the board, although that would need a case exposing the back area so it's convenient.

Pretty much agree with all the flaws you have highlighted; pretty crazy that engineers keep these flaws from generation to generation.
 
Currently. That will change. Always does.
Yeah, as I said, 64 GB modules are long overdue. I don't think we will be seeing more than that for DDR5 UDIMMs though, so 256 GB tops for consumer platforms (already double what's possible on DDR4 anyway, which I currently have).

Have you ever tried to install or remove an M.2 drive on a motherboard, while that board is in a case?
Have you ever cursed those stupid tiny screws and heatsinks?
Yes, multiple times.
Seems like a skill issue :p
But really, I don't go changing drives every month or so, so I don't think that's an issue for the majority of people.

An add-in M.2 PCIe card has none of these issues.
Wouldn't this require an x8 slot then? I can see your initial point now.
You won't be achieving this without downgrading your gpu slot in any consumer platform, sadly.
Another option would be an M.2 AIC with an actual PCIe switch on it, but then you'd be bottlenecking all the drives attached to it to the slot speed, and those cards are also way more expensive.
 
Seems like a skill issue :p
If only this was a joke... I am one of those people who likes DIY but has only thumbs.

But really, I don't go changing drives every month or so, so I don't think that's an issue for the majority of people.
I don't either, but I like having the option. And that is what the PC is supposed to be about: options, via add-in cards and slots.

Wouldn't this require an x8 slot then? I can see your initial point now.
Two drives would require x8, four x16.
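Just to spell out the lane math behind that, here is a back-of-the-envelope sketch, assuming 4 lanes per NVMe drive and plain bifurcation with no PCIe switch (the per-lane throughput figures are approximate):

```python
# Rough lane math for a bifurcated M.2 add-in card (no PCIe switch on board).
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # approximate usable GB/s per lane

def slot_lanes_needed(drives, lanes_per_drive=4):
    # Each NVMe drive needs its own dedicated lanes when bifurcating.
    return drives * lanes_per_drive

def per_drive_bandwidth(slot_lanes, drives, gen=4):
    # With bifurcation each drive gets a fixed share of the slot's lanes.
    lanes_each = slot_lanes // drives
    return lanes_each * GBPS_PER_LANE[gen]

print(slot_lanes_needed(2))                  # 8  -> an x8 slot for two drives
print(slot_lanes_needed(4))                  # 16 -> an x16 slot for four drives
print(round(per_drive_bandwidth(16, 4), 1))  # ~7.9 GB/s each at PCIe 4.0
```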

You won't be achieving this without downgrading your gpu slot in any consumer platform, sadly.
And therein lies the problem.

Another option would be an M.2 AIC with an actual PCIe switch on it, but then you'd be bottlenecking all the drives attached to it to the slot speed, and those cards are also way more expensive.
Indeed.
 
And therein lies the problem.
Yeah, but the point is that it's still going to be a problem with Z890, even if this option is theoretically doable, sadly.
I personally would love the possibility to do x8/x8/x8, but I've just come to accept that won't be possible in the near future for consumer platforms unless I spend quite a lot on HEDT/server platforms.
 
Have you ever tried to install or remove an M.2 drive on a motherboard, while that board is in a case?
I do it all the time.
Have you ever cursed those stupid tiny screws and heatsinks?
Yeah, those can be a pain.
Have you ever wondered why it's necessary to remove your GPU to access an M.2 slot?
That's not a thing on all boards. Depends on the model.
An add-in M.2 PCIe card has none of these issues.
That's a good point, but most people don't need one.
 
I do it all the time.

Yeah, those can be a pain.

That's not a thing on all boards. Depends on the model.

That's a good point, but most people don't need one.

Usually only the top slot, and if you are changing M.2s it's not like you do it every day, is it? So no biggie removing the GPU, unless you have a custom loop.
 
Yeah, but we all know Ryzen gets a 10%+ boost in games...
So at least for AMD he should use it; every AMD user would use it.
Anyway, this would only make Intel's situation more disgraceful...

Yes, you are right, buddy, and that made Intel's defeat even worse... :roll:

Maybe +10% if you use an RTX 4090.

Also, AMD CPUs will get a price increase; have fun paying more.

Didn't you say it's bad if Nvidia gets a monopoly, because prices go up?
But if AMD gets a monopoly in CPUs, it's okay for you and other AMD fans?

It was first reported a long time ago that HT was being removed; I don't think it's ever coming back on Intel now. The advantages are very questionable, with lots of disadvantages, and they have a replacement for it in the workloads where HT shined.
Removing E-cores and using HT is better
 
Biggest issue for Intel (apart from the obviously slower-than-expected gaming performance) is price as a platform! The fact that you need a new motherboard for these CPUs is going to hurt their sales significantly. If you had a 13th-gen processor and were looking for an "upgrade", you could possibly pop in a 285K or 265K, especially coming from something like a 13600K or lower, but the fact that you have to buy a new mobo makes it a tough sell with the prices of the current models on the higher side, considering they are worse at gaming than AMD's offerings.

I think the multithreaded application performance is very good overall. While there is some regression in a small number of apps, overall it is a big improvement over their 14th-gen CPUs, and all of that at lower power consumption. The issue is they are also competing with AMD, which has a bad 9000 series, way too expensive for a very minor gain over the 7000 series, but AMD does have the 7000 series, which is much cheaper and competes very well with Intel's new offerings in price-to-performance.

The 7900x would be a good choice for a lot of people who want that mix of value, apps performance and good gaming performance as well.

So it's up to Intel to basically reduce prices, and I think they are going to have to; they'll be forced to drop prices, as I don't see these new CPUs flying off the shelves. I still think they are going to do very well with system builders, which is literally 85% of the desktop market, but it's not going to be as easy as in previous generations and previous years! People don't automatically want an Intel CPU anymore; they are looking at price and value and increasingly choosing AMD, even with prebuilt computers!

So Intel should lower prices by 20% across the board and make sure there are very cheap sub-$150 motherboards in order to be successful.
 
Phoronix showed the 285K up 10 percent, while the 9950X is up 35 percent. Little context here. Those are not good results for Intel.
Aggregated performance is nearly meaningless, as the review focuses on various power-user/workstation/professional workloads, and as I mentioned, many of those relevant for development and creative work are cases where Arrow Lake outshines the competition. Likewise, there are many other workloads where Zen 4/5 with 12/16 cores excels by a good margin, many of which are more batch-oriented. This goes to show that for any "prosumer", aggregated performance results are largely useless, as the variance between tests isn't a few percent but can easily be 30-50% in either direction in many cases. And there is no point in buying the perfect hardware for a use case you don't have, which is why reviews should rather say that for workloads like A, B and C this product is clearly better, and for D, E, F and G that one is better, rather than give an aggregated score.
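To make that concrete, here is a tiny sketch with made-up numbers (nothing below is measured data) showing how a near-flat aggregate can hide large per-workload swings:

```python
from math import prod

# Purely hypothetical per-workload ratios of CPU A vs CPU B (1.0 = tie),
# just to illustrate how an aggregate score can mask 30-50% differences.
ratios = {"compile": 1.35, "encode": 1.20, "render": 0.70, "database": 0.85}

geomean = prod(ratios.values()) ** (1 / len(ratios))
print(f"aggregate: {geomean:.2f}x")   # ~0.99x, looks like a wash
for workload, r in ratios.items():
    print(f"{workload:>9}: {r:.2f}x")  # individual workloads are nowhere near a tie
```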

Did you even read the reviews? Gaming is not within 2 percent; it is losing by 13 percent. It can't justify a high price.
Did you even read my post? :rolleyes:
"within 2-4% in 1440p/4K with a high-end GPU"
Which is very accurate according to TPU's review.
Keep in mind that no one except a few forum users and YouTubers runs an RTX 4090 at 720p on low in order to create artificial bottlenecks.

Also keep in mind that Arrow Lake is at a disadvantage here, running "underclocked" memory while Zen 4/5 runs memory overclocked beyond its stock 5200/5600 MT/s respectively. It only matters a tiny bit, but then again, the aggregate results have only tiny margins.
Additionally, as said, the results are skewed a bit by outliers, which is a problem with averages, so depending on the game selection the variance will be even less.

So I stand by my statement that it's practically a draw in most cases. For most current games, any of the faster Zen 4, Zen 5, Raptor Lake or Arrow Lake CPUs will perform virtually the same, especially if paired with something like an RTX 4060 or 4070 and running a stock system (not some extreme OC to set records). In many ways this is a good position to be in, since you can pick the CPU that fits your other requirements, whether those are specific application performance, platform IO, or just whatever is the better deal at that point, and still get top class gaming performance. :)

An add-in M.2 PCIe card has none of these issues.

Now, if motherboard M.2 slots were located on the back of the board where they should be instead of on the front side where add-in cards plug in and get in the way, none of this would be a problem. But motherboard manufacturers have consistently demonstrated they are intellectually bankrupt and incapable of implementing ideas that will make it easier for people to build and maintain their PCs.
A lot of sense in your post except for suggesting to put things on the back. This causes more problems than it solves, needs different cases for access and airflow, makes cases wider, etc.
Just abandoning M.2 for desktop is the better solution; provide all CPU and chipset lanes as PCIe slots, which would make motherboards cheaper at time of purchase, and give much more flexibility down the road. Buyers now need to pick motherboards very carefully, accounting for which IO they might need a few years from now, and if they pick the wrong one, it will effectively shorten the longevity of their system.

And before you say "but M.2 drives need heatsinks": no, they don't if they're located in a sane place. But most board manufacturers put them near heat-producing board components like the GPU and chipset, which, surprise surprise, means those drives get cooked. In short, M.2 heatsinks are a solution to a problem that board manufacturers themselves created because they are idiots.
The massive metal blobs on the latest Intel and AMD platforms are absurd.
As you suggested, putting SSDs on PCIe cards makes this easily solvable, as a tiny heatsink with good long fins easily cools far more than giant metal blobs, and in extreme cases a tiny fan will do the rest.
But these massive metal blobs (underneath the graphics card) just absorb heat for a short burst and then dissipate it very slowly. This doesn't just shorten the lifespan of the hardware, it also leads to heavy throttling.
It's about time we get some common-sense motherboards with no BS and an affordable price, instead of dozens of terrible gimmicky "high-end" boards, many of which are very expensive and still don't fully expose the features of the chipset. This is the kind of nonsense we get when designers and marketing people try to do engineering.
 
It would be more like 12000, and this categorically would not scale nearly as well in most other things. No matter how you spin it, the E cores are duds in these CPUs; there is nothing you wouldn't be able to do with 16 P cores and smaller caches versus 8P/16E at similar die sizes, and you wouldn't need to deal with the asymmetric architecture in software, which will forever remain a problem.
You got it the other way around. It's the P cores sucking, plus losing HT, that makes it look bad. And the drastically lowered ring and memory performance is having an impact in games.

P+E contribution was roughly 50/50 in Raptor Lake. It's 40/60 now.
 
Very mixed launch by Intel. It's not as big a fiasco as Bulldozer was for AMD, but it's still a big blunder for Intel, and the timing of it couldn't be any worse. It's a terrible moment for Intel to have this big a misstep. There are some positive and negative takeaways, and general thoughts on where they can go next.

Overall the iGPU, cache structure changes, E-core performance, and power usage are some positives, but they're paired with a mixture of regressions in other areas. Coming from Raptor Lake it presents itself as a sidegrade, similar to Skylake to Kaby Lake, or Comet Lake to Rocket Lake, if not maybe a bit worse. Its timing couldn't be worse for Intel to have this kind of misstep.

AMD has kind of had a lackluster launch with its latest series, but most everyone anticipates that the upcoming X3D parts should pick up on a positive note right where AMD generally left off with X3D. They should resolve the bigger CCD and X3D latency complaints as well. Not a great time for Intel to blunder, but AMD also stumbled a bit by underwhelming with its recent launch.

It feels a good bit like Intel made multi-threading worse overall from the previous generation by removing HT and doing nothing to increase E cores in its absence. The overall thread count is worse going from the 14700K to the 265K, and in more than a handful of scenarios even the 285K ends up lower than a 14700K due to fewer threads. For a new product launch, I wouldn't call it great to have the previous generation's i7 beating the next generation's i9. To quote Steve from Gamers Nexus in a previous Intel review, "Waste of Sand" is pretty close to my feelings on this one.
 
Very mixed launch by Intel.
It wins both single- and multi-threaded in rendering. The E-cores are 50% more productive, and that's what they were created for.
Apparently not for gaming, though, since the memory controller and PHY were separated from the CPU tile, unlike Lunar Lake, and that is so easy to fix. But in the current state it's DOA for esports.

 
You got it the other way around. It's the P cores sucking
They both suck; what you gain with more E cores in MT performance, you lose in the ratio of cores that are capable of higher ST performance.
 