
Intel Core Ultra 9 285K

HUB didn't help themselves there; they already face accusations of favouring AMD. I don't think they did it for that reason though, I think it was the embargo date.
I don't think they are that friendly towards AMD.... Well definitely not the Radeon group part of it...
Likewise their review of the Ryzen 9000 series was lukewarm at best.
 
I want competition; I don't want an Nvidia-like situation in the CPU sector too. I don't want to see the years of Intel's 4 cores and +2% generation over generation again.

That said, the performance of the Intel processor is disappointing. 'Eh, but in the TechPowerUp application benchmark they do very well.' Yes, but the variability is very high indeed. For example, in the Stockfish test they really suck, and that's something I really care about.

I can't see the precise consumption (maybe I'm blind), but it should be around 240W.
That's definitely 40W (or so) less than the 14900K, but it still consumes more than the 9950X despite being 23% slower.



I am OK with low generational improvements, as they delay obsolescence; fast progression is good for people who can afford to upgrade often and bad for those who can't. However, I do think having at least two healthy competitors in a market is important, as otherwise the dominant one takes advantage. It wasn't good that AMD was so bad pre-Zen, it's not good in the GPU market right now, and it won't be great if Intel falls so far behind that AMD can act like a monopoly.

I do think Intel have the edge on chipset, and I also think the product is still OK, just not amazing, but they are behind on power draw even with the improvement, and they need to find something to catch up with the X3D chips in gaming. On power draw, it is worth mentioning that Intel systems are better at idle, so depending on how you use your system, an Intel system may be better in that area, or an AMD system may be.

I see this as a transitional period for Intel as well, with the tile adoption. It should improve moving forward, and it is good they have recognised the power draw is a problem in specific workloads.

I don't think they are that friendly towards AMD.... Well definitely not the Radeon group part of it...
Likewise their review of the Ryzen 9000 series was lukewarm at best.
I don't either, but some people do.
 
dual Thunderbolt 4 40Gbps links without having to sacrifice any PCIe lanes is fantastic, although lack of built-in WiFi 7 is disappointing. Sure, Zen 4/5 has 4 more PCIe 5.0 lanes but I think we can all agree that nobody gives a fuck about PCIe 5.0
Hardly anyone cares about things like Thunderbolt as well.
 
TPU did the same and tested on 23H2 because of performance issues with AL on 24H2... if anything they have done Intel a favour. Good luck to the guys who have W11 24H2 and buy AL, though; it's likely just a scheduling issue that will be worked out over the coming weeks/months.

Apparently, according to Level1Techs, there is already an improvement on the Insider build, but he thinks it isn't a big one.

But yeah, the 24H2 mess. Intel should have made sure that's not an issue before releasing the product, and this is the same thing I said about AMD's scheduler issues. They're falling foul again of releasing to a schedule instead of when things are ready.
 
It's essentially the same with Zen 5 as it was with Zen 3. Zen 3 you could get 1800 MHz on pretty much any sample, and 2000 MHz IF on good ones.

Zen 4/5 is more, right? Since you're running 6000 MT/s memory "in sync", but the trickery is that it's actually a 1.5:1 "sync". The IF runs at 2000 MHz and the memory runs at 3000 MHz, and this is considered "in sync", compared to Zen 3 where both the IMC and memory would run at 1800 MHz or 2000 MHz.
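To put rough numbers on that divider, here's a quick sketch, assuming DDR5-6000 and the usual 2000 MHz FCLK (illustrative figures, not measured):

```c
#include <stdio.h>

/* Quick sketch of how DDR5-6000 "sync" works out on Zen 4/5.
   Assumed figures: DDR5-6000 and a 2000 MHz Infinity Fabric clock. */
int main(void)
{
    double ddr_mt = 6000.0;       /* DDR5-6000: mega-transfers per second */
    double mclk   = ddr_mt / 2.0; /* memory clock = 3000 MHz (2 transfers per clock) */
    double uclk   = mclk;         /* memory controller runs 1:1 with MCLK in "sync" mode */
    double fclk   = 2000.0;       /* typical Infinity Fabric clock */

    printf("MCLK %.0f MHz, UCLK %.0f MHz, FCLK %.0f MHz\n", mclk, uclk, fclk);
    printf("MCLK : FCLK = %.1f : 1\n", mclk / fclk); /* 1.5 : 1, the ratio mentioned above */
    return 0;
}
```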
This is all just water-carrying for Intel. It doesn't matter if there's a clock divider in place, all that matters is the performance.
 
Oh ok, did they use 24H2 for the AMD data?

Personally I wouldn't change the OS for a review just because something didn't run as fast as possible, as I don't think a reviewer's job is to make the product look as good as possible; that's marketing's job.

Apparently, according to Level1Techs, there is already an improvement on the Insider build, but he thinks it isn't a big one.

But yeah, the 24H2 mess. Intel should have made sure that's not an issue before releasing the product, and this is the same thing I said about AMD's scheduler issues. They're falling foul again of releasing to a schedule instead of when things are ready.
No, both were tested on 23H2, as AL has issues with 24H2, with FPS as much as 50% lower than on 23H2. If AMD had been tested on 24H2, AL would show an even bigger loss, as apparently there's up to a 10% uplift for the AMD 9000 series in 24H2.
 
No, both were tested on 23H2, as AL has issues with 24H2, with FPS as much as 50% lower than on 23H2. If AMD had been tested on 24H2, AL would show an even bigger loss, as apparently there's up to a 10% uplift for the AMD 9000 series in 24H2.
I edited my reply now as it seems the 24H2 issues were quite big. Understandable why TPU did what it did. At least both were tested on 23H2.
 
And here we thought Zen 5 was a disappointment. :laugh:

I was mostly content with Zen 5's lack of uplift because it was largely around the same caliber as the previous-generation Zen 4 chips while drawing well under the same power, and it showed a little muscle at 105W, but this is just...

Losing performance versus last gen for the sake of efficiency should be a choice left to the end user. Had they completely swept AMD off its feet here and held a steady ~90W at sustained boost while still rivalling 14th gen, that would have been something, but it doesn't do that.
 
Sheesh, that was a rough read. Not insanely bad for a straight-up new-everything release, but... they already did this kind of stuff with Meteor and Lunar Lake, with Lunar taking the cake for its 40 TOPS NPU and integrated Battlemage. This is a head-scratcher. They should have waited. Still, it's cool that everyone in x86 land is in the tile era for real now.
 
And here we thought Zen 5 was a disappointment. :laugh:

I was mostly content with Zen 5's lack of uplift because it was largely around the same caliber as the previous-generation Zen 4 chips while drawing well under the same power, and it showed a little muscle at 105W, but this is just...

Losing performance versus last gen for the sake of efficiency should be a choice left to the end user. Had they completely swept AMD off its feet here and held a steady ~90W at sustained boost while still rivalling 14th gen, that would have been something, but it doesn't do that.

Sometimes we see industry do this. I remember in the past, when car engine manufacturers were transitioning, they had regressions from generation to generation until they started figuring things out. It's usually not a good idea to upgrade every gen anyway.

The way I look at this release is that Intel are transitioning to tiles and have recognised they need to work on power, so that has likely affected the performance. When this was designed, performance compared to last gen wasn't the one single metric.

I would say the replacement of HT with better E-cores for productivity is progress; the rest, it seems, will need more generations to get to somewhere people consider progress over previous generations.

This shouldn't have been released with 24H2 in its current state, though. Executives never seem to learn: just wait until things are ready.
 
Purchase a 5700X3D instead. @W1zzard, I have never mentioned this in a review for any hardware posted on TPU, but this is simply a rip-off. It should be mentioned, although you mostly allude to it.

You can literally save almost $400 for a comparable Zen 4 processor, with nearly the same 4K performance and nearly the same power efficiency.
 
Is Raja working in CPU design now?
[Image: AMD Raja.jpg]
 
The highlight here was really the CUDIMM preview. I can't wait for the Trident Z5 CKs to be available worldwide, hopefully sooner rather than later. The 9600 MT/s model got listed at $389 on Newegg, so it's going to be a bit expensive, but totally worth it. I just hope my CPU's IMC can take it, at least into the 8000s. 7600 was easy with my existing 6800 kit.
But why would you use it on a 14th gen CPU? The clock redriver will be useless on a platform that doesn't support it, and so far only ARL does so on desktop. Zen 5 doesn't work with it either (I mean, you can use the DIMMs, just not take advantage of the clock redriver part).
CUDIMMs should be compatible with Raptor Lake as well
Any source on that? AFAIK they are not, as I said above.
In this steaming mess that Windows has become, one is now forced to see the results on Linux, and results on Linux have spoken.

Kudos to Intel for being a generation behind in performance, and probably two generations behind in consumption. Kudos indeed, it was not easy.


For some tasks in there the 285K was pretty okay with acceptable power consumption, like in compile workloads where it matched the 9950X.
However, it got wrecked in AVX-512 tests.
Here's the link for all the tests, with the mean per test suite right at the bottom:

In some it wins, in others it matches, in others it gets heavily beaten.

Given that it's priced higher, it's really not a better option compared to Ryzen 9000.

Perhaps some form of high-speed on-package memory could supplement trips to the system RAM, like what Apple is doing with M-class SoCs? 8-16GB might help offset system costs, even if that means CPU prices go up some. That really seems like the future to me, where system RAM takes a step back for intensive workloads. Sure, eventually an app needs to hit system RAM, but perhaps the next major gain is to keep more tasks on-package. X3Ds do this already, and if they could clock as high as non-3DV chips, there would be no losses.
On-package RAM is either HBM or LPDDR, and both have really high latency, so it'll make things even worse.
Just look at Lunar Lake, memory latency in there is awful and memory is on package.
Memory latency on M-chips is also bad because of LPDDR as well.

ARL allows one channel addressing sub-channel 1 on both sticks, and the other channel addressing sub-channel 2 on both sticks. I wonder if this means changing a value in each sub-channel could be done in one cycle, rather than taking two cycles with older MC.
It's 4 different, independent 32-bit (without counting the control pins) controllers, each going to a sub-channel in the DIMM.
You are assuming that the previous 64-bit controller was not able to do independent data transfers to each sub-channel, do you have any source on that? AFAIK, it was capable of doing so.

The main reason it's now 4x32-bit controllers instead of 2x64-bit is because the SoC tile of ARL is pretty much a copy-paste from MTL.
 
Plus the ARL memory controller allows for more granularity, with sub-channels on the memory. Whereas RPL had one MC channel going to 2x40 bit sub-channels on the memory stick, ARL allows one channel addressing sub-channel 1 on both sticks, and the other channel addressing sub-channel 2 on both sticks. I wonder if this means changing a value in each sub-channel could be done in one cycle, rather than taking two cycles with older MC.
Your explanation is questionable, and so is W1zzard's, and so is that slide from MSI. Intel could help clear the smog (both smoke and fog), but what can you expect when they don't even mention subchannels here.

In my understanding, each of the four subchannels has to operate independently and have its own MC, which has access to 1/4 of the total RAM. Each one does its thing independently (queueing, prioritisation, sending commands to ranks, taking care of timings). You then gain some granularity, which can possibly improve the read/write speed of small transfers, because while one subchannel is busy doing something, the other can do something else (even if they belong to the same channel). But large sequential transfers remain limited by the total bandwidth, as ever.
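A rough back-of-the-envelope check of that last point (a sketch assuming DDR5-6000; the figures are illustrative):

```c
#include <stdio.h>

/* Peak bandwidth depends only on total bus width and data rate, so
   2 x 64-bit channels and 4 x 32-bit sub-channels share the same ceiling.
   DDR5-6000 is assumed here purely for illustration. */
int main(void)
{
    double transfers_per_s = 6000.0e6; /* DDR5-6000 */
    int    total_bus_bits  = 128;      /* 2 x 64-bit, or equivalently 4 x 32-bit */

    double gbytes_per_s = transfers_per_s * total_bus_bits / 8.0 / 1e9;
    printf("Peak sequential bandwidth: %.0f GB/s\n", gbytes_per_s); /* ~96 GB/s */
    /* Sub-channels change how many independent transactions can be in
       flight, not this sequential ceiling. */
    return 0;
}
```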

I've dug up something related from the earlier days of dual-channel memory. "Ganged memory". Apparently the dual-channel logic didn't always work so well, so you had an option in some chipsets to join both channels into a single 128-bit channel. That was the exact opposite of splitting into 32-bit subchannels.
 
HUB didn't help themselves there; they already face accusations of favouring AMD. I don't think they did it for that reason though, I think it was the embargo date. I think they either should have delayed the review until they got it working on 24H2, or provided comparison results on the same OS. But of course they had to meet that precious embargo date, so they scuffed it all together.

HUB showed exactly why they did it that way at the end of the review.

If they use 24H2 across the board, they are gimping Arrow Lake performance. At the same time, if you use 23H2 across the board, you are gimping Zen 3, Zen 4 and Zen 5 performance.

HUB did the right thing by using the best version for each CPU.
 
It has no analogy to previous cores. Willamette was usually faster, if not by much, but power went out of control.
Conroe was much faster and got power under control again.
Arrow Lake is sometimes faster, while putting power under control.
Yes there is: the laptop CPUs between the P4D and Core 2. The Pentium M, whether Banias or Dothan, was better when it came to power but slower than the P4E/P4D. Really, Meteor Lake and Arrow Lake remind me of the Pentium M/Core time frame. Intel was getting stomped by AMD in most things but productivity (maybe?), they had process issues (90nm wasn't great) around high clock speeds and voltage and couldn't keep increasing clock speeds. Until Core 2 landed, Intel was just shoveling laptop chips, limping along on Pentium D chips, and trying to get to 65nm and below and a new architecture.

This is the other thing people seem to have forgotten about Intel's processors: they used to glue cores together over the FSB during the Pentium D and Core 2 Quad time frame. Moving the memory controller on-die and going monolithic got them out in front for a long time. Moving the memory controller to another chip, while a lot closer than the old northbridge, is a real regression, but something "had" to be done to make the chips smaller, I guess.
 
Sometimes we see industry do this. I remember in the past, when car engine manufacturers were transitioning, they had regressions from generation to generation until they started figuring things out. It's usually not a good idea to upgrade every gen anyway.

The way I look at this release is that Intel are transitioning to tiles and have recognised they need to work on power, so that has likely affected the performance. When this was designed, performance compared to last gen wasn't the one single metric.
Yeah, that's fair enough. This is all-new for desktop, so it would be kind of an upset if it were all that impressive. It's doubtful we'll see a second Zen moment anytime soon (and even then, the upset was primarily that AMD was actually a consideration again). If we see the same cadence as LGA1700, this platform has time to grind its axe; otherwise, it's another step in the stairway Intel's been falling down these days.
 
This is the other thing people seem to have forgotten about Intel's processors: they used to glue cores together over the FSB during the Pentium D and Core 2 Quad time frame.
Yeah, the P4D worked like that, using the FSB for core-to-core communication, but the C2Ds had a proper link so they didn't use the FSB to communicate (being on the same die). The C2Qs then had to struggle with the same FSB link BS...
Hopes of an enthusiast low-cost dual-socket LGA775 never materialised though, like back in the BX/i815 S370 days... Probably a good thing, as the FSB would have been limited and probably not clocked as high.

Moving the memory controller on-die and going monolithic got them out in front for a long time. Moving the memory controller to another chip, while a lot closer than the old northbridge, is a real regression, but something "had" to be done to make the chips smaller, I guess.
But this is the same as AMD have done with the IO die and Infinity Fabric linking the CCDs - Intel used to have a knack for doing the same thing as AMD, such as the on-die IMC, but better (usually because of deeper pockets) - I'm not sure why Intel haven't developed an on-par / better implementation, seeing as they still have those deeper pockets.
 
But this is the same as AMD have done with the IO die and Infinity Fabric linking the CCDs - Intel used to have a knack for doing the same thing as AMD, such as the on-die IMC, but better (usually because of deeper pockets) - I'm not sure why Intel haven't developed an on-par / better implementation, seeing as they still have those deeper pockets.
I agree it has taken a long time to get Foveros, and that does seem to be because it's technically superior.

The same goes for the current ring bus implementation and the L3 clock speed; both seem slow and are holding the CPU back.

One thing not making sense to me is how little difference the new 3nm process seems to be making. There are savings, but based on these numbers I'm not sure Intel's old node is that far behind.
 
I agree it has taken a long time to get Foveros, and that does seem to be because it's technically superior.

The same goes for the current ring bus implementation and the L3 clock speed; both seem slow and are holding the CPU back.

One thing not making sense to me is how little difference the new 3nm process seems to be making. There are savings, but based on these numbers I'm not sure Intel's old node is that far behind.
Oh, N3B is more power efficient than Intel 7, but I suspect Intel clocked its CPUs rather high to win the multithreaded benchmarks. They should show greater efficiency at lower clocks. Intel 7 has one advantage though: higher peak clocks. I doubt we'll see 6.2 GHz from the Intel P-cores again anytime soon.
 
But why would you use it on a 14th gen CPU? The clock redriver will be useless on a platform that doesn't support it, and so far only ARL does so on desktop. Zen 5 doesn't work with it either (I mean, you can use the DIMMs, just not take advantage of the clock redriver part).

Any source on that? AFAIK they are not, as I said above.
CUDIMMs were designed to be backwards compatible with UDIMMs. Source: AnandTech. We'll see how it turns out in the real world.
On-package RAM is either HBM or LPDDR, and both have really high latency, so it'll make things even worse.
Just look at Lunar Lake, memory latency in there is awful and memory is on package.
Memory latency on M-chips is also bad because of LPDDR as well.
Yeah. Also, a bit of on-package memory plus more memory on removable modules seems like a great idea, but getting optimum performance from two pools of RAM, each having different properties, is next to impossible. The OS alone can't take care of that; applications also need to be aware of it.
It's 4 different, independent 32-bit (without counting the control pins) controllers, each going to a sub-channel in the DIMM.
You are assuming that the previous 64-bit controller was not able to do independent data transfers to each sub-channel, do you have any source on that? AFAIK, it was capable of doing so.
Neither Intel nor AMD mentioned the ability to use subchannels up until Arrow Lake. Did I miss something? That's why I'm also assuming that the previous controller couldn't do that. Apart from that, the Alder Lake IMC could run both DDR4 and DDR5 (well, also LPDDR4x and LPDDR5). The common denominator was 64-bit channels of DDR4, and chances are the IMC was designed for that.
Some genius would probably be able to develop a micro-benchmark that makes good use of subchannels. Then we'd know for sure.
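Something along these lines, maybe: a sketch (not a validated benchmark) that spawns several threads doing dependent random loads over a buffer far bigger than L3, so the IMC sees many small, independent request streams. If sub-channels help small transfers, thread scaling should look better on an IMC that can split them. Build with something like `gcc -O2 -pthread` and time it externally.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <pthread.h>

#define BUF_ELEMS (64u * 1024u * 1024u)  /* 64M x 8 bytes = 512 MB, well past any L3 */
#define STEPS     (16u * 1024u * 1024u)

static uint64_t *buf;

/* Each thread walks a chain of dependent random loads (latency-bound). */
static void *chase(void *arg)
{
    uint64_t idx = (uintptr_t)arg % BUF_ELEMS;
    for (uint32_t i = 0; i < STEPS; i++)
        idx = buf[idx];
    return (void *)(uintptr_t)idx;       /* keep the chain from being optimised away */
}

int main(int argc, char **argv)
{
    int threads = (argc > 1) ? atoi(argv[1]) : 4;
    if (threads < 1 || threads > 64) threads = 4;

    buf = malloc(BUF_ELEMS * sizeof *buf);
    if (!buf) return 1;

    srand(1);
    for (uint64_t i = 0; i < BUF_ELEMS; i++)          /* random jump targets */
        buf[i] = ((uint64_t)rand() * rand()) % BUF_ELEMS;

    pthread_t t[64];
    for (int i = 0; i < threads; i++)
        pthread_create(&t[i], NULL, chase, (void *)(uintptr_t)(rand() % BUF_ELEMS));
    for (int i = 0; i < threads; i++)
        pthread_join(t[i], NULL);

    /* e.g. `time ./a.out 1` vs `time ./a.out 8`, then compare the scaling
       across IMC generations. */
    free(buf);
    return 0;
}
```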
The main reason it's now 4x32-bit controllers instead of 2x64-bit is because the SoC tile of ARL is pretty much a copy-paste from MTL.
Also the lack of DDR4 legacy.
AMD and Intel server CPUs with DDR5 memory controllers are a different story, I think they use subchannels because they run many, many processes at once.
 
I would say the replacement of HT with better E-cores for productivity is progress; the rest, it seems, will need more generations to get to somewhere people consider progress over previous generations.
Reliable leakers say the Lion Cove team had to abandon HT because they couldn't make it in time. The P-core team is executing badly.

There was a Tesla slide about how an 8-wide decoder is impossible on x86, yet Lion Cove did it. However, they got only a 10% per-clock gain from it. It couldn't have been easy. Why did they do it? Hubris? Falling short of expectations? A bit of both? David Huang's tests say that the branch predictor regressed compared to its predecessor, Golden Cove. The branch predictor is a very important part and could even be indicative of the skill of the team.
 
Oh jeez it legitimately is minus 5% performance compared to Intel's previous gen.

Arrow Lack. Dozer Lake.

Likely the worst CPU launch from Intel in several years considering the context (best manufacturing process and an all-new design), but it quietly gets away with a rather by-the-numbers review from TPU. Then again, TPU would find it hard to criticise a turd in a box if it had a company logo on it.
 
This is the worst CPU launch I think I have ever seen... far worse than Bulldozer and worse than Prescott. The 285K is losing to the 7700 in some games, and not even by a small margin, despite Intel having a huge process node advantage, adopting AMD's MCM design (the same one these clowns called "glue" back in the day), costing 3x the price and having 3x the cores. Efficiency is nonexistent, even compared to 4-year-old AM4 chips that are handing it its ass (5950X, 5800X3D). This launch was so bad that AM5 7000 non-X3D series chips just got their prices bumped up today. AM4 will probably see the same effect.

If this isn't a wake-up call for Intel to finally give Pat Failswindler the boot by Q1 of next year, they are finished as a company. Forget dedicated GPUs at this point and go back to the drawing board on the bread and butter of your business.
 