
Intel Core Ultra 9 285K

Joined
May 11, 2018
Messages
1,292 (0.53/day)
AMD did a favor to Intel with Zen 5.
Intel did an even bigger favor to AMD with Arrow Lake.
The thing is that AMD is coming with the 9000 X3D chips and Intel has no response to them. In the end AMD will gain even more market share in desktops, while Intel will probably win back some market share in laptops, where efficiency is important.

As for us consumers, I guess AMD knew about Arrow Lake and priced the 9000 series accordingly. And seeing that Intel offers nothing new in gaming, X3D chips are ALL going up in price. Even on AM4, AMD discontinued the 5800X3D, and I'm pretty sure the 5700X3D will get pricier over time, slowly approaching the last price of the 5800X3D.

"AMD Ryzen 7 9800X3D CPU Benchmarks Leak Out: Up To 22% Faster In Geekbench Versus 7800X3D"

That's going to be a bloodbath in gaming, if these leaks translate to gaming performance at all.

And as I said, for home users, multi-core application performance quickly becomes just "fast enough" and stops being a deciding factor when buying. That's why AMD sold tons of 5800X3D and 7800X3D chips, even though they were quite noticeably slower in productivity than the similarly priced 5900X and 7900X.

I have friends who do tons of photo editing and only game occasionally, and they decided to buy an X3D processor - because the difference for them is just a slightly longer "rendering" time when exporting photos, all other metrics are similar, and the extra cache in a "gaming" CPU might make it more responsive in tasks that are hard to benchmark.
 
Joined
May 11, 2018
Messages
1,292 (0.53/day)
It doesn't. It's about ~10% better than the 7800X3D.

Most probably. That is still twice the amount of "Zen 5%", and is just added on top of already leading gaming performance.
 
Joined
Sep 27, 2018
Messages
89 (0.04/day)
System Name A COLD ONE
Processor i7 6700k @ 4.5ghz soon to be R7 3800X
Motherboard Asrock Z170 extreme 6 soon to be MSI X570 Pro Carbon Wifi
Cooling Full custom WC loop/ EK blocks & pumps / 300mm res / Hard lined / linked to external 560 x 80 rad.
Memory 16gb of 2400mhz ddr4 soon to be 32gb of 3600mhz ddr4
Video Card(s) MSI GTX 1080 EKWB Seahawk. soon to be RTX2070 super/RTX2080/Radeon XT series..........PRICE
Storage 1 x Samsung 500gb 970 Evo NVME/ 2 x 500gb Samsung SSD
Display(s) Dell Ultrasharp Curved 3440x1440
Case Heavily Modified Silverstone Fortress FT02
Audio Device(s) Asus Sound card
Power Supply Corsair HX1000i
Mouse Corsair M95
Keyboard Corsair K95
Software Windows 10 64bit home
To be honest I like the track both AMD and Intel are taking.

AMD managed to eke out slightly better performance while increasing efficiency. If they keep doing that, can you imagine what we'll be using a decade from now as far as performance and efficiency go?

Intel, yes, they took a step back to enable them to keep marching forward. This was a necessity, as the track they were on was literally going to melt down. Just call it a strategic withdrawal.

On a personal note, I mainly game, so my 7800X3D does more than I need. And yes, I'd be more than happy if the 9800X3D gave a 5% lift but with a 10-15 W power drop - just my opinion, as I live in the UK and power is expensive.
 
Joined
Jun 19, 2023
Messages
114 (0.20/day)
System Name EnvyPC
Processor Ryzen 7 7800X3D
Motherboard Asus ROG Strix B650E-I Gaming WiFi
Cooling Thermalright Peerless Assassin 120 White / be quiet! Silent Wings Pro 4 120mm (x2), 140mm (x3)
Memory 64GB Klevv Cras V RGB White DDR5-6000 (Hynix A) 30-36-36-76 1.35v
Video Card(s) MSI GeForce RTX 4080 16GB Gaming X Trio White Edition
Storage SK Hynix Platinum P41 2TB NVME
Display(s) HP Omen 4K 144Hz
Case Lian Li Dan Case A3 White
Audio Device(s) Creative Sound BlasterX G6
Power Supply Corsair SF750 Platinum
Mouse Logitech G600 White
Software Windows 11 Pro
I have friends who do tons of photo editing and only game occasionally, and they decided to buy an X3D processor - because the difference for them is just a slightly longer "rendering" time when exporting photos, all other metrics are similar, and the extra cache in a "gaming" CPU might make it more responsive in tasks that are hard to benchmark.

Came to post exactly this. Unless you have a super tight workflow where literally every second counts, CPUs have been "good enough" to meet the productivity needs of home users for like a decade.

I game and do photo editing and blender work and can't say I care about saving a few seconds on an export vs keeping my 1% frame dips up in multiplayers. :laugh:
 
Joined
Aug 12, 2021
Messages
64 (0.05/day)
Why is the 14900KS not included?? It's not normal for the last flagship to be missing... With all the Intel microcode updates, it would have been important to know where that chip stands now.
 
Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Oh boy.
It is more efficient than the 14900 and 13900 but also slower. The new node helped somewhat, but not much. Maybe this is something Intel needs to fix with BIOS updates, or maybe the power delivery is messed up or something. The efficiency is better, but still not great or even OK for that matter.
Still, a new mobo is required, so that lesson has not been learned. Maybe using the same socket as Raptor Lake would have helped people run away from the ticking bombs that the 13th and 14th gen CPUs are. I'm sure consumers would appreciate that, even though Core Ultra is slower. It's not at the bottom of the chart, so that's good, right?
 
Joined
May 11, 2018
Messages
1,292 (0.53/day)
Came to post exactly this. Unless you have a super tight workflow where literally every second counts, CPUs have been "good enough" to meet the productivity needs of home users for like a decade.

I game and do photo editing and blender work and can't say I care about saving a few seconds on an export vs keeping my 1% frame dips up in multiplayers. :laugh:

But that "good enough" mentality is also a hurdle in selling new generations to buyers that don't fall for hype.

I have a 5900X, and an RTX 3080. Do I gain anything in productivity by buying 7800X3D? Nothing. Not "virtually", measurably there isn't any difference. And I don't expect 9800X3D to change that by more than 5%.

And gaming? It looks impressive, until you consider you don't have an RTX 4090, and you don't game at 720p or 1080p.

But the prices of new platforms - new CPUs, new motherboard, new memory, have all gone up considerably in last two generations. It will have to be a considerable jump in performance to spend that kind of money, and it makes no sense to gain almost no productivity increase, and no noticeable gaming performance increase in real world situations (I mostly game at 4K 60fps, with DLSS and other help to gain that in ageing card).
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,972 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
@dgianstefani

Appears to be a discrepancy in the summary and conclusion. Test Setup page lists 6000 CL36, but summary mentions CL38 twice in the DDR5 Memory & CUDIMM paragraph.
Nice find, CL38 was a typo, fixed in all 3 reviews

The clock redriver will be useless on a platform that doesn't support it, and so far only ARL does on desktop. Zen 5 doesn't work with it either (I mean, you can use the DIMMs, just not take advantage of the clock redriver part).
That's my understanding, too. The other platforms can run CUDIMMs in some sort of compatibility mode, which bypasses the clock driver, so no gains over classic modules
 
Joined
Apr 16, 2022
Messages
60 (0.06/day)
Processor AMD Ryzen 9 7900X3D
Motherboard ASUS ROG Crosshair X670E Hero
Cooling ASUS ROG Strix LC III 360
Memory G.Skill 48GB(2x24) TZ5 Neo RGB EXPO 6400mhz CL32
Video Card(s) ASUS TUF RTX4070TI SUPER
Storage Adata XPG SX8200 Pro 2X2TB, Adata XPG SX8100 3X2TB,
Display(s) Dell 34" Curved Gaming Monitor S3422DWG
Case Corsair 5000X
Power Supply Corsair RM1000x SHIFT 80 PLUS Gold
Mouse Asus ROG Gladius II Core
Keyboard Asus ROG Strix Scope
FIASCO.
Intel's 15th generation, the new Core Ultra processors, are a complete fiasco, no offense to anyone.
They are nothing more than a revised version of the 14th generation with the addition of an NPU section and slightly reduced power consumption. I have been assembling systems and working with hardware for 25 years, and I have never seen Intel like this.
Moreover, it is said that the socket will be supported until the end of 2025, which is complete nonsense.
There is absolutely no need for a person who has an undegraded 14th generation or 13th generation or even 12th generation Intel Core i9 or i7 to switch. I am not even talking about those with AMD 7950X3D, 7900X3D, 7800X3D Ryzen processors.
 
Last edited:
Joined
Apr 12, 2013
Messages
7,564 (1.77/day)
Why would a guy with a solid 14900K, 14700K, 13900K, 13700K or 7950X3D, 7900X3D, 7800X3D switch to the Core Ultra series?
Unnecessary expense, most importantly a money trap.
Why would someone go from a 5800X3D to a 9800X3D either, considering the costs? Too many people conflate wants & needs here & yes, the earth is going to sh!t as a result of that!

Not the only reason but one of the main ones :ohwell:
 
Joined
Mar 14, 2014
Messages
1,431 (0.36/day)
Processor 11900K
Motherboard ASRock Z590 OC Formula
Cooling Noctua NH-D15 using 2x140mm 3000RPM industrial Noctuas
Memory G. Skill Trident Z 2x16GB 3600MHz
Video Card(s) eVGA RTX 3090 FTW3
Storage 2TB Crucial P5 Plus
Display(s) 1st: LG GR83Q-B 1440p 27in 240Hz / 2nd: Lenovo y27g 1080p 27in 144Hz
Case Lian Li Lancool MESH II RGB (I removed the RGB)
Audio Device(s) AKG Q701's w/ O2+ODAC (Sounds a little bright)
Power Supply Seasonic Prime 850 TX
Mouse Glorious Model D
Keyboard Glorious MMK2 65% Lynx MX switches
Software Win10 Pro
10-20fps lower in the minimums is a little alarming honestly.
 
Joined
Jun 19, 2023
Messages
114 (0.20/day)
System Name EnvyPC
Processor Ryzen 7 7800X3D
Motherboard Asus ROG Strix B650E-I Gaming WiFi
Cooling Thermalright Peerless Assassin 120 White / be quiet! Silent Wings Pro 4 120mm (x2), 140mm (x3)
Memory 64GB Klevv Cras V RGB White DDR5-6000 (Hynix A) 30-36-36-76 1.35v
Video Card(s) MSI GeForce RTX 4080 16GB Gaming X Trio White Edition
Storage SK Hynix Platinum P41 2TB NVME
Display(s) HP Omen 4K 144Hz
Case Lian Li Dan Case A3 White
Audio Device(s) Creative Sound BlasterX G6
Power Supply Corsair SF750 Platinum
Mouse Logitech G600 White
Software Windows 11 Pro
But that "good enough" mentality is also a hurdle in selling new generations to buyers that don't fall for hype.

I have a 5900X, and an RTX 3080. Do I gain anything in productivity by buying 7800X3D? Nothing. Not "virtually", measurably there isn't any difference. And I don't expect 9800X3D to change that by more than 5%.

And gaming? It looks impressive, until you consider you don't have an RTX 4090, and you don't game at 720p or 1080p.

But the prices of new platforms - new CPUs, new motherboard, new memory, have all gone up considerably in last two generations. It will have to be a considerable jump in performance to spend that kind of money, and it makes no sense to gain almost no productivity increase, and no noticeable gaming performance increase in real world situations (I mostly game at 4K 60fps, with DLSS and other help to gain that in ageing card).
Mostly agreed, though if you play MMOs or even some large scale sandbox shooters, even the best gaming CPUs today can still get bogged down with lots of players around.

Even my 7800X3D has dipped below 60 in crowded cities in FFXIV, meanwhile my 4080 is largely AFK.
 

bgx

New Member
Joined
Oct 24, 2024
Messages
9 (0.14/day)
Overall, 2024 was very hyped, but the outcome is disappointing, both for AMD and Intel. I like Lunar Lake (it will be my next ultrabook), but I agree it is not for everyone (most people here care about gaming, for which Lunar Lake is neither too good nor too bad).

On power consumption: it seems to me that they pushed Arrow Lake to the limit to extract every bit of performance they could, as it has a clock deficit vs Raptor Lake and high latency too (so gaming performance is not good).

So yes, efficiency at factory settings is not impressive, but that does not mean the efficiency of the architecture is bad. 10 nm vs 3 nm (even TSMC's "fake" 3 nm) should still be very obvious.

A test at 4 or 5 GHz, undervolted while remaining stable (for the different architectures: Zen 4, Zen 5, ARL, RL, AL), would be interesting to see the real efficiency of the underlying architecture/process.

On a side note, on performance, Intel 7 is not that bad!

We can only hope Intel 18A is viable next year.
 
Joined
Jul 24, 2024
Messages
301 (1.92/day)
System Name AM4_TimeKiller
Processor AMD Ryzen 5 5600X @ all-core 4.7 GHz
Motherboard ASUS ROG Strix B550-E Gaming
Cooling Arctic Freezer II 420 rev.7 (push-pull)
Memory G.Skill TridentZ RGB, 2x16 GB DDR4, B-Die, 3800 MHz @ CL14-15-14-29-43 1T, 53.2 ns
Video Card(s) ASRock Radeon RX 7800 XT Phantom Gaming
Storage Samsung 990 PRO 1 TB, Kingston KC3000 1 TB, Kingston KC3000 2 TB
Case Corsair 7000D Airflow
Audio Device(s) Creative Sound Blaster X-Fi Titanium
Power Supply Seasonic Prime TX-850
Mouse Logitech wireless mouse
Keyboard Logitech wireless keyboard
The thing is, as usual, the X3D chips are only good for gaming. They're slower than the non X3D chips as well as the Intel competition in anything that isn't gaming, which happens to be what the vast majority of people in the world use CPUs for (not gaming). They're also more expensive.

Compare, say, a mainstream segment $330 9700X against the $310 245K, you're essentially getting 8% more application performance per dollar with the 245K, a more modern platform, and generally it's more efficient, while having slightly slower gaming prowess (with 6000 MT and early firmware). The 7800X3D is both more expensive and 10% slower in applications, but 20% faster in gaming (when the 245K is tested with slow memory), for $490.

What I'm seeing with the $590 285K is a CPU that compares favorably against its more expensive competition ($649 9950X): 30% more efficient in ST
View attachment 368773

essentially the same in MT

View attachment 368772

25% less power in idle

View attachment 368774

...plus a better platform, but currently it's slightly slower in gaming despite being more efficient, when tested with memory 2000 MT slower than Intel's "sweetspot" 8000 MT.

View attachment 368775
What?

Did you happen to overlook the upper sections of those graphs?
(attached: efficiency chart screenshots)

AMD dominates in efficiency, except for idle consumption - that is indeed horrible. The 9600X might not be as efficient as the Ultra 245K, but the Ultra 245K delivers about 5% less FPS. If you take that into account, the 9600X prevails. With Intel's node advantage, one would expect the Ryzen 7900 to have competition in multi-thread efficiency. The thing is, the Ryzen 7900 and 7700 show exactly what happens when a CPU is paired with sane voltages and clocks: exceptional efficiency. (I'm not taking the X3D SKUs into account.) In terms of efficiency, the i5-13400F is Intel's only efficient SKU.

As for the 9800X3D, it raises base clocks significantly compared to the 7800X3D. The estimated application performance improvement over the 7800X3D is about 12-16%, which means the 9800X3D will dominate gaming while being about as powerful in apps as the 9700X. Although this might still change a bit after retesting on Win 24H2.
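To illustrate the "take the FPS into account" step, here is a minimal sketch in Python of comparing gaming efficiency as FPS per watt instead of raw power draw. The wattage and FPS numbers are made-up placeholders for illustration, not figures from the review:

# Illustrative only: hypothetical numbers, not data from the TPU review.
# The point: a chip that draws a bit less power but also delivers fewer FPS
# can still end up behind once you divide FPS by watts.

cpus = {
    # name: (average gaming power in watts, average FPS) -- hypothetical values
    "Chip A (lower power, lower FPS)": (60.0, 95.0),
    "Chip B (higher power, higher FPS)": (63.0, 100.0),
}

for name, (watts, fps) in cpus.items():
    print(f"{name}: {fps / watts:.2f} FPS per watt")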
 
Joined
May 25, 2022
Messages
128 (0.14/day)
Everything Arrow Lake does holistically Meteor Lake did first. And Arrow Lake hasn't fixed any of the problems of MTL, only brought them (at last) to desktop.
Arrow Lake, as a whole, is derivative of Meteor Lake.
Lion Cove is derivative of Golden Cove/Redwood Cove.
Skymont is derivative of Gracemont/Crestmont.
Skymont is good, but Lion Cove is not. While Skymont is based off Crestmont, it also brings new ideas and is a substantial improvement, while Lion Cove seemingly expands a lot but brings very little. A roughly 33% increase in major structures in many areas, with just a 10% gain, is sad. The Israeli Design Center that brought us Merom/Conroe, which enthusiasts pretty much worship, is now on its deathbed and should be replaced.

What should it be replaced by? Successors of Skymont.

The real problem is that Arrow Lake (and its predecessor Meteor Lake, which uses the same tile configuration) was developed during extremely troublesome times for Intel. It's said that a lot of the Meteor Lake team went to Microsoft's CPU project. So many Intel guys moved to MS that the project acronym they used within Intel for Meteor Lake was used at Microsoft. Many IDC (the team) leads and members left Intel during Krzanich's era. Mooly Eden, Dadi Perlmutter, remember them? They were brought to fame after Core 2.

So not only was the subpar too-many-tiles configuration kept, they also didn't have the manpower/intellect/leadership to make it work to their vision. When people are unhappy and/or leave, you don't see the ramifications right away. We're seeing them now. Projects take 3+ years to come to fruition.

The tile-based Xeon "Sapphire Rapids" suffered too, because during Krzanich's era he fired the entirety of the validation team - the people responsible for making sure things work properly, don't throw errors, and stay reliable under workloads.

Increasingly we're coming to a point where Intel as a company is in some jeopardy of declaring bankruptcy in the future. The next few years are absolutely critical.
 
Joined
Jul 24, 2024
Messages
301 (1.92/day)
System Name AM4_TimeKiller
Processor AMD Ryzen 5 5600X @ all-core 4.7 GHz
Motherboard ASUS ROG Strix B550-E Gaming
Cooling Arctic Freezer II 420 rev.7 (push-pull)
Memory G.Skill TridentZ RGB, 2x16 GB DDR4, B-Die, 3800 MHz @ CL14-15-14-29-43 1T, 53.2 ns
Video Card(s) ASRock Radeon RX 7800 XT Phantom Gaming
Storage Samsung 990 PRO 1 TB, Kingston KC3000 1 TB, Kingston KC3000 2 TB
Case Corsair 7000D Airflow
Audio Device(s) Creative Sound Blaster X-Fi Titanium
Power Supply Seasonic Prime TX-850
Mouse Logitech wireless mouse
Keyboard Logitech wireless keyboard
Skymont is good, but Lion Cove is not. While Skymont is based off Crestmont, it also brings new ideas and is a substantial improvement, while Lion Cove seemingly expands a lot but brings very little. A roughly 33% increase in major structures in many areas, with just a 10% gain, is sad. The Israeli Design Center that brought us Merom/Conroe, which enthusiasts pretty much worship, is now on its deathbed and should be replaced.

What should it be replaced by? Successors of Skymont.

The real problem is that Arrow Lake (and its predecessor Meteor Lake, which uses the same tile configuration) was developed during extremely troublesome times for Intel. It's said that a lot of the Meteor Lake team went to Microsoft's CPU project. So many Intel guys moved to MS that the project acronym they used within Intel for Meteor Lake was used at Microsoft. Many IDC (the team) leads and members left Intel during Krzanich's era. Mooly Eden, Dadi Perlmutter, remember them? They were brought to fame after Core 2.

So not only was the subpar too-many-tiles configuration kept, they also didn't have the manpower/intellect/leadership to make it work to their vision. When people are unhappy and/or leave, you don't see the ramifications right away. We're seeing them now. Projects take 3+ years to come to fruition.

The tile-based Xeon "Sapphire Rapids" suffered too, because during Krzanich's era he fired the entirety of the validation team - the people responsible for making sure things work properly, don't throw errors, and stay reliable under workloads.

Increasingly we're coming to a point where Intel as a company is in some jeopardy of declaring bankruptcy in the future. The next few years are absolutely critical.
Troublesome times or not, the people on that team surely did not have a gun to their heads forcing them to go for insane voltages that cause extremely fast degradation.

What will be critical for Intel is the success of its own 18A process. Pat bet his (company's) ass on this, so hopefully they will deliver something usable. Otherwise they're f*cked, meaning they can't compete in servers, in AI, in GPUs, in gaming, or in heavy app workloads. Even their mainstream network controllers have sucked for the third generation in a row.
 
Joined
Mar 22, 2012
Messages
5 (0.00/day)
Location
istanbul
System Name AM5
Processor AMD Ryzen 7600x
Motherboard MSI x670e Tomahawk WIFI
Cooling Noctua NH-D15
Memory G.Skill DDR5 6400 2x16gb 30-40-40-96@1.35v
Video Card(s) Sapphire RX 7800 XT Nitro
Storage Samsung 970 EVO Plus NVME 512GB
Display(s) AOC 24g2u
Case MSI Gungnir 300R
Audio Device(s) Realtek ALC 1200 Onboard 7.1
Power Supply Seasonic Focus GX 750w
Mouse Razer Abyssus
Keyboard Asus ROG Strix Scope II RX
Software Windows 11 Pro
23H2 was used, not 24H2. Did I read it right?
It's heavily implied by other reviewers that 24H2 is not friendly with Intel's Ultra chips.
 
Joined
May 11, 2018
Messages
1,292 (0.53/day)
It's heavily implied by other reviewers that 24H2 is not friendly with Intel's Ultra chips.

As in "they are even more behind AMD, which gains in 24H2", or do they have problems in performance, stability in new Windows update?

Whichever it is, there is no excuse for not benchmarking in 24H2, which is in rollout, and is now basically current Windows 11 version.
 
Joined
Aug 26, 2021
Messages
388 (0.32/day)
I don't know how that would be the case considering the architecture has been simplified.
The Windows 2152 update, which is currently a preview, has the improvement. Even if you're on 24H2, you don't have it. By all accounts it's not a massive improvement, 3-5%. But I suspect Arrow Lake will be like Zen 5, and lots of small improvements will happen.
 
Joined
Mar 10, 2024
Messages
16 (0.05/day)
Location
Hungary
System Name Main rig
Processor Intel Core i5-14600k
Motherboard TUF GAMING B760M-PLUS
Cooling Be Quiet! DARK ROCK PRO 5
Memory 32 GB DDR5 6000 MHz
Video Card(s) RTX 3060 Ti GDDR6X
Storage Kingston KC3000 1TB, Samsung 970 evo plus, Kingmax 480 GB SSD, Western Digital WD Red Plus 3.5 3TB
Display(s) 1080p
Case Fractal Design Focus 2
Power Supply Seasonic FOCUS GX Series 750W
Strange, the general consensus was that Intel's (insert preferred number) nm node was causing the high power consumption etc. and the 3nm node would be a drastic improvement... Wonder what happened
 
Joined
Sep 27, 2008
Messages
1,210 (0.20/day)
As in "they are even more behind AMD, which gains in 24H2", or do they have problems in performance, stability in new Windows update?

Whichever it is, there is no excuse for not benchmarking in 24H2, which is in rollout, and is now basically current Windows 11 version.

From the conclusion page of the TPU review:

When pairing Windows 24H2 with Arrow Lake, performance will be terrible—we've seen games running at 50% the FPS vs 23H2. One solution is to turn off Thread Director or disable the "Balanced" power profile, which is why we decided to use 23H2 for the time being. Last but not least, there are some driver issues and bluescreens when both a dGPU and iGPU are active at the same time.
 
Joined
May 11, 2018
Messages
1,292 (0.53/day)
From the conclusion page of the TPU review:

Wasn't 24H2 available in the Release Preview Channel since May 2024? For a lot of users, Windows has already updated - the fact that the new Intel CPUs are crap in it shouldn't be an excuse to use an outdated version. What's next, using Windows 10 if that brings any advantage to Intel?
 