
Intel Core Ultra 200 "Arrow Lake-S" Lineup and Clock Speeds Revealed

Joined
Jun 10, 2014
Messages
2,982 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
The 245K looks decent.
It does, but the real truth will show in benchmarks, as rated clock speeds only tell a tiny piece of the picture. Intel especially is known for throttling and inconsistent performance. E.g. those 65 W high-core-count CPUs are basically a terrible deal; they may look good in benchmarks, but they throttle like crazy once you use them for anything, and the user experience suffers as a result. Still, I would rather have the 6 P-core 245K than the (presumably) 65 W 8 P-core 265/285.

If I win the lotto and want a real Intel CPU for desktop, I'm buying a Xeon. I'm so sick of P-cores and E-cores. If I want something pure, I now have to buy a Xeon.

4 grand but still a real CPU.

But since I'm poor, AMD is the only company that shows they're for gamers now, with pure 16-core CPUs.
I do feel your pain, although they do have a workstation lineup: Xeon W-2400/3400 (Sapphire Rapids) and the W-2500/3500 refresh.
They are still pricey; the most relevant would probably be the w5-2455X at $1039 (12-core, 3.2/4.6 GHz), the w5-2465X at $1389 (16-core, 3.1/4.7 GHz) or the w7-2495X at $2189 (24-core, 2.5/4.8 GHz).
The most relevant motherboard would probably be the Asus Pro WS W790-ACE at ~$900, plus a specialized cooler, probably the Noctua NH-U14S DX-4677 at ~$190.
So, depending on your needs for memory, storage and GPU, you're probably looking at a system cost of ~$4000-6000.
But if you're just looking for something consistent, solid, and with good IO, you can go for one of the lower-core-count CPUs and get it cheaper than that.

The bigger issue with such parts is limited availability. Most computer stores don't carry them, and those that do rarely have them in stock. Compared to the good old HEDT days (X79/X99/X299), great deals are hard to come by. But be aware that when they get discontinued, there can be some great discounts. And the used market does have some amazing deals if you don't need the latest and greatest.

I do wish Intel and AMD would bring back "proper" HEDT platforms, as the mainstream platforms are increasingly held back by IO and memory bottlenecks, as well as thermal limits. For pure gamers this shouldn't be a big concern though.

I have the w7-2495X. I also have a 12700K. Let me tell you, my 12700K with only P-cores enabled, running at 5.1 GHz all-core, managed a Linpack bench at 350 GFLOPS.<snip>
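For rough context on that 350 GFLOPS figure, here is a back-of-the-envelope peak estimate. It assumes AVX2 with two 256-bit FMA units per P-core, i.e. 16 double-precision FLOPs per core per cycle; that is an assumption about the run, not something stated above:

```latex
% Theoretical double-precision peak, assuming 8 P-cores at 5.1 GHz with AVX2 (2x 256-bit FMA):
\[
  P_{\text{peak}} \approx 8 \times 5.1\,\text{GHz} \times 16\ \tfrac{\text{FLOP}}{\text{cycle}}
  \approx 653\ \text{GFLOP/s}
\]
% A measured 350 GFLOP/s would then be roughly half of that theoretical peak.
```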
May I ask what the user experience of these platforms is like for you?
Some context:
I have an i5-13600K at work running Windows 11, but I haven't had the chance to compare it to a "HEDT" counterpart, and preferably on Linux. But I have done a lot of comparison of Sandy Bridge vs. Sandy Bridge-E (X79) and the Skylake family vs. X299. And the thing that is very noticeable is consistency. Even when the rated clock speeds don't favor the HEDT model and the core count isn't significantly higher, just having that consistent clock speed and ample memory bandwidth gives it a sense of "unrestricted performance" which makes it much easier to focus on being productive. For a while I even used the "unimpressive" 6-core Skylake-X i7-7800X (3.5/4.0 GHz), and yet it was a great performer at the time. My workloads have not consisted of large batch jobs, but rather either "lighter" web development or desktop applications, or 3D programming with some tools, VMs, graphics applications, a web browser, etc. in the background. And this is where the limits of benchmarks in reviews come into play: no review can ever fit everyone's real workflow, especially for those who run a mix of "medium" workloads at the same time, as most people in programming, CAD, graphics and content creation probably do; no review will ever realistically reflect that.

But Raptor Lake certainly gives me the impression of being more inconsistent and "jerky". I don't know yet how much of this should be attributed to Windows 11 and how much to the absurd turbos of Raptor Lake, but to use a car analogy: it feels like driving a car with a tiny turbo four-banger vs. a smooth V8; sure, the turbo engine has some peak power and looks great in benchmarks, but the inconsistency is a persistent source of annoyance and discomfort. I have done a side-by-side comparison with a Comet Lake system on Windows 10 though, and the Raptor Lake is certainly faster overall, but also noticeably more inconsistent.
So when comparing w7-2495X vs. i7-12700K, how would you say that "clock speed deficit" translates into real world performance across various types of workloads?
Or to put it more bluntly: if you had to choose only one to have at home, which one would you prefer?

I'm just curious, although I will probably wait for the next iteration before buying anything.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
26,877 (3.82/day)
Location
Alabama
System Name RogueOne
Processor Xeon W9-3495x
Motherboard ASUS w790E Sage SE
Cooling SilverStone XE360-4677
Memory 128gb Gskill Zeta R5 DDR5 RDIMMs
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 2TB WD SN850X | 2x 8TB GAMMIX S70
Display(s) 49" Philips Evnia OLED (49M2C8900)
Case Thermaltake Core P3 Pro Snow
Audio Device(s) Moondrop S8's on schitt Gunnr
Power Supply Seasonic Prime TX-1600
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Moondrop Luna lights
VR HMD Quest 3
Software Windows 11 Pro Workstation
Benchmark Scores I dont have time for that.
Or to put it more bluntly: if you had to choose only one to have at home, which one would you prefer?

Coming from consumer land for the last 3 generations

12900
12900k
13900k
13900ks
14900ks

I went back to workstation. The biggest selling point, imo: IO. Specifically the PCIe lanes you get. These systems are power-hungry, but anything even remotely multitasked is a dream. There is some credence to ST tasks liking clock speed, but it's misunderstood; any ST task likes clock speed, that's what being single-threaded means.....

Multicore performance? Wild. Literally unimaginable. Just like the old HEDT days, with latency to spare. A lot of people still complain about and compare workstation chips to consumer ones, specifically in MT benchmarks where the consumer parts are pushing like 5.6 GHz. Hasn't been an issue for me personally (though at 4.8 GHz boost it's no slouch), but the real world is the real world. I was under 100% load the other day, completely by mistake, while I was playing a game. I literally did not notice. I looked down at my dash and saw the CPU usage and had to alt-tab to stop the task, but the work was invisible to me... as it should be. That is the power of these platforms.


I left X299 around 12th gen to dip my toe into consumer platform land, and I have hated it. The constant upgrades, checking whether a board disables NVMe or SATA slots. Wild. I literally can't comprehend that.

The big problem? Expense. I work hard and I get paid. I got this system because I use PCs and I use them a lot, so I got a nice chair too. That's what I spend my money on. It's not for everyone though. These systems also come with their own challenges. For example, getting boost to trigger when you are pushing this many cores is difficult. I'm typing this with 112 threads at 800 MHz. If I fire up something single-threaded, the load isn't enough to convince turbo to kick in. So you usually have to tweak the BIOS and Windows performance plans, which all of these boards support.
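If anyone wants to see that "turbo won't kick in" behavior on their own box, here is a minimal sketch (assuming Python with the third-party psutil package; nothing from the post above): it starts a single-threaded busy loop and samples the reported CPU frequency to see whether the clocks ever leave idle.

```python
# Minimal sketch: generate a single-threaded load and watch reported CPU clocks.
# Assumes the third-party "psutil" package; on some OSes only an aggregate
# frequency reading is exposed rather than true per-core values.
import threading
import time

import psutil

def busy_loop(stop_event):
    # Trivial integer churn; due to the GIL this stays roughly single-threaded.
    x = 0
    while not stop_event.is_set():
        x = (x * 1664525 + 1013904223) % 2**32

def sample_freq(seconds=10, interval=1.0):
    for _ in range(int(seconds / interval)):
        freqs = psutil.cpu_freq(percpu=True) or [psutil.cpu_freq()]
        print([round(f.current) for f in freqs if f])  # MHz readings
        time.sleep(interval)

if __name__ == "__main__":
    stop = threading.Event()
    threading.Thread(target=busy_loop, args=(stop,), daemon=True).start()
    sample_freq()  # if turbo never engages, these stay near the idle clock
    stop.set()
```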

The other, maybe to a minor degree: complexity. Using CPU mounting trays is going to be a foreign concept to your average Valorant player, and in that respect (honestly an unpopular opinion) the price of both the chips and the supporting parts, boards or otherwise, is almost fine just to arbitrarily gate entry, if only to save $$$$$$ on RMAs from users who don't have the patience these machines need to assemble.

Would be happy to make a thread going over my recent project with these systems if interested, but back on topic.

Arrow Lake moving to P cores is cool, though I personally never had issues with E cores, maybe it is an industry shift to reduce the software debt needed to keep schedulers and microcode updated to handle all this load balancing.
 
Joined
Jun 10, 2014
Messages
2,982 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
I went back to workstation. The biggest selling point, imo: IO. Specifically the PCIe lanes you get…
I've been saying since the days of Coffee Lake that the move to displace HEDT with premium, hyper-turboed mainstream CPUs has been a huge mistake. Predictably, it has resulted in even more inconsistent performance, as well as absurd power requirements and slightly upgraded IO driving up the cost of a platform mainly targeting mainstream users and "basic" office use.
I really think we need Intel and AMD to provide a proper HEDT tier in between the mainstream and the big 8-channel platforms: a ~2500-pin socket, 4-channel memory, ~64 PCIe lanes and standard CPU cooler compatibility, with okay motherboards starting at ~$600. In these days when the customer groups of both developers and especially content creators are growing, there is more reason than ever to justify a proper HEDT segment. Mainstream, in turn, should cap out at ~100 W and 8 cores.
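Just to illustrate why the 4-channel part matters; the DDR5-6000 speed below is an assumed example, not something from the post:

```latex
% Rough peak DRAM bandwidth: channels x transfer rate x 8 bytes per 64-bit channel.
\[
  BW_{\text{peak}} \approx n_{\text{ch}} \times f_{\text{MT/s}} \times 8\,\text{B}
\]
% Assuming DDR5-6000:
\[
  \text{2-channel: } 2 \times 6\,\text{GT/s} \times 8\,\text{B} \approx 96\ \text{GB/s}
  \qquad
  \text{4-channel: } \approx 192\ \text{GB/s}
\]
```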

One of the main drivers behind the abandonment of HEDT is probably the larger PC integrators, which mostly sell new computers based on "specs". Even though informed PC users like us mostly know better, big enterprises as well as most consumers just look at rated clock speeds, core counts, etc. This is why they pushed for E-cores on desktop in the first place, because pushing more cores otherwise would have been too hard. That's why you get these "24-core up to* 5.8 GHz 65 W" CPUs in Dells, Lenovos, etc. *) If you don't read the small print: you only get that speed on a couple of cores for a few seconds.

I was under 100% load the other day, completely by mistake, while I was playing a game. I literally did not notice. I looked down at my dash and saw the CPU usage and had to alt-tab to stop the task, but the work was invisible to me... as it should be. That is the power of these platforms.
This is so underappreciated.
At least for "prosumers": who doesn't like to take a break and play a little game or something without closing 15 applications? Or simply being able to switch between larger "assignments".

It's one of the things I loved about my old i7-3930K (X79, 3.2/3.8 GHz) for so many years. For years I had it side-by-side with a Haswell i5-4690K (3.5/3.9 GHz), and despite the Haswell being objectively faster in burst speed, there is no question the old X79 system fared much better in productivity.

I left X299 around 12th gen to dip my toe into consumer platform land, and I have hated it. The constant upgrades, checking whether a board disables NVMe or SATA slots. Wild. I literally can't comprehend that.
Yeah, this has become an annoyance for me too, on both current AMD and Intel platforms: just trying to figure out which board has the most flexible IO (and I'm not even sure the specs are always correct). Fairly pricey boards commonly disable 2-4 SATA ports when you use the extra M.2 slot, etc. And boards boasting 3-4 M.2 slots probably only run some of them at x2 or even x1 lanes (as everything on the PCH is shared, you know). So buying a "long-lived" prosumer desktop on these platforms has become a headache, and that alone is almost worth HEDT. And is this going to change with Arrow Lake? Probably not. Just because the chipset can support all this stuff doesn't mean the motherboard will implement it.
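To put the "everything on the PCH is shared" point into rough numbers (assuming a DMI 4.0 x8 chipset uplink of roughly 16 GB/s, as on recent Intel desktop chipsets; the three-SSD scenario is just an example):

```latex
% Aggregate device bandwidth behind the chipset vs. the uplink to the CPU:
\[
  3 \times \underbrace{\sim 7.9\ \text{GB/s}}_{\text{Gen4 x4 SSD}} \approx 23.6\ \text{GB/s}
  \quad > \quad
  \underbrace{\sim 15.8\ \text{GB/s}}_{\text{DMI 4.0 x8 uplink}}
\]
% So chipset-attached M.2/SATA/USB devices can easily oversubscribe the shared link.
```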

The closest I've come to buying on Raptor Lake is a W680 board, either Asus or Supermicro. Those seem to have the most flexibility of those platforms.

I do believe Arrow Lake is bringing 4 extra PCIe lanes on the CPU for an extra M.2 slot. Unfortunately most motherboards will lock these lanes to an M.2 2280 slot underneath where the graphics card sits, which is stupid on so many levels. Most notably, a PCIe x4 slot would offer more flexibility, those M.2 slots are poorly cooled so the SSD throttles a lot, and the lack of support for 22110 SSDs rules out enterprise-grade SSDs (which are more reliable, have power-loss protection, etc.).
So ultimately, this will yet again just drive up the cost of the mainstream platform without making it significantly better for power users and professionals.

The big problem? Expense. I work hard and I get paid. I got this system because I use PCs and I use them a lot, so I got a nice chair too. That's what I spend my money on.
Even for those who don't get directly paid for their time: as people "grow up" and get day jobs, family, etc., time itself becomes more "valuable", so if productivity is better or you get less annoyed using the machine, that alone can justify some cost.

While I'm not exactly pooping gold bricks, I can certainly afford a decent workstation as a planned purchase. (And it's really not that much more than many people waste on buying new iPhones all the time…)
When I'm working on a project, my workflow actually uses two computers side-by-side. Usually one is the workstation and the other handles research, documentation and music/podcasts, but sometimes I use the secondary for testing or run code experiments on both. So I'm actually considering whether I need two workstations… :p
 

las

Joined
Nov 14, 2012
Messages
1,693 (0.39/day)
System Name Meh
Processor 7800X3D
Motherboard MSI X670E Tomahawk
Cooling Thermalright Phantom Spirit
Memory 32GB G.Skill @ 6000/CL30
Video Card(s) Gainward RTX 4090 Phantom / Undervolt + OC
Storage Samsung 990 Pro 2TB + WD SN850X 1TB + 64TB NAS/Server
Display(s) 27" 1440p IPS @ 360 Hz + 32" 4K/UHD QD-OLED @ 240 Hz + 77" 4K/UHD QD-OLED @ 144 Hz VRR
Case Fractal Design North XL
Audio Device(s) FiiO DAC
Power Supply Corsair RM1000x / Native 12VHPWR
Mouse Logitech G Pro Wireless Superlight + Razer Deathadder V3 Pro
Keyboard Corsair K60 Pro / MX Low Profile Speed
Software Windows 10 Pro x64
Can't wait to see what Intel can deliver on TSMC 3 nm; power usage should drop drastically too.

Might just grab a 9800X3D though, let's see in a few months.
 
Joined
Oct 29, 2016
Messages
110 (0.04/day)
It does, but the real truth will show in benchmarks, as rated clock speeds only tell a tiny piece of the picture. Intel especially is known for throttling and inconsistent performance. E.g. those 65 W high-core-count CPUs are basically a terrible deal; they may look good in benchmarks, but they throttle like crazy once you use them for anything, and the user experience suffers as a result. Still, I would rather have the 6 P-core 245K than the (presumably) 65 W 8 P-core 265/285.


I do feel your pain, although they do have a workstation lineup: Xeon W-2400/3400 (Sapphire Rapids) and the W-2500/3500 refresh.
They are still pricey; the most relevant would probably be the w5-2455X at $1039 (12-core, 3.2/4.6 GHz), the w5-2465X at $1389 (16-core, 3.1/4.7 GHz) or the w7-2495X at $2189 (24-core, 2.5/4.8 GHz).
The most relevant motherboard would probably be the Asus Pro WS W790-ACE at ~$900, plus a specialized cooler, probably the Noctua NH-U14S DX-4677 at ~$190.
So, depending on your needs for memory, storage and GPU, you're probably looking at a system cost of ~$4000-6000.
But if you're just looking for something consistent, solid, and with good IO, you can go for one of the lower-core-count CPUs and get it cheaper than that.

The bigger issue with such parts is limited availability. Most computer stores don't carry them, and those that do rarely have them in stock. Compared to the good old HEDT days (X79/X99/X299), great deals are hard to come by. But be aware that when they get discontinued, there can be some great discounts. And the used market does have some amazing deals if you don't need the latest and greatest.

I do wish Intel and AMD would bring back "proper" HEDT platforms, as the mainstream platforms are increasingly held back by IO and memory bottlenecks, as well as thermal limits. For pure gamers this shouldn't be a big concern though.


May I ask what the user experience of these platforms is like for you?
Some context:
I have an i5-13600K at work running Windows 11, but I haven't had the chance to compare it to a "HEDT" counterpart, and preferably on Linux. But I have done a lot of comparison of Sandy Bridge vs. Sandy Bridge-E (X79) and the Skylake family vs. X299. And the thing that is very noticeable is consistency. Even when the rated clock speeds don't favor the HEDT model and the core count isn't significantly higher, just having that consistent clock speed and ample memory bandwidth gives it a sense of "unrestricted performance" which makes it much easier to focus on being productive. For a while I even used the "unimpressive" 6-core Skylake-X i7-7800X (3.5/4.0 GHz), and yet it was a great performer at the time. My workloads have not consisted of large batch jobs, but rather either "lighter" web development or desktop applications, or 3D programming with some tools, VMs, graphics applications, a web browser, etc. in the background. And this is where the limits of benchmarks in reviews come into play: no review can ever fit everyone's real workflow, especially for those who run a mix of "medium" workloads at the same time, as most people in programming, CAD, graphics and content creation probably do; no review will ever realistically reflect that.

But Raptor Lake certainly gives me the impression of being more inconsistent and "jerky". I don't know yet how much of this should be attributed to Windows 11 and how much to the absurd turbos of Raptor Lake, but to use a car analogy: it feels like driving a car with a tiny turbo four-banger vs. a smooth V8; sure, the turbo engine has some peak power and looks great in benchmarks, but the inconsistency is a persistent source of annoyance and discomfort. I have done a side-by-side comparison with a Comet Lake system on Windows 10 though, and the Raptor Lake is certainly faster overall, but also noticeably more inconsistent.
So when comparing w7-2495X vs. i7-12700K, how would you say that "clock speed deficit" translates into real world performance across various types of workloads?
Or to put it more bluntly: if you had to choose only one to have at home, which one would you prefer?

I'm just curious, although I will probably wait for the next iteration before buying anything.
Sorry, I saw this post very late. I find the speedup going from the i7 to the w7 to be roughly consistent with Geekbench MT scores. My workflow, which makes heavy use of Intel MKL, finishes in about 7.5 days on all P-cores of the 12700K and in about 5.5 days on the w7-2495X. What I find annoying is that a lot of single-core loads, things that are not parallelizable, are much faster on the 12700K. So I much prefer to prototype on the i7 and then deploy code on the w7. If I had to choose just one, I guess I would take the w7, but only if I'm not paying for it. The w7 was a work machine that cost $10k, whereas my i7 was only $1.5k. I imagine a 14900K would be almost as fast as my w7, but I cannot vouch for its stability in heavy sustained workloads. I sometimes run stuff for months continuously, so any small instability would be noticed.
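For what it's worth, the rough arithmetic on those runtimes (just restating the numbers above; the 24-vs-8 comparison assumes only the P-cores are counted on the 12700K):

```latex
% MT speedup implied by the reported runtimes:
\[
  S = \frac{7.5\ \text{days}}{5.5\ \text{days}} \approx 1.36\times
\]
% versus a naive core-count ratio of 24/8 = 3x, so the workload is presumably limited
% by clocks, memory bandwidth and/or serial (non-parallelizable) phases rather than core count.
```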
 