Tuesday, March 12th 2024
Qualcomm Snapdragon X Elite Benchmarked Against Intel Core Ultra 7 155H
Qualcomm Snapdragon X Elite is about to make landfall in the ultraportable notebook segment, powering a new wave of Arm-based Windows 11 devices capable of running even legacy Windows applications. The Snapdragon X Elite SoC in particular has been designed to rival the Apple M3 chip powering the 2024 MacBook Air and some of the "entry-level" variants of the 2023 MacBook Pros. These chips threaten the 15 W U-segment and even the 28 W P-segment x86-64 processors from Intel and AMD, such as the Core Ultra "Meteor Lake" and Ryzen 8040 "Hawk Point." Erdi Özüağ, a prominent tech journalist from Türkiye, has access to a Qualcomm reference notebook powered by the Snapdragon X Elite X1E80100 28 W SoC. He compared its performance to an off-the-shelf notebook powered by a 28 W Intel Core Ultra 7 155H "Meteor Lake" processor.
There are three tests that highlight the performance of the key components of the SoCs—CPU, iGPU, and NPU. A Microsoft Visual Studio code-compile test sees the Snapdragon X Elite with its 12-core Oryon CPU finish the test in 37 seconds, compared to 54 seconds by the Core Ultra 7 155H with its 6P+8E+2LP CPU. In the 3DMark test, the Adreno 750 iGPU posts identical performance numbers to the Arc Graphics Xe-LPG of the 155H. Where the Snapdragon X Elite dominates the Intel chip is AI inferencing. The UL Procyon test sees the 45 TOPS NPU of the Snapdragon X Elite score 1720 points, compared to 476 points by the 10 TOPS AI Boost NPU of the Core Ultra. The Intel machine uses OpenVINO, while the Snapdragon uses the Qualcomm SNPE SDK for the test. Don't forget to check out the video review by Erdi Özüağ in the source link below.
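As a quick back-of-the-envelope check of the relative deltas, here is a minimal sketch using only the figures quoted above (nothing here is measured independently):

```python
# Figures quoted in the article; variable names are illustrative only.
compile_x_elite_s = 37   # Visual Studio code-compile time, Snapdragon X Elite
compile_155h_s = 54      # same test, Core Ultra 7 155H

procyon_x_elite = 1720   # UL Procyon AI score, 45 TOPS NPU (SNPE SDK)
procyon_155h = 476       # UL Procyon AI score, 10 TOPS AI Boost NPU (OpenVINO)

# Lower is better for compile time: the 155H takes ~46% longer.
compile_delta = compile_155h_s / compile_x_elite_s - 1
print(f"155H compile-time penalty: {compile_delta:.0%}")  # → 46%

# Higher is better for Procyon: ~3.6x the score on ~4.5x the rated TOPS.
ai_ratio = procyon_x_elite / procyon_155h
print(f"Procyon AI score ratio: {ai_ratio:.1f}x")  # → 3.6x
```

Note the AI score ratio (~3.6x) trails the rated TOPS ratio (4.5x), which is unsurprising given the two machines run entirely different inference stacks.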
Source:
Erdi Özüağ (YouTube)
55 Comments on Qualcomm Snapdragon X Elite Benchmarked Against Intel Core Ultra 7 155H
Perhaps people don't care what ISA powers their workload as long as the computer does the job best?
For example, Apple's M chips are useless to me because I can't put them in a computer of my own, with a GPU of my choosing and an OS of my choosing.
And Raspberry Pis are cute, but they're not going to be powering my AI or gaming workloads.
And x86 and ARM are nice and all, but they haven't managed to replace the s390 that has been running bank and government workloads since the 1970s.
Because... the ISA only matters for completely portable or completely new workloads.
Consoles have used x86, ARM, MIPS, whatever. It matters less than the GPU.
As do my gaming and AI scenarios.
Who cares what powers phones anymore? They're toys in comparison and stagnated years ago as seen by sales having plateaued.
Bring on ARM, but not for the sake of it. Just give me something good for my use cases and I'll buy it. But x86 is that right now.
Pretty much everything Apple does with its walled garden and macOS's bad UX is holding the M chips back from greatness.
There have been swings and misses getting ARM into Windows/Linux laptops in the past, but everything I've seen about the X Elite indicates it may be the first ARM laptop chip that is both not a toy and not stuck in an Apple device.
If that one day scales up to higher-end computers that need discrete graphics or plenty of PCIe add-in cards, it'd be interesting. Interesting to see how software would adapt, at least. Apple did really well with their transition, but I think that's just something Apple is uniquely positioned to do. Somehow on Windows I imagine we'll all be forced to update to the latest and most dystopian edition of Windows in order to take advantage of future hypothetical ARM desktops.
The switch from x86 to ARM in the power user & business space would require vast amounts of software developers to redevelop their software, or force many businesses to change software altogether. It's a colossal hurdle, which is why we still don't have a vast array of ARM desktop processors, despite their superior efficiency for years.
2. AMD has nothing else to offer?
3. Unification in a single Windows / x86-64 / consoles ecosystem?
Speaking of x86-64 and its inevitable EOL:
1. The foundries can't keep making transistors infinitely smaller, so the end is near.
2. Recent Ryzens' power consumption goes through the roof: the Ryzen 9 7950X uses 56% more energy than its predecessor, the Ryzen 9 5950X.
3. Ryzen U-series chips were 15-watt parts in the past; today they have become 25-30-watt parts, so they won't be used in thin notebooks anymore.
2° In the PC realm, there are ample robust cooling solutions available. However, amid intense competition, opting for efficiency by constraining TDP means leaving performance on the table, and the competitor (Intel) will increase the clock/TDP several times through refreshes to look better in benchmarks.
3° Huh?? There has never been a true 15 W processor; all manufacturers, including Arm vendors, provide misleading figures that typically align with TDP at base clock. In reality, most, if not all, efficiency-focused processors approach nearly 30 W under heavy loads, including those developed by Apple.
It's the same reason the iPhone uses chips so much bigger than Qualcomm's: Apple's vertical integration allows it, and QC could never sell a chip that big to their clients (yes, the iPhone chip is almost as big as an M1).
Apple Silicon: The M2 Ultra Impresses at WWDC 2023 – Display Daily
Intel does have a similar packaging technology (EMIB), but it's unclear when or if it's going to be used in consumer products.
For example, AMD Phoenix is "TSMC 4 nm" and the M2 Pro is "TSMC 5 nm." In this example, the M2 Pro has a slight single-thread performance advantage. A key difference: the M2 reaches this at 3.5 GHz, while the Zen 4 core boosts to a much more power-hungry 5.2 GHz to reach the same level of performance. This is despite Phoenix having an advantage in process and a much higher power envelope. Also, Phoenix is in a considerably lower GPU and memory performance tier, presumably since so many more resources are diverted to the CPU cores.
Even if we assume a 30% IPC boost for Zen 5, they will only close the single-thread performance gap with the M3 or Oryon by boosting to 6 GHz, but then it will be miles apart on efficiency.
My take: Intel has historically done well because it traditionally had a 1-2 year process advantage. Now that this advantage is effectively gone, since even Intel will now be manufacturing ARM chips, it all boils down to architecture. Look back 30 years at a diverse range of RISC vs. CISC products, and the general trend is that RISC can do more with less when normalized for process.
My second take: Microsoft has already made the decision to drop support for x64 past Windows 12; they just haven't told anyone yet. They can't afford to have developers support two ISAs, and any legacy code that absolutely needs it will be (very begrudgingly) supported with virtualization. I say this because they wouldn't be ramping up developer support for arm64 if this wasn't their decision.
What? The M2 Pro consumes up to 100 W; it's insane to think that this is super efficient compared to x86 APUs. Outside of synthetic software, workloads accelerated by ASICs, and Apple's finely tuned ecosystem, it's horrible.
The M2 Pro is 30 W and Phoenix (mobile) is 35-54 W; not sure where 100 W is coming from.
The last performance holdout for x86 has been single-thread performance. With the introduction of the M3 and Oryon, that is no longer the case. Consider, for example, the M3 Max: there are very few real-world applications OR benchmarks where x86 will prevail, even at 5x the power consumption.
OK, maybe gaming, you got me there, but then the M1 Max is sorta at the level of an RTX 4060 Ti, so I think that covers a lot of ground, especially considering it's a portable.
What's not to like about accelerators? Seems like a great way to improve productivity and extend battery life, and Apple's and Qualcomm's silicon seem to have a lot more going for them in their first gen. Let's see... Intel, on their 14th gen, is just introducing a sub-par NPU and is still saddled with a sub-par media processor. Their iGPU is greatly improved, so kudos there.
M2: up to 55 W
M2 Pro: up to 100 W+
www.notebookcheck.net/Apple-MacBook-Pro-14-2023-review-The-M2-Pro-is-slowed-down-in-the-small-MacBook-Pro.687345.0.html
The only way to get an M2 Pro to pull >100 W at the cord is to run a benchmark while the battery is charging and/or with heavy external USB loads.
In the above comparison, they ran different types of GPU and CPU benchmarks simultaneously, which doesn't tell you much. This is because the CPU and GPU share power, and it would be impossible to get a repeatable data run. A single benchmark that encompasses both CPU and GPU, such as a 3D game, would be much better.
The CPU part of the M2 Pro is more around the 27 W mark according to NotebookCheck. I don't understand what's happening on this forum lately: there are more actors in the mainstream CPU market than there have been for decades, yet people have a huge aversion towards the newcomers and would rather keep the status quo.
Apple's chips leverage a 256-bit memory bus, providing substantially greater bandwidth than today's x86 chips. It's like comparing apples to oranges. Therefore, let's shift the comparison to the CPU itself and choose a benchmark that reflects real-world scenarios.
Some people mistakenly believe that ARM is inherently more efficient than x86, often preaching about it as if it were a revolutionary concept and advocating for the immediate demise of x86. "x86's days are numbered"
However, the more grounded individuals understand the complexities involved and are skeptical: chipsandcheese.com/2024/03/27/why-x86-doesnt-need-to-die/
I must say, choosing the M3 Max was weird since it's a 3 nm chip. So let's see how the M2 Max compares to the 7840HS... M2 Max = 72 W vs. Ryzen 7 7840HS = 103 W. Apple's arch is more efficient. At equivalent nodes, it's not a massive difference, but they still have an edge in that aspect. I don't necessarily attribute that to the fact that it's ARM, but to Apple being absolutely focused on making an efficient arch for their laptops. The fact that there's no performance hit when you are unplugged is telling.