
Qualcomm Snapdragon X Elite Benchmarked Against Intel Core Ultra 7 155H

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,240 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Qualcomm Snapdragon X Elite is about to make landfall in the ultraportable notebook segment, powering a new wave of Arm-based Windows 11 devices capable of running even legacy Windows applications. The Snapdragon X Elite SoC in particular has been designed to rival the Apple M3 chip powering the 2024 MacBook Air and some of the "entry-level" variants of the 2023 MacBook Pros. These chips threaten the 15 W U-segment and even the 28 W P-segment x86-64 processors from Intel and AMD, such as the Core Ultra "Meteor Lake" and the Ryzen 8040 "Hawk Point." Erdi Özüağ, a prominent tech journalist from Türkiye, has access to a Qualcomm reference notebook powered by the Snapdragon X Elite X1E80100 28 W SoC. He compared its performance to an off-the-shelf notebook powered by a 28 W Intel Core Ultra 7 155H "Meteor Lake" processor.

There are three tests that highlight the performance of the key components of the SoCs—CPU, iGPU, and NPU. A Microsoft Visual Studio code-compile test sees the Snapdragon X Elite, with its 12-core Oryon CPU, finish in 37 seconds, compared to 54 seconds for the Core Ultra 7 155H with its 6P+8E+2LP CPU. In the 3DMark test, the Adreno 750 iGPU posts performance numbers identical to those of the Arc Graphics (Xe-LPG) in the 155H. Where the Snapdragon X Elite dominates the Intel chip is AI inferencing: the UL Procyon test sees the 45 TOPS NPU of the Snapdragon X Elite score 1,720 points against 476 points for the 10 TOPS AI Boost NPU of the Core Ultra. The Intel machine uses OpenVINO for the test, while the Snapdragon uses the Qualcomm SNPE SDK. Don't forget to check out the video review by Erdi Özüağ in the source link below.
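For a quick sense of scale, here are the ratios implied by those figures, as a back-of-the-envelope Python sketch (the per-TOPS numbers are rough, since the two NPUs run different software stacks):

```python
# Ratios derived from the figures quoted above.
compile_x_elite_s = 37    # Snapdragon X Elite, Visual Studio code-compile time (s)
compile_155h_s    = 54    # Core Ultra 7 155H, same test (s)

npu_x_elite = 1720        # UL Procyon AI score, 45 TOPS NPU (Qualcomm SNPE SDK)
npu_155h    = 476         # UL Procyon AI score, 10 TOPS AI Boost NPU (OpenVINO)

print(f"CPU compile speedup:  {compile_155h_s / compile_x_elite_s:.2f}x")  # ~1.46x
print(f"NPU score ratio:      {npu_x_elite / npu_155h:.2f}x")              # ~3.61x
print(f"Score/TOPS, X Elite:  {npu_x_elite / 45:.1f}")                     # ~38.2
print(f"Score/TOPS, 155H:     {npu_155h / 10:.1f}")                        # ~47.6
```

Normalized per TOPS, the Intel NPU actually scores slightly higher, so the Procyon gap largely reflects the raw 45-vs-10 TOPS difference (and the different SDKs) rather than per-unit NPU quality.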



View at TechPowerUp Main Site | Source
 

bug

Joined
May 22, 2015
Messages
13,772 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Obviously more benchmarks are needed, but not a bad showing so far.
Even if Qualcomm has a winning design on its hands, there's still the matter of securing enough fab capacity to produce significant numbers.
 
Joined
Apr 24, 2021
Messages
278 (0.21/day)
Arm beating x86. Apple did it, now Qualcomm?

Meteor Lake is a joke. Perhaps Lunar Lake will be better. AMD's Zen 5 looks to be much better as well.
 
Joined
Jun 21, 2019
Messages
44 (0.02/day)
When Apple showed off their M1, I wrote on Reddit that this was the beginning of the end of x86. I was downvoted to hell by r/hardware experts. To this day people refuse to understand that the efficiency gap between ARM and x86 cannot be closed by node improvements; it is too big, and it all comes down to architecture. If Microsoft jumps on the ARM wagon and the game studios follow, that will be the end of the x86 road. It has already started in the server market. I just can't understand why Intel hasn't realised this; they turned Apple away when Apple approached them about developing the CPU for the first iPhone. AMD and NVIDIA had more common sense and at least started developing their own ARM processors.
 
Joined
Aug 29, 2005
Messages
7,262 (1.03/day)
Location
Stuck somewhere in the 80's Jpop era....
System Name Lynni PS \ Lenowo TwinkPad L14 G2
Processor AMD Ryzen 7 7700 Raphael (Waiting on 9800X3D) \ i5-1135G7 Tiger Lake-U
Motherboard ASRock B650M PG Riptide Bios v. 3.10 AMD AGESA 1.2.0.2a \ Lenowo BDPLANAR Bios 1.68
Cooling Noctua NH-D15 Chromax.Black (Only middle fan) \ Lenowo C-267C-2
Memory G.Skill Flare X5 2x16GB DDR5 6000MHZ CL36-36-36-96 AMD EXPO \ Willk Elektronik 2x16GB 2666MHZ CL17
Video Card(s) Asus GeForce RTX™ 4070 Dual OC (Waiting on RX 8800 XT) | Intel® Iris® Xe Graphics
Storage Gigabyte M30 1TB|Sabrent Rocket 2TB| HDD: 10TB|1TB \ WD RED SN700 1TB
Display(s) KTC M27T20S 1440p@165Hz | LG 48CX OLED 4K HDR | Innolux 14" 1080p
Case Asus Prime AP201 White Mesh | Lenowo L14 G2 chassis
Audio Device(s) Steelseries Arctis Pro Wireless
Power Supply Be Quiet! Pure Power 12 M 750W Goldie | 65W
Mouse Logitech G305 Lightspeedy Wireless | Lenowo TouchPad & Logitech G305
Keyboard Ducky One 3 Daybreak Fullsize | L14 G2 UK Lumi
Software Win11 IoT Enterprise 24H2 UK | Win11 IoT Enterprise LTSC 24H2 UK / Arch (Fan)
Benchmark Scores 3DMARK: https://www.3dmark.com/3dm/89434432? GPU-Z: https://www.techpowerup.com/gpuz/details/v3zbr
As I've said, this chip is overpriced. Here, the Lenovo ThinkPad X13s G1 is about 2,275.00 USD for a base spec with 16 GB of memory and a 256 GB NVMe SSD; that's too expensive to make sense.

Even if it performs like an Apple M2 and gets better battery life than an AMD- or Intel-based laptop, this is just too much. It would have to be half the price to gain a foothold in the market.

I fail to see this being a good chip because of the price, sadly; at about 1,000 USD it would make much better sense.
 

bug

Joined
May 22, 2015
Messages
13,772 (3.96/day)
When Apple showed off their M1, I wrote on Reddit that this was the beginning of the end of x86. I was downvoted to hell by r/hardware experts. To this day people refuse to understand that the efficiency gap between ARM and x86 cannot be closed by node improvements; it is too big, and it all comes down to architecture. If Microsoft jumps on the ARM wagon and the game studios follow, that will be the end of the x86 road. It has already started in the server market. I just can't understand why Intel hasn't realised this; they turned Apple away when Apple approached them about developing the CPU for the first iPhone. AMD and NVIDIA had more common sense and at least started developing their own ARM processors.
It's not so clear-cut. Arm grows increasingly complex, while x86 has become more RISC-like over the years. What I think drags x86 down is its legacy compatibility. If someone figured out how to provide that via a software layer, the differences between x86 and Arm would be wiped out.
 
Joined
Jun 21, 2019
Messages
44 (0.02/day)
It's not so clear-cut. Arm grows increasingly complex, while x86 has become more RISC-like over the years. What I think drags x86 down is its legacy compatibility. If someone figured out how to provide that via a software layer, the differences between x86 and Arm would be wiped out.
What drags them down is CISC. It really doesn't matter if it's RISC internally; they will never get the benefits of a fixed-width instruction set, and all the joys that come with how caches and branching can be optimised thanks to it. x86 is dying; it will never be able to catch up with RISC in terms of efficiency (which directly translates to performance nowadays). CISC was the wrong horse to bet on. And they could have realised it 20 years ago, when compilers were already very sophisticated and promised much better optimisation than a sophisticated, specialised instruction set. It is not possible to fix the x86 architecture, even if you drop legacy instructions.
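For what it's worth, the fixed-width argument is easy to illustrate: with fixed 4-byte instructions, a decoder can compute every instruction boundary with plain arithmetic, while a variable-length ISA must walk the byte stream, because each boundary depends on decoding the instruction before it. Here is a toy Python sketch with invented encodings (not real ISA formats):

```python
# Toy illustration of locating instruction boundaries in fixed- vs
# variable-width encodings. Both encodings here are made up.

def fixed_width_boundaries(code: bytes, width: int = 4) -> list[int]:
    # Every boundary is known up front: pure arithmetic, trivially parallel.
    return list(range(0, len(code), width))

def variable_width_boundaries(code: bytes) -> list[int]:
    # Pretend the first byte encodes the instruction's length (1..15 bytes,
    # mirroring x86's 1-15 byte range). Each boundary depends on decoding
    # the previous instruction, so the scan is inherently sequential.
    boundaries, pc = [], 0
    while pc < len(code):
        boundaries.append(pc)
        length = (code[pc] % 15) + 1   # toy length field
        pc += length
    return boundaries

blob = bytes(range(64))
print(fixed_width_boundaries(blob)[:4])     # [0, 4, 8, 12], no decoding needed
print(variable_width_boundaries(blob)[:4])  # [0, 1, 3, 7], found only by walking
```

Real x86 front ends mitigate this with predecode hints and µop caches, at extra hardware cost; whether that cost is decisive is exactly what this thread is arguing about.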
 
Joined
Apr 24, 2021
Messages
278 (0.21/day)
To this day people refuse to understand that the efficiency gap between ARM and x86 cannot be closed by node improvements; it is too big, and it all comes down to architecture.
You can't say that for certain. Arrow Lake and Lunar Lake will be on 3 nm (allegedly), and Zen 5 will be on at least 4 nm. They will be more power efficient than their predecessors. As Intel and AMD move to more advanced nodes (as we know, Intel was stuck on 14 nm and then 10/7 nm for years), we have to see how power efficient (or not) the new architectures will be.
For example, Intel's upcoming Lion Cove + Skymont, and then Panther Cove + Darkmont: we have to wait to evaluate those architectures to see how power efficient (or not) they will be. They will be produced on advanced nodes. And as we know, AMD's 7800X3D is very power efficient for the gaming performance it delivers, relative to the competition.

So you can't write x86 off just yet.
 
Joined
Jun 21, 2019
Messages
44 (0.02/day)
You can't say that for certain. Arrow Lake and Lunar Lake will be on 3 nm (allegedly), and Zen 5 will be on at least 4 nm. They will be more power efficient than their predecessors. As Intel and AMD move to more advanced nodes (as we know, Intel was stuck on 14 nm and then 10/7 nm for years), we have to see how power efficient (or not) the new architectures will be.
For example, Intel's upcoming Lion Cove + Skymont, and then Panther Cove + Darkmont: we have to wait to evaluate those architectures to see how power efficient (or not) they will be. They will be produced on advanced nodes. And as we know, AMD's 7800X3D is very power efficient for the gaming performance it delivers, relative to the competition.

So you can't write x86 off just yet.

It is fairly easy to estimate that the efficiency gap is around 5-6 node shrinks to catch up with ARM (at least with Apple silicon, which has the best implementation of the ARM ISA so far). That's 5-6 generations, so it will never happen. Maybe it will be a few years before we can write off x86, but I wouldn't hold Intel stock either.
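The "5-6 node shrinks" figure is the poster's own estimate, but the compounding arithmetic behind such a claim is simple to sketch. Assuming, purely for illustration, a 15-20% perf/W gain per shrink:

```python
# Hypothetical compounding of per-node perf/W gains. The 15-20% range is
# an illustrative assumption, not a measured figure.
for gain in (0.15, 0.20):
    for shrinks in (5, 6):
        print(f"{shrinks} shrinks at +{gain:.0%}/node -> "
              f"{(1 + gain) ** shrinks:.2f}x perf/W")
# 5 shrinks at +15%/node -> 2.01x ... 6 shrinks at +20%/node -> 2.99x
```

On those assumed per-node gains, a 5-6 shrink deficit corresponds to a roughly 2-3x perf/W gap, a number readers can weigh against actual measurements.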
 

bug

Joined
May 22, 2015
Messages
13,772 (3.96/day)
What drags them down is CISC. It really doesn't matter if it's RISC internally; they will never get the benefits of a fixed-width instruction set, and all the joys that come with how caches and branching can be optimised thanks to it. x86 is dying; it will never be able to catch up with RISC in terms of efficiency (which directly translates to performance nowadays). CISC was the wrong horse to bet on. And they could have realised it 20 years ago, when compilers were already very sophisticated and promised much better optimisation than a sophisticated, specialised instruction set. It is not possible to fix the x86 architecture, even if you drop legacy instructions.
I hinted at this in my previous post. Ever since x86 became pipelined, decoding instructions into fixed-width micro-ops internally, it has mimicked RISC's fixed width rather well.
At the same time, Arm deals with 32-bit, 64-bit, Thumb, NEON, whatever, so it's going in the opposite direction.
 
Joined
Jun 21, 2019
Messages
44 (0.02/day)
I hinted at this in my previous post. Ever since x86 became pipelined, decoding instructions into fixed-width micro-ops internally, it has mimicked RISC's fixed width rather well.
At the same time, Arm deals with 32-bit, 64-bit, Thumb, NEON, whatever, so it's going in the opposite direction.

It doesn't matter that they are pipelined; the bottleneck is the CISC front end, and x86 is not and will not be able to avoid it. Intel thought they could be smarter than compilers by doing that work in hardware. You just can't optimise the hardware pipeline at runtime; it is what it is. You can do this with a compiler (and this is also why Rosetta works so well at translating x86 software to ARM). Intel made a stupid decision a very long time ago, and an even more stupid one when they turned Apple away when Apple came to them to develop the CPU for the iPhone.
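The "optimise with a compiler, not at runtime" point is essentially the case for ahead-of-time binary translation: pay the expensive decode-and-optimise cost once per block of guest code, then reuse the result on every execution. A conceptual Python sketch (not how Rosetta 2 is actually implemented):

```python
# Conceptual sketch of translation caching in a binary translator.
translation_cache: dict[int, str] = {}

def translate_block(addr: int) -> str:
    # Stand-in for expensive decode + optimise + re-encode work.
    return f"native code for guest block @{addr:#x}"

def run_block(addr: int) -> str:
    if addr not in translation_cache:            # pay translation cost once...
        translation_cache[addr] = translate_block(addr)
    return translation_cache[addr]               # ...then reuse on every call

for _ in range(1_000):
    run_block(0x1000)          # translated on the first call, reused 999 times
print(len(translation_cache))  # 1
```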
 
Joined
Oct 6, 2021
Messages
1,605 (1.40/day)
What drags them down is CISC. It really doesn't matter if it's RISC internally; they will never get the benefits of a fixed-width instruction set, and all the joys that come with how caches and branching can be optimised thanks to it. x86 is dying; it will never be able to catch up with RISC in terms of efficiency (which directly translates to performance nowadays). CISC was the wrong horse to bet on. And they could have realised it 20 years ago, when compilers were already very sophisticated and promised much better optimisation than a sophisticated, specialised instruction set. It is not possible to fix the x86 architecture, even if you drop legacy instructions.
I don't understand why there are so many ARM preachers out there. ARM is the one trying to gain performance, following in the footsteps of AMD and Intel from half a decade ago.

There's no reason to "fix" anything. x86 is better, more efficient, and it just works. It's the dominant ISA, and it's going to be here for a long time.
 
Joined
Oct 2, 2020
Messages
962 (0.63/day)
System Name ASUS TUF F15
Processor Intel Core i7-11800H
Motherboard ASUS FX506HC
Cooling Laptop built-in cooling lol
Memory 24 GB @ 3200
Video Card(s) Intel UHD & Nvidia RTX 3050 Mobile
Storage Adata XPG SX8200 Pro 512 GB
Display(s) Laptop built-in 144 Hz FHD screen
Audio Device(s) LOGITECH 2.1-channel
Power Supply ASUS 180W PSU
Mouse Logitech G604
Keyboard SteelSeries Apex 7 TKL
Software Windows 10 Enterprise 21H2 LTSC
"entry-level" variants of the 2023 MacBook Pros"
WTF:roll:

I'm waiting to one company would beat this Crapple finally, that's nonsense already.
 
Joined
Jun 21, 2019
Messages
44 (0.02/day)
I don't understand why there are so many ARM preachers out there. ARM is the one trying to gain performance, following in the footsteps of AMD and Intel from half a decade ago.

You are wrong for two reasons. First, there are way more x86 preachers out there (you are one of them). Second, ARM spent decades focused on the mobile market, where efficiency matters most. Today, as we approach the physical limits of process technology, x86 is hitting the heat wall, letting ARM shine with the much better efficiency its architecture offers. And today, efficiency becomes performance. Show me any x86 computer today that can be passively cooled and offers at least half the performance of the three-year-old M1.

I'm buying a 7800X3D for a gaming PC, but I know it is probably the last x86 PC I'll ever build. I'm just not delusional.
 
Joined
Apr 12, 2013
Messages
7,530 (1.77/day)
Obviously more benchmarks are needed, but not a bad showing so far.
Even if Qualcomm has a winning design on its hands, there's still the matter of securing enough fab capacity to produce significant numbers.
Hardly an issue for the world's biggest/baddest modem maker!
 
Joined
Oct 6, 2021
Messages
1,605 (1.40/day)
You are wrong for two reasons. First, there are way more x86 preachers out there (you are one of them). Second, ARM spent decades focused on the mobile market, where efficiency matters most. Today, as we approach the physical limits of process technology, x86 is hitting the heat wall, letting ARM shine with the much better efficiency its architecture offers. And today, efficiency becomes performance. Show me any x86 computer today that can be passively cooled and offers at least half the performance of the three-year-old M1.

I'm buying a 7800X3D for a gaming PC, but I know it is probably the last x86 PC I'll ever build. I'm just not delusional.
Do you mean the M1, manufactured on the same 5 nm process found in modern CPUs? Any recent AMD chip with a similar TDP would perform similarly. However, I find it impractical and dumb to run a chip that exceeds 30 W and reaches 100°C under high load with passive cooling. For basic tasks like browsing or using spreadsheets, any APU from the 7 nm era or newer would easily handle the workload while consuming 2-5 W; in that scenario, the laptop's fans don't even spin.

All chipmakers are facing limitations imposed by the laws of physics, including ARM. That's why recent ARM SoCs can reach around 20 W for a short period but struggle to sustain performance, often experiencing thermal throttling and instability. The push to expand ARM into other markets stems from the fact that they've exhausted their options in mobile and lack an x86 license.

Delusional suits you very well. :)
 
Joined
Apr 12, 2013
Messages
7,530 (1.77/day)
letting ARM shine with the much better efficiency its architecture offers.
Say what? Just clock-limit any AMD/Intel processor and they'll easily be way more efficient. Now let's see Apple or any other ARM chip do (unlimited) turbos and see their efficiency then :slap:
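There is real physics behind that quip: dynamic power scales roughly as P ≈ C·V²·f, and voltage must rise with frequency, so the last few hundred MHz cost disproportionate power. A rough Python sketch using a made-up voltage/frequency curve:

```python
# Rough DVFS illustration of P ~ C * V^2 * f. The V-f curve below is a
# hypothetical stand-in, not data from any real chip.
def rel_power(f_ghz: float) -> float:
    v = 0.7 + 0.15 * f_ghz        # assumed linear voltage/frequency curve
    return v ** 2 * f_ghz         # capacitance C folded into the unit scale

for f in (3.0, 4.0, 5.0):
    print(f"{f:.1f} GHz -> relative perf/W {f / rel_power(f):.2f}")
# 3.0 GHz -> 0.76, 4.0 GHz -> 0.59, 5.0 GHz -> 0.48: efficiency drops as clocks rise
```

This is why a clock-limited x86 chip can look very efficient, and why unlimited turbo makes any chip, Arm or x86, look thirsty.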
 

bug

Joined
May 22, 2015
Messages
13,772 (3.96/day)
I don't understand why there are so many ARM preachers out there. ARM is the one trying to gain performance, following in the footsteps of AMD and Intel from half a decade ago.

There's no reason to "fix" anything. x86 is better, more efficient, and it just works. It's the dominant ISA, and it's going to be here for a long time.
This is rooted in academia, going all the way back to the 90s, when Mr. Tanenbaum described x86 as a dinosaur that needed to make room for more nimble things.

Three decades have passed since, and between that and people not realizing the real world is not just a detail you can abstract away, some still think Arm/RISC "must" happen.
 

Fourstaff

Moderator
Staff member
Joined
Nov 29, 2009
Messages
10,077 (1.84/day)
Location
Home
System Name Orange! // ItchyHands
Processor 3570K // 10400F
Motherboard ASRock z77 Extreme4 // TUF Gaming B460M-Plus
Cooling Stock // Stock
Memory 2x4Gb 1600Mhz CL9 Corsair XMS3 // 2x8Gb 3200 Mhz XPG D41
Video Card(s) Sapphire Nitro+ RX 570 // Asus TUF RTX 2070
Storage Samsung 840 250Gb // SX8200 480GB
Display(s) LG 22EA53VQ // Philips 275M QHD
Case NZXT Phantom 410 Black/Orange // Tecware Forge M
Power Supply Corsair CXM500w // CM MWE 600w
I am not sure why people are still so dismissive of ARM. x86 became niche before COVID: there are far more devices on ARM than on x86, and we collectively spend more time on ARM devices than on x86 devices. Phones, TVs, and routers all use ARM instead of x86. The only holdout for x86 is legacy software, and that is slowly moving to the cloud (becoming architecture-agnostic, as long as we can access the web).
 
Joined
Jun 21, 2019
Messages
44 (0.02/day)
It's not being dismissive, but the fallacy that ARM is "inherently" more efficient holds no water! It depends on the node, the application, and even the chip size, believe it or not.

And this is why we see ARM in every mobile application, and zero x86 in any application where battery life is critical? Is x86 some kind of religion or what? I know most people (including me) own x86 hardware, but I really don't get why people feel they have to defend x86 like it's their independence.
 
Joined
Oct 31, 2022
Messages
199 (0.26/day)
When Apple showed off their M1, I wrote on Reddit that this was the beginning of the end of x86. I was downvoted to hell by r/hardware experts. To this day people refuse to understand that the efficiency gap between ARM and x86 cannot be closed by node improvements; it is too big, and it all comes down to architecture. If Microsoft jumps on the ARM wagon and the game studios follow, that will be the end of the x86 road. It has already started in the server market. I just can't understand why Intel hasn't realised this; they turned Apple away when Apple approached them about developing the CPU for the first iPhone. AMD and NVIDIA had more common sense and at least started developing their own ARM processors.
It would be nice.
I am a gamer, nothing more really, so unless EVERY game is converted or emulated very well, I don't see any reason to switch.
I mean desktop CPUs, of course...
 

bug

Joined
May 22, 2015
Messages
13,772 (3.96/day)
I am not sure why people are still so dismissive of ARM. x86 became niche before COVID: there are far more devices on ARM than on x86, and we collectively spend more time on ARM devices than on x86 devices. Phones, TVs, and routers all use ARM instead of x86. The only holdout for x86 is legacy software, and that is slowly moving to the cloud (becoming architecture-agnostic, as long as we can access the web).
Saying Arm is not a silver bullet means we're being dismissive?

There are markets where Arm does better. And there are markets where x86 has the upper hand. It's as simple as that.

Plus, there's a built-in fallacy in your statement: this isn't about Arm vs. x86, it's about implementations of both. x86 can be anything from NetBurst to Zen 4; Arm can also be anything from a cheap Unisoc to Apple's M3...
 