Tuesday, March 12th 2024

Qualcomm Snapdragon X Elite Benchmarked Against Intel Core Ultra 7 155H

Qualcomm Snapdragon X Elite is about to make landfall in the ultraportable notebook segment, powering a new wave of Arm-based Windows 11 devices capable of running even legacy Windows applications. The Snapdragon X Elite SoC in particular has been designed to rival the Apple M3 chip powering the 2024 MacBook Air and some of the "entry-level" variants of the 2023 MacBook Pros. These chips threaten the 15 W U-segment and even the 28 W P-segment of x86-64 processors from Intel and AMD, such as the Core Ultra "Meteor Lake" and the Ryzen 8040 "Hawk Point." Erdi Özüağ, a prominent tech journalist from Türkiye, has access to a Qualcomm reference notebook powered by the Snapdragon X Elite X1E80100 28 W SoC. He compared its performance to an off-the-shelf notebook powered by a 28 W Intel Core Ultra 7 155H "Meteor Lake" processor.

There are three tests that highlight the performance of the key components of the SoCs: CPU, iGPU, and NPU. A Microsoft Visual Studio code-compile test sees the Snapdragon X Elite, with its 12-core Oryon CPU, finish in 37 seconds, compared to 54 seconds for the Core Ultra 7 155H with its 6P+8E+2LP CPU. In the 3DMark test, the Adreno 750 iGPU posts performance numbers identical to those of the 155H's Arc Graphics Xe-LPG. Where the Snapdragon X Elite dominates the Intel chip is AI inferencing: the UL Procyon test sees the 45 TOPS NPU of the Snapdragon X Elite score 1,720 points against 476 points for the 10 TOPS AI Boost NPU of the Core Ultra. The Intel machine uses OpenVINO for the test, while the Snapdragon uses the Qualcomm SNPE SDK. Don't forget to check out the video review by Erdi Özüağ in the source link below.
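For quick context, here is a back-of-the-envelope comparison of the figures quoted above (a sketch using only the numbers in this article, not independent measurements):

```python
# Rough ratios computed from the figures quoted above; not independent measurements.
compile_s = {"Snapdragon X Elite": 37, "Core Ultra 7 155H": 54}     # seconds, lower is better
procyon   = {"Snapdragon X Elite": 1720, "Core Ultra 7 155H": 476}  # points, higher is better
npu_tops  = {"Snapdragon X Elite": 45, "Core Ultra 7 155H": 10}     # vendor-claimed TOPS

# Code compile: ratio of times, since lower is better.
print(f"Compile speedup: {compile_s['Core Ultra 7 155H'] / compile_s['Snapdragon X Elite']:.2f}x")   # ~1.46x
# Procyon AI: ratio of scores.
print(f"Procyon AI advantage: {procyon['Snapdragon X Elite'] / procyon['Core Ultra 7 155H']:.2f}x")  # ~3.61x
# Points per claimed TOPS, to see how much of the gap raw TOPS explains.
for name in procyon:
    print(f"{name}: {procyon[name] / npu_tops[name]:.1f} points per claimed TOPS")
```

Interestingly, per claimed TOPS the Core Ultra actually scores slightly higher (about 48 points per TOPS versus about 38), so the 3.6x Procyon gap is more than accounted for by the 4.5x difference in raw NPU throughput.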
Source: Erdi Özüağ (YouTube)

55 Comments on Qualcomm Snapdragon X Elite Benchmarked Against Intel Core Ultra 7 155H

#26
R0H1T
atomek: This is why we see ARM in every mobile application, and zero x86 in any application where battery life is critical?
And this is why we also see "portable" consoles running on AMD chips; I bet you forgot those have batteries as well?
#27
bitsandboots
It's amusing to see people in the comments accusing others of having a strange obsession with x86 / against ARM, while replying to every single comment that does not share their worldview.

Perhaps people don't care what ISA powers their workload, as long as the computer does the job best?
For example, Apple's M chips are useless to me because I can't put them in a computer of my own, with a GPU of my choosing and an OS of my choosing.
And Raspberry Pis are cute, but they're not going to be powering my AI or gaming workloads.

And x86 and ARM are nice and all, but they haven't managed to replace the s390 mainframes that have been running bank and government workloads since the 1970s.
Because... the ISA only matters for completely portable or completely new workloads.
Consoles have used x86, ARM, MIPS, whatever. It matters less than the GPU.
The same goes for my gaming and AI scenarios.

Who cares what powers phones anymore? They're toys in comparison, and they stagnated years ago, as the plateau in sales shows.

Bring on ARM, but not for the sake of it. Just give me something good for my use cases and I'll buy it. But x86 is that right now.
#28
bitsandboots
On the topic of the Snapdragon, I'm really excited for what I hope will be an M-class chip that can be used in useful situations.
Pretty much everything Apple does with its walled garden and macOS's bad UX is holding the M chips back from greatness.
There have been swings and misses at getting ARM into Windows/Linux laptops in the past, but everything I've seen about the X Elite indicates it may be the first ARM laptop chip that is both not a toy and not stuck in an Apple device.

If that one day scales up to higher-end computers that need discrete graphics or plenty of PCIe add-in cards, it'd be interesting, at least to see how software would adapt. Apple did really well with their transition, but I think that's something Apple is uniquely positioned to do. Somehow, on Windows, I imagine we'll all be forced to update to the latest and most dystopian edition of Windows to take advantage of future hypothetical ARM desktops.
#29
EatingDirt
atomek: You are wrong for two reasons. First, there are way more x86 preachers out there (you are one of them). And second, ARM spent decades focused on the mobile market, where efficiency was most important. Today, as we reach the physical limits of process technology, x86 is approaching the heat wall, which lets ARM shine, since it offers far better efficiency thanks to its architecture. And today, efficiency becomes performance. Show me any x86 computer today that can be passively cooled and offers at least half the performance of the 3-year-old M1.

I'm buying a 7800X3D for a gaming PC, but I know it is probably the last x86 PC I will ever build. I'm just not delusional.
You really think the end of x86 is in, what, 10 years (judging by your final comment)? It's possible, I suppose, but I think it's unlikely. I wouldn't be surprised if we get some desktop ARM processors in the next 10 years, and they will be fine for the general consumer, but most 'general consumers' don't even own a PC anymore and get by with phones/tablets/laptops.

The switch from x86 to ARM in the power-user and business space would require vast numbers of software developers to redevelop their software, or force many businesses to change software altogether. It's a colossal hurdle, which is why we still don't have a vast array of ARM desktop processors, despite their years of superior efficiency.
#30
Fourstaff
bug: Saying Arm is not a silver bullet means we're being dismissive?

There are markets where Arm does better. And there are markets where x86 has the upper hand. It's as simple as that.

Plus, there's a built-in fallacy in your statement: this isn't about Arm vs. x86, it's about implementations of both. x86 can be anything from NetBurst to Zen 4. Arm can also be anything from a cheap Unisoc to Apple's M3...
You are correct on all counts. However, as noted elsewhere in this thread, ARM and x86 are becoming more similar than different, and at some point the implementations of both will converge closely enough that most people will simply go for the more efficient option (in price, power consumption, or both).
#31
Lucas_
The only thing I'm happy about here is that Microsoft might allocate more resources to their crappy implementation of Windows on Arm, and maybe with time more applications will work in the Arm environment...
#32
ARF
R0H1T: And this is why we also see "portable" consoles running on AMD chips; I bet you forgot those have batteries as well?
1. Someone "smart" has decided to use this particular market niche and *try* to compete with other mobile devices 100% Android and ARM?
2. AMD has nothing else to offer?
3. Unification in a single Windows / x86-64 / consoles ecosystem?

Speaking of x86-64 and its inevitable EOL:
1. The foundries can't shrink transistors infinitely, so the end is near.
2. Recent Ryzens' power consumption goes through the roof: the Ryzen 9 7950X uses 56% more energy than its predecessor, the Ryzen 9 5950X.
3. Ryzen U-series chips were 15-watt parts in the past; today they have become 25-30-watt parts, so they won't be used in thin notebooks anymore.
#33
Denver
ARF1. Someone "smart" has decided to use this particular market niche and *try* to compete with other mobile devices 100% Android and ARM?
2. AMD has nothing else to offer?
3. Unification in a single Windows / x86-64 / consoles ecosystem?

Speaking of x86-64 and its inevitable EOL:
1. The foundries can't make infinitely smaller lithography transistors, so the end is near.
2. Recent Ryzen's power consumption goes through the roof - 56% higher used energy by Ryzen 9 7950X against its father the Ryzen 9 5950X.
3. Ryzen U series were 15-watt chips in the past, today these chips become 25-30-watt - they won't been used in thin notebooks anymore.
Huh? All chips will inevitably hit the manufacturing-process barrier. Notably, AMD's x86 designs demonstrate superior performance per transistor compared to ARM designs. Over the years, ARM has been emulating the strategies AMD and Intel used half a decade ago, gradually converging in various aspects and accruing complexity. Consequently, the inherent advantage of being RISC has disappeared.

2. In the PC realm, there are ample robust cooling solutions available. However, amid intense competition, opting for efficiency by constraining TDP means leaving performance on the table, and the competitor (Intel) will raise clocks/TDP several times through refreshes to look better in benchmarks.

3. Huh?? There has never been a true 15 W processor; all manufacturers, ARM licensees included, provide misleading figures that typically reflect TDP at base clock. In reality, most, if not all, efficiency-focused processors approach nearly 30 W under heavy loads, including those developed by Apple.
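That gap between rated TDP and actual package draw is easy to check for yourself on Linux, where CPUs expose a cumulative package-energy counter through the powercap/RAPL interface. A minimal sketch (assuming the standard /sys/class/powercap/intel-rapl:0 path, which modern Intel and recent AMD systems expose; reading it may require root, and paths can differ per machine):

```python
# Minimal sketch: sample the RAPL package-energy counter to estimate real CPU
# package power, independent of the advertised TDP. Assumes the standard Linux
# powercap interface at /sys/class/powercap/intel-rapl:0.
import time

ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"            # cumulative energy, microjoules
MAX_RANGE = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"

def read_uj(path):
    with open(path) as f:
        return int(f.read())

def package_watts(interval_s=1.0):
    start = read_uj(ENERGY)
    time.sleep(interval_s)
    end = read_uj(ENERGY)
    if end < start:                   # counter wrapped around its max range
        end += read_uj(MAX_RANGE)
    return (end - start) / 1e6 / interval_s   # uJ -> J, then J/s = W

if __name__ == "__main__":
    for _ in range(5):
        print(f"package power: {package_watts():.1f} W")
```

Run it while a heavy all-core load is active and you can see directly how far above the rated TDP a "15 W" part actually sits.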
#34
R0H1T
ARF: The foundries can't shrink transistors infinitely, so the end is near.
Cuts both ways, doesn't it? Except Apple/QC will run into that wall quicker, as both AMD and Intel are at least a half-node to a full node behind them.
ARF: Recent Ryzens' power consumption goes through the roof: the Ryzen 9 7950X uses 56% more energy than its predecessor, the Ryzen 9 5950X.
And if you remember the million other reviews out there, you'd probably also remember that lowering the TDP (clocks?) boosts its efficiency massively.
ARF: Ryzen U-series chips were 15-watt parts in the past; today they have become 25-30-watt parts, so they won't be used in thin notebooks anymore.
They still have 15 W chips, it's just that the U series tops out at 8 cores, and you can't really run them at 15 W constantly. Though in the future we may see a Zen 6c part or something with 8 cores at 15 W in a console or similar form factor.
#35
Darmok N Jalad
R0H1T: Cuts both ways, doesn't it? Except Apple/QC will run into that wall quicker, as both AMD and Intel are at least a half-node to a full node behind them.
Hasn't this always been the case in this industry, though? Apple aims for and buys up the new fab space, and by the time the fab can serve other large customers, Apple is already planning for and buying up the next node's capacity. And by not pushing way past the efficiency curve, Apple can afford the risks and cost of the newer node. Intel especially can't, because new nodes generally don't like to be pushed hard, and how do you make a new node that can't be pushed hard outperform the current node that is being pushed really hard? There's a balance in there, and deep pockets are needed too.
#36
Minus Infinity
atomek: When Apple showed off their M1, I wrote on Reddit that this was the beginning of the end of x86. I was downvoted to hell by r/hardware experts. To this day people refuse to understand that the efficiency gap between ARM and x86 cannot be closed by node improvements; it is too big, and it all comes down to architecture. If Microsoft jumps on the ARM wagon and the game studios follow, that will be the end of the x86 road. It has already started in the server market. I just can't understand why Intel hasn't realised this; they turned Apple away when Apple approached them about a joint venture to develop the CPU for the first iPhone. AMD and NVIDIA had more common sense and at least started developing their own ARM processors.
Well, you forgot to tell AMD and Intel, as their roadmaps out to 2028 are locked in, and x86 it is. Whether or not they succeed, Intel's new tile design has a lot to do with greatly reducing power. Lunar Lake will be a big test, as leaks are claiming 50% more multicore performance at half the power of Meteor Lake. Lunar Lake will launch this year too (if we can trust Intel).
#37
R0H1T
Speaking of chiplets, or tiles, Apple or QC will run into the "scaling" problem as well. Apple already produces the biggest consumer-facing ARM chips out there, and they're only getting bigger. AMD solved this first, and Intel is on the same path, although their execution is questionable at the moment. Eventually, with skyrocketing fab costs and yields being an issue with bigger chips, Apple/QC will have to move to chiplets/tiles or whatever solution they come up with. That will naturally reduce their efficiency. Let's see where they are 2-4 years from now at the top end.
#38
Noyand
R0H1T: Speaking of chiplets, or tiles, Apple or QC will run into the "scaling" problem as well. Apple already produces the biggest consumer-facing ARM chips out there, and they're only getting bigger. AMD solved this first, and Intel is on the same path, although their execution is questionable at the moment. Eventually, with skyrocketing fab costs and yields being an issue with bigger chips, Apple/QC will have to move to chiplets/tiles or whatever solution they come up with. That will naturally reduce their efficiency. Let's see where they are 2-4 years from now at the top end.
They already did. Apple's biggest chip (the M2 Ultra) uses a silicon interposer, TSMC's InFO-LSI. It's not monolithic; it's two M2 Max dies fused together. It's arguably more advanced than what Intel and AMD are doing: the bandwidth is very high, and the power efficiency is better. If you wonder why AMD doesn't use it if it's better, it might just come down to the fact that Apple can afford it, because the chips are sold with computers that carry an insane margin. In that regard they don't play by the same rules as Intel/AMD, who have to keep costs lower for their clients.

It's the same reason the iPhone uses chips so much bigger than Qualcomm's: their vertical integration allows them to do it. QC could never sell a chip that big to their clients (yes, the iPhone chip is almost as big as an M1).

Apple Silicon: The M2 Ultra Impresses at WWDC 2023 – Display Daily
A brief explanation for our readers unfamiliar with the terms, SoIC (System on Integrated Chip) is TSMC’s bump-less chip stacking and hybrid bonding integration technology that allows for stacking multiple chip dies together, enabling extremely high-bandwidth and low power bonding between the silicon dies. Currently, this technology has no equal in the industry.
#39
R0H1T
Two massive chips glued together isn't exactly the same thing. AMD and Intel disaggregated pretty much all major components of their chips and then made what you see today.

#40
Noyand
R0H1T: Two massive chips glued together isn't exactly the same thing. AMD and Intel disaggregated pretty much all major components of their chips and then made what you see today.

I see what you mean, but TSMC's tech can also work for heterogeneous chiplets if needed. And knowing Apple, they would do something closer to what Intel is doing and only use bleeding-edge nodes. Apple still enjoys far higher margins than other chip companies, so they can afford to hold on to any advantage they get until the very end. Their efficiency (especially in laptops) is the one thing that makes people tolerate the robbery they're committing on storage :D

Intel does have a similar packaging technology (EMIB), but it's unclear when, or if, it will be used in consumer products.
#41
Toro
Denver: Do you mean the M1, manufactured using the 5 nm process found in modern CPUs? Any recent AMD chip with a similar TDP would perform similarly. However, I find it impractical and dumb to run a chip that exceeds 30 W and reaches 100°C (high load) under passive cooling. For basic tasks like browsing or using spreadsheets, any APU from the 7 nm era or newer would easily handle the workload while consuming 2-5 W, a scenario in which the laptop's fan doesn't spin at all.

All chipmakers are facing limitations imposed by the laws of physics, including ARM. That's why recent ARM SoCs can reach around 20 W for a short period but struggle to sustain performance, often experiencing thermal throttling and instability. The push to expand ARM into other markets stems from the fact that they've exhausted their options in mobile and lack an x86 license.

Delusional suits you very well. :)
Actually, there are examples of overlapping x64 and arm64 sharing the same or quite similar nodes and timeframe of development.

For example, AMD's Phoenix is "TSMC 4 nm" and the M2 Pro is "TSMC 5 nm." In this example, the M2 Pro has a slight single-thread performance advantage. A key difference: the M2 reaches this at 3.5 GHz, while the Zen 4 core boosts to a much more power-hungry 5.2 GHz to reach the same level of performance. This is despite Phoenix having an advantage in process and a much higher power envelope. Also, Phoenix sits in a considerably lower GPU and memory performance tier, presumably since so many more resources are diverted to the CPU cores.
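A rough way to see why that clock gap matters: dynamic CPU power scales approximately with frequency times voltage squared, and voltage must rise with frequency, so power grows nearly cubically with clock. A sketch under that textbook model, with hypothetical voltages (illustrative round numbers, not measured Zen 4 or M2 values):

```python
# Illustrative only: dynamic power ~ C * V^2 * f, with V rising alongside f.
# Voltages below are hypothetical, not measured Zen 4 / M2 values.

def rel_dynamic_power(f_ghz, v_volts, f0_ghz, v0_volts):
    """Dynamic power relative to a (f0, v0) baseline."""
    return (f_ghz / f0_ghz) * (v_volts / v0_volts) ** 2

# Baseline: 3.5 GHz at a hypothetical 0.85 V; boost: 5.2 GHz at a hypothetical 1.25 V.
ratio = rel_dynamic_power(5.2, 1.25, 3.5, 0.85)
print(f"~{ratio:.1f}x the dynamic power for ~{5.2/3.5:.2f}x the clock")  # ~3.2x power for ~1.49x clock
```

Under those assumptions, the chip holding 3.5 GHz delivers roughly two-thirds of the clock for around a third of the dynamic power, which is the shape of the efficiency argument being made here.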

Even if we assume a 30% IPC boost for Zen 5, they will close the single-thread performance gap with the M3 or Oryon by boosting to 6 GHz, but then they will be miles apart on efficiency.

My take: Intel has historically done well because they traditionally had a 1-2 year process advantage. Now that this advantage is effectively gone, since even Intel will now be manufacturing ARM chips, it all boils down to architecture. Look back 30 years across a diverse range of RISC vs. CISC products, and the general trend is that RISC can do more with less when normalized for process.

My second take: Microsoft has already decided to drop support for x64 after Windows 12; they just haven't told anyone yet. They can't afford to have developers support two ISAs, and any legacy code that absolutely needs it will be (very begrudgingly) supported with virtualization. I say this because they wouldn't be ramping up developer support for arm64 if this weren't their decision.
#42
Denver
Toro: Actually, there are examples of overlapping x64 and arm64 sharing the same or quite similar nodes and timeframe of development.

For example, AMD's Phoenix is "TSMC 4 nm" and the M2 Pro is "TSMC 5 nm." In this example, the M2 Pro has a slight single-thread performance advantage. A key difference: the M2 reaches this at 3.5 GHz, while the Zen 4 core boosts to a much more power-hungry 5.2 GHz to reach the same level of performance. This is despite Phoenix having an advantage in process and a much higher power envelope. Also, Phoenix sits in a considerably lower GPU and memory performance tier, presumably since so many more resources are diverted to the CPU cores.

Even if we assume a 30% IPC boost for Zen 5, they will close the single-thread performance gap with the M3 or Oryon by boosting to 6 GHz, but then they will be miles apart on efficiency.

My take: Intel has historically done well because they traditionally had a 1-2 year process advantage. Now that this advantage is effectively gone, since even Intel will now be manufacturing ARM chips, it all boils down to architecture. Look back 30 years across a diverse range of RISC vs. CISC products, and the general trend is that RISC can do more with less when normalized for process.

My second take: Microsoft has already decided to drop support for x64 after Windows 12; they just haven't told anyone yet. They can't afford to have developers support two ISAs, and any legacy code that absolutely needs it will be (very begrudgingly) supported with virtualization. I say this because they wouldn't be ramping up developer support for arm64 if this weren't their decision.
Nope. N5P (which the M2 is based on) offers 10% lower power consumption than 5 nm/4 nm, with the latter's advantage being 6% better density.

What? The M2 Pro consumes up to 100 W; it's insane to think that it's super efficient compared to x86 APUs. Outside of synthetic software, ASIC-accelerated workloads, and Apple's finely tuned ecosystem, it's horrible.
#43
Toro
Denver: Nope. N5P (which the M2 is based on) offers 10% lower power consumption than 5 nm/4 nm, with the latter's advantage being 6% better density.

What? The M2 Pro consumes up to 100 W; it's insane to think that it's super efficient compared to x86 APUs. Outside of synthetic software, ASIC-accelerated workloads, and Apple's finely tuned ecosystem, it's horrible.
The discrepancy in power consumption is far greater than 10%. The original premise still holds despite the node differences you point out: given the latest architecture and a similar process, the ARM architecture has far lower power consumption while still matching single-thread performance.

The M2 Pro is 30 W; Phoenix (mobile) is 35-54 W. Not sure where 100 W is coming from.

The last performance holdout for x86 has been single-thread performance. With the introduction of the M3 and Oryon, that is no longer the case. Consider, for example, the M3 Max: there are very few real-world applications OR benchmarks where x86 will prevail, even at 5x the power consumption.

OK, maybe gaming, you got me there, but then the M1 Max is sort of at the level of an RTX 4060 Ti, so I think that covers a lot of ground, especially considering it's a portable.

What's not to like about accelerators? They seem like a great way to improve productivity and extend battery life, and Apple and Qualcomm silicon seem to have a lot more going for them in their first gen. Let's see... Intel, on their 14th gen, is just introducing a sub-par NPU and is still saddled with a sub-par media processor. Their iGPU is greatly improved, so kudos there.
#44
Denver
Toro: The discrepancy in power consumption is far greater than 10%. The original premise still holds despite the node differences you point out: given the latest architecture and a similar process, the ARM architecture has far lower power consumption while still matching single-thread performance.

The M2 Pro is 30 W; Phoenix (mobile) is 35-54 W. Not sure where 100 W is coming from.

The last performance holdout for x86 has been single-thread performance. With the introduction of the M3 and Oryon, that is no longer the case. Consider, for example, the M3 Max: there are very few real-world applications OR benchmarks where x86 will prevail, even at 5x the power consumption.

OK, maybe gaming, you got me there, but then the M1 Max is sort of at the level of an RTX 4060 Ti, so I think that covers a lot of ground, especially considering it's a portable.

What's not to like about accelerators? They seem like a great way to improve productivity and extend battery life, and Apple and Qualcomm silicon seem to have a lot more going for them in their first gen. Let's see... Intel, on their 14th gen, is just introducing a sub-par NPU and is still saddled with a sub-par media processor. Their iGPU is greatly improved, so kudos there.
Is that from a real power consumption test? Or does yours come from marketing? Your data only exists in the magical world of Apple.

M2: up to 55 W
M2 Pro: up to 100 W+

www.notebookcheck.net/Apple-MacBook-Pro-14-2023-review-The-M2-Pro-is-slowed-down-in-the-small-MacBook-Pro.687345.0.html
#45
Toro
Denver: Is that from a real power consumption test? Or does yours come from marketing? Your data only exists in the magical world of Apple.

M2: up to 55 W
M2 Pro: up to 100 W+

www.notebookcheck.net/Apple-MacBook-Pro-14-2023-review-The-M2-Pro-is-slowed-down-in-the-small-MacBook-Pro.687345.0.html
I'm speaking of power to the SoC, not through the cord. The latter can vary quite a bit depending on the display, attached USB accessories, and battery state. Also, that is not my experience: I have run many types of benchmarks on the M3 Max, and the maximum power input was 82 watts, which corresponds well with the chip's spec TDP of 78 watts. This was measured at 100% battery to avoid errors from charging.

The only way to get an M2 Pro to pull more than 100 W at the cord is to run a benchmark while the battery is charging and/or with heavy external USB loads.
#46
Noyand
Denver: Is that from a real power consumption test? Or does yours come from marketing? Your data only exists in the magical world of Apple.

M2: up to 55 W
M2 Pro: up to 100 W+

www.notebookcheck.net/Apple-MacBook-Pro-14-2023-review-The-M2-Pro-is-slowed-down-in-the-small-MacBook-Pro.687345.0.html
That was a brief 100 W peak for a full system load* with Cinebench and 3DMark running at the same time, not a pure CPU load. The competition doesn't have a GPU in their SoCs of the same class as the M2 Pro's 19-core GPU. The power consumption test also seems to include the screen and peripherals. Keep in mind that this is a laptop review, not an isolated review of the chip.
#47
Toro
Noyand: That was a brief 100 W peak for a full system load* with Cinebench and 3DMark running at the same time, not a pure CPU load. The competition doesn't have a GPU in their SoCs of the same class as the M2 Pro's 19-core GPU. The power consumption test also seems to include the screen and peripherals. Keep in mind that this is a laptop review, not an isolated review of the chip.

You are attempting to confound the results when it is really quite simple. Most of the industry compares the design power of the chip because it makes comparisons easier; Apple is no different from others in this regard. Is it OK to use power-cord draw? Yes, but there are a lot more asterisks, and it's usually hard to draw conclusions.

In the above comparison, they ran different types of GPU and CPU benchmarks simultaneously, which doesn't tell you much. The CPU and GPU share power, so it would be impossible to get a repeatable data run. A single benchmark that encompasses both CPU and GPU, such as a 3D game, would be much better.
#48
Noyand
Toro: You are attempting to confound the results when it is really quite simple. Most of the industry compares the design power of the chip because it makes comparisons easier; Apple is no different from others in this regard. Is it OK to use power-cord draw? Yes, but there are a lot more asterisks, and it's usually hard to draw conclusions.

In the above comparison, they ran different types of GPU and CPU benchmarks simultaneously, which doesn't tell you much. The CPU and GPU share power, so it would be impossible to get a repeatable data run. A single benchmark that encompasses both CPU and GPU, such as a 3D game, would be much better.
Yeah, I was just disagreeing with Denver's claim that the M2 Pro is a 100 W chip; he probably skimmed the review and didn't realise the context in which those 100 W were measured. As I said, this isn't a chip review but a laptop review, and power consumption in that context covers the whole laptop. By his reasoning, the R9 7940HS uses more power despite having a far slower iGPU. Some aspects of his criticism are also weird, like how he extrapolates from the purposely thermally gimped MacBook Air to claim that ARM in general has sustained-cooling issues, when that's something Apple did on purpose to push people towards the higher-end SKUs that have an active fan. There are tons of creative professionals out there making a living on adequately cooled Apple Silicon devices.

The CPU part of the M2 Pro is more around the 27 W mark, according to Notebookcheck. I don't understand what's happening on this forum lately: there are more players in the mainstream CPU market than there have been for decades, yet people have a huge aversion to the newcomers and would rather keep the status quo.
#49
Denver
Noyand: Yeah, I was just disagreeing with Denver's claim that the M2 Pro is a 100 W chip; he probably skimmed the review and didn't realise the context in which those 100 W were measured. As I said, this isn't a chip review but a laptop review, and power consumption in that context covers the whole laptop. By his reasoning, the R9 7940HS uses more power despite having a far slower iGPU. Some aspects of his criticism are also weird, like how he extrapolates from the purposely thermally gimped MacBook Air to claim that ARM in general has sustained-cooling issues, when that's something Apple did on purpose to push people towards the higher-end SKUs that have an active fan. There are tons of creative professionals out there making a living on adequately cooled Apple Silicon devices.

The CPU part of the M2 Pro is more around the 27 W mark, according to Notebookcheck. I don't understand what's happening on this forum lately: there are more players in the mainstream CPU market than there have been for decades, yet people have a huge aversion to the newcomers and would rather keep the status quo.
The TDP of an SoC cannot be attributed solely to the CPU. The boost and TDP configurations of x86 CPUs in laptops differ based on the implementation of each model/brand. For instance, the same chip can be configured for power consumption in the 20-30 watt range in handhelds using the Z1/7840U, while laptops can push that boundary significantly, reaching levels close to 100 watts in PL2.
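For context on those knobs: Intel-style power limiting lets the chip draw up to a boost limit (PL2) for a firmware-defined window before settling to the sustained limit (PL1). A toy model, with illustrative numbers rather than any specific laptop's firmware values (real limiters track an exponentially weighted moving average rather than a hard cutoff):

```python
# Toy model of Intel-style PL1/PL2 power limiting: the chip may draw up to PL2
# while its boost budget lasts, then settles to PL1. Numbers are illustrative;
# real firmware uses an exponentially weighted moving average, not a hard cutoff.
PL1, PL2, TAU = 28.0, 64.0, 28.0   # sustained watts, boost watts, seconds of boost budget

def allowed_power(t_seconds):
    """Power the limiter permits t seconds into a sustained all-core load."""
    return PL2 if t_seconds < TAU else PL1

for t in (0, 10, 27, 28, 60):
    print(f"t={t:>3}s -> {allowed_power(t):.0f} W")
```

This is why the same silicon can look like a "28 W chip" in one chassis and a near-100 W chip in another: PL1, PL2, and the boost window are all OEM-configurable.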

Apple's chips leverage a 256-bit memory bus, providing substantially greater bandwidth than today's x86 chips. It's like comparing apples to oranges. So let's shift the comparison to the CPU itself and choose a benchmark that reflects real-world scenarios.


Some people mistakenly believe that ARM is inherently more efficient than x86, often preaching about it as if it were a revolutionary concept and advocating the immediate demise of x86: "x86's days are numbered."
However, more grounded people understand the complexities involved and are skeptical: chipsandcheese.com/2024/03/27/why-x86-doesnt-need-to-die/
#50
Noyand
Denver: The TDP of an SoC cannot be attributed solely to the CPU. The boost and TDP configurations of x86 CPUs in laptops differ based on the implementation of each model/brand. For instance, the same chip can be configured for power consumption in the 20-30 watt range in handhelds using the Z1/7840U, while laptops can push that boundary significantly, reaching levels close to 100 watts in PL2.

Apple's chips leverage a 256-bit memory bus, providing substantially greater bandwidth than today's x86 chips. It's like comparing apples to oranges. So let's shift the comparison to the CPU itself and choose a benchmark that reflects real-world scenarios.

Some people mistakenly believe that ARM is inherently more efficient than x86, often preaching about it as if it were a revolutionary concept and advocating the immediate demise of x86: "x86's days are numbered."
However, more grounded people understand the complexities involved and are skeptical: chipsandcheese.com/2024/03/27/why-x86-doesnt-need-to-die/
Blender 2.79 isn't native; it runs through Rosetta. Blender 3.1 and up are a better indicator of the SoC's performance. Several reviews have placed the M3 Max around the i9-13980HX of the 18" Alienware m18 in Blender. That laptop has a 198 W PL2 and a 143 W PL1. So: similar performance at 92 W for the M3 Max, versus beyond 200 W for the 7845HX and 13980HX.

I must say, choosing the M3 Max was weird since it's a 3 nm chip. So let's see how the M2 Max compares to the 7840HS... M2 Max = 72 W vs. Ryzen 7 = 103 W. Apple's arch is more efficient. At equivalent nodes it's not a massive difference, but they still have an edge in that aspect. I don't necessarily attribute that to ARM itself, but to the fact that Apple was absolutely focused on making an efficient arch for their laptops. The fact that there's no performance hit when you're unplugged is telling.
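Taking the figures in these posts at face value (quoted numbers, not independent tests), the implied perf-per-watt ratios are simple to compute:

```python
# Sketch: performance-per-watt ratios from the figures quoted in this thread.
# Assumes roughly equal Blender performance between each pair, as the posts
# above claim; none of these are independent measurements.

pairs = {
    "M3 Max (92 W) vs i9-13980HX (~200 W)": (92, 200),
    "M2 Max (72 W) vs R7 7840HS (103 W)":   (72, 103),
}

for label, (apple_w, x86_w) in pairs.items():
    # Equal work at lower power => the efficiency advantage is just the power ratio.
    print(f"{label}: ~{x86_w / apple_w:.1f}x perf/W advantage")
```

That works out to roughly 2.2x at mismatched nodes and roughly 1.4x at comparable nodes, which matches the point being made: a real edge, but much smaller once the process advantage is removed.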