
Qualcomm Snapdragon X Elite Benchmarked Against Intel Core Ultra 7 155H

Joined
Nov 8, 2017
Messages
229 (0.09/day)
All chipmakers are facing limitations due to the laws of physics, including ARM. That's why recent ARM SoCs can reach around 20 W for a short period but struggle to sustain performance, often experiencing thermal throttling and instability. The push to expand ARM into other markets stems from the fact that they've exhausted options in mobile and lack an x86 license.
Apple is also a real freak when it comes to silence. Their fan curve is tuned for the lowest RPM possible (and that's the M2 Ultra). The ARM Mac Pro fans spin at 500~600 RPM under load. The MacBook Air is also gimped when it comes to thermals to push people to buy the Pro.

I never had the impression that ARM had an intrinsic thermal issue compared to x86, just that some computer makers are stingy when it comes to cooling (aka no vapor chamber, or jet-engine noise levels).
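To make the burst-vs-sustained point concrete, here's a toy lumped thermal model; every constant is an illustrative assumption, not a measurement of any real chip. The idea: a die can burst far above what its cooler dissipates, but once it hits the throttle temperature, sustained power is capped by the cooler's thermal resistance.

```python
# Toy lumped thermal model -- all constants are made up for illustration.
AMBIENT = 25.0   # ambient temperature, deg C
T_MAX   = 100.0  # throttle point, deg C
R_TH    = 9.4    # cooler thermal resistance, deg C per watt
C_TH    = 20.0   # thermal mass of die + heat spreader, J per deg C
BURST_W = 20.0   # short-term burst power, watts

t, temp, dt = 0.0, AMBIENT, 0.1
while t < 600.0:  # simulate 10 minutes
    # Once at T_MAX, power is clamped to what the cooler can remove.
    power = BURST_W if temp < T_MAX else (T_MAX - AMBIENT) / R_TH
    temp += dt * (power - (temp - AMBIENT) / R_TH) / C_TH
    t += dt

print(f"sustainable power: {(T_MAX - AMBIENT) / R_TH:.1f} W (vs a {BURST_W:.0f} W burst)")
```

With these numbers the chip bursts at 20 W for a minute or so, then settles near 8 W; a better cooler (lower R_TH) raises that ceiling, which is the whole point about stingy cooling.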
 
Joined
Apr 12, 2013
Messages
7,479 (1.77/day)
This is why we see ARM in every mobile application, and zero x86 in any application where battery life is critical?
And this is why we also see "portable" consoles running on AMD chips; bet you forgot they have batteries as well?
 
Joined
Mar 12, 2024
Messages
49 (0.21/day)
System Name SOCIETY
Processor AMD Ryzen 9 7800x3D
Motherboard MSI MAG X670E TOMAHAWK
Cooling Arctic Liquid Freezer II 420
Memory 64GB 6000MHz
Video Card(s) Nvidia RTX 3090
Storage WD SN850X 4TB, Micron 1100 2TB, ZFS NAS over 10gbe network
Display(s) 27" Dell S2721DGF, 24" ASUS IPS, 24" Dell IPS
Case Corsair 750D
Power Supply Cooler Master 1200W Gold
Mouse Razer Deathadder
Keyboard ROG Falchion
VR HMD Pimax 8KX
Software Windows 10 with Debian VM
It's amusing to see people in the comments accusing others of having a strange obsession with x86 or against ARM while replying to every single comment that does not share their worldview.

Perhaps people don't care what ISA powers their workload as long as the computer does the job best?
For example, Apple's M chips are useless to me because I can't put them in a computer of my own, with a GPU of my choosing, and an OS of my choosing.
And raspberry pi's are cute, but they're not going to be powering my AI or gaming workloads.

And x86 and ARM are nice and all, but they haven't managed to replace the s390 mainframes that have been running bank and government workloads since the 1970s.
Because... the ISA only matters for completely portable or completely new workloads.
Consoles have used x86, ARM, MIPS, whatever. It matters less than the GPU.
As do my gaming and AI scenarios.

Who cares what powers phones anymore? They're toys in comparison, and they stagnated years ago, as seen by plateauing sales.

Bring on ARM, but not for the sake of it. Just give me something good for my use cases and I'll buy it. But x86 is that right now.
 
Joined
Mar 12, 2024
Messages
49 (0.21/day)
System Name SOCIETY
Processor AMD Ryzen 9 7800x3D
Motherboard MSI MAG X670E TOMAHAWK
Cooling Arctic Liquid Freezer II 420
Memory 64GB 6000MHz
Video Card(s) Nvidia RTX 3090
Storage WD SN850X 4TB, Micron 1100 2TB, ZFS NAS over 10gbe network
Display(s) 27" Dell S2721DGF, 24" ASUS IPS, 24" Dell IPS
Case Corsair 750D
Power Supply Cooler Master 1200W Gold
Mouse Razer Deathadder
Keyboard ROG Falchion
VR HMD Pimax 8KX
Software Windows 10 with Debian VM
On the topic of the Snapdragon, I'm really excited for what I hope will be an M-type chip that can be used in useful situations.
Pretty much everything Apple does with its walled garden and macOS's bad UX is holding the M chips back from greatness.
There have been swings and misses getting ARM onto Windows/Linux laptops in the past, but everything I've seen about the X Elite indicates it may be the first ARM laptop chip that is both not a toy and not stuck in an Apple device.

If that one day scales up to higher-end computers that need discrete graphics or plenty of PCIe add-ins, it'd be interesting. Interesting to see how software would adapt, at least. Apple did really well with their transition, but I think that's just something Apple is uniquely positioned to do. Somehow on Windows I imagine we'll all be forced to update to the latest and most dystopian edition of Windows in order to take advantage of future hypothetical ARM desktops.
 
Joined
Jan 17, 2018
Messages
428 (0.17/day)
Processor Ryzen 7 5800X3D
Motherboard MSI B550 Tomahawk
Cooling Noctua U12S
Memory 32GB @ 3600 CL18
Video Card(s) AMD 6800XT
Storage WD Black SN850(1TB), WD Black NVMe 2018(500GB), WD Blue SATA(2TB)
Display(s) Samsung Odyssey G9
Case Be Quiet! Silent Base 802
Power Supply Seasonic PRIME-GX-1000
You are wrong for two reasons. First, there are way more x86 preachers out there (you are one of them). And second, ARM focused for decades on the mobile market, where efficiency was most important. Today, as we reach the limits of the physical process, x86 is approaching the heat wall, letting ARM shine, as it offers far better efficiency thanks to its architecture. And today, efficiency becomes performance. Show me any x86 computer today that can be passively cooled and offers at least half the performance of the 3-year-old M1.

I'm buying a 7800X3D for a gaming PC, but I know it is probably the last x86 PC I'll ever build; I'm just not delusional.
You really think the end of x86 is in, what, 10 years (judging by your final comment)? It's possible, I suppose, but I think it's unlikely. I wouldn't be surprised if we get some desktop ARM processors in the next 10 years, and they will be fine for the general consumer, but most 'general consumers' don't even own a PC anymore and get by with phones/tablets/laptops.

The switch from x86 to ARM in the power-user and business space would require vast numbers of software developers to port their software, or force many businesses to change software altogether. It's a colossal hurdle, which is why we still don't have a vast array of ARM desktop processors, despite their superior efficiency for years.
 

Fourstaff

Moderator
Staff member
Joined
Nov 29, 2009
Messages
10,074 (1.85/day)
Location
Home
System Name Orange! // ItchyHands
Processor 3570K // 10400F
Motherboard ASRock z77 Extreme4 // TUF Gaming B460M-Plus
Cooling Stock // Stock
Memory 2x4GB 1600MHz CL9 Corsair XMS3 // 2x8GB 3200MHz XPG D41
Video Card(s) Sapphire Nitro+ RX 570 // Asus TUF RTX 2070
Storage Samsung 840 250Gb // SX8200 480GB
Display(s) LG 22EA53VQ // Philips 275M QHD
Case NZXT Phantom 410 Black/Orange // Tecware Forge M
Power Supply Corsair CXM500w // CM MWE 600w
Saying Arm is not a silver bullet means we're being dismissive?

There are markets where Arm does better. And there are markets where x86 has the upper hand. It's as simple as that.

Plus, there's a built-in fallacy to your statement: this isn't about Arm vs x86, it's about implementations of both. x86 can be anything from NetBurst to Zen 4. Arm can also be anything from a cheap Unisoc to Apple's M3...
You are correct on all counts. However, it's noted somewhere in this thread that ARM and x86 are starting to become more similar than different, and at some point the implementations of both will converge closely enough that most people will go for the more efficient one (in price, power consumption, or both).
 
Joined
Jun 21, 2015
Messages
66 (0.02/day)
Location
KAER MUIRE
System Name Alucard
Processor M2 Pro 14"
Motherboard Apple thingy all together
Cooling no Need
Memory 32 Shared Memory
Video Card(s) 30 units
Storage 1 TB
Display(s) Acer 2k 170Hz, Benq 4k HDR
Mouse Logitech M3
Keyboard Logitech M3
Software MacOs / Ubuntu
The only thing I'm happy about here is that Microsoft might allocate more resources to their crap implementation of ARM Windows, and maybe with time more applications will work in the ARM environment...
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.68/day)
Location
Ex-usa | slava the trolls
And this is why we also see "portable" consoles running on AMD chips; bet you forgot they have batteries as well?

1. Someone "smart" has decided to use this particular market niche and *try* to compete with other mobile devices 100% Android and ARM?
2. AMD has nothing else to offer?
3. Unification in a single Windows / x86-64 / consoles ecosystem?

Speaking of x86-64 and its inevitable EOL:
1. The foundries can't make ever-smaller transistors forever, so the end is near.
2. Recent Ryzens' power consumption goes through the roof: 56% higher energy use by the Ryzen 9 7950X than by its father, the Ryzen 9 5950X.
3. Ryzen U-series chips were 15-watt parts in the past; today they are becoming 25-30-watt parts, so they won't be used in thin notebooks anymore.
 
Joined
Oct 6, 2021
Messages
1,605 (1.43/day)
1. Someone "smart" has decided to use this particular market niche and *try* to compete with other mobile devices 100% Android and ARM?
2. AMD has nothing else to offer?
3. Unification in a single Windows / x86-64 / consoles ecosystem?

Speaking of x86-64 and its inevitable EOL:
1. The foundries can't make infinitely smaller lithography transistors, so the end is near.
2. Recent Ryzen's power consumption goes through the roof - 56% higher used energy by Ryzen 9 7950X against its father the Ryzen 9 5950X.
3. Ryzen U series were 15-watt chips in the past, today these chips become 25-30-watt - they won't been used in thin notebooks anymore.
Huh? All chips will inevitably encounter the manufacturing-process barrier. Notably, AMD's x86 designs demonstrate superior performance per transistor compared to ARM designs. Over the years, ARM has been emulating the strategies AMD and Intel used half a decade ago, gradually converging in various respects and accruing complexity. Consequently, the inherent advantage of being RISC has disappeared.

2° In the PC realm, there are ample robust cooling solutions available. However, amid intense competition, opting for efficiency by constraining TDP means leaving performance on the table, and the competitor (Intel) will raise clocks/TDP several times through refreshes to look better in benchmarks.

3° Huh?? There has never truly been a 15 W processor; all manufacturers, including the ARM players, publish misleading figures that typically reflect TDP at base clock. In reality, most, if not all, efficiency-focused processors approach nearly 30 W under heavy loads, including those developed by Apple.
 
Joined
Apr 12, 2013
Messages
7,479 (1.77/day)
The foundries can't make infinitely smaller lithography transistors, so the end is near.
Cuts both ways, doesn't it? Except Apple/QC will run into that wall sooner, as both AMD and Intel are at least half a node to a full node behind them.
Recent Ryzen's power consumption goes through the roof - 56% higher used energy by Ryzen 9 7950X against its father the Ryzen 9 5950X.
And if you remember the million other reviews out there, you would probably also remember that lowering the TDP (clocks?) boosts its efficiency massively.
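For what it's worth, the perf-per-watt arithmetic is simple. A minimal sketch; the relative scores are hypothetical placeholders roughly in the ballpark of published Eco-mode reviews, and the PPT values follow AMD's standard 1.35x-TDP mapping:

```python
# Performance per watt at different power limits for a 7950X-style chip.
# Relative MT scores are hypothetical; PPT = 1.35 * TDP per AMD convention.
configs = [
    ("stock (230 W PPT)",     230, 1.00),
    ("Eco 105 W (142 W PPT)", 142, 0.93),
    ("Eco 65 W (88 W PPT)",    88, 0.82),
]
base_ppw = configs[0][2] / configs[0][1]
for name, watts, score in configs:
    print(f"{name}: {score / watts / base_ppw:.2f}x stock performance per watt")
```

Even with conservative placeholder scores, the 65 W limit roughly doubles performance per watt, which is why the "7950X uses 56% more energy" framing says more about the shipped power limit than about the silicon.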
Ryzen U-series chips were 15-watt parts in the past; today they are becoming 25-30-watt parts, so they won't be used in thin notebooks anymore
They still have 15 W chips, it's just that the U series tops out at 8 cores and you can't really run them at 15 W constantly. Though in the future we may see Zen 6c or something with 8 cores @ 15 W in a console or similar form factor.
 
Joined
Mar 16, 2017
Messages
2,067 (0.74/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVME 1GB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
Cuts both ways doesn't it? Except Apple/QC will run into that wall quicker as both AMD/Intel are at least a node to half behind them.
Hasn't this always been the case in this industry, though? Apple aims for and buys up the new fab space, and by the time the fab can serve other large customers, Apple is already planning for and buying up the next node's capacity. And by not pushing way past the efficiency curve, Apple can afford the risks and cost of the newer node. Intel especially can't, because new nodes generally don't like to be pushed hard, and how do you make a new node that can't be pushed hard outperform a current node that is being pushed really hard? There's a balance in there, and deep pockets are needed too.
 
Joined
May 3, 2018
Messages
2,881 (1.21/day)
When Apple showed off their M1, I wrote on Reddit that this was the beginning of the end of x86. I was downvoted to hell by r/hardware experts. To this day people refuse to understand that the efficiency gap between ARM and x86 cannot be closed by node improvements; it is too big, and it all comes down to architecture. If Microsoft jumps on the ARM wagon and the game studios follow, that will be the end of the x86 road. It has already started in the server market. I just can't understand why Intel hasn't realised this; they turned Apple away when Apple approached them about a joint venture to develop the CPU for the first iPhone. AMD and NVIDIA had more common sense and at least started developing their own ARM processors.
Well, you forgot to tell AMD or Intel, as their roadmaps out to 2028 are locked in, and x86 it is. Whether or not they succeed, Intel's new tile design has a lot to do with greatly reducing power. Lunar Lake will be a big test, as leaks are already claiming 50% more multicore performance at half the power of Meteor Lake. Lunar Lake will launch this year too (if we can trust Intel).
 
Joined
Apr 12, 2013
Messages
7,479 (1.77/day)
Speaking of chiplets, or tiles: Apple or QC will run into the "scaling" problem as well. Apple already produces the biggest consumer-facing ARM chips out there, and they're only getting bigger. AMD solved this first, and Intel's on the same path, although their execution is questionable atm. Eventually, with skyrocketing fab costs and yields being an issue for bigger chips, Apple/QC will have to move to chiplets/tiles or whatever solution they come up with. That will naturally reduce their efficiency ~ let's see where they are 2-4 years from now at the top end.
 
Joined
Nov 8, 2017
Messages
229 (0.09/day)
Speaking of chiplets, or tiles: Apple or QC will run into the "scaling" problem as well. Apple already produces the biggest consumer-facing ARM chips out there, and they're only getting bigger. AMD solved this first, and Intel's on the same path, although their execution is questionable atm. Eventually, with skyrocketing fab costs and yields being an issue for bigger chips, Apple/QC will have to move to chiplets/tiles or whatever solution they come up with. That will naturally reduce their efficiency ~ let's see where they are 2-4 years from now at the top end.
They already did. Apple's biggest chip (the M2 Ultra) uses a silicon interposer: TSMC's InFO-LSI. It's not monolithic; it's two M2 Max dies fused together. It's arguably more advanced than what Intel and AMD are doing: the bandwidth is very high, and the power efficiency is better. If you wonder why AMD doesn't use it if it's better, it might just come down to the fact that Apple can afford it because the chips are sold with computers that carry an insane margin. In that regard they don't play by the same rules as Intel/AMD, who have to keep costs lower for their clients.

It's the same reason the iPhone uses chips so much bigger than Qualcomm's: their vertical integration allows them to do it. QC could never sell a chip that big to their clients (yes, the iPhone chip is almost as big as an M1).



Apple Silicon: The M2 Ultra Impresses at WWDC 2023 – Display Daily
A brief explanation for our readers unfamiliar with the terms, SoIC (System on Integrated Chip) is TSMC’s bump-less chip stacking and hybrid bonding integration technology that allows for stacking multiple chip dies together, enabling extremely high-bandwidth and low power bonding between the silicon dies. Currently, this technology has no equal in the industry.
 
Joined
Apr 12, 2013
Messages
7,479 (1.77/day)
Two massive chips glued together isn't exactly the same thing; AMD and Intel disaggregated pretty much all the major components of their chips and then made what you see today.

 
Joined
Nov 8, 2017
Messages
229 (0.09/day)
Two massive chips glued together isn't exactly the same thing; AMD and Intel disaggregated pretty much all the major components of their chips and then made what you see today.

I see what you mean, but TSMC's tech can also work for heterogeneous chiplets if needed. And knowing Apple, they would do something closer to what Intel is doing and only use bleeding-edge nodes. Apple still enjoys way higher margins than other chip companies, so they can afford to keep any advantages they can get until the very end. Their efficiency (especially on laptops) is the one thing that makes people tolerate the robbery they are doing on storage :D

Intel does have a packaging technology similar to that (EMIB) but it's unclear when or if it's going to be used on consumer products.
 

Toro

New Member
Joined
Mar 28, 2024
Messages
5 (0.02/day)
Do you mean the M1, manufactured on the same 5 nm process found in modern CPUs? Any recent AMD chip with a similar TDP would perform similarly. However, I find it impractical and dumb to run a chip that exceeds 30 W and reaches 100°C (under high load) with passive cooling. For basic tasks like browsing or using spreadsheets, any APU from the 7 nm era or newer would easily handle the workload while consuming 2-5 W. In this scenario, the laptop's fans don't spin at all.

All chipmakers are facing limitations due to the laws of physics, including ARM. That's why recent ARM SoCs can reach around 20 W for a short period but struggle to sustain performance, often experiencing thermal throttling and instability. The push to expand ARM into other markets stems from the fact that they've exhausted options in mobile and lack an x86 license.

"Delusional" suits you very well. :)

Actually, there are examples of overlapping x64 and arm64 chips sharing the same or quite similar nodes and development timeframes.

For example, the AMD Phoenix is "TSMC 4nm" and the M2 Pro is "TSMC 5nm." In this example, the M2 Pro has a slight single-thread performance advantage. A key difference: the M2 reaches this at 3.5 GHz, while the Zen 4 core boosts to a much more power-hungry 5.2 GHz to reach the same level of performance. This is despite the Phoenix having an advantage in process and a much higher power envelope. Also, the Phoenix is in a considerably lower GPU and memory performance tier, presumably since so many more resources are diverted to the CPU cores.

Even if we assume a 30% IPC boost for Zen 5, they will close the single-thread performance gap with the M3 or Oryon by boosting to 6 GHz, but then it will be miles apart on efficiency.
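The first-order physics behind that is CMOS dynamic power, P ≈ C·V²·f, with voltage forced upward at the top of the frequency curve. A back-of-envelope sketch; the voltages are assumptions for illustration, since neither vendor publishes core V/f curves:

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
# Voltages below are illustrative guesses, not published figures.
def rel_power(f_ghz, volts, f0_ghz=3.5, v0=0.90):
    """Dynamic power relative to a 3.5 GHz core at an assumed 0.90 V."""
    return (f_ghz / f0_ghz) * (volts / v0) ** 2

print(f"5.2 GHz @ ~1.20 V: {rel_power(5.2, 1.20):.1f}x the dynamic power")  # ~2.6x
print(f"6.0 GHz @ ~1.35 V: {rel_power(6.0, 1.35):.1f}x the dynamic power")  # ~3.9x
```

Matching single-thread performance at 5.2 GHz instead of 3.5 GHz can plausibly cost 2-3x the core power from frequency and voltage alone, before any architectural differences.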

My take: Intel has historically done well since they traditionally had a 1-2 year process advantage. Now that this advantage is effectively gone, since even Intel will now be manufacturing ARM chips, it all boils down to architecture. Look back 30 years at a diverse range of RISC vs. CISC products, and the general trend is that RISC can do more with less when normalized for process.

My second take: Microsoft has already made the decision to drop support for x64 past Windows 12; they just haven't told anyone yet. They can't afford to have developers support two ISAs, and any legacy code that absolutely needs it will be (very begrudgingly) supported with virtualization. I say this because they wouldn't be ramping up developer support for arm64 if this wasn't their decision.
 
Last edited:
Joined
Oct 6, 2021
Messages
1,605 (1.43/day)
Actually, there are examples of overlapping x64 and arm64 chips sharing the same or quite similar nodes and development timeframes.

For example, the AMD Phoenix is "TSMC 4nm" and the M2 Pro is "TSMC 5nm." In this example, the M2 Pro has a slight single-thread performance advantage. A key difference: the M2 reaches this at 3.5 GHz, while the Zen 4 core boosts to a much more power-hungry 5.2 GHz to reach the same level of performance. This is despite the Phoenix having an advantage in process and a much higher power envelope. Also, the Phoenix is in a considerably lower GPU and memory performance tier, presumably since so many more resources are diverted to the CPU cores.

Even if we assume a 30% IPC boost for Zen 5, they will close the single-thread performance gap with the M3 or Oryon by boosting to 6 GHz, but then it will be miles apart on efficiency.

My take: Intel has historically done well since they traditionally had a 1-2 year process advantage. Now that this advantage is effectively gone, since even Intel will now be manufacturing ARM chips, it all boils down to architecture. Look back 30 years at a diverse range of RISC vs. CISC products, and the general trend is that RISC can do more with less when normalized for process.

My second take: Microsoft has already made the decision to drop support for x64 past Windows 12; they just haven't told anyone yet. They can't afford to have developers support two ISAs, and any legacy code that absolutely needs it will be (very begrudgingly) supported with virtualization. I say this because they wouldn't be ramping up developer support for arm64 if this wasn't their decision.
Nope. N5P (which the M2 is built on) offers ~10% lower power consumption than base N5/N4, with the advantage of the latter being ~6% better density.

What? The M2 Pro consumes up to 100 W; it's insane to think that this is super efficient compared to x86 APUs. Outside of synthetic software, ASIC-accelerated workloads, and Apple's finely tuned ecosystem, it's horrible.
 

Toro

New Member
Joined
Mar 28, 2024
Messages
5 (0.02/day)
Nope. N5P (which the M2 is built on) offers ~10% lower power consumption than base N5/N4, with the advantage of the latter being ~6% better density.

What? The M2 Pro consumes up to 100 W; it's insane to think that this is super efficient compared to x86 APUs. Outside of synthetic software, ASIC-accelerated workloads, and Apple's finely tuned ecosystem, it's horrible.

The discrepancy in power consumption is far greater than 10%. The original premise still holds despite the node differences you point out: given the latest architecture and a similar process, the ARM architecture has far lower power consumption while still matching single-thread performance.

The M2 Pro is 30 W; the Phoenix (mobile) is 35-54 W. Not sure where 100 W is coming from.

The last performance holdout for x86 has been single-thread performance. With the introduction of the M3 and Oryon, that is no longer the case. Consider, for example, the M3 Max: there are very few real-world applications OR benchmarks where x86 will prevail at even 5x the power consumption.

OK, maybe gaming, you got me there, but then the M1 Max is sorta at the level of an RTX 4060 Ti, so I think that covers a lot of ground, especially considering it's a portable.

What's not to like about accelerators? They seem like a great way to improve productivity and extend battery life, and Apple and Qualcomm silicon seem to have a lot more going for them on their first gen. Let's see... Intel, on their 14th gen, is just introducing a sub-par NPU and is still saddled with a sub-par media processor. Their iGPU is greatly improved, so kudos there.
 
Joined
Oct 6, 2021
Messages
1,605 (1.43/day)
The discrepancy in power consumption is far greater than 10%. The original premise still holds despite the node differences you point out: given the latest architecture and a similar process, the ARM architecture has far lower power consumption while still matching single-thread performance.

The M2 Pro is 30 W; the Phoenix (mobile) is 35-54 W. Not sure where 100 W is coming from.

The last performance holdout for x86 has been single-thread performance. With the introduction of the M3 and Oryon, that is no longer the case. Consider, for example, the M3 Max: there are very few real-world applications OR benchmarks where x86 will prevail at even 5x the power consumption.

OK, maybe gaming, you got me there, but then the M1 Max is sorta at the level of an RTX 4060 Ti, so I think that covers a lot of ground, especially considering it's a portable.

What's not to like about accelerators? They seem like a great way to improve productivity and extend battery life, and Apple and Qualcomm silicon seem to have a lot more going for them on their first gen. Let's see... Intel, on their 14th gen, is just introducing a sub-par NPU and is still saddled with a sub-par media processor. Their iGPU is greatly improved, so kudos there.
From a real power consumption test? Does yours come from marketing? Your data only exists in the magical world of Apple.

M2 - up to 55 W
M2 Pro - up to 100 W+

 

Toro

New Member
Joined
Mar 28, 2024
Messages
5 (0.02/day)
From a real power consumption test? Does yours come from marketing? Your data only exists in the magical world of Apple.

M2 - up to 55 W
M2 Pro - up to 100 W+


I'm speaking of power to the SoC, not through the cord. The latter can have quite a bit of variation depending on the display, attached USB accessories, and battery state. Also, this does not match my experience: I have run many types of benchmarks on the M3 Max, and the max power input is 82 watts, which corresponds well with the chip's spec TDP of 78 watts. This was measured at battery = 100% to avoid errors from charging.

The only way to get an M2 Pro to pull >100 W at the cord is to run a benchmark while the battery is charging and/or with heavy external USB loads.
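Incidentally, on macOS you don't have to infer SoC power from the cord at all: Apple ships a powermetrics tool that reports it directly. A minimal sketch; the "CPU Power: N mW" label matches recent Apple Silicon output, but treat the exact format and parsing as an assumption:

```python
# Sample Apple Silicon CPU power via macOS's bundled powermetrics (needs sudo).
import re
import subprocess

out = subprocess.run(
    ["sudo", "powermetrics", "--samplers", "cpu_power", "-i", "1000", "-n", "5"],
    capture_output=True, text=True, check=True,
).stdout
samples = [int(mw) / 1000 for mw in re.findall(r"CPU Power:\s*(\d+)\s*mW", out)]
if samples:
    print(f"avg CPU power over {len(samples)} samples: {sum(samples)/len(samples):.2f} W")
```

Sampling the SoC directly sidesteps the display/USB/charging noise entirely.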
 
Joined
Nov 8, 2017
Messages
229 (0.09/day)
From a real power consumption test? Does yours come from marketing? Your data only exists in the magical world of Apple.

M2 - up to 55 W
M2 Pro - up to 100 W+

A brief 100 W peak for a full system load*, with Cinebench and 3DMark running at the same time; not a pure CPU load. The competition doesn't have a GPU of the same class as the M2 Pro's 19-core GPU in their SoCs. The power consumption test also seems to include the screen and peripherals. Keep in mind that this is a laptop review, not an isolated review of the chip.
 
Last edited:

Toro

New Member
Joined
Mar 28, 2024
Messages
5 (0.02/day)
A brief 100 W peak for a full system load*, with Cinebench and 3DMark running at the same time; not a pure CPU load. The competition doesn't have a GPU of the same class as the M2 Pro's 19-core GPU in their SoCs. The power consumption test also seems to include the screen and peripherals. Keep in mind that this is a laptop review, not an isolated review of the chip.

You are attempting to confound results when it is really quite simple. Most of the industry compares the design power of the chip because it makes comparisons easier; Apple is no different from others in this regard. Is it OK to use power-cord draw? Yes, but there are a lot more asterisks, and it is usually hard to draw conclusions.

In the above comparison, they ran different types of GPU and CPU benchmarks simultaneously, which doesn't tell you much. This is because the CPU and GPU will share power, and it would be impossible to get a data run that is repeatable. A single benchmark that encompasses both CPU and GPU, such as a 3D game, would be much better.
 
Joined
Nov 8, 2017
Messages
229 (0.09/day)
You are attempting to confound results when it is really quite simple. Most of the industry compares the design power of the chip because it makes comparisons easier; Apple is no different from others in this regard. Is it OK to use power-cord draw? Yes, but there are a lot more asterisks, and it is usually hard to draw conclusions.

In the above comparison, they ran different types of GPU and CPU benchmarks simultaneously, which doesn't tell you much. This is because the CPU and GPU will share power, and it would be impossible to get a data run that is repeatable. A single benchmark that encompasses both CPU and GPU, such as a 3D game, would be much better.
Yeah, I was just disagreeing with Denver saying that the M2 Pro is a 100 W chip, because he probably skimmed the review and didn't realise the context in which those 100 W were measured. As I said, this isn't a chip review but a laptop review; power consumption in that context measures the whole laptop. If we follow his reasoning, the R9 7940HS uses more power despite having an iGPU that the M2 Pro's GPU outpaces by 183%. And some aspects of his criticism are also weird, like how he extrapolates from the purposely thermally gimped MacBook Air to claim that ARM in general has sustained-cooling issues, when that's something Apple did on purpose to push people towards the higher-end SKUs that have an active fan. There are a ton of creative professionals out there making a living on adequately cooled Apple Silicon devices.

The CPU part of the M2 Pro is more around the 27 W mark according to NotebookCheck. I don't understand what's happening on this forum lately: there are more players in the mainstream CPU market than there have been for decades, yet people have a huge aversion to the newcomers and would rather keep the status quo.
 
Joined
Oct 6, 2021
Messages
1,605 (1.43/day)
Yeah, I was just disagreeing with Denver saying that the M2 Pro is a 100 W chip, because he probably skimmed the review and didn't realise the context in which those 100 W were measured. As I said, this isn't a chip review but a laptop review; power consumption in that context measures the whole laptop. If we follow his reasoning, the R9 7940HS uses more power despite having an iGPU that the M2 Pro's GPU outpaces by 183%. And some aspects of his criticism are also weird, like how he extrapolates from the purposely thermally gimped MacBook Air to claim that ARM in general has sustained-cooling issues, when that's something Apple did on purpose to push people towards the higher-end SKUs that have an active fan. There are a ton of creative professionals out there making a living on adequately cooled Apple Silicon devices.
The CPU part of the M2 Pro is more around the 27 W mark according to NotebookCheck. I don't understand what's happening on this forum lately: there are more players in the mainstream CPU market than there have been for decades, yet people have a huge aversion to the newcomers and would rather keep the status quo.
The TDP of an SoC cannot be attributed solely to the CPU. The boost and TDP configurations of x86 CPUs in laptops differ based on the implementation of each model, brand, etc. For instance, the same chip can be configured for power consumption ranging from 20-30 watts in handheld devices using the Z1/7840U, while laptops can push this boundary significantly, reaching levels close to 100 watts at PL2.
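On the x86 side those limits aren't hidden, either. A minimal sketch for Linux boxes using the intel-rapl powercap driver (the sysfs layout below follows that driver's documentation; AMD exposure varies by kernel, so treat this as Intel-only):

```python
# Read the package power limits (PL1/PL2) from the Linux powercap sysfs.
from pathlib import Path

pkg = Path("/sys/class/powercap/intel-rapl:0")
print("domain:", (pkg / "name").read_text().strip())
for c in (0, 1):  # constraint 0 = long_term (PL1), 1 = short_term (PL2)
    label = (pkg / f"constraint_{c}_name").read_text().strip()
    microwatts = int((pkg / f"constraint_{c}_power_limit_uw").read_text())
    print(f"  {label}: {microwatts / 1e6:.0f} W")
```

The same silicon shipped with different PL1/PL2 values is exactly the 20-30 W handheld vs ~100 W laptop split described above.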

Apple's chips leverage a 256-bit bus, providing substantially greater bandwidth than today's x86 laptop chips. It's comparing apples to oranges. Therefore, let's shift the comparison to the CPU itself and choose a benchmark that reflects real-world scenarios.
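The bandwidth gap is straightforward arithmetic: peak bytes per second = bus width in bytes x transfer rate. Apple quotes ~200 GB/s for the M2 Pro's 256-bit LPDDR5-6400; the 128-bit rows below are common x86 laptop configurations, not the spec of any particular machine:

```python
# Peak theoretical DRAM bandwidth = (bus width in bytes) * (transfers/s).
def peak_gb_s(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000  # GB/s

print(f"256-bit LPDDR5-6400 (M2 Pro):     {peak_gb_s(256, 6400):.1f} GB/s")  # 204.8
print(f"128-bit LPDDR5-6400 (x86 laptop): {peak_gb_s(128, 6400):.1f} GB/s")  # 102.4
print(f"128-bit DDR5-5600 (x86 laptop):   {peak_gb_s(128, 5600):.1f} GB/s")  #  89.6
```

Double the bus width is double the peak bandwidth at the same transfer rate, which is why bandwidth-sensitive benchmarks flatter the M-series regardless of ISA.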

Some people mistakenly believe that ARM is inherently more efficient than x86, often preaching about it as if it were a revolutionary concept and advocating for the immediate demise of x86: "x86's days are numbered."
However, more grounded individuals understand the complexities involved and are skeptical: https://chipsandcheese.com/2024/03/27/why-x86-doesnt-need-to-die/
 