
Qualcomm Snapdragon X Elite Benchmarked Against Intel Core Ultra 7 155H

The TDP of an SoC cannot be attributed solely to the CPU. The boost and TDP configurations of x86 CPUs in laptops also differ with each model's and brand's implementation. For instance, the same chip can be configured for 20-30 W in handheld devices using the Z1/7840U, while laptops can push this boundary significantly, reaching close to 100 W at PL2.
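To put numbers on that, here's a minimal sketch of how the same die behaves under two different PL1/PL2 configurations. It uses the common simplification that turbo runs at PL2 until a running average of package power climbs up to PL1; every wattage and time constant below is an illustrative assumption, not any vendor's actual firmware value:

```python
# Minimal sketch of a PL1/PL2 boost budget, assuming a simple EWMA model
# of package power (an assumption, not any vendor's real algorithm).

def boost_seconds(pl1_w, pl2_w, tau_s, idle_w=5.0, step=0.01):
    """Seconds a PL2 burst lasts before the running average of
    package power reaches PL1 and the chip drops back to PL1."""
    avg, t = idle_w, 0.0
    while avg < pl1_w:
        avg += (step / tau_s) * (pl2_w - avg)  # average drifts toward PL2
        t += step
    return t

# Same silicon, two product configurations (illustrative numbers):
print(f"handheld, PL1=25 W / PL2=30 W: {boost_seconds(25, 30, 28):5.1f} s at full boost")
print(f"laptop,   PL1=54 W / PL2=95 W: {boost_seconds(54, 95, 56):5.1f} s at full boost")
```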

Apple's chips leverage a 256-bit memory bus, providing substantially greater bandwidth than today's x86 laptop chips. It's comparing apples to oranges. Therefore, let's shift the comparison to the CPU itself and choose a benchmark that reflects real-world scenarios.
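The bandwidth math is simple enough to show directly. The bus widths are the point; the LPDDR5-6400 speed on both sides is just an assumption for illustration:

```python
# bandwidth (GB/s) = bus width (bits) / 8 * transfer rate (GT/s)
def bandwidth_gbs(bus_bits, mts):
    return bus_bits / 8 * mts / 1000  # bits -> bytes, MT/s -> GT/s

print(f"128-bit x86 laptop chip: {bandwidth_gbs(128, 6400):.1f} GB/s")
print(f"256-bit Apple M-series:  {bandwidth_gbs(256, 6400):.1f} GB/s")
```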

Some people mistakenly believe that ARM is inherently more efficient than x86, often preaching it as if it were a revolutionary insight and advocating for the immediate demise of x86: "x86's days are numbered."
More grounded observers understand the complexities involved and remain skeptical: https://chipsandcheese.com/2024/03/27/why-x86-doesnt-need-to-die/
Blender 2.79 isn't native; it's running through Rosetta. Blender 3.1 and up are a better indicator of the SoC's performance. Several reviews have placed the M3 Max around the i9-13980HX of the 18" Alienware m18 in Blender. That laptop has a 198 W PL2 and a 143 W PL1. So: similar performance at 92 W for the M3 Max, versus beyond 200 W for the 7945HX and 13980HX.

I must say, choosing the M3 Max was odd, since it's a 3 nm chip. So let's see how the M2 Max compares to the 7840HS... M2 Max = 72 W vs. Ryzen 7 = 103 W. Apple's arch is more efficient. At equivalent nodes it's not a massive difference, but they still have an edge on that front. I don't necessarily attribute that to ARM itself, but to the fact that Apple was absolutely focused on making an efficient arch for their laptops. The fact that there's no performance hit when you're unplugged is telling.
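Since both chips land in the same performance ballpark in those charts, efficiency roughly reduces to a power ratio. A quick back-of-the-envelope from the numbers above (the "comparable work" part is my assumption from the charts):

```python
# Package power from the posted charts, same Blender workload:
m2_max_w, r7_7840hs_w = 72, 103

ratio = r7_7840hs_w / m2_max_w
print(f"M2 Max does comparable work at ~{m2_max_w / r7_7840hs_w:.0%} "
      f"of the power (~{ratio:.2f}x the perf/W)")
```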
 
Blender 2.79 isn't native; it's running through Rosetta. Blender 3.1 and up are a better indicator of the SoC's performance. Several reviews have placed the M3 Max around the i9-13980HX of the 18" Alienware m18 in Blender. That laptop has a 198 W PL2 and a 143 W PL1. So: similar performance at 92 W for the M3 Max, versus beyond 200 W for the 7945HX and 13980HX.
Idk, the first two benches say M3 Max draws as much power as 13980HX and loses to AMD's 3D cache parts...
 
Idk, the first two benches say M3 Max draws as much power as 13980HX and loses to AMD's 3D cache parts...
The first two benches don't measure power; they measure the time in seconds to complete the bench.
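And time alone doesn't settle the efficiency question either; for that you need energy, i.e. power × time. A quick illustration with placeholder numbers (assumed for the example, not taken from any chart here):

```python
def render_energy_kj(package_w, render_s):
    """Energy for one render in kilojoules: power x time."""
    return package_w * render_s / 1000

# A slower render at much lower power can still use less energy:
print(f"chip A: 200 W x 100 s = {render_energy_kj(200, 100):.1f} kJ")
print(f"chip B:  92 W x 180 s = {render_energy_kj(92, 180):.1f} kJ")
```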

Otherwise, I don't know what's up with that 7,856 W Celeron J :D.
 
Some people mistakenly believe that ARM is inherently more efficient than x86, often preaching it as if it were a revolutionary insight and advocating for the immediate demise of x86: "x86's days are numbered."
More grounded observers understand the complexities involved and remain skeptical: https://chipsandcheese.com/2024/03/27/why-x86-doesnt-need-to-die/

The Chips and Cheese article doesn't address the assertions made shortly after the release of the M1 (article): that the very high IPC of Apple's ARM implementation is achieved with a very wide 8-wide decode block and a large re-order buffer. The exact mechanisms at play are not known, but the thinking is that this combination of logic extracts a high level of instruction-level parallelism before instructions hit the execution units. An 8-wide decode block, as many have asserted, is very impractical to implement (anything beyond 4-wide) on the x64 ISA, and this places some hard limits on improving IPC. The tradeoff is more logic, but more logic running at a lower clock is often more energy efficient, especially as we progress toward higher-density nodes. This is likely also why Apple pursued big.LITTLE so early: to minimize the resource penalty introduced by the additional logic of the performance cores.
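That last tradeoff is easy to sanity-check with a toy model: dynamic power scales roughly with C·V²·f, and voltage falls with clock, so a wider core at a lower clock can come out ahead on both throughput and perf/W. All the constants below are illustrative assumptions, not measurements of any real core:

```python
def core_design(width, rel_clock, rel_cap):
    """Toy model: throughput ~ width * clock, power ~ C * V^2 * f."""
    throughput = width * rel_clock
    volts = 0.7 + 0.3 * rel_clock            # crude linear V-f curve (assumed)
    power = rel_cap * volts**2 * rel_clock   # dynamic power ~ C * V^2 * f
    return throughput, power

for name, w, f, c in [("narrow/fast, 4-wide", 4, 1.0, 1.0),
                      ("wide/slow,   8-wide", 8, 0.6, 1.8)]:
    perf, pwr = core_design(w, f, c)
    print(f"{name}: perf={perf:.1f}  power={pwr:.2f}  perf/W={perf/pwr:.2f}")
```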
 