Monday, October 30th 2023

Qualcomm Snapdragon X Elite Put Through Graphics Tests, Beats AMD Radeon 780M iGPU in 3DMark

Qualcomm's Snapdragon X Elite is out to change the thin-and-light notebook market and to eat the lunches of U-segment and possibly P-segment processors from Intel and AMD. The Arm-based processor promises to be a competitor to Apple's M2 and M2 Max SoCs powering the latest generation of MacBooks, so Windows 11 and Chrome OS-based thin-and-lights could offer similar levels of performance and battery life. Geekerwan put the Adreno iGPU of the Snapdragon X Elite through a couple of benchmarks to show how it compares to the iGPUs of contemporary 15 W to 28 W class SoCs across Arm and x64 machine architectures, as well as a discrete NVIDIA GeForce RTX 3050 Laptop GPU.

In the 3DMark Wild Life Extreme benchmark, designed for graphics solutions of this class, the Snapdragon X Elite scored 39.2 FPS (average), compared to 60 FPS for the Apple M2 Max and 40 FPS for the Apple M2. The Core i7-13700H "Raptor Lake" is a 45 W mobile processor with an Intel Xe-LP based iGPU packing 96 EUs; this chip scored just 22.5 FPS in this test. The surprise here is the Radeon 780M, the iGPU of the AMD Ryzen 7 7840HS, based on the latest RDNA3 architecture, with 12 compute units (768 stream processors). It managed just 28 FPS, falling behind even the M2. The other benchmark is "Control" at 1080p with its lowest graphics settings, and here the results are fundamentally different. In "Control," the Snapdragon X Elite posts a respectable 53 FPS, almost as fast as the 56 FPS of the Radeon 780M in the Ryzen 7 7840HS and ahead of the 43 FPS put up by the Apple M2, though well behind the whopping 145 FPS of the M2 Max.
Sources: Geekerwan (YouTube), HXL (Twitter), VideoCardz

40 Comments on Qualcomm Snapdragon X Elite Put Through Graphics Tests, Beats AMD Radeon 780M iGPU in 3DMark

#1
R0H1T
Paired with LPDDR5x I presume? And what did they give the 7840hs :rolleyes:
Posted on Reply
#2
AnotherReader
I didn't expect the 780M to beat the M2's GPU in anything, but it does fairly well in Control. The M2 enjoys access to an 8 MB SLC, whereas the 780M has to make do with a 2 MB L2 cache for the GPU. Is there any word on the memory configuration for the x86 laptops?
Posted on Reply
#3
RayneYoruka
I've always wanted an ARM laptop simply because of efficiency and I couldn't bother with Apple whatsoever. (I run Linux in laptops of course)

Looks promising.
Posted on Reply
#4
AnarchoPrimitiv
Ehhh....I feel like these numbers would be more meaningful if they were derived from actual gameplay....
Posted on Reply
#5
TheinsanegamerN
RayneYorukaI've always wanted an ARM laptop simply because of efficiency and I couldn't bother with Apple whatsoever. (I run Linux in laptops of course)

Looks promising.
I want one, but does linux have an answer to rosetta2 yet?
Posted on Reply
#6
Tropick
Don't worry everyone Intel just released a statement saying ARM is Very Stinky™ so these numbers don't matter :rockout:

For real though seeing their 96EU iGPU get absolutely housed by Qualcomm's first stab at a desktop part is poetry
Posted on Reply
#8
R0H1T
TropickIntel just released a statement saying ARM is Very Stinky™ so these numbers don't matter :rockout:
Soon enough, at the pace they're going, Intel won't matter either :slap:
Posted on Reply
#9
fancucker
Once x86 porting and emulation are involved, performance and battery life are going to tank. Intel is here to stay :)
Posted on Reply
#10
R0H1T
Linux doesn't need any emulation & MS will probably eventually get native ARM builds like in the past.
Posted on Reply
#11
Tropick
R0H1TSoon enough, at the pace they're going, Intel won't matter either :slap:
I seriously can't believe Gelsinger is trying to dismiss ARM as not worthy to compete with x86 when Apple's M-series Pro and Max SoCs are absolutely tearing it up on mobile and their M2 Ultra is actually a fantastic desktop chip. There are so many examples of ARM being ready for the desktop big time that it's absolute luddism to try and say they're still not able to provide the performance necessary.
Posted on Reply
#12
R0H1T
It's at least a little bit playing to the gallery &/or their shareholders. Apple, if you count their Mx-based iPads, probably has more profits than any other PC maker right now! Their margins are ludicrous, & while you can put that down to an extent to the Apple (closed) ecosystem, the fact that they have an undisputed efficiency lead in the notebook space is no joke. Just don't buy their memory or storage upgrades :nutkick:
Posted on Reply
#13
AnotherReader
Cinebench 2024 uses scalar instructions for the most part. Chips and Cheese found that only 6.8% of instructions do math on 128-bit vectors. Consequently, Zen 4 isn't able to show its advantage over Golden Cove for wider vectors and both x86 chips are held back when compared to their ARM competitors. A test like Blender would be interesting as that has support for most versions of AVX.
Cinebench 2024 makes little use of vector compute. Most AVX or SSE instructions operate on scalar values. The most common FP/vector math instructions are VMULSS (scalar FP32 multiply) and VADDSS (scalar FP32 add).
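To illustrate (a hypothetical C++ sketch, not code from Cinebench or the Chips and Cheese analysis): the first function below compiles to scalar VMULSS/VADDSS and touches a single lane, while the second compiles to packed VMULPS/VADDPS (or a fused FMA) and is the kind of code where Zen 4's wider vector units would actually show an advantage.

```cpp
#include <immintrin.h>
#include <cstdio>

// Scalar FP32 multiply-add: with AVX enabled this compiles to VMULSS/VADDSS,
// using one lane, so wider vector hardware gains nothing here.
float scalar_madd(float a, float b, float c) {
    return a * b + c;
}

// Packed FP32 multiply-add over 8 lanes: compiles to VMULPS/VADDPS
// (or VFMADD...PS), where wide vector units actually pay off.
__m256 packed_madd(__m256 a, __m256 b, __m256 c) {
    return _mm256_add_ps(_mm256_mul_ps(a, b), c);
}

int main() {
    // build with: g++ -O2 -mavx example.cpp
    __m256 a = _mm256_set1_ps(1.5f), b = _mm256_set1_ps(2.0f), c = _mm256_set1_ps(0.5f);
    float out[8];
    _mm256_storeu_ps(out, packed_madd(a, b, c));
    std::printf("scalar: %f, packed lane 0: %f\n", scalar_madd(1.5f, 2.0f, 0.5f), out[0]);
    return 0;
}
```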
Posted on Reply
#14
Denver
The M2's 20W becomes almost 40W at high load, Intel's 45W becomes 100W, even AMD goes beyond what is specified. What will be Qualcomm's real TDP? Everything is very beautiful on paper and in synthetic benchmarks.
Posted on Reply
#15
TheinsanegamerN
R0H1TLinux doesn't need any emulation & MS will probably eventually get native ARM builds like in the past.
QEMU needs some serious work though. Performance on anything remotely demanding is atrocious, and it's a CLUNKY answer, nowhere near as streamlined as Rosetta2.

So long as users value using software made before <current year>, such performance will be key to widespread ARM adoption. Box86/64 shows a lot of promise, but they're like Steam for Linux in 2013: a neat idea but far from mature.
TropickI seriously can't believe Gelsinger is trying to dismiss ARM as not worthy to compete with x86 when Apple's M-series Pro and Max SoCs are absolutely tearing it up on mobile and their M2 Ultra is actually a fantastic desktop chip. There are so many examples of ARM being ready for the desktop big time that it's absolute luddism to try and say they're still not able to provide the performance necessary.
Apple has vertical integration on their side.

For Windows, at least for now, he's right. ARM devices are nothing more than curiosities that sell to a niche of a niche. Widespread adoption requires the use of translation software, which MS has been VERY slow to improve.
Posted on Reply
#16
Lew Zealand
AnotherReaderI didn't expect the 780M to beat M2's GPU in anything, but it does fairly well in Control. The M2 enjoys access to a 8 MB SLC whereas the 780M has to make do with a 2 MB L2 cache for the GPU. Is there any word on the memory configuration for the x86 laptops?
Like most Mac games, Control is running in emulation, so while it's a relevant IRL comparison (as that's what our actual options are), a native port of Control or most other Mac games would likely have not only higher FPS but much better 1% lows. The 1% lows are the killer when you're running emulation. I have an M1 Pro laptop from work, which is my best laptop GPU, but I have little motivation to get gaming up and running for crappy frametimes.
Posted on Reply
#17
AnotherReader
TheinsanegamerNQEMU needs some serious work though. Performance on anything remotely demanding is atrocious, and it's a CLUNKY answer, nowhere near as streamlined as Rosetta2.

So long as users value using software made before <current year> such performance will be key to widespread ARM adoption.

Apple has vertical integration on their side.

For Windows, at least for now, he's right. ARM devices are nothing more than curiosities that sell to a niche of a niche. Widespread adoption requires the use of translation software, which MS has been VERY slow to improve.
Apple has more experience than anyone with writing high-performance emulators. They did this for the 68000 to PowerPC transition, and did it again for the PowerPC to x86 transition. In addition, unlike Qualcomm's silicon, the M1 and M2 support the x86 memory model (TSO) in hardware.
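As a rough illustration (a hypothetical C++ sketch, not how Rosetta 2 is actually implemented): ordinary x86 loads and stores already provide roughly the acquire/release ordering shown below, so on Apple silicon with hardware TSO they can be translated one-to-one; on a weakly ordered Arm core, an emulator has to emit those ordering instructions (or barriers) for every memory access, which is a big part of the translation overhead.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int>  data{0};
std::atomic<bool> ready{false};

void producer() {
    // On x86, a plain store already has release-like ordering (TSO);
    // on weakly ordered Arm, an emulator must emit this ordering explicitly.
    data.store(42, std::memory_order_release);
    ready.store(true, std::memory_order_release);
}

void consumer() {
    // Likewise, x86 loads behave like acquire loads "for free".
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    // With the acquire/release pairing above, seeing ready == true
    // guarantees we also see data == 42.
    std::printf("data = %d\n", data.load(std::memory_order_acquire));
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}
```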
Posted on Reply
#18
TheinsanegamerN
AnotherReaderApple has more experience than anyone with writing high-performance emulators. They did this for the 68000 to PowerPC transition, and did it again for the PowerPC to x86 transition. In addition, unlike Qualcomm's silicon, the M1 and M2 support the x86 memory model (TSO) in hardware.
True, Apple does have a LOT of experience. They'd be a lot easier to support if they would just support Vulkan. Steam Proton and Wine are already well developed to handle games and software, respectively. But they need Vulkan.

If they did support it, I'd have a MacBook Pro, because nothing in the PC space remotely the size of a MacBook can match its performance AND battery life at the same time.
Posted on Reply
#19
AnotherReader
Lew ZealandLike most Mac games, Control is running in emulation so while it's a relevant IRL comparison as that's what our actual options are, a native port of Control or most other Mac games would likely have not only higher FPS but much better 1% lows. The 1% lows are the killer when you're running emulation. I have an M1 Pro laptop from work which is my best laptop GPU but have little motivation to get gaming up and running for crappy frametimes.
It shouldn't be an issue in GPU-bound scenarios, which would be akin to DXVK. For the CPU, it is an issue. However, in the interests of fairness, they should have tested a game that was coded for Apple Silicon as well as x64. Baldur's Gate 3 and Resident Evil: Village are perhaps the most recent games on that list.
TheinsanegamerNTrue, Apple does have a LOT of experience. They'd be a lot easier to support if they would just support Vulkan. Steam proton and Wine are already well developed to handle games/software, respectively. But they need vulkan.

If they did support it, I'd have a macbook pro, because nothing in the PC space remotely the size of a macbook can match its performance AND battery life at the same time.
I don't expect Apple to ever support Vulkan. It goes against the Apple way as much as socialism goes against the American way. AMD has matched Apple's performance efficiency under load with Zen 4, but they have a long way to go before matching them in low-load power consumption. Apple's long experience with smartphone SoCs will allow them to remain ahead of Intel and AMD for power efficiency in lighter load scenarios for a long while.
Posted on Reply
#20
TheinsanegamerN
AnotherReaderIt shouldn't be an issue in GPU bound scenarios which would be akin to DXVK. For the CPU, it is an issue. However, in the interests of fairness, they should have tested a game that was coded to Apple Silicon as well as x64. Baldur's Gate 3 and Resident Evil: Village are perhaps the most recent games on that list.


I don't expect Apple to ever support Vulkan. It goes against the Apple way as much as socialism goes against the American way. AMD has matched Apple's performance efficiency under load with Zen 4, but they have a long way to go before matching them in low load power consumption. Apple's long experience with smartphone SOCs will allow them to remain ahead of Intel and AMD for power efficiency in lighter load scenarios for a long while.
I know, but a man can dream. Hell, maybe they'd work with Valve to make a Metal API version of Proton....

For all their failures, the MacBooks are really impressive. Able to maintain full performance whether plugged in or not, or push 19 hours of YouTube streaming on a single charge. That's just impressive.
Posted on Reply
#21
AnotherReader
TheinsanegamerNI know, but a man can dream. Hell maybe they'd work with valve to make a metal API version of proton....

For all their failures the macbooks are really impressive. Able to maintain full performance when plugged in or push 19 hours of youtube streaming on a single charge. That's just impressive.
I concur. Apple silicon is really impressive, especially the CPUs. I don't think the GPUs are as impressive as people think; AMD and Nvidia could match or beat them by running at similarly low clocks and using LPDDR5 instead of power-hungry GDDR6.
Posted on Reply
#22
TheinsanegamerN
AnotherReaderI concur. Apple silicon is really impressive especially the CPUs. I don't think the GPUs are as impressive as people think; AMD and Nvidia could match them or beat them by running at similarly low clocks and using LPDDR5 instead of power hungry GDDR6.
Well, what are you comparing them to? It's hard to find a direct comparison, but No Man's Sky seems to run about as well on an M2 Pro as it does on a 780M. Similar story for Resident Evil Village, where performance of the M2 Pro is about 70% of the 780M using the Apple-native port, but the M2 Max is easily 50%+ faster. Depends on the scene and the reviewer.

dGPUs cannot factor in here, because while they offer superior performance their efficiency is horrific by comparison. Even downclocked dGPU systems are not known for battery life.
Posted on Reply
#24
AnotherReader
TheinsanegamerNWell, what are you comparing them to? It's hard to find a direct comparison, but No Man's Sky seems to run about as well on an M2 Pro as it does on a 780M. Similar story for Resident Evil Village, where performance of the M2 Pro is about 70% of the 780M using the Apple-native port, but the M2 Max is easily 50%+ faster. Depends on the scene and the reviewer.

dGPUs cannot factor in here, because while they offer superior performance their efficiency is horrific by comparison. Even downclocked dGPU systems are not known for battery life.
I'm comparing them to dGPUs like the RTX A2000, which is very efficient despite using an inferior process and power-hungry GDDR6. A fairer comparison for the 780M would be the M2, which has more FP32 FLOPS per clock but a lower clock speed. The M2 also enjoys an 8 MB SLC while the 780M makes do with a 2 MB L2. By my relatively crude estimates, both AMD and Nvidia were more efficient than Apple when you take into account the lower clock speeds and efficient LPDDR5.
Posted on Reply
#25
Battler624
TheinsanegamerNI want one, but does linux have an answer to rosetta2 yet?
There is x64/x86 emulation on Linux via box64/box86.

Also, IIRC someone was able to run Rosetta 2 on Amazon servers running Linux. Three hiccups came up: one was the kernel page size (some 4 kB vs 16 kB stuff), one was memory ordering, which differs between Arm and x86 (Apple follows Intel's memory ordering when running a Rosetta app), and IIRC it required a binary taken from an ARM-based Mac.

The last one can be reverse engineered/remade in software; the first two require hardware-level support (and thus have to be emulated on unsupported hardware). And not every application required those issues to be solved. This is all for near-native speeds; if you don't care about near-native, then yeah, it can all be done in software.
Posted on Reply