Tuesday, June 18th 2024

First Reviews are Live and Snapdragon X Elite Doesn't Quite Deliver on Promised Performance

The first reviews of a notebook with Qualcomm's Snapdragon X Elite SoC have appeared today, and it looks like the promised performance isn't quite there. That said, every review that went live today is based on Asus' Vivobook S 15 OLED, so it might be a bit too early to state that Qualcomm isn't delivering on its claimed performance, as other manufacturers might deliver better results. Let's start with battery life. The Vivobook S 15 OLED comes with a 70 Wh battery pack, which enables it to deliver better battery life than many AMD or Intel notebooks, but Apple's MacBook Air 15 M3 delivers on average 40 percent better battery life, with a smaller 66.5 Wh battery pack. Browsing the web or watching movies isn't particularly taxing for the Snapdragon X Elite, but under heavier loads the battery life drops off a cliff.

When it comes to application performance, the Snapdragon X Elite offers good multicore performance in benchmarks like Cinebench 2024 and PCMark 10, but it falls way behind in most other tests, ranging from video encoding to file extraction and document conversion, with Intel Core Ultra 7 155H-based notebooks often pulling ahead by 50 percent or more. Despite being equipped with LPDDR5X-8448 memory, the Snapdragon X Elite falls behind in both the memory copy and write tests in AIDA64 compared to the Intel-powered laptops. However, it's not all doom and gloom, as the Qualcomm chip delivers an impressive memory latency of a mere 8.1 ns, compared to 100+ ns for the Intel-based laptops. It also outclasses the Intel laptops when it comes to memory read performance.
Asus went with a fairly basic Micron 2400 SSD, a DRAM-less Phison-based drive, which might be part of the reason for some of the less flattering results in some tests. However, this shouldn't affect the gaming tests, and this is another area where the Snapdragon X Elite doesn't deliver: most games are unplayable at 1080p resolution. Many games don't run on the Qualcomm chip at all for obvious reasons, and many that do suffer from texture and graphics glitches at times. Most games don't even manage 30 FPS at reduced graphics settings, let alone 60 FPS, but then again, this is hardly expected from an integrated GPU. Considering that the Vivobook S 15 OLED comes in at US$1,300 with 16 GB of RAM and a 1 TB SSD, you would expect it to deliver in terms of performance, but it seems like Qualcomm and Microsoft have a lot of work to do to optimize the platform as a whole.
Sources: Windows Central, Notebook Check (in German)

124 Comments on First Reviews are Live and Snapdragon X Elite Doesn't Quite Deliver on Promised Performance

#76
L'Eliminateur
TheLostSwede: The price point is all wrong for the use case though...


Which is why I specifically wrote that the Asus model doesn't deliver, as I don't want to draw any hasty conclusions.
Yeah, if I want to do all that type of "content consumption" and what I call "passive computing", then I'd buy a MacBook Air, since if you're not interacting much with the device the OS type is kind of moot, plus you know they have extremely good battery life.
#77
Darmok N Jalad
trsttte: I don't think back then "Pro" was such a marketing gimmick like it is now
No, and in fact, I misspoke. There was the iBook and PowerBook for laptops. Both topped out with the G4 (PowerPC 7447A). The G5 (970FX) was limited to the Power Mac, and a slower version landed in the iMac. It was a hotter chip, and it failed to reach the clock speeds that Apple expected from IBM. That was a big deal, since clock for clock, the G5 wasn't really any faster than the G4. The G5 was never able to scale down to mobile, which put Apple in a tough spot. At that time, Intel had moved on from NetBurst, and had really good options with Core. I don't think IBM was that interested in continuing to make CPUs anyway, so Intel was a really good landing spot for Apple, and served its purpose until Apple took matters into their own hands.
#78
phanbuey
trsttte: Tuxedo announced they were developing an ARM laptop with one of these Snapdragon Elite chips; they'll come eventually. If Tuxedo is doing one, some big white-label OEM is doing one, and so there will be at least a couple on offer from the usual suspects (Schenker, Tuxedo, System76, etc.)
I would be excited for this -- especially if you can run docker/postgres etc. on it natively like you can on cloud linux distros... Would be awesome to have a functional linux devbox that can last 15 hours on battery...
#79
TheinsanegamerN
user556: Intel didn't make efficient chips at all. They were fast CPUs just through brute force of having by far the biggest budget.
Hard disagree. The Core 2 lineup was an efficiency marvel when it released. Nothing IBM had could come close. You could have efficiency, or performance. Intel gave you both.
#80
Kodehawa
Native ARM application performance seems alright to me. I would have liked it to be more around $1,000, but I don't think the doomposting is all that warranted. Many applications run on ARM natively now, and many more are to come.

It's always good to have options.
phanbuey: I would be excited for this -- especially if you can run docker/postgres etc. on it natively like you can on cloud linux distros... Would be awesome to have a functional linux devbox that can last 15 hours on battery...
For now you can run WSL2 with aarch64 Ubuntu and get native ARM pgsql/Docker/etc. A proper Linux ARM laptop would definitely get me to buy one, but I have some faith in WoA, and application support has been expanding nicely.
#81
Neo_Morpheus
dragontamer5788: The irony of your post is that "Reduced Instruction Set Computer" these days includes the FJCVTZS instruction, aka Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero, instruction.
The irony of your post is that I was responding to a specific comment, which is correct since that was exactly what Intel did when they jumped from the 486 to the Pentium, yet you ignored that.
dragontamer5788: RISC vs CISC has been dead for decades. Ever since ARM and RISC-V adopted AES instructions, Javascript instructions, and SIMD, the world has gone 100% CISC.
See the response above, and no, I wasn't talking about that war, simply speaking about the original chip.
Darmok N Jalad: I dunno, I think Apple gave up on PowerPC when it became obvious that the G5 was under-delivering. They never made a G5 MacBook Pro, I believe because it wouldn't hit performance and thermal targets; even on desktop it required way too much cooling. The writing was on the wall for PPC.
Exactly. Perhaps I wasn't clear, but the main reason for the stagnation was the lack of volume to justify the development expenses.
Darmok N Jalad: No, and in fact, I misspoke. There was the iBook and PowerBook for laptops. Both topped out with the G4 (PowerPC 7447A). The G5 (970FX) was limited to the Power Mac, and a slower version landed in the iMac. It was a hotter chip, and it failed to reach the clock speeds that Apple expected from IBM. That was a big deal, since clock for clock, the G5 wasn't really any faster than the G4. The G5 was never able to scale down to mobile, which put Apple in a tough spot. At that time, Intel had moved on from NetBurst, and had really good options with Core. I don't think IBM was that interested in continuing to make CPUs anyway, so Intel was a really good landing spot for Apple, and served its purpose until Apple took matters into their own hands.
Again correct. Motorola bailed first; IBM took over what Motorola was supposed to deliver, but ended up quitting themselves due to the lack of volume.
wNotyarD: Technically, the M1 MacBook Air is a banger for what it costs now (~USD 720 around here). If only this price wasn't for that measly 8G/256G configuration...
Indeed. Same for the M1 Mac Mini.

They just need to add some more RAM at the entry-level prices.
Easy Rhino: It will only get better. x86 days are numbered.
This is a good read about it.

chipsandcheese.com/2024/03/27/why-x86-doesnt-need-to-die/
#82
v12dock
Block Caption of Rainey Street
wolf: Overhyped and under-delivered... I wonder if they copied Radeon Technologies Group's homework.

After test driving this laptop for the past couple of hours, I feel like this is near an M1 experience. The emulation layer works well, albeit with extra power usage. However, I did buy this laptop with the intention of finding a laptop with decent performance and exceptional battery life to replace my M2 Mac. So far, this ticks my boxes. I am using Office within Edge, and all but two processes are running native ARM64 code.

I can confirm the battery life is far superior to my x86 notebooks. I am test driving this to see if it's a viable competitor to the 13th Gen Intels I have been buying for work. Overall, my impressions are that the chipset is very impressive, comparable to 13th-14th Gen Intel while sipping power; however, the software needs another 6-12 months. Would I buy this for my workforce moving forward? I need another couple of weeks to decide, but from a purely general business usage perspective, I am genuinely impressed.
#83
Assimilator
Shitty smartphone CPU performs shittily in real i.e. desktop workloads. Next on today's news, water is still wet and the sky remains blue.

My distaste for these toy CPUs being sold at desktop CPU prices aside, I'm sure this will part fools from their money and thus cut into Apple's market share. And anything that hurts Apple is good for consumers.
#84
Gungar
P4-630: Good for browsing, email, online banking and online shopping, Netflix and Prime Video...
That's about it then.
So you're saying it would be a good CPU in a phone? Hmm... makes sense.
#85
Denver
v12dock: After test driving this laptop for the past couple of hours, I feel like this is near an M1 experience. The emulation layer works well, albeit with extra power usage. However, I did buy this laptop with the intention of finding a laptop with decent performance and exceptional battery life to replace my M2 Mac. So far, this ticks my boxes. I am using Office within Edge, and all but two processes are running native ARM64 code.

I can confirm the battery life is far superior to my x86 notebooks. I am test driving this to see if it's a viable competitor to the 13th Gen Intels I have been buying for work. Overall, my impressions are that the chipset is very impressive, comparable to 13th-14th Gen Intel while sipping power; however, the software needs another 6-12 months. Would I buy this for my workforce moving forward? I need another couple of weeks to decide, but from a purely general business usage perspective, I am genuinely impressed.
That doesn't mean anything. There are plenty of laptops with the same hardware and battery capacity but with huge disparities in battery life due to different implementations of the power/performance curve. I'm absolutely certain that there are x86 laptops with better battery life (Framework 13.5 or ThinkPad T14s, both 7840U), plus all the performance, stability and compatibility inherent in the dominant ISA.

I don't see any advantage in buying Qualcomm's buggy product.
#86
Fourstaff
Not the slam dunk we are looking for, but no slouch either. I wonder if they will be able to price this competitively to drive adoption. Will we be able to pair this with external graphics?
#87
wNotyarD
Fourstaff: Not the slam dunk we are looking for, but no slouch either. I wonder if they will be able to price this competitively to drive adoption. Will we be able to pair this with external graphics?
Doesn't seem to be the idea right now. Not even sure if the X Plus devices will be much cheaper either.

I have one question though: will the development of Prism help those with older 8cx devices?
#88
dragontamer5788
Neo_Morpheus: The irony of your post is that I was responding to a specific comment, which is correct since that was exactly what Intel did when they jumped from the 486 to the Pentium, yet you ignored that.

See the response above, and no, I wasn't talking about that war, simply speaking about the original chip.
What part of Intel's 16-bit original FPU modified x87, 32-bit extended MMX, SSE, SSE2, SSE3, SSE4.1, SSE4.2, AVX, AES-NI, BMI, BMI2, AVX512, AVX10 extended instruction set is RISC?

Oh, and the Pentium / i686 was dual-issue / dual-pipelined. Intel didn't experiment with microcode until the Pentium 4 IIRC, which is where people started talking bullshit about a "RISC core" even though the microcodes are "perform an entire AES encryption step", which is hardly "RISC". By the time we're talking about the 486 and Pentium, btw, we're at "16-bit original FPU modified x87, 32-bit extended MMX, SSE". The later stuff hasn't happened yet, but it seems very difficult to call this a "Reduced Instruction Set Computer", especially as the Pentium still supports the "loop" assembly instruction, push, pop, divide, and other "complex" instructions that perform multiple tasks in one instruction.

Unless you're talking load/store architectures btw, which is all the microcode system converts Intel Assembly into. Load/store is just one component of RISC, nothing else Intel / AMD do with x86_64 is anything close to the RISC they talked about in the 80s or 90s.

-----------

Oh, btw. both ARM and RISC-V are microcode engines today. Because their instructions are so complex they need multiple microcodes to implement them.

Everyone's Divide / Modulo instruction is microcoded. Because that's what you do, you destroy RISC-mindset because divide is so common that it makes sense to accelerate it at the machine level. But it also doesn't make sense to implement divide in its entirety all in hardware, because division is a very complex set of operations. So you compromise with microcode.

---------

My point is that CISC vs RISC has been stupid for decades. The entire debate is just a bunch of people misunderstanding microprocessor implementations and circlejerking over it.
#89
Assimilator
dragontamer5788: My point is that CISC vs RISC has been stupid for decades. The entire debate is just a bunch of people misunderstanding microprocessor implementations and circlejerking over it.
Mostly Apple fanboys, back when that was the Apple kool-aid. Nowadays it's the RISC-V fanboys because... IDK... RISC-V is open-source so that makes it "better" somehow? Or something equally facetious.
#90
dragontamer5788
Assimilator: Mostly Apple fanboys, back when that was the Apple kool-aid. Nowadays it's the RISC-V fanboys because... IDK... RISC-V is open-source so that makes it "better" somehow? Or something equally facetious.
It's more than just Apple.

PowerPC, ARM, and the legion of lost 80s/90s processors (SPARC, Alpha, MIPS, PA-RISC) all claimed to be RISC. Furthermore, not a single company claims their ISA to be CISC. CISC is what "other" companies call Intel's design, almost a derogatory term that's pushed upon the competitor to make these other designs feel better about themselves.

If I were to pick a company responsible for the RISC mania, it'd be IBM and its POWER architecture. IBM did many CPU designs and seemed to push RISC as a marketing term the most (even as RISC was used all over the place, IBM was probably the most powerful and widespread mouthpiece for the pseudo-concept).

That being said: deep discussions about pipelines, processor efficiency, throughput, etc. allowed these companies to leapfrog each other and advance CPUs. Alas, the "CISC" x86 instruction set from Intel (and later AMD) turned out to take on all those innovations anyway, and became the fastest processor family of the 2000s and 2010s.

-----------

RISC-V continues the tradition of claiming advanced ISA design, much like its SPARC / Alpha / MIPS brethren before it. After all, every new CPU design needs to say why it's better than the "CISC" machine over there, without necessarily using the trademarked term (Intel) in their marketing.

-------

I think with all the processor / ISA wars of the last decades, I've come to the conclusion that it doesn't matter. AMD came out with the dual ARM+x86 "Zen" design back in 2016, proving that all of these ISAs can convert between each other on a modern core anyway. And that's when I began to realize how similar ARM vs x86 was at the instruction level.

ARM stands for "Advanced RISC Machine", by the way, and was one of the other major marketers of the RISC term. (ARM wasn't very big in the 90s, but is a big deal now.) With AMD making an ARM + x86 compatible processor (even if it was just for its own internal tests), it proves how bullshit this whole discussion was. Decoders are not the critical element of modern processors, and they can be swapped out without much hassle. (At least, not a hassle for these multi-billion-dollar megacorps.)
#91
Darmok N Jalad
I’m curious if we’ll eventually see a desktop version of this chip, and a passively cooled option as well. I think they need to expand across multiple areas if they want us to take them seriously. How about something with expansion slots and external GPU support? That’s when I think we could more likely say that WOA has actually arrived.
#92
bug
Easy Rhino: It will only get better. x86 days are numbered.
That much is clear. I mean, x86 can't have more than 1 million days or so left, can it?
#93
kapone32
wNotyarD: Yup. Give 'em OLED screens all they want, but 1300 bucks is preposterous for the performance they deliver.
Sounds like the Surface all over again.
#94
qOmega
When an article begins by comparing the battery life of an OLED to a non-OLED, you know the article is gonna be a liiil biased. Js.
#95
bug
qOmega: When an article begins by comparing the battery life of an OLED to a non-OLED, you know the article is gonna be a liiil biased. Js.
It's not biased, it's just written on a sample size of one. And it acknowledges that.
#96
qOmega
bug: It's not biased, it's just written on a sample size of one. And it acknowledges that.
Comparing models that drain a significantly greater amount of battery is not the same as comparing a random benchmark that may or may not be top of the line. It's quite literally not comparable. It's a different product.
#97
bug
qOmega: Comparing models that drain a significantly greater amount of battery is not the same as comparing a random benchmark that may or may not be top of the line. It's quite literally not comparable. It's a different product.
Again, it's the only data we have atm. We'll get more relevant data shortly.
#98
TheinsanegamerN
Kodehawa: Native ARM application performance seems alright to me. Would have liked it to be more around $1000 but I don't think the doomposting is all that warranted. Many applications run on ARM natively now, and many more are to come.

It's always good to have options.
Define "many".
Kodehawa: For now you can run WSL2 with aarch64 ubuntu and you can get native ARM pgsql/docker/etc. A proper linux ARM laptop would definitely get me to buy one, though, but I have some faith in WoA and application support has been expanding nicely.
I wouldn't be trusting ANYTHING Microsoft makes these days.
kapone32: Sounds like the Surface all over again.
If Microsoft had any sense about them, this would have been the CPU in a Surface Go 4 or 5. It's the perfect form factor for ARM's tradeoffs.
#99
Kodehawa
TheinsanegamerN: Define "many".
So far, in a quick look at the applications I use the most:
- JetBrains IDEs (IntelliJ, Rider) work perfectly and have ARM64 versions, as do C# and Java developer tools.
- Many databases can be run natively using WSL2 on aarch64 Ubuntu, and honestly that's also how it's usually done on x86 Windows (e.g. Redis and pgsql usually tell you it's better to run them on WSL2, as they're Linux-first, obviously).
- Web browsers are fine (Edge, Firefox and Chrome all have WoA versions).
- foobar2000, 7-Zip and misc utilities all had ARM downloads; I was also able to find a qBittorrent buildbot.
- Most benchmarking software I have installed has an ARM64 version now.
- PowerToys, of course.
- HWiNFO64's developer has been contacted by Qualcomm and there are ongoing efforts to port it, which probably also means PerfMon will be available.
- Syncthing has a WoA version as well.
- The Office suite also has an ARM version.

With that I wouldn't be missing any programs I use, including developer and work tools.

Outliers:
- Discord (which you can run in your browser)
- Telegram (with plans)
#100
Darmok N Jalad
I read in the Windows Central review of the ASUS model that some games from the Windows Store refuse to run, giving an unsupported-architecture message. Ironically, some are MS first-party games, like Halo MCC and Halo Infinite. Running stuff from Steam works, but with graphical issues at times. It just shows the breadth and inconsistency at MS, that they won't emulate stuff with their own name on it, but others will. I get that this isn't a gaming platform, but still, you'd think MS would help lead the charge, and there's really no reason something like MCC shouldn't run pretty well.