Friday, November 2nd 2018

Apple's A12X Shows Us How The ARM MacBook Is Closer Than Ever

The shadow of an ARM-based MacBook has loomed for years. Rumors keep adding up, and the performance of Apple's own mobile processors grows more convincing with each new generation of devices. The recent launch of the iPad Pro has reinforced those signs now that the Geekbench 4 results of its Apple A12X Bionic are known. According to this benchmark, the new iPad Pro isn't that far in raw performance from a Core i9-8950HK-based MacBook Pro (2018): single-core/multi-core scores of 5020/18217 on the iPad Pro vs. 5627/21571 on the MacBook Pro. If this seems nuts, it's because it really is.

This comparison is pretty absurd in itself: the TDPs are quite different (7 W vs. 45 W), but there are also important distinctions in areas such as the memory used in these devices (most Apple laptops still use DDR4-2133 modules) and, of course, the operating systems they run. Those numbers are just a rough reference, but if we pay attention to Apple's recent keynote, that Photoshop CC demo can really speak for itself. And again, comparisons are odious, but let's look for a slightly fairer one.
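(For a rough sense of scale before we do: 18217 multi-core points at a 7 W TDP works out to roughly 2600 points per watt for the A12X, versus 21571 points at 45 W, roughly 480 points per watt, for the Core i9-8950HK. TDP is a thermal rating rather than measured power draw, so treat that fivefold gap as a back-of-the-envelope illustration, not a measurement.)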
That is, in fact, not that difficult, and Apple has given us a perfect candidate. If we don't want to compare such different processors, we can make a more compelling comparison between that A12X and the Core i5-8210Y that Apple has used in the new MacBook Air. The TDPs match, and there's a clear indication of how these Apple processors could be the heart of their laptops in the not-too-distant future. The MacBook Air's scores (again, different OS, LPDDR3 RAM): 4248/7828 in Geekbench 4. As Joel Hruska has explained at ExtremeTech, Geekbench 4 "is designed to capture an overall picture of SoC performance rather than highlighting just one metric", and digging into those numbers reveals some big differences in certain tests.

That's important, sure, but the question arises anyway: will Apple launch an ARM-based MacBook? That question raises another: what will be the operating system on that machine? It certainly seems that iOS is the spoiled kid at Apple, with poor macOS long overshadowed by its mobile cousin. But iOS has no mouse support, for example, and it's an OS focused on making us work with one and only one application in the foreground. There are also conventional macOS apps that aren't available there (but they're coming, and Photoshop CC is a good example), so some people see this ARM-Apple-laptop-and-desktop world as not only possible, but inevitable.

If that change occurs there will be a transitional period, but we've experienced that before. When Steve Jobs announced the jump to Intel processors in Macs, he told the audience how an Intel-compiled version of OS X had been running for five years in a secret lab at Cupertino. The same could be happening right now, but with an ARM-based MacBook running iOS. Or maybe an ARM-compiled version of macOS, for that matter.

Interesting times, for sure.
Source: ExtremeTech

72 Comments on Apple's A12X Shows Us How The ARM MacBook Is Closer Than Ever

#26
XiGMAKiD
dmartin: (most Apple laptops still use DDR-2133 modules)
Ty Lee... uhh I mean typo
#27
efikkan
Fabio: How can a little ARM CPU do about 5000 in single-core in Geekbench when a 5 GHz OC'd 9700K is about 6500?
To find out, we'd need to take a look at the source code and see what it is actually benchmarking on various platforms.

I assume it benchmarks various simulated workloads, including things like compression/decompression, encryption, video decoding/encoding, image formats, etc. If the benchmark relies on just the standard instruction set, then you get an impression of the pure performance. If, on the other hand, the benchmark uses various specific instructions to accelerate certain workloads, then the benchmark becomes a measurement of those specific algorithms, not of generic performance.

The x86 CPU in a desktop is very good at generic workloads, though it has a few application-specific instructions too. But this is nothing compared to many ARM implementations. The CPUs in smartphones and tablets do, however, rely heavily on specific instructions to accelerate workloads. This of course gives good energy efficiency for the specific algorithms that are accelerated in software built to use them, but anything outside that will perform poorly. It should be obvious that there is a limited number of such accelerations that can be included on a chip, and support for new algorithms can't be added until they are developed. This means that such hardware becomes obsolete very quickly. But this of course fits very well with the marketing strategy of Apple and other smartphone/tablet makers; they can customize the chips to accelerate the features they want, and Apple controls its software too, which gives it an extra advantage, leading to new products every cycle that look much better at some new popular task.
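To make this concrete, here's a minimal sketch of my own (not Geekbench's actual code) of what an application-specific instruction buys you. With x86 AES-NI, a single instruction performs an entire AES round; a generic C fallback needs dozens of table lookups and XORs for the same round, and ARMv8 has its own equivalents (AESE/AESMC):

/* Minimal AES-NI sketch: encrypt one 16-byte block.
   Illustrative only; key expansion is omitted and rk[] holds
   11 pre-expanded AES-128 round keys (dummy zeros in main).
   Compile on x86 with: gcc -maes aesni_demo.c -o aesni_demo */
#include <wmmintrin.h>  /* AES-NI intrinsics */
#include <stdint.h>
#include <stdio.h>

static __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);         /* initial AddRoundKey */
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, rk[i]);  /* one full AES round per instruction */
    return _mm_aesenclast_si128(block, rk[10]);  /* final round, no MixColumns */
}

int main(void)
{
    __m128i rk[11];
    for (int i = 0; i < 11; i++)
        rk[i] = _mm_setzero_si128();             /* dummy all-zero round keys */
    uint8_t out[16];
    _mm_storeu_si128((__m128i *)out,
                     aes128_encrypt_block(_mm_set1_epi8(0x42), rk));
    printf("first output byte: %02x\n", out[0]);
    return 0;
}

A benchmark whose crypto subtest runs through instructions like these is measuring the accelerator, not the core's generic throughput.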
#28
Vya Domus
R0H1T: does the PC crowd still think tablets are toys?
Yeah, we do. I am baffled every time I see Apple do a demo with something like Photoshop: who the hell actually does serious work on their tablet on a daily basis? Actually, let me put it the other way around: who could afford to pay $1,200, or however expensive this iPad is, and not have a high-end laptop around for that?

Tablets are indeed primarily toys; I have never seen or heard of anyone using them outside of playing games and watching Netflix. And if you're going to tell me that there has to be someone who uses them for real work, then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well. That still doesn't make it any less of a joke.
#29
ypsylon
Well, I'm certainly not an Apple cultist by any stretch (I have only one Apple thing, an iPhone 7, as I was sick and tired of Android, and it's superb). Their pricing structure is abominable, and support isn't much better, assuming your stuff is at most 3 years old (big shout-out to Louis Rossmann, 'The Educator' :toast:); otherwise, support is a four-letter word: GTFO. I wish Apple started selling their stuff without warranties of any kind (except an initial, say, 30-day period with possible DOA), charged half the price, and stopped pretending there is support.

On the other hand: I watched the iPad presentation and found it interesting, even tempting, but... it doesn't run macOS (where I can get the stuff I work with: Clip Studio/Corel Painter), only mobile iOS (stuff like Procreate is pathetic), and that's a deal breaker for me. If it were the macOS ecosystem, I would jump on it in a jiffy. :snapfingers: It is vastly superior in every possible way to products like the by-now totally archaic Wacom Mobile Studio Pro 16 (the 13 model is so lame I won't say more).

If you've never painted outdoors, don't pretend you know everything. The WMSP16 is a great tool (essentially a Windows tablet PC) when you just want to take your art stuff and get away from the room, the desk, and the cables: go and paint in the park or something. But here's the deal: because it is a full-blown PC, you can use desktop apps. The iPad is a weird thing. It would be great if it were a macOS version of the WMSP, but it is not. Sadly. :(
#30
efikkan
Vya Domus: Yeah, we do. I am baffled every time I see Apple do a demo with something like Photoshop: who the hell actually does serious work on their tablet on a daily basis?
Yes, exactly.
And even in terms of ergonomics: a tablet has to lie flat or sit in a stand, and touch-only input is imprecise and inefficient for most serious work.

I do see a use for tablets, but purely as "toys". One of the neatest uses I've found for tablets is sheet music, or viewing photos.

I would like to see cheaper 12-15" tablets; the iPad Pros cost at least twice what I think they are worth. But a tablet is always going to be a supplement.
Vya Domus: … then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well.
Oh, but there is: LGR - "Doom" on a Calculator! Ti-83 Plus Games Tutorial
:D
#31
stimpy88
R0H1T: You forgot to mention that desktop-class chips (I assume we're still talking notebooks here) use at least 15 W avg TDP with a PL2 of generally 25 W, & the ARM counterparts do well even with half of that, i.e. 7 W or thereabouts.

I still don't get why everyone is so hung up on GB numbers; are there better cross-platform benchmarks around? Is Intel this infallible, or does the PC crowd still think tablets are toys? The same was said about Intel vs AMD before Zen, & we know how that turned out.
How about that Photoshop demo? Was that something a Chromebook from 2013 could pull off? Because that's the kind of performance you're insinuating. You are correct that Geekbench is not a totally platform- and architecture-agnostic benchmark, despite what its authors claim, but I think it's obvious that whatever the A12X's real performance, it's damn close enough to Intel, and it can't all be magic tricks and hocus pocus.
#32
R0H1T
stimpy88: How about that Photoshop demo? Was that something a Chromebook from 2013 could pull off? Because that's the kind of performance you're insinuating. You are correct that Geekbench is not a totally platform- and architecture-agnostic benchmark, despite what its authors claim, but I think it's obvious that whatever the A12X's real performance, it's damn close enough to Intel, and it can't all be magic tricks and hocus pocus.
I'm not sure what you're saying, i.e. do you agree with the premise that Ax can replace x86 in the MB or even the MBP? We'll leave the entire desktop lineup debate for a more appropriate time, when Apple has more than one die, because I don't see the same chip going in an iPhone & an i9 replacement.
Vya Domus: Yeah, we do. I am baffled every time I see Apple do a demo with something like Photoshop: who the hell actually does serious work on their tablet on a daily basis? Actually, let me put it the other way around: who could afford to pay $1,200, or however expensive this iPad is, and not have a high-end laptop around for that?

Tablets are indeed primarily toys; I have never seen or heard of anyone using them outside of playing games and watching Netflix. And if you're going to tell me that there has to be someone who uses them for real work, then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well. That still doesn't make it any less of a joke.
So your argument is that iOS/the iPad is a toy, therefore no one can do any serious work on them? What do you say to people who end up Boot Camping into Windows every time they open a Mac? And who does serious work on a laptop? Don't we have desktops for that, or workstation/server-class PCs? What does serious work even mean in this case?
efikkan: To find out, we'd need to take a look at the source code and see what it is actually benchmarking on various platforms.

I assume it benchmarks various simulated workloads, including things like compression/decompression, encryption, video decoding/encoding, image formats, etc. If the benchmark relies on just the standard instruction set, then you get an impression of the pure performance. If, on the other hand, the benchmark uses various specific instructions to accelerate certain workloads, then the benchmark becomes a measurement of those specific algorithms, not of generic performance.

The x86 CPU in a desktop is very good at generic workloads, though it has a few application-specific instructions too. But this is nothing compared to many ARM implementations. The CPUs in smartphones and tablets do, however, rely heavily on specific instructions to accelerate workloads. This of course gives good energy efficiency for the specific algorithms that are accelerated in software built to use them, but anything outside that will perform poorly. It should be obvious that there is a limited number of such accelerations that can be included on a chip, and support for new algorithms can't be added until they are developed. This means that such hardware becomes obsolete very quickly. But this of course fits very well with the marketing strategy of Apple and other smartphone/tablet makers; they can customize the chips to accelerate the features they want, and Apple controls its software too, which gives it an extra advantage, leading to new products every cycle that look much better at some new popular task.
No different from SSE, AVX, AES, or SHA "accelerated" benchmarks. In fact, x86 has way more instruction-set extensions for certain workloads than ARM.

There are 3 x86 implementations as well; would you like to call them out?

Which is actually a good thing, & the reason why the iPhone beats every other phone out there in most synthetic & real-world benchmarks, in fact virtually all of them. This is also the reason why x86 won't beat a custom ARM chip across the board, should Apple decide to replace the former in their future laptop &/or desktop parts.
#33
StrayKAT
PCs are like the Ford Raptors of the computing world: they do just about everything well. It's the ultimate all-purpose vehicle, imo, and that will always be in demand, even with the current trend toward specialized devices.
#34
R0H1T
Yeah, the "PC" is a jack of all trades & master of some; that's all it needs to be. It doesn't have to be the best at everything, never mind the fact that it isn't the best at lots of things atm.
#35
Vayra86
It's fun to say Apple has an ARM design competing with non-ARM hardware, but I really don't care as long as the software is not cross-compatible.

As long as that isn't a universal thing, ARM and x86 will always be two separate worlds. As long as it is labor-intensive to port back and forth, or to emulate, the performance of ARM is irrelevant. The migration from x86 to ARM is way too slow for it to matter.

macOS on ARM... who cares? It's a proprietary OS, and it only gets more isolated and less versatile by moving away from x86, in terms of software. And it's not like macOS was ever really good at that. Look at the reason Windows is still huge: enterprise + gaming. Both are elements macOS fails to provide properly.
#36
Vya Domus
R0H1T: So your argument is that iOS/the iPad is a toy, therefore no one can do any serious work on them?
I am not arguing anything; they are primarily used for entertainment, not productivity. Apple insists on calling it "Pro" for marketing purposes and to justify its crazy price tag. You're not buying a thousand-dollar tablet to watch Netflix on it, you're actually a content creator. :laugh:
#37
trparky
I keep reading posts in this thread about RISC vs CISC. First, modern x86_64 chips aren't CISC chips like they used to be back in the day, with complex instruction sets. Well, OK, they are on the surface, but that's where it ends. Ever heard of something called a micro-op? There's an x86 translation layer, or instruction decoder, in every modern processor that takes the x86 instructions and converts them into RISC-like micro-ops. Both Intel and AMD have done it for years. All in all, modern x86_64 chips are designed completely differently from what they used to be back in the day; they have more in common with RISC than CISC.
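A rough illustration of what that decoder does (the exact micro-op split is microarchitecture-specific; the breakdown below is an assumption for illustration, not documented behavior of any particular Intel or AMD core):

#include <stdio.h>

/* How a "CISC" x86 instruction becomes RISC-like micro-ops. */
long sum_array(const long *a, long n)
{
    long s = 0;
    for (long i = 0; i < n; i++)
        s += a[i];   /* a compiler typically emits something like
                          add rax, [rdi + rsi*8]   ; one x86 instruction
                        which the decoder splits into RISC-like micro-ops:
                          1) load: tmp <- mem[rdi + rsi*8]
                          2) add:  rax <- rax + tmp
                        a classic RISC ISA needs separate load and add
                        instructions in the first place */
    return s;
}

int main(void)
{
    long a[4] = {1, 2, 3, 4};
    printf("%ld\n", sum_array(a, 4));  /* prints 10 */
    return 0;
}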
#38
Vya Domus
trparky: I keep reading posts in this thread about RISC vs CISC. First, modern x86_64 chips aren't CISC chips like they used to be back in the day, with complex instruction sets. Well, OK, they are on the surface, but that's where it ends. Ever heard of something called a micro-op? There's an x86 translation layer, or instruction decoder, in every modern processor that takes the x86 instructions and converts them into RISC-like micro-ops. Both Intel and AMD have done it for years.
CISC remains CISC no matter the implementation, and it's always going to be more robust. Complex x86 instructions optimized at the micro-op level will always be faster than the ARM equivalent.
#39
efikkan
R0H1T: No different from SSE, AVX, AES, or SHA "accelerated" benchmarks. In fact, x86 has way more instruction-set extensions for certain workloads than ARM.
No, you are mixing things up.
SSE and AVX are SIMD operations; these are general purpose. ARM has its own optional counterparts for these.
AES and SHA are application-specific instructions.
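To illustrate the distinction with a sketch of my own (assuming AVX is available; compile with gcc -mavx): a SIMD loop is general purpose because the same pattern accelerates any data-parallel arithmetic, while something like AESENC only ever computes an AES round.

#include <immintrin.h>
#include <stdio.h>

/* General-purpose SIMD: dst[i] = a[i] * k + b[i], 8 floats per AVX op.
   The ARM counterpart would use NEON intrinsics (vmulq_f32/vaddq_f32),
   4 floats at a time; general purpose either way. */
void scale_add(float *dst, const float *a, const float *b, float k, int n)
{
    __m256 vk = _mm256_set1_ps(k);                 /* broadcast k to 8 lanes */
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(_mm256_mul_ps(va, vk), vb));
    }
    for (; i < n; i++)                             /* scalar tail */
        dst[i] = a[i] * k + b[i];
}

int main(void)
{
    float a[10], b[10], dst[10];
    for (int i = 0; i < 10; i++) { a[i] = (float)i; b[i] = 1.0f; }
    scale_add(dst, a, b, 2.0f, 10);
    printf("%f\n", dst[9]);   /* 9*2+1 = 19.0 */
    return 0;
}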
#40
Xpect
Vya Domus: Tablets are indeed primarily toys; I have never seen or heard of anyone using them outside of playing games and watching Netflix. And if you're going to tell me that there has to be someone who uses them for real work, then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well. That still doesn't make it any less of a joke.
Honestly, before it got stolen along with the rest of my hardware, I used my tablet more than my PC: for everything from reading books or comics, through web browsing and Netflix in bed, to even simple text editing and light spreadsheet work (for the latter two I used my Bluetooth keyboard, previously used only as a remote for my HTPC). My main PC was only fired up for gaming, unless I was playing something on the PS3, PS4, or Switch, which happened more than I would have anticipated before I got those consoles.
#41
R0H1T
efikkan: No, you are mixing things up.
SSE and AVX are SIMD operations; these are general purpose. ARM has its own optional counterparts for these.
AES and SHA are application-specific instructions.
I'm not. How many applications make use of AVX, or rather can make use of AVX? Heck, SSE only dates back to 1999, while PC computing is much older than that. What are the equivalent ARM counterparts you're talking about?

edit - Scratch that, I get what you're saying.
#42
newtekie1
Semi-Retired Folder
Considering that just 3 years ago Apple was running their laptops off the absolutely horrible Core M 5Y31, I can certainly see the A12X or one of its successors taking that processor's place. Apple is not afraid of putting under-powered crap processors in their computers if it means they can make more money, make the product "look cooler" in some way, and add some kind of bullshit marketing point about it.
#43
Patriot
Fabio: How can a little ARM CPU do about 5000 in single-core in Geekbench when a 5 GHz OC'd 9700K is about 6500?
Because they have been faking their numbers on iOS for so many years, they didn't think of what would happen when they reached desktop numbers and everyone started asking them why their laptops weren't on ARM...

Here is by far the fastest ARM server chip...
www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/6/
www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7

And it is... competitive-ish... using more cores and more power.
#44
Vya Domus
newtekie1: Apple is not afraid of putting under-powered crap processors in their computers if it means they can make more money
The irony is that these chips likely cost more to make than anything comparable from Intel.
#45
newtekie1
Semi-Retired Folder
Vya Domus: The irony is that these chips likely cost more to make than anything comparable from Intel.
Yeah, but after you add in Intel's cut, it is probably cheaper for Apple to make their own CPUs than buy them from Intel.
#46
R0H1T
Patriot: Because they have been faking their numbers on iOS for so many years, they didn't think of what would happen when they reached desktop numbers and everyone started asking them why their laptops weren't on ARM...

Here is by far the fastest ARM server chip...
www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/6/
www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7

And it is... competitive-ish... using more cores and more power.
Do you have any evidence of faked iOS benches? I know Samsung did it, Huawei as well as Intel, but I have yet to see Apple devices called out for faking benchmarks in recent times.

You mean the only ARM server chip; QC's project is dead, & every other ARM-based vendor seems miles off in their efforts to deliver a viable server chip.

And that's related to desktops or notebooks how? Not to mention Apple is a completely different beast, with close to a decade's worth of experience behind them.
newtekie1: Yeah, but after you add in Intel's cut, it is probably cheaper for Apple to make their own CPUs than buy them from Intel.
Absolutely, Intel has insane margins, just like Apple.
#47
Reeves81x
efikkan: Yeah, a different benchmark benchmarking different things.


We've been hearing this from the RISC camp since the late 80s: x86 has too much legacy overhead, RISC is more "efficient" and perhaps "faster". Even back then this was only partially true, but it's important to understand the premises. Back then, CISC chips like the 80386 and 80486 were larger than some of their RISC counterparts, and this was before CPUs hit the power wall, so die size was the deciding factor for scaling clock speed. The reduced instruction set of RISC resulted in smaller designs which were cheaper to make and could be clocked higher, potentially reaching higher performance levels in some cases. But RISC always had much lower performance per clock, so higher clock speed was always a requirement for RISC to perform.

Since the 80s, CPU designs have changed radically. Modern x86 implementations have nothing in common with their ancestors, with design features such as pipelining, OoO execution, caches, prefetching, branch prediction, superscalar execution, SIMD, and application-specific acceleration. As clock speeds have increased beyond 3 GHz, new bottlenecks have emerged, like the power wall and the memory wall. x86 today is just an ISA, implemented as different microarchitectures. All major x86 implementations since the mid-90s have adopted a "RISC-like" microarchitecture, where x86 is translated into architecture-specific micro-operations, a sort of hybrid approach, to get the best of both worlds.

Both x86 and ARM implementations have adopted all the techniques mentioned above to achieve our current performance level. Many ARM implementations have used many more application-specific instructions. Along with SIMD extensions, these are no longer technically purely RISC designs. Application-specific instructions are the reason why you can browse the web on your Android phone with a CPU consuming ~0.5 W, watch or record h.264 videos in 1080p, and so on. Some chips even have instructions to accelerate Java bytecode. If modern smartphones were pure RISC designs, they would never be usable as we know them. The same goes for Blu-ray players; if you open one up you'll probably find a ~5 W MIPS CPU in there, and it relies either on a separate ASIC or special instructions for all the heavy lifting. One fact still remains: RISC still needs more instructions to do basic operations, and since the power wall is limiting clock speed, RISC will remain behind until it finds a way to translate to more efficient CISC-style operations.

I want to refer to some of the findings from the "VRG RISC vs CISC study" from the University of Wisconsin-Madison:


The only real efficiency advantage we see with ARM is in low-power CPUs. But this has nothing to do with the ISA, just Intel failing to make their low-end x86 implementations scale well; this is why we see some ARM designs compete with Atom.
That study was based on nearly 10-year-old tech. Ax processors have come a long way since the iPhone 3GS, lol.
Not saying the study's wrong, just that Apple has made significant improvements to their version of the ARM core over the past decade. A study based on processors from a dozen generations back will be an inaccurate representation of the current generation of Ax processors. Intel, on the other hand, is still using the same architecture and has seen only small increases in performance. In fact, according to what I can find at a glance, even the A11 was an incredible 100x faster than the 3GS. Name another processor in the past decade that can claim such a feat... hell, even the A8 in the iPhone 6 was 50x faster than the 3GS. I'd like to see a study done on the most recent CPUs. THAT would be interesting.
trparky: I keep reading posts in this thread about RISC vs CISC. First, modern x86_64 chips aren't CISC chips like they used to be back in the day, with complex instruction sets. Well, OK, they are on the surface, but that's where it ends. Ever heard of something called a micro-op? There's an x86 translation layer, or instruction decoder, in every modern processor that takes the x86 instructions and converts them into RISC-like micro-ops. Both Intel and AMD have done it for years. All in all, modern x86_64 chips are designed completely differently from what they used to be back in the day; they have more in common with RISC than CISC.
This too^
#48
darklight2k2
Everybody talking about clocks is forgetting that some x86 instructions can't be completed in just one clock.
So even if modern processors try to optimize their execution, also using branch prediction, there's a limit to their IPC. As someone else said, those instructions are then split into micro-ops, making the architecture really similar to RISC.
The real difference right now is the market those chips are aimed at. Also, developers seem unable to write multithreaded programs correctly, and this is made very obvious by the low adoption of Vulkan and DX12.
In sheer computational power, ARM CPUs have already found their place. When the x86 architecture can no longer improve, we'll probably finally see the benefits of multicore and RISC CPUs.
#49
Unregistered
Looks like the x86 days are finally numbered...

AMD was out to lunch for a decade, and Intel preferred to milk its willing customer base, pushing innovation at a snail's pace.

All the while, ARM was laying down the foundation work it needed. It's not even about Apple's version of the chip. Now ARM is ready and pushing beyond its initial market.
#50
efikkan
yakk: Looks like the x86 days are finally numbered...

AMD was out to lunch for a decade, and Intel preferred to milk its willing customer base, pushing innovation at a snail's pace.

All the while, ARM was laying down the foundation work it needed. It's not even about Apple's version of the chip. Now ARM is ready and pushing beyond its initial market.
Your lack of realism is astounding.

The irony is that AMD's own next-gen ARMv8 design, K12, is MIA, and the focus has shifted to making five iterations of Zen, an architecture originally intended as an intermediate solution until K12 conquered the desktop and server markets. The current status of K12 is unknown, but by the time it's potentially done it will probably be outdated, if it hasn't already been canceled like the rest of AMD's failed ARM ventures.