Friday, November 2nd 2018

Apple's A12X Shows Us How The ARM MacBook Is Closer Than Ever

The shadow of an ARM-based MacBook has loomed for years. Rumors have been piling up, and the performance of Apple's own mobile processors becomes more convincing with each new generation of devices. The recent launch of the iPad Pro has reinforced those signs now that the Geekbench 4 results for its Apple A12X Bionic are in. According to this benchmark, the new iPad Pro isn't that far off in raw performance from a Core i9-8950HK-based MacBook Pro (2018): a Single-Core/Multi-Core score of 5020/18217 on the iPad Pro vs. 5627/21571 on the MacBook Pro. If that seems nuts, it's because it really is.

This comparison is pretty absurd in itself: the TDPs are quite different (7 W vs. 45 W), and there are also important distinctions in areas such as the memory used in these devices (most Apple laptops still use DDR-2133 modules) and, of course, the operating systems they run. Those numbers are just a rough reference point, but if we look back at Apple's recent keynote, that Photoshop CC demo really speaks for itself. And yes, comparisons are odious, but let's look for a slightly fairer one.
That's actually not that difficult, and Apple has given us a perfect candidate. If we don't want to compare such different processors, we can make a more compelling comparison between that A12X and the Core i5-8210Y that Apple has used in the new MacBook Air. The TDPs match, and there's a clear indication of how these Apple processors could become the heart of its laptops in the not-too-distant future. The Air's scores (again: different OS, LPDDR3 RAM): 4248/7828 in Geekbench 4. As Joel Hruska has explained at ExtremeTech, Geekbench 4 "is designed to capture an overall picture of SoC performance rather than highlighting just one metric", and digging into those numbers reveals some big differences in certain tests.
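To put those raw numbers in perspective, here is a quick back-of-the-envelope sketch in plain C that simply turns the scores quoted above into percentages. The only inputs are the rounded Geekbench 4 figures already mentioned, so treat the output as a rough reference, nothing more.

#include <stdio.h>

/* Ratio of the A12X score to the Intel score, as a percentage.
   The inputs are just the rounded Geekbench 4 figures quoted above. */
static void ratio(const char *label, double a12x, double intel)
{
    printf("%-26s A12X at %3.0f%% of the Intel score (%.0f vs %.0f)\n",
           label, a12x / intel * 100.0, a12x, intel);
}

int main(void)
{
    ratio("Single-core vs i9-8950HK:", 5020.0, 5627.0);   /* ~89%  */
    ratio("Multi-core  vs i9-8950HK:", 18217.0, 21571.0); /* ~84%  */
    ratio("Single-core vs i5-8210Y:",  5020.0, 4248.0);   /* ~118% */
    ratio("Multi-core  vs i5-8210Y:",  18217.0, 7828.0);  /* ~233% */
    return 0;
}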

That's important, sure, but the question arises anyway: will Apple launch an ARM-based MacBook? That question leads to another: which operating system would run on that machine? It certainly seems that iOS is the spoiled kid at Apple, with poor macOS long overshadowed by its mobile cousin. But iOS has no mouse support, for example, and it's an OS focused on making us work with one, and only one, application in the foreground. There are also conventional macOS apps that aren't available there (though they're coming, and Photoshop CC is a good example), so some people see this ARM-Apple-laptop-and-desktop world as not only possible, but inevitable.

If that change happens there will need to be a transition period, but we've been through that before. When Steve Jobs announced the jump to Intel processors in Macs, he told the audience that an Intel-compiled version of OS X had been running for five years in a secret lab at Cupertino. The same could be happening right now with an ARM-based MacBook running iOS. Or an ARM-compiled version of macOS, for that matter.

Interesting times, for sure.
Source: ExtremeTech

72 Comments on Apple's A12X Shows Us How The ARM MacBook Is Closer Than Ever

#1
Gungar
The macbook can't handle the 45W of the cpu anyway.
Posted on Reply
#2
Imsochobo
Geekbench produces hilarious numbers.
for instance ryzen on windows vs ryzen on macos is wildly different numbers...
Most other things are faster on windows while geekbench is faster on macos, I do not trust geekbench at all..
Posted on Reply
#3
R0H1T
GungarThe macbook can't handle the 45W of the cpu anyway.
What, which Ax is 45W TDP?
ImsochoboGeekbench produces hilarious numbers.
for instance ryzen on windows vs ryzen on macos is wildly different numbers...
Most other things are faster on windows while geekbench is faster on macos, I do not trust geekbench at all..
That still doesn't take away the fact that the Ax chips are now the undisputed leaders in the sub-15 W ULV segment; they've left the Core series well & truly in the dust.
Posted on Reply
#4
StrayKAT
I don't see why it'd be a problem to port Mac OS itself to ARM. The core (Darwin) is already the same. The problem is third party migration rather than Apple itself.
Posted on Reply
#5
qubit
Overclocked quantum bit
This just goes to prove what I've been saying for years, that given the same enhancements as x86 has received, the ARM architecture is superior in speed and power consumption. The day that the PC moves to an ARM architecture and x86 goes away will be one to celebrate. Not gonna be anytime soon, though. :ohwell:
Posted on Reply
#6
Vya Domus
I really wish people would stop using Geekbench and stop comparing mobile ARM cores with desktop x86 ones. Whoever seriously believes Apple is getting anywhere close to desktop performance with a chip that has single-digit TDP values (for the entire SoC, mind you) needs a reality check. Apple specifically designs very wide cores optimized for short performance bursts, which inadvertently causes poor clock and core scaling. They are also very expensive to manufacture and rely heavily on leading nodes in order to provide a sustainable increase in performance each year. Though all of this applies to all mobile SoC manufacturers.

Mobile SoCs have certainly come far in the last 5 years or so, but don't hold your breath expecting them to replace Intel and AMD GPUs inside Macs. Maybe for something like that 12-inch MacBook, maybe.
Posted on Reply
#7
dmartin
Vya DomusI really wish people would stop using Geekbench and stop comparing mobile ARM cores with desktop x86 ones. Whoever seriously believes Apple is getting anywhere close to desktop performance with a chip that has single-digit TDP values (for the entire SoC, mind you) needs a reality check. Apple specifically designs very wide cores optimized for short performance bursts, which inadvertently causes poor clock and core scaling. They are also very expensive to manufacture and rely heavily on leading nodes in order to provide a sustainable increase in performance each year. Though all of this applies to all mobile SoC manufacturers.

Mobile SoCs have certainly come far in the last 5 years or so, but don't hold your breath expecting them to replace Intel and AMD GPUs inside Macs. Maybe for something like that 12-inch MacBook, maybe.
That's why I specifically mentioned the new MacBook Air and its 7 W processor.
Posted on Reply
#8
Assimilator
LOL, people taking Geekbench numbers seriously again. When are y'all going to realise that a benchmark program written for mobile devices might - just might - not produce a realistic result when run on a desktop device, where the expected workloads are entirely different?

Call me back when the A12X can do a Cinebench run faster than a desktop... assuming the A12X can complete the run without causing whatever it's inside to catch fire.
Posted on Reply
#9
StrayKAT
dmartinThat's why I specifically mentioned the new MacBook Air and its 7 W processor.
If they just did it for one subset of their Macs, it'll probably fail. I mean, you wouldn't get much migration that way (much like Windows RT). But like you said, it could just be another iOS machine, with a keyboard.
Posted on Reply
#10
king of swag187
Vya DomusI really wish people would stop using Geekbench and stop comparing mobile ARM cores with desktop x86 ones. Whoever seriously believes Apple is getting anywhere close to desktop performance with a chip that has single-digit TDP values (for the entire SoC, mind you) needs a reality check. Apple specifically designs very wide cores optimized for short performance bursts, which inadvertently causes poor clock and core scaling. They are also very expensive to manufacture and rely heavily on leading nodes in order to provide a sustainable increase in performance each year. Though all of this applies to all mobile SoC manufacturers.

Mobile SoCs have certainly come far in the last 5 years or so, but don't hold your breath expecting them to replace Intel and AMD GPUs inside Macs. Maybe for something like that 12-inch MacBook, maybe.
My favorite part is when they tweeted "The new iPad X has the power of the Xbox One S" (they actually did this BTW)
Although, beating that Jaguar CPU isn't much of a challenge
Posted on Reply
#11
Gasaraki
Can you even compare the numbers of an ARM-based CPU running iOS vs. x86 running macOS? A better test would be the A12X running macOS using the x86 version of Geekbench; otherwise this means nothing.
Posted on Reply
#12
TheGuruStud
Here we go with BS benchmarks, again. They're all completely made up and GB is the worst.
Posted on Reply
#13
WikiFM
R0H1TWhat, which Ax is 45W TDP?
@Gungar meant the 8950K, not the Ax; he meant that the 8950K throttles in the MacBook and doesn't achieve its real performance.
Posted on Reply
#14
dmartin
GungarThe macbook can't handle the 45W of the cpu anyway.
Sure. That's why, as I've said earlier, there's another comparison with the new MacBook Air's Core i5-8210Y.
ImsochoboGeekbench produces hilarious numbers.
for instance ryzen on windows vs ryzen on macos is wildly different numbers...
Most other things are faster on windows while geekbench is faster on macos, I do not trust geekbench at all..
This goes beyond trust. As with any other benchmark, it doesn't give absolute truths, just reference points. The point is not whether one is really more powerful than the other.

The idea is that an ARM CPU apparently makes perfect sense for Apple. They like control. They like independence. It's easy to see from my point of view, but I'd like to know what you think about that possibility.
StrayKATI don't see why it'd be a problem to port Mac OS itself to ARM. The core (Darwin) is already the same. The problem is third party migration rather than Apple itself.
I don't see it either. I love macOS (well, up to a point), and Apple ported OS X to Intel back in the day. Developers went the Intel way too, of course, thanks to the transition period and tools (remember Rosetta?).

But macOS is not getting much love from Apple lately. The iOS App Store works really well for Apple, developers and users, and new generations are "iOS/Android native", not "macOS/Windows native". They already feel comfortable with iOS, but my main question is about mouse support: there's a strong legacy there for millions of users who certainly wouldn't find it easy to transition to a world without a mouse or a trackpad/touchpad.
qubitThis just goes to prove what I've been saying for years, that given the same enhancements as x86 has received, the ARM architecture is superior in speed and power consumption. The day that the PC moves to an ARM architecture and x86 goes away will be one to celebrate. Not gonna be anytime soon, though. :ohwell:
I think I won't celebrate that, but I'm pretty excited about what would come with that change if it happens. I've enjoyed the x86 era too much. Still enjoy it, in fact. We'll see about that timeframe though. I'm sure you've seen Windows 10 on ARM and SoCs such as the Snapdragon 850 designed specifically for that purpose. They don't seem that interesting at this very moment, but we'll see.
Vya DomusI really wish people would stop using Geekbench and stop comparing mobile ARM cores with desktop x86 ones. Whoever seriously believes Apple is getting anywhere close to desktop performance with a chip that has single-digit TDP values (for the entire SoC, mind you) needs a reality check. Apple specifically designs very wide cores optimized for short performance bursts, which inadvertently causes poor clock and core scaling. They are also very expensive to manufacture and rely heavily on leading nodes in order to provide a sustainable increase in performance each year. Though all of this applies to all mobile SoC manufacturers.

Mobile SoCs have certainly come far in the last 5 years or so, but don't hold your breath expecting them to replace Intel and AMD GPUs inside Macs. Maybe for something like that 12-inch MacBook, maybe.
Well, with over $237.1 billion in cash and $14 billion spent on R&D, I'd say maybe they're exploring that option. Not only the ARM MacBook, of course: they could be developing a 45 W ARM SoC, I guess. Tim Cook isn't Steve Jobs, but I wouldn't dare to deny that they're looking into this.

But yes. They'll start with the MacBook/MacBook Air. Without the maybe.
AssimilatorLOL, people taking Geekbench numbers seriously again. When are y'all going to realise that a benchmark program written for mobile devices might - just might - not produce a realistic result when run on a desktop device, where the expected workloads are entirely different?

Call me back when the A12X can do a Cinebench run faster than a desktop... assuming the A12X can complete the run without causing whatever it's inside to catch fire.
Did you see the Photoshop CC demo on the iPad Pro? If you didn't, please take a look at it. Besides that, I don't know how many people work with Cinebench, but I know that most of us usually work with the kind of apps iOS has: a browser, a music player, etc. The problem in my case is mouse support. I don't see myself working without a mouse anytime soon.
Posted on Reply
#15
efikkan
ImsochoboGeekbench produces hilarious numbers.

for instance ryzen on windows vs ryzen on macos is wildly different numbers...

Most other things are faster on windows while geekbench is faster on macos, I do not trust geekbench at all..
Yeah, different benchmarks benchmarking different things.
qubitThis just goes to prove what I've been saying for years, that given the same enhancements as x86 has received, the ARM architecture is superior in speed and power consumption. The day that the PC moves to an ARM architecture and x86 goes away will be one to celebrate. Not gonna be anytime soon, though. :ohwell:
We've been hearing this from the RISC camp since the late 80s: x86 has too much legacy overhead, RISC is more "efficient" and perhaps "faster". Even back then this was only partially true, but it's important to understand the premises. Back then, CISC chips like the 80386 and 80486 were larger than some of their RISC counterparts, and this was before CPUs hit the power wall, so die size was the deciding factor for scaling clock speed. The reduced instruction set of RISC resulted in smaller designs which were cheaper to make and could be clocked higher, potentially reaching higher performance levels in some cases. But RISC always had much lower performance per clock, so higher clock speeds were always a requirement for RISC to stay competitive.

Since the 80s, CPU designs have changed radically. Modern x86 implementations have nothing in common with their ancestors, with design features such as pipelining, OoO execution, caches, prefetching, branch prediction, superscalar execution, SIMD and application-specific acceleration. As clock speeds have increased beyond 3 GHz, new bottlenecks have emerged, like the power wall and the memory wall. x86 today is just an ISA, implemented as different microarchitectures. All major x86 implementations since the mid 90s have adopted a "RISC-like" microarchitecture, where x86 is translated into architecture-specific micro-operations, a sort of hybrid approach to get the best of both worlds.

x86 and ARM implementations alike have adopted all the techniques mentioned above to achieve our current performance level. Many ARM implementations have also added far more application-specific instructions. Along with SIMD extensions, these are no longer technically pure RISC designs. Application-specific instructions are the reason you can browse the web on your Android phone with a CPU consuming ~0.5 W, watch or record H.264 videos in 1080p and so on. Some chips even have instructions to accelerate Java bytecode. If modern smartphones were pure RISC designs, they would never be usable as we know them. The same goes for Blu-ray players; if you open one up you'll probably find a ~5 W MIPS CPU in there, and it relies either on a separate ASIC or special instructions for all the heavy lifting. One fact still remains: RISC still needs more instructions to do basic operations, and since the power wall is limiting clock speed, RISC will remain behind until it finds a way to translate them into more efficient CISC-style operations.
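As a small illustration of what those SIMD extensions look like in practice, here is a sketch in plain C using NEON intrinsics; it assumes an ARM target with NEON/ASIMD, and the example itself is purely illustrative rather than tied to any specific chip:

#include <stdint.h>
#include <arm_neon.h>  /* assumes an ARM target with NEON/ASIMD support */

/* Scalar version: conceptually one load, one add and one store per element. */
void add4_scalar(int32_t *dst, const int32_t *a, const int32_t *b)
{
    for (int i = 0; i < 4; i++)
        dst[i] = a[i] + b[i];
}

/* NEON version: one vector load per input, one vector add, one vector store. */
void add4_neon(int32_t *dst, const int32_t *a, const int32_t *b)
{
    int32x4_t va = vld1q_s32(a);
    int32x4_t vb = vld1q_s32(b);
    vst1q_s32(dst, vaddq_s32(va, vb));
}

The vector version does the same work in a handful of instructions, which is exactly the kind of acceleration that keeps these low-power CPUs usable.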

I want to refer to some of the findings from the "VRG RISC vs CISC study" from the University of Wisconsin-Madison:

The only real efficiency advantage we see with ARM is in low-power CPUs. But this has nothing to do with the ISA, just Intel failing to make their low-end x86 implementations scale well; that is why we see some ARM designs competing with Atom.
Posted on Reply
#16
ArbitraryAffection
I read somewhere that the Apple ARM-based SOCs can get close to desktop-class x86 cores running Geekbench, but when an actual workload is used (I think it was 3DMark Physics test) the desktop core completely wipes the floor with the ARM part. But I guess RISC CPUs will never be used for desktop performance because of exactly what it says on the tin: they have less instructions, so when you need to use one of those omitted instructions, performance tanks through the floor, right?
Posted on Reply
#17
efikkan
ArbitraryAffectionBut I guess RISC CPUs will never be used for desktop performance because of exactly what it says on the tin: they have less instructions, so when you need to use one of those omitted instructions, performance tanks through the floor, right?
A smaller instruction set and less powerful instructions mean you need more instructions to do the same work, even for basic things like memory operations, moving between registers, etc. When you need more instructions to do the same work, you have to compensate for the lower per-instruction throughput in some other way: substantially higher clocks, faster operations, etc. Imagine how highly clocked an ARM CPU would have to be to compete with the latest Coffee Lake CPUs.

To make matters worse, modern CPUs rely heavily on their front-end to prefetch, predict, and keep the CPU saturated. When there is more code, everything the front-end does becomes harder, including cache efficiency, OoO execution, prefetching, etc.
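To make the instruction-count point concrete, here is a minimal sketch; the comments describe typical code generation for each ISA, not the output of any particular compiler:

/* Illustrative only: a read-modify-write on a value in memory.
 * x86 can encode this as a single instruction with a memory operand,
 *   e.g.  add dword ptr [rdi], 1
 * while a load/store ISA such as ARM typically needs three,
 *   e.g.  ldr w8, [x0] / add w8, w8, #1 / str w8, [x0]
 * Exact codegen depends on the compiler and optimization level. */
void bump(int *counter)
{
    *counter += 1;
}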
Posted on Reply
#18
stimpy88
What makes the A12X so special is that it only uses 4 high-performance cores and 7 GPU cores.

Imagine this chip with 8 high-performance cores and a 16-core GPU with doubled caches throughout, as well as a slight bump in clock speed to match Intel's single-core performance. You're still looking at a sub-20 W chip with vastly superior performance to anything reasonable from Intel. A MacBook could have graphics much more powerful than an Xbox One X, and a CPU nearly 4 times faster in multi-core performance, as well as significantly superior single-core performance.

The A13X will signal Intel's death knell on the portable Mac platform next year. It will get to the point where Apple will have to apologize for just how much faster an iPad is than their top-of-the-range MacBook Pro! I really expect to see an Apple-CPU-powered MacBook on the market within 2 years. It's already at the point where they simply cannot ignore Intel's complete incompetence at CPU design and manufacturing for much longer.

Intel, we PC enthusiasts told you that a 2-5% IPC increase every time you release a "new" generation of chips was not going to cut it; you were warned, many years ago... And you replied "F**k you, we will put the prices up anyway, and you will buy it"... Intel, you're the one that's going to "buy it" soon.
Posted on Reply
#19
StrayKAT
Until Apple actually commits to gaming and networking within that industry, I don't really care if it's faster than an Xbox or not (let alone a PC). I know they've tossed the idea around though, especially for AppleTV.
Posted on Reply
#20
First Strike
Yea yea yea, I know the A12 & A11's big cores have an extravagant configuration comparable to Intel's counterparts. But would someone please provide some data other than the infamous Geekbench?

Personally, I welcome this change, because I think this is what an ultrabook should do.
But it will seriously cripple professional workloads. No matter how extravagant the ARM big-core configuration may be, it doesn't have an AVX equivalent. The GPU side is quite close. Ironically enough, the A12's GPU is slightly lagging, and Intel has better compatibility.

Apple fans will have a hard time editing videos on the A12.
Posted on Reply
#21
R0H1T
ArbitraryAffectionI read somewhere that the Apple ARM-based SOCs can get close to desktop-class x86 cores running Geekbench, but when an actual workload is used (I think it was 3DMark Physics test) the desktop core completely wipes the floor with the ARM part. But I guess RISC CPUs will never be used for desktop performance because of exactly what it says on the tin: they have less instructions, so when you need to use one of those omitted instructions, performance tanks through the floor, right?
You forgot to mention that desktop class chips, I assume we're still talking notebooks here, use at least 15W avg TDP with PL2 of generally 25W & the ARM counterparts do well even with half of that i.e. 7W or thereabouts.

I still don't get why everyone is so hung up on GB numbers, are there better cross platform benchmarks around? Is Intel this infallible or does the PC crowd still think tablets are toys? The same was said about Intel vs AMD, before Zen, & we know how that turned out.
Posted on Reply
#22
Fabio
How can a little ARM CPU do about 5000 in single-core Geekbench when a 5 GHz OC'd 9700K is about 6500?
Posted on Reply
#23
StrayKAT
R0H1TYou forgot to mention that desktop class chips, I assume we're still talking notebooks here, use at least 15W avg TDP with PL2 of generally 25W & the ARM counterparts do well even with half of that i.e. 7W or thereabouts.

I still don't get why everyone is so hung up on GB numbers, are there better cross platform benchmarks around? Is Intel this infallible or does the PC crowd still think tablets are toys? The same was said about Intel vs AMD, before Zen, & we know how that turned out.
I don't think tablets are toys, but they're highly consumption-oriented, while PCs handle both consumption and creation (and their capability to consume goes far beyond what a tablet can do... with games especially, but also just storing/saving movies/music/etc., if you want to do that too. I don't personally, but I'm glad it doesn't hold me back).
Posted on Reply
#24
medi01
Reminds me of the times when Macoids were running around with "fastest eva!" claims with IBM's chip inside, in the early 2000s.

Yeah, right, go for it, Apple.
Posted on Reply
#25
StrayKAT
medi01Reminds me of the times when Macoids were running around with "fastest eva!" claims with IBM's chip inside, in the early 2000s.

Yeah, right, go for it, Apple.
They were great chips, until the last run. Sort of on par with Intel (604e = Pentium Pro, G3 = PII, etc), but Intel went crazy in the megahertz wars. Maybe IBM would have figured it out if given enough time, but Apple was their main co-designer/customer and they jumped ship. I mean, IBM's Power chips are better than Xeon, so I don't see why PowerPC wouldn't have evolved as well.

The real failure of PowerPC was that not many adopted it (and Apple probably helped kill it off anyway, when they destroyed the Mac clones). That was its real intent: for IBM to own the PC market again. They wanted NT, Macs, and anything else running on it.
Posted on Reply