
Editorial: Apple's A12X Shows Us How The ARM MacBook Is Closer Than Ever

Reminds me of the times when Macoids were running around with "fastest eva!" claims with IBM's chips inside, in the early 2000s.

Yeah, right, go for it, Apple.

They were great chips, until the last run. Sort of on par with Intel (604e = Pentium Pro, G3 = PII, etc), but Intel went crazy in the megahertz wars. Maybe IBM would have figured it out if given enough time, but Apple was their main co-designer/customer and they jumped ship. I mean, IBM's Power chips are better than Xeon, so I don't see why PowerPC wouldn't have evolved as well.

The real failure of PowerPC is that not many adopted it (and Apple probably helped kill it off anyways, when they destroyed Mac clones). That was its real intent - for IBM to own the PC market again. They wanted NT, Macs, and anything else on it.
 
How can a little ARM CPU do about 5000 in single-core in Geekbench when a 5 GHz OC'd 9700K is about 6500?
To find out, we need to take a look at the source code and see what it is actually benchmarking on various platforms.

I assume it benchmarks various simulated workloads, including things like compression/decompression, encryption, video decoding/encoding, image formats etc. If the benchmark decides to rely on just using the standard instruction set, then you get an impression of the pure performance. If on the other hand the benchmark uses various specific instructions to accelerate certain workloads, then the benchmark becomes a measurement of those specific algorithms, not generic performance.

The x86 CPU in a desktop is very good at generic workloads, and while it has a few application-specific instructions too, this is nothing compared to many ARM implementations. The CPUs in smartphones and tablets, however, rely heavily on specific instructions to accelerate workloads. This of course gives good energy efficiency for those specific algorithms that are accelerated in software built to use them, but anything outside that will perform poorly. It should be obvious that there is a limited number of such accelerations that can be included on a chip, and support for new algorithms can't be added until they are developed. This means that such hardware becomes obsolete very quickly. But this of course fits very well with the marketing strategy of Apple and other smartphone/tablet makers; they can customize the chips to accelerate the features they want, and Apple controls its software too, which gives them an extra advantage, leading to new products every cycle that look much better at some new popular task.
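To make the distinction concrete, here's a minimal C sketch (my own illustration, not actual Geekbench code) of the same CRC32C checksum computed two ways: a generic loop using only standard integer instructions, and a version using the dedicated SSE4.2 CRC32 instruction (ARM has a similar optional CRC extension). Which path a benchmark scores decides whether it is measuring generic performance or the presence of an accelerator. Compile the accelerated path with -msse4.2.

#include <stdint.h>
#include <stddef.h>
#include <nmmintrin.h>   /* SSE4.2 intrinsics */

/* Generic path: standard integer instructions only, works on any CPU. */
uint32_t crc32c_generic(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)                       /* bit-by-bit polynomial division */
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1)); /* reversed CRC-32C polynomial */
    }
    return ~crc;
}

/* Accelerated path: one dedicated SSE4.2 CRC32C instruction per byte. */
uint32_t crc32c_sse42(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFF;
    for (size_t i = 0; i < len; i++)
        crc = _mm_crc32_u8(crc, buf[i]);
    return ~crc;
}

Both functions return the same checksum; the second is simply far faster and more power-efficient on hardware that has the instruction.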
 
does the PC crowd still think tablets are toys?

Yeah, we do. I am baffled every time I see Apple do a demo with something like Photoshop; who the hell actually does serious work on their tablet on a daily basis? Actually, let me put it the other way around: who could afford to pay $1200 or however expensive this iPad is and not have a high-end laptop around for that?

Tablets are indeed primarily toys; I have never seen/heard of anyone using them outside of playing games and watching Netflix. And if you are going to tell me that, well, there has to be someone that uses them as such, then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well. That still doesn't make it any less of a joke.
 
Well, certainly I'm not an Apple cultist by any stretch (I have only one Apple thing, an iPhone 7, as I was sick and tired of Android, and it's superb). Their pricing structure is abominable, and their support not much better, assuming your stuff is at most 3 years old (big shout-out to Louis Rossmann 'The Educator' :toast:); otherwise support is a four-letter word - GTFO. I wish Apple would start selling their stuff without warranties of any kind (except an initial, say, 30-day period for possible DOA). Charge half the price and stop pretending there is support.

On the other hand, I watched the iPad presentation and I find it interestingly tempting, but... it doesn't run macOS (where I can get the stuff I work with - Clip Studio/Corel Painter), only mobile iOS (stuff like Procreate is pathetic), and that's a deal breaker for me. If it were in the macOS ecosystem I would jump on it in a jiffy. :snapfingers: It is vastly superior in every possible way to products like the by-now totally archaic Wacom Mobile Studio Pro 16 (the 13 model is so lame I won't even say more).

If you have never painted outdoors, don't pretend you know everything. The WMSP16 is a great tool (essentially a Windows tablet PC) when you just want to take your art stuff and get away from the room and desk and cables. Go and paint in the park or something. But here is the deal: because it is a full-blown PC, you can use desktop apps. The iPad is a weird thing. It would be great if it were a macOS version of the WMSP, but it is not. Sadly. :(
 
Yeah, we do. I am baffled every time I see Apple do a demo with something like Photoshop; who the hell actually does serious work on their tablet on a daily basis?
Yes, exactly.
And even in terms of ergonomics: a tablet has to lie flat or be in a stand, and touch-only input is imprecise and inefficient for most serious work.

I surely see a use for tablets, but purely as "toys". One of the neatest things I've found with tablets is to use them for sheet music, or viewing photos.

I would like to see cheaper 12-15" tablets; the iPad Pros are at least twice what I think they are worth. But a tablet is always going to be a supplement.

… then sure I bet there is someone out there playing Doom unironically on their TI calculator as well.
Oh, but there is: LGR - "Doom" on a Calculator! Ti-83 Plus Games Tutorial
:D
 
You forgot to mention that desktop-class chips (I assume we're still talking notebooks here) use at least 15W average TDP with a PL2 of generally 25W, & the ARM counterparts do well even with half of that, i.e. 7W or thereabouts.

I still don't get why everyone is so hung up on GB numbers; are there better cross-platform benchmarks around? Is Intel this infallible, or does the PC crowd still think tablets are toys? The same was said about Intel vs AMD before Zen, & we know how that turned out.

How about that Photoshop demo? Was that something a Chromebook from 2013 could pull off? Because that's the kind of performance you're insinuating. You are correct that Geekbench is not a totally platform- and architecture-agnostic benchmark, despite what its authors claim, but I think it's obvious that whatever the A12X's real performance, it's damn close enough to Intel, and it can't all be magic tricks and hocus pocus.
 
How about that Photoshop demo? Was that something a Chromebook from 2013 could pull off? Because that's the kind of performance you're insinuating. You are correct that Geekbench is not a totally platform- and architecture-agnostic benchmark, despite what its authors claim, but I think it's obvious that whatever the A12X's real performance, it's damn close enough to Intel, and it can't all be magic tricks and hocus pocus.
I'm not sure what you're saying, i.e. do you agree with the premise that the Ax can replace x86 in the MB or even the MBP? We'll leave the entire desktop lineup debate for a more appropriate time, when Apple has more than one die, because I don't see the same chip going into an iPhone & an i9 replacement.
Yeah, we do. I am baffled every time I see Apple do a demo with something like Photoshop; who the hell actually does serious work on their tablet on a daily basis? Actually, let me put it the other way around: who could afford to pay $1200 or however expensive this iPad is and not have a high-end laptop around for that?

Tablets are indeed primarily toys; I have never seen/heard of anyone using them outside of playing games and watching Netflix. And if you are going to tell me that, well, there has to be someone that uses them as such, then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well. That still doesn't make it any less of a joke.
So your argument is that iOS/the iPad is a toy, therefore no one can do any serious work on them? What do you say to people who end up Boot Camping into Windows every time they open a Mac? And who does serious work on a laptop anyway; don't we have desktops, or workstation/server-class PCs, for that? What does serious work even mean in this case?
To find out, we need to take a look at the source code and see what it is actually benchmarking on various platforms.

I assume it benchmarks various simulated workloads, including things like compression/decompression, encryption, video decoding/encoding, image formats etc. If the benchmark decides to rely on just using the standard instruction set, then you get an impression of the pure performance. If on the other hand the benchmark uses various specific instructions to accelerate certain workloads, then the benchmark becomes a measurement of those specific algorithms, not generic performance.

The x86 CPU in a desktop is very good at generic workloads, and while it has a few application-specific instructions too, this is nothing compared to many ARM implementations. The CPUs in smartphones and tablets, however, rely heavily on specific instructions to accelerate workloads. This of course gives good energy efficiency for those specific algorithms that are accelerated in software built to use them, but anything outside that will perform poorly. It should be obvious that there is a limited number of such accelerations that can be included on a chip, and support for new algorithms can't be added until they are developed. This means that such hardware becomes obsolete very quickly. But this of course fits very well with the marketing strategy of Apple and other smartphone/tablet makers; they can customize the chips to accelerate the features they want, and Apple controls its software too, which gives them an extra advantage, leading to new products every cycle that look much better at some new popular task.
No different from SSE, AVX, AES, or SHA "accelerated" benchmarks. In fact, x86 has way more instruction-set extensions for certain workloads than ARM.

There are 3 x86 implementations as well; would you like to call them out?

Which is actually a good thing, & the reason why the iPhone beats every other phone out there in most synthetic & real-world benchmarks, in fact virtually all of them. This is also the reason why x86 won't beat a custom ARM chip across the board, should Apple decide to replace the former in their future laptop &/or desktop parts.
 
PCs are like the Ford Raptors of the computing world. They do just about everything well. It's the ultimate all-purpose vehicle imo, and that will always be wanted computing-wise as well, even with the current trend toward specialized devices.
 
Yeah the "PC" is jack of all trades & master of some, that's all it needs to do. It doesn't have to be the best at everything, never mind the fact that it isn't the best in lots of things atm.
 
It's fun to say Apple has an ARM design that is competing with non-ARM hardware, but I really don't care as long as the software is not cross-compatible.

As long as that isn't a universal thing, ARM and x86 will always be two separate worlds. As long as it is labor-intensive to port back and forth, or to emulate, the performance of ARM is irrelevant. The migration from x86 to ARM is way too slow for it to matter.

macOS on ARM... who cares? It's a proprietary OS, and it only gets more isolated and less versatile, in terms of software, by moving away from x86. And it's not like macOS was ever really good at that. Look at the reason Windows is still huge: enterprise + gaming. Both are areas macOS fails to serve properly.
 
So your argument is that iOS/the iPad is a toy, therefore no one can do any serious work on them?

I am not arguing anything; they are primarily used for entertainment, not productivity. Apple insists on calling it "Pro" for marketing purposes and to justify its crazy price tag. You're not buying a thousand-dollar tablet to watch Netflix on it, you are actually a content creator. :laugh:
 
I keep reading posts in this thread about RISC vs CISC. First, modern x86_64 chips aren't CISC chips like they used to be back in the day with complex instruction sets. Well, OK, they are on the surface, but that's where it ends. Ever heard of something called a micro-op? There's an x86 translation layer, or instruction decoder, in every modern processor that takes the x86 instructions and converts them to RISC-based micro-ops. Both Intel and AMD have done it for years. All in all, modern x86_64 chips are designed completely differently from what they used to be back in the day; they have more in common with RISC than CISC.
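As a rough illustration of the micro-op idea, here's a tiny C function; the assembly in the comments is the kind of output a compiler typically produces, not taken from any specific chip's documentation:

/* One "CISC" x86 instruction with a memory operand gets split by the
 * decoder into simpler RISC-like micro-ops internally. */
long add_from_memory(long acc, const long *p)
{
    /* A compiler will typically emit something like:
     *     mov  rax, rdi
     *     add  rax, qword ptr [rsi]   ; one x86 instruction with a memory operand
     * The front end then decodes that single add into roughly:
     *     uop 1: load  tmp <- [rsi]          ; memory read
     *     uop 2: add   rax <- rax + tmp      ; register ALU operation
     * i.e. the same load/op split a RISC ISA would express as two instructions. */
    return acc + *p;
}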
 
I keep reading posts in this thread about RISC vs CISC. First, modern x86_64 chips aren't CISC chips like they used to be back in the day with complex instruction sets. Well, OK, they are on the surface, but that's where it ends. Ever heard of something called a micro-op? There's an x86 translation layer, or instruction decoder, in every modern processor that takes the x86 instructions and converts them to RISC-based micro-ops. Both Intel and AMD have done it for years.

CISC remains CISC no matter the implementation, and it's always going to be more robust. Complex x86 instructions, optimized at the micro-op level, will always be faster than the ARM equivalent.
 
No different from SSE, AVX, AES, or SHA "accelerated" benchmarks. In fact, x86 has way more instruction-set extensions for certain workloads than ARM.
No, you are mixing things up.
SSE and AVX are SIMD operations; these are general-purpose. ARM has its own optional counterparts for these.
AES and SHA are application-specific instructions.
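A quick sketch of that distinction in C, using x86 intrinsics as the example (ARM has NEON for the first case and optional crypto extensions for the second); this is just an illustration, not benchmark code, and needs -maes to build:

#include <immintrin.h>   /* SSE intrinsics (_mm_add_ps) */
#include <wmmintrin.h>   /* AES-NI intrinsics (_mm_aesenc_si128) */

/* General-purpose SIMD: add four floats at once - useful for any data-parallel code. */
__m128 add_four_floats(__m128 a, __m128 b)
{
    return _mm_add_ps(a, b);
}

/* Application-specific instruction: one AES encryption round - useless for anything but AES. */
__m128i aes_encrypt_round(__m128i state, __m128i round_key)
{
    return _mm_aesenc_si128(state, round_key);
}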
 
Tablets are indeed primarily toys; I have never seen/heard of anyone using them outside of playing games and watching Netflix. And if you are going to tell me that, well, there has to be someone that uses them as such, then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well. That still doesn't make it any less of a joke.

Honestly, before it got stolen along with the rest of my hardware, I used my tablet more than my PC: for everything from reading books or comics, through web browsing and Netflix in bed, to even simple text editing and light spreadsheet work (for the latter two I was using my Bluetooth keyboard that was previously only used as a remote for my HTPC). My main PC was only fired up for gaming, unless I was playing something on PS3, PS4 or Switch, which happened more than I would have anticipated before I got those consoles.
 
No, you are mixing things up.
SSE and AVX are SIMD operations; these are general-purpose. ARM has its own optional counterparts for these.
AES and SHA are application-specific instructions.
I'm not; how many applications make use of AVX, or rather can make use of AVX? Heck, SSE only dates back to 1999, while PC computing is much older than that. What are the equivalent ARM counterparts you're talking about?

edit - Scratch that, I get what you're saying.
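For what it's worth, the way applications typically "can make use of AVX" is through run-time dispatch; here's a minimal sketch using the GCC/Clang builtins (the function name and printouts are just illustrative):

#include <stdio.h>

/* Pick a code path based on what the CPU actually supports (x86 only). */
void pick_kernel(void)
{
    __builtin_cpu_init();                        /* populate the CPU feature flags */
    if (__builtin_cpu_supports("avx2"))
        printf("dispatching AVX2 code path\n");  /* wide SIMD loops */
    else if (__builtin_cpu_supports("sse2"))
        printf("dispatching SSE2 code path\n");  /* baseline for x86-64 */
    else
        printf("dispatching scalar fallback\n");
}

So the answer is: any application that ships such a dispatcher can use AVX when it's present and still run everywhere else.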
 
Considering that just 3 years ago Apple was running their laptops off the absolutely horrible Core M-5Y31, I can certainly see the A12X or one of its successors taking that processor's place. Apple is not afraid of putting under-powered crap processors in their computers if it means they can make more money, make the product "look cooler" in some way, and add some kind of bullshit marketing point about it.
 
How can a little ARM CPU do about 5000 in single-core in Geekbench when a 5 GHz OC'd 9700K is about 6500?

Because they have been faking their numbers on iOS for so many years, they didn't think about what would happen when they reached desktop numbers and everyone started asking them why their laptops weren't on ARM...

Here is by far the fastest ARM server chip...
https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/6/
https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7

And it is... competitive-ish... using more cores and more power.
 
Apple is not afraid of putting under-powered crap processors in their computers if it means they can make more money

The irony is that these chips likely cost more to make than anything comparable from Intel.
 
The irony is that these chips likely cost more to make than anything comparable from Intel.

Yeah, but after you add in Intel's cut, it is probably cheaper for Apple to make their own CPUs than buy them from Intel.
 
Because they have been faking their numbers on iOS for so many years, they didn't think about what would happen when they reached desktop numbers and everyone started asking them why their laptops weren't on ARM...

Here is by far the fastest ARM server chip...
https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/6/
https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7

And it is... competitive-ish... using more cores and more power.
Do you have any evidence of them faking iOS benches? I know SS did it, and Huawei as well as Intel, but I've yet to see Apple devices called out for faking benchmarks in recent times.

You mean the only ARM server chip; QC's project is dead & any other ARM-based vendor seems miles off in their efforts to deliver a viable server chip.

And that's related to desktops or notebooks how? Not to mention Apple is a completely different beast, with close to a decade's worth of experience behind them.
Yeah, but after you add in Intel's cut, it is probably cheaper for Apple to make their own CPUs than buy them from Intel.
Absolutely, Intel has insane margins, just like Apple.
 
Yeah, different benchmarks benchmarking different things.


We've been hearing this from the RISC camp since the late 80s: x86 has too much legacy overhead, RISC is more "efficient" and perhaps "faster". Even back then this was only partially true, but it's important to understand the premises. Back then, CISC chips like the 80386 and 80486 were larger compared to some of their RISC counterparts, and this was before CPUs hit the power wall, so die size was the deciding factor for scaling clock speed. The reduced instruction set of RISC resulted in smaller designs which were cheaper to make and could be clocked higher, potentially reaching higher performance levels in some cases. But RISC always had much lower performance per clock, so higher clock speed was always a requirement for RISC to perform well.

Since the 80s, CPU designs have changed radically. Modern x86 implementations have nothing in common with their ancestors, with design features such as pipelining, OoO execution, caches, prefetching, branch prediction, superscalar execution, SIMD and application-specific acceleration. As clock speeds have increased beyond 3 GHz, new bottlenecks have emerged, like the power wall and the memory wall. x86 today is just an ISA, implemented as different microarchitectures. All major x86 implementations since the mid 90s have adopted a "RISC-like" microarchitecture, where x86 is translated into architecture-specific micro-operations, a sort of hybrid approach, to get the best of both worlds.

x86 and ARM implementations have adopted all the techniques mentioned above to achieve our current performance level. Many ARM implementations have used many more application-specific instructions. Along with SIMD extensions, these are no longer technically purely RISC designs. Application-specific instructions are the reason why you can browse the web on your Android phone with a CPU consuming ~0.5W, watch or record h.264 videos in 1080p and so on. Some chips even have instructions to accelerate Java bytecode. If modern smartphones were pure RISC designs, they would never be usable as we know them. The same goes for Blu-ray players; if you open one up you'll probably find a ~5W MIPS CPU in there, and it relies either on a separate ASIC or on special instructions for all the heavy lifting. One fact still remains: RISC still needs more instructions to do basic operations, and since the power wall is limiting clock speed, RISC will remain behind until it finds a way to translate them into more efficient CISC-style operations.
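A small illustration of that code-density point; the assembly in the comments is roughly what gcc -O2 produces for each target, exact registers may differ:

void bump(long *counter)
{
    *counter += 1;
    /* x86-64 (CISC): one instruction reads, adds and writes memory:
     *     add  qword ptr [rdi], 1
     * AArch64 (RISC, load/store architecture): three instructions:
     *     ldr  x1, [x0]
     *     add  x1, x1, #1
     *     str  x1, [x0]
     */
}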

I want to refer to some of the findings from the "VRG RISC vs CISC study" from the University of Wisconsin-Madison:
[attached image: findings from the study]

The only real efficiency advantage we see with ARM is in low-power CPUs. But this has nothing to do with the ISA, just Intel failing to make their low-end x86 implementations scale well; this is why we see some ARM designs competing with Atom.

That study was based on nearly 10-year-old tech. Ax processors have come a long way since the iPhone 3GS lol.

Not saying the study's wrong, just that Apple has made significant improvements to their version of the ARM core over the past decade. A study based on processors from a dozen generations back will be an inaccurate representation of the current generations of Ax processors. Intel, on the other hand, is still using the same architecture and has seen only small increases in performance. In fact, according to what I can find at a glance, even the A11 was an incredible 100x faster than the 3GS. Name another processor in the past decade which can claim such a feat... hell, even the A8 in the iPhone 6 was 50x faster than the 3GS. I'd like to see a study done on the most recent CPUs. THAT would be interesting.

I keep reading posts in this thread about RISC vs CISC. First, modern x86_64 chips aren't CISC chips like they used to be back in the day with complex instruction sets. Well, OK, they are on the surface, but that's where it ends. Ever heard of something called a micro-op? There's an x86 translation layer, or instruction decoder, in every modern processor that takes the x86 instructions and converts them to RISC-based micro-ops. Both Intel and AMD have done it for years. All in all, modern x86_64 chips are designed completely differently from what they used to be back in the day; they have more in common with RISC than CISC.
This too^
 

Everybody talking about the clock is forgetting that some x86 instructions can't be completed in just one clock cycle.
So even if modern processors try to optimize their execution, also using branch prediction, there's a limit to their IPC. As someone else said, those instructions are then split into micro-ops, making the architecture really similar to RISC.
What's really different right now is the market those chips are aimed at. Also, developers seem unable to write multithreaded programs correctly, and this is made very obvious by the small adoption of Vulkan and DX12.
In pure, sheer computational power, ARM CPUs already find their place. When the x86 architecture is no longer able to improve any further, we'll probably finally see the benefits of multicore and RISC CPUs.
 
Looks like the x86 days are finally numbered...

AMD was out to lunch for a decade, and Intel preferred to milk their willing customer base while pushing innovation at a snail's pace.

All the while ARM was laying down the foundation work they needed. It's not even about Apple's version of the chip. Now ARM is ready and pushing beyond their initial market.
 