Tuesday, August 18th 2020

Apple A14X Bionic Rumored To Match Intel Core i9-9880H

The Apple A14X Bionic is an upcoming processor from Apple that is expected to debut in the next iPad Pro models and should be manufactured on TSMC's 5 nm node. Tech YouTuber Luke Miani has recently shared a performance projection for the A14X chip based on "leaked/suspected A14 info + average performance gains from previous X chips". In these graphs, the Apple A14X can be seen matching the Intel Core i9-9880H in Geekbench 5 with a score of 7480. The Intel Core i9-9880H is a 45 W eight-core mobile CPU found in high-end notebooks such as the 2019 16-inch MacBook Pro, where it requires significant cooling to keep thermals under control.

If these performance estimates are correct, or even close, then Apple will have a seriously productive device on its hands, and the chip will serve as a strong basis for Apple's transition to custom CPUs in its MacBooks in 2021. According to Luke Miani, Apple may use a custom version of the A14X with slightly higher clocks in its upcoming ARM MacBooks. These results are estimates at best, so take them with a pinch of salt until Apple officially unveils the chip.
Source: @LukeMiani

85 Comments on Apple A14X Bionic Rumored To Match Intel Core i9-9880H

#76
Vayra86
dragontamer5788I'm not sure if you guys know what I know.

Let's take a look at a truly difficult benchmark: one that takes over 20 seconds, so that "Turbo" isn't a major factor.

www.cs.utexas.edu/~bornholt/post/z3-iphone.html

Though 20 seconds is still a short run, the blog post indicates that they ran the 3-SAT solver continuously, so the iPhone was operating at its thermal limits. The Z3 solver tackles 3-SAT, an NP-complete problem. At this point, it has been demonstrated that Apple's A12 has faster L1, L2, and memory performance than even Intel's chips in a very difficult, single-threaded task.
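For anyone unfamiliar with what Z3 is chewing on in that benchmark: 3-SAT asks whether a Boolean formula, written as clauses of three literals, has a satisfying assignment. A minimal brute-force sketch (real solvers like Z3 use far cleverer search; this only illustrates the problem itself):

```python
from itertools import product

def solve_3sat(num_vars, clauses):
    """Brute-force 3-SAT: try every assignment of the variables.
    A clause is a tuple of literals; literal k means variable |k|
    must be True (k > 0) or False (k < 0). Exponential in num_vars,
    which is why solvers such as Z3 use much smarter strategies."""
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return bits  # satisfying assignment found
    return None  # unsatisfiable

# (x1 or x2 or not x3) and (not x1 or x3 or x2)
print(solve_3sat(3, [(1, 2, -3), (-1, 3, 2)]))
```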

----------

Apple's chip team has demonstrated that its small 5 W chip is in fact pretty good at some very difficult benchmarks. It shouldn't be assumed that iPhones are slower anymore. They're within striking distance of desktops in single-core performance in some of the densest compute problems.
You're echoing Apple marketing and desperately hunting for talking points to make it worth our while. It's admirable. But don't sell it as a lack of understanding on others' part, because it really isn't. There is no magic in CPU land, contrary to what you might think. It's a balancing act, and Apple has made its own SoC specifically to strike the balance it needs on a chip. That balance works well for the use cases Apple selected it for.

It's like @theoneandonlymrk says, again, eloquently: these highly specific workloads say little about overall CPU performance. Having a CPU repeatedly do the same task is not a real measure of overall performance; it's a measure of its performance in that specific workload. If you do that across different architectures, the comparison is skewed. You need a full, rounded suite of benchmarks to get a real handle on overall performance between different CPUs. It's irrelevant that the chip can repeat that test a million times over. It's still a burst-mode, workload-specific view, not the whole picture.
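A rounded suite is typically summarized with a geometric mean, which keeps one outlier sub-score from dominating the overall figure. A small illustration with made-up per-test ratios (the numbers are hypothetical, not measurements from any chip):

```python
from math import prod

def geomean(scores):
    """Geometric mean: the standard way SPEC-style suites combine
    per-test ratios into one overall score."""
    return prod(scores) ** (1 / len(scores))

# Hypothetical ratios vs. a reference machine: one outlier test (8.0)
# barely moves the geometric mean, but would skew an arithmetic mean.
ratios = [1.1, 0.9, 1.0, 1.2, 8.0]
print(round(geomean(ratios), 2))  # vs. an arithmetic mean of 2.44
```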

It's just that simple. Apple isn't smarter than the rest; they have specialized in very specific workloads, specific devices, and specific use cases. That is why any sort of advanced user or system-modding work on Apple hardware is nigh impossible, and where it is possible, Apple has carefully prepared the path you have to walk. This is a company that manages your user experience. On most other (non-mobile) OSes, the situation is reversed: you get an OS with lots of tools, have fun with it, and the only thing you can't touch is the kernel... unless you try harder.

The new direction for Apple, and I've said it as a joke but it really isn't one, is terminals. ARM and the chip Apple has created are fantastic for logging in and getting the heavy lifting done off-site, in the cloud. Apple has been big on it, and they'll go bigger. They are drooling over Chromebooks, because imagine the margins! They can sell an empty shell with an internet connection that 'feels' like a true Apple device, barely include any hardware, and still get the Apple premium on it.

That is what the ARM push is about, alongside another step toward full IP ownership of both software and hardware.
Posted on Reply
#77
dragontamer5788
Vayra86You need a full, rounded suite of benchmarks to get a real handle on overall performance between different CPUs.
My discussion revolves around Z3 because theoneandonlymrk clearly refused to accept SPECint2006 and Geekbench 4 as benchmarks. Anandtech has the Xeon 8176 @ 3.8 GHz here: www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7

Benchmark          Xeon 8176 @ 3.8 GHz    Apple A12 Vortex @ 2.5 GHz
400.perlbench      46.4                   45.38
401.bzip2          25                     28.54
403.gcc            31                     44.56
429.mcf            40.6                   49.92
445.gobmk          27.6                   38.54
456.hmmer          35.6                   44.04
458.sjeng          30.8                   36.60
462.libquantum     86.2                   113.40
464.h264ref        64.5                   66.59
471.omnetpp        37.9                   35.73
473.astar          24.7                   27.25
483.xalancbmk      63.7                   47.07

So now we have both the SPECint2006 and Geekbench 4 suites showing the A12 Vortex crushing single-threaded performance. That's just the reality of the A12 chip. The results speak for themselves: the A12 Vortex at 2.5 GHz outright beats the Xeon in 75% of the SPECint2006 suite.

Yeah, the A12 is really good at 64-bit single-threaded code. Surprisingly good. (Note: H.264 is typically implemented with SIMD instructions, while the SPECint2006 benchmark uses a 64-bit reference implementation, so this doesn't really test H.264 as used in practice.)
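The 75% figure can be checked directly from the quoted Anandtech numbers (higher is better in SPECspeed):

```python
# (benchmark, Xeon 8176 @ 3.8 GHz, Apple A12 Vortex @ 2.5 GHz)
spec_int_2006 = [
    ("400.perlbench", 46.4, 45.38), ("401.bzip2", 25.0, 28.54),
    ("403.gcc", 31.0, 44.56), ("429.mcf", 40.6, 49.92),
    ("445.gobmk", 27.6, 38.54), ("456.hmmer", 35.6, 44.04),
    ("458.sjeng", 30.8, 36.60), ("462.libquantum", 86.2, 113.40),
    ("464.h264ref", 64.5, 66.59), ("471.omnetpp", 37.9, 35.73),
    ("473.astar", 24.7, 27.25), ("483.xalancbmk", 63.7, 47.07),
]
wins = [name for name, xeon, a12 in spec_int_2006 if a12 > xeon]
print(f"A12 wins {len(wins)}/{len(spec_int_2006)} tests "
      f"({100 * len(wins) // len(spec_int_2006)}%)")
```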

----------

Look, I don't even own an iPhone. I don't care about iPhones, and I don't plan to deal with any of Apple's walled-garden bullshit. I don't like their business model, I don't like Apple, and I don't like their stupid reality-distortion field.

But I've seen the benchmarks. Their A12 chip is pretty damn good at single-threaded performance. As a CPU nerd, that makes me intrigued and interested. But not really enough to buy an iPhone yet.
Posted on Reply
#78
Vya Domus
As far as I can tell, AVX-512 wasn't used by the compiler for that Xeon. I understand from your logic that everything should be fair and square, so we had better make sure every chip is making use of every one of its advantages, right?
Posted on Reply
#79
dragontamer5788
Vya DomusAs far as I can tell, AVX-512 wasn't used by the compiler for that Xeon. I understand from your logic that everything should be fair and square, so we had better make sure every chip is making use of every one of its advantages, right?
According to Anandtech, it was GCC 7.2 with -Ofast, so I don't think AVX-512 was enabled. But I don't expect much improvement in perlbench, gcc, astar, or bzip2.

The Xeon 8176 has one of the best SIMD vector processors on the market. So yes, it would be "more fair" to let the Xeon use its SIMD units to the degree that is convenient (i.e., GCC's autovectorizer), as long as intrinsics and/or hand-crafted assembly aren't being used. A few memcpy or strcmp calls here and there might get a bit faster, but I don't expect any dramatic improvement in the Xeon's speed.

---------

EDIT: I can't find a benchmark that runs the Xeon 8176 exactly as we like. The closest run I found is: www.spec.org/cpu2006/results/res2017q3/cpu2006-20170710-47735.html

This runs 112 identical copies of the benchmarks across the 56 cores (2 threads each). Divide the result by 56 to get a "pseudo-single-core" score. Yes, with AVX-512 enabled (-xCORE-AVX512). We can see that none of the SPECint2006 scores vary dramatically from Anandtech's single-threaded results.
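The "divide by 56" step is just a rate-to-per-core conversion. A sketch with a hypothetical aggregate score (the real per-test rates are on the linked spec.org page):

```python
def pseudo_single_core(rate_score, physical_cores=56):
    """Approximate a per-core score from a SPEC 'rate' run that
    loaded every core: divide the aggregate throughput by the
    number of physical cores. This ignores SMT sharing and
    all-core turbo clocks, so it is only a rough estimate of
    true single-threaded speed."""
    return rate_score / physical_cores

# Hypothetical aggregate rate of 2800 across 56 cores -> 50 per core.
print(pseudo_single_core(2800))
```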

I don't think AVX-512 will matter much on 64-bit-oriented code like the SPECint2006 suite.
Posted on Reply
#80
Searing
squallheartI am curious how you came to that conclusion.
I actually looked up GFXBench, which is cross-platform, fairly well regarded, and has no known bias toward any platform.

gfxbench.com/device.jsp?benchmark=gfx50&os=iOS&api=metal&cpu-arch=ARM&hwtype=iGPU&hwname=Apple%20A12Z%20GPU&did=83490021&D=Apple%20iPad%20Pro%20(12.9-inch)%20(4th%20generation)
gfxbench.com/device.jsp?benchmark=gfx50&os=Windows&api=dx&cpu-arch=x86&hwtype=dGPU&hwname=NVIDIA%20GeForce%20GTX%201060%206GB&did=36085769&D=NVIDIA%20GeForce%20GTX%201060%206GB

I am comparing the A12Z (from the 4th-gen iPad Pro, faster than the A12X) to the 1060.

For Aztec High offscreen, the most demanding test, the A12Z recorded 133.8 fps vs. 291.1 fps for the GTX 1060.

So my question to you is: are you expecting the A14X to more than double its graphics performance?
First of all, Apple needs to conclusively beat the GTX 1060 mobile chip, not the desktop one (since both are mobile form factors, and Apple can simply up-clock their GPU to reach desktop performance, the same way Nvidia does). GPUs are also very complex; you'll see very different results depending on the game or benchmark (a CPU is very simple in comparison, contrary to what the Geekbench haters say).

The Surface Book 2 with a GTX 1060 scores 330k in 3DMark Ice Storm and the iPad Pro A12Z scores 220k (GPU test; in the overall score the iPad has a faster CPU than the Surface, which makes Apple look even better). So they only need 50 percent more performance.

Considering that the A12X/Z is basically two years old, I think Apple can add more than 50 percent to GPU performance. I'm actually expecting 1.5x for the iPad and 2x for the ARM Mac models, at least. That would also be enough to beat their existing models, like the 5500M.

(The 2020 iPad Pro's A12Z is twice as fast as 2017's A10X on the CPU side, and 50 percent faster on the GPU side.)
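The arithmetic behind this exchange, using the scores both posters cited (the 1.5x/2x generational gains are Searing's assumptions, not measurements):

```python
# GFXBench Aztec High offscreen (fps), as cited by squallheart
a12z_aztec, gtx1060_aztec = 133.8, 291.1
print(f"GTX 1060 lead in Aztec High: {gtx1060_aztec / a12z_aztec:.2f}x")

# 3DMark Ice Storm GPU test, as cited above
a12z_ice, gtx1060_ice = 220_000, 330_000
uplift_needed = gtx1060_ice / a12z_ice - 1
print(f"Uplift needed to match in Ice Storm: {uplift_needed:.0%}")

# Projected Ice Storm scores under the assumed generational gains
print(f"iPad A14X (1.5x): {a12z_ice * 1.5:,.0f}")
print(f"ARM Mac   (2.0x): {a12z_ice * 2.0:,.0f}")
```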
Posted on Reply
#81
squallheart
SearingFirst of all, Apple needs to conclusively beat the GTX 1060 mobile chip, not the desktop one (since both are mobile form factors, and Apple can simply up-clock their GPU to reach desktop performance, the same way Nvidia does). GPUs are also very complex; you'll see very different results depending on the game or benchmark (a CPU is very simple in comparison, contrary to what the Geekbench haters say).

The Surface Book 2 with a GTX 1060 scores 330k in 3DMark Ice Storm and the iPad Pro A12Z scores 220k (GPU test; in the overall score the iPad has a faster CPU than the Surface, which makes Apple look even better). So they only need 50 percent more performance.

Considering that the A12X/Z is basically two years old, I think Apple can add more than 50 percent to GPU performance. I'm actually expecting 1.5x for the iPad and 2x for the ARM Mac models, at least. That would also be enough to beat their existing models, like the 5500M.

(The 2020 iPad Pro's A12Z is twice as fast as 2017's A10X on the CPU side, and 50 percent faster on the GPU side.)
You didn't specify you were comparing to the mobile variant.

www.anandtech.com/show/13661/the-2018-apple-ipad-pro-11-inch-review/6 (I see you got the results from here)

FYI, that is a really old test, based on rather ancient APIs.

If you look at Anandtech's other benchmarks of mobile SoCs, the GFXBench Aztec test is used a lot more often, and he also calculates fps/W for those results, so I would think that's a better comparison.

Anyway, I guess the bottom line is you think it will get a 50% performance boost over the A12X/Z. I have no qualms with that, and I think it's reasonable. I am still not convinced that it will match the laptop variant of the 1070, though, unless it is extremely thermally constrained.
Posted on Reply
#82
Searing
squallheartYou didn't specify you were comparing to the mobile variant.

www.anandtech.com/show/13661/the-2018-apple-ipad-pro-11-inch-review/6 (I see you got the results from here)

FYI, that is a really old test, based on rather ancient APIs.

If you look at Anandtech's other benchmarks of mobile SoCs, the GFXBench Aztec test is used a lot more often, and he also calculates fps/W for those results, so I would think that's a better comparison.

Anyway, I guess the bottom line is you think it will get a 50% performance boost over the A12X/Z. I have no qualms with that, and I think it's reasonable. I am still not convinced that it will match the laptop variant of the 1070, though, unless it is extremely thermally constrained.
You don't have to specify it; I'm just saying there are different forms of victory: one is beating the mobile part, one the desktop part. It could easily beat both; it is up to Apple. And different games will have different results. For example, the ALU (arithmetic logic unit) portion of Apple GPUs is really strong. It all depends on what kind of visuals you are rendering.

Also, it gets complicated if we are talking about the iPad version versus the presumably much stronger desktop versions.
Posted on Reply
#83
squallheart
SearingI'm just saying there are different forms of victory: one is beating the mobile part, one the desktop part. It could easily beat both; it is up to Apple.
You have a tendency to make claims without any support, IMO. "Easily" beat a discrete desktop GPU?
Searingthe ALU (arithmetic logic unit) portion of Apple GPUs is really strong. It all depends on what kind of visuals you are rendering.
Again, how is this relevant? The benchmark I used was cross-platform, without any known bias. If you want to talk about specific hardware, the bandwidth alone is severely limited since it uses LPDDR4/5 RAM.

"Also it gets complicated if we are talking about the iPad version, or the presumably much stronger desktop versions."

Your original statement was that the new iPad can easily run PS4/Xbox games and be faster than a 1070. Don't move the goalposts here.
Posted on Reply
#84
Searing
squallheartYou have a tendency to make claims without any support, IMO. "Easily" beat a discrete desktop GPU?


Again, how is this relevant? The benchmark I used was cross-platform, without any known bias. If you want to talk about specific hardware, the bandwidth alone is severely limited since it uses LPDDR4/5 RAM.

"Also it gets complicated if we are talking about the iPad version, or the presumably much stronger desktop versions."

Your original statement was that the new iPad can easily run PS4/Xbox games and be faster than a 1070. Don't move the goalposts here.
You are plainly ignoring the meaning of the words. Can Apple silicon easily beat a GTX 1060? Yes. I never said 1070, and I haven't moved any goalposts; you are just making things up now. And it can easily run PS4 games; that is mainly a comment on the quality of the CPU side of things, and on the fact that the iPad, for example, has double the memory bus width of the Switch and other mobile products. The iPad Pro already has a desktop-class 128-bit GPU bus, just like a GTX 1650, but it can't use GDDR6 for power-consumption reasons, of course. Stick GDDR6 in a desktop Apple CPU/GPU chip and voilà, then it really gets exciting.
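The bus-width point can be made concrete. A rough sketch: peak bandwidth is bus width in bytes times transfer rate (the transfer rates below are typical published figures for these memory types, assumed here rather than taken from the comment):

```python
def bandwidth_gbs(bus_bits, transfer_rate_gts):
    """Peak memory bandwidth in GB/s: bus width in bytes
    multiplied by transfers per second (GT/s)."""
    return bus_bits / 8 * transfer_rate_gts

# 128-bit LPDDR4X at ~4.266 GT/s (2018 iPad Pro class, assumed figure)
print(f"LPDDR4X 128-bit: {bandwidth_gbs(128, 4.266):.1f} GB/s")
# 128-bit GDDR5 at 8 GT/s (GTX 1650 class, assumed figure)
print(f"GDDR5   128-bit: {bandwidth_gbs(128, 8.0):.1f} GB/s")
```

Same bus width, but the graphics-oriented memory roughly doubles the bandwidth, which is the trade-off the comment alludes to.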
Posted on Reply
#85
squallheart
SearingFor example, the ALU (arithmetic logic unit) portion of Apple GPUs is really strong. It all depends on what kind of visuals you are rendering.
To be honest, when you wrote that, it really showed that you didn't know much about GPUs. All GPUs contain arithmetic logic units, which are essentially just hardware that does math. To put it simply, they are all part of the performance of the GPU, and a combination of that along with factors like memory bandwidth leads to a certain level of rasterization performance.

The way you described the ALU makes it sound like it excels at a certain kind of graphical workload, which just tells me you fail to understand that it's still part of rasterization performance. Not sure why you can't just concede you made a groundless claim and instead choose to double down with that ALU comment.
Posted on Reply