Apple A14X Bionic Rumored To Match Intel Core i9-9880H

Joined
Apr 24, 2020
Messages
2,661 (1.72/day)

Is it really bye? Or are you one of those dudes who don't really mean what they say?

Getting technical, Intel has Foveros and FPGA tech; as soon as they have a desktop CPU with an FPGA and HBM, it could be game over as far as benches go, against anything on anything, enabled by oneAPI.
POWER chips are simply in another league.
AMD will iterate core count way beyond Apple's horizon and incorporate better fabrics and IP.

Have you ever used an FPGA? Have you ever used POWER? POWER has anemic SIMD units. The benefit of POWER9 / POWER10 is memory bandwidth and L3.

FPGAs are damn hard to program for. Not only is Verilog / VHDL rarely taught, synthesis also takes far longer than compilation. OpenCL / CUDA is far simpler, honest. We'll probably see more from Intel's Xe than from FPGAs.
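To make the contrast concrete, here's roughly what a trivial GPU-compute job looks like from Python via pyopencl (a sketch, assuming the pyopencl and numpy packages are installed; the kernel and sizes are just for illustration). The kernel compiles in milliseconds, where the FPGA equivalent means writing RTL and waiting through synthesis:

```python
# A minimal OpenCL "square an array" job via pyopencl (illustrative sketch).
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Runtime compilation of the kernel takes milliseconds; FPGA synthesis of
# equivalent logic would take minutes to hours.
prg = cl.Program(ctx, """
__kernel void square(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] * a[gid];
}
""").build()

prg.square(queue, a.shape, None, a_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
```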

Not to mention, high-end FPGAs run $4000+ if we're talking parts competitive with GPUs or CPUs.
 
Joined
Sep 17, 2014
Messages
21,558 (6.00/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
I'm not sure if you guys know what I know.

Let's take a look at a truly difficult benchmark, one that takes over 20 seconds so that "Turbo" isn't a major factor.


And though a single run already takes over 20 seconds, the blog post indicates that they ran the 3-SAT solver continuously, over and over, so the iPhone was operating at its thermal limits. The Z3 solver tackles 3-SAT, an NP-complete problem. At this point, it has been demonstrated that Apple's A12 has faster L1, L2, and memory performance than even Intel's chips in a very difficult, single-threaded task.
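For anyone curious, feeding Z3 a toy 3-SAT instance takes only a few lines through its Python bindings (a minimal sketch; the blog post's actual instances were of course far larger and harder):

```python
# pip install z3-solver
from z3 import Bools, Or, Not, Solver, sat

# A tiny 3-SAT instance: each clause is a disjunction of three literals.
x1, x2, x3 = Bools('x1 x2 x3')
s = Solver()
s.add(Or(x1, Not(x2), x3))
s.add(Or(Not(x1), x2, x3))
s.add(Or(Not(x1), Not(x2), Not(x3)))

if s.check() == sat:
    print(s.model())  # prints one satisfying assignment
```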

----------

Apple's chip team has demonstrated that its small 5 W chip is in fact pretty good at some very difficult benchmarks. It shouldn't be assumed that iPhones are slower anymore. They're within striking distance of desktops in single-core performance on some of the densest compute problems.

You're echoing Apple marketing and desperately hunting for talking points to make it worth our while. It's admirable. But don't sell it as a lack of understanding on others' part, because it's really not. There is no magic in CPU land, contrary to what you might think. It's a balancing act, and Apple has made its own SoC specifically to strike its own required balance on a chip. That balance works well for the use cases Apple selected it for.

It's like @theoneandonlymrk says, again, eloquently... these highly specific workloads say little about overall CPU performance. Having a CPU repeatedly do the same task is not a real measure of overall performance; it's a measure of its performance in that specific workload. If you do that across different architectures, the comparison is off. You need a full, rounded suite of benches to get a real handle on overall performance between different CPUs. It's irrelevant that the chip can repeat that test a million times over. It's still a burst-mode, workload-specific view, and not the whole picture.

It's just that simple. Apple isn't smarter than the rest. They have specialized in very specific workloads, specific devices, with specific use cases. That is why any sort of advanced user / system modding on Apple is nigh impossible, and where it IS possible, Apple has carefully prepared the path you need to walk. This is a company that manages your user experience. On most other (non-mobile) OSes, the situation is reversed: you get an OS with lots of tools, have fun with it, and the only thing you can't touch is the kernel... unless you try harder.

The new direction for Apple, and I've said it as a joke, but it's really not one... terminals. ARM and the chip Apple has created are fantastic for logging in and getting the heavy lifting done off-site. Cloud. Apple's been big on it, and they'll go bigger. They're drooling all over Chromebooks, because imagine the margins! They can sell an empty shell with an internet connection that 'feels' like a true Apple device, barely include any hardware, and still charge the Apple premium for it.

That is what the ARM push is about, alongside another step toward full IP ownership of both software and hardware.
 
Joined
Apr 24, 2020
Messages
2,661 (1.72/day)
You need a full, rounded suite of benches to get a real handle on overall performance between different CPUs.

My discussion revolves around Z3 because theoneandonlymrk clearly refused to accept SPECint2006 and Geekbench4 as benchmarks. Anandtech has the Xeon 8176 @ 3.8 GHz here: https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7

Benchmark         Xeon 8176 @ 3.8 GHz   Apple A12 Vortex @ 2.5 GHz
400.perlbench     46.4                  45.38
401.bzip2         25                    28.54
403.gcc           31                    44.56
429.mcf           40.6                  49.92
445.gobmk         27.6                  38.54
456.hmmer         35.6                  44.04
458.sjeng         30.8                  36.60
462.libquantum    86.2                  113.40
464.h264ref       64.5                  66.59
471.omnetpp       37.9                  35.73
473.astar         24.7                  27.25
483.xalancbmk     63.7                  47.07
So now we have both the SPECint2006 AND Geekbench4 suites showing the A12 Vortex crushing it in single-threaded performance. That's just the reality of the A12 chip. The results speak for themselves: the A12 Vortex at 2.5 GHz outright beats the Xeon in 9 of the 12 SPECint2006 subtests (75%).
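If you want to check that claim yourself, here's a quick tally from the table above (plain Python, nothing assumed beyond the numbers quoted):

```python
from math import prod

# Per-test SPECint2006 ratios from the table above
xeon = [46.4, 25, 31, 40.6, 27.6, 35.6, 30.8, 86.2, 64.5, 37.9, 24.7, 63.7]
a12  = [45.38, 28.54, 44.56, 49.92, 38.54, 44.04, 36.60, 113.40, 66.59, 35.73, 27.25, 47.07]

wins = sum(a > x for a, x in zip(a12, xeon))
print(f"A12 wins {wins}/12 subtests")   # 9/12 = 75%

# Geometric mean of the per-test speedups (SPEC aggregates geometrically)
geomean = prod(a / x for a, x in zip(a12, xeon)) ** (1 / 12)
print(f"geomean A12/Xeon: {geomean:.2f}x")
```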

Yeah, the A12 is really good at 64-bit single-threaded code. Surprisingly good. (Note: H.264 is typically implemented with hand-tuned SIMD instructions, while SPECint2006 uses a plain 64-bit reference implementation, so this doesn't really test H.264 as it runs in practice.)

----------

Look, I don't even own an iPhone. I couldn't care less about iPhones, and I don't plan to deal with any of Apple's walled-garden bullshit. I don't like their business model, I don't like Apple, and I don't like their stupid reality distortion fields.

But I've seen the benchmarks. Their A12 chip is pretty damn good at single-threaded performance. As a CPU nerd, that makes me intrigued and interested. But not really enough to buy an iPhone yet.
 
Joined
Jan 8, 2017
Messages
9,200 (3.35/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
As far as I can tell, AVX-512 wasn't enabled by the compiler for that Xeon. I understand from your logic that everything should be fair and square, so we'd better make sure every chip is making use of every one of its advantages, right?
 
Joined
Apr 24, 2020
Messages
2,661 (1.72/day)
As far as I can tell, AVX-512 wasn't enabled by the compiler for that Xeon. I understand from your logic that everything should be fair and square, so we'd better make sure every chip is making use of every one of its advantages, right?

According to Anandtech, it was GCC 7.2 with -Ofast, so I don't think AVX-512 was enabled. But I wouldn't expect much improvement in perlbench, gcc, astar, or bzip2 anyway.

The Xeon 8176 has one of the best SIMD vector units on the market. So yes, it would be "more fair" to let the Xeon use its SIMD units to the degree that is convenient (i.e., GCC's autovectorizer), as long as intrinsics and/or hand-crafted assembly aren't being used. A few memcpy or strcmp calls here and there might get a bit faster, but I don't expect any dramatic improvement in the Xeon's speed.

---------

EDIT: I can't find a benchmark that runs the Xeon 8176 exactly as we like. The closest run I found is: https://www.spec.org/cpu2006/results/res2017q3/cpu2006-20170710-47735.html

This runs 112 identical copies of the benchmarks across the 56 cores (2 threads each). Divide the rate score by 56 to get a "pseudo single-core" score. Yes, with AVX-512 enabled (-xCORE-AVX512). We can see that none of the SPECint2006 scores vary dramatically from Anandtech's single-threaded results.
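Just to spell out that division (the rate score below is hypothetical, purely to illustrate; the real per-test numbers are at the SPEC link above):

```python
copies = 112   # benchmark copies in the SPEC rate run
cores = 56     # physical cores, 2 threads each

rate_score = 1400.0              # hypothetical aggregate score, illustration only
per_core = rate_score / cores    # the "pseudo single-core" figure used above
per_copy = rate_score / copies   # per-thread view, if you prefer that
print(per_core, per_copy)        # 25.0 12.5
```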

I don't think AVX-512 will matter much on 64-bit integer-oriented code like the SPECint2006 suite.
 
Joined
Nov 4, 2019
Messages
234 (0.14/day)
I am curious how you came to that conclusion.
I actually looked up GFXBench, which is cross-platform, fairly well regarded, and has no known bias toward any platform.


I am comparing the A12Z (from the 4th-gen iPad Pro, faster than the A12X) to the 1060.

For Aztec High Offscreen, the most demanding test, the A12Z recorded 133.8 vs 291.1 on the GTX 1060.

So my question to you is: are you expecting the A14X to more than double its graphics performance?
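For reference, the gap implied by those two numbers is a quick one-liner:

```python
gtx1060, a12z = 291.1, 133.8     # GFXBench Aztec High Offscreen scores quoted above
print(f"{gtx1060 / a12z:.2f}x")  # ~2.18x, hence "more than double"
```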

First of all, Apple needs to conclusively beat the GTX 1060 mobile chip, not the desktop one (since both are mobile form factors, and Apple can simply clock their GPU up to get desktop performance, the same as NVIDIA does). GPUs are also very complex; you'll see very different results depending on the game or benchmark (a CPU is very simple in comparison, contrary to what the Geekbench haters say).

The Surface Book 2 with a GTX 1060 scores 330k in 3DMark Ice Storm and the iPad Pro A12Z scores 220k (GPU test; on the overall score the iPad has a faster CPU than the Surface, which makes Apple look even better). So they only need 50 percent more performance.

Considering that the A12X/Z is basically two years old, I think Apple can add more than 50 percent to GPU performance. I'm actually expecting 1.5x for the iPad and 2x for the ARM Mac models, at least. That would also be enough to beat the GPUs in their existing models, like the 5500M.

(CPU-wise, the 2020 iPad Pro's A12Z is twice as fast as 2017's A10X, and it's 50 percent faster GPU-wise.)
 
Joined
Aug 15, 2017
Messages
18 (0.01/day)
First of all, Apple needs to conclusively beat the GTX 1060 mobile chip, not the desktop one (since both are mobile form factors, and Apple can simply clock their GPU up to get desktop performance, the same as NVIDIA does). GPUs are also very complex; you'll see very different results depending on the game or benchmark (a CPU is very simple in comparison, contrary to what the Geekbench haters say).

The Surface Book 2 with a GTX 1060 scores 330k in 3DMark Ice Storm and the iPad Pro A12Z scores 220k (GPU test; on the overall score the iPad has a faster CPU than the Surface, which makes Apple look even better). So they only need 50 percent more performance.

Considering that the A12X/Z is basically two years old, I think Apple can add more than 50 percent to GPU performance. I'm actually expecting 1.5x for the iPad and 2x for the ARM Mac models, at least. That would also be enough to beat the GPUs in their existing models, like the 5500M.

(CPU-wise, the 2020 iPad Pro's A12Z is twice as fast as 2017's A10X, and it's 50 percent faster GPU-wise.)

You didn't specify you were comparing to the mobile variant.

https://www.anandtech.com/show/13661/the-2018-apple-ipad-pro-11-inch-review/6 (I see you got the results from here)

FYI that is a really old test, based on rather ancient APIs.

If you look at Anandtech's other mobile SoC benchmarks, GFXBench Aztec is used far more often, and they also calculate fps/W for those results, so I would think that's a better comparison.

Anyway, I guess the bottom line is you think it will get a 50% performance boost over the A12X/Z. I have no qualms with that and think it's reasonable. I am still not convinced it will match the laptop variant of the 1070, though, unless it is extremely thermally constrained.
 
Joined
Nov 4, 2019
Messages
234 (0.14/day)
You didn't specify you were comparing to the mobile variant.

https://www.anandtech.com/show/13661/the-2018-apple-ipad-pro-11-inch-review/6 (I see you got the results from here)

FYI that is a really old test, based on rather ancient APIs.

If you look at Anandtech's other mobile SoC benchmarks, GFXBench Aztec is used far more often, and they also calculate fps/W for those results, so I would think that's a better comparison.

Anyway, I guess the bottom line is you think it will get a 50% performance boost over the A12X/Z. I have no qualms with that and think it's reasonable. I am still not convinced it will match the laptop variant of the 1070, though, unless it is extremely thermally constrained.

You don't have to specify it; I'm just saying there are different forms of victory: one is beating the mobile chip, the other the desktop one. It could easily beat both; it is up to Apple. And different games will have different results. For example, the ALU (arithmetic logic unit) portion of Apple GPUs is really strong. It all depends on what kind of visuals you are rendering.

Also, it gets complicated depending on whether we are talking about the iPad version or the presumably much stronger desktop versions.
 
Joined
Aug 15, 2017
Messages
18 (0.01/day)
I'm just saying there are different forms of victory: one is beating the mobile chip, the other the desktop one. It could easily beat both; it is up to Apple.

You have a tendency to make claims without any support IMO. "Easily" beat a discrete desktop GPU?

the ALU (arithmetic logic unit) portion of Apple GPUs is really strong. It all depends on what kind of visuals you are rendering.
Again, how is this relevant? The benchmark I used is cross-platform, without any known bias. If you want to talk about specific hardware, the bandwidth alone is severely limited since it uses LPDDR4/5 RAM.

"Also it gets complicated if we are talking about the iPad version, or the presumably much stronger desktop versions."

Your original statement was that the new iPad can easily run PS4/Xbox games and be faster than a 1070. Don't move the goalposts here.
 
Joined
Nov 4, 2019
Messages
234 (0.14/day)
You have a tendency to make claims without any support IMO. "Easily" beat a discrete desktop GPU?


Again, how is this relevant? The benchmark I used is cross-platform, without any known bias. If you want to talk about specific hardware, the bandwidth alone is severely limited since it uses LPDDR4/5 RAM.

"Also it gets complicated if we are talking about the iPad version, or the presumably much stronger desktop versions."

Your original statement was that the new iPad can easily run PS4/Xbox games and be faster than a 1070. Don't move the goalposts here.

You are plainly ignoring the meaning of the words. Can Apple silicon easily beat a GTX 1060? Yes. I never said 1070, and I haven't moved any goalposts; you are just making stuff up now. And it can easily run PS4 games; that is mainly a comment on the quality of the CPU side of things, and on the fact that the iPad, for example, has double the memory bus width of the Switch and other mobile products. The iPad Pro already has a desktop-class 128-bit GPU bus, just like a GTX 1650, but it can't use GDDR6 for power consumption reasons, of course. Stick GDDR6 on a desktop Apple CPU/GPU chip and voila, then it really gets exciting.
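To put rough numbers on that bus-width point (a back-of-the-envelope sketch; the data rates are typical figures for LPDDR4X and GTX 1650-class GDDR6, not official specs):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gt_s: float) -> float:
    # bytes per second = (bus width in bits / 8) * transfers per second
    return bus_width_bits / 8 * data_rate_gt_s

# 128-bit LPDDR4X at ~4.266 GT/s, roughly the iPad Pro configuration
print(bandwidth_gb_s(128, 4.266))  # ~68 GB/s
# 128-bit GDDR6 at 12 GT/s, roughly a GTX 1650 GDDR6 configuration
print(bandwidth_gb_s(128, 12.0))   # 192 GB/s
```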
 
Joined
Aug 15, 2017
Messages
18 (0.01/day)
For example the ALU portion (arithmetic logic unit) of Apple GPUs is really strong. It all depends on what kind of visuals you are rendering.
To be honest, when you wrote that, it really showed that you don't know much about GPUs. All GPUs contain arithmetic logic units, which are essentially just hardware that does math. To put it simply, they are all part of the GPU's performance, and a combination of that plus factors like memory bandwidth leads to a given level of rasterization performance.

The way you described the ALU, making it sound like it excels at a certain kind of graphical workload, just tells me you fail to understand that it's still part of rasterization performance. Not sure why you can't just concede you made a groundless claim instead of doubling down with that ALU comment.
 