# Apple A14X Bionic Rumored To Match Intel Core i9-9880H



## Uskompuf (Aug 18, 2020)

The Apple A14X Bionic is an upcoming processor from Apple which is expected to feature in the next iPad Pro models and should be manufactured on TSMC's 5 nm node. Tech YouTuber Luke Miani has recently provided a performance graph for the A14X chip based on "leaked/suspected A14 info + average performance gains from previous X chips". In these graphs, the Apple A14X can be seen matching the Intel Core i9-9880H in Geekbench 5 with a score of 7480. The Intel Core i9-9880H is a 45 W eight-core mobile CPU found in high-end notebooks such as the 2019 16-inch MacBook Pro, and it requires significant cooling to keep thermals under control.

If these performance estimates are correct, or even close, then Apple will have a serious productivity device, and the A14X will serve as a strong basis for Apple's transition to custom CPUs for its MacBooks in 2021. According to Luke Miani, Apple may use a custom version of the A14X with slightly higher clocks in its upcoming ARM MacBooks. These results are estimations at best, so take them with a pinch of salt until Apple officially unveils the chip.





*View at TechPowerUp Main Site*


----------



## Vya Domus (Aug 18, 2020)

No, it won't. God, I hate Geekbench, literally the only benchmark that you see or hear about whenever there is something Apple related.


----------



## Xex360 (Aug 18, 2020)

Rubbish. If the chip is used to run specialised code it can run fast, but in general-purpose workloads they can only dream of coming close to Intel/AMD.
As usual very misleading marketing...


----------



## Flanker (Aug 18, 2020)

I'll believe it when it runs some open source benchmark side by side


----------



## Vayra86 (Aug 18, 2020)

Yes, Apple, as long as you have your slow-as-molasses iOS to pair with your great CPUs, it all looks very smooth and fast. Meanwhile, latency is pretty high on all your devices. It's a nice hiding trick, but in raw performance it's not as special as many think it is.

I'll take x86 and some real responsiveness, ty

ARM is a joke



| Device | Latency (ms) | Year |
|---|---|---|
| iPad Pro 10.5" + Pencil | 30 | 2017 |
| iPad Pro 10.5" | 70 | 2017 |
| iPhone 4S | 70 | 2011 |
| iPhone 6S | 70 | 2015 |
| iPhone 3GS | 70 | 2009 |
| iPhone X | 80 | 2017 |
| iPhone 8 | 80 | 2017 |
| iPhone 7 | 80 | 2016 |
| iPhone 6 | 80 | 2014 |
| Game Boy Color | 80 | 1998 |
| iPhone 5 | 90 | 2012 |
| BlackBerry Q10 | 100 | 2013 |
| Huawei Honor 8 | 110 | 2016 |
| Google Pixel 2 XL | 110 | 2017 |
| Galaxy S7 | 120 | 2016 |
| Galaxy Note 3 | 120 | 2016 |
| Moto X | 120 | 2013 |
| Nexus 5X | 120 | 2015 |
| OnePlus 3T | 130 | 2016 |
| BlackBerry KeyOne | 130 | 2017 |
| Moto E (2G) | 140 | 2015 |
| Moto G4 Play | 140 | 2017 |
| Moto G4 Plus | 140 | 2016 |
| Google Pixel | 140 | 2016 |
| Samsung Galaxy Avant | 150 | 2014 |
| Asus ZenFone 3 Max | 150 | 2016 |
| Sony Xperia Z5 Compact | 150 | 2015 |
| HTC One M4 | 160 | 2013 |
| Galaxy S4 Mini | 170 | 2013 |
| LG K4 | 180 | 2016 |


----------



## Frick (Aug 18, 2020)

What will be interesting is how well various software will run on the final machines.


----------



## Searing (Aug 18, 2020)

It's always funny to watch the anti-Apple CPU people crawl out of the cracks and say inane things. Yes, Geekbench is perfectly fine for comparisons, and latency is not a problem on Apple devices. We have much more than Geekbench for comparisons: we can do server tests à la SPEC and Apple comes out ahead. We can export video or calculate pi and it comes out ahead. We could even play games like Fortnite (before) and come out ahead. It shouldn't surprise anyone that the huge dual-core CPUs in iPhones are fast when they use more transistors than Intel's, and they will also have a 5 nm EUV advantage over Intel, making it trivial to beat Intel.

The A14X will run Shadow of the Tomb Raider and any other PS4/Xbox game. Not surprising, since it will match a GTX 1060 easily enough but only need 10-15 W. The Switch is circa-2014 hardware (a Galaxy Note 4 CPU plus half a GTX 750), so imagine a Switch that is 6 years more advanced and there you have it.

ARM is just an instruction set. The FX-8350 from AMD and the Intel 10900K have as much in common (both x86) as any Apple CPU and any other ARM CPU: barely anything. You can't say "ARM is slow" or "ARM sucks" unless you are truly ignorant. The only thing keeping Intel half decent is high clock speeds; in fact, run your 10900K in dual-core mode at 2.4 GHz and compare it with the iPhone and you'll see how slow it really is in comparison. Like for like. And those clock speeds are going to climb massively when Apple sticks the A14X in a desktop or laptop.


----------



## M2B (Aug 18, 2020)

Apple's current Lightning core inside the iPhone 11 consumes about 5 W of power at its maximum rated frequency of 2.6 GHz.
Assuming it's the same story with the A14X and you have eight of them plus bigger caches, you'll be looking at the same power draw as the AMD and Intel H-series mobile chips.
I'm sure the A14X will be a very strong chip, but it's not going to be beyond what is available from Intel and especially AMD.
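A quick back-of-envelope sketch of this estimate (the per-core wattage and core count are the thread's own claims; the uncore budget is an assumed placeholder, not a measurement):

```python
# Naive check of the power estimate above: eight big cores at the ~5 W
# per-core peak figure claimed for the A13's Lightning core, plus a guessed
# budget for caches/fabric, lands near a 45 W H-series laptop chip.

PER_CORE_PEAK_W = 5.0   # claimed peak draw of one Lightning core
BIG_CORES = 8           # hypothetical A14X big-core count
UNCORE_W = 3.0          # assumed allowance for caches, fabric, memory controller

total_w = PER_CORE_PEAK_W * BIG_CORES + UNCORE_W
print(f"Naive all-cores-at-peak estimate: {total_w:.0f} W")  # 43 W

# In practice mobile SoCs clock down under all-core load, so the sustained
# figure would sit well below this naive sum.
```

This is the simplest reading of the claim; it ignores DVFS entirely, which is exactly the point made later in the thread.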


----------



## Vayra86 (Aug 18, 2020)

Searing said:


> It's always funny to watch the anti Apple CPU people crawl out of the cracks and say inane things. Yes Geekbench is perfectly fine for comparisons, and latency is not a problem on Apple devices. We have much more than Geekbench for comparisons, we can do server tests ala SPEC and Apple comes out ahead. We can export video or calculate Pi and it comes out ahead. We can even play games like Fortnite (before) and come out ahead. It shouldn't surprise anyone that the huge dual core CPUs in iPhones are fast when they use more transistors than Intel, and they will also have a 5nm EUV advantage over Intel, making it trivial to beat Intel.
> 
> A14X will run Shadow of the Tomb Raider and any other PS4/Xbox game. Not surprising since it will match a GTX 1060 easily enough, but only need 10-15W. The Switch is circa 2014 hardware (Galaxy Note 4 CPU plus half a GTX 750) so imagine a Switch that is 6 years more advanced and there you have it.
> 
> ARM is just an instruction set. The FX-8350 from AMD and the Intel 10900k have as much in common (both x86) as any Apple CPU and any other ARM CPU. Barely anything in common. You can't say "ARM is slow or ARM sucks" unless you are truly ignorant. The only thing keeping Intel half decent is high clock speeds, in fact run your 10900k in dual core mode at 2.4ghz and compare with the iPhone and you'll see how slow it really is in comparison. Like for like. And those clock speeds are going to climb massively when Apple sticks the A14X in a desktop or laptop.



Oh no, I have this from my own experience as well. Try doing something alongside your main work on an Apple device... it will slow down to a crawl, it will even hang at times. I have had this experience on phones and tablets of the brand. The same thing applies with Android, and there it's possibly even worse. You have lots of cores, but any work pushed concurrently is going to drastically impact overall performance. A big part of the equation is, like you say, power. Another big one is refresh rates and the input device: touch is incredibly slow, which is why the Pencil iPad shows the best-case scenario for iOS.

The point is mainly... ARM is a very tight ecosystem right now, x86 is pretty broad, but the latter is also much more refined in its versatility. It will certainly handle multiple varying tasks better and maintain responsiveness and its performance scales more linearly instead of dropping off hard at some point.

Keep in mind the general use cases vary along with the different CPUs. The ARM devices get tailor made stuff for their limited resources. And that is also why the Geekbenches are so utterly pointless.


----------



## Searing (Aug 18, 2020)

No, it is not 5 W per core, it is 5 W total, including the GPU and everything. The iPad is 4 main cores and a GPU for 10 W. 8 cores and double the GPU would be less than 20 W. No, ARM does not slow down if you do more tasks. I'm a web developer and I have an iPad Pro, an iPhone SE, a MacBook Pro and a $5000 PC in front of me. Thank God I can use a mouse and keyboard with my iPad Pro now. And this is two-year-old hardware; the new one will absolutely crush my Windows laptop.


----------



## Dredi (Aug 18, 2020)

Vayra86 said:


> Yes, Apple, as long as you have your slow as molasses IOS to pair with your great CPUs, it all looks very smooth and fast. Meanwhile, latency is pretty high on all your devices. Its a nice hiding trick but in raw performance, its not all that special as many think it is.
> 
> I'll take x86 and some real responsiveness, ty
> 
> ...



30 ms total system latency is pretty good though, considering the input device. I bet your system would not be appreciably faster with a comparable input method. Even in games like CS:GO the total system latency (with a 1000 Hz mouse) is around that same 30 ms. https://blurbusters.com/gsync/preview2/

edit: your table has the 60 Hz display version of the iPad Pro. Just by swapping to the later models with a 120 Hz display you should get at least 8 ms off that result, making it even more impressive. The 30 ms figure you posted for the 2017 model means the total lag is less than two frames, so assuming the need for a frame buffer (there definitely is one) it can't realistically be any faster.
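The frame arithmetic behind this works out as a couple of one-liners (refresh rates are standard panel specs; nothing here is measured):

```python
# At 60 Hz a frame is ~16.7 ms, so a 30 ms end-to-end pencil latency is under
# two frames; a 120 Hz panel halves the frame time, shaving ~8 ms off the chain.

def frame_ms(refresh_hz: float) -> float:
    """Duration of one display refresh in milliseconds."""
    return 1000.0 / refresh_hz

latency_ms = 30.0                         # pencil latency from the table above
frames_60 = latency_ms / frame_ms(60)     # latency expressed in 60 Hz frames
saving_ms = frame_ms(60) - frame_ms(120)  # per-frame saving from 120 Hz

print(f"{frames_60:.2f} frames at 60 Hz")      # 1.80 frames
print(f"~{saving_ms:.1f} ms saved at 120 Hz")  # ~8.3 ms
```

So the measured 30 ms really is within two 60 Hz refresh intervals, consistent with a single buffered frame plus scan-out.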


----------



## Vayra86 (Aug 18, 2020)

Searing said:


> No it is not 5w per core it is 5w total including the GPU and everything. The iPad is 4 main cores and GPU for 10w. 8 cores and double GPU would be less than 20w. No arm does not slow down if you do more tasks. I’m a web developer and I have an iPad Pro and iPhone SE and MacBook Pro and a $5000 PC in front of me. Thank God I can use a mouse and keyboard with my iPad Pro now. And this is two year old hardware the new one will absolutely crush my windows laptop.



Once Apple gets to the point of running x86-equivalent software on ARM we can truly see what's what. Until then, I'm not quite convinced just yet.


----------



## laszlo (Aug 18, 2020)

Apple is confident enough to move to ARM; in conclusion, they already know that performance/power will be better than what current (maybe even future) CPUs can offer.

As they make the chip themselves, it gives them the unique opportunity to build it the best way for, I assume, a new OS; I don't think we'll see higher latencies, maybe lower... anyway, we need to wait for the first batch of leaks.


----------



## Dredi (Aug 18, 2020)

Vayra86 said:


> Once Apple gets to the point of running x86-equivalent software on ARM we can truly see what's what. Until then, I'm not quite convinced just yet.


The devkit results are pretty good already, with customer devices coming in the fall (with full desktop SW stack).


----------



## M2B (Aug 18, 2020)

Searing said:


> No it is not 5w per core it is 5w total including the GPU and everything. The iPad is 4 main cores and GPU for 10w. 8 cores and double GPU would be less than 20w. No arm does not slow down if you do more tasks. I’m a web developer and I have an iPad Pro and iPhone SE and MacBook Pro and a $5000 PC in front of me. Thank God I can use a mouse and keyboard with my iPad Pro now. And this is two year old hardware the new one will absolutely crush my windows laptop.







If you run all the CPU & GPU cores at max frequency the chip can consume up to 20 W, but of course Apple doesn't allow that to happen and limits the maximum power available.


----------



## Vya Domus (Aug 18, 2020)

Imagine believing that a single-digit-watt SoC will outperform a 45 W Intel chip with a 4.8 GHz single-core turbo.

Apple fanboys are something else.


----------



## Dredi (Aug 18, 2020)

Vya Domus said:


> Imagine believing that a single digit W SoC will outperform a 45W Intel chip with 4.8 Ghz single core turbo.
> 
> Apple fanboys are something else.


SPEC has its own issues as a metric, but at least in that they are already within punching distance. They would not bring a Pro-series laptop to market with the new chip unless it performed.


M2B said:


> Apple's current lightning core inside the iPhone 11 consumes about 5W of power at the maximum 2.6GHz rated frequency.
> Assuming it's the same story with the A14X and you have eight of them + bigger caches, you"ll be looking at the same Power draw as the AMD and Intel H series mobile chips.
> I'm sure the A14X will be a very strong chip, but it's not going to be beyond what is available from Intel and especially AMD.


A new process as well: 5 nm EUV vs N7P. According to TSMC it gives 15-25% more frequency at the same power. Assuming the new chip is designed around a higher maximum power draw, we might even see 4 GHz in single-threaded loads, which would push it well beyond Skylake in SPEC.
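The quoted TSMC range can be turned into a rough clock estimate (the ~2.66 GHz base clock and the 15-25% iso-power gain are the figures from this discussion; this is napkin math, not a projection):

```python
# Frequency headroom implied by the N5-vs-N7P claim above, starting from the
# A13's roughly 2.66 GHz peak clock.

BASE_GHZ = 2.66
for gain in (0.15, 0.25):
    print(f"+{gain:.0%}: {BASE_GHZ * (1 + gain):.1f} GHz")
# +15%: 3.1 GHz, +25%: 3.3 GHz -- reaching ~4 GHz would additionally require
# a higher power budget (laptop-class cooling), not just the node shrink.
```

In other words, the process alone covers only part of the gap to 4 GHz; the rest has to come from the design targeting a bigger power envelope.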


----------



## Mescalamba (Aug 18, 2020)

Alternative CPUs are moot until they can dual-boot Windows. Trying to run x86 games of the past on something else might be entertaining for a while, but when you need to work hard to make it work every single time, it becomes really annoying really fast.


----------



## Dredi (Aug 18, 2020)

Mescalamba said:


> Alternative CPUs are moot until they can dual boot Win. Trying to run x86 games of the past on something else might entertaining for a while, but when you need to work hard to make it work every single time, it becomes really annoying really fast.


I don’t think that these are meant to be gaming console replacements.


----------



## Vayra86 (Aug 18, 2020)

laszlo said:


> Apple is confident enough to move on arm ;in conclusion - they already know  that performance/power will be better than what current(maybe even future) cpu's can offer.
> 
> as they'll make it gives them the unique opportunity to create it the best way to be used by i assume a new os ; i don't think we'll see higher latencies maybe lower ....anyway we need to wait the 1st batch leaks..



No no, they know they can maintain a certain user experience within the boundaries they set for these users.

Apple controls the software side, which is why they can do this and why they can make it seem like ARM is suddenly a magical do-all. They've done the exact same with Intel's x86 CPUs on current software, where they also offer very little hardware for the money but still have a good user experience, and decent enough performance.

At the same time, you don't see Apple in HPC for heavy crunch loads, and they are non-existent in any half-serious enterprise landscape except as individual workstations. Server, data... nope. They offer machines that are great terminals and pretty shitty at everything else. And even as a terminal, don't you dare think of advanced/power-user functionality. It's just not there.


----------



## Frick (Aug 18, 2020)

Vayra86 said:


> No no, they know they can maintain a certain user experience within the boundaries they set for these users.
> 
> Apple controls the software side, which is why they can do this and why they can make it seem like ARM is suddenly a magical do-all. They've done the exact same with Intel's x86 CPUs on current software, where they also offer very little hardware for the money but still have a good user experience, and decent enough performance.
> 
> At the same time you don't see Apple in HPC for heavy crunch loads and they are non existant in any half serious enterprise landscape, except as individual workstations. Server, data... nope. They offer machines that are great terminals, and pretty shitty at everything else. And even as a terminal, don't you dare think of advanced/power user functionality. Its just not there.



Are you calling Mac Pros terminals?


----------



## Dredi (Aug 18, 2020)

Frick said:


> Are you calling Mac Pros terminals?


I guess he is. 

I’d also like to know why/how Apple would enter the HPC market, when they didn't have any hardware of their own until now.


----------



## Vayra86 (Aug 18, 2020)

Frick said:


> Are you calling Mac Pros terminals?



Might not be the best descriptor, although with the increasing dependence on cloud... hmmm


----------



## Fourstaff (Aug 18, 2020)

Given the rate of improvement, even if the A14X is only half as powerful as the 9880H, they will catch up within a few generations.


----------



## laszlo (Aug 18, 2020)

Vayra86 said:


> No no, they know they can maintain a certain user experience within the boundaries they set for these users.
> 
> Apple controls the software side, which is why they can do this and why they can make it seem like ARM is suddenly a magical do-all. They've done the exact same with Intel's x86 CPUs on current software, where they also offer very little hardware for the money but still have a good user experience, and decent enough performance.
> 
> At the same time you don't see Apple in HPC for heavy crunch loads and they are non existant in any half serious enterprise landscape, except as individual workstations. Server, data... nope. They offer machines that are great terminals, and pretty shitty at everything else. And even as a terminal, don't you dare think of advanced/power user functionality. Its just not there.



let's not agree to disagree at this point as we don't have a clue what they prepare...


----------



## Vya Domus (Aug 18, 2020)

Fourstaff said:


> Given the rate of improvement, even if A14X is even half as powerful as 9880H they will catch up within a few generations.



In a few generations Intel (or AMD) will have new processors as well; the 9880H is still basically an ancient Skylake CPU. Also, the rate of improvement has hard limits: there are only so many execution units and in-flight instructions you can add before it makes no difference in the real world.


----------



## Fourstaff (Aug 18, 2020)

Vya Domus said:


> In a few generations Intel (or AMD) will have new processors as well, the 9880H is still basically an ancient Skylake CPU. Also the rate of improvement has hard limits, there are so many execution units and in-flight instructions you can add before it makes no difference in the real world.


 
That's true, but if they are at Skylake levels of performance it will be "good enough" for most people. According to the Steam hardware survey, most people are still on 6 cores or fewer: https://store.steampowered.com/hwsurvey/cpus/. Hardly any have cutting-edge 8C or better processors.


----------



## Frick (Aug 18, 2020)

Vayra86 said:


> Might not be the best descriptor, although with the increasing dependance on cloud... hmmm



By that logic any internet connected machine that has client software is a terminal.


----------



## Vayra86 (Aug 18, 2020)

Frick said:


> By that logic any internet connected machine that has client software is a terminal.



That would be correct. Consider a Chromebook...

With the push for cloud, we are fast going for global mainframes. Anyway. Grossly offtopic I guess


----------



## Punkenjoy (Aug 18, 2020)

The first 80% of performance is easily obtainable when making a CPU; it's the last 20% that gets harder, and the closer you get to 100% the more work you have to put in.

It's also always funny that everybody's future CPU beats a 1-2 year old CPU. But in the end, they will be fighting a different architecture.

And Apple has a lot of silicon dedicated to accelerators, and since they live in a closed environment where they control everything, they can easily make use of them. That is actually a good strategy, but it comes with downsides.

The truth is it will be hard to get a real idea of the performance difference between Apple's CPUs and the rest of the market. It might end up as a fight between a closed, controlled platform where everything can be set the way Apple wants and an open environment where everyone is free to do what they want.


----------



## king of swag187 (Aug 18, 2020)

Vya Domus said:


> No, it wont. God I hate geekbench, literately the only benchmark that you see or hear about whenever there is something Apple related.


Finally, someone who realizes how flawed it is. Also, really? @author of the article, using some random "tech" YouTuber who has no idea what he's talking about for a news piece? Wow, this site has gone down in quality recently.



Vayra86 said:


> Yes, Apple, as long as you have your slow as molasses IOS to pair with your great CPUs, it all looks very smooth and fast. Meanwhile, latency is pretty high on all your devices. Its a nice hiding trick but in raw performance, its not all that special as many think it is.
> 
> I'll take x86 and some real responsiveness, ty
> 
> ...



realistically what the hell is this even supposed to mean


----------



## dragontamer5788 (Aug 18, 2020)

Let's actually talk Geekbench for a sec. I know Geekbench 3 was highly flawed, but why does everyone think Geekbench 4 is bad?

Here's Geekbench4's workload: https://www.geekbench.com/doc/geekbench4-cpu-workloads.pdf

Now, I recognize that a lot of Geekbench 4's benchmarks fit inside the L1 cache, but that's more a testament to how big L1 caches have gotten (128 kB on the iPhone). Let's be frank: if a 128 kB L1 cache is what the modern consumer needs, then we should be blaming AMD/Intel for failing to grow their L1 to 128 kB (AMD/Intel still have 32 kB L1 data caches).

Let's really look at Geekbench 4's benchmarks. Unlike Geekbench 3, AES is downgraded to just another test instead of its own category. (And mind you, AMD Zen 2 and Intel Xeons have doubled their AES pipelines recently: AES remains an important workload.) There's JPEG compression (emulating a camera), HTML5 parsing, Lua scripting, an SQLite database, and PDF rendering. Lots of good workloads here, very similar to the wide variety of workloads of the modern, average consumer. Even an LLVM compile (3,900 lines of code).

There's a bunch of "synthetics" too: 450 kB LZMA compression, Dijkstra, Canny (computer vision), a 300x300 ray tracer, etc. A bunch of tiny synthetics.

--------------

Geekbench 4 is what it is: a small test of the L1 caches and turbos of modern processors. It's probably closer to the average phone user's, or even desktop user's, workflow than SPEC, LINPACK, or HPCG.

But yes, the iPhone crushes Geekbench, because the iPhone has a 128 kB L1 cache. But is that a legitimate reason to call the test inaccurate? We can't just hate a test because we disagree with the results. You should instead attack the fundamental setup of the test and tell us why it's inaccurate.

It's pretty insane that the iPhone has 128 kB of L1 cache per core. Yeah, that's its secret to crushing Geekbench 4, and it's pretty obvious. But Intel Skylake's L2 cache is only 256 kB and AMD Zen 2's L2 is 512 kB. Having such a large L1 cache is a testament to the A12 design (larger caches are usually slower; making a cache that large work as L1 must have been difficult).
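The cache-fit argument can be sketched directly (the cache and buffer sizes are the ones named in this post; the "small synthetic" size is a hypothetical stand-in for an L1-resident kernel, not a real Geekbench figure):

```python
# Check which of the quoted working sets fit entirely in each chip's cache
# level. A workload that fits in L1 on one chip but spills to L2 on another
# will benchmark very differently, which is the core of the argument above.

KIB = 1024
caches = {                        # per-core capacities cited in the post
    "Apple A12/A13 L1D": 128 * KIB,
    "Intel Skylake L1D": 32 * KIB,
    "Intel Skylake L2": 256 * KIB,
    "AMD Zen 2 L2": 512 * KIB,
}
workloads = {
    "LZMA buffer": 450 * KIB,      # Geekbench 4's compression input
    "small synthetic": 100 * KIB,  # hypothetical L1-resident kernel
}

for wname, wsize in workloads.items():
    fits = [cname for cname, csize in caches.items() if wsize <= csize]
    print(f"{wname} ({wsize // KIB} KiB) fits in: {', '.join(fits) or 'none'}")
```

The 100 KiB case lands inside Apple's L1 but only inside Intel's L2, so the same code pays different latency per access on each chip.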


----------



## Searing (Aug 18, 2020)

M2B said:


> View attachment 165917
> 
> If you run all the CPU & GPU cores at Max frequency the chip can consume up to 20W, but of course apple doesn't allow that to happen and limits the maximum power available.



You say 20 W, yet you post a picture that clearly says 4.61 W for SPEC integer and 5.04 W for SPEC floating point. So no, it isn't 20 W. And the chart also shows the 2.6 GHz A13 beating the i9-9900K in integer performance, so imagine the A14.

Sometimes I feel like I'm a developer with a basic grasp of graph reading, arguing with people who "know things" and will even post proof of the opposite of what they are saying.

Total system power is 3 W to 6 W during gaming.






Vya Domus said:


> Imagine believing that a single digit W SoC will outperform a 45W Intel chip with 4.8 Ghz single core turbo.
> 
> Apple fanboys are something else.



Educate yourself. Limited TDP is where Intel does very badly right now; that's why AMD is ahead with Ryzen 4000. Run your 9900K with 2 cores and 5 W and watch it squirm. We are not fanboys; we've been watching Intel do absolutely nothing for years: no improvements in the manufacturing node, no changes to core or GPU design. Hopefully Tiger Lake keeps Intel in the race.


----------



## M2B (Aug 18, 2020)

Searing said:


> You say 20W yet you post a picture that clearly says 4.61W for SPEC integer and 5.04W for SPEC floating point. So no, it isn't 20W.



That 5 W figure is for A SINGLE LIGHTNING CORE at max frequency. The phone itself won't surpass its power/thermal limits of course, and if you run a proper multicore workload it will throttle down. Seems like you are not that good at graph reading, Mr. Developer.


----------



## TheoneandonlyMrK (Aug 18, 2020)

Vya Domus said:


> Imagine believing that a single digit W SoC will outperform a 45W Intel chip with 4.8 Ghz single core turbo.
> 
> Apple fanboys are something else.


Dreamy lot, aren't they? Geekbench is like Apple's Cinebench, except worse.

At least Cinebench can be (and is) run in a loop, so 30-second turbo modes don't cheat the figures. Geekbench is total balls as comparison software.


----------



## Searing (Aug 18, 2020)

M2B said:


> That 5W figure is for A SINGLE LIGHTNING CORE at max frequency, the phone itself won't surpass the power/thermal limits of course and if you run a proper multicore workload it will throttle down. Seems like You are not that good at graph reading Mr. Developer.



That's not how it works. Again, developer here. You don't get double the power consumption using both cores; there's no complete power gating here. That's why I showed you the 6 W max for everything in the GPU test.

The iPhone doesn't turn off one core and run at half power when you do the test. In single-core performance it takes less than 6 W and already beats the 9900K in integer performance. In multicore it throttles down slightly and runs two cores at about the same power. And anyway, we are talking about the 5 nm EUV A14; that one will beat Intel easily. Give it higher clock speeds, à la laptop or desktop form factor, and it will most likely beat Intel in FP as well.

Don't quote Anandtech and then ignore where they say Apple is already faster in integer than the 9900K, and that was a year ago. I have an Epyc 24-core server, a 10900 development machine, a MacBook Pro and a Ryzen 4000 laptop in the house, plus an iPad Pro and iPhone (work pays for stuff). I have no problems with performance. The question is why you believe it isn't fast. Anandtech, every benchmark, every program, and tons of YouTube videos are out there; go have fun.


----------



## TheoneandonlyMrK (Aug 18, 2020)

Searing said:


> That's not how it works. Again, developer here. You don't go to double the power consumption using both cores. That's why I showed you the 6W max for everything in the GPU test.
> 
> The iPhone doesn't turn off one core and run at half power when you do the test. In singe core performance it takes less than 6W and beats the 9900k in integer performance already. In multicore it throttles down slightly and runs two cores at about the same power. And anyways we are talking about the 5nm EUV A14, that one will beat Intel easily. Give it higher clock speeds ala laptop or desktop form factor and it will beat Intel in FP also most likely.


Simply impossible. The node they use (5 nm) limits any possibility of higher clocks; you think they are not already pushing it via burst algorithms? Apple's chips would get wrecked in a sustained workload against that 9900K, never mind a more modern x86 core.

If it's a Phillips head, use a Phillips screwy;
a flat head, use an old-school screwy.


Just surfing or lightweight tasks: use ARM.
Actual work or running simulations 24/7 etc.: use x86.

simple.


----------



## Searing (Aug 18, 2020)

theoneandonlymrk said:


> Simply impossible, the node they use(5Nm) limits any possibility of higher clocks, you think they are not already pushing it via burst algorithms, and apples chips would get wrecked on a sustained workload against that 9900K, never mind a more modern x86 core.



We are talking about the chips, not the form factor. Stick them in an iPad and the sustained performance is higher. Sustained means nothing; just stick the A14X in a laptop form factor and it would be sustained. They are only at 6 W and 2.6 GHz and you think they can't go higher. OK... *backs away slowly*

ARM is an ISA; it has nothing to do with how fast the CPU can be.









*ARM-based Japanese supercomputer is now the fastest in the world: Fugaku is being used in COVID-19 research* (www.theverge.com)


----------



## Vya Domus (Aug 18, 2020)

Searing said:


> Educate yourself.



No, you should be the one educating yourself on what's feasible and what isn't.



Searing said:


> Sustained means nothing



Come on. Sustained means nothing, right; the one thing you know Apple's chips are horrible at in terms of scalability means nothing. Got it.




dragontamer5788 said:


> *Geekbench4 is what it is: a small test for testing L1 cache and Turbos of modern processors.*
> *...*
> *Its pretty insane that the iPhone has a 128kB L1 cache per core. Yeah, that's its secret to crushing Geekbench4 and its pretty obvious. *



It's also pretty obvious why it's a horrible benchmark, precisely because of that. We both know those patterns have little to do with the real world. Samsung tried to optimize their cores for Geekbench as well, and indeed they are second only to Apple, except their chips perform worse in real-world tasks than vanilla ARM designs that get half the score Samsung's cores do. Is that still not enough to prove something is terribly wrong with this benchmark?



dragontamer5788 said:


> Having such a large L1 cache is a testament to the A12 design (larger caches are usually slower. Having such a large cache as L1 must have been difficult to make).



Anyone can put in large caches; there is nothing amazing about that. In fact, it's a pretty poor strategy, especially in a mobile chip: caches don't just get slower as they grow larger, they also use a lot of power. That's probably one of the reasons their chips have always had horrendous multi-threaded scalability: having one core turbo up with such a wide design is fine, but when you have 2 or 4 or more, you inevitably need to drop the frequencies into the ground. That's fine for a phone, it fits the typical usage pattern, but on a desktop not so much.



M2B said:


> Mr. Developer.



Cut him some slack, he said he's a web developer, this stuff is not exactly within his area of expertise.


----------



## Searing (Aug 18, 2020)

Vya Domus said:


> No, you should be the one educating yourself on what's feasible and what isn't.
> 
> 
> 
> ...



Every forum is full of ignorant people like you just ignoring every benchmark (it is Geekbench 5 now, and there are many other benchmarks you can use), ignoring every expert, Anandtech included, and ignoring actual real-world results. World's fastest computer is ARM-based? Ignore it. Amazon offering ARM server instances? Ignore it. This is why the world passes some people by: they just can't accept that something has changed. There is an interesting question about psychology here: why does ARM being fast bother you? Why do you not accept basic reality? ARM is just an ISA. The 68000 was fast, PowerPC was fast, x86 was fast, ARM is fast; it is just an ISA.

"Come on. Sustained means nothing, right, the one thing that you know Apple's chips are horrible at in terms of scalability means nothing. Got it." Any chip can run with sustained performance with a bit more cooling and power, yes it means nothing. We are comparing the CPUs, not the form factor.


----------



## Vya Domus (Aug 18, 2020)

Searing said:


> Every forum is full of ignorant people like you just ignoring every benchmark (it is Geekbench 5 now, and there are many other benchmarks you can use), ignoring every expert Anandtech included, ignoring actual real world results. World's fastest computer is ARM based? Ignore it. Amazon offering ARM server instances? Ignore it. This is why the world passes some people by. They just can't accept that something has changed.



You mean Anandtech, the site that has exposed several times through their tests how little worth Geekbench has as an accurate benchmark: https://www.anandtech.com/show/12520/the-galaxy-s9-review/4



> It’s when we try to compare the Exynos 9810 versus the Snapdragon 845 where we start to see issues when trying to reconcile the fact that the Galaxy S9 is powered by both SoCs. With its new microarchitecture and significant silicon budget, *the Exynos 9810 only manages a 22% and 17% lead over the Snapdragon 845, a stark contrast to the much larger discrepancy that we had previously analysed in GeekBench 4 measured coming in at 37% and 68% for integer and floating point workloads*.



You know what they say, you can lead an Apple fanboy to water ...



Searing said:


> We are comparing the CPUs, not the form factor.



Because it's the form factor that gives you performance, not the CPU itself and its underlying architecture? What are you smoking?

The chip has to be designed to be scalable under an increased power envelope. The fact that you believe you can put any chip out there under better cooling and more power and it will just magically run faster shows how primitive your logic and understanding is on the matter.



Searing said:


> There is an interesting question about psychology here, why does ARM being fast bother you? Why do you not accept basic reality? ARM is just an ISA, 68000 was fast, PowerPC was fast, x86 was fast, ARM was fast, it is just an ISA.



ARM is an ISA, as you said; it can't be "fast". Careful there, you preach what you don't seem to believe yourself. The only thing related to ARM that I mentioned are its vanilla designs, which are fast _in a mobile device_.


----------



## M2B (Aug 18, 2020)

Searing said:


> That's not how it works. Again, developer here. You don't go to double the power consumption using both cores, there's no complete power gating here. That's why I showed you the 6W max for everything in the GPU test.
> 
> The iPhone doesn't turn off one core and run at half power when you do the test. In single-core performance it takes less than 6 W and beats the 9900K in integer performance already. In multicore it throttles down slightly and runs two cores at about the same power. And anyway, we are talking about the 5 nm EUV A14; that one will beat Intel easily. Give it higher clock speeds à la a laptop or desktop form factor and it will most likely beat Intel in FP also.
> 
> Don't quote Anandtech and then ignore where they say Apple is faster in integer than the 9900k already and that was a year ago. I have a Epyc 24 core server, a 10900 development machine and a Macbook Pro and Ryzen 4000 latop in the house, and iPad Pro and iPhone (work pays for stuff). I have no problems with performance. The question is why do you believe it isn't fast? Anandtech, every benchmark, every program, and tons of youtube videos are out there, go have fun.



That's not how AnandTech measures power.

Take a look at this:




As you can see, the measured power draw of a small Thunder core inside the iPhone 11 is around 0.3 W, which literally proves my point: that 0.3 W figure can't be for the whole SoC, RIGHT? It's just a single Thunder core. The same is true of their big-core graph.


----------



## TheoneandonlyMrK (Aug 18, 2020)

Searing said:


> We are talking about the chips, not the form factor. Stick them in an iPad the sustained performance is higher. Sustained means nothing, you just stick the A14X in a laptop form factor and it would be sustained. They are only at 6W and 2.6ghz and you think they can't get higher. Ok.... *backs away slowly*
> 
> ARM is an ISA it has nothing to do with how fast the CPU can be.
> 
> ...


You quoted the part where I mentioned chip technology, yet you only commented on the throwaway form-factor remark. Go back and have another go at your smart-ass way of beating the laws of nodes: the node and the design decide speed. What is Bionic designed for again? Is it speed, purely speed? Speed costs transistor budget and energy, simple as that. Read up on chip design; you can't make a fork into a spoon.

Stick them in an iPad and you've got a good web browser, yes, but do any gaming, 3D modelling, engineering or simulation work on it and it will lag way behind that 9900K, which, to be fair, couldn't possibly sit in that form factor.

Sustained means nothing to your perception. That 5 nm chip is made for the platform it's in; stick it in a laptop and it will clock about the same. Its silicon limit is what it is.


----------



## Searing (Aug 18, 2020)

hahahaha like I said, something about Apple brings out the ignorant haters. Have fun spewing nonsense. There's so much in the last 3 comments, no point. You didn't read or understand the earlier comments anyways.

Suddenly Apple has a 2.6 GHz silicon limit, you can't do anything except light work, ARM is an ISA that can't be fast, blah blah (despite the world's fastest computer being ARM-based), still trying to suggest Apple uses more power than they do when you can literally measure it at any time, pretending AnandTech didn't say the A13 was shockingly fast and performant.

No benchmark represents all performance; each one compresses complicated information into a single statistic. (I guess you'd hate my master's education in mathematics and statistics, since you hate my developer experience too. By the way, I hate Macs, but I have to make all my web code work with Apple devices, iOS in particular.) Geekbench 5 is one. SPEC2006 is one. How long it takes to export a video on an iPad (faster than my PC, since it uses dedicated hardware) is also one. Go find 100 benchmarks that all show Apple CPUs at the top in performance efficiency and come back here. That was the A13; wait for the A14X.

Apple is leaving Intel behind for a reason. My 10900 is fast, but nothing special. Same cores from 4 years ago.


----------



## TheoneandonlyMrK (Aug 18, 2020)

Searing said:


> hahahaha like I said, something about Apple brings out the ignorant haters. Have fun spewing nonsense. There's so much in the last 3 comments, no point. You didn't read or understand the earlier comments anyways.
> 
> Suddenly Apple has a 2.6ghz silicon limit, you can't do anything except light work, ARM is an ISA that can't be fast blah blah (despite the world's fastest computer being based on ARM), still trying to suggest Apple uses more power than they do, when you can literally measure it at any time, pretending Anandtech didn't say the A13 was shockingly fast and performant.
> 
> ...


A fine example you raised: "How long it takes you to export a video on iPad (faster than my PC since it uses dedicated hardware) also one. Go find 100 benchmarks that all show Apple *CPU*s at the top in performance efficiency and come back here. That was the A13, wait for the A14X"

Your example uses a coprocessor, a special accelerator, something others also do, and certainly a hardware feature that helps with efficiency and performance in daily tasks like browsing. But accelerators are not the CPU, are they? That Apple wins on efficiency is a given. As I said, put to task, as I think every desktop processor should be its entire life, the 9900K gets way more work done. As for Ryzen 4800X/5800X, who knows.

Personally I use everything but Apple devices, but I have used them, and there's no hate here, just understanding. I see their wares, I like the OS, but no: Apple simply cannot do all that I want from one device, quickly.

Funny perspectives, aren't they? Honestly it doesn't matter to me; I think they will sell well and perform well for their target market, mostly,
and few will complain, because whoever gets one of those devices in the first place knew what they wanted from it and what they'd do with it. Apple is built on consistency, ease and reliability, with panache.

You still haven't told us how you get both cutting-edge nodes and high clocks, either.


----------



## Vya Domus (Aug 18, 2020)

Searing said:


> ARM is an ISA that can't be fast



Unironically believing that an ISA is what determines whether something is fast is by far the dumbest, most nonsensical idea in all of the posts here. You've outclassed us all.


----------



## Searing (Aug 18, 2020)

Vya Domus said:


> Unironically believing that an ISA is what determines if something is fast or not is by far the dumbest and nonsensical idea from all of the posts here.



I know. He said that, not me, I was quoting him.


----------



## Vya Domus (Aug 18, 2020)

Searing said:


> I know. He said that, not me, I was quoting him.



I can't find that quote anywhere. 



Searing said:


> ARM is an ISA that can't be fast blah blah *(despite the world's fastest computer being based on ARM*)



Implying there is a correlation between ISA and speed.

You did the same a couple of posts above :



Searing said:


> ARM is just an ISA, 68000 was fast, PowerPC was fast, *x86 was fast, ARM was fast*, it is just an ISA.


----------



## Searing (Aug 18, 2020)

Vya Domus said:


> I can't find anywhere that quote.
> 
> 
> 
> ...



Now you've really gone off the deep end. I was stating things in opposition to him saying ARM can only be slow, and you are actually quoting me pretending I said the opposite. I SAID the ISA doesn't matter. I get it, you want to win every argument. I post quotes from reviewers and experts, I show slides; no matter, you just keep going with your own thoughts. Good day, bye.


----------



## Vya Domus (Aug 18, 2020)

Searing said:


> I was stating things in opposition to him saying ARM can only be slow.



Yeah, by saying that it's fast instead. Nice 200 IQ backpedaling, bro.



Searing said:


> I post quotes from reviewers or experts or show slides, no matter.



So did I, except you ignored them because it went against your fantasy world.



Searing said:


> Now you've really crossed in to the deep end.



I'm glad it took this long; you've been off the deep end since your very first words, with your avid fanboyism. Nice try though.


----------



## TheoneandonlyMrK (Aug 19, 2020)

dragontamer5788 said:


> Lets actually talk Geekbench for a sec. I know Geekbench3 was highly flawed, but why does everyone think that Geekbench4 is bad?
> 
> Here's Geekbench4's workload: https://www.geekbench.com/doc/geekbench4-cpu-workloads.pdf
> 
> ...


You realise all modern processors are designed for turbo and race-to-idle operation.
Any bench shorter than the Tau value (the turbo window) isn't worth shit regardless, IMHO. You can gauge performance to a degree, but it's not the whole picture, and that's Geekbench for you: short bursts, a test designed for phones and light use cases.
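The burst-versus-sustained distinction is easy to demonstrate: time the same fixed workload back to back and compare the first iterations (inside the turbo window) with the later ones. A minimal Python harness, with the workload size and iteration count chosen purely for illustration:

```python
import time

def spin(n=200_000):
    # Fixed CPU-bound work: a simple sum of squares.
    s = 0
    for i in range(n):
        s += i * i
    return s

def run(iterations=10):
    """Time the same workload repeatedly. On a thermally limited chip
    the later iterations slow down once the turbo budget is spent;
    a short burst benchmark only ever measures the first few."""
    times = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        spin()
        times.append(time.perf_counter() - t0)
    return times

if __name__ == "__main__":
    ts = run()
    print(f"first: {ts[0]*1e3:.1f} ms  last: {ts[-1]*1e3:.1f} ms")
```

On a desktop with headroom the first and last timings will barely differ; on a passively cooled device run long enough, the gap is the throttling the post is talking about.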


----------



## Darmok N Jalad (Aug 19, 2020)

I don’t really see how it’s that far of a stretch at this point. This A14X is Apple’s latest and greatest on a 5nm node, getting to scale up in power, versus a tired, Skylake-based chip on a very old 14nm node and crammed down to its thermal minimum. Now if the A14X matched the 9900K, that would be much harder to believe. I don’t think Apple would make this move unless they had something solid lined up. I guess it won’t be too much longer before we find out, and benchmarks beyond Geekbench will be available on ARMacOS.


----------



## dragontamer5788 (Aug 19, 2020)

theoneandonlymrk said:


> You realise all modern processor's are designed for Turbo, and dash to rest operation.



And I realize that this is a useful feature for rendering webpages on cellphones. And therefore, a benchmark that measures this behavior (especially since rendering webpages is probably 90% of what cellphones and laptops do all day) is a critical measurement of performance for the modern consumer.



> Any bench shorter than the Tau value isn't worth shit regardless IMHO



Explain. Why? The situation is clearly common. I can look at "top" or Windows Task Manager and see that my utilization is damn near 0% most of the time.

We hardware nerds love to pretend that we're running our systems at high utilization with high efficiency, as if we were bitcoin miners or Fold@Home geeks all the time. But that's just not the reality of the day-to-day. Even programming at work has started to get offloaded to dedicated "build servers" and continuous integration facilities, off of the desktop / workstation at my workdesk.

Browsing HTML documentation for programming is hugely important, and a lot of that is a "turbo, race to idle" kind of workload. The chip idles, then suddenly the HTML5 DOM shows up, maybe with a bit of JavaScript to run before reaching its final form. Take this webpage, for instance: this forum page is ~18.8 KB of HTML5, small enough to fit inside the L1 cache of all of our machines.

That's the stuff Geekbench is measuring: opening PDF documents, browsing HTML5, parsing DOM, interpreting JPEG images. It seems somewhat realistic to me, with various internal webpages constantly open at my work computer and internal wikis I'm updating constantly.
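As a rough illustration of that kind of burst workload, Python's standard-library HTML parser can be timed on a small synthetic page (the page contents and sizes here are invented for the example, not anything Geekbench actually runs):

```python
import time
from html.parser import HTMLParser

class TagCounter(HTMLParser):
    """Counts start tags as a stand-in for DOM construction work."""
    def __init__(self):
        super().__init__()
        self.tags = 0
    def handle_starttag(self, tag, attrs):
        self.tags += 1

# A synthetic ~20 KB page: small enough to sit in cache, like the
# forum page described above.
page = "<html><body>" + "<div><p>post</p></div>" * 500 + "</body></html>"

t0 = time.perf_counter()
p = TagCounter()
p.feed(page)
elapsed = time.perf_counter() - t0
print(f"parsed {p.tags} start tags in {elapsed*1e3:.2f} ms")
```

The whole run finishes in milliseconds: idle, a short burst of branchy parsing, idle again, which is exactly the usage pattern being argued about.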

---------

I don't even like Apple. I don't own a single Apple product. But you're going to have to explain the flaws of Geekbench if you really want to support your discussion points.


----------



## TheoneandonlyMrK (Aug 19, 2020)

dragontamer5788 said:


> And I realize that this is a useful feature for rendering webpages on cellphones. And therefore, a benchmark that measures this behavior (especially since rendering webpages is probably 90% of what cellphones and laptops do all day) is a critical measurement of performance for the modern consumer.
> 
> 
> 
> ...


You ignored my point because it doesn't fit your perspective, but my point is quite simple and already adequately explained.

I also said it's my opinion, sooo.

And my PC has sat at 80-100% load for years now, as I also said.

You know what hardly gets any abuse? My phone; it's surfed on, and it fits your skewed perspective.

Geekbench on it makes sense.


----------



## Vayra86 (Aug 19, 2020)

Searing said:


> Every forum is full of ignorant people like you just ignoring every benchmark (it is Geekbench 5 now, and there are many other benchmarks you can use), ignoring every expert Anandtech included, ignoring actual real world results. World's fastest computer is ARM based? Ignore it. Amazon offering ARM server instances? Ignore it. This is why the world passes some people by. They just can't accept that something has changed. There is an interesting question about psychology here, why does ARM being fast bother you? Why do you not accept basic reality? ARM is just an ISA, 68000 was fast, PowerPC was fast, x86 was fast, ARM was fast, it is just an ISA.
> 
> "Come on. Sustained means nothing, right, the one thing that you know Apple's chips are horrible at in terms of scalability means nothing. Got it." Any chip can run with sustained performance with a bit more cooling and power, yes it means nothing. We are comparing the CPUs, not the form factor.



You're saying it yourself and others have said it too, you just fail to realize it.

'PowerPC was fast'... it even found its way into a PlayStation, yet today you don't see a single one in any gaming or consumer machine. In enterprise, though? Yep. It's a tool that works best in that setting.

'ARM is fast'... correct. We have ThunderX chips that offer shitloads of cores and can use lots of RAM. They're not the ones we see in a phone though. We also have Apple's low-core-count, single-task optimized mobile chips. You won't see those in an enterprise environment. That's not 'ignoring it', it is separating A from B correctly.

Sustained means nothing in THE USE CASE Apple has selected for these chips. That is where all chips are going: more specialized, more specific to optimal performance in a desired setting. Even Intel's own range of CPUs, even in all those years they were 'sleeping', has pursued that goal. They are still re-using the same core design in a myriad of power envelopes and make it work top to bottom, in enterprise and in laptops, and they've been trying to get there on mobile. The latter is the ONE AREA where they cannot seem to succeed, a bit like Nvidia's Tegra designs, which are always somewhat too high-power: they perform well but are too bulky to be as lean as ARM under 5 W. End result: Nvidia still hasn't gotten traction with its ARM CPUs in any mobile device.

In the meantime, Apple sees Qualcomm and others develop chips towards the 'x86 route': higher core counts, more and more hardware thrown at ever less efficient software, expanded functions. That is where the direction of ARM departs from Apple's overall strategy: they want optimized hardware-and-software systems. You seem to fail to make that distinction, thinking Apple's ARM approach is 'the ARM approach'. It's not; the ISA is young enough to allow fundamentally different design decisions.

Like @theoneandonlymrk said eloquently: Phillips head? Phillips screwdriver.



dragontamer5788 said:


> And I realize that this is a useful feature for rendering webpages on cellphones. And therefore, a benchmark that measures this behavior (especially since rendering webpages is probably 90% of what cellphones and laptops do all day) is a critical measurement of performance for the modern consumer.
> 
> 
> 
> ...



There you go, and that is why I said Apple is going to offer you terminals, not truly powerful devices. Intel laptop CPUs are not much different: very bursty, and slow as shit under prolonged loads. I haven't seen a single one that doesn't throttle like mad after a few minutes. They handle it decently... but sustained performance isn't really there.

I will underline this again
Apple found a way to use ARM to guarantee their intended user experience.
_This is NOT a performance guarantee. It's an experience guarantee._

You need to place this in the perspective of how Apple phones didn't really have true multitasking while Android did. Apple manages its scheduler in such a way that it gets the performance when the user demands it. They map out what a user will be looking at and make sure to show him something that doesn't feel like waiting on a load. A smooth animation (that takes almost a second), for example, is also a way to remove the perception of latency or lacking performance. CPU won't burst? Open the app, show an empty screen, fill in the data points later. It's nothing new. Websites do it too, irrespective of architecture; the newer frameworks especially are full of this crap. Very low information density and large sections of plain color are not just a design style, they're a way to cater to mobile limitations.

If you use Apple devices for a while, take a long look at this and monitor load and you can see how it works pretty quickly. There is no magic sauce.


----------



## Vya Domus (Aug 19, 2020)

Vayra86 said:


> Apple found a way to use ARM to guarantee their intended user experience.
> _This is NOT a performance guarantee. Its an experience guarantee._



Indeed, transistor-budget-wise a considerable chunk of their SoCs is just dedicated signal processors for different purposes. To put things into perspective, it has 8.5 billion transistors, as many as an average GPU... it had better be fast. 

Not to mention that they don't just use large L1 caches; everything related to on-chip memory is colossal in size. And again, anyone can do that; it's not a merit of an ARM design or otherwise.


----------



## dragontamer5788 (Aug 19, 2020)

theoneandonlymrk said:


> You ignored my point because it doesn't fit your perspective





Vayra86 said:


> There you go and that is why I said, Apple is going to offer you terminals, not truly powerful devices. Intel laptop CPUs are not much different, very bursty and slow as shit under prolonged loads. I haven't seen a single one that doesn't throttle like mad after a few minutes. They do it decently... but sustained performance isn't really there.



I'm not sure if you guys know what I know.

Lets take a look at a truly difficult benchmark. One that takes over 20 seconds so that "Turbo" isn't a major factor.






SMT Solving on an iPhone — "Why buy an expensive desktop computer when your iPhone is a faster SMT solver?" (www.cs.utexas.edu)
Though 20 seconds is slow, the blog post indicates that they ran the 3-SAT solver continuously, over and over, so the iPhone was behaving at its thermal limits.  The Z3 solver tackles 3-SAT, an NP-complete problem. At this point, it has been demonstrated that Apple's A12 has faster L1, L2, and memory performance than even Intel's chips in a very difficult, single-threaded task.

----------

Apple's chip team has demonstrated that its small 5W chip is in fact pretty good at some very difficult benchmarks. It shouldn't be assumed that iPhones are slower anymore. They're within striking distance of Desktops in single-core performance in some of the densest compute problems.
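For readers unfamiliar with the workload: SAT asks whether any true/false assignment satisfies a set of clauses. The brute-force sketch below only illustrates the *shape* of the problem; Z3 itself uses far smarter techniques (CDCL, clause learning, bit-blasting) and handles instances enumeration never could:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Try all 2^n assignments. A literal is +v (variable v true) or
    -v (variable v false). Note the workload's character: branch-heavy
    integer work, no floating point, no dedicated accelerator to
    offload it to."""
    for bits in product((False, True), repeat=n_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in cl)
               for cl in clauses):
            return bits
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [(1, -2), (2, 3), (-1, -3)]
print(brute_force_sat(clauses, 3))
```

Workloads like this stress the branch predictors, integer units, and caches directly, which is why the blog post's result says something about the CPU core rather than about any coprocessor.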


----------



## TheoneandonlyMrK (Aug 19, 2020)

dragontamer5788 said:


> I'm not sure if you guys know what I know.
> 
> Lets take a look at a truly difficult benchmark. One that takes over 20 seconds so that "Turbo" isn't a major factor.
> 
> ...


20 seconds, nuff said. I'm out, so I'll leave you to stroke Apple's ego.

Another great example of a waste of time.

A "true, great benchmark"... 20 seconds.

Single-core, dense compute problems, lmfao.


----------



## dragontamer5788 (Aug 19, 2020)

SMT solvers, like Z3, solve a class of NP-complete problems with pretty dense compute characteristics. Or are you unaware of what Z3 is?

Or are you unaware of what "dense compute" means? HPCG is sparse compute (memory intensive), while Linpack is dense (CPU intensive). Z3 is probably in the middle: more dense than HPCG but not as dense as Linpack (I don't know for sure; someone may correct me on that).

--------

When comparing CPUs, it's important to choose denser compute problems, or else you're just testing the memory interface (e.g. the STREAM benchmark is as sparse as you can get and doesn't really test anything aside from your DDR4 clock rate). I'd assume Z3 satisfies the requirements of dense compute for the purposes of a valid comparison between architectures. But go too dense and GPUs win (which is *also* unrealistic: Linpack is too dense to match anyone's typical computer use; heck, it's too dense to be practical even for supercomputers).
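The sparse-versus-dense distinction can be put in back-of-envelope numbers as arithmetic intensity (flops per byte of memory traffic). The figures below assume double precision and, for the matmul, ideal cache reuse, so they are best-case sketches rather than measurements:

```python
def intensity_triad(n):
    """STREAM triad a[i] = b[i] + s*c[i]: 2 flops per element,
    3 eight-byte doubles moved per element (read b and c, write a)."""
    return (2 * n) / (3 * 8 * n)

def intensity_naive_matmul(n):
    """n x n matmul: 2*n^3 flops; with ideal cache reuse the three
    matrices each cross the memory bus once, 3 * n^2 * 8 bytes."""
    return (2 * n**3) / (3 * n**2 * 8)

print(f"STREAM triad:      {intensity_triad(10**6):.3f} flops/byte")
print(f"matmul (n = 1000): {intensity_naive_matmul(1000):.1f} flops/byte")
```

Triad sits well below one flop per byte (pure memory-interface test), while matmul grows with n, which is the sense in which Linpack-style work is "dense" and STREAM is "sparse".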


----------



## TheoneandonlyMrK (Aug 19, 2020)

dragontamer5788 said:


> SMT Solvers, like Z3, solve a class of NP complete problems with pretty dense compute characteristics. Or are you unaware of what Z3 is?
> 
> Or are you unaware what "dense compute" means? HCPG is sparse compute (memory intensive), while Linpack is dense (cpu intensive). Z3 probably is in the middle, more dense than HCPG but not as dense as Linpack (I don't know for sure. Someone else may correct me on that).
> 
> ...


When comparing CPUs, it's important to stick to the same CPU you're comparing, and not to revert to a two-generation-older CPU when your argument is failing: he tested against a 7700K.

From the notes:

"This benchmark is in the QF_BV fragment of SMT, so Z3 discharges it using bit-blasting and SAT solving.
This result holds up pretty well even if the benchmark runs in a loop 10 times—the iPhone can sustain this performance and doesn’t seem thermally limited.1 That said, the benchmark is still pretty short.
Several folks asked me if this is down to non-determinism—perhaps the solver takes different paths on the different platforms, due to use of random numbers or otherwise—but I checked fairly thoroughly using Z3’s verbose output and that doesn’t seem to be the case.
Both systems ran Z3 4.8.1, compiled by me using Clang with the same optimization settings. I also tested on the i7-7700K using Z3’s prebuilt binaries (which use GCC), but those were actually slower.
What’s going on?
How could this be possible? The i7-7700K is a desktop CPU; when running a single-threaded workload, it draws around 45 watts of power and clocks at 4.5 GHz. In contrast, the iPhone was unplugged, probably doesn’t draw 10% of that power, and runs (we believe) somewhere in the 2 GHz range. Indeed, after benchmarking I checked the iPhone’s battery usage report, which said Slack had used 4 times more energy than the Z3 app despite less time on screen.

Apple doesn’t expose enough information to understand Z3’s performance on the iPhone, 





He said earlier that it uses only one core on Apple. Really leveraging what's there, eh? Or the light load might sustain a boost better precisely because of that single-core use, but this leads to my point B.

He doesn't know how it's actually running on the Apple chip, so he can't know whether it is leveraging accelerators to hit that target.

Still, a 7700K versus an A12: 14 nm+ (only one plus, not mine) versus 7 nm.

That tells no one much about how the A14 would compare to a CPU that's out today and not EOL, never mind the next-generation Ryzen and Cove cores it would face.

All in fail.

Do I know what Z3 is? Wtaf does it matter.

We are discussing CPU performance, not coder pawn.

I don't use it; 99.9% of users also don't. I am aware of it, though, and aware of the fact that it too is irrelevant, like Geekbench.


----------



## dragontamer5788 (Aug 19, 2020)

theoneandonlymrk said:


> He doesn't know how it's actually running on the apple, so can't know if it is leveraging accelerator's to hit that target.



Uh huh.



> Do I know what Z3 is, wtaf does it matter.



Well, given your statement above, I'm pretty sure you don't know what Z3 is. Z3 solves NP-complete optimization problems: Knapsack, Traveling Salesman, etc. There's no "accelerator" chip for this kind of problem, not yet anyway. So you're welcome to take your foot out of your mouth now.

Z3 wasn't even made by Apple. It's a Microsoft Research AI project that happens to run really, really well on iPhones. (It's open source, too. Feel free to point out where in the GitHub code this "accelerator chip" is being used. Hint: you won't find it; it's a pure C++ project with some Python bits.)



> We are discussing CPU performance not , coder pawn.



The CPU performance of the iPhone in Z3 is surprisingly good; I'd like to see you explain why that is the case. There's a whole bunch of other benchmarks the iPhone does well on too. But unlike Geekbench, something like Z3 actually has coder ethos as a premier AI project, so you're not going to be able to dismiss it as easily as Geekbench.


----------



## TheoneandonlyMrK (Aug 19, 2020)

dragontamer5788 said:


> Uh huh.
> 
> 
> 
> ...


Apples to oranges.

Can you explain where I said I was an expert in your coding speciality, to have put my foot in my mouth!?

I'm aware of f£#@£g Bigfoot, but do I know many facts about him?


So, getting back to the 9900K: it's 10-20% better than the 7700K you're on about now.

And the Cove and Zen 3 cores are a good 17% better again (partly alleged).
That's at least 30% on the 7700K, and Intel especially works hard to optimise for some code types.

And you are still at it with seconds-long timed benches.

If I do something on a computer that takes time to run, but it's a one-off and only a few seconds, I wouldn't count it as a workload at all.

Possibly a step in a process, but not a workload.

I have a workload or two, and they don't finish in 20 seconds.

Recent changes to GPU architecture and boost algorithms put most GPU benchmarks, and some people's benchmarking, in the same light for me. Now, tests have to be sustained for a few minutes minimum or they're not good enough for me.

I'm happy to just start calling each other names if you want, but best PM; the mods don't like it.


----------



## Vya Domus (Aug 19, 2020)

theoneandonlymrk said:


> How could this be possible? The i7-7700K is a desktop CPU; when running a single-threaded workload, it draws around 45 watts of power and clocks at 4.5 GHz.



Compilers, even with the same flags, can generate differently optimized code for each ISA. CPUs also have many quirks: for instance, while 64-bit floating-point multiplies might be faster on one architecture, divides might be painfully slower compared to others (I give this example because divides are notoriously slow). That benchmark looks intensive in terms of integer arithmetic; it's well known that Apple has a wide integer unit, and the cache advantage has been talked about to death already. There is nothing impressive about it. Run some vectorized linear algebra with 10^4 x 10^4 matrices and I am sure the Intel CPU would be faster by quite a margin.
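The multiply-versus-divide quirk can be probed with a quick `timeit` sketch. Note the caveat up front: in CPython the interpreter overhead largely masks the hardware FMUL/FDIV latency gap, so any ratio you see is illustrative at best:

```python
import timeit

# On most CPUs a hardware floating-point divide has several times the
# latency of a multiply, but CPython's per-bytecode overhead dominates
# these timings, so treat the printed ratio as a rough indication only.
mul = timeit.timeit("x * 1.0000001", setup="x = 1.5", number=1_000_000)
div = timeit.timeit("x / 1.0000001", setup="x = 1.5", number=1_000_000)
print(f"mul: {mul:.3f}s  div: {div:.3f}s  ratio: {div/mul:.2f}")
```

A compiled-language microbenchmark would be needed to expose the real latency difference, which is exactly the post's point about per-architecture quirks.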

It's a waste of time to look at isolated problems, because that's not what is found in the real world. We don't solve SMT, we don't run linear algebra all the time; most user software is mixed in terms of workloads and doesn't follow patterns regular enough for caches to be very effective.


----------



## TheoneandonlyMrK (Aug 19, 2020)

Vya Domus said:


> Compilers, even with the same flags can generate different optimized code for each ISA. Also CPUs have many quirks, for instance while 64bit floating point multiplies might be faster on some architecture, divides might be painfully slower compared to others and such (I gave this example because divides are notoriously slow). That benchmark looks to be intensive in terms of integer arithmetic, it's well known Apple has a wide integer unit and the cache advantage has been talked about to death already. There is nothing impressive about that, you can run some vector linear algebra with 10^4 x 10^4 matrices and I am sure the Intel CPU would be faster then by quite a margin.
> 
> It's a waste of time to look at independent problems, because that's not what is found in the real world. We don't solve SMT, we don't run linear algebra all the time, most user software is mixed in terms of workloads and doesn't follow regular patterns such that caches are very effective.


That was part of the article he linked; it was in quotation marks. I agree with you, though.
And that's similar to my point: one 20-second workload is no workload at all.

And the comparisons are all over the show.
I would argue the performance of a 7700K is hardcore irrelevant; few are buying quad cores to game on now, even Intel stopped pushing quads. So apparently the A12 (one core) = 7700K = 9900K = 11900? Shrug.


----------



## Searing (Aug 19, 2020)



Darmok N Jalad said:


> I don’t really see how it’s that far of a stretch at this point. This A14X is Apple’s latest and greatest on a 5nm node, getting to scale up in power, versus a tired, Skylake-based chip on a very old 14nm node and crammed down to its thermal minimum. Now if the A14X matched the 9900K, that would be much harder to believe. I don’t think Apple would make this move unless they had something solid lined up. I guess it won’t be too much longer before we find out, and benchmarks beyond Geekbench will be available on ARMacOS.



Yeah, the anti-ARM people are basically like "it isn't fast, because I say it isn't"... I'm done arguing; wait for macOS on ARM and they'll see. They'll probably find a way to ignore the 100 other ways it is fast and focus on their hobby horse of dissing a benchmark or two, even though they are perfectly able to get an iPad Pro and see it do all sorts of tasks at high speed. My PC hasn't gotten faster in almost 5 years now... I basically don't use more than 8 threads for the most part, so the 6700K is about the same as my 10900 machine (you can watch a lot of ~30% utilization on a 10900). I'm happy to see Apple actually speed things up.


----------



## TheoneandonlyMrK (Aug 19, 2020)

Searing said:


> Yeah the anti-ARM people are basically like "it isn't fast, because I say it isn't" ... i'm done arguing, wait for MacOS ARM and they'll see. They'll probably find a way to ignore the 100 other ways it is fast and focus on their hobby horse of dissing a benchmark or two.
> 
> ...


That's daft; point to someone other than you that's said ARM is not fast.


That's exactly the point: some of us can use an iPad Pro too, many have, and put it back down, hence the opinions.

You pointed to two short burst benchmarks as your hidden, unseen-by-plebs truth?! One of which was so obscure you think you got one over on me; yeah, 98% of nerds haven't run that bench, FFS. You think I couldn't show a few benches with PCs beating iPhone chips?

I wouldn't waste my time. I have said before that I have no doubt these have their place and will sell well, but they are not taking the performance crown, and real work will stay on x86 or PowerPC.


----------



## dragontamer5788 (Aug 19, 2020)

Vya Domus said:


> Compilers, even with the same flags can generate different optimized code for each ISA. Also CPUs have many quirks, for instance while 64bit floating point multiplies might be faster on some architecture, divides might be painfully slower compared to others and such (I gave this example because divides are notoriously slow). That benchmark looks to be intensive in terms of integer arithmetic, it's well known Apple has a wide integer unit and the cache advantage has been talked about to death already. There is nothing impressive about that, you can run some vector linear algebra with 10^4 x 10^4 matrices and I am sure the Intel CPU would be faster then by quite a margin.



I agree with everything you said above.



> It's a waste of time to look at independent problems, because that's not what is found in the real world. We don't solve SMT, we don't run linear algebra all the time, most user software is mixed in terms of workloads and doesn't follow regular patterns such that caches are very effective.



I disagree with your conclusion however.

A programmer working on FEA (Ex: simulated car crashes), Weather Modeling, or Neural Networks will constantly run large matrix-multiplication problems. Over, and over, and over again for days or months. In these cases, GPUs, and wide-SIMD (like 512-bit AVX on Intel or A64FX) will be a huge advantage. If GPUs are a major player, you still need a CPU with high I/O (ie: EPYC or POWER9 / OpenCAPI) to service the GPUs fast enough.
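For illustration, the kernel those FEA/weather/NN workloads repeat endlessly is a dense matrix multiply. A naive pure-Python sketch (illustrative only; real codes use BLAS, GPUs, or AVX-512, never this) shows the long, regular multiply-accumulate runs that wide SIMD exploits:

```python
def matmul(a, b):
    """Naive dense matrix multiply: the inner loop is one long run of
    multiply-accumulate operations, exactly the regular pattern that
    512-bit SIMD units and GPUs are built to chew through."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for p in range(k):
                acc += a[i][p] * b[p][j]  # fused multiply-add candidate
            out[i][j] = acc
    return out

# Tiny sanity check; FEA/weather/NN codes run this same kernel at n ~ 10^4.
a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(a, b))  # [[19.0, 22.0], [43.0, 50.0]]
```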

A programmer working on CPU design will constantly run verification / RTL proofs, which are coded very similarly to Z3 or other automated solvers. (And unlike matrix multiplication, Z3 and other automated-logic code is highly divergent and irregular. It's very difficult to write multithreaded code and load-balance the work between multiple cores. There's a lot of effort in this area, but from my understanding, CPUs are still > GPUs in this field.) Strangely enough, the A12 is one of the best chips here, despite being a tiny 5 W processor.

A programmer working on web servers will run RAM-constrained benchmarks, like Redis or Postgresql. (And thus POWER9 / POWER10 will probably be the best chip: big L3 cache and huge RAM bandwidth).

--------

We have many computers to choose from. We should pick the computer that best matches our personal needs. Furthermore, looking at specific problems (like Z3 in this case), gives us an idea of why the Apple A12 performs the way it does. Clearly the large 128kB L1 cache plays to the A12's advantage in Z3.


----------



## TheoneandonlyMrK (Aug 19, 2020)

dragontamer5788 said:


> I agree with everything you said above.
> 
> 
> 
> ...


So of all those examples, the mighty A12 can almost keep up with a two-year-old Intel quad in one, while any of the other brands of CPU you mentioned do them all pretty well (to say the least, in some cases), and because of these 20 seconds of greatness this proves Apple are nearly there beating Intel...

I don't see it.

Damn the irony:

"We have many computers to choose from. We should pick the computer that best matches our personal needs."

Philips head = Philips screwy.

How many devs are on this? What proportion of computing device users do they make up, 0.00021%? Is it mostly just a niche?


----------



## dragontamer5788 (Aug 19, 2020)

theoneandonlymrk said:


> How many Devs are on this ? What proportion of computing device users do they make up 0.00021% is it mostly just a niche?



I mean, if we're talking about applicability to the largest audience, Geekbench is testing HTML5 DOM traversals and Javascript code. (Stuff that SPECInt, Drystone, and other benchmarks fail to test for).

Wanna go back to discussing Geekbench Javascript / Web tests? That's surely the highest proportion of users. In fact, everyone browsing this forum is probably performing AES-encryption/decryption (for HTTPS), and HTML5 DOM rendering. Please explain to me how such an AES-Decryption + HTML5 DOM test is unreasonable or inaccurate.
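As a rough sketch of that workload mix (purely illustrative; Geekbench's actual tests are native code, and Python's stdlib has no AES, so SHA-256 stands in for the crypto pass):

```python
import hashlib
import xml.etree.ElementTree as ET

# Toy stand-in for the workload mix described above: parse a small
# HTML-like tree, walk every node, then hash the payload. (SHA-256 is
# a stand-in for AES here, since Python's stdlib has no AES; this only
# shows the shape of the work, not its real cost.)
doc = "<html><body>" + "<div><p>hello</p></div>" * 1000 + "</body></html>"
tree = ET.fromstring(doc)

nodes = sum(1 for _ in tree.iter())                 # DOM traversal
digest = hashlib.sha256(doc.encode()).hexdigest()   # crypto pass

print(nodes, digest[:8])
```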



> the mighty A12 can almost keep up with a two year old intel quad in one,



Your hyperbole is misleading. The A12 was 11% faster than the Intel chip in said Z3 test. I'm discussing a situation (Z3) where an Apple chip at 5 W in an iPhone form factor is outright beating a full-sized *91 W desktop* processor.

Yes, it's a comparison between apples and peanuts (and ironically, Apple is the much smaller peanut in this analogy). But Apple is soon to launch a laptop-class A14 chip, and some of us are trying to read the tea leaves for what that means. The A14 is almost assuredly built on the same platform as the A12. Learning what the future A14 laptop will be good or bad at will be an important question as A14-based laptops hit the market.


----------



## Vya Domus (Aug 19, 2020)

dragontamer5788 said:


> We have many computers to choose from. We should pick the computer that best matches our personal needs.



Some computers are better than others at specific tasks, but some provide very good general performance across the board. I trust an Intel or AMD processor to be good enough at everything; I don't, however, place the same trust in an Apple mobile chip, because I know it won't be. This is why I said it's a waste of time to look at these very specific scenarios.


----------



## TheoneandonlyMrK (Aug 19, 2020)

dragontamer5788 said:


> I mean, if we're talking about applicability to the largest audience, Geekbench is testing HTML5 DOM traversals and Javascript code. (Stuff that SPECInt, Drystone, and other benchmarks fail to test for).
> 
> Wanna go back to discussing Geekbench Javascript / Web tests? That's surely the highest proportion of users. In fact, everyone browsing this forum is probably performing AES-encryption/decryption (for HTTPS), and HTML5 DOM rendering. Please explain to me how such an AES-Decryption + HTML5 DOM test is unreasonable or inaccurate.
> 
> ...


Your hyperbole is pure bullshit; 1 core being used on even the 7700K isn't 75 watts.
People use such to browse the web, yes, we agree there; a light use case most do on their phones.

You have failed to address the point that there are three newer generations of intel chip with an architectural change on the way.

And none of you have explained how they can stay on the latest low-power node yet somehow mythically clock as high as a high-power device.

All while arguing against your test being lightweight and irrelevant, which you just agreed it's use case typically is.

Look at what silicon they will make it on ,the price they want to hit and you get a CX8 8 core 4/4 hybrid.

Great.


----------



## dragontamer5788 (Aug 20, 2020)

> Your hyperbole is pure bullshit



Couldn't have said it better myself.

My facts are facts. Your "discussion" is bullshit hyperbole written as a spewing pile of one-sentence-long paragraphs.

--------

Since you're clearly unable to take down my side of the discussion, I'll "attack myself" on behalf of you.

SIMD units are incredibly important to modern consumer workloads. From Photoshop, to Video Editing, to audio encoding, multimedia is very commonly consumed AND produced even by the most casual of users. With only 3 FP/Vector pipelines of 128-bit width, the Apple A12 (and future chips) will simply be handicapped in this entire class of important benchmarks. Even worse: these 128-bit SIMD units are hampered by longer latencies (2-clocks) compared to Intel's 512-bit wide x 1-clock latency (or AMD Zen2's 256-bit wide by 1-clock latency).

Future users expecting strong multimedia performance: be it Photoshop filters, messing with electronic Drums or other audio-processing, and simple video editing, will simply be unable to compete against current generation x86 systems, let alone next-gen's Zen3 or Icelake.
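Taking the widths in that argument at face value, the per-instruction gap is simple arithmetic (a sketch only; real sustained throughput also depends on pipe count and pipelining, not just instruction latency):

```python
# FP32 lanes per vector instruction at the widths mentioned above.
# Illustrative arithmetic only: these widths are as claimed in the
# discussion, and sustained throughput depends on more than lane count.
def fp32_lanes(width_bits):
    return width_bits // 32  # 32-bit floats per vector register

for name, width in [("A12-style 128-bit NEON", 128),
                    ("Zen 2 256-bit", 256),
                    ("Intel 512-bit AVX-512", 512)]:
    print(f"{name}: {fp32_lanes(width)} FP32 lanes per op")
```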


----------



## TheoneandonlyMrK (Aug 20, 2020)

dragontamer5788 said:


> Couldn't have said it better myself.
> 
> My facts are facts. Your "discussion" is bullshit hyperbole written as a spewing pile of one-sentence-long paragraphs.
> 
> ...


I rebutted many of your points; you're blind to it. 7700K/10700K performance increases for one, next generation for two.

The fact that the main use case you posited for your benchmark's viability was web browser action!

Getting technical, Intel have Foveros and FPGA tech; as soon as they have a desktop CPU with an FPGA and HBM, it could be game over as far as benches go, against anything on anything, enabled by oneAPI.
PowerPC is simply in another league.
AMD will iterate core count way beyond Apple's horizon and incorporate better fabrics and IP.

And despite it all most will still just surf on their phones.

And fewer, not more, people will do real work on iOS.

I'm getting off this roundabout; our opinions differ, let's see where five years gets us.

You don't answer any questions, just e-peen your leet dev skills like we should give a shit.
We shouldn't; it's irrelevant to me what they/you like to use. It only matters to me and 99% of the public what we want to do, and our opinions differ on the scale of variability of workload here, it seems.

Bye.


----------



## Darmok N Jalad (Aug 20, 2020)

dragontamer5788 said:


> Couldn't have said it better myself.
> 
> My facts are facts. Your "discussion" is bullshit hyperbole written as a spewing pile of one-sentence-long paragraphs.
> 
> ...


Ironically, I’m a hobbyist photographer who does all his edits on an iPad Pro. It’s just as fast as anything I’ve used on a desktop—sliders apply pretty much in real time. That’s even with an occasional 80MP RAW. I’ve also made and exported movies in iMovie on the iPad, and it was seamless and fast on export. Would a pro want to do this? Probably not, but that might be more how iOS isn’t as ideal as a desktop OS for repetitive tasks, so I’m curious to see how Apple’s SOCs will handle even larger RAW files, imports of hundreds of images and batch updates. I guess my point is that the “feel” today isn’t so far off, but I don’t know how well that will translate from iOS to MacOS, where true multitasking is an everyday expectation versus the limited experience that it is on iOS today.


----------



## squallheart (Aug 20, 2020)

Searing said:


> A14X will run Shadow of the Tomb Raider and any other PS4/Xbox game. Not surprising since it will match a GTX 1060 easily enough, but only need 10-15W. The Switch is circa 2014 hardware (Galaxy Note 4 CPU plus half a GTX 750) so imagine a Switch that is 6 years more advanced and there you have it.



I am curious how you came to that conclusion.
I actually looked up GFXbench, which is cross-platform and fairly well regarded and there is no known bias to any platform.









GFXBench - Unified cross-platform 3D graphics benchmark database (gfxbench.com)

The first unified cross-platform 3D graphics benchmark database for comparing Android, iOS, Windows 8, Windows Phone 8 and Windows RT capable devices based on graphics processing power.




I am comparing the A12Z (from the 4th-gen iPad Pro, faster than the A12X) to the 1060.

For Aztec High offscreen, the most demanding test, the A12Z recorded 133.8 vs 291.1 on the GTX 1060.

So my question to you is, are you expecting the A14X to more than double in graphics performance?
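The gap those two Aztec scores imply is easy to quantify:

```python
# GFXBench Aztec High offscreen scores as quoted above
a12z, gtx1060 = 133.8, 291.1

# Uplift the A14X would need over the A12Z just to tie the desktop 1060
ratio = gtx1060 / a12z
print(f"A14X would need a {ratio:.2f}x uplift over the A12Z to tie")
```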


----------



## dragontamer5788 (Aug 20, 2020)

theoneandonlymrk said:


> Bye.



Is it really bye? Or are you one of those dudes who don't really mean what they say?



theoneandonlymrk said:


> Getting technical Intel have foveros and FPGA tech, as soon as they have a desktop CPU with an FPGA and HBM, it could be game over so far as benches go against anything on anything, enabled by one API.
> Power pc simply are in another league.
> AMD will iterate core count way beyond apples horizon and incorporate better fabrics and Ip..



Have you ever used an FPGA? Have you ever used Power? Power has anemic SIMD units. The benefit of Power9 / Power10 is memory bandwidth and L3.

FPGAs are damn hard to program for. Not only is Verilog / VHDL rarely taught, synthesis takes a lot longer than compiles. OpenCL / CUDA is far simpler, honest. We'll probably see more from Intel's Xe than from FPGAs.

Not to mention, high-end FPGAs are like $4000+ if we're talking things competitive vs GPUs or CPUs.


----------



## Vayra86 (Aug 20, 2020)

dragontamer5788 said:


> I'm not sure if you guys know what I know.
> 
> Lets take a look at a truly difficult benchmark. One that takes over 20 seconds so that "Turbo" isn't a major factor.
> 
> ...



You're echoing Apple marketing and are desperately finding talking points to make it worth our while. Its admirable. But don't sell it as a lack of understanding from others, because its really not. There is no magic in CPU land, contrary to what you might think. Its a balancing act and Apple has specifically made its own SoC to get its own specific, required balance on a chip. That balance works well for the use case Apple has selected it to do.

Its like @theoneandonlymrk says, again, eloquently... these highly specific workloads say little about overall CPU performance. Having a CPU repeatedly do the same task is not a real measure of overall performance. Its a measure of its performance in that specific workload. If you do that for different architectures, the comparison is off. You need a full, rounded suite of benches to get a real handle on overall performance between different CPUs. Its irrelevant the chip can repeat that test a million times over. Its still a burst mode, specific workload-based view, and not the whole picture.

Its just that simple. Apple isn't smarter than the rest. They have specialized themselves to very specific workloads, specific devices, with specific use cases. That is why any sort of advanced user / system modding stuff on Apple is nigh impossible and if it IS, Apple has carefully prepared the path you need to walk for it. This is a company that manages your user experience. On most other (non mobile) OS'es, the situation is turned around: you get an OS with lots of tools, have fun with it, the only thing you can't touch is kernel... unless you try harder.

The new direction for Apple, and I've said it as a joke, but its really not... terminals. ARM and the chip Apple has created is fantastic for logging in,  and getting the heavy lifting done off-site. Cloud. Apple's been big on it, and they'll go bigger. They are drooling all over Chromebooks because imagine the margins! They can sell an empty shell with an internet connection that can 'feel' like it is a true Apple device, barely include hardware, and still get the Apple Premium on it.

That is what the ARM push is about, alongside another step forward in full IP ownership of soft- and hardware.


----------



## dragontamer5788 (Aug 20, 2020)

Vayra86 said:


> You need a full, rounded suite of benches to get a real handle on overall performance between different CPUs.



My discussion revolves around Z3 because theoneandonlymrk clearly refused to accept SPECint2006 and Geekbench4 as benchmarks. Anandtech has the Xeon 8176 @3.8 GHz here: https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7


| SPECint2006 | Xeon 8176 @ 3.8 GHz | Apple A12 Vortex @ 2.5 GHz |
| --- | --- | --- |
| 400.perlbench | 46.4 | 45.38 |
| 401.bzip2 | 25 | 28.54 |
| 403.gcc | 31 | 44.56 |
| 429.mcf | 40.6 | 49.92 |
| 445.gobmk | 27.6 | 38.54 |
| 456.hmmer | 35.6 | 44.04 |
| 458.sjeng | 30.8 | 36.60 |
| 462.libquantum | 86.2 | 113.40 |
| 464.h264ref | 64.5 | 66.59 |
| 471.omnetpp | 37.9 | 35.73 |
| 473.astar | 24.7 | 27.25 |
| 483.xalancbmk | 63.7 | 47.07 |
So now we have SPECInt2006 *AND* Geekbench4 suites where the A12 Vortex is crushing single-threaded performance. That's just the reality of the A12 chip. The results speak for themselves, the A12 Vortex at 2.5 GHz outright beats the Xeon in 75% of the SPECint2006 suite.

Yeah, the A12 is really good at 64-bit singlethreaded code. Surprisingly good. (Note: H264 is implemented with SIMD instructions typically. The SPEC Int2006 benchmark is a 64-bit reference implementation. So this doesn't really test H264 in practice)
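Pulling the numbers out of that table, the 75% claim is easy to verify, and a geometric mean gives a one-number summary (a quick sanity check on the quoted scores, not an official SPEC ratio):

```python
import math

# Scores from the table above: (Xeon 8176 @ 3.8 GHz, A12 Vortex @ 2.5 GHz)
spec = {
    "400.perlbench": (46.4, 45.38), "401.bzip2": (25, 28.54),
    "403.gcc": (31, 44.56),         "429.mcf": (40.6, 49.92),
    "445.gobmk": (27.6, 38.54),     "456.hmmer": (35.6, 44.04),
    "458.sjeng": (30.8, 36.60),     "462.libquantum": (86.2, 113.40),
    "464.h264ref": (64.5, 66.59),   "471.omnetpp": (37.9, 35.73),
    "473.astar": (24.7, 27.25),     "483.xalancbmk": (63.7, 47.07),
}

# Count subtests where the A12 score is higher, and take the geometric
# mean of the per-test A12/Xeon ratios.
wins = sum(a12 > xeon for xeon, a12 in spec.values())
geomean_ratio = math.exp(sum(math.log(a12 / xeon)
                             for xeon, a12 in spec.values()) / len(spec))

print(f"A12 wins {wins}/{len(spec)} subtests "
      f"({100 * wins // len(spec)}%), geomean ratio {geomean_ratio:.2f}x")
```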

----------

Look, I don't even own an  iPhone. I don't give a care about iPhones, and I don't plan to deal with any of Apple's walled garden bullshit. I don't like their business model, I don't like Apple. I don't like their stupid reality distortion fields.

But I've seen the benchmarks. Their A12 chip is pretty damn good in single-threaded performance. As a CPU nerd, that makes me intrigued and interested. But not really enough to buy an iPhone yet.


----------



## Vya Domus (Aug 20, 2020)

As far as I can tell, AVX-512 wasn't used by the compiler for that Xeon. I understand from your logic that everything is fair and square, so we'd better make sure every chip is making use of every one of its advantages, right?


----------



## dragontamer5788 (Aug 20, 2020)

Vya Domus said:


> As far as I can tell AVX512 wasn't used in the compiler for that Xeon. I understand from your logic that everything is fair and square so we better make sure every chip is making use of every one it's advantages right ?



According to Anandtech, it was GCC 7.2 -Ofast. So I don't think AVX512 was enabled. But I don't expect much improvement from perlbench, gcc, astar, or bzip2.

The Xeon 8176 has one of the best SIMD vector processors on the market. So yes, it would be "more fair" to let the Xeon use its SIMD units to the degree that is convenient (i.e. GCC's autovectorizer), as long as intrinsics and/or hand-crafted assembly aren't being used. A few memcpy or strcmp calls here and there might get a bit faster, but I don't expect any dramatic improvement in the Xeon's speed.

---------

EDIT: I can't find a benchmark that runs the Xeon 8176 exactly as we like. The closest run I found is: https://www.spec.org/cpu2006/results/res2017q3/cpu2006-20170710-47735.html

This runs 112 identical copies of the benchmarks across the 56-cores (x2 threads). Divide the run by 56 to get a "pseudo-single core" score. Yes, with AVX512 enabled (-xCORE-AVX512). We can see that none of the SPECInt2006 scores vary dramatically from Anandtech's single-threaded results.

I don't think AVX512 will matter too much on 64-bit oriented code like the SPECInt2006 suite.


----------



## Searing (Aug 20, 2020)

squallheart said:


> I am curious how you came to that conclusion.
> I actually looked up GFXbench, which is cross-platform and fairly well regarded and there is no known bias to any platform.
> 
> 
> ...



First of all, Apple needs to conclusively beat the GTX 1060 mobile chip, not the desktop one (since both are mobile form factors, and Apple can just up-clock their GPU to get desktop performance, the same as NVIDIA does). GPUs are also very complex; you'll have many very different results depending on the game or benchmark (a CPU is very simple in comparison, contrary to the Geekbench haters).

A Surface Book 2 with a GTX 1060 scores 330k in 3DMark Ice Storm and the iPad Pro A12Z scores 220k (GPU test; in the overall score the iPad has a faster CPU than the Surface, so that makes Apple look even better). So they only need 50 percent more performance.

Considering that the A12X/Z are basically 2 years old I think Apple can more than add 50 percent to GPU performance. I'm actually expecting 1.5x for the iPad and 2x for the ARM Mac models, at least. That would also be enough to beat their existing models, like the 5500M.

(The 2020 iPad Pro's A12Z is twice as fast as 2017's A10X on the CPU side, and 50 percent faster on the GPU side.)
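That "50 percent more performance" figure follows directly from the two scores quoted above (a trivial check, using the numbers as given):

```python
# 3DMark Ice Storm GPU scores as quoted above
gtx1060_laptop = 330_000
ipad_pro_a12z = 220_000

# Ratio the next Apple GPU needs to close the gap
uplift_needed = gtx1060_laptop / ipad_pro_a12z
print(f"{uplift_needed:.1f}x")  # 1.5x -> "50 percent more performance"
```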


----------



## squallheart (Aug 21, 2020)

Searing said:


> First of all Apple needs to conclusively beat the GTX 1060 mobile chip, not the desktop one (since both are mobile form factors, and Apple can just up clock their GPU to get desktop performance, the same nVidia does). GPUs are also very complex, you'll have many very different results depending on what the game is or benchmark (CPU is very simple in comparison, contrary to the Geekbench haters).
> 
> Surface 2 with GTX 1060 scores 330k in 3Dmark Ice Storm and the iPad Pro A12Z scores 220k (GPU test, the overall score the iPad has a faster CPU than the Surface so that makes Apple look even better). So they only need 50 percent more performance.
> 
> ...



You didn't specify you were comparing to the mobile variant.

https://www.anandtech.com/show/13661/the-2018-apple-ipad-pro-11-inch-review/6 (I see you got the results from here)

FYI that is a really old test, based on rather ancient APIs.

If you look at Anandtech's other benchmarks of mobile SoCs, the GFXBench Aztec test is used a lot more often, and they also calculate fps/W for those results, so I would think that's a better comparison.

Anyways, I guess the bottom line is you think it will get a 50% performance boost over the A12X/Z. I have no qualms with that and I think it's reasonable. I am still not convinced that it will match the laptop variant of the 1070, though, unless it is extremely thermally constrained.


----------



## Searing (Aug 21, 2020)

squallheart said:


> You didn't specify you were comparing to the mobile variant.
> 
> https://www.anandtech.com/show/13661/the-2018-apple-ipad-pro-11-inch-review/6 (I see you got the results from here)
> 
> ...



You don't have to specify it, I'm just saying there are different forms of victory, one is beating the mobile, one the desktop. It could easily beat both, it is up to Apple. And different games will have different results. For example the ALU portion (arithmetic logic unit) of Apple GPUs is really strong. It all depends on what kind of visuals you are rendering.

Also it gets complicated if we are talking about the iPad version, or the presumably much stronger desktop versions.


----------



## squallheart (Aug 21, 2020)

Searing said:


> I'm just saying there are different forms of victory, one is beating the mobile, one the desktop. It could easily beat both, it is up to Apple.



You have a tendency to make claims without any support IMO.  "Easily" beat a discrete desktop GPU?



Searing said:


> the ALU portion (arithmetic logic unit) of Apple GPUs is really strong. It all depends on what kind of visuals you are rendering.


Again, how is this relevant? The benchmark I used was cross-platform, without any known bias. If you want to talk about specific hardware, the bandwidth alone is severely limited since it uses LPDDR4/5 RAM.

"Also it gets complicated if we are talking about the iPad version, or the presumably much stronger desktop versions."

Your original statement was that the new iPad can easily run games from PS4/Xbox and be faster than a 1070. Don't move the goalposts here.


----------



## Searing (Aug 22, 2020)

squallheart said:


> You have a tendency to make claims without any support IMO.  "Easily" beat a discrete desktop GPU?
> 
> 
> Again how is this relevant? The benchmark I used was cross-platform, without any known bias. If you want to talk about specific hardware, the bandwidth alone is severely limited it uses lpddr4/5 ram.
> ...



You are plainly ignoring the meaning of the words. Can Apple silicon easily beat a GTX 1060? Yes. I never said 1070, and haven't moved any goalposts, you are just making stuff up now. And it can easily run PS4 games, that is mainly a comment on the quality of the CPU side of things and the fact that the iPad for example has double the memory bus width of the Switch and other mobile products. The iPad Pro already has a desktop GPU 128bit bus just like a GTX 1650 but it can't use GDDR6 for power consumption reasons of course. Stick GDDR6 in a desktop Apple CPU/GPU chip and voila, then it really gets exciting.


----------



## squallheart (Sep 21, 2020)

Searing said:


> For example the ALU portion (arithmetic logic unit) of Apple GPUs is really strong. It all depends on what kind of visuals you are rendering.


To be honest, when you wrote that it really showed that you didn't know much about GPUs. All GPUs carry out arithmetic in ALUs, which are essentially just hardware that does math. To put it simply, they are all part of the performance of the GPU, and a combination of that plus factors like memory bandwidth leads to a certain level of rasterization performance.

The way you described the ALU, making it sound like it excels at a certain kind of graphical workload, just tells me you fail to understand that it's still part of rasterization performance. Not sure why you can't just concede you made a groundless claim and instead choose to double down with that ALU comment.


----------

