
Apple A14X Bionic Rumored To Match Intel Core i9-9880H

Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360 EK extreme rad + 360 EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung U28E850R 4K FreeSync / Dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Let's actually talk Geekbench for a sec. I know Geekbench 3 was highly flawed, but why does everyone think that Geekbench 4 is bad?

Here's Geekbench4's workload: https://www.geekbench.com/doc/geekbench4-cpu-workloads.pdf

Now, I recognize that a lot of Geekbench 4's benchmarks fit inside L1 cache, but that's more a testament to how big L1 caches have gotten (128kB on the iPhone). Let's be frank: if a 128kB L1 cache is what's needed for the modern consumer, then we should be blaming AMD / Intel for failing to grow their L1 to 128kB (AMD / Intel still have 32kB L1 data caches).

Let's really look at Geekbench 4's benchmarks. Unlike Geekbench 3, AES is downgraded to just another test instead of its own category. (And mind you, AMD Zen 2 and Intel Xeons have doubled their AES pipelines recently: AES remains an important workload.) There's JPEG compression (emulating a camera), HTML5 parsing, Lua scripting, an SQLite database, and PDF rendering. Lots of good workloads here, very similar to the wide variety of workloads of the modern, average consumer. Even an LLVM compile (3,900 lines of code).

There's a bunch of "synthetics" too: 450kB LZMA compression, Dijkstra, Canny (computer vision), a 300x300 raytracer, etc. A bunch of tiny synthetics.
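(Side note on the cache sizes above: the effect is easy to see with a toy pointer-chase. This is a minimal sketch of my own, not Geekbench code, assuming a POSIX system and something like gcc -O2. Per-load latency steps up each time the working set spills out of a cache level, which is exactly why a 128kB L1 flatters small-footprint tests.)

[CODE]
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERS 20000000L

/* Time one dependent load per iteration over a working set of `bytes`. */
static double chase_ns(size_t bytes) {
    size_t n = bytes / sizeof(size_t);
    size_t *buf = malloc(n * sizeof *buf);
    for (size_t i = 0; i < n; i++) buf[i] = i;
    /* Sattolo shuffle: one full cycle, so every load depends on the last */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }
    struct timespec t0, t1;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < ITERS; i++) p = buf[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    volatile size_t sink = p; (void)sink;   /* keep the chain alive */
    free(buf);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / ITERS;
}

int main(void) {
    for (size_t kb = 16; kb <= 8192; kb *= 2)
        printf("%5zu KiB working set: %6.2f ns/load\n", kb, chase_ns(kb * 1024));
    return 0;
}
[/CODE]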

--------------

Geekbench 4 is what it is: a small test of the L1 cache and turbo behavior of modern processors. It's probably closer to the average phone user's, or even desktop user's, workflow than SPEC, LINPACK, or HPCG.

But yes, the iPhone crushes Geekbench, because the iPhone has a 128kB L1 cache. But is that a legitimate reason to call the test inaccurate? We can't just hate a test because we disagree with the results. You should instead attack the fundamental setup of the test, and tell us why it's inaccurate.

It's pretty insane that the iPhone has a 128kB L1 cache per core. Yeah, that's its secret to crushing Geekbench 4, and it's pretty obvious. But Intel Skylake's L2 cache is only 256kB, and AMD Zen 2's L2 is 512kB. Having such a large L1 cache is a testament to the A12 design (larger caches are usually slower; making a cache that large work as L1 must have been difficult).
You realise all modern processors are designed for turbo and dash-to-rest operation.
Any bench shorter than the Tau value (Intel's turbo time window) isn't worth shit regardless, IMHO. Well, not really; you can gauge performance to a degree, but it's not the whole picture, and that's Geekbench for you: short bursts, a test designed for phones and light use cases.
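(A toy model of that mechanism: under Intel's scheme the chip may draw up to PL2 until a moving average of package power reaches PL1, with Tau as the averaging window. The numbers below are invented, and the EWMA form is the commonly described approximation rather than the exact firmware behavior:)

[CODE]
#include <stdio.h>

/* Toy PL1/PL2/Tau model: all numbers invented, real firmware is smarter. */
int main(void) {
    const double PL1 = 45.0, PL2 = 90.0, tau = 28.0; /* watts, watts, seconds */
    const double dt = 1.0;                           /* 1-second time steps   */
    double avg = 0.0;                                /* running average power */
    for (int t = 0; t < 40; t++) {
        /* draw PL2 while the average is under PL1, else fall back to PL1 */
        double draw = (avg < PL1) ? PL2 : PL1;
        avg += (draw - avg) * (dt / tau);            /* EWMA update */
        printf("t=%2ds  draw=%3.0fW  avg=%5.1fW\n", t, draw, avg);
    }
    return 0;
}
[/CODE]

With these made-up numbers the average crosses PL1 after roughly 19 seconds, so any benchmark shorter than that never sees the sustained clocks at all.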
 
Joined
Mar 16, 2017
Messages
2,100 (0.75/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVME 1GB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
I don't really see how it's that much of a stretch at this point. This A14X is Apple's latest and greatest on a 5nm node, getting to scale up in power, versus a tired, Skylake-based chip on a very old 14nm node and crammed down to its thermal minimum. Now if the A14X matched the 9900K, that would be much harder to believe. I don't think Apple would make this move unless they had something solid lined up. I guess it won't be too much longer before we find out, and benchmarks beyond Geekbench will be available on ARM macOS.
 
Joined
Apr 24, 2020
Messages
2,709 (1.62/day)
You realise all modern processors are designed for turbo and dash-to-rest operation.

And I realize that this is a useful feature for rendering webpages on cellphones. And therefore, a benchmark that measures this behavior (especially since rendering webpages is probably 90% of what cellphones and laptops do all day) is a critical measurement of performance for the modern consumer.

Any bench shorter than the Tau value (Intel's turbo time window) isn't worth shit regardless, IMHO

Explain. Why? The situation is clearly common. I can look at "top" or Windows Task Manager and see that my utilization is damn near 0% most of the time.

We hardware nerds love to pretend that we're running our systems at high utilization with high efficiency, as if we were Bitcoin miners or Folding@home geeks all the time. But that's just not the reality of the day-to-day. Even programming at work has started to get offloaded to dedicated "build servers" and continuous-integration facilities, off the desktop/workstation at my desk.

Browsing HTML documentation for programming is hugely important, and a lot of that is a "turbo, race to idle" kind of workload. The chip idles, then suddenly the HTML5 DOM shows up, maybe with a bit of JavaScript to run before reaching its final form. But take this webpage, for instance: this forum page is ~18.8 kB of HTML5, which is small enough to fit inside the L1 cache of all of our machines.

That's the stuff Geekbench is measuring: opening PDF documents, browsing HTML5, parsing the DOM, decoding JPEG images. It seems somewhat realistic to me, with various internal webpages open on my work computer and internal wikis I'm constantly updating.

---------

I don't even like Apple. I don't own a single Apple product. But you're going to have to explain the flaws of Geekbench if you really want to support your discussion points.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
And I realize that this is a useful feature for rendering webpages on cellphones. And therefore, a benchmark that measures this behavior (especially since rendering webpages is probably 90% of what cellphones and laptops do all day) is a critical measurement of performance for the modern consumer.



Explain. Why? The situation is clearly common. I can look at "top" or Windows Task Manager and see that my utilization is damn near 0% most of the time.

We hardware nerds love to pretend that we're running our systems at high utilization with high efficiency, as if we were Bitcoin miners or Folding@home geeks all the time. But that's just not the reality of the day-to-day. Even programming at work has started to get offloaded to dedicated "build servers" and continuous-integration facilities, off the desktop/workstation at my desk.

Browsing HTML documentation for programming is hugely important, and a lot of that is a "turbo, race to idle" kind of workload. The chip idles, then suddenly the HTML5 DOM shows up, maybe with a bit of JavaScript to run before reaching its final form. But take this webpage, for instance: this forum page is ~18.8 kB of HTML5, which is small enough to fit inside the L1 cache of all of our machines.

That's the stuff Geekbench is measuring: opening PDF documents, browsing HTML5, parsing the DOM, decoding JPEG images. It seems somewhat realistic to me, with various internal webpages open on my work computer and internal wikis I'm constantly updating.

---------

I don't even like Apple. I don't own a single Apple product. But you're going to have to explain the flaws of Geekbench if you really want to support your discussion points.
You ignored my point because it doesn't fit your perspective, but my point is quite simple and is already explained adequately.

I also said it's my opinion sooo.

And my PC has shown 80-100% load for years now, as I also said.

You know what gets hardly any abuse? My phone. It's surfed on, and fits your skewed perspective.

Geekbench on it makes sense.
 
Joined
Sep 17, 2014
Messages
22,442 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Every forum is full of ignorant people like you, just ignoring every benchmark (it is Geekbench 5 now, and there are many other benchmarks you can use), ignoring every expert, Anandtech included, ignoring actual real-world results. World's fastest computer is ARM-based? Ignore it. Amazon offering ARM server instances? Ignore it. This is why the world passes some people by. They just can't accept that something has changed. There is an interesting question about psychology here: why does ARM being fast bother you? Why do you not accept basic reality? ARM is just an ISA. The 68000 was fast, PowerPC was fast, x86 was fast, ARM was fast; it is just an ISA.

"Come on. Sustained means nothing, right, the one thing that you know Apple's chips are horrible at in terms of scalability means nothing. Got it." Any chip can run with sustained performance with a bit more cooling and power, yes it means nothing. We are comparing the CPUs, not the form factor.

You're saying it yourself and others have said it too, you just fail to realize it.

'PowerPC was fast'... it was even used in a PlayStation at one point, yet today you don't see a single one in any gaming or consumer machine. In enterprise, though? Yep. It's a tool that works best in that setting.

'ARM is fast'... correct. We have ThunderX chips that offer shitloads of cores and can use lots of RAM. They're not the ones we see in a phone though. We also have Apple's low-core-count, single-task optimized mobile chips. You won't see those in an enterprise environment. That's not 'ignoring it', it is separating A from B correctly.

Sustained means nothing in THE USE CASE Apple has selected for these chips. That is where all chips are going: more specialized, more specific to optimal performance in a desired setting. Even Intel's own range of CPUs, even in all those years they were 'sleeping', has pursued that goal. They are still re-using the same core design in a myriad of power envelopes and making it work top to bottom, in enterprise and in laptops, and they've been trying to get there on mobile. The latter is the ONE AREA where they cannot seem to succeed, a bit like Nvidia's Tegra designs, which are always somewhat too high-power: they perform well, but are too bulky to be as lean as ARM under 5W. End result: Nvidia still didn't get traction with its ARM CPUs for any mobile device.

In the meantime, Apple sees Qualcomm and others developing chips along the 'x86 route': higher core counts, more and more hardware thrown at ever less efficient software, expanded functions. That is where the direction of ARM departs from Apple's overall strategy; they want optimized hardware-and-software systems. You seem to fail to make that distinction, thinking Apple's ARM approach is 'The ARM approach'. It's not; the ISA is young enough to make fundamental design decisions.

Like @theoneandonlymrk said eloquently: Phillips head? Phillips screwdriver.

And I realize that this is a useful feature for rendering webpages on cellphones. And therefore, a benchmark that measures this behavior (especially since rendering webpages is probably 90% of what cellphones and laptops do all day) is a critical measurement of performance for the modern consumer.



Explain. Why? The situation is clearly common. I can look at "top" or Windows Task Manager and see that my utilization is damn near 0% most of the time.

We hardware nerds love to pretend that we're running our systems at high utilization with high efficiency, as if we were Bitcoin miners or Folding@home geeks all the time. But that's just not the reality of the day-to-day. Even programming at work has started to get offloaded to dedicated "build servers" and continuous-integration facilities, off the desktop/workstation at my desk.

Browsing HTML documentation for programming is hugely important, and a lot of that is a "turbo, race to idle" kind of workload. The chip idles, then suddenly the HTML5 DOM shows up, maybe with a bit of JavaScript to run before reaching its final form. But take this webpage, for instance: this forum page is ~18.8 kB of HTML5, which is small enough to fit inside the L1 cache of all of our machines.

That's the stuff Geekbench is measuring: opening PDF documents, browsing HTML5, parsing the DOM, decoding JPEG images. It seems somewhat realistic to me, with various internal webpages open on my work computer and internal wikis I'm constantly updating.

---------

I don't even like Apple. I don't own a single Apple product. But you're going to have to explain the flaws of Geekbench if you really want to support your discussion points.

There you go, and that is why I said Apple is going to offer you terminals, not truly powerful devices. Intel laptop CPUs are not much different: very bursty, and slow as shit under prolonged loads. I haven't seen a single one that doesn't throttle like mad after a few minutes. They do it decently... but sustained performance isn't really there.

I will underline this again:
Apple found a way to use ARM to guarantee their intended user experience.
This is NOT a performance guarantee. It's an experience guarantee.

You need to place this in the perspective of how Apple phones didn't really have true multitasking while Android did. Apple manages its scheduler in such a way that it gets the performance when the user demands it. They map out what a user will be looking at and make sure they show him something that doesn't feel like waiting. A smooth animation (that takes almost a second), for example, is also a way to remove the perception of latency or lacking performance. CPU won't burst? Open the app and show an empty screen, fill in the data points later. It's nothing new. Websites do it too, irrespective of architecture; the newer frameworks especially are full of this crap. Very low information density and large sections of plain color are not just a design style; it's a way to cater to mobile limitations.

If you use Apple devices for a while, take a long look at this, monitor load, and you can see how it works pretty quickly. There is no magic sauce.
 
Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Apple found a way to use ARM to guarantee their intended user experience.
This is NOT a performance guarantee. It's an experience guarantee.

Indeed, transistor-budget-wise, a considerable chunk of their SoCs is just dedicated signal processors for different purposes. To put things into perspective, it has 8.5 billion transistors; that's as much as an average GPU... it had better be fast.

Not to mention that they don't just fit large L1 caches; everything related to on-chip memory is colossal in size. And again, anyone can do that; it's not a merit specific to an ARM design.
 
Joined
Apr 24, 2020
Messages
2,709 (1.62/day)
You ignored my point because it doesn't fit your perspective

There you go, and that is why I said Apple is going to offer you terminals, not truly powerful devices. Intel laptop CPUs are not much different: very bursty, and slow as shit under prolonged loads. I haven't seen a single one that doesn't throttle like mad after a few minutes. They do it decently... but sustained performance isn't really there.

I'm not sure if you guys know what I know.

Let's take a look at a truly difficult benchmark, one that takes over 20 seconds so that "turbo" isn't a major factor.


Though 20 seconds is still short, the blogpost indicates that they ran the 3-SAT solver continuously, over and over again, so the iPhone was running at its thermal limits. The Z3 solver works on 3-SAT, an NP-complete problem. At this point, it has been demonstrated that Apple's A12 has faster L1, L2, and memory performance than even Intel's chips in a very difficult, single-threaded task.

----------

Apple's chip team has demonstrated that its small 5W chip is in fact pretty good at some very difficult benchmarks. It shouldn't be assumed that iPhones are slower anymore. They're within striking distance of desktops in single-core performance in some of the densest compute problems.
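(For readers who have never touched a solver, here's what "SAT-style work" looks like in miniature. This is a toy brute-force 3-SAT checker of my own, nothing like Z3's real CDCL search, but the character of the workload is the same: pure integer ops, data-dependent branches, walks through clause data, no floating point, and nothing an "accelerator" block could offload.)

[CODE]
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { int lit[3]; } Clause;  /* +v: var v true, -v: var v false */

/* Does one truth assignment (bit i = variable i+1) satisfy every clause? */
static bool satisfies(const Clause *cs, int m, unsigned assign) {
    for (int i = 0; i < m; i++) {
        bool ok = false;
        for (int k = 0; k < 3; k++) {
            int v = cs[i].lit[k];
            bool val = (assign >> (abs(v) - 1)) & 1u;
            if ((v > 0) == val) { ok = true; break; }
        }
        if (!ok) return false;  /* one falsified clause kills the assignment */
    }
    return true;
}

int main(void) {
    /* (x1 v x2 v x3) & (!x1 v x2 v !x3) & (x1 v !x2 v x3) */
    Clause cs[] = {{{1, 2, 3}}, {{-1, 2, -3}}, {{1, -2, 3}}};
    int n = 3, m = 3;
    for (unsigned a = 0; a < (1u << n); a++)
        if (satisfies(cs, m, a)) { printf("SAT, assignment %u\n", a); return 0; }
    printf("UNSAT\n");
    return 0;
}
[/CODE]

Brute force is 2^n, which is why real solvers prune aggressively; that pruning is exactly the branchy, cache-sensitive part a big L1 helps with.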
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
I'm not sure if you guys know what I know.

Let's take a look at a truly difficult benchmark, one that takes over 20 seconds so that "turbo" isn't a major factor.


Though 20 seconds is still short, the blogpost indicates that they ran the 3-SAT solver continuously, over and over again, so the iPhone was running at its thermal limits. The Z3 solver works on 3-SAT, an NP-complete problem. At this point, it has been demonstrated that Apple's A12 has faster L1, L2, and memory performance than even Intel's chips in a very difficult, single-threaded task.

----------

Apple's chip team has demonstrated that its small 5W chip is in fact pretty good at some very difficult benchmarks. It shouldn't be assumed that iPhones are slower anymore. They're within striking distance of desktops in single-core performance in some of the densest compute problems.
20 seconds, 'nuff said, I'm out; I'll leave you to stroke Apple's ego.

Another great example of a waste of time.

A true, great benchmark: 20 seconds.

Single core, dense compute problems, lmfao.
 
Joined
Apr 24, 2020
Messages
2,709 (1.62/day)
SMT solvers, like Z3, solve a class of NP-complete problems with pretty dense compute characteristics. Or are you unaware of what Z3 is?

Or are you unaware of what "dense compute" means? HPCG is sparse compute (memory intensive), while LINPACK is dense (CPU intensive). Z3 is probably in the middle: denser than HPCG but not as dense as LINPACK (I don't know for sure; someone else may correct me on that).

--------

When comparing CPUs, it's important to choose denser compute problems, or else you're just testing the memory interface (e.g., the STREAM benchmark is as sparse as you can get, and doesn't really test anything aside from your DDR4 clockrate). I'd assume Z3 satisfies the requirements of dense compute for the purposes of making a valid comparison between architectures. But if we go too dense, then GPUs win (which is also unrealistic; LINPACK is too dense to match anyone's typical computer use. Heck, it's too dense to be practical even for supercomputers).
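(A minimal sketch of the sparse-versus-dense contrast, with assumed sizes and gcc -O2: the first loop is a STREAM-style triad streaming three arrays far larger than any cache, so it is bandwidth-bound; the second hammers a single register value, so it is bound by the FP units and touches no memory at all.)

[CODE]
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1L << 24)  /* 16M doubles = 128 MB per array: no cache holds this */

static double secs(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void) {
    double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b),
           *c = malloc(N * sizeof *c);
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++) a[i] = b[i] + 3.0 * c[i];  /* STREAM triad */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    /* ~24 bytes of traffic per element: sparse, memory-bound */
    printf("triad:    %6.1f GB/s\n", 24.0 * N / secs(t0, t1) / 1e9);

    double x = 1.0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N * 8; i++) x = x * 1.000000001 + 1e-12;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    /* no memory traffic at all: dense, bound by FP latency */
    printf("fp chain: %6.2f Gflop/s (x=%g)\n",
           2.0 * N * 8 / secs(t0, t1) / 1e9, x);

    free(a); free(b); free(c);
    return 0;
}
[/CODE]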
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
SMT solvers, like Z3, solve a class of NP-complete problems with pretty dense compute characteristics. Or are you unaware of what Z3 is?

Or are you unaware of what "dense compute" means? HPCG is sparse compute (memory intensive), while LINPACK is dense (CPU intensive). Z3 is probably in the middle: denser than HPCG but not as dense as LINPACK (I don't know for sure; someone else may correct me on that).

--------

When comparing CPUs, it's important to choose denser compute problems, or else you're just testing the memory interface (e.g., the STREAM benchmark is as sparse as you can get, and doesn't really test anything aside from your DDR4 clockrate). I'd assume Z3 satisfies the requirements of dense compute for the purposes of making a valid comparison between architectures. But if we go too dense, then GPUs win (which is also unrealistic; LINPACK is too dense to match anyone's typical computer use. Heck, it's too dense to be practical even for supercomputers).
When comparing CPUs, it's important to stick to the same CPU you're comparing, and not to revert to a two-generation-older CPU when your argument is failing; he tested versus a 7700K.

In the notes:

"This benchmark is in the QF_BV fragment of SMT, so Z3 discharges it using bit-blasting and SAT solving.
This result holds up pretty well even if the benchmark runs in a loop 10 times—the iPhone can sustain this performance and doesn't seem thermally limited. That said, the benchmark is still pretty short.
Several folks asked me if this is down to non-determinism—perhaps the solver takes different paths on the different platforms, due to use of random numbers or otherwise—but I checked fairly thoroughly using Z3’s verbose output and that doesn’t seem to be the case.
Both systems ran Z3 4.8.1, compiled by me using Clang with the same optimization settings. I also tested on the i7-7700K using Z3’s prebuilt binaries (which use GCC), but those were actually slower.
What’s going on?
How could this be possible? The i7-7700K is a desktop CPU; when running a single-threaded workload, it draws around 45 watts of power and clocks at 4.5 GHz. In contrast, the iPhone was unplugged, probably doesn’t draw 10% of that power, and runs (we believe) somewhere in the 2 GHz range. Indeed, after benchmarking I checked the iPhone’s battery usage report, which said Slack had used 4 times more energy than the Z3 app despite less time on screen.

Apple doesn’t expose enough information to understand Z3’s performance on the iPhone,




This result holds up pretty well even if the benchmark runs in a loop 10 times—the iPhone can sustain this performance and doesn't seem thermally limited. That said, the benchmark is still pretty short.

He said earlier that it uses only one core on the Apple chip. Really leveraging what's there, eh? Or the light load might sustain a boost better because of that single-core use, but this leads to my point B.

He doesn't know how it's actually running on the Apple chip, so he can't know if it is leveraging accelerators to hit that target.

Still, 7700K versus A12: 14nm+ (only one plus, not mine) versus 7nm.

That's not telling anyone much about how the A14 would compare to a CPU that's out today and not EOL, never mind the next-generation Ryzen and Cove cores it would face.

All in all: fail.

Do I know what Z3 is? Wtaf does it matter.

We are discussing CPU performance, not coder pawn.

I don't use it, and 99.9% of users also don't. I am aware of it, though, and aware of the fact that it, too, is irrelevant, like Geekbench.
 
Joined
Apr 24, 2020
Messages
2,709 (1.62/day)
He doesn't know how it's actually running on the Apple chip, so he can't know if it is leveraging accelerators to hit that target.

Uh huh.

Do I know what Z3 is? Wtaf does it matter.

Well, given your statement above, I'm pretty sure you don't know what Z3 is. Z3 solves NP-complete optimization problems. Knapsack, Traveling Salesman, etc. etc. There's no "accelerator" chip for this kind of problem, not yet anyway. So you're welcome to take your foot out of your mouth now.

Z3 wasn't even made by Apple. It's a Microsoft Research AI project that happens to run really, really well on iPhones. (Open source, too. Feel free to point out where in the GitHub code this "accelerator chip" is being used. Hint: you won't find it; it's a pure C++ project with some Python bits.)

We are discussing CPU performance, not coder pawn.

The CPU performance of the iPhone in Z3 is surprisingly good. I'd like to see you explain why that is the case. There are a whole bunch of other benchmarks, too, that the iPhone does well on. But unlike Geekbench, something like Z3 actually has coder ethos as a premier AI project. So you're not going to be able to dismiss Z3 as easily as Geekbench.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
Uh huh.



Well, given your statement above, I'm pretty sure you don't know what Z3 is. Z3 solves NP-complete optimization problems. Knapsack, Traveling Salesman, etc. etc. There's no "accelerator" chip for this kind of problem, not yet anyway. So you're welcome to take your foot out of your mouth now.

Z3 wasn't even made by Apple. It's a Microsoft Research AI project that happens to run really, really well on iPhones. (Open source, too. Feel free to point out where in the GitHub code this "accelerator chip" is being used. Hint: you won't find it; it's a pure C++ project with some Python bits.)



The CPU performance of the iPhone in Z3 is surprisingly good. I'd like to see you explain why that is the case. There are a whole bunch of other benchmarks, too, that the iPhone does well on. But unlike Geekbench, something like Z3 actually has coder ethos as a premier AI project. So you're not going to be able to dismiss Z3 as easily as Geekbench.
Apples to oranges.

Can you explain where I said I was an expert in your coding speciality to put my foot in my mouth!?

I'm aware of f£#@£g Bigfoot, but do I know many facts about him?


So, getting back to the 9900K: it's 10-20% better than the 7700K you're on about now.

And the Cove and Zen 3 cores are a good 17% better again (partly alleged).
That's at least 30% over the 7700K, and Intel especially work hard to optimise for some code types.

And you are still at it with seconds-long benches.

If I do something on a computer that takes time to run but it's a one-off and only a few seconds, I wouldn't count it as a workload at all.

Possibly a step in a process, but not a workload.

I have a workload or two, and they don't finish in 20 seconds.

Recent changes to GPU architecture and boost algorithms put most GPU benchmarks, and some people's benchmarking, in the same light for me. Now, tests have to be sustained for a few minutes minimum or they're not good enough for me.

I'm happy to just start calling each other names if you want, but best PM; the mods don't like it.
 
Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
How could this be possible? The i7-7700K is a desktop CPU; when running a single-threaded workload, it draws around 45 watts of power and clocks at 4.5 GHz.

Compilers, even with the same flags, can generate differently optimized code for each ISA. CPUs also have many quirks: for instance, while 64-bit floating-point multiplies might be faster on some architectures, divides might be painfully slow compared to others (I give this example because divides are notoriously slow). That benchmark looks to be intensive in terms of integer arithmetic; it's well known that Apple has a wide integer unit, and the cache advantage has been talked about to death already. There is nothing impressive about that; you could run some vector linear algebra with 10^4 x 10^4 matrices and I am sure the Intel CPU would be faster by quite a margin.
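(The multiply-versus-divide quirk is easy to demonstrate with a toy dependent-chain test; a sketch of mine assuming gcc or clang with -O2 and no -ffast-math, since fast-math may rewrite the divide. On most cores the divide chain comes out several times slower.)

[CODE]
#include <stdio.h>
#include <time.h>

static double secs(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void) {
    const long iters = 100000000L;
    struct timespec t0, t1;

    double x = 1.5;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++) x = x * 0.9999999;  /* dependent multiplies */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("mul chain: %.2fs (x=%g)\n", secs(t0, t1), x);

    x = 1.5;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++) x = x / 0.9999999;  /* dependent divides */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("div chain: %.2fs (x=%g)\n", secs(t0, t1), x);
    return 0;
}
[/CODE]

(Printing x keeps the compiler from deleting the loops.)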

It's a waste of time to look at isolated problems, because that's not what is found in the real world. We don't solve SMT, we don't run linear algebra all the time; most user software is mixed in terms of workloads and doesn't follow the regular patterns that make caches so effective.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
Compilers, even with the same flags, can generate differently optimized code for each ISA. CPUs also have many quirks: for instance, while 64-bit floating-point multiplies might be faster on some architectures, divides might be painfully slow compared to others (I give this example because divides are notoriously slow). That benchmark looks to be intensive in terms of integer arithmetic; it's well known that Apple has a wide integer unit, and the cache advantage has been talked about to death already. There is nothing impressive about that; you could run some vector linear algebra with 10^4 x 10^4 matrices and I am sure the Intel CPU would be faster by quite a margin.

It's a waste of time to look at isolated problems, because that's not what is found in the real world. We don't solve SMT, we don't run linear algebra all the time; most user software is mixed in terms of workloads and doesn't follow the regular patterns that make caches so effective.
That was part of the article he linked; it was in quotations. I agree with you, though.
And that's similar to my point: one 20-second workload is no workload at all.

And the comparisons are all over the show.
I would argue the performance of a 7700K is hardcore irrelevant; few are buying quad cores to game on now, and even Intel stopped pushing quads. A12 (1 core) = 7700K = 9900K = 11900, apparently. Shrug.
 
Joined
Nov 4, 2019
Messages
234 (0.13/day)
I don't really see how it's that much of a stretch at this point. This A14X is Apple's latest and greatest on a 5nm node, getting to scale up in power, versus a tired, Skylake-based chip on a very old 14nm node and crammed down to its thermal minimum. Now if the A14X matched the 9900K, that would be much harder to believe. I don't think Apple would make this move unless they had something solid lined up. I guess it won't be too much longer before we find out, and benchmarks beyond Geekbench will be available on ARM macOS.

Yeah, the anti-ARM people are basically like "it isn't fast, because I say it isn't"... I'm done arguing; wait for macOS on ARM and they'll see. They'll probably find a way to ignore the 100 other ways it is fast and focus on their hobby horse of dissing a benchmark or two, even though they are perfectly able to get an iPad Pro and see it do all sorts of tasks at high speed. My PC hasn't gotten faster in almost 5 years now... basically I don't use more than 8 threads for the most part, so the 6700K is about the same as my 10900 computer (you can watch a lot of ~30 percent utilization with a 10900). I'm happy to see Apple actually speed things up.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk


Yeah, the anti-ARM people are basically like "it isn't fast, because I say it isn't"... I'm done arguing; wait for macOS on ARM and they'll see. They'll probably find a way to ignore the 100 other ways it is fast and focus on their hobby horse of dissing a benchmark or two, even though they are perfectly able to get an iPad Pro and see it do all sorts of tasks at high speed. My PC hasn't gotten faster in almost 5 years now... basically I don't use more than 8 threads for the most part, so the 6700K is about the same as my 10900 computer (you can watch a lot of ~30 percent utilization with a 10900). I'm happy to see Apple actually speed things up.


That's daft; point to someone other than you that's said ARM is not fast.


That's exactly the point: some of us can use an iPad Pro too. Many have, and put it back down, hence the opinions.

You pointed to two short-burst benchmarks as your hidden, unseen-by-plebs truth?! One of which was so obscure you think you got one over on me; yeah, 98% of nerds haven't run that bench, ffs. Yeah, guy, you told me. You think I couldn't show a few benches with PCs beating iPhone chips?

I wouldn't waste my time. I have said before that I have no doubt these have their place and will sell well, but they are not taking the performance crown, and real work will stay on x86 or PowerPC.
 
Joined
Apr 24, 2020
Messages
2,709 (1.62/day)
Compilers, even with the same flags, can generate differently optimized code for each ISA. CPUs also have many quirks: for instance, while 64-bit floating-point multiplies might be faster on some architectures, divides might be painfully slow compared to others (I give this example because divides are notoriously slow). That benchmark looks to be intensive in terms of integer arithmetic; it's well known that Apple has a wide integer unit, and the cache advantage has been talked about to death already. There is nothing impressive about that; you could run some vector linear algebra with 10^4 x 10^4 matrices and I am sure the Intel CPU would be faster by quite a margin.

I agree with everything you said above.

It's a waste of time to look at isolated problems, because that's not what is found in the real world. We don't solve SMT, we don't run linear algebra all the time; most user software is mixed in terms of workloads and doesn't follow the regular patterns that make caches so effective.

I disagree with your conclusion however.

A programmer working on FEA (e.g., simulated car crashes), weather modeling, or neural networks will constantly run large matrix-multiplication problems, over and over again, for days or months. In these cases, GPUs and wide SIMD (like 512-bit AVX on Intel, or the A64FX) will be a huge advantage. If GPUs are a major player, you still need a CPU with high I/O (i.e., EPYC or POWER9 / OpenCAPI) to feed the GPUs fast enough.

A programmer working on CPU design will constantly run verification / RTL proofs, which are coded very similarly to Z3 and other automated solvers. (And unlike matrix multiplication, Z3 and other automated-logic code is highly divergent and irregular. It's very difficult to write multithreaded versions and load-balance the work between multiple cores. There's a lot of effort in this area, but from my understanding, CPUs are still > GPUs in this field.) Strangely enough, the A12 is one of the best chips here, despite being a tiny 5W processor.

A programmer working on web servers will run RAM-constrained benchmarks, like Redis or PostgreSQL. (And thus POWER9 / POWER10 will probably be the best chip: big L3 cache and huge RAM bandwidth.)
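(For the first category, this is the shape of kernel meant by "large matrix multiplication": a toy 512x512 single-precision matmul, a sketch of mine rather than anything a real FEA or NN code would ship, since those call a tuned BLAS. The unit-stride inner loop is exactly what auto-vectorizers turn into AVX or NEON code, so SIMD width and cache behavior decide its speed.)

[CODE]
#include <stdio.h>

#define N 512

/* Static arrays are zero-initialized, so c starts as the zero matrix. */
static float a[N][N], b[N][N], c[N][N];

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { a[i][j] = 1.0f; b[i][j] = 2.0f; }

    /* i-k-j order keeps the inner loop unit-stride and vectorizable */
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++) {
            float aik = a[i][k];
            for (int j = 0; j < N; j++)
                c[i][j] += aik * b[k][j];
        }

    printf("c[0][0] = %f\n", c[0][0]);  /* keep the result live */
    return 0;
}
[/CODE]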

--------

We have many computers to choose from. We should pick the computer that best matches our personal needs. Furthermore, looking at specific problems (like Z3 in this case), gives us an idea of why the Apple A12 performs the way it does. Clearly the large 128kB L1 cache plays to the A12's advantage in Z3.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
I agree with everything you said above.



I disagree with your conclusion however.

A programmer working on FEA (e.g., simulated car crashes), weather modeling, or neural networks will constantly run large matrix-multiplication problems, over and over again, for days or months. In these cases, GPUs and wide SIMD (like 512-bit AVX on Intel, or the A64FX) will be a huge advantage. If GPUs are a major player, you still need a CPU with high I/O (i.e., EPYC or POWER9 / OpenCAPI) to feed the GPUs fast enough.

A programmer working on CPU design will constantly run verification / RTL proofs, which are coded very similarly to Z3 and other automated solvers. (And unlike matrix multiplication, Z3 and other automated-logic code is highly divergent and irregular. It's very difficult to write multithreaded versions and load-balance the work between multiple cores. There's a lot of effort in this area, but from my understanding, CPUs are still > GPUs in this field.) Strangely enough, the A12 is one of the best chips here, despite being a tiny 5W processor.

A programmer working on web servers will run RAM-constrained benchmarks, like Redis or PostgreSQL. (And thus POWER9 / POWER10 will probably be the best chip: big L3 cache and huge RAM bandwidth.)

--------

We have many computers to choose from. We should pick the computer that best matches our personal needs. Furthermore, looking at specific problems (like Z3 in this case), gives us an idea of why the Apple A12 performs the way it does. Clearly the large 128kB L1 cache plays to the A12's advantage in Z3.
So of all those examples, the mighty A12 can almost keep up with a two-year-old Intel quad in one, while any of the other brands of CPU you mentioned do them all pretty well (to say the least, in some cases), and because of this 20 seconds of greatness, this proves Apple are nearly there, beating Intel...

I don't see it.

Damn the irony

"We have many computers to choose from. We should pick the computer that best matches our personal needs. "

Phillips head = Phillips screwy.

How many devs are on this? What proportion of computing-device users do they make up, 0.00021%? Is it mostly just a niche?
 
Joined
Apr 24, 2020
Messages
2,709 (1.62/day)
How many devs are on this? What proportion of computing-device users do they make up, 0.00021%? Is it mostly just a niche?

I mean, if we're talking about applicability to the largest audience, Geekbench is testing HTML5 DOM traversals and JavaScript code (stuff that SPECint, Dhrystone, and other benchmarks fail to test for).

Wanna go back to discussing Geekbench's JavaScript / web tests? That's surely the highest proportion of users. In fact, everyone browsing this forum is probably performing AES encryption/decryption (for HTTPS) and HTML5 DOM rendering. Please explain to me how such an AES-decryption + HTML5 DOM test is unreasonable or inaccurate.
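(On the AES point: modern x86 and ARMv8 cores both do AES rounds in hardware, which is why it's a fair consumer test rather than a gift to either side. Here's a minimal sketch of mine using the x86 AES-NI intrinsics, assuming an x86-64 compiler with -maes; it's a dependent chain of raw rounds for illustration, not a proper AES-128, which needs ten rounds plus a key schedule. ARMv8 has the equivalent AESE/AESMC instructions.)

[CODE]
#include <stdio.h>
#include <wmmintrin.h>  /* AES-NI intrinsics; compile with -maes */

int main(void) {
    __m128i block = _mm_set1_epi32(0x01234567);
    __m128i key   = _mm_set1_epi32((int)0x89abcdef);

    /* Dependent chain of AES rounds: one instruction each on AES-NI hardware */
    for (long i = 0; i < 100000000L; i++)
        block = _mm_aesenc_si128(block, key);

    unsigned int out[4];
    _mm_storeu_si128((__m128i *)out, block);
    printf("%08x\n", out[0]);  /* print to keep the chain alive */
    return 0;
}
[/CODE]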

the mighty A12 can almost keep up with a two-year-old Intel quad in one,

Your hyperbole is misleading. The A12 was 11% faster than the Intel chip in the said Z3 test. I'm discussing a situation (Z3) where the Apple chip, at 5W and in the iPhone form factor, is outright beating a full-sized 91W desktop processor.

Yes, it's a comparison between apples and peanuts (and ironically, Apple is the much smaller peanut in this analogy). But Apple is soon to launch a laptop-class A14 chip, and some of us are trying to read the tea leaves for what that means. The A14 is almost assuredly built on top of the same platform as the A12. What the future A14 laptops will be good or bad at will be an important question as they hit the market.
 
Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
We have many computers to choose from. We should pick the computer that best matches our personal needs.

Some computers are better than others at specific tasks, but some provide very good general performance across the board. I trust an Intel or AMD processor to be good enough at everything; I don't, however, place the same trust in an Apple mobile chip, because I know it won't be. This is why I said it's a waste of time to look at these very specific scenarios.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
I mean, if we're talking about applicability to the largest audience, Geekbench is testing HTML5 DOM traversals and JavaScript code (stuff that SPECint, Dhrystone, and other benchmarks fail to test for).

Wanna go back to discussing Geekbench's JavaScript / web tests? That's surely the highest proportion of users. In fact, everyone browsing this forum is probably performing AES encryption/decryption (for HTTPS) and HTML5 DOM rendering. Please explain to me how such an AES-decryption + HTML5 DOM test is unreasonable or inaccurate.



Your hyperbole is misleading. The A12 was 11% faster than the Intel chip in the said Z3 test. I'm discussing a situation (Z3) where the Apple chip, at 5W and in the iPhone form factor, is outright beating a full-sized 91W desktop processor.

Yes, it's a comparison between apples and peanuts (and ironically, Apple is the much smaller peanut in this analogy). But Apple is soon to launch a laptop-class A14 chip, and some of us are trying to read the tea leaves for what that means. The A14 is almost assuredly built on top of the same platform as the A12. What the future A14 laptops will be good or bad at will be an important question as they hit the market.
Your hyperbole is pure bullshit; 1 core being used on even the 7700K isn't 75 watts.
People use such to browse the web, yes, we agree there; a light use case most do on their phones.

You have failed to address the point that there are three newer generations of Intel chips, with an architectural change on the way.

And none of you have explained how they can stay in the latest low-power mode yet somehow mythically clock as high as a high-power device.

All while arguing against your test being lightweight and irrelevant, which you just agreed its use case typically is.

Look at what silicon they will make it on and the price they want to hit, and you get a CX8 8-core 4/4 hybrid.

Great.
 
Joined
Apr 24, 2020
Messages
2,709 (1.62/day)
Your hyperbole is pure bullshit

Couldn't have said it better myself.

My facts are facts. Your "discussion" is bullshit hyperbole written as a spewing pile of one-sentence-long paragraphs.

--------

Since you're clearly unable to take down my side of the discussion, I'll "attack myself" on your behalf.

SIMD units are incredibly important to modern consumer workloads. From Photoshop to video editing to audio encoding, multimedia is very commonly consumed AND produced, even by the most casual of users. With only three FP/vector pipelines of 128-bit width, the Apple A12 (and future chips) will simply be handicapped in this entire class of important benchmarks. Even worse: these 128-bit SIMD units are hampered by longer latencies (2 clocks) compared to Intel's 512-bit-wide, 1-clock-latency units (or AMD Zen 2's 256-bit-wide, 1-clock-latency units).

Future users expecting strong multimedia performance, be it Photoshop filters, messing with electronic drums or other audio processing, or simple video editing, will simply find these chips unable to compete with current-generation x86 systems, let alone next-gen Zen 3 or Ice Lake.
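(What the width difference means mechanically, as a sketch of mine assuming x86-64 with AVX and -mavx: one 256-bit instruction adds eight floats at once, where a 128-bit unit needs two instructions for the same work; in a dependent chain, per-instruction latency then sets the pace, which is the 2-clock-versus-1-clock point above.)

[CODE]
#include <stdio.h>
#include <immintrin.h>  /* AVX intrinsics; compile with -mavx */

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float r[8];

    __m256 va = _mm256_loadu_ps(a);
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vr = _mm256_add_ps(va, vb);  /* eight lanes in one instruction */
    _mm256_storeu_ps(r, vr);

    for (int i = 0; i < 8; i++) printf("%g ", r[i]);
    printf("\n");
    return 0;
}
[/CODE]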
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
Couldn't have said it better myself.

My facts are facts. Your "discussion" is bullshit hyperbole written as a spewing pile of one-sentence-long paragraphs.

--------

Since you're clearly unable to take down my side of the discussion, I'll "attack myself" on your behalf.

SIMD units are incredibly important to modern consumer workloads. From Photoshop to video editing to audio encoding, multimedia is very commonly consumed AND produced, even by the most casual of users. With only three FP/vector pipelines of 128-bit width, the Apple A12 (and future chips) will simply be handicapped in this entire class of important benchmarks. Even worse: these 128-bit SIMD units are hampered by longer latencies (2 clocks) compared to Intel's 512-bit-wide, 1-clock-latency units (or AMD Zen 2's 256-bit-wide, 1-clock-latency units).

Future users expecting strong multimedia performance, be it Photoshop filters, messing with electronic drums or other audio processing, or simple video editing, will simply find these chips unable to compete with current-generation x86 systems, let alone next-gen Zen 3 or Ice Lake.
I rebutted many of your points; you're blind to it. 7700K/10700K performance increases, for one; next generation, for two.

The fact is that the main use case you posited for your benchmark's viability was web-browser action!

Getting technical, Intel have Foveros and FPGA tech; as soon as they have a desktop CPU with an FPGA and HBM, it could be game over as far as benches go, against anything on anything, enabled by oneAPI.
PowerPC are simply in another league.
AMD will iterate core counts way beyond Apple's horizon and incorporate better fabrics and IP.

And despite it all, most will still just surf on their phones.

And fewer, not more, people will do real work on iOS.

I'm getting off this roundabout; our opinions differ. Let's see where five years gets us.

You don't answer any questions, just e-peen your leet dev skills like we should give a shit.
We shouldn't; it's irrelevant to me what they/you like to use. It only matters to me and 99% of the public what we want to do, and our opinions differ on the scale of workload variability here, it seems.

Bye.
 
Joined
Mar 16, 2017
Messages
2,100 (0.75/day)
Location
Tanagra
Couldn't have said it better myself.

My facts are facts. Your "discussion" is bullshit hyperbole written as a spewing pile of one-sentence-long paragraphs.

--------

Since you're clearly unable to take down my side of the discussion, I'll "attack myself" on your behalf.

SIMD units are incredibly important to modern consumer workloads. From Photoshop to video editing to audio encoding, multimedia is very commonly consumed AND produced, even by the most casual of users. With only three FP/vector pipelines of 128-bit width, the Apple A12 (and future chips) will simply be handicapped in this entire class of important benchmarks. Even worse: these 128-bit SIMD units are hampered by longer latencies (2 clocks) compared to Intel's 512-bit-wide, 1-clock-latency units (or AMD Zen 2's 256-bit-wide, 1-clock-latency units).

Future users expecting strong multimedia performance, be it Photoshop filters, messing with electronic drums or other audio processing, or simple video editing, will simply find these chips unable to compete with current-generation x86 systems, let alone next-gen Zen 3 or Ice Lake.
Ironically, I'm a hobbyist photographer who does all his edits on an iPad Pro. It's just as fast as anything I've used on a desktop; sliders apply pretty much in real time. That's even with the occasional 80MP RAW. I've also made and exported movies in iMovie on the iPad, and it was seamless and fast on export. Would a pro want to do this? Probably not, but that might be more about how iOS isn't as ideal as a desktop OS for repetitive tasks. So I'm curious to see how Apple's SoCs will handle even larger RAW files, imports of hundreds of images, and batch updates. I guess my point is that the "feel" today isn't so far off, but I don't know how well that will translate from iOS to macOS, where true multitasking is an everyday expectation, versus the limited experience that it is on iOS today.
 
Joined
Aug 15, 2017
Messages
18 (0.01/day)
The A14X will run Shadow of the Tomb Raider and any other PS4/Xbox game. Not surprising, since it will match a GTX 1060 easily enough but only need 10-15W. The Switch is circa-2014 hardware (a Galaxy Note 4 CPU plus half a GTX 750), so imagine a Switch that is 6 years more advanced and there you have it.

I am curious how you came to that conclusion.
I actually looked up GFXBench, which is cross-platform, fairly well regarded, and has no known bias to any platform.


I am comparing the A12Z (from the 4th-gen iPad Pro, faster than the A12X) to the 1060.

For Aztec High offscreen, the most demanding test, the A12Z recorded 133.8 versus 291.1 for the GTX 1060.

So my question to you is: are you expecting the A14X to more than double (291.1 / 133.8 ≈ 2.18x) in graphics performance?
 