
Apple A17 Bionic SoC Performance Targets Could be Lowered

It is very much a valid excuse. Apple is working with an entirely different architecture, while Intel and AMD still use the x86 architecture. And yes, Intel has increased their performance, but at what cost? Their latest CPUs are pretty close to requiring liquid nitrogen to be cooled effectively without exceeding their thermal limits, and they draw an insane amount of power. Apple is working within the ARM architecture, an architecture designed with low power draw and low thermals in mind.

You don't seem to be aware that Intel's latest laptop processors have better energy efficiency than the M2. You must not watch HWUB, or be aware that processors like the 13400 are actually pretty efficient. That's Intel, mind you, which is a more favorable comparison for Apple than AMD's products would be.

Sure, if you cherry-pick the worst-efficiency Intel processor, it might look bad. Then again, that's cherry-picking, so any argument with that as its basis is easily dismissed.
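Perf-per-watt is simple enough to sanity-check yourself; here's a minimal Python sketch (all scores and package-power figures are hypothetical placeholders, not measurements) showing how the sample you pick drives the conclusion:

```python
# Minimal perf-per-watt comparison. All scores and package-power
# figures are hypothetical placeholders, not measured data.
chips = {
    "mid-range part":        {"score": 12_000, "watts": 65},
    "flagship at stock":     {"score": 24_000, "watts": 250},
    "flagship power-capped": {"score": 21_000, "watts": 125},
}

for name, d in chips.items():
    print(f"{name}: {d['score'] / d['watts']:.0f} points/W")

# Judging the whole lineup by the stock flagship alone (the
# cherry-pick) hides that the same silicon, power-capped, roughly
# doubles its efficiency.
```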

And yes, x86 is a much older architecture that has to maintain compatibility with decades of applications. That requires ingenuity. You'd expect x86 gains per generation to be lower, not higher, than those of a brand-new uArch that dropped backwards compat in order to squeeze out maximum performance and minimum power consumption.

You have to be very inexperienced in the technology space not to be impressed with the work Apple has done to provide significant performance within the power-efficiency boundaries of the ARM architecture. And the M2 is based on the same process node as the M1 line of SoCs; obviously there wasn't going to be a significant performance gain from that.

I did say it was impressive, as you even quoted in your first reply to me. It was the M2 that I said was disappointing.

There's a lesson to be learned here: those who fling insults often find the consequences ironically beset upon themselves.
 

hs4

The A15 outperformed x86 laptop CPUs up to Intel's 8th gen/Zen+ at the time of its introduction. The M1 was dominant, achieving twice the performance of Intel's 10th gen at half the power consumption.
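Worth spelling out: those two ratios compound, so the claim implies roughly a 4x perf/W advantage.

```python
# "Twice the performance at half the power" compounds in perf/W.
perf_ratio  = 2.0   # M1 vs. Intel 10th gen, as claimed above
power_ratio = 0.5
print(f"perf/W advantage: {perf_ratio / power_ratio:.0f}x")  # 4x
```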

Conversely, Apple is not accustomed to making large CPUs. The M1 Ultra is not suitable for video-editing workstations due to its lack of PCIe expandability. Even Final Cut Pro X users complained about the Mac Studio and were seriously considering switching to DaVinci Resolve.

I built the following x86 system (for Linux) for memory-hungry applications, and the cost difference is stark compared to a Mac Studio with similar performance. Even if a GPU were needed, it would still be around $3,000 with an RTX 4070 Ti. In terms of power efficiency, the 13900K and 7950X are also practical enough to be comparable to the M1 Ultra, even when reduced to a PL2 that is equivalent to Eco mode. In fact, my system runs quietly even with air cooling.

$2000 ~ 13900K - 192 GB - 2TB SSD
$6200 ~ M1 Ultra - 128 GB - 2TB SSD
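A quick back-of-the-envelope on the two configurations above (prices as quoted; "similar performance" taken as stated):

```python
# Back-of-the-envelope comparison of the two builds quoted above.
x86_price, x86_ram_gb = 2000, 192
mac_price, mac_ram_gb = 6200, 128

print(f"price ratio: {mac_price / x86_price:.1f}x")               # ~3.1x
print(f"13900K build: ${x86_price / x86_ram_gb:.1f} per GB RAM")  # ~$10.4
print(f"M1 Ultra:     ${mac_price / mac_ram_gb:.1f} per GB RAM")  # ~$48.4
```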

If the yield of N3B is low, M3 and M3 Max may come out later than expected.

I think the x86 camp will also strike back with the Lunar Lake / Strix Point generation. Even in a relatively modest scenario, I think Arrow Lake/Lunar Lake will outperform the M3 and achieve better power efficiency, and in an ambitious scenario, I think Lunar Lake 2+8 or 1+4 can compete with the A17 in both performance and efficiency, with a planned foray into smartphones.
 
If you are fine paying thousands for a 3 year life cycle device
~$400-500 every ~7 years, or something like that. I've lost track; I'm still running the 1st gen iPhone SE.

Buying an SE and using it until two generations later is the best deal I know of for a smartphone, and buying from a company whose business model isn't data mining is nice.
 
Keep telling yourself that ;)
How about it’s not their primary source of revenue. :)

Apple most certainly isn't saintly, but when the alternative is literally Google, what more needs to be said?
 
Because N3B is described as one of the worst node updates in history. I can't believe Cook would be moronic enough to use the N3B node. Nearly all companies are waiting for N3E. Given the much higher price for N3B than N4P and the paltry uplifts in transistor density and power reduction, they should either wait or offer the iPhone 15 on N4P.

Go to SemiAnalysis for a full review of TSMC's 3 nm node and how poorly it rates.

That's because Apple gets a massive discount on N3B; everyone else will be paying full-price on day 1 for N3E. There's a reason Apple likes being first on TSMC's nodes.

Apple will pay TSMC for known good die rather than standard wafer prices, at least for the first three to four quarters of the N3 ramp as yields climb to around 70%, Brett Simpson, senior analyst at Arete Research, said in a report provided to EE Times.
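For context on why known-good-die pricing matters during a low-yield ramp, here's a rough cost sketch. The wafer price, die area, and yields are illustrative assumptions (not disclosed N3B figures), and the dies-per-wafer formula is the common edge-loss approximation:

```python
import math

# Per-die cost under per-wafer pricing at different yields.
# Wafer price, die area, and yields are illustrative assumptions only.
wafer_price = 20_000.0  # $ per wafer (assumed)
die_area    = 100.0     # mm^2 (assumed A-series-class SoC)
diameter    = 300.0     # mm wafer

# Common gross-dies approximation: usable area minus an edge-loss term.
gross = (math.pi * (diameter / 2) ** 2 / die_area
         - math.pi * diameter / math.sqrt(2 * die_area))

for y in (0.55, 0.70, 0.90):
    print(f"yield {y:.0%}: ${wafer_price / (gross * y):,.0f} per good die")

# Under known-good-die pricing, Apple pays only for working dies, so
# TSMC (not Apple) absorbs the cost of the early low-yield wafers.
```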

They DO NOT make general purpose CPUs



None of this uses "accelerators". Y'all attach way too much to the Apple brand and your personal reaction to it.

//

No, not really. They've had some success in the field, but they need to prove they can consistently innovate. The M1 is really the only impressive design I've seen from Apple, but that is not nearly enough. The M2 was pretty disappointing.

It's as if you missed all the charts in the article. Apple makes plenty of errors, but y'all critique them on the wildest things.



A consistent CPU perf/W cadence (via IPC or clocks) is one of the few undeniably hard-to-reproduce achievements of Apple's CPU architects.

//

@Redwoodz: you think Apple doesn't design CPUs? And Arm deserves the credit? Just completely backwards. Apple uses Arm's ARMv8 ISA, just like AMD uses Intel's x86 ISA. Apple and AMD, even while using someone else's ISA, actually do develop 100% in-house CPU microarchitectures.

//

Apple has just barely above-average nT perf/W, but HWUB found Apple has dominant 1T perf/W. Intel & AMD simply have narrower designs; they'll never consume <10 W at their sky-high 5+ GHz clocks.

In IPC, Apple is unmatched. The big problem is that Apple has just had basically zero progress for two generations now (A15, A16). Some might argue Apple always releases new CPUs, but so does Intel. AMD might take an 18 to 24 month break, but Intel won't.
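The wide-and-slow vs. narrow-and-fast trade-off can be made concrete with a toy model; every IPC, clock, and power figure below is an illustrative assumption, not a measurement:

```python
# Toy model of the wide/low-clock vs. narrow/high-clock trade-off.
# All IPC, clock, and power figures are illustrative assumptions.
# High clocks need high voltage, and dynamic power scales ~ V^2 * f,
# so power grows super-linearly with frequency.
designs = {
    "wide core, low clock":    {"ipc": 9.0, "ghz": 3.2, "watts": 6.0},
    "narrow core, high clock": {"ipc": 5.5, "ghz": 5.5, "watts": 30.0},
}

for name, d in designs.items():
    perf = d["ipc"] * d["ghz"]  # relative 1T throughput
    print(f"{name}: perf={perf:.1f}, perf/W={perf / d['watts']:.2f}")

# Comparable peak 1T performance, very different perf/W, which is why
# a 5+ GHz design realistically can't live under a 10 W budget.
```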

[attached charts: 1T performance and perf/W comparisons]

//

Notebookcheck found the same: Apple's 1T perf/W is still dominant, though the M2 is somewhat less efficient than the M1. But it is still in a different zip code from wherever AMD & Intel are draining their power.

[attached chart: Notebookcheck 1T perf/W comparison]
//

M2 was bitten by the same CPU ossification as A15; basically zero IPC uplift. However, M2 has a ~15% GPU perf uplift with an ~11% GPU perf/W uplift. The CPU side is disappointing in a vacuum, but I'm more disappointed that AMD & Intel can't replicate this yet in their latest designs (still waiting on Phoenix reviews…AMD, April is basically done, hurry up).
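As a quick worked check, those two GPU numbers together pin down the implied power increase:

```python
# If GPU perf rises ~15% while GPU perf/W rises ~11%, the implied
# power increase is (1.15 / 1.11) - 1, i.e. only a few percent.
perf_uplift = 1.15
ppw_uplift  = 1.11
print(f"implied GPU power increase: {perf_uplift / ppw_uplift - 1:.1%}")  # ~3.6%
```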

[attached charts: M2 GPU performance and perf/W uplifts]
 
"terms of pure processing power, and only lags slightly behind with its GPU's capabilities."
That is simply not true. In terms of CPU power the A16 Bionic is about 20 percent faster.
In terms of GPU power the A16 Bionic is about 20 percent slower.
In terms of NPU power we can't really know, because of the lame tradition, which Apple pioneered (pun intended), of hiding theoretical performance numbers. But from what is available on the web now, it seems the A16 Bionic also lags behind the Snapdragon 8 Gen 2 in this regard.

The comparison between the A16 and the Snapdragon 8 Gen 2 is the first fair comparison between Android chips and Apple silicon in a long time, because they both use the same lithography. Overall they are both very capable chips at the same level of computing power, and the differences in CPU or GPU are only design choices.

There is nothing particularly "magical" in the Apple or Qualcomm "architecture". Engineers from both teams know their job and will perform about equally given the same substrate.

Now, regarding the comparison between the Apple A16 (and the upcoming A17) and Intel desktop processors, do we need to stress that it is invalid? It can't get more invalid than this. The latest Intel chips are made on a much inferior lithography, and they compensate for that with higher power consumption. Intel's current lithography is, for all intents and purposes, about equivalent to TSMC's 7 nm. It is not even close to TSMC's 5 nm, and it is light-years behind even a bad implementation of a 3 nm node by TSMC.

To be fair, Intel's lithography is homemade, while Apple and Qualcomm just buy TSMC's engineering prowess because they can't build anything themselves. And to be even fairer, at the core level no one would be able to boast and shout and gesture if it were not for ASML, and maybe some other companies I don't know of, that provide the lithography machines. If it were not for the expertise of the Dutch, who keep the flag of technological progress flying, you would not get any cool new phones, Apple and Android fanboys.
 
The comparison between the A16 and the Snapdragon 8 Gen 2 is the first fair comparison between Android chips and Apple silicon in a long time, because they both use the same lithography.

You mean, if you ignore 3 of the past 5 years? :roll: Qualcomm has had virtual node parity at the same foundry with Apple for many recent cycles. The A16 & Snapdragon 8 Gen 2 being on the same node families is neither special nor unexpected.

You could have made these comparisons for years now.

Snapdragon 8+ Gen1 = TSMC N5 class (May 2022)
Apple A15 = TSMC N5 class (Sept 2021)

or

Snapdragon 865 = TSMC N7 class (Dec 2019)
Apple A13 = TSMC N7 class (Sept 2019)

or

Snapdragon 855 = TSMC N7 class (Dec 2018)
Apple A12 = TSMC N7 class (Sept 2018)

//

There is nothing particularly "magical" in the Apple or Qualcomm "architecture". Engineers from both teams know their job and will perform about equally given the same substrate.

To be clear, Qualcomm relies purely on Arm's microarchitectures for all its current releases. NUVIA's IP in mobile is years away. Qualcomm's engineers haven't shipped a fully-custom mobile CPU microarchitecture in almost a decade (IIRC, the Snapdragon 820 in 2015). This should read, "nothing particularly magical about Apple's or Arm's microarchitectures."

//

Now, regarding the comparison between the Apple A16 (and the upcoming A17) and Intel desktop processors, do we need to stress that it is invalid? It can't get more invalid than this.

Ideally, we should compare Apple's M-series vs Intel's desktop CPUs. But, sure, as A & M use the same uArch, people extrapolate.

//

Intel's node exclusivity is by Intel's design and choice. Intel has used TSMC for decades, but Intel prefers internal foundries for IA (aka CPU compute) cores. It's like saying "this comparison is invalid because this car uses a custom in-house engine and that one was co-developed among three OEMs." Well, yeah: the first manufacturer picked custom.

Intel could have chosen TSMC years earlier, too; Intel just never made the time, nor had the humility, to port. Virtually every other CPU firm in the world has used external fabs (Samsung being a primary exception). Intel doesn't get a handicap because it made a bad decision. Intel's engineers & corporate loved their pride too much, so they will live with their decisions now.

In the end, "Fantasy Nodes" (e.g., "If only AMD were on TSMC N5! If only Samsung were on TSMC N3! If only Intel were on TSMC N7 years ago!") is a fun game, but uArch + node are chosen years in advance.

//

But the comparisons need not include power. We can absolutely compare Apple vs Arm vs AMD vs Intel on IPC + peak performance + clock scaling + RAM bandwidth + cache latencies in thousands of applications. In IPC, Apple & Arm are significantly ahead of their x86 counterparts, for example.
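IPC, at least, is directly measurable. On Linux, `perf stat -e instructions,cycles <app>` reports the two counters; the arithmetic is trivial (the counter values below are made-up placeholders):

```python
# IPC and average clock from hardware counters, e.g. as reported by
# `perf stat -e instructions,cycles` on Linux.
# The counter values below are made-up placeholders.
instructions = 42_000_000_000
cycles       = 12_000_000_000
elapsed_s    = 3.0

ipc = instructions / cycles     # work done per clock
ghz = cycles / elapsed_s / 1e9  # average achieved clock
print(f"IPC: {ipc:.2f}")
print(f"avg clock: {ghz:.2f} GHz")
print(f"throughput: {ipc * ghz:.1f} G instructions/s")
```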
 
Thanks for responding. Aiming for clarity, I will respond only once more, when I have enough time to do it properly. For now, I have only skimmed your response.

Ok, here I am.
To be clear, I absolutely recognise the importance of architecture. But I also recognise the effects of lithography, supporting RAM, storage, software optimisation, and so on. When I see too much emphasis on one of them, it puts me off. Depending on the time period, you will see the company that is ahead in computational power touting the advantages of its architecture. It was once Intel; it is now Apple. This is only for hype and marketing purposes.

I could take the examples you mentioned one by one, but this would derail the discussion from its intended goal.
For example, regarding Qualcomm, it is clear that it managed to deliver something comparable to Apple not immediately, but after developing two architectures on TSMC's 5 nm. But this is also true for Apple, Intel, and AMD. Often the best advantage of a new node is seen after a couple of years, not in the first attempt. So Qualcomm lagging behind Apple with the Snapdragon 8+ Gen 1 is not an argument.

Again, this is not my main point. I want to bring up an example comparing an Apple chip and an Intel chip, striving to compare chips operating within comparable power envelopes, on comparable lithographies, and to see whether architecture is the beginning and end of importance (it is not). But this time we compare an Apple product with an Intel one.
So let's compare the Intel Core i7-1265U and the A12Z Bionic.
Lithographies at about the same density and performance: TSMC's 7 nm against Intel's 10 nm.
Power consumption rated at 15 W for both.

Geekbench 5 scores for the A12Z: about 1100 single-core and 4500 multi-core.
Geekbench 5 scores for the Intel Core i7-1265U: about 1800 single-core and 6500 multi-core.
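Taking those scores at face value along with the 15 W rating for both parts, the naive perf/W arithmetic looks like this (a sketch only; as noted just below, real boost behavior can make the rated power misleading):

```python
# Naive perf/W from the quoted Geekbench 5 scores at the rated 15 W.
# Caveat: actual boost power (PL2) can far exceed the rating, so the
# result is only as trustworthy as the power assumption.
rated_watts = 15
scores = {
    "A12Z Bionic":   (1100, 4500),  # (single-core, multi-core)
    "Core i7-1265U": (1800, 6500),
}

for name, (st, mt) in scores.items():
    print(f"{name}: {st / rated_watts:.0f} ST, {mt / rated_watts:.0f} MT points/W")
```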

So, at the same power envelope and on roughly equivalent lithography, Apple lags behind by a considerable margin. Seemingly... You get the same results for their integrated GPUs.

So, is Intel's architecture so much better than Apple's? Probably not, most likely because the Intel part enjoys more liberal power-draw intervals. We will never accurately know, because companies have lately become secretive, not disclosing architectural details like mother Apple, which showed the way. So Intel is actually able to build power-efficient chips at the same level of performance, just as Apple is able to build power-hungry monsters like the M1 Max.
Given the right materials and a particular strategy (computation at low or high power), all chip designers can produce similar results.

But companies rarely build chips from top to bottom. Lately they are less capable than in the past and resort to foundries to get their silicon baked. Not that they were ever able to build the furnace, but at least they used to buy one and bake their chips themselves. Now they either don't know how, or aren't willing, to bake them themselves, with the possible exception of Intel.
 