
Editorial Apple's A12X Shows Us How The ARM MacBook Is Closer Than Ever

Joined
Sep 7, 2017
Messages
3,244 (1.24/day)
System Name Grunt
Processor Ryzen 5800x
Motherboard Gigabyte x570 Gaming X
Cooling Noctua NH-U12A
Memory Corsair LPX 3600 4x8GB
Video Card(s) Gigabyte 6800 XT (reference)
Storage Samsung 980 Pro 2TB
Display(s) Samsung CFG70, Samsung NU8000 TV
Case Corsair C70
Power Supply Corsair HX750
Software Win 10 Pro
Reminds me of the times when Macoids were running around with "fastest eva!" claims with IBM's chips inside, back in the early 2000s.

Yeah, right, go for it, Apple.

They were great chips, until the last run. Sort of on par with Intel (604e = Pentium Pro, G3 = PII, etc), but Intel went crazy in the megahertz wars. Maybe IBM would have figured it out if given enough time, but Apple was their main co-designer/customer and they jumped ship. I mean, IBM's Power chips are better than Xeon, so I don't see why PowerPC wouldn't have evolved as well.

The real failure of PowerPC was that not many adopted it (and Apple probably helped kill it off anyway, when they destroyed the Mac clones). That was its real intent - for IBM to own the PC market again. They wanted NT, Macs, and anything else running on it.
 
Joined
Dec 10, 2015
Messages
545 (0.17/day)
Location
Here
System Name Skypas
Processor Intel Core i7-6700
Motherboard Asus H170 Pro Gaming
Cooling Cooler Master Hyper 212X Turbo
Memory Corsair Vengeance LPX 16GB
Video Card(s) MSI GTX 1060 Gaming X 6GB
Storage Corsair Neutron GTX 120GB + WD Blue 1TB
Display(s) LG 22EA63V
Case Corsair Carbide 400Q
Power Supply Seasonic SS-460FL2 w/ Deepcool XFan 120
Mouse Logitech B100
Keyboard Corsair Vengeance K70
Software Windows 10 Pro (to be replaced by 2025)
Joined
Jun 10, 2014
Messages
2,978 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
How can a little ARM CPU do about 5000 in single-core Geekbench when a 5 GHz OC'd 9700K does about 6500?
To find out, we need to take a look at the source code and find out what it is actually benchmarking on various platforms.

I assume it benchmarks various simulated workloads, including things like compression/decompression, encryption, video decoding/encoding, image formats etc. If the benchmark decides to rely on just using the standard instruction set, then you get an impression of the pure performance. If on the other hand the benchmark uses various specific instructions to accelerate certain workloads, then the benchmark becomes a measurement of those specific algorithms, not generic performance.

The x86 CPU in a desktop is very good at generic workloads, and it has a few application-specific instructions too, but this is nothing compared to many ARM implementations. The CPUs in smartphones and tablets rely heavily on specific instructions to accelerate workloads. This of course gives good energy efficiency for the specific algorithms that are accelerated in software built to use them, but anything outside that will perform poorly. It should be obvious that there is a limited number of such accelerators that can be included on a chip, and support for new algorithms can't be added until they are developed. This means that such hardware becomes obsolete very quickly. But this of course fits very well with the marketing strategy of Apple and other smartphone/tablet makers: they can customize the chips to accelerate the features they want, and Apple controls their software too, which gives them an extra advantage, leading to new products every cycle that look much better at some new popular task.
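To make that concrete, here is a minimal sketch in C (not Geekbench's actual code; the function names are made up for the example): the same CRC-32C checksum can be computed with a generic bit-by-bit loop that exercises the ordinary integer pipeline, or with the dedicated SSE4.2 CRC32 instruction. A benchmark that dispatches to the hardware path is effectively measuring that one accelerator rather than general-purpose performance. The feature check uses the GCC/Clang __builtin_cpu_supports builtin.

```c
/* Minimal sketch (not Geekbench code): CRC-32C computed two ways.
 * The scalar loop measures generic integer throughput; the SSE4.2
 * path measures a single application-specific instruction.
 * Assumes GCC/Clang on x86-64; build with: gcc -O2 crc_demo.c
 * (add -msse4.2 if your toolchain needs it for the intrinsic). */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <nmmintrin.h>   /* _mm_crc32_u8 (SSE4.2) */

/* Generic fallback: bitwise CRC-32C (Castagnoli polynomial, reflected). */
static uint32_t crc32c_scalar(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
    }
    return ~crc;
}

/* Accelerated path: one dedicated instruction per byte. */
__attribute__((target("sse4.2")))
static uint32_t crc32c_hw(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++)
        crc = _mm_crc32_u8(crc, buf[i]);
    return ~crc;
}

int main(void)
{
    const uint8_t data[] = "the quick brown fox";
    size_t len = sizeof(data) - 1;

    /* Runtime dispatch: use the accelerator only if the CPU has it. */
    uint32_t crc = __builtin_cpu_supports("sse4.2")
                 ? crc32c_hw(data, len)
                 : crc32c_scalar(data, len);

    printf("crc32c = 0x%08X\n", (unsigned)crc);
    return 0;
}
```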
 
Joined
Jan 8, 2017
Messages
9,389 (3.29/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
does the PC crowd still think tablets are toys?

Yeah, we do. I am baffled every time I see Apple do a demo with something like Photoshop. Who the hell actually does serious work on their tablet on a daily basis? Actually, let me put it the other way around: who could afford to pay $1,200, or however expensive this iPad is, and not have a high-end laptop around for that?

Tablets are indeed primarily toys; I have never seen or heard of anyone using them outside of playing games and watching Netflix. And if you are going to tell me that there has to be someone who uses them for more than that, then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well. That still doesn't make it any less of a joke.
 
Joined
Dec 10, 2011
Messages
429 (0.09/day)
Well, I'm certainly not an Apple cultist by any stretch (I have only one Apple thing, an iPhone 7, because I was sick and tired of Android, and it's superb). Their pricing structure is abominable, and support is not much better assuming your stuff is at most 3 years old (big shout-out to Louis Rossmann 'The Educator' :toast:); otherwise support is a four-letter word - GTFO. I wish Apple started selling their stuff without warranties of any kind (except an initial, say, 30-day period for possible DOA). Charge half the price and stop pretending there is support.

On the other hand, I watched the iPad presentation and I find it interestingly tempting, but... it doesn't run macOS (where I can get the stuff I work with - Clip Studio/Corel Painter), only mobile iOS (stuff like Procreate is pathetic), and that's a deal breaker for me. If it were part of the macOS ecosystem I would jump on it in a jiffy. :snapfingers: It is vastly superior in every possible way to products like the by now totally archaic Wacom Mobile Studio Pro 16 (the 13 model is so lame I won't even say more).

If you've never painted outdoors, don't pretend you know everything. The WMSP16 is a great tool (essentially a Windows tablet PC) when you just want to take your art stuff and get away from the room and desk and cables - go and paint in the park or something. But here is the deal: because it is a full-blown PC, you can use desktop apps. The iPad is a weird thing. It would be great if it were a macOS version of the WMSP, but it is not. Sadly. :(
 
Joined
Jun 10, 2014
Messages
2,978 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Yeah, we do. I am baffled every time I see Apple do a demo with something like Photoshop. Who the hell actually does serious work on their tablet on a daily basis?
Yes, exactly.
And even in terms of ergonomics: a tablet has to lie flat or be in a stand, and touch-only input is imprecise and inefficient for most serious work.

I certainly see a use for tablets, but purely as "toys". One of the neatest things I've found with tablets is to use them for sheet music or for viewing photos.

I would like to see cheaper 12-15" tablets; the iPad Pros cost at least twice what I think they are worth. But a tablet is always going to be a supplement.

… then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well.
Oh, but there is: LGR - "Doom" on a Calculator! Ti-83 Plus Games Tutorial
:D
 
Joined
Apr 19, 2018
Messages
1,227 (0.51/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11
You forgot to mention that desktop-class chips (I assume we're still talking notebooks here) use at least a 15W average TDP, with a PL2 of generally 25W, & the ARM counterparts do well even with half of that, i.e. 7W or thereabouts.

I still don't get why everyone is so hung up on GB numbers - are there better cross-platform benchmarks around? Is Intel this infallible, or does the PC crowd still think tablets are toys? The same was said about Intel vs AMD before Zen, & we know how that turned out.

How about that Photoshop demo? Was that something a Chromebook from 2013 could pull off? Because that's the kind of performance you're insinuating. You are correct that Geekbench is not a totally platform- and architecture-agnostic benchmark, despite what its authors claim, but I think it's obvious that whatever the A12X's real performance is, it's damn close to Intel's, and it can't all be magic tricks and hocus-pocus.
 
Joined
Apr 12, 2013
Messages
7,477 (1.77/day)
How about that Photoshop demo? Was that something a Chromebook from 2013 could pull off? Because that's the kind of performance you're insinuating. You are correct that Geekbench is not a totally platform- and architecture-agnostic benchmark, despite what its authors claim, but I think it's obvious that whatever the A12X's real performance is, it's damn close to Intel's, and it can't all be magic tricks and hocus-pocus.
I'm not sure what you're saying, i.e. do you agree with the premise that the Ax can replace x86 in the MB or even the MBP? We'll leave the entire desktop lineup debate for a more appropriate time when Apple has more than one die, because I don't see the same chip going in an iPhone & an i9 replacement.
Yeah, we do. I am baffled every time I see Apple do a demo with something like Photoshop. Who the hell actually does serious work on their tablet on a daily basis? Actually, let me put it the other way around: who could afford to pay $1,200, or however expensive this iPad is, and not have a high-end laptop around for that?

Tablets are indeed primarily toys; I have never seen or heard of anyone using them outside of playing games and watching Netflix. And if you are going to tell me that there has to be someone who uses them for more than that, then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well. That still doesn't make it any less of a joke.
So your argument is that iOS/iPad is a toy, therefore no one can do any serious work on them? What do you say to people who end up booting into Windows via Boot Camp every time they open a Mac? And who does serious work on a laptop anyway - don't we have desktops, or workstation/server-class PCs, for that? What does serious work even mean in this case?
To find out, we need to take a look at the source code and find out what it is actually benchmarking on various platforms.

I assume it benchmarks various simulated workloads, including things like compression/decompression, encryption, video decoding/encoding, image formats etc. If the benchmark decides to rely on just using the standard instruction set, then you get an impression of the pure performance. If on the other hand the benchmark uses various specific instructions to accelerate certain workloads, then the benchmark becomes a measurement of those specific algorithms, not generic performance.

The x86 CPU in a desktop is very good at generic workloads, and it has a few application-specific instructions too, but this is nothing compared to many ARM implementations. The CPUs in smartphones and tablets rely heavily on specific instructions to accelerate workloads. This of course gives good energy efficiency for the specific algorithms that are accelerated in software built to use them, but anything outside that will perform poorly. It should be obvious that there is a limited number of such accelerators that can be included on a chip, and support for new algorithms can't be added until they are developed. This means that such hardware becomes obsolete very quickly. But this of course fits very well with the marketing strategy of Apple and other smartphone/tablet makers: they can customize the chips to accelerate the features they want, and Apple controls their software too, which gives them an extra advantage, leading to new products every cycle that look much better at some new popular task.
No different from SSE, AVX, AES, or SHA "accelerated" benchmarks. In fact, x86 has way more instruction-set extensions for certain workloads than ARM does.

There are three x86 implementations as well; would you like to call them out?

Which is actually a good thing & the reason why the iPhone beats every other phone out there in most synthetic & real-world benchmarks - in fact, in virtually all of them. This is also the reason why x86 won't beat a custom ARM chip across the board, should Apple decide to replace the former in their future laptop &/or desktop parts.
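For what it's worth, the breadth of x86 extensions is easy to poke at from userspace. A minimal sketch (GCC/Clang only, since it relies on the __builtin_cpu_supports builtin; the feature list is just a small sample, not exhaustive):

```c
/* Minimal sketch: report a sample of x86 ISA extensions on the running CPU.
 * GCC/Clang only; __builtin_cpu_supports needs a literal feature name,
 * hence the macro. */
#include <stdio.h>

#define REPORT(name) \
    printf("%-8s %s\n", name, __builtin_cpu_supports(name) ? "yes" : "no")

int main(void)
{
    __builtin_cpu_init();   /* populate the compiler's CPU feature cache */

    /* A small sample of the many x86 extensions; far from exhaustive. */
    REPORT("sse2");
    REPORT("sse4.2");
    REPORT("avx");
    REPORT("avx2");
    REPORT("fma");
    REPORT("aes");
    REPORT("pclmul");

    return 0;
}
```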
 
Joined
Sep 7, 2017
Messages
3,244 (1.24/day)
System Name Grunt
Processor Ryzen 5800x
Motherboard Gigabyte x570 Gaming X
Cooling Noctua NH-U12A
Memory Corsair LPX 3600 4x8GB
Video Card(s) Gigabyte 6800 XT (reference)
Storage Samsung 980 Pro 2TB
Display(s) Samsung CFG70, Samsung NU8000 TV
Case Corsair C70
Power Supply Corsair HX750
Software Win 10 Pro
PCs are like the Ford Raptors of the computing world: they do just about everything well. It's the ultimate all-purpose vehicle, imo, and that will always be wanted, computing-wise, even with the trend in computers now being specialized devices.
 
Joined
Apr 12, 2013
Messages
7,477 (1.77/day)
Yeah the "PC" is jack of all trades & master of some, that's all it needs to do. It doesn't have to be the best at everything, never mind the fact that it isn't the best in lots of things atm.
 
Joined
Sep 17, 2014
Messages
22,292 (6.02/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
It's fun to say Apple has an ARM design that is competing with non-ARM hardware, but I really don't care as long as the software is not cross-compatible.

As long as that isn't a universal thing, ARM and x86 will always be two separate worlds. As long as it is labor-intensive to port back and forth, or to emulate, the performance of ARM is irrelevant. The migration from x86 to ARM is way too slow for it to matter.

macOS on ARM... who cares? It's a proprietary OS, and it only gets more isolated and less versatile, in terms of software, by moving away from x86. And it's not like macOS was ever really good at that. Look at the reason Windows is still huge: enterprise + gaming. Both are areas macOS fails to serve properly.
 
Joined
Jan 8, 2017
Messages
9,389 (3.29/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
So your argument is that iOS/iPad is a toy, therefore no one can do any serious work on them?

I am not arguing anything; they are primarily used for entertainment, not productivity. Apple insists on calling it "Pro" for marketing purposes and to justify its crazy price tag. You're not buying a thousand-dollar tablet to watch Netflix on it, you're actually a content creator. :laugh:
 
Joined
Mar 6, 2017
Messages
3,320 (1.19/day)
Location
North East Ohio, USA
System Name My Ryzen 7 7700X Super Computer
Processor AMD Ryzen 7 7700X
Motherboard Gigabyte B650 Aorus Elite AX
Cooling DeepCool AK620 with Arctic Silver 5
Memory 2x16GB G.Skill Trident Z5 NEO DDR5 EXPO (CL30)
Video Card(s) XFX AMD Radeon RX 7900 GRE
Storage Samsung 980 EVO 1 TB NVMe SSD (System Drive), Samsung 970 EVO 500 GB NVMe SSD (Game Drive)
Display(s) Acer Nitro XV272U (DisplayPort) and Acer Nitro XV270U (DisplayPort)
Case Lian Li LANCOOL II MESH C
Audio Device(s) On-Board Sound / Sony WH-XB910N Bluetooth Headphones
Power Supply MSI A850GF
Mouse Logitech M705
Keyboard Steelseries
Software Windows 11 Pro 64-bit
Benchmark Scores https://valid.x86.fr/liwjs3
I keep reading posts in this thread about RISC vs CISC. First, modern x86_64 chips aren't CISC chips like they used to be back in the day, with complex instruction sets. Well, OK, they are on the surface, but that's where it ends. Ever heard of something called a micro-op? There's an x86 translation layer - or instruction decoder - in every modern processor that takes the x86 instructions and converts them to RISC-based micro-ops. Both Intel and AMD have done it for years. All in all, modern x86_64 chips are designed completely differently from what they used to be back in the day; they have more in common with RISC than CISC.
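A rough illustration of that cracking into micro-ops (the assembly in the comments is hand-written and approximate, for illustration only, not captured decoder or compiler output):

```c
/* Illustration only: how one "complex" x86 instruction maps to
 * load/operate/store steps. */
void add_to_memory(int *counter, int delta)
{
    /* x86-64 (CISC): a single read-modify-write instruction, e.g.
     *     add dword ptr [rdi], esi
     * which the decoder typically cracks into load + add + store micro-ops.
     *
     * AArch64 (RISC, load/store): the same work is spelled out explicitly, e.g.
     *     ldr w8, [x0]
     *     add w8, w8, w1
     *     str w8, [x0]
     */
    *counter += delta;
}
```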
 
Joined
Jan 8, 2017
Messages
9,389 (3.29/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
I keep reading posts in this thread about RISC vs CISC. First, modern x86_64 chips aren't CISC chips like they used to be back in the day, with complex instruction sets. Well, OK, they are on the surface, but that's where it ends. Ever heard of something called a micro-op? There's an x86 translation layer - or instruction decoder - in every modern processor that takes the x86 instructions and converts them to RISC-based micro-ops. Both Intel and AMD have done it for years.

CISC remains CISC no matter the implementation, and it's always going to be more robust. Complex x86 instructions optimized at the micro-op level will always be faster than the ARM equivalent.
 
Joined
Jun 10, 2014
Messages
2,978 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
No different than SSE, AVX, AES, SHA "accelerated" benchmarks. In fact x86 has way more instruction sets for certain workloads than ARM.
No, you are mixing things up.
SSE and AVX are SIMD extensions; these are general-purpose. ARM has its own optional counterparts for these.
AES and SHA are application-specific instructions.
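A small sketch of that split using x86 intrinsics in C (assuming GCC/Clang on a CPU with SSE2 and AES-NI): _mm_add_epi32 is general-purpose SIMD, while _mm_aesenc_si128 performs one round of one specific algorithm and is useless for anything else. ARM's NEON and its optional crypto extension mirror the same split.

```c
/* Minimal sketch: general-purpose SIMD vs an application-specific instruction.
 * Assumes an x86-64 CPU with SSE2 and AES-NI; build with: gcc -O2 -maes demo.c */
#include <stdio.h>
#include <stdint.h>
#include <emmintrin.h>   /* SSE2: _mm_add_epi32 (general-purpose SIMD) */
#include <wmmintrin.h>   /* AES-NI: _mm_aesenc_si128 (application-specific) */

int main(void)
{
    /* General-purpose SIMD: add four 32-bit integers at once. */
    __m128i a = _mm_set_epi32(4, 3, 2, 1);
    __m128i b = _mm_set_epi32(40, 30, 20, 10);
    __m128i sum = _mm_add_epi32(a, b);

    int32_t out[4];
    _mm_storeu_si128((__m128i *)out, sum);
    printf("simd add: %d %d %d %d\n", out[0], out[1], out[2], out[3]);

    /* Application-specific: one AES encryption round; only useful for AES. */
    __m128i block = _mm_set1_epi8(0x42);      /* dummy 128-bit state */
    __m128i round_key = _mm_set1_epi8(0x17);  /* dummy round key     */
    __m128i enc = _mm_aesenc_si128(block, round_key);

    uint8_t bytes[16];
    _mm_storeu_si128((__m128i *)bytes, enc);
    printf("aesenc first byte: 0x%02X\n", (unsigned)bytes[0]);
    return 0;
}
```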
 
Joined
Jun 26, 2017
Messages
90 (0.03/day)
Location
Germany
Processor Core i5-6500
Motherboard MSI Z170A Krait Gaming
Memory 2x8GB Kingston HyperX Fury Black
Video Card(s) Radeon HD7970
Display(s) 2x BenQ E2200HD
Case Xigmatek Utgard
Power Supply OCZ Fatal1ty 550W
Mouse Mad Catz M.M.O.TE
Keyboard Coolermaster Quickfire TK MX Blue
Software Windows 10 64bit
Tablets are indeed primarily toys; I have never seen or heard of anyone using them outside of playing games and watching Netflix. And if you are going to tell me that there has to be someone who uses them for more than that, then sure, I bet there is someone out there playing Doom unironically on their TI calculator as well. That still doesn't make it any less of a joke.

Honestly, before it got stolen along with the rest of my hardware, I used my tablet more than my PC - for everything from reading books or comics, to web browsing and Netflix in bed, to even simple text editing and light spreadsheet work (for the latter two I was using my Bluetooth keyboard that was previously only used as a remote for my HTPC). My main PC was only fired up for gaming, unless I was playing something on the PS3, PS4 or Switch, which happened more than I would have anticipated before I got those consoles.
 
Joined
Apr 12, 2013
Messages
7,477 (1.77/day)
No, you are mixing things up.
SSE and AVX are SIMD extensions; these are general-purpose. ARM has its own optional counterparts for these.
AES and SHA are application-specific instructions.
I'm not. How many applications make use of AVX, or rather, can make use of AVX? Heck, SSE only dates back to 1999, while PC computing is much older than that. What are the equivalent ARM counterparts you're talking about?

edit - Scratch that, I get what you're saying.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.11/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Considering that just 3 years ago Apple was running their laptops off the absolutely horrible Core M 5Y31, I can certainly see the A12X or one of its successors taking that processor's place. Apple is not afraid of putting under-powered crap processors in their computers if it means they can make more money, make the product "look cooler" in some way, and add some kind of bullshit marketing point about it.
 
Joined
Oct 27, 2009
Messages
1,174 (0.21/day)
Location
Republic of Texas
System Name [H]arbringer
Processor 4x 61XX ES @3.5Ghz (48cores)
Motherboard SM GL
Cooling 3x xspc rx360, rx240, 4x DT G34 snipers, D5 pump.
Memory 16x gskill DDR3 1600 cas6 2gb
Video Card(s) blah bigadv folder no gfx needed
Storage 32GB Sammy SSD
Display(s) headless
Case Xigmatek Elysium (whats left of it)
Audio Device(s) yawn
Power Supply Antec 1200w HCP
Software Ubuntu 10.10
Benchmark Scores http://valid.canardpc.com/show_oc.php?id=1780855 http://www.hwbot.org/submission/2158678 http://ww
How can a little ARM CPU do about 5000 in single-core Geekbench when a 5 GHz OC'd 9700K does about 6500?

Because they have been faking their numbers on iOS for so many years that they didn't stop to think about what would happen when they reached desktop numbers and everyone started asking why their laptops weren't on ARM...

Here is by far the fastest ARM server chip...
https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/6/
https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7

And it is... competitive-ish... using more cores and more power.
 
Joined
Jan 8, 2017
Messages
9,389 (3.29/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Apple is not afraid of putting under-powered crap processors in their computers if it means they can make more money

The irony is that these chips likely cost more to make than anything comparable from Intel.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.11/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
The irony is that these chips likely cost more to make than anything comparable from Intel.

Yeah, but after you add in Intel's cut, it is probably cheaper for Apple to make their own CPUs than buy them from Intel.
 
Joined
Apr 12, 2013
Messages
7,477 (1.77/day)
Because they have been faking their numbers on iOS for so many years that they didn't stop to think about what would happen when they reached desktop numbers and everyone started asking why their laptops weren't on ARM...

Here is by far the fastest ARM server chip...
https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/6/
https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7

And it is... competitive-ish... using more cores and more power.
Do you have any evidence of Apple faking iOS benches? I know SS did it, and Huawei, as well as Intel, but I have yet to see Apple devices called out for faking benchmarks in recent times.

You mean the only ARM server chip; QC's project is dead & every other ARM-based vendor seems miles off in their efforts to deliver a viable server chip.

And that's related to desktops or notebooks how? Not to mention Apple is a completely different beast, with close to a decade's worth of experience behind them.
Yeah, but after you add in Intel's cut, it is probably cheaper for Apple to make their own CPUs than buy them from Intel.
Absolutely, Intel has insane margins, just like Apple.
 
Joined
Sep 2, 2015
Messages
90 (0.03/day)
Location
Nova Scotia
System Name Old Old Old.
Processor AMD X2 5200+ 2.6Ghz @5665+ 2.83GHz 1.4v
Motherboard ASUS M2NPV-VM
Memory Corsair XMS2 PC6400 Dual channel 1GBx2 CL5-5-5-15-20 @ 944MHz DDR2
Video Card(s) ATI Radeon 2600XT 256MB core@857MHz ram@1179MHz GDDR4
Storage alot of 'em
Display(s) ASUS 23" VC239H 1920x1080 IPS 5ms
Audio Device(s) Diamond 5.1
Power Supply Enermax Liberty 400w dual rail
Mouse Logitech MX518
Keyboard Logitech G11
Yeah, different benchmarks benchmarking different things.


We've been hearing this from the RISC camp since the late 80s; x86 has too much legacy overhead, RISC is more "efficient" and perhaps "faster". Even back then this was only partially true, but it's important to understand the premises. Back then, CISC chips like the 80386 and 80486 were larger chips compared to some of their RISC counterparts, and this was before CPUs hit the power wall, so die size was the deciding factor for scaling clock speed. The reduced instruction set of RISC resulted in smaller designs which were cheaper to make and could be clocked higher, potentially reaching higher performance levels in some cases. But RISC always had much lower performance per clock, so higher clock speed was always a requirement for RISC to perform well.

Since the 80s, CPU designs have changed radically. Modern x86 implementations have nothing in common with their ancestors, with design features such as pipelining, OoO execution, caches, prefetching, branch prediction, superscalar execution, SIMD and application-specific acceleration. As clock speeds have increased beyond 3 GHz, new bottlenecks have emerged, like the power wall and the memory wall. x86 today is just an ISA, implemented as different microarchitectures. All major x86 implementations since the mid 90s have adopted a "RISC-like" microarchitecture, where x86 is translated into architecture-specific micro-operations, a sort of hybrid approach, to get the best of both worlds.

x86 and ARM implementations have adopted all the techniques mentioned above to achieve our current performance level. Many ARM implementations have used many more application-specific instructions. Along with SIMD extensions, these are no longer technically purely RISC designs. Application-specific instructions are the reason why you can browse the web on your Android phone with a CPU consuming ~0.5W, watch or record h.264 videos in 1080p and so on. Some chips even have instructions to accelerate Java bytecode. If modern smartphones were pure RISC designs, they would never be usable as we know them. The same goes for Blu-ray players; if you open one up you'll probably find a ~5W MIPS CPU in there, and it relies either on a separate ASIC or on special instructions for all the heavy lifting. One fact still remains: RISC still needs more instructions to do basic operations, and since the power wall is limiting clock speed, RISC will remain behind until they find a way to translate it to more efficient CISC-style operations.

I want to refer to some of the findings from the "VRG RISC vs CISC study" from the University of Wisconsin-Madison:

The only real efficiency advantage we see with ARM is in low-power CPUs. But this has nothing to do with the ISA; it's just Intel failing to make their low-end x86 implementations scale well, which is why we see some ARM designs compete with Atom.

That study was based on nearly 10-year-old tech. Ax processors have come a long way since the iPhone 3GS, lol.

Not saying the study's wrong, just that Apple has made significant improvements to their version of the ARM core over the past decade. A study based on processors from a dozen generations back will be an inaccurate representation of the current generations of Ax processors. Intel, on the other hand, are still using the same architecture and have seen only small increases in performance. In fact, according to what I can find at a glance, even the A11 was an incredible 100x faster than the 3GS. Name another processor in the past decade which can claim such a feat... hell, even the A8 in the iPhone 6 was 50x faster than the 3GS. I'd like to see a study done on the most recent CPUs. THAT would be interesting.

I keep reading posts in this thread about RISC vs CISC. First, modern x86_64 chips aren't CISC chips like they used to be back in the day, with complex instruction sets. Well, OK, they are on the surface, but that's where it ends. Ever heard of something called a micro-op? There's an x86 translation layer - or instruction decoder - in every modern processor that takes the x86 instructions and converts them to RISC-based micro-ops. Both Intel and AMD have done it for years. All in all, modern x86_64 chips are designed completely differently from what they used to be back in the day; they have more in common with RISC than CISC.
This too^
 

Attachments

  • iphone-6-a8-soc-performance-cpu-gpu1.jpg

darklight2k2

New Member
Joined
Nov 4, 2018
Messages
1 (0.00/day)
Everybody talking about the clock is forgetting that some x86 instructions can't be completed in only one clock cycle.
So even if modern processors try to optimize their execution, also using branch prediction, there's a limit to their IPC. As someone else said, those instructions are then split into micro-ops, making the architecture really similar to RISC.
What really makes the difference right now is the market those chips are aimed at. Also, developers seem unable to write multithreaded programs correctly, and this is made very obvious by the limited adoption of Vulkan and DX12.
In pure computational power, ARM CPUs already find their place. When the x86 architecture can no longer improve, we'll probably finally see the benefits of multicore and RISC CPUs.
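On the multithreading point, here is a minimal sketch in C of the kind of work-splitting that multicore designs reward, using POSIX threads; the names and the trivial partial-sum workload are made up purely for illustration:

```c
/* Minimal sketch: splitting a simple workload across threads with pthreads.
 * Build with: gcc -O2 -pthread sum_demo.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N_THREADS 4
#define N_ITEMS   (1 << 20)

struct chunk {
    const int *data;
    size_t begin, end;
    long long partial;   /* each thread writes only its own slot: no data race */
};

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    long long s = 0;
    for (size_t i = c->begin; i < c->end; i++)
        s += c->data[i];
    c->partial = s;
    return NULL;
}

int main(void)
{
    int *data = malloc(N_ITEMS * sizeof *data);
    if (!data) return 1;
    for (size_t i = 0; i < N_ITEMS; i++)
        data[i] = (int)(i % 100);

    pthread_t tid[N_THREADS];
    struct chunk chunks[N_THREADS];
    size_t per = N_ITEMS / N_THREADS;

    for (int t = 0; t < N_THREADS; t++) {
        chunks[t].data  = data;
        chunks[t].begin = (size_t)t * per;
        chunks[t].end   = (t == N_THREADS - 1) ? N_ITEMS : (size_t)(t + 1) * per;
        pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
    }

    long long total = 0;
    for (int t = 0; t < N_THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += chunks[t].partial;   /* combine after join: still race-free */
    }

    printf("total = %lld\n", total);
    free(data);
    return 0;
}
```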
 
D

Deleted member 158293

Guest
Looks like the x86 days are finally numbered...

AMD was out to lunch for a decade, and Intel preferred to milk their willing customer base while pushing innovation at a snail's pace.

All the while ARM was laying down the foundation work they needed. It's not even about Apple's version of the chip. Now ARM is ready and pushing beyond their initial market.
 