
Bad Intel Quality Assurance Responsible for Apple-Intel Split?

Joined
Aug 20, 2007
Messages
21,541 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
It actually adds up to me.

We had a defective Skylake CPU right here on TechPowerUp; I don't recall the thread. It's darn near the only real, true-blue defective CPU I have seen. I can buy this.

... so it's "fake news" because it is sourced from someone who at the time was perfectly positioned to have access to this information? Yeah, sorry, your logic doesn't hold there. You seem to be implying that former employee = disgruntled former employee, which is nonsense. There is no reason to suspect Piednoël holds any grudge towards his former employer; he quit of his own volition and has no history of criticizing them previously.

I really wish people would stop abusing the term "fake news".

He says that word, but I don't think he knows what it means.
 
Joined
Jul 8, 2017
Messages
38 (0.01/day)
Huh? Do you even know François? I mean, he's a character, but I think he knows his shit, at least based on the times I've met him and talked to him.
He's been sh*tposting ever since Intel fired him.

And it's funny, why does he say that, when he was one of the main engineers who worked on Intel's architecture... he designed a bad CPU and now takes shots at Intel? lol
 
Joined
Apr 30, 2020
Messages
999 (0.59/day)
System Name S.L.I + RTX research rig
Processor Ryzen 7 5800X 3D.
Motherboard MSI MEG ACE X570
Cooling Corsair H150i Cappellx
Memory Corsair Vengeance pro RGB 3200mhz 32Gbs
Video Card(s) 2x Dell RTX 2080 Ti in S.L.I
Storage Western digital Sata 6.0 SDD 500gb + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) HP X24i
Case Corsair 7000D Airflow
Power Supply EVGA G+1600watts
Mouse Corsair Scimitar
Keyboard Cosair K55 Pro RGB
It might be, though there's nothing stopping them from making an Arm-based SoC with heaps of cores and PCIe like those server SoCs that are showing up these days. Given that the Mac Pro uses all custom hardware anyway they could just redesign the motherboard around this and keep everything more or less the same. Of course driver support for PCIe devices would be tricky, but it already is for a lot of things on macOS, so that's not that big of a change.

Denied what? That there are fallbacks? There is nothing a chip designer can do to prevent this (beyond removing older instruction sets, I guess), as that is a pure software thing. Software checks CPUID to see whether the CPU reports [high-performance instruction set X]; if yes, it runs code path A, if no, code path B.

What you were describing in your previous post sounds like the opposite of that - the ability to run AVX code on hardware without AVX support. This will not work, as the CPU doesn't understand the instructions and thus can't process them. Sure, there might exist translation layers, emulation and similar workarounds in some cases, but they are rare and inevitably tank performance far worse than writing code for a lower common denominator instruction set. The whole point of added instruction sets like AVX is to add the option to run certain specific operations at a higher performance level than could be done with more general purpose instructions - but you can of course do the same work on more general purpose instructions, just slower and with different code.

No, I mean running code from SSE2-SSE4.1 up to AVX without the software ever calling for it. That's what the PDF stated.
 
One tricky thing with ARM processors is that they rely a LOT on "outside" processing, i.e. you have a lot of sub-processors/accelerators that handle things.

No more so than any SoC these days. What do you mean? They have extensions sure, but so does x86.

And it's funny, why does he say that, when he was one of the main engineers who worked on Intel's architecture... he designed a bad CPU and now takes shots at Intel? lol

He was, what, part of a team that designed a bad CPU (it wasn't that bad at launch, btw; it was quality assurance that failed)? You can't blame him alone, and it doesn't discredit him on this story.

So yeah, quit the FUD. Fact is, while this isn't confirmed yet, it isn't "fake news" either.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,769 (2.42/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
No more so than any SoC these days. What do you mean? They have extensions sure, but so does x86.
But they're not extensions, a lot of it is actual sub-processors within the SoC.
For example, a lot of ARM SoCs now have something like a Cortex-M0 as their PMC, they have another custom DSP that handles audio, and they have multiple DSPs that handle video encoding, decoding, transcoding, etc., simply because the ARM cores are not powerful enough and not general-purpose enough to do a good job at these things. OK, so some of these things are needed to make an SoC work, but ARM-based SoCs have many more sub-processors than x86/x64 CPUs have.

Look at the Renoir die shots that were posted last week as an example, not taking the GPU or interface parts into account, how many sub-processors are there in these? AMD has their Platform Security Processor, but that's it afaik. As this is an APU, it obviously has a media engine as well, which most likely contains some kind of DSP at the very least.



Apple is relying on a lot more additional sub-processors to get things done, as per below. They have an always-on processor, a crypto accelerator, a neural engine, a machine learning accelerator (aren't the last two more or less the same thing?) and a camera processor. OK, so the last one is because this is more of a tablet chip design, but my point here is that x86/x64 doesn't rely on as many extra bits; instead they rely on raw power, for better or worse. Video codecs are one of the simplest examples, as I pointed out in my previous post in this thread. Every time there's a new video codec, a new hardware block has to be added to ARM SoCs for them to be able to play it back, unless it's a very simple codec, since the CPU cores are often not capable of playing back video files based on new, more efficient codecs. Yes, this has been an issue in the past with x86/x64 systems too; both H.264 and H.265 had problems on older CPUs and would need 90-100% of the CPU for software playback. However, on an ARM-based SoC from the same period, the same files simply wouldn't work, due to the reliance on fixed-function video decoders.



I'm not saying that x86/x64 platforms aren't using more and more of these sub-processors, but most of them seem to be closely tied to the GPU rather than the CPU. It's obviously hard to do an apples-to-apples comparison (no pun intended), as the platforms are so different architecturally. My point was simply that Apple is going to have to be on the cutting edge with these sub-processors all the time, and if they bet on the wrong standard, then you won't be able to watch some content on your shiny new Mac, as the codec isn't supported and might never be.
It's nigh on impossible to predict what the winning standards will be, and as much as most companies bet on H.265, it now seems, at least to some extent, that VP9 and AV1 are gaining popularity due to being royalty-free. That means a lot of older ARM-based SoCs will be unable to play back this content, due to the lack of a decoder, whereas both can be played back on a regular PC just fine.

Sorry about coming back to the video codec thing all the time, but it really is the simplest example which will continue to cause the biggest problems in the future, as long as we don't have a single standard that everyone agrees to use.

Regardless, ARM processors to date are a lot more limited in terms of what they can do on their own, without support from these additional sub- and co-processors.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.44/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
It might be, though there's nothing stopping them from making an Arm-based SoC with heaps of cores and PCIe like those server SoCs that are showing up these days. Given that the Mac Pro uses all custom hardware anyway they could just redesign the motherboard around this and keep everything more or less the same. Of course driver support for PCIe devices would be tricky, but it already is for a lot of things on macOS, so that's not that big of a change.
I just can't see that happening, because all of the software would have to be rewritten not only to move from x86 to ARM, but to drop all the specialized instructions x86 offers, the ridiculously high clock speeds and the superscalar design, and replace them with even more parallelism on a weaker common denominator. I think it's likely the market for the Mac Pro will evaporate because the cost/benefit isn't there to reinvent the wheel for subpar hardware on whatever comes next out of Apple. It makes more sense for the software vendors to switch focus to Windows and/or Linux. The amount of effort required is likely less and the markets are much bigger.
 
Joined
Apr 12, 2013
Messages
7,563 (1.77/day)
Why does it matter if it has dedicated hardware for specific workloads, though? Intel makes billions selling FPGAs & accelerators, and in case you didn't know, not everything is off-die on Axx SoCs. Tell me one x86 instruction set (without fixed-function hardware) that works better for cameras than a dedicated ISP, or than any DSP found in QC's or Apple's chips? This is why I made that comment in the other thread: not everything runs better on x86 ~ ARM & dedicated hardware is oftentimes much better! Tell Sony why their custom flash controller & dedicated compression is such a bad idea :rolleyes:
 
But they're not extensions, a lot of it is actual sub-processors within the SoC.

Yeah, and Intel has an iGPU on the SoC, as well as a PCIe root complex and USB/SATA controllers. What's your point? That's how SoCs work.
 

TheLostSwede

News Editor
Yeah, and Intel has an iGPU on the SoC, as well as a PCIe root complex and USB/SATA controllers. What's your point? That's how SoCs work.
I guess you didn't bother reading my post, so whatever...

I just can't see that happening, because all of the software would have to be rewritten not only to move from x86 to ARM, but to drop all the specialized instructions x86 offers, the ridiculously high clock speeds and the superscalar design, and replace them with even more parallelism on a weaker common denominator. I think it's likely the market for the Mac Pro will evaporate because the cost/benefit isn't there to reinvent the wheel for subpar hardware on whatever comes next out of Apple. It makes more sense for the software vendors to switch focus to Windows and/or Linux. The amount of effort required is likely less and the markets are much bigger.
Yes and no.

I think Apple is betting big on their iOS/iPadOS ecosystem when it comes to software. A lot of major software is already available for these platforms, and I guess the final OS for the new ARM-based Macs will be based a lot more on the mobile OSes; as such, many of the apps are likely to just need UI changes to work on larger and higher-resolution screens. That's not a minor task in all fairness, but I believe it's easier to do than rewriting x86/x64 software for ARM.

They're also making some bold claims about developers having to make next to no changes to their software to make it work on the new processors, but I'm not sure I'm buying that. A lot of that also seems to hinge on Rosetta 2, and then you're losing a lot of performance to the translation layer. I mean, does anyone remember Transmeta? Sure, that was VLIW, not RISC, but it still had a translation layer, which was partially in hardware and as such should be a lot faster than doing it all in software, which is what I presume Rosetta 2 is doing.
 

FordGT90Concept

Just because it works doesn't mean it will justify the four-digit price Apple is going to demand for the hardware. ARM doesn't scale well by design because it's not superscalar. The x86 processors available today are, on a single-thread basis, faster than they were one, two, and three decades ago. That's not just by virtue of increased clock speeds, but by improvements in the superscalar architecture that breaks x86 instructions down into micro-instructions executed in parallel as much as possible. The best ARM can do in this regard is farm it out to an ASIC. People didn't buy Mac Pros for ASICs, they bought them for specific hardware capabilities. Emulation isn't going to make up for that.

I just wonder how long Apple will keep the Mac Pro around. Is the model out now truly the last, or are they going to keep it around for a while based on x86? Seeing how Apple seems to have burned the bridge with Intel, maybe the next Mac Pro will be powered by AMD? Apple hasn't ruled that out, as far as I know.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I just can't see that happening, because all of the software would have to be rewritten not only to move from x86 to ARM, but to drop all the specialized instructions x86 offers, the ridiculously high clock speeds and the superscalar design, and replace them with even more parallelism on a weaker common denominator. I think it's likely the market for the Mac Pro will evaporate because the cost/benefit isn't there to reinvent the wheel for subpar hardware on whatever comes next out of Apple. It makes more sense for the software vendors to switch focus to Windows and/or Linux. The amount of effort required is likely less and the markets are much bigger.
Just because it works doesn't mean it will justify the four-digit price Apple is going to demand for the hardware. ARM doesn't scale well by design because it's not superscalar. The x86 processors available today are, on a single-thread basis, faster than they were one, two, and three decades ago. That's not just by virtue of increased clock speeds, but by improvements in the superscalar architecture that breaks x86 instructions down into micro-instructions executed in parallel as much as possible. The best ARM can do in this regard is farm it out to an ASIC. People didn't buy Mac Pros for ASICs, they bought them for specific hardware capabilities. Emulation isn't going to make up for that.

I just wonder how long Apple will keep the Mac Pro around. Is the model out now truly the last, or are they going to keep it around for a while based on x86? Seeing how Apple seems to have burned the bridge with Intel, maybe the next Mac Pro will be powered by AMD? Apple hasn't ruled that out, as far as I know.
This is by no means a trivial task, but to a large degree they have a captive audience (so to speak) in a lot of markets. Audio professionals can't do the majority of their work on Windows PCs due to how Windows handles DPC latency (in a word: poorly). Video professionals are more flexible, but not those reliant on Final Cut, which Apple is obviously bringing forward to these new Macs. And a lot of the rest are used to using a Mac and want to continue doing so.

The Adobe ecosystem is already on its way through the iPad, and will obviously be fully compatible with Arm Macs. CAD, 3D modelling, etc. are likely more of a wash, but you should never discount the value of user familiarity: it might be cheaper for a lot of companies to buy a more expensive software license through the architecture migration than to re-train their staff to work in Windows. Etc., etc.

The Mac Pro as we know it now might not survive (though there have been relatively recent statements by people high up in Apple suggesting that it will stick around), but they are definitely not dropping out of high-performance computing. The new Mac Pro has sold like hotcakes; there was a huge pent-up demand for a high-performance Mac ever since the 2013 trash can Mac Pro was launched and subsequently never updated. While Apple's cash cows are the iPhone and peripherals, their image is largely built on professional users of their desktops and laptops, and abandoning those in favor of a purely low-performance (iPad Pro equivalent and down, or thereabouts) lineup would be a rather absurd thing for them to do. Of course they might still do so, but I sincerely doubt it.
 
Joined
May 24, 2007
Messages
5,432 (0.85/day)
Location
Tennessee
System Name AM5
Processor AMD Ryzen R9 7950X
Motherboard Asrock X670E Taichi
Cooling EK AIO Basic 360
Memory Corsair Vengeance DDR5 5600 64 Gb - XMP1 Profile
Video Card(s) AMD Reference 7900 XTX 24 Gb
Storage Crucial Gen 5 1 TB, Samsung Gen 4 980 1 TB / Samsung 8TB SSD
Display(s) Samsung 34" 240hz 4K
Case Fractal Define R7
Power Supply Seasonic PRIME PX-1300, 1300W 80+ Platinum, Full Modular
"citing former Intel principal engineer François Piednoël"
Fake news so

It's trending how people follow Trump in saying "fake news". Basically it's used for anything you don't agree with, true or untrue.
 
I guess you didn't bother reading my post, so whatever...

No, I did. I just don't see how, say, a tensor sub-processor is any different from a discrete tensor chip on x86.
 
Joined
Mar 31, 2020
Messages
50 (0.03/day)
One tricky thing with ARM processors is that they rely a LOT on "outside" processing, i.e. you have a lot of sub-processors/accelerators that handle things. This might work well for Apple, as they control the OS as well, but this is why, imho, Microsoft is having issues with Windows on ARM.
Beyond the GPU, you have things like media encoders/decoders (ARM processors aren't great at doing software video decoding and are even worse at encoding), network accelerators, crypto accelerators, etc. I mean, Apple provided a great example of this themselves.
...
This is sort of the core advantage of x86/x64, the CPU cores are a lot more multi-purpose and can process a lot of different data "better" than ARM cores. Obviously some of this comes down to software optimisation and some to pure raw GHz, as most ARM SoCs are still clocked far slower than the equivalent x86/x64 parts. However, as power efficient as ARM processors are, there are a lot of things they're unlikely to overtake the x86/x64 processors in doing, at least not in the foreseeable future.

Relying on accelerators/co-processors does have some advantages as well, as you can fairly easily swap out one IP block for another and have a slightly different SKU. I'm not sure this fits the Apple business model though. I guess they could also re-purpose a lot of the IP blocks between different SoC SKUs. The downside is as pointed out above, that if your SoC lacks an accelerator for something, you simply can't do it. Take Google's VP9 for example. It can quite easily be software decoded on an x86/x64 system, whereas on ARM based systems, you simply can't use it, unless you have a built in decoder specifically for that codec.

This also makes for far more complex SoCs and if one of these sub-processors fail, you have a dud chip, as you can't bin chips as a lower SKU if say the crypto accelerator doesn't work.

It's going to be interesting to see where Apple ends up, but personally I think this will be a slow transition that will take longer than they have said.
It'll also highly depend on Apple's customers, as I can't imagine everyone will be happy about this transition, especially those that dual boot and need access to Windows or another OS at times.
Apple makes you do it their way or no way. Apple decides the HW & SW for offloading/accelerating image/video/audio/AI; you can use it, or things are slow or fail. Both Apple and Android have forced developers to constantly make changes or rewrite apps to suit HW & SW updates. MS Windows once allowed for long-term compatibility, but it's getting harder. The only advantage with Apple is making sure apps can back up data and reload it onto a new/fixed device. Modern Android security is blocking general backups (hit-and-miss with apps and cloud storage).
 
Joined
Mar 28, 2020
Messages
1,761 (1.02/day)
If that were true, Apple would have gone AMD for desktop models from 2018 and for laptops from this year, while getting ready for the final ARM transition anyway. If Skylake was that bad, and considering the performance of the Ryzen 2000 and Threadripper models, Apple would already have gone AMD.

Considering that Apple was already midway through executing the plan in 2018, there was no reason for them to stop and consider AMD as an alternative.

I just can't see that happening, because all of the software would have to be rewritten not only to move from x86 to ARM, but to drop all the specialized instructions x86 offers, the ridiculously high clock speeds and the superscalar design, and replace them with even more parallelism on a weaker common denominator. I think it's likely the market for the Mac Pro will evaporate because the cost/benefit isn't there to reinvent the wheel for subpar hardware on whatever comes next out of Apple. It makes more sense for the software vendors to switch focus to Windows and/or Linux. The amount of effort required is likely less and the markets are much bigger.

I feel software is always about optimizing to make it work. While ARM SoCs are nowhere near as powerful as an x86-based processor, they tend to make up the deficit by spamming more cores. Most software makers will make an effort to optimize their software for Apple, because despite the premium of the Apple ecosystem, they still sell well. Considering that Apple has pulled off this kind of transition before, I feel they will likely pull it off this time as well. Whether the end product will suit everyone, time will tell. We can at least get a sense of the performance when the first ARM-based Mac gets released.
 
Joined
May 11, 2018
Messages
1,292 (0.53/day)
I think the software side could be as important as the hardware. Lots and lots of programmers have switched to ARM app development, so even high-budget x86 software giants like Adobe have problems with their software. Lightroom, for instance, has tons of old bugs; for years now it has run faster on Intel if you switch off Hyper-Threading!

So it doesn't help if AMD makes highly efficient processors, or if Intel miraculously makes a new, much better processor line; the software side of x86 is even worse than the hardware. I partially blame Intel for forcing companies not to use multicore efficiently, because that would favor AMD's Zen, so outside of 3D rendering there are very few applications that fully use modern PC processors.
 
Joined
Sep 28, 2012
Messages
982 (0.22/day)
System Name Poor Man's PC
Processor Ryzen 7 9800X3D
Motherboard MSI B650M Mortar WiFi
Cooling Thermalright Phantom Spirit 120 with Arctic P12 Max fan
Memory 32GB GSkill Flare X5 DDR5 6000Mhz
Video Card(s) XFX Merc 310 Radeon RX 7900 XT
Storage XPG Gammix S70 Blade 2TB + 8 TB WD Ultrastar DC HC320
Display(s) Xiaomi G Pro 27i MiniLED
Case Asus A21 Case
Audio Device(s) MPow Air Wireless + Mi Soundbar
Power Supply Enermax Revolution DF 650W Gold
Mouse Logitech MX Anywhere 3
Keyboard Logitech Pro X + Kailh box heavy pale blue switch + Durock stabilizers
VR HMD Meta Quest 2
Benchmark Scores Who need bench when everything already fast?
Three words: Control, Cost and Profit :D
 

FordGT90Concept

I feel software is always about optimizing to make it work. While ARM SoCs are nowhere near as powerful as an x86-based processor, they tend to make up the deficit by spamming more cores. Most software makers will make an effort to optimize their software for Apple, because despite the premium of the Apple ecosystem, they still sell well. Considering that Apple has pulled off this kind of transition before, I feel they will likely pull it off this time as well. Whether the end product will suit everyone, time will tell. We can at least get a sense of the performance when the first ARM-based Mac gets released.
Performance soared going from PowerPC to x86. The opposite is true in this case. That shift in performance made Mac Pro more attractive, not less.
 
Joined
Feb 18, 2005
Messages
5,847 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
What intrigues me the most is: why the hell was Apple so involved in the development of Intel's architectures? I mean, this doesn't seem like a simple collaboration with a customer that got the end product; it looks to me like they had access to some pretty deep, low-level engineering that Intel was doing, from early on in the development process. I know Apple was an important customer, but it just seems odd they'd have so much access to all of this. I wonder how much know-how "migrated" to Apple in all these years. Maybe that was the goal altogether.

Once again, control. Apple wants to control everything, and they have the cash reserves to buy that control.

Your point about the "migration" of knowledge is an interesting one - I do wonder how many Intel engineers "migrated" over to Apple's CPU engineering division during this time.

Perf/W is one thing, but AnandTech's SPEC testing shows that Apple's current mobile chips are ahead of Skylake and its derivatives in IPC.

Using SPEC 2006... a benchmark that is 14 years old... and has been officially retired by its authors. Would you put any faith in a GPU review that used 3DMark05 to rate a Turing or Navi GPU? Didn't think so.

If it also scales up to 4GHz+ at reasonable power, those chips will be pretty powerful.

ARM released its first CPU that could hit 2GHz in 2009 at 40nm. Over a decade later, there are no commercial ARM CPUs that are able to hit even 3GHz at 7nm. That's the reason they jumped on the MOAR CORES bandwagon, because the architecture has hit a very fundamental clock speed wall that they haven't been able to overcome (similarly to Intel with NetBurst, and that uarch wasn't salvageable at the end of the day... makes you wonder...).

Intel for forcing companies not to use multicore efficiently, because that would favor AMD's Zen

[citation needed]

Software is bad because many of the ginormous companies that write the software that everyone uses as standard, are really bad at writing software. What they are good at is marketing and crushing or buying out any competitors so that they don't have to write good software. Adobe is probably the best-known example, but there are many others across all sectors (Sage is one in financials, for example).

When you couple these companies' inability to write good software with the fact that writing multithreaded code is difficult, and that most app workloads aren't easily parallelised, the end result is software that is either slow and inefficient, or even buggier than you'd expect.
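The point about workloads not being easily parallelised can be made concrete with Amdahl's law: the serial fraction of a program caps the speedup from extra cores. A minimal sketch (the 90%-parallel split below is a made-up illustration, not a measurement of any real application):

```python
# Amdahl's law: speedup from n cores when only a fraction p of the
# work can be parallelised. The serial remainder (1 - p) caps the gain.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    # Even a workload that is 90% parallel can never exceed a 10x
    # speedup, no matter how many cores you throw at it.
    for cores in (2, 4, 8, 64, 1024):
        print(f"{cores:>5} cores -> {amdahl_speedup(0.9, cores):.2f}x")
```

Running this shows rapidly diminishing returns well before the core counts modern chips ship with, which is why "just add cores" only helps workloads that are almost entirely parallel.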

Performance soared going from PowerPC to x86. The opposite is true in this case. That shift in performance made Mac Pro more attractive, not less.

But Apple has the "performance" users locked into their ecosystem so that it's too much of a pain to think of going anywhere else - or at least, they think they do.

Like you said, quite possibly this is another long-term Apple strategy, to get rid of the so-called "high-end" machines side of the business and only concentrate on making phones and netbooks (sorry, despite what Apple says, a so-called laptop with an ARM CPU will always be a netbook to me). Considering where the majority of Apple's profits come from, and the fact that the niche "high-end" market likely costs them a lot more relatively, it would make a lot of sense.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,769 (2.42/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
No, I did. I just don't see how, say, a tensor subprocessor is any different than a discrete tensor chip on x86.
No? Then you're not thinking very far. With a discrete part, you can swap it out. Apple's new Macs will force you to buy a new one to get support for new technology. In all fairness, I guess that's notebooks in general, but an x86/x64 system is still often able to do things using software decoders etc. which the new Macs can't.
 

freeagent

Moderator
Staff member
Joined
Sep 16, 2018
Messages
8,856 (3.87/day)
Location
Winnipeg, Canada
Processor AMD R7 5800X3D
Motherboard Asus Crosshair VIII Dark Hero
Cooling Thermalright Frozen Edge 360, 3x TL-B12 V2, 2x TL-B12 V1
Memory 2x8 G.Skill Trident Z Royal 3200C14, 2x8GB G.Skill Trident Z Black and White 3200 C14
Video Card(s) Zotac 4070 Ti Trinity OC
Storage WD SN850 1TB, SN850X 2TB, SN770 1TB
Display(s) LG 50UP7100
Case Fractal Torrent Compact
Audio Device(s) JBL Bar 700
Power Supply Seasonic Vertex GX-1000, Monster HDP1800
Mouse Logitech G502 Hero
Keyboard Logitech G213
VR HMD Oculus 3
Software Yes
Benchmark Scores Yes
I’m sure I read an article like 10 years ago that pretty much said Apple was leaving IBM for Intel until they could get their own hardware up and running. So them running Intel hardware was always a temporary thing.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Performance soared going from PowerPC to x86. The opposite is true in this case. That shift in performance made Mac Pro more attractive, not less.
That's the thing though: Anandtech's testing shows that Apple's most recent Arm architectures have higher IPC (as measured in SPECint and SPECfp - arguably a limited scenario, but also as close to industry standard as you get) than Skylake and its siblings (I don't know how it compares to Ice Lake or the upcoming Tiger Lake). As long as they can clock them high enough, absolute performance as such shouldn't be an issue. Given that the Mac Pro uses the relatively lacklustre up-to-28-core Xeons, which don't clock high at all, it doesn't sound like too much of a challenge for Apple to make a... let's say 64-core "A15" variant for the Mac Pro that beats the IPC of the current Xeons, matches their clocks even under all-core load (those Xeons don't clock high in that scenario, so that wouldn't even require beating current phone SoCs), but has heaps more cores (not to mention they could add in any accelerators they wanted).
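The reasoning above boils down to a simple throughput model: performance scales roughly with IPC × clock × cores. A back-of-the-envelope sketch (every number below is a hypothetical placeholder, not a measurement, and the model ignores memory bandwidth, ISA differences, and scaling overheads):

```python
# Crude throughput model: perf ~ IPC x clock (GHz) x core count.
def relative_throughput(ipc: float, ghz: float, cores: int) -> float:
    return ipc * ghz * cores

# Baseline: a 28-core Xeon at a normalized IPC of 1.0.
xeon = relative_throughput(ipc=1.0, ghz=2.5, cores=28)
# Hypothetical 64-core "A15" variant with a modest IPC advantage
# at the same all-core clock.
arm = relative_throughput(ipc=1.3, ghz=2.5, cores=64)
print(f"hypothetical Arm chip vs Xeon: {arm / xeon:.1f}x")
```

Even with placeholder inputs, the model shows why matching (rather than beating) the Xeon's all-core clocks while stacking more cores is enough to come out ahead in this framing.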

Of course there's still the much more limited instruction set, but my impression is that the upcoming ARMv9 ISA will go a long way towards alleviating that and making ARM much more viable as a high performance general purpose architecture, especially by bringing with it alternatives to AVX and similar heavy compute operations. And you can bet your rear end Apple will be adopting that as early as possible (remember how early they were to jump on 64-bit ARM?).
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.44/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
SPECint and SPECfp aren't something that benefit from superscalar design. They can't exploit the micro-op parallelism or the branch prediction that are the primary features of x86 compared to ARM. Further, ARM has to have higher clocks to be comparable to x86 because memory operations are explicit in ARM where they are implicit in x86. Some of the execution units in x86, for example, are specifically for addressing memory in parallel with the execution of the main instruction.

Parallelism in software always has costs, and the more threads there are, the higher the overhead climbs. This is why simply throwing more cores at a problem won't necessarily improve performance, especially compared to x86, which implements parallelism in hardware at virtually no cost (besides transistors/power).
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
SPECint and SPECfp aren't something that benefit from superscalar design. They can't exploit the micro-op parallelism or the branch prediction that are the primary features of x86 compared to ARM. Further, ARM has to have higher clocks to be comparable to x86 because memory operations are explicit in ARM where they are implicit in x86. Some of the execution units in x86, for example, are specifically for addressing memory in parallel with the execution of the main instruction.

Parallelism in software always has costs, and the more threads there are, the higher the overhead climbs. This is why simply throwing more cores at a problem won't necessarily improve performance, especially compared to x86, which implements parallelism in hardware at virtually no cost (besides transistors/power).
Do none of the benchmarks in the SPEC suite benefit from ILP? That certainly sounds like a relatively major weakness for a benchmark suite aiming to be broadly representative. And do we know that ARMv9 won't implement some form of ILP? (I can't seem to find much concrete info on ARMv9 at all, but given interest in Arm from high performance computing and server hardware makers, I would imagine that to be quite high up the wishlist.) According to Anandtech, ARMv9 is very close to being announced, so I guess we'll see.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
It could also be a political decision, since we have obvious environmental problems and our climate targets are set.
x86 can't operate normally in power envelopes of 2-3 watts, which would greatly reduce the carbon footprint for companies that want to deploy aggressively energy-efficient components.

You speak of high performance from x86, but the cost is systems with a single CPU drawing over 150 watts, up to 400 watts and more.
 