Thursday, June 25th 2020

Bad Intel Quality Assurance Responsible for Apple-Intel Split?

Apple's decision to switch its Mac computers from Intel processors to its own chips, based on the Arm architecture, has shaken up the tech world, even though rumors of the transition had been doing the rounds for months. Intel's first official response, coupled with facts such as Intel's CPU technology execution being thrown completely off track by foundry problems, pointed toward the likelihood of Intel not being able to keep up with Apple's growing performance-per-Watt demands. It now turns out that the reasons are a lot more basic, and date back to 2016.

According to a sensational PC Gamer report citing former Intel principal engineer François Piednoël, Apple's dissatisfaction with Intel dates back to some of its first 14 nm chips, based on the "Skylake" microarchitecture. "The quality assurance of Skylake was more than a problem," says Piednoël. "It was abnormally bad. We were getting way too much citing for little things inside Skylake. Basically our buddies at Apple became the number one filer of problems in the architecture. And that went really, really bad. When your customer starts finding almost as much bugs as you found yourself, you're not leading into the right place," he adds.
It was around that time that decisions were taken at the highest levels in Apple to execute a machine architecture switch away from Intel and x86, the second of its kind following Apple's mid-2000s switch from PowerPC to Intel x86. "For me this is the inflection point," says Piednoël. "This is where the Apple guys who were always contemplating to switch, they went and looked at it and said: 'Well, we've probably got to do it.' Basically the bad quality assurance of Skylake is responsible for them to actually go away from the platform." Apple's decision to dump Intel may have been further precipitated by the string of cybersecurity flaws affecting Intel microarchitectures disclosed through 2019. The PC Gamer report cautions that Piednoël's comments should be taken with a pinch of salt, as he has been among the more outspoken engineers at Intel.
Image Courtesy: ComputerWorld
Source: PC Gamer

81 Comments on Bad Intel Quality Assurance Responsible for Apple-Intel Split?

#51
davideneco
TheLostSwedeHuh? Do you even know François? I mean, he's a character, but I think he knows his shit, at least based on the times I've met him and talked to him.
He's been sh*tposting ever since Intel fired him

And it's funny that he says that, while he was one of the main engineers who worked on Intel's architecture... he designed a bad CPU and now takes shots at Intel? lol
Posted on Reply
#52
DemonicRyzen666
ValantarIt might be, though there's nothing stopping them from making an Arm-based SoC with heaps of cores and PCIe like those server SoCs that are showing up these days. Given that the Mac Pro uses all custom hardware anyway they could just redesign the motherboard around this and keep everything more or less the same. Of course driver support for PCIe devices would be tricky, but it already is for a lot of things on MacOS, so that's not that big of a change.

Denied what? That there are fallbacks? There is nothing a chip designer can do to prevent this (beyond removing older instruction sets I guess), as that is a pure software thing. Software checks the CPUID, whether it is on the list of "has [high performance instruction set X], if yes, run code path A, if no, run code path B.

What you were describing in your previous post sounds like the opposite of that - the ability to run AVX code on hardware without AVX support. This will not work, as the CPU doesn't understand the instructions and thus can't process them. Sure, there might exist translation layers, emulation and similar workarounds in some cases, but they are rare and inevitably tank performance far worse than writing code for a lower common denominator instruction set. The whole point of added instruction sets like AVX is to add the option to run certain specific operations at a higher performance level than could be done with more general purpose instructions - but you can of course do the same work on more general purpose instructions, just slower and with different code.
No, I mean running code anywhere from SSE2/SSE4.1 up to AVX without the software ever calling for it. That's what the PDF stated.
Posted on Reply
#53
R-T-B
TheLostSwedeOne tricky thing with ARM processors is that they rely a LOT on "outside" processing. I.e. you have a lot of sub-processors/accelerators that handle things.
No more so than any SoC these days. What do you mean? They have extensions sure, but so does x86.
davidenecoAnd it's funny that he says that, while he was one of the main engineers who worked on Intel's architecture... he designed a bad CPU and now takes shots at Intel? lol
He was like what, part of a team that designed a bad CPU (it wasn't that bad at launch btw, it was quality assurance that failed)? You can't blame him alone and it doesn't discredit him for this story.

So yeah, quit the FUD. Fact is while this isn't pure fact yet, it isn't "fake news" either.
Posted on Reply
#54
TheLostSwede
News Editor
R-T-BNo more so than any SoC these days. What do you mean? They have extensions sure, but so does x86.
But they're not extensions, a lot of it is actual sub-processors within the SoC.
For example, a lot of ARM SoCs now have something like a Cortex-M0 as their PMC, they have another custom DSP that handles audio, and they have multiple DSPs that handle video encoding, decoding, transcoding, etc., simply because the ARM cores are not powerful enough and not general purpose enough to do a good job at these things. Ok, so some of these things are needed to make an SoC work, but ARM based SoCs have many more sub-processors than x86/x64 CPUs have.

Look at the Renoir die shots that were posted last week as an example, not taking the GPU or interface parts into account, how many sub-processors are there in these? AMD has their Platform Security Processor, but that's it afaik. As this is an APU, it obviously has a media engine as well, which most likely contains some kind of DSP at the very least.



Apple is relying on a lot more additional sub-processors to get things done, as per below. They have an always-on processor, they have the crypto accelerator, a neural engine, a machine learning accelerator (aren't the last two the same thing, more or less?) and a camera processor. Ok, so the last one is because this is more of a tablet chip design, but my point here is that x86/x64 doesn't rely on as many extra bits; instead they rely on raw power, for better or worse. Video codecs are one of the simplest examples, as I pointed out in my previous post in this thread. Every time there's a new video codec, a new hardware block has to be added to ARM SoCs for them to be able to play back the codec, unless it's a very simple codec, since the CPU cores are often not capable of playing back video files based on new, more efficient codecs. Yes, this has been an issue in the past with x86/x64 systems too; both H.264 and H.265 had problems on older CPUs and would need 90-100% of the CPU to do software playback. However, on an ARM based SoC from the same period, the same files simply wouldn't work, due to the reliance on fixed function video decoders.



I'm not saying that x86/x64 platforms aren't using more and more of these sub-processors, but most of them seem to be closely tied to the GPU, rather than the CPU. It's obviously hard to do an apples to apples comparison (no pun intended), as the platforms are so different architecturally. My point was simply that Apple is going to have to be on the cutting edge with these sub-processors all the time, and if they bet on the wrong standard, then you won't be able to watch some content on your shiny new Mac, as the codec isn't supported and might never be.
It's nigh on impossible to predict what will be the winning standards, and as much as most companies bet on H.265, it seems now that, at least to some extent, VP9 and AV1 are gaining popularity due to being royalty free. That means a lot of older ARM based SoCs will be unable to play back this content, due to lack of a decoder, whereas both can be played back on a regular PC just fine.

Sorry about coming back to the video codec thing all the time, but it really is the simplest example which will continue to cause the biggest problems in the future, as long as we don't have a single standard that everyone agrees to use.

Regardless, ARM processors to date are a lot more limited in terms of what they can do on their own, without support from these additional sub- and co-processors.
Posted on Reply
#55
FordGT90Concept
"I go fast!1!11!1!"
ValantarIt might be, though there's nothing stopping them from making an Arm-based SoC with heaps of cores and PCIe like those server SoCs that are showing up these days. Given that the Mac Pro uses all custom hardware anyway they could just redesign the motherboard around this and keep everything more or less the same. Of course driver support for PCIe devices would be tricky, but it already is for a lot of things on MacOS, so that's not that big of a change.
I just can't see that happening because all of the software would have to be rewritten, not only to change from x86 to ARM, but to stop relying on all the specialized instructions, ridiculously high clockspeeds and superscalar design that x86 offers, and replace them with even more parallelism on a weaker common denominator. I think it's likely the market for the Mac Pro will evaporate, because the cost/benefit isn't there to reinvent the wheel for subpar hardware on whatever comes next out of Apple. It makes more sense for the software vendors to switch focus to Windows and/or Linux. The amount of effort required is likely less and the markets are much bigger.
Posted on Reply
#56
R0H1T
Why does it matter if it has dedicated hardware for specific workloads though? Intel makes billions selling FPGAs & accelerators; also, in case you didn't know, not everything is off-die on Axx SoCs. Tell me one x86 instruction set (without fixed function hardware) which works better for cameras than a dedicated ISP, or any DSP found in QC or Apple's chips? This is why I made that comment in the other thread; not everything runs better on x86, and ARM & dedicated hardware is oftentimes much better! Tell Sony why their custom flash controller & dedicated compression is such a bad idea :rolleyes:
Posted on Reply
#57
R-T-B
TheLostSwedeBut they're not extensions, a lot of is actual sub processors within the SoC.
Yeah, and Intel has an iGPU on the SoC, as well as a PCIe root complex and USB/SATA controllers. What's your point? That's how SoCs work.
Posted on Reply
#58
TheLostSwede
News Editor
R-T-BYeah, and Intel has an iGPU on the SoC, as well as a PCIe root complex and USB/SATA controllers. What's your point? That's how SoCs work.
I guess you didn't bother reading my post, so whatever...
FordGT90ConceptI just can't see that happening because all of the software would have to be rewritten, not only to change from x86 to ARM, but to stop relying on all the specialized instructions, ridiculously high clockspeeds and superscalar design that x86 offers, and replace them with even more parallelism on a weaker common denominator. I think it's likely the market for the Mac Pro will evaporate, because the cost/benefit isn't there to reinvent the wheel for subpar hardware on whatever comes next out of Apple. It makes more sense for the software vendors to switch focus to Windows and/or Linux. The amount of effort required is likely less and the markets are much bigger.
Yes and no.

I think Apple is betting big on their iOS/iPadOS ecosystem when it comes to software. A lot of major software is already available for these platforms, and I guess the final OS for the new ARM based Macs will be based a lot more on the mobile OSes; as such, many of the apps are likely to just need UI changes to work on larger and higher resolution screens. That's not a minor task in all fairness, but I believe it's easier to do than re-writing x86/x64 software for ARM.

They're also making some bold claims about developers having to make next to no changes to their software to make it work on the new processors, but I'm not sure I'm buying that. A lot of that also seems to hinge on Rosetta 2, and then you're losing a lot of performance to the translation layer. I mean, does anyone remember Transmeta? Sure, that was VLIW, not RISC, but it still had a translation layer, which was partially in hardware and as such should be a lot faster than doing it all in software, which is what I presume Rosetta 2 is doing.
Posted on Reply
#59
FordGT90Concept
"I go fast!1!11!1!"
Just because it works doesn't mean it will justify the four digit price Apple is going to demand for the hardware. ARM doesn't scale well by design because it's not superscalar. The x86 processors available today, on a single thread basis, are faster than they were one, two, and three decades ago. That's not by virtue of just increased clockspeeds, but by improvements in the superscalar architecture that breaks x86 instructions down into micro instructions that are executed in parallel as much as possible. The best ARM can do in this regard is farm it out to an ASIC. People didn't buy Mac Pros for ASICs, they bought them for specific hardware capabilities. Emulation isn't going to make up for that.

I just wonder how long Apple will keep the Mac Pro around. Is the model out now truly the last or are they going to keep it around for a while based on x86. Seeing how Apple seemed to have burned the bridge with Intel, maybe the next Mac Pro will be powered by AMD? Apple hasn't ruled that out, as far as I know.
Posted on Reply
#60
Valantar
FordGT90ConceptI just can't see that happening because all of the software would have to be rewritten, not only to change from x86 to ARM, but to stop relying on all the specialized instructions, ridiculously high clockspeeds and superscalar design that x86 offers, and replace them with even more parallelism on a weaker common denominator. I think it's likely the market for the Mac Pro will evaporate, because the cost/benefit isn't there to reinvent the wheel for subpar hardware on whatever comes next out of Apple. It makes more sense for the software vendors to switch focus to Windows and/or Linux. The amount of effort required is likely less and the markets are much bigger.
FordGT90ConceptJust because it works doesn't mean it will justify the four digit price Apple is going to demand for the hardware. ARM doesn't scale well by design because it's not superscalar. The x86 processors available today, on a single thread basis, are faster than they were one, two, and three decades ago. That's not by virtue of just increased clockspeeds, but by improvements in the superscalar architecture that breaks x86 instructions down into micro instructions that are executed in parallel as much as possible. The best ARM can do in this regard is farm it out to an ASIC. People didn't buy Mac Pros for ASICs, they bought them for specific hardware capabilities. Emulation isn't going to make up for that.

I just wonder how long Apple will keep the Mac Pro around. Is the model out now truly the last or are they going to keep it around for a while based on x86. Seeing how Apple seemed to have burned the bridge with Intel, maybe the next Mac Pro will be powered by AMD? Apple hasn't ruled that out, as far as I know.
This is by no means a trivial task, but to a large degree they have a captive audience (so to speak) in a lot of markets. Audio professionals can't do the majority of their work on Windows PCs due to how Windows handles DPC latency (in a word: poorly). Video professionals are more flexible, but not those reliant on Final Cut, which Apple is obviously bringing forward to these new Macs. And a lot of the rest are used to using a Mac and want to continue doing so.

The Adobe ecosystem is already on its way through the iPad, and will obviously be fully compatible with Arm Macs. CAD, 3D modelling, etc. is likely more of a wash, but you should never discount the value of user familiarity; it might be cheaper for a lot of companies to buy a more expensive software license due to the architecture migration than to re-train their staff to work in Windows. Etc., etc.

The Mac Pro as we know it now might not survive (though there have been relatively recent statements by people high up in Apple suggesting that it will stick around), but they are definitely not dropping out of high performance computing. The new Mac Pro has sold like hotcakes; there was huge pent-up demand for a high performance Mac ever since the 2013 trash can Mac Pro was launched and subsequently never updated. While Apple's cash cows are the iPhone and peripherals, their image is largely built on professional users of their desktops and laptops, and abandoning those in favor of a pure low-performance (iPad Pro equivalent and down, or thereabouts) lineup would be a rather absurd thing for them to do. Of course they might still do so, but I sincerely doubt it.
Posted on Reply
#61
Ravenas
davideneco"citing former Intel principal engineer François Piednoël"
Fake news so
It's trendy how people follow Trump in saying "fake news". It's basically used for anything you don't agree with, true or untrue.
Posted on Reply
#62
R-T-B
TheLostSwedeI guess you didn't bother reading my post, so whatever...
No, I did. I just don't see how, say, a tensor subprocessor is any different from a discrete tensor chip on x86.
Posted on Reply
#63
tygrus
TheLostSwedeOne tricky thing with ARM processors is that they rely a LOT on "outside" processing. I.e. you have a lot of sub-processors/accelerators that handle things. This might work well for Apple, as they control the OS as well, but this is why, imho, Microsoft is having issues with Windows on ARM.
Beyond the GPU, you have things like media encoders/decoders (ARM processors aren't great at doing software video decoding and are even worse at encoding), network accelerators, crypto accelerators, etc. I mean, Apple provided a great example of this themselves.
...
This is sort of the core advantage of x86/x64: the CPU cores are a lot more multi-purpose and can process a lot of different data "better" than ARM cores. Obviously some of this comes down to software optimisation and some to pure raw GHz, as most ARM SoCs are still clocked far slower than the equivalent x86/x64 parts. However, as power efficient as ARM processors are, there are a lot of things in which they're unlikely to overtake the x86/x64 processors, at least not in the foreseeable future.

Relying on accelerators/co-processors does have some advantages as well, as you can fairly easily swap out one IP block for another and have a slightly different SKU. I'm not sure this fits the Apple business model though. I guess they could also re-purpose a lot of the IP blocks between different SoC SKUs. The downside is as pointed out above, that if your SoC lacks an accelerator for something, you simply can't do it. Take Google's VP9 for example. It can quite easily be software decoded on an x86/x64 system, whereas on ARM based systems, you simply can't use it, unless you have a built in decoder specifically for that codec.

This also makes for far more complex SoCs and if one of these sub-processors fail, you have a dud chip, as you can't bin chips as a lower SKU if say the crypto accelerator doesn't work.

It's going to be interesting to see where Apple ends up, but personally I think this will be a slow transition that will take longer than they have said.
It'll also highly depend on Apple's customers, as I can't imagine everyone will be happy about this transition, especially those that dual boot and need access to Windows or another OS at times.
Apple makes you do it their way or no way. Apple decides the HW & SW for offloading/accelerating image/video/audio/AI; you can use it, or things are slow or fail. Both Apple & Android have forced developers to constantly make changes or rewrite apps to suit HW & SW updates. MS Windows once allowed for long-term compatibility, but it's getting harder. The only advantage with Apple is making sure apps can back up data and reload it onto a new/fixed device. Modern Android security is blocking general backups (hit-and-miss with apps and cloud storage).
Posted on Reply
#64
watzupken
john_If that was the truth, Apple would have gone AMD for desktop models from 2018 and for laptops from this year, while getting ready for the final ARM transition anyway. If Skylake was that bad and considering the performance of Ryzen 2000 and Threadripper models, Apple would have already gone AMD.
Considering that Apple was already midway through executing the plan in 2018, there was no reason for them to stop and consider AMD as an alternative.
FordGT90ConceptI just can't see that happening because all of the software would have to be rewritten, not only to change from x86 to ARM, but to stop relying on all the specialized instructions, ridiculously high clockspeeds and superscalar design that x86 offers, and replace them with even more parallelism on a weaker common denominator. I think it's likely the market for the Mac Pro will evaporate, because the cost/benefit isn't there to reinvent the wheel for subpar hardware on whatever comes next out of Apple. It makes more sense for the software vendors to switch focus to Windows and/or Linux. The amount of effort required is likely less and the markets are much bigger.
I feel software is always about optimizing to make things work. While ARM SoCs are nowhere near as powerful as an x86 based processor, they tend to make up the deficit by spamming more cores. Most software makers will make an effort to optimize their software for Apple because, despite the premium in the Apple ecosystem, they still sell well. Considering that Apple has pulled off this kind of transition before, I feel they will likely pull it off this time as well. Whether the end product will suit everyone, time will tell. We can at least get a sense of the performance when the first ARM based Mac gets released.
Posted on Reply
#65
Bwaze
I think software side could be as important as hardware - lots and lots of programmers have switched to ARM app development, so even high budget x86 software giants like Adobe have problems with their software. Lightroom for instance has tons of old bugs - for years now it runs faster on Intel if you switch off hyperthreading!

So it doesn't help if the AMD makes highly efficient processors, or if Intel miraculously makes a new, much better processor line - the software side of x86 is even worse than the hardware. I partially blame Intel for forcing companies not to use multicore efficiently, because that would favor AMD's Zen - so outside of 3D rendering there are very few applications that fully use modern PC processors.
Posted on Reply
#66
1d10t
Three words: Control, Cost and Profit :D
Posted on Reply
#67
FordGT90Concept
"I go fast!1!11!1!"
watzupkenI feel software is always about optimizing to make things work. While ARM SoCs are nowhere near as powerful as an x86 based processor, they tend to make up the deficit by spamming more cores. Most software makers will make an effort to optimize their software for Apple because, despite the premium in the Apple ecosystem, they still sell well. Considering that Apple has pulled off this kind of transition before, I feel they will likely pull it off this time as well. Whether the end product will suit everyone, time will tell. We can at least get a sense of the performance when the first ARM based Mac gets released.
Performance soared going from PowerPC to x86. The opposite is true in this case. That shift in performance made Mac Pro more attractive, not less.
Posted on Reply
#68
Assimilator
Vya DomusWhat intrigues me the most is, why the hell was Apple so involved in the development of Intel's architectures? I mean, this doesn't seem like a simple collaboration with a customer that got the end product; it looks to me like they had access to some pretty deep and low level engineering that Intel was doing from early on in the development process. I know Apple was an important customer, but it just seems odd they'd have so much access to all of this. I wonder how much know-how "migrated" to Apple in all of these years. Maybe that was the goal altogether.
Once again, control. Apple wants to control everything, and they have the cash reserves to buy that control.

Your point about the "migration" of knowledge is an interesting one - I do wonder how many Intel engineers "migrated" over to Apple's CPU engineering division during this time.
ValantarPerf/W is one thing, AnandTech's SPEC testing shows that Apple's current mobile chips are ahead of Skylake and its derivatives in IPC.
Using SPEC 2006... a benchmark that is 14 years old... and has been officially retired by its authors. Would you put any faith in a GPU review that used 3DMark05 to rate a Turing or Navi GPU? Didn't think so.
ValantarIf it also scales up to 4GHz+ at reasonable power, those chips will be pretty powerful.
ARM released its first CPU that could hit 2GHz in 2009 at 40nm. Over a decade later, there are no commercial ARM CPUs that are able to hit even 3GHz at 7nm. That's the reason they jumped on the MOAR CORES bandwagon, because the architecture has hit a very fundamental clock speed wall that they haven't been able to overcome (similarly to Intel with NetBurst, and that uarch wasn't salvageable at the end of the day... makes you wonder...).
BwazeIntel for forcing companies not to use multicore efficiently, because that would favor AMD's Zen
[citation needed]

Software is bad because many of the ginormous companies that write the software that everyone uses as standard, are really bad at writing software. What they are good at is marketing and crushing or buying out any competitors so that they don't have to write good software. Adobe is probably the best-known example, but there are many others across all sectors (Sage is one in financials, for example).

When you couple the fact that these companies can't write good software, and the fact that writing multithreaded code is difficult, and the fact that most app workloads aren't easily parallelised, the end result is software that is either slow and inefficient, or even buggier than you'd expect.
FordGT90ConceptPerformance soared going from PowerPC to x86. The opposite is true in this case. That shift in performance made Mac Pro more attractive, not less.
But Apple has the "performance" users locked into their ecosystem so that it's too much of a pain to think of going anywhere else - or at least, they think they do.

Like you said, quite possibly this is another long-term Apple strategy, to get rid of the so-called "high-end" machines side of the business and only concentrate on making phones and netbooks (sorry, despite what Apple says, a so-called laptop with an ARM CPU will always be a netbook to me). Considering where the majority of Apple's profits come from, and the fact that the niche "high-end" market likely costs them a lot more relatively, it would make a lot of sense.
Posted on Reply
#69
TheLostSwede
News Editor
R-T-BNo, I did. I just don't see how say a tensor subprocessor is any different than a discrete tensor chip on x86.
No? Then you're not thinking very far. With a discrete part, you can swap it out. Apple's new Macs will force you to buy a new one to get support for new technology. In all fairness, I guess that's notebooks in general, but an x86/x64 system is still often able to do things using software decoders etc. which the new Macs can't.
Posted on Reply
#70
freeagent
I’m sure I read an article like 10 years ago that pretty much said Apple was leaving IBM for Intel until they could get their own hardware up and running. So them running Intel hardware was always a temporary thing.
Posted on Reply
#71
Valantar
FordGT90ConceptPerformance soared going from PowerPC to x86. The opposite is true in this case. That shift in performance made Mac Pro more attractive, not less.
That's the thing though: Anandtech's testing shows that Apple's most recent Arm architectures have higher IPC (as measured in SPECint and SPECfp - arguably a limited scenario, but also as close to industry standard as you get) than Skylake and its siblings (don't know how it compares to Ice Lake or the upcoming Tiger Lake). As long as they can clock them high enough, absolute performance as such shouldn't be an issue. Given that the Mac Pro uses the relatively lacklustre up-to-28-core Xeons, which don't clock high at all, it certainly doesn't sound like too much of a challenge for Apple to make a... let's say 64-core "A15" variant for the Mac Pro that beats the IPC of the current Xeons, matches its clocks even under all-core load (those Xeons don't clock high in that scenario, so that wouldn't even require beating current phone Socs), but has heaps more cores (not to mention they could add in any accelerators they wanted).

Of course there's still the much more limited instruction set, but my impression is that the upcoming ARMv9 ISA will go a long way towards alleviating that and making ARM much more viable as a high performance general purpose architecture, especially by bringing with it alternatives to AVX and similar heavy compute operations. And you can bet your rear end Apple will be adopting that as early as possible (remember how early they were to jump on 64-bit ARM?).
Posted on Reply
#72
FordGT90Concept
"I go fast!1!11!1!"
SPECint and SPECfp aren't something that benefits from superscalar design. They can't exploit the micro-op parallelism or the branch prediction which are the primary features of x86 compared to ARM. Further, ARM has to have higher clocks to be comparable to x86, because memory operations are explicit in ARM where they are implied in x86. Some of the execution units in x86, for example, are specifically for addressing memory in parallel with the execution of the main instruction.

Parallelism in software always has costs and the more threads there are, the higher the cost of overhead climbs. This is why simply throwing more cores at a problem won't necessarily improve performance, especially compared to x86 which implements parallelism in hardware at virtually no cost (besides transistors/power).
Posted on Reply
#73
Valantar
FordGT90ConceptSPECint and SPECfp aren't something that benefits from superscalar design. They can't exploit the micro-op parallelism or the branch prediction which are the primary features of x86 compared to ARM. Further, ARM has to have higher clocks to be comparable to x86, because memory operations are explicit in ARM where they are implied in x86. Some of the execution units in x86, for example, are specifically for addressing memory in parallel with the execution of the main instruction.

Parallelism in software always has costs and the more threads there are, the higher the cost of overhead climbs. This is why simply throwing more cores at a problem won't necessarily improve performance, especially compared to x86 which implements parallelism in hardware at virtually no cost (besides transistors/power).
Do none of the benchmarks in the Spec suite benefit from ILP? That certainly sounds like a relatively major weakness for a benchmark suite aiming to be broadly representative. And do we know that ARMv9 won't implement some form of ILP? (I can't seem to find much concrete info on ARMv9 at all, but given interest in Arm from high performance computing and server hardware makers I would imagine that to be quite high up the wishlist.) According to Anandtech ARMv9 is very close to being announced, so I guess we'll see.
Posted on Reply
#74
ARF
It can also be a political decision since we have obvious environmental problems and our climate targets are set.
x86 can't work normally in low power envelopes of 2-3 watts, which matters greatly to companies that would like to aggressively roll out highly energy-efficient components to reduce their carbon footprint.

You are speaking of high performance from x86, but the cost is systems with a single CPU drawing over 150 watts, sometimes up to 400 watts or more.
Posted on Reply
#75
R-T-B
TheLostSwedeNo? Then you're not thinking very far. With a discrete part, you can swap it out. Apple's new Macs will force you to buy a new one to get support for new technology. In all fairness, I guess that's notebooks in general, but an x86/x64 system is still often able to do things using software decoders etc. which the new Macs can't.
Oh, I agree with that. But that's just typical apple. I thought you meant it was because the ARM design itself was somehow technologically inferior. I was saying they could do discrete, but we both know they won't.
Posted on Reply