
Editorial What the Intel-AMD x86 Ecosystem Advisory Group is, and What it's Not

I assume it's the same for Zen 5c with server variants having 512-bit, but I haven't read anything definite about that.
It is; Zen 5 has four sub-versions in total: Zen 5 and Zen 5c, each with either 256-bit or 512-bit datapaths for SIMD.
ARM and RISC-V are more of a threat to each other
Throwback to that website ARM put up bashing RISC-V, which clearly showed they were scared of it eating into their share of the embedded world lol
 
ARM and RISC-V are more of a threat to each other than they are to x86. If anything was going to beat x86 it would have been the DEC Alpha in the 90s.

If this is the outdated "RISC is better than CISC" argument from 30 years ago, there's nothing about x86's ISA that makes it more performant nor is there anything about ARM's ISA that makes it more power efficient.
There was, in the past, when CPUs didn't have a front end and executed the instructions directly. CISC had a power-consumption disadvantage because it required more transistors in the execution pipeline. But now, all x86 processors have a front end that decodes x86 instructions into smaller ones.

At first, that front end was one of the reasons why x86 was less efficient, but CPUs got so large that the front end is a small portion of the whole core anyway.

Also, if you look at the latest ARM instruction sets, I wonder if it can still be called RISC. They now have a front end too.


In the end, one of the main reasons x86 cores are less efficient is that most x86 architectures target the server market, where low power consumption isn't the priority. They didn't spend a lot of R&D on low-power architecture because it wasn't useful to them. ARM, on the other side, was aiming for the low-power market, and all its manufacturers aimed their R&D at low-power devices.

Intel and AMD made small attempts at low power, but they would probably have needed way too much money to get competitive, and anyway, they aim at the high-margin server market and not the low-margin mobile market.
 
>>...AVX-512 was proposed by Intel more than a decade ago—in 2013 to be precise...

It was a Complete Disaster. Period. This point of view is based on my software development experience using an Intel KNL server with a Xeon Phi CPU.

>>...A decade later, the implementation of this instruction set on CPU cores remains wildly spotty

This is because AVX-512 is Too Fragmented. Period. Again, this point of view is based on my software development experience using an Intel KNL server with a Xeon Phi CPU.
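That fragmentation is also why anything shipping AVX-512 code has to probe the exact subsets at runtime before dispatching. A minimal sketch in C, using the GCC/Clang __builtin_cpu_supports builtins (feature names as documented for those compilers), just to illustrate the kind of check involved:

```c
#include <stdio.h>

/* Probe a few AVX-512 subsets at runtime before picking a code path.
 * No two CPU generations are guaranteed to expose the same combination,
 * which is exactly the fragmentation problem. */
int main(void)
{
    __builtin_cpu_init();

    printf("avx512f : %d\n", __builtin_cpu_supports("avx512f")  != 0);
    printf("avx512dq: %d\n", __builtin_cpu_supports("avx512dq") != 0);
    printf("avx512bw: %d\n", __builtin_cpu_supports("avx512bw") != 0);
    printf("avx512vl: %d\n", __builtin_cpu_supports("avx512vl") != 0);

    if (__builtin_cpu_supports("avx512f"))
        puts("-> AVX-512 foundation present, 512-bit path usable");
    else if (__builtin_cpu_supports("avx2"))
        puts("-> no AVX-512, falling back to the AVX2 path");
    else
        puts("-> scalar/SSE fallback");
    return 0;
}
```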

>>...Intel implemented it first on an HPC accelerator, then its Xeon server processors...

Intel KNL servers with Xeon Phi series CPUs.

>>...before realizing that hardware hasn't caught up with the technology to execute AVX-512 instructions in an energy-efficient manner...

Energy-efficient... Really? It was an Energy Hog! I would also like to add that it was Too Expensive compared to NVIDIA GPUs.

>>...AMD implemented it just a couple of years ago...

An absolute mistake, because most software developers do Not care about the AVX-512 ISA.
 
An absolute mistake, because most software developers do Not care about the AVX-512 ISA.
Speak for yourself, AVX-512 is finally getting some really nice traction due to AMD, and provides a hefty performance uplift in many tasks.
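For anyone curious where that uplift comes from: one 512-bit instruction handles 16 floats at once. A minimal sketch using the standard immintrin.h intrinsics (compile with -mavx512f; the tail handling and the runtime feature check are omitted to keep it short):

```c
#include <immintrin.h>
#include <stddef.h>

/* Add two float arrays 16 elements per iteration with AVX-512.
 * Assumes n is a multiple of 16; a real kernel would handle the
 * remainder and fall back when AVX-512 is not reported by the CPU. */
void add_f32_avx512(const float *a, const float *b, float *out, size_t n)
{
    for (size_t i = 0; i < n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);   /* 16 floats from a */
        __m512 vb = _mm512_loadu_ps(b + i);   /* 16 floats from b */
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
    }
}
```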
 
beat x86 it would have been the DEC Alpha in the 90s.
I would dare say that PowerPC also had a real chance, but as usual, both IBM and Motorola dropped the ball.

About this announcement, I would love the return of a shared/compatible CPU socket.

That would reduce the prices of future motherboards, I think.
 
The 80186 was (roughly) the microcontroller version of the 80286. Interestingly, I can find pics of them marked with Ⓜ AMD © INTEL, or © AMD, or Ⓜ © INTEL (still made by AMD), or Ⓜ © AMD © INTEL. AMD also used both type numbers, 80186 and Am186. This probably hints at their magnificent army of lawyers, engineers, reverse engineers, and reverse lawyers.
The 80186 was more an enhanced version of the 8086 than a variant of the 80286. It had a handful of extra instructions and the illegal-opcode exception notably lacking from the original, but it didn't have any of the fancy Protected Mode features introduced in the 80286. Yes, Protected Mode was technically introduced in the 286, but it was still 16-bit and a nightmare in general, so the 32-bit Protected Mode introduced in the 386 became synonymous with the mode.
What is the 8086?
A stop-gap CPU introduced by Intel while they worked on a proper 32-bit CPU to compete with the new 32-bit chips made by other companies. Nevertheless it (or more specifically its 8088 variant) was chosen as the CPU in the original IBM PC, which was a massive hit, and thus the x86 architecture became the basis for all subsequent PCs, likely including the device you're reading this on now (unless it's a phone or tablet, in which case it probably uses ARM). x86 was a hodgepodge of a chip even when it was introduced, a trend that it very much continued as it evolved. It wasn't designed for the future.
There was, in the past, when CPUs didn't have a front end and executed the instructions directly. CISC had a power-consumption disadvantage because it required more transistors in the execution pipeline. But now, all x86 processors have a front end that decodes x86 instructions into smaller ones.

At first, that front end was one of the reasons why x86 was less efficient, but CPUs got so large that the front end is a small portion of the whole core anyway.

Also, if you look at the latest ARM instruction sets, I wonder if it can still be called RISC. They now have a front end too.
The lines between CISC and RISC are so blurred with advanced CPUs that the terms are effectively obsolete. Past the decoders they all have a variety of different units and accelerators acting on micro-ops.
 
I love how TechPowerUp refuses to acknowledge that AMD is working on an ARM SoC for 2026, called Soundwave. It has been known for more than 6 months. It might even be a hybrid architecture. Nvidia and MediaTek are joining forces on an ARM SoC in 2025; it's not just Nvidia alone.

Ian Cutress did a nice job explaining this announcement earlier today.
Where does he work now? Do you have a link? I would love to continue reading his tech articles.
 
A stop-gap CPU introduced by Intel while they worked on a proper 32-bit CPU to compete with the new 32-bit chips made by other companies. Nevertheless it (or more specifically its 8088 variant) was chosen as the CPU in the original IBM PC, which was a massive hit, and thus the x86 architecture became the basis for all subsequent PCs, likely including the device you're reading this on now (unless it's a phone or tablet, in which case it probably uses ARM). x86 was a hodgepodge of a chip even when it was introduced, a trend that it very much continued as it evolved. It wasn't designed for the future.
There is the persistent theory that IBM would have chosen the m68k had it been ready, but the PC might then have just become another of the great many microcomputers of the era (Atari ST, Amiga, and the m68k Mac) that have since fallen by the wayside.

FWIW, and to my 200-level assembly language sensibility, the base m68k was so much more elegant and easier to use than base x86. An alternate history with some spun-off Motorola subsidiary operating in Intel's niche and Intel operating in, say, Micron's niche in the real world could have been fun to read about.
 
There is the persistent theory that IBM would have chosen the m68k had it been ready, but the PC might then have just become another of the great many microcomputers of the era (Atari ST, Amiga, and the m68k Mac) that have since fallen by the wayside.

FWIW, and to my 200-level assembly language sensibility, the base m68k was so much more elegant and easier to use than base x86. An alternate history with some spun-off Motorola subsidiary operating in Intel's niche and Intel operating in, say, Micron's niche in the real world could have been fun to read about.
Much of the PC's success was due to it being a very open architecture. It was clear how everything worked, and you could easily develop and use your own hardware and software for it. Even the BIOS source code was published. It was still copyrighted, which forced competitors to clean-room design their own, but its functionality was easily and completely understood. It was also easily expandable.
 
Oh, imagine if RISC-V evolved into RISC-VI and actually had all those extra features it needs to match the performance of x86-64

Or AMD will just adopt RISC-V cores on an extra chiplet :)
 
Oh, imagine if RISC-V evolved into RISC-VI and actually had all those extra features it needs to match the performance of x86-64
What do ISA features have to do with performance? The only relevant part that was missing in RISC-V was a proper vector extension, which has been ratified since 2021 (see the sketch at the end of this post).
Or AMD will just adopt RISC-V cores on an extra chiplet :)
Do you mean for internal use? Nvidia already does something similar with their Falcon MCU. Some other manufacturers also use RISC-V based MCUs for many different things; those are just not really visible to the end user.
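Coming back to the vector extension point above, here is roughly what a vectorized loop looks like with the RVV C intrinsics. This is a sketch assuming a toolchain that implements the v1.0 intrinsics spec (riscv_vector.h with the __riscv_ prefix), so treat the exact spellings as toolchain-dependent; the nice part is that vsetvl does the strip-mining, so there is no separate tail loop:

```c
#include <riscv_vector.h>
#include <stddef.h>

/* Add two float arrays using the RISC-V Vector extension (RVV 1.0).
 * __riscv_vsetvl_e32m1 returns how many 32-bit elements the hardware
 * processes per iteration, so the loop adapts to any vector length. */
void add_f32_rvv(const float *a, const float *b, float *out, size_t n)
{
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m1(n);              /* elements this pass */
        vfloat32m1_t va = __riscv_vle32_v_f32m1(a, vl);   /* load from a */
        vfloat32m1_t vb = __riscv_vle32_v_f32m1(b, vl);   /* load from b */
        __riscv_vse32_v_f32m1(out, __riscv_vfadd_vv_f32m1(va, vb, vl), vl);
        a += vl; b += vl; out += vl; n -= vl;
    }
}
```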
 
A good way for Intel and AMD to increase the performance of their x86 processors, in the face of growing ARM and RISC-V competition, would be for both of them to let the iGPU of their APUs and SoCs be used as a co-processor by the OS, apps and even games, for general-purpose processing. The iGPU should be usable as a co-processor even by games running on a dedicated GPU (AIC/VGA).

Used as a co-processor, the iGPU is capable of being dozens of times faster than the x86 cores.

And, of course, there should be a standard shared between Intel and AMD processors, so that the same software can run on the iGPUs of both companies.

If Nvidia starts to act strongly in the ARM processor market, it can easily and quickly implement the above, as it already has all the GPU hardware technology and software support ready, and also has an extremely good relationship with software developers.
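For what it's worth, a cross-vendor standard for this kind of offload already exists: OpenCL (and more recently SYCL/oneAPI) runs the same kernels on Intel, AMD and Nvidia GPUs, integrated or discrete. A bare-bones host sketch in C, just to show the shape of it; the "scale" kernel and buffer size are made up for illustration, and most error handling and cleanup are omitted:

```c
#include <CL/cl.h>
#include <stdio.h>

/* Bare-bones OpenCL host sketch: grab a GPU device (typically the iGPU on a
 * system without a discrete card selected), build a trivial kernel, run it. */
static const char *src =
    "__kernel void scale(__global float *x, float k) {"
    "    size_t i = get_global_id(0);"
    "    x[i] = x[i] * k;"
    "}";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    /* OpenCL 2.0+; older drivers use clCreateCommandQueue instead */
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    float data[1024];
    for (int i = 0; i < 1024; i++) data[i] = (float)i;

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(factor), &factor);

    size_t gsize = 1024;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gsize, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]);  /* expect 20.0 */
    return 0;
}
```

The practical problem has never really been the API, it's been deciding which workloads are worth the trip across the bus, which is what the replies below get into.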
 
would be for both of them to let the iGPU of their APUs and SoCs be used as a co-processor by the OS, apps and even games, for general-purpose processing. The iGPU should be usable as a co-processor even by games running on a dedicated GPU (AIC/VGA).
That's not how it works. A GPU can't just magically run all kinds of tasks that a CPU can.
If Nvidia starts to act strongly in the ARM processor market, it can easily and quickly implement the above, as it already has all the GPU hardware technology and software support ready, and also has an extremely good relationship with software developers.
Nvidia already has such products; look into their Grace lineup. And guess what, the way you describe is not how it works.
 
The main issue with dual GPU is that a lot of data is reused during rendering. Each pixel gets multiple passes and multiple calculations. This temporary data is stored in GPU memory, and it would have to either be copied to main memory or accessed from there. You would hit PCIe bandwidth limitations and increased latency that would kill any hope of a performance gain.
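To put rough numbers on that (back-of-envelope, so treat them as illustrative): a single 4K render target is about 3840 × 2160 × 4 bytes ≈ 33 MB. A modern renderer touches several such buffers per frame, so shuttling even five of them across the bus at 120 fps is already around 33 MB × 5 × 120 ≈ 20 GB/s, a big chunk of the ~32 GB/s a PCIe 4.0 x16 link offers, before counting the actual frame output, textures and synchronization traffic.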

This is actually what killed SLI/Crossfire. The dedicated link between GPUs was not even fast enough to give decent performance.

With DirectX 12 it's not impossible to do, but it would be a nightmare, as the number of dedicated GPUs that pair with a same-architecture iGPU and could use the same driver and the same compiled shaders is incredibly small.

Not counting that temporal effects are very common. This killed the last hope for Crossfire/SLI, as you have to reuse data from multiple previous frames.
 
The main issue with dual GPU is that a lot of data is reused during rendering. Each pixel gets multiple passes and multiple calculations. This temporary data is stored in GPU memory, and it would have to either be copied to main memory or accessed from there. You would hit PCIe bandwidth limitations and increased latency that would kill any hope of a performance gain.

This is actually what killed SLI/Crossfire. The dedicated link between GPUs was not even fast enough to give decent performance.

With DirectX 12 it's not impossible to do, but it would be a nightmare, as the number of dedicated GPUs that pair with a same-architecture iGPU and could use the same driver and the same compiled shaders is incredibly small.

Not counting that temporal effects are very common. This killed the last hope for Crossfire/SLI, as you have to reuse data from multiple previous frames.

I know all that you said, but I didn't say that the iGPU should be used in SLI/Crossfire mode.

I said that the iGPU should be used as a general-purpose co-processor, for tasks where it can be used as a co-processor, since for some tasks the iGPU can be tens of times faster than the x86 cores while consuming a small fraction of the energy that the x86 cores would consume to do the same task.

If Nvidia is going to enter the consumer processor market, that seems to be exactly what it will do.

And this idea of using the iGPU as a general-purpose co-processor is not new. AMD engineers had this idea over 20 years ago. This was even one of the reasons AMD bought ATI.

Without mentioning names, companies X and Y have always helped each other in secret during each other's difficult times. Maybe this idea of using the iGPU as a co-processor was not implemented more than 10 years ago because both companies (X and Y) made an agreement, always in secret (of course), that neither of them would ruin the profits of the other.
 
I didn't say that "a GPU can't just magically run all kinds of tasks that a CPU can".
"general purpose" imples in "all kinds of tasks a CPU can".
for tasks where it can be used as a co-processor
That's a really important point: you can't just shove everything in there, and I don't think there are many tasks that could be easily offloaded there that wouldn't be better off on the main GPU anyway.

Anyhow, your point is not really related to x86; it has nothing to do with the ISA, it's more of a software thing. Some software already makes use of Quick Sync (which lives inside Intel's iGPU) for some tasks, as an example.
 
"general purpose" imples in "all kinds of tasks a CPU can".

That's a really important point, you can't just shove everything in there, and I don't think there are many tasks that can be easily offloaded to there that wouldn't be better on the main GPU anyway.

Anyhow, your point is not really related to x86, that's has nothing to do with the ISA, that's more of a software thing. Some software already makes use of Quick Sync (which lives inside Intel's iGPU) for some tasks, as an example.

We have another keyboard engineer here...
 
The 80186 was more an enhanced version of the 8086 than a variant of the 80286. It had a handful of extra instructions and the illegal-opcode exception notably lacking from the original, but it didn't have any of the fancy Protected Mode features introduced in the 80286. Yes, Protected Mode was technically introduced in the 286, but it was still 16-bit and a nightmare in general, so the 32-bit Protected Mode introduced in the 386 became synonymous with the mode.

A stop-gap CPU introduced by Intel while they worked on a proper 32-bit CPU to compete with the new 32-bit chips made by other companies. Nevertheless it (or more specifically its 8088 variant) was chosen as the CPU in the original IBM PC, which was a massive hit, and thus the x86 architecture became the basis for all subsequent PCs, likely including the device you're reading this on now (unless it's a phone or tablet, in which case it probably uses ARM). x86 was a hodgepodge of a chip even when it was introduced, a trend that it very much continued as it evolved. It wasn't designed for the future.

The lines between CISC and RISC are so blurred with advanced CPUs that the terms are effectively obsolete. Past the decoders they all have a variety of different units and accelerators acting on micro-ops.
I only wanted to know what these numbers mean. x-?, 8-?, 6-?
 
I only wanted to know what these numbers mean. x-?, 8-?, 6-?
Nothing, really. The CPU that was most influential in kicking this whole thing off was the 8086, via the IBM PC and co., so the number became valuable, and Intel followed it up with the 80186, 80286 and 80386, hence the 80x86 pattern, which got shortened to x86.

The 8086 came from Intel's naming scheme at the time. It's been a while and I'm sure there is a guide somewhere on the Internet, but from what I recall:
The 1st digit was about technology; I believe it started with PMOS, NMOS etc., but the ones interesting to us are 4 and 8, which denote 4-bit and 8-bit chips (at least initially, since the 8086 is 16-bit).
The 2nd digit was chip type: 0 processor, 1 RAM, 2 controller, 3 ROM, etc.
The last two digits were generally a sequence, but sometimes pretty freeform. Not every number became a product, and it wasn't always sequential; "sounds nice" was sometimes a deciding factor as well.
 
I only wanted to know what these numbers mean. x-?, 8-?, 6-?

To add to londiste's reply, the 8086 was directly preceded by the 8085 (the 5 because it had a single 5 V power supply) and before that the 8080 and 8008. All the later chips were source-code compatible with the earlier ones, given the appropriate assemblers.

 
Nothing, really. The CPU that was most influential in kicking this whole thing off was the 8086, via the IBM PC and co., so the number became valuable, and Intel followed it up with the 80186, 80286 and 80386, hence the 80x86 pattern, which got shortened to x86.

The 8086 came from Intel's naming scheme at the time. It's been a while and I'm sure there is a guide somewhere on the Internet, but from what I recall:
The 1st digit was about technology; I believe it started with PMOS, NMOS etc., but the ones interesting to us are 4 and 8, which denote 4-bit and 8-bit chips (at least initially, since the 8086 is 16-bit).
The 2nd digit was chip type: 0 processor, 1 RAM, 2 controller, 3 ROM, etc.
The last two digits were generally a sequence, but sometimes pretty freeform. Not every number became a product, and it wasn't always sequential; "sounds nice" was sometimes a deciding factor as well.
I'd never considered that Intel could have had a naming scheme from the beginning, but there must be something to that, yes.
Intel's first DRAM chip was the 1103. ROMs were 23xx, EPROMs were 27xx, and EEPROMs/flash were (and still are) 29xx, where xx was the capacity in kilobits.

And it continues to this day. The Raptor Lake desktop chip is 80715. The Z97 chipset was DH82Z97, but I'm not sure if newer chipsets follow the same scheme.

Edit: I'm just leaving this here. The story of the Intel 2114 static RAM, put together by some stupid AI and not to be fully trusted, but interesting nevertheless.
 