
Editorial What the Intel-AMD x86 Ecosystem Advisory Group is, and What it's Not

Joined
May 10, 2023
Messages
257 (0.45/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
I assume it's the same for Zen 5c with server variants having 512-bit, but I haven't read anything definite about that.
It is; Zen 5 has four sub-versions in total: Zen 5 and Zen 5c, each with either 256-bit or 512-bit datapaths for SIMD.
ARM and RISC-V are more of a threat to each other
Throwback to that website ARM put up bashing RISC-V, clearly showing they were scared of it eating into their share of the embedded world lol
 
Joined
Oct 12, 2005
Messages
708 (0.10/day)
ARM and RISC-V are more of a threat to each other than they are to x86. If anything was going to beat x86 it would have been the DEC Alpha in the 90s.

If this is the outdated "RISC is better than CISC" argument from 30 years ago, there's nothing about x86's ISA that makes it more performant nor is there anything about ARM's ISA that makes it more power efficient.
There was in the past, when CPUs didn't have a frontend and executed the instructions directly. CISC had a disadvantage in power consumption, as it required more transistors in the execution pipeline. But now all x86 processors have a front end that decodes x86 instructions into smaller ones.

At first, that front end was one of the reasons x86 was less efficient, but cores have gotten so large that the front end is a small portion of the whole core anyway.

Also, if you look at the latest ARM instruction sets, I wonder if it can still be called RISC. They too now have a front end.


In the end, one of the main reasons x86 cores are less efficient is that most x86 architectures target the server market, where low power consumption is not a priority. Intel and AMD didn't spend a lot of R&D on low-power architecture because it wasn't useful to them. ARM, on the other side, was aiming at the low-power market, and all the manufacturers aimed their R&D at low-power devices.

Intel and AMD made small attempts at low power, but they would probably have needed way too much money to get competitive, and anyway they aim at the high-margin server market, not the low-margin mobile market.
 
Joined
Jan 2, 2019
Messages
124 (0.06/day)
>>...AVX-512 was proposed by Intel more than a decade ago—in 2013 to be precise...

It was a Complete Disaster. Period. This point of view is based on my software development experience using an Intel KNL server with a Xeon Phi CPU.

>>...A decade later, the implementation of this instruction set on CPU cores remains wildly spotty

This is because AVX-512 is Too Fragmented. Period. Again, this point of view is based on my software development experience using an Intel KNL server with a Xeon Phi CPU.

>>...Intel implemented it first on an HPC accelerator, then its Xeon server processors...

Intel KNL servers with Xeon Phi series CPUs.

>>...before realizing that hardware hasn't caught up with the technology to execute AVX-512 instructions in an energy-efficient manner...

Energy-efficient... Really? It was an Energy Hog! I would also like to add that it was Too Expensive compared to NVIDIA GPUs.

>>...AMD implemented it just a couple of years ago...

An absolute mistake, because most software developers do Not care about the AVX-512 ISA.
 
Joined
May 10, 2023
Messages
257 (0.45/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
An absolute mistake, because most software developers do Not care about the AVX-512 ISA.
Speak for yourself; AVX-512 is finally getting some really nice traction thanks to AMD, and it provides a hefty performance uplift in many tasks.
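To give a rough idea of what that looks like in practice, here is a minimal sketch (the function and array names are made up, not from any real codebase) that sums two float arrays 16 lanes at a time when the compiler is targeting AVX-512F, and falls back to a plain scalar loop otherwise:

Code:
#include <immintrin.h>
#include <stddef.h>

/* a[i] += b[i]; processes 16 floats per iteration when built with AVX-512F
   (e.g. -mavx512f); otherwise only the scalar loop below gets compiled in. */
static void add_arrays(float *a, const float *b, size_t n)
{
    size_t i = 0;
#if defined(__AVX512F__)
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(&a[i]);   /* unaligned 512-bit loads */
        __m512 vb = _mm512_loadu_ps(&b[i]);
        _mm512_storeu_ps(&a[i], _mm512_add_ps(va, vb));
    }
#endif
    for (; i < n; ++i)                         /* scalar tail / fallback */
        a[i] += b[i];
}

Compilers will often auto-vectorize a loop this simple on their own; the uplift people see on Zen 4/Zen 5 comes from exactly this kind of wide SIMD plus the other AVX-512 goodies (masking, VNNI, and so on).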
 
Joined
Dec 6, 2022
Messages
383 (0.53/day)
Location
NYC
System Name GameStation
Processor AMD R5 5600X
Motherboard Gigabyte B550
Cooling Arctic Freezer II 120
Memory 16 GB
Video Card(s) Sapphire Pulse 7900 XTX
Storage 2 TB SSD
Case Cooler Master Elite 120
If anything was going to beat x86 it would have been the DEC Alpha in the 90s.
I would dare say that PowerPC also had a real chance, but as usual, both IBM and Motorola dropped the ball.

About this announcement, I would love the return of a shared/compatible CPU socket.

That would reduce the prices of future motherboards, I think.
 
Joined
Mar 16, 2017
Messages
236 (0.08/day)
Location
behind you
Processor Threadripper 1950X
Motherboard ASRock X399 Professional Gaming
Cooling IceGiant ProSiphon Elite
Memory 48GB DDR4 2934MHz
Video Card(s) MSI GTX 1080
Storage 4TB Crucial P3 Plus NVMe, 1TB Samsung 980 NVMe, 1TB Inland NVMe, 2TB Western Digital HDD
Display(s) 2x 4K60
Power Supply Cooler Master Silent Pro M (1000W)
Mouse Corsair Ironclaw Wireless
Keyboard Corsair K70 MK.2
VR HMD HTC Vive Pro
Software Windows 10, QubesOS
The 80186 was (roughly) the microcontroller version of the 80286. Interestingly, I can find pics of them marked with Ⓜ AMD © INTEL, or © AMD, or Ⓜ © INTEL (still made by AMD), or Ⓜ © AMD © INTEL. AMD also used both type numbers, 80186 and Am186. This probably hints at their magnificent army of lawyers, engineers, reverse engineers, and reverse lawyers.
The 80186 was more an enhanced version of the 8086 than a variant of the 80286. It had a handful of extra instructions and the illegal-opcode exception notably lacking from the original, but it didn't have any of the fancy Protected Mode features introduced in the 80286. Yes, Protected Mode was technically introduced in the 286, but it was still 16-bit and a nightmare in general, so the 32-bit Protected Mode introduced in the 386 became synonymous with the mode.
What is 8086?
A stop-gap CPU introduced by Intel while they worked on a proper 32-bit CPU to compete with the new 32-bit chips made by other companies. Nevertheless, it (or more specifically its 8088 variant) was chosen as the CPU for the original IBM PC, which was a massive hit, and thus the x86 architecture became the basis for all subsequent PCs, likely including the device you're reading this on now (unless it's a phone or tablet, in which case it probably uses ARM). x86 was a hodgepodge of a chip even when it was introduced, a trend that it very much continued as it evolved. It wasn't designed for the future.
There was in the past, when CPUs didn't have a frontend and executed the instructions directly. CISC had a disadvantage in power consumption, as it required more transistors in the execution pipeline. But now all x86 processors have a front end that decodes x86 instructions into smaller ones.

At first, that front end was one of the reasons x86 was less efficient, but cores have gotten so large that the front end is a small portion of the whole core anyway.

Also, if you look at the latest ARM instruction sets, I wonder if it can still be called RISC. They too now have a front end.
The lines between CISC and RISC are so blurred with advanced CPUs that the terms are effectively obsolete. Past the decoders they all have a variety of different units and accelerators acting on micro-ops.
 
Joined
May 3, 2018
Messages
2,881 (1.20/day)
I love how TechPowerUp refuses to acknowledge that AMD is working on an ARM SoC for 2026, called Soundwave. It has been known for more than 6 months. It might even be a hybrid architecture. Nvidia and MediaTek are joining forces for an ARM SoC in 2025; it's not just Nvidia alone.

Ian Cutress did a nice job explaining this announcement earlier today.
Where does he work now? Do you have a link? I would love to continue reading his tech articles.
 
Joined
May 22, 2024
Messages
411 (2.17/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-76 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
A stop-gap CPU introduced by Intel while they worked on a proper 32-bit CPU to compete with the new 32-bit chips made by other companies. Nevertheless, it (or more specifically its 8088 variant) was chosen as the CPU for the original IBM PC, which was a massive hit, and thus the x86 architecture became the basis for all subsequent PCs, likely including the device you're reading this on now (unless it's a phone or tablet, in which case it probably uses ARM). x86 was a hodgepodge of a chip even when it was introduced, a trend that it very much continued as it evolved. It wasn't designed for the future.
There is the persistent theory that IBM would have chosen the m68k had it been ready, but the PC might then have just become another of the great many microcomputers of the era (Atari ST, Amiga, and the m68k Mac) that have since fallen by the wayside.

FWIW, and to my 200-level assembly language sensibility, base m68k was so much more elegant and easier to use than base x86. An alternate history with some spun-off Motorola subsidiary operating in Intel's niche and Intel operating in, say, Micron's niche in the real world could have been fun to read about.
 
Last edited:
Joined
Mar 7, 2011
Messages
4,557 (0.91/day)
Where does he work now? Do you have a link? I would love to continue reading his tech articles.
He is doing his own thing these days and posts interviews and other videos on his YouTube channel. Here is the link to his x86 collab video posted yesterday:
 
Joined
Mar 16, 2017
Messages
236 (0.08/day)
Location
behind you
Processor Threadripper 1950X
Motherboard ASRock X399 Professional Gaming
Cooling IceGiant ProSiphon Elite
Memory 48GB DDR4 2934MHz
Video Card(s) MSI GTX 1080
Storage 4TB Crucial P3 Plus NVMe, 1TB Samsung 980 NVMe, 1TB Inland NVMe, 2TB Western Digital HDD
Display(s) 2x 4K60
Power Supply Cooler Master Silent Pro M (1000W)
Mouse Corsair Ironclaw Wireless
Keyboard Corsair K70 MK.2
VR HMD HTC Vive Pro
Software Windows 10, QubesOS
There is the persistent theory that IBM would have chosen the m68k had it been ready, but the PC might then have just become another of the great many microcomputers of the era (Atari ST, Amiga, and the m68k Mac) that have since fallen by the wayside.

FWIW, and to my 200-level assembly language sensibility, base m68k was so much more elegant and easier to use than base x86. An alternate history with some spun-off Motorola subsidiary operating in Intel's niche and Intel operating in, say, Micron's niche in the real world could have been fun to read about.
Much of the PC's success was due to it being a very open architecture. It was clear how everything worked, and you could easily develop and use your own hardware and software for it. Even the BIOS source code was published. It was still copyrighted, which forced competitors to clean-room design their own, but its functionality was easily and completely understood. It was also easily expandable.
 
Joined
Nov 11, 2005
Messages
43 (0.01/day)
Oh, imagine if RISC-V evolved into RISC-VI and actually had all those extra features it needs to match the performance of x86-64

Or AMD will just adopt RISC-V cores on an extra chiplet :)
 
Joined
May 10, 2023
Messages
257 (0.45/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
Oh, imagine if RISC-V evolved into RISC-VI and actually had all those extra features it needs to match the performance of x86-64
What do an ISA's features have to do with performance? The only relevant part that was missing from RISC-V was a proper vector extension, which has been ratified since 2021.
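For the curious, the ratified RVV 1.0 extension gives you vector-length-agnostic loops through standard C intrinsics. A minimal sketch (function and array names made up for illustration; needs a toolchain with the V extension enabled, e.g. -march=rv64gcv):

Code:
#include <riscv_vector.h>
#include <stddef.h>

/* a[i] += b[i] using RVV 1.0 intrinsics; the loop adapts to whatever
   vector length (VLEN) the hardware actually implements. */
static void add_arrays(float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n;) {
        size_t vl = __riscv_vsetvl_e32m1(n - i);            /* lanes this pass */
        vfloat32m1_t va = __riscv_vle32_v_f32m1(&a[i], vl);
        vfloat32m1_t vb = __riscv_vle32_v_f32m1(&b[i], vl);
        __riscv_vse32_v_f32m1(&a[i], __riscv_vfadd_vv_f32m1(va, vb, vl), vl);
        i += vl;
    }
}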
Or AMD will just adopt RISC-V cores on an extra chiplet :)
Do you mean for internal use? Nvidia already does something similar with their Falcon MCU. Some other manufacturers also use RISC-V-based microcontrollers for many different things; those are just not really visible to the end user.
 
Joined
Oct 24, 2022
Messages
198 (0.26/day)
A good way for Intel and AMD to increase the performance of their x86 processors, in the face of the growth of their ARM and RISC-V competitors, would be for both to let the iGPU of their APUs and SoCs be used as a co-processor, available to the OS, apps, and even games for general-purpose processing. The iGPU should be used as a co-processor even by games running on a dedicated GPU (AIC/VGA).

The iGPU, used as a co-processor, is capable of being dozens of times faster than the x86 cores.

And, of course, there should be a standard between Intel and AMD processors so that the same software can run on the iGPUs of both companies.

If Nvidia starts to act strongly in the ARM processor market, it can implement the above easily and quickly, as it already has all the GPU hardware technology and software support ready, and also has an extremely good relationship with software developers.
 
Last edited:
Joined
May 10, 2023
Messages
257 (0.45/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
would be for both to let the iGPU of their APUs and SoCs be used as a co-processor, available to the OS, apps, and even games for general-purpose processing. The iGPU should be used as a co-processor even by games running on a dedicated GPU (AIC/VGA).
That's not how it works. A GPU can't just magically run all kinds of tasks that a CPU can.
If Nvidia starts to act strongly in the ARM processor market, it can implement the above easily and quickly, as it already has all the GPU hardware technology and software support ready, and also has an extremely good relationship with software developers
Nvidia already has such products; look into their Grace lineup. And guess what, the way you describe is not how it works.
 
Joined
Oct 24, 2022
Messages
198 (0.26/day)
That's not how it works. A GPU can't just magically run all kinds of tasks that a CPU can.

I didn't say that "a GPU can't just magically run all kinds of tasks that a CPU can".

Nvidia already has such products, look into their Grace lineup.

I'm talking about a CPU for home use, one that runs Windows.
 
Joined
Oct 12, 2005
Messages
708 (0.10/day)
The main issue with dual-GPU setups is that a lot of data is reused during rendering. Each pixel gets multiple passes and multiple calculations. This temporary data is stored in the GPU's memory and would have to either be copied to main memory or accessed from there. You would hit PCIe bandwidth limits and increased latency that would kill any hope of a performance gain (for rough scale, PCIe 4.0 x16 tops out around 32 GB/s, while a dedicated GPU's local VRAM delivers several hundred GB/s).

This is actually what killed SLI/CrossFire: the dedicated link between the GPUs wasn't even fast enough to give decent performance.

With DirectX 12 it's not impossible to do, but it would be a nightmare, as the number of dedicated GPUs that pair with a same-architecture iGPU and could use the same driver and the same compiled shaders is incredibly small.

Not counting that temporal effects are very common. This killed the last hope of CrossFire/SLI, as you have to reuse data from multiple previous frames.
 
Joined
Oct 24, 2022
Messages
198 (0.26/day)
The main issue with dual-GPU setups is that a lot of data is reused during rendering. Each pixel gets multiple passes and multiple calculations. This temporary data is stored in the GPU's memory and would have to either be copied to main memory or accessed from there. You would hit PCIe bandwidth limits and increased latency that would kill any hope of a performance gain.

This is actually what killed SLI/CrossFire: the dedicated link between the GPUs wasn't even fast enough to give decent performance.

With DirectX 12 it's not impossible to do, but it would be a nightmare, as the number of dedicated GPUs that pair with a same-architecture iGPU and could use the same driver and the same compiled shaders is incredibly small.

Not counting that temporal effects are very common. This killed the last hope of CrossFire/SLI, as you have to reuse data from multiple previous frames.

I know all that you said, but I didn't say that the iGPU should be used in SLI/CrossFire mode.

I said that the iGPU should be used as a general-purpose co-processor, for tasks where it can be used as a coprocessor, since for some tasks the iGPU can be tens of times faster than the x86 cores while consuming a small fraction of the energy the x86 cores would use to do the same work.

If Nvidia is going to enter the consumer processor market, it seems that this is exactly what it will do.

And this idea of using the iGPU as a general-purpose co-processor is not new. AMD engineers had this idea over 20 years ago. This was even one of the reasons AMD bought ATI.

Without mentioning names, companies X and Y have always helped each other in secret during each other's difficult times. Maybe this idea of using the iGPU as a co-processor was not implemented more than 10 years ago because both companies (X and Y) made an agreement, in secret of course, so that neither would ruin the profits of the other.
 
Last edited:
Joined
May 10, 2023
Messages
257 (0.45/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
I didn't say that "a GPU can't just magically run all kinds of tasks that a CPU can".
"general purpose" imples in "all kinds of tasks a CPU can".
for tasks where it can be used as a coprocessor
That's a really important point: you can't just shove everything in there, and I don't think there are many tasks that could easily be offloaded there that wouldn't be better served by the main GPU anyway.

Anyhow, your point is not really related to x86; it has nothing to do with the ISA, it's more of a software thing. Some software already makes use of Quick Sync (which lives inside Intel's iGPU) for some tasks, as an example.
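And on the "same software running on both companies' iGPUs" idea from earlier: a cross-vendor compute API already exists in OpenCL. A minimal, heavily trimmed sketch (kernel and variable names made up, error handling omitted) that doubles an array on the first GPU the runtime reports, whether that's an Intel or an AMD iGPU:

Code:
#define CL_TARGET_OPENCL_VERSION 300
#include <CL/cl.h>
#include <stdio.h>

/* Doubles each element of a 1024-float array on the first GPU OpenCL finds. */
static const char *src =
    "__kernel void scale(__global float *buf) {"
    "    buf[get_global_id(0)] *= 2.0f;"
    "}";

int main(void)
{
    float data[1024];
    for (int i = 0; i < 1024; ++i) data[i] = (float)i;

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, NULL);
    clSetKernelArg(k, 0, sizeof buf, &buf);

    size_t global = 1024;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);

    printf("data[2] = %f\n", data[2]);   /* expect 4.0 */
    return 0;
}

The catch isn't the API; it's that for most workloads the copy and sync overhead eats the gain, which is basically the point made above.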
 
Joined
Oct 24, 2022
Messages
198 (0.26/day)
"general purpose" imples in "all kinds of tasks a CPU can".

That's a really important point, you can't just shove everything in there, and I don't think there are many tasks that can be easily offloaded to there that wouldn't be better on the main GPU anyway.

Anyhow, your point is not really related to x86, that's has nothing to do with the ISA, that's more of a software thing. Some software already makes use of Quick Sync (which lives inside Intel's iGPU) for some tasks, as an example.

We have another keyboard engineer here...
 
Joined
Sep 15, 2015
Messages
1,076 (0.32/day)
Location
Latvija
System Name Fujitsu Siemens, HP Workstation
Processor Athlon x2 5000+ 3.1GHz, i5 2400
Motherboard Asus
Memory 4GB Samsung
Video Card(s) rx 460 4gb
Storage 750 Evo 250 +2tb
Display(s) Asus 1680x1050 4K HDR
Audio Device(s) Pioneer
Power Supply 430W
Mouse Acme
Keyboard Trust
The 80186 was more an enhanced version of the 8086 than a variant of the 80286. It had a handful of extra instructions and the illegal-opcode exception notably lacking from the original, but it didn't have any of the fancy Protected Mode features introduced in the 80286. Yes, Protected Mode was technically introduced in the 286, but it was still 16-bit and a nightmare in general, so the 32-bit Protected Mode introduced in the 386 became synonymous with the mode.

A stop-gap CPU introduced by Intel while they worked on a proper 32-bit CPU to compete with the new 32-bit chips made by other companies. Nevertheless, it (or more specifically its 8088 variant) was chosen as the CPU for the original IBM PC, which was a massive hit, and thus the x86 architecture became the basis for all subsequent PCs, likely including the device you're reading this on now (unless it's a phone or tablet, in which case it probably uses ARM). x86 was a hodgepodge of a chip even when it was introduced, a trend that it very much continued as it evolved. It wasn't designed for the future.

The lines between CISC and RISC are so blurred with advanced CPUs that the terms are effectively obsolete. Past the decoders they all have a variety of different units and accelerators acting on micro-ops.
I only wanted to know what these numbers mean. x-?, 8-?, 6-?
 
Joined
Feb 3, 2017
Messages
3,755 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
I only wanted to know what these numbers mean. x-?, 8-?, 6-?
Nothing, really. The CPU that was most influential in kicking this whole thing off was the 8086 (IBM PC and co.), so the number became valuable, and Intel followed it up with the 80186, 80286, and 80386, giving the 80x86 pattern that got shortened to x86.

8086 came from Intel's naming scheme at the time. It's been a while, and I am sure there is a guide somewhere on the Internet, but from what I recall:
1st digit was about technology; I believe it started with PMOS, NMOS, etc., but the ones interesting to us are 4 and 8, which denote 4-bit and 8-bit chips (at least initially, since the 8086 is 16-bit).
2nd digit was the chip type: 0 processor, 1 RAM, 2 controller, 3 ROM, etc.
Last two were generally a sequence but sometimes pretty freeform. Not every number became a product, and it wasn't always sequential. "Sounds nice" was sometimes a deciding factor as well.
 
Joined
Mar 16, 2017
Messages
236 (0.08/day)
Location
behind you
Processor Threadripper 1950X
Motherboard ASRock X399 Professional Gaming
Cooling IceGiant ProSiphon Elite
Memory 48GB DDR4 2934MHz
Video Card(s) MSI GTX 1080
Storage 4TB Crucial P3 Plus NVMe, 1TB Samsung 980 NVMe, 1TB Inland NVMe, 2TB Western Digital HDD
Display(s) 2x 4K60
Power Supply Cooler Master Silent Pro M (1000W)
Mouse Corsair Ironclaw Wireless
Keyboard Corsair K70 MK.2
VR HMD HTC Vive Pro
Software Windows 10, QubesOS
I only wanted to know what these numbers mean. x-?, 8-?, 6-?

To add to londiste's reply, the 8086 was directly preceded by the 8085 (the 5 because it had a single 5 V power supply), and before that the 8080 and 8008. All the later chips were source-code compatible with the earlier ones, given the appropriate assemblers.

 
Joined
Jan 3, 2021
Messages
3,500 (2.46/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
Nothing, really. The CPU that was most influential in kicking this whole thing off was the 8086 (IBM PC and co.), so the number became valuable, and Intel followed it up with the 80186, 80286, and 80386, giving the 80x86 pattern that got shortened to x86.

8086 came from Intel's naming scheme at the time. It's been a while, and I am sure there is a guide somewhere on the Internet, but from what I recall:
1st digit was about technology; I believe it started with PMOS, NMOS, etc., but the ones interesting to us are 4 and 8, which denote 4-bit and 8-bit chips (at least initially, since the 8086 is 16-bit).
2nd digit was the chip type: 0 processor, 1 RAM, 2 controller, 3 ROM, etc.
Last two were generally a sequence but sometimes pretty freeform. Not every number became a product, and it wasn't always sequential. "Sounds nice" was sometimes a deciding factor as well.
I'd never considered that Intel might have had a naming scheme since the beginning, but there must be something to that, yes.
Intel's first DRAM chip was the 1103. ROMs were 23xx, EPROMs were 27xx, and EEPROMs/flash were (and still are) 29xx, where xx was the capacity in kilobits.

And it continues to this day. The Raptor Lake desktop chip is the 80715. The Z97 chipset was DH82Z97, but I'm not sure if newer chipsets follow the same scheme.

Edit: I'm just leaving this here. The story of the Intel 2114 static RAM, put together by some stupid AI and not to be fully trusted, but interesting nevertheless.
 
Last edited: