
Editorial What the Intel-AMD x86 Ecosystem Advisory Group is, and What it's Not

Joined
May 10, 2023
Messages
184 (0.35/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
I assume it's the same for Zen 5c with server variants having 512-bit, but I haven't read anything definite about that.
It is. Zen 5 has 4 sub-versions in total: Zen 5 and Zen 5c, each available with either 256-bit or 512-bit datapaths for SIMD.
ARM and RISC-V are more of a threat to each other
Throwback to that website ARM put up bashing RISC-V, clearly showing they were scared of it eating into their share of the embedded world lol
 
Joined
Oct 12, 2005
Messages
701 (0.10/day)
ARM and RISC-V are more of a threat to each other than they are to x86. If anything was going to beat x86 it would have been the DEC Alpha in the 90s.

If this is the outdated "RISC is better than CISC" argument from 30 years ago, there's nothing about x86's ISA that makes it more performant nor is there anything about ARM's ISA that makes it more power efficient.
There was, back when CPUs didn't have a front end and executed instructions directly. CISC had a disadvantage in power consumption because it required more transistors in the execution pipeline. But now, all x86 processors have a front end that decodes x86 instructions into smaller ones.

At first, that front end was one of the reasons x86 was less efficient, but CPUs have grown so large that the front end is a small portion of the whole core anyway.

Also, if you look at the latest ARM instruction sets, I wonder if they can still be called RISC. They now have a front end too.


In the end, one of the main reasons x86 cores are less efficient is that most x86 architectures target the server market, where low power consumption is not a requirement. Intel and AMD didn't spend much R&D on low-power architecture because it wasn't useful to them. ARM, on the other hand, was aiming at the low-power market, and all its manufacturers directed their R&D at low-power devices.

Intel and AMD made small attempts at low power, but they would probably have needed far too much money to become competitive, and in any case they target the high-margin server market, not the low-margin mobile market.
 
Joined
Jan 2, 2019
Messages
101 (0.05/day)
>>...AVX-512 was proposed by Intel more than a decade ago—in 2013 to be precise...

It was a Complete Disaster. Period. This point of view is based on my software development experience using an Intel KNL server with a Xeon Phi CPU.

>>...A decade later, the implementation of this instruction set on CPU cores remains wildly spotty

This is because AVX-512 is Too Fragmented. Period. Again, this point of view is based on my software development experience using an Intel KNL server with a Xeon Phi CPU.

>>...Intel implemented it first on an HPC accelerator, then its Xeon server processors...

Intel KNL servers with Xeon Phi series CPUs.

>>...before realizing that hardware hasn't caught up with the technology to execute AVX-512 instructions in an energy-efficient manner...

Energy-efficient... Really? It was an Energy Hog! I also would like to add that it was Too Expensive compared to NVIDIA GPUs.

>>...AMD implemented it just a couple of years ago...

Absolute mistake because most software developers do Not care about AVX-512 ISA.
 
Joined
May 10, 2023
Messages
184 (0.35/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
Absolute mistake because most software developers do Not care about AVX-512 ISA.
Speak for yourself, AVX-512 is finally getting some really nice traction due to AMD, and provides a hefty performance uplift in many tasks.
 
Joined
Dec 6, 2022
Messages
343 (0.50/day)
Location
NYC
System Name GameStation
Processor AMD R5 5600X
Motherboard Gigabyte B550
Cooling Arctic Freezer II 120
Memory 16 GB
Video Card(s) Sapphire Pulse 7900 XTX
Storage 2 TB SSD
Case Cooler Master Elite 120
If anything was going to beat x86 it would have been the DEC Alpha in the 90s.
I would dare say that PowerPC also had a real chance, but as usual, both IBM and Motorola dropped the ball.

About this announcement, I would love the return of a shared/compatible CPU socket.

That would reduce the prices of future motherboards, I think.
 
Joined
Mar 16, 2017
Messages
227 (0.08/day)
Location
behind you
Processor Threadripper 1950X
Motherboard ASRock X399 Professional Gaming
Cooling IceGiant ProSiphon Elite
Memory 48GB DDR4 2934MHz
Video Card(s) MSI GTX 1080
Storage 4TB Crucial P3 Plus NVMe, 1TB Samsung 980 NVMe, 1TB Inland NVMe, 2TB Western Digital HDD
Display(s) 2x 4K60
Power Supply Cooler Master Silent Pro M (1000W)
Mouse Corsair Ironclaw Wireless
Keyboard Corsair K70 MK.2
VR HMD HTC Vive Pro
Software Windows 10, QubesOS
The 80186 was (roughly) the microcontroller version of the 80286. Interestingly, I can find pics of them marked with Ⓜ AMD © INTEL, or © AMD, or Ⓜ © INTEL (still made by AMD), or Ⓜ © AMD © INTEL. AMD also used both type numbers, 80186 and Am186. This probably hints at their magnificent army of lawyers, engineers, reverse engineers, and reverse lawyers.
The 80186 was more an enhanced version of the 8086 than a variant of the 80286. It had a handful of extra instructions and the illegal opcode exception notably lacking from the original, but it didn't have any of the fancy Protected Mode features introduced in the 80286. Yes, Protected Mode was technically introduced in the 286, but it was still 16-bit and a nightmare in general, so the 32-bit Protected Mode introduced in the 386 became synonymous with the mode.
What is 8086?
A stop-gap CPU introduced by Intel while they worked on a proper 32-bit CPU to compete with the new 32-bit chips made by other companies. Nevertheless it (or more specifically its 8088 variant) was chosen as the CPU of the original IBM PC, which was a massive hit, and thus the x86 architecture became the basis for all subsequent PCs, likely including the device you're reading this on now (unless it's a phone or tablet, in which case it probably uses ARM). x86 was a hodgepodge of a chip even when it was introduced, a trend it very much continued as it evolved. It wasn't designed for the future.
There was, back when CPUs didn't have a front end and executed instructions directly. CISC had a disadvantage in power consumption because it required more transistors in the execution pipeline. But now, all x86 processors have a front end that decodes x86 instructions into smaller ones.

At first, that front end was one of the reasons x86 was less efficient, but CPUs have grown so large that the front end is a small portion of the whole core anyway.

Also, if you look at the latest ARM instruction sets, I wonder if they can still be called RISC. They now have a front end too.
The lines between CISC and RISC are so blurred with advanced CPUs that the terms are effectively obsolete. Past the decoders they all have a variety of different units and accelerators acting on micro-ops.
 
Joined
May 3, 2018
Messages
2,877 (1.22/day)
I love how TechPowerUp refuses to acknowledge AMD is working on an ARM SoC for 2026, called Soundwave. It has been known for more than 6 months. It might even be a hybrid architecture. Nvidia and MediaTek are joining forces for an ARM SoC in 2025; it's not just Nvidia alone.

Ian Cutress did a nice job explaining this announcement earlier today.
Where does he work now? Do you have the link? I would love to continue to read his tech articles.
 
Joined
May 22, 2024
Messages
397 (2.66/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-76 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
A stop-gap CPU introduced by Intel while they worked on a proper 32-bit CPU to compete with the new 32-bit chips made by other companies. Nevertheless it (or more specifically its 8088 variant) was chosen as the CPU of the original IBM PC, which was a massive hit, and thus the x86 architecture became the basis for all subsequent PCs, likely including the device you're reading this on now (unless it's a phone or tablet, in which case it probably uses ARM). x86 was a hodgepodge of a chip even when it was introduced, a trend it very much continued as it evolved. It wasn't designed for the future.
There is the persistent theory that IBM would have chosen the m68k had it been ready, but the PC might then have just become another of the great many microcomputers of the era (the Atari ST, Amiga, and m68k Mac) that have since fallen by the wayside.

FWIW, and to my 200-level assembly-language sensibility, base m68k was so much more elegant and easier to use than base x86. An alternate history with some spun-off Motorola subsidiary operating in Intel's niche, and Intel operating in, say, Micron's niche, could have been fun to read about.
 
Joined
Mar 7, 2011
Messages
4,437 (0.89/day)
Where does he now work? Do you have the link as I would love to cntinue to read his tech articles.
He is doing his own thing these days and posts interviews and other videos on his YouTube channel. Here is the link to his x86 collab video posted yesterday:
 