
Intel Updates Its ISA Manual with Advanced Matrix Extension Reference

AleksandarK

News Editor
Staff member
Intel today released an updated version of its "Architecture Instruction Set Extensions and Future Features Programming Reference" document with the latest Advanced Matrix Extensions (AMX) programming reference. This gives us some insight into AMX and how it works. While we will not go into too much depth here, AMX is pretty simple. Intel describes it as follows: "Intel Advanced Matrix Extensions (Intel AMX) is a new 64-bit programming paradigm consisting of two components: a set of 2-dimensional registers (tiles) representing sub-arrays from a larger 2-dimensional memory image, and an accelerator able to operate on tiles, the first implementation is called TMUL (tile matrix multiply unit)." In other words, this is another matrix-processing extension that can be used for a wide variety of workloads, mainly machine learning. The first microarchitecture to implement the new extension will be the Sapphire Rapids Xeon processor. You can find more about AMX here.
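
To give a rough idea of what this looks like from C, here is a minimal sketch based on the AMX intrinsics as I understand them from Intel's documentation (assumptions: a GCC/Clang toolchain built with -mamx-tile -mamx-int8, a CPU that supports AMX such as Sapphire Rapids, and, on Linux, kernel permission for the extended tile state). It configures three tiles, runs one TMUL int8 dot-product accumulate, and stores the int32 results; treat it as illustrative rather than production code.

```c
// Minimal AMX INT8 sketch: C (int32) += A (int8) x B (int8) via the TMUL unit.
// Assumptions: AMX-capable CPU, -mamx-tile -mamx-int8, and OS support for the
// AMX tile state (on Linux, arch_prctl(ARCH_REQ_XCOMP_PERM, ...) may be needed).
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ROWS  16
#define COLSB 64   // bytes per tile row: 64 int8, or 16 int32

// 64-byte tile configuration blob consumed by _tile_loadconfig (LDTILECFG).
typedef struct {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];   // bytes per row for each tile register
    uint8_t  rows[16];    // row count for each tile register
} __attribute__((packed)) tilecfg_t;

int main(void)
{
    static int8_t  a[ROWS][COLSB];      // left operand tile (int8)
    static int8_t  b[ROWS][COLSB];      // right operand tile (int8, VNNI-packed
                                        // in real code; constant data here)
    static int32_t c[ROWS][COLSB / 4];  // int32 accumulator tile

    memset(a, 1, sizeof a);
    memset(b, 2, sizeof b);
    memset(c, 0, sizeof c);

    // Describe tiles 0..2: one accumulator and two int8 source tiles.
    tilecfg_t cfg = {0};
    cfg.palette_id = 1;
    for (int t = 0; t < 3; ++t) {
        cfg.rows[t]  = ROWS;
        cfg.colsb[t] = COLSB;
    }
    _tile_loadconfig(&cfg);

    _tile_loadd(1, a, COLSB);   // tmm1 <- A
    _tile_loadd(2, b, COLSB);   // tmm2 <- B
    _tile_loadd(0, c, COLSB);   // tmm0 <- C (accumulator)
    _tile_dpbssd(0, 1, 2);      // TMUL: accumulate int8 quad dot-products into int32
    _tile_stored(0, c, COLSB);  // write results back to memory

    _tile_release();            // release the tile state

    printf("c[0][0] = %d\n", c[0][0]);  // 64 * (1 * 2) = 128 for this toy data
    return 0;
}
```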


View at TechPowerUp Main Site
 
In which CPU µarch will this be implemented first, Golden Cove, Sapphire Rapids, etc.?
 
Nice. AMD will probably copy this in Zen 5.

Unlikely. AMD doesn't even have AVX512 plans as far as I'm aware.

A few notes:

* This seems to be a competitor to NVidia's Tensor cores.
* Unlike NVidia Tensor Cores, these Intel AMX instructions seem to handle rectangular matrices (e.g. a 2x3 matrix).
* IIRC, NVidia Tensor cores are a full matrix-multiply instruction. These AMX instructions are "only" dot-product (see the sketch after this list).
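
To make the "only dot-product" point concrete: as I read the spec, TDPBSSD accumulates, into each int32 element of the destination tile, dot products of groups of four signed bytes from a row of A against the VNNI-packed B tile. A scalar reference sketch of that per-element behavior (function name and layout arguments are my own, not Intel's):

```c
// Scalar sketch of what one AMX TDPBSSD computes per destination element:
// every int32 in C gains the dot product of groups of four signed bytes
// from a row of A and the corresponding VNNI-packed column group of B.
#include <stdint.h>

void tdpbssd_ref(int rows, int cols, int kquads,
                 const int8_t *A,   // rows   x (4*kquads) int8, row-major
                 const int8_t *B,   // kquads x (4*cols)   int8, VNNI-packed
                 int32_t *C)        // rows   x cols       int32, row-major
{
    for (int m = 0; m < rows; ++m)
        for (int n = 0; n < cols; ++n)
            for (int k = 0; k < kquads; ++k)
                for (int i = 0; i < 4; ++i)
                    C[m * cols + n] +=
                        (int32_t)A[m * 4 * kquads + 4 * k + i] *
                        (int32_t)B[k * 4 * cols  + 4 * n + i];
}
```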

-----------

AMD's tensor operations clearly live in its SIMD processors, the Vega / RDNA chips, instead. I would expect AMD to push tensor processing to the GPU while focusing the CPU on I/O and core count.
 