Wednesday, November 13th 2024

AMD Unveils the Versal Premium Series Gen 2 Adaptive SoC (FPGA) with PCIe 6.0 and CXL 3.1 For Next-Gen, I/O Rich Applications

AMD on Tuesday unveiled its flagship FPGA series, and possibly its most important product launch since the company's Xilinx acquisition: the Versal Premium Series Gen 2 SoC. It's hard to call this an FPGA, much in the same way it's hard to call a modern processor just "a CPU": both integrate many devices, platform interfaces, and application-specific accelerators, which is why the Versal Premium Series Gen 2 is marketed as an "adaptive SoC" rather than simply a very powerful FPGA. On the I/O front, this is AMD's first product to implement PCI-Express Gen 6 and CXL 3.1.

No host platform currently supports these standards (neither EPYC "Turin" nor Xeon 6 "Granite Rapids" does), but it goes to show that the chip is future-ready and can put the new I/O standards to use once hosts catch up. The chip is designed to be deployed as a standalone SoC, so it supports both DDR5 RDIMMs and LPDDR5X, giving end-users the flexibility to choose between two vastly different memory standards depending on their application. These capabilities make the Versal Premium Series Gen 2 well-suited to demanding applications in sectors such as data centers, communications, test and measurement, aerospace, and defense, where high-speed data processing is critical.
The main feature of the Versal Premium Series Gen 2 is its FPGA fabric. Put simply, think of an FPGA as a sea of transistors that you can program to work like any logic device you want. This is different from emulation, because an FPGA performs close to what an ASIC would. FPGAs are crucial when designing and prototyping ASICs, and for hardware manufacturers with production runs too small to justify custom silicon, which instead implement the logic device they need on an FPGA (think of a space agency creating an SoC for one of its interplanetary probes).

The Versal Premium Series Gen 2, depending on the model, comes with anywhere between 1.40 and 3.27 million system logic cells (SLCs), 643K to 1.49 million LUTs, 3,320 to 7,616 DSP engines, 4 to 8 DDR5/LPDDR5X memory controllers, and a memory bus ranging from 128-bit to 256-bit wide. I/O includes a dual PCI-Express 6.0 x8 interface (64 GT/s per lane), which can alternatively operate as a CXL 3.1 interface of comparable bandwidth, a multitude of Ethernet MACs, including a 100 Gbps multirate MAC and, depending on the model, up to five 600 Gbps MACs, as well as 400 Gbps high-speed crypto engines.
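To make the "sea of transistors" idea a little more concrete, here is a minimal behavioral sketch in Python of the basic building block an FPGA's programmable fabric is built from: a look-up table (LUT) whose stored bits determine which Boolean function it implements. This is purely illustrative; the class and function names are made up, the 4-input LUT is a simplification (real Versal fabric uses wider LUTs plus dedicated routing, DSP slices, and hard IP), and nothing here reflects AMD's actual toolchain.

```python
# Illustrative model of an FPGA look-up table (LUT).
# Conceptual sketch only: "programming" the FPGA amounts to filling millions
# of such tables, and the routing between them, from a synthesized bitstream.

from itertools import product


class LUT4:
    """A 4-input LUT: 16 configuration bits select any 4-input Boolean function."""

    def __init__(self, config_bits):
        assert len(config_bits) == 16, "a 4-input LUT holds 2**4 = 16 bits"
        self.config = list(config_bits)

    def evaluate(self, a, b, c, d):
        # The four inputs form an address into the configuration memory.
        address = (a << 3) | (b << 2) | (c << 1) | d
        return self.config[address]


def program_as(func):
    """'Synthesize' an arbitrary 4-input Boolean function into LUT config bits."""
    bits = [func(a, b, c, d) for a, b, c, d in product((0, 1), repeat=4)]
    return LUT4(bits)


if __name__ == "__main__":
    # The same physical resource, programmed first as a 4-input XOR,
    # then reprogrammed as a majority gate.
    xor4 = program_as(lambda a, b, c, d: a ^ b ^ c ^ d)
    majority = program_as(lambda a, b, c, d: int(a + b + c + d >= 3))

    print(xor4.evaluate(1, 0, 1, 0))      # 0
    print(majority.evaluate(1, 1, 1, 0))  # 1
```

As a rough sanity check on the headline I/O figures, a dual x8 PCIe 6.0 interface at 64 GT/s per lane works out to roughly 2 × 8 × 64 ≈ 1,024 Gb/s of raw bandwidth per direction, before encoding and protocol overhead.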

In addition to faster connectivity, the Versal Premium Series Gen 2 includes enhanced security features. It is the first FPGA to integrate PCIe Integrity and Data Encryption (IDE) directly within its hard IP, protecting data in transit, while its 400 Gbps crypto engines accelerate secure data transmission. AMD has also embedded encryption into the platform's DDR memory controllers, ensuring data at rest is protected as well. With these additions, AMD's Versal Premium Series Gen 2 aims to redefine adaptive computing for industries seeking scalable, high-performance, and secure data handling.

AMD will begin sampling the Versal Premium Series Gen 2 and releasing its development tools in 2025, and is taking orders in the meantime. Production of the SoC should commence in 2026; now you know why it comes with a PCIe Gen 6 interface.

For more information, visit the product page.

The complete slide-deck from AMD follows.

16 Comments on AMD Unveils the Versal Premium Series Gen 2 Adaptive SoC (FPGA) with PCIe 6.0 and CXL 3.1 For Next-Gen, I/O Rich Applications

#1
Neo_Morpheus
Versal Premium Series Gen 2
A name that only a mother will love....

Their naming conventions are simply....ughhh
#2
Daven
It would be nice to know more about acquired Xilinx products and how they might be used in our DIY PC building world. But I'm guessing there isn't much crossover.
#3
R-T-B
Neo_Morpheus: A name that only a mother will love....

Their naming conventions are simply....ughhh
I honestly don't see anything wrong with the name.
#4
Neo_Morpheus
R-T-B: I honestly don't see anything wrong with the name.
It's so...generic, empty, doesn't stand out in any way.
#5
R-T-B
Neo_Morpheus: It's so...generic, empty, doesn't stand out in any way.
I feel that way about most corporate names, frankly.
#6
Neo_Morpheus
R-T-B: I feel that way about most corporate names, frankly.
Same here, but at least some, or maybe most, of them provide some kind of info, a model, I don't know, but this one in particular is just so generic.
#7
Daven
Thanks for posting the slide deck. Versal looks like it's mostly used for PCIe and memory traffic control and security in data centers. Always good to know about these kinds of technologies.
#8
lilhasselhoffer
Wow, it's an FPGA with PCI-e 6.0 and blazing fast RAM. This is going to cost an arm and a leg...but it's technology directly transferrable to stuff like GPUs. Maybe AMD is taking the next generation of GPUs off to better integrate new tech and get it at a reasonable price point so that more than one of its business centers can align...and the image of truly heterogeneous computing can be realized.

-continues reading-

Oh...it's a late 2025 sampling with late 2026 release of actual hardware. This stuff is two years out, and it's rather squarely focused on data center applications instead of making stuff like NPUs obsolete because an FPGA can be flashed to do anything great instead of having to have multiple co-processors who are dead space unless their unique functions are being used. This is a lot less exciting for consumers in the next 5-6 years...which is about how long the tech will require to trickle down. Sigh.
#9
OSdevr
lilhasselhoffer: Wow, it's an FPGA with PCI-e 6.0 and blazing fast RAM. This is going to cost an arm and a leg...but it's technology directly transferrable to stuff like GPUs. Maybe AMD is taking the next generation of GPUs off to better integrate new tech and get it at a reasonable price point so that more than one of its business centers can align...and the image of truly heterogeneous computing can be realized.

-continues reading-

Oh...it's a late 2025 sampling with late 2026 release of actual hardware. This stuff is two years out, and it's rather squarely focused on data center applications instead of making stuff like NPUs obsolete because an FPGA can be flashed to do anything great instead of having to have multiple co-processors who are dead space unless their unique functions are being used. This is a lot less exciting for consumers in the next 5-6 years...which is about how long the tech will require to trickle down. Sigh.
Not only are the chips themselves expensive, the software to develop with them costs thousands. It's a major pain point for electronics enthusiasts that even ancient FPGAs you can find on scrap boards can't be used by us because the development software is basically unobtainable. By and large there aren't FOSS or freely available dev tools for high end or even mid range FPGAs (though AMD/Xilinx is considerably more generous with this than Intel/Altera).

These aren't very relevant for PC or gaming enthusiasts, but their usefulness for makers and hardware hackers is undeniable for those with the skill to use them.
#10
Wirko
OSdevr: Not only are the chips themselves expensive, the software to develop with them costs thousands. It's a major pain point for electronics enthusiasts that even ancient FPGAs you can find on scrap boards can't be used by us because the development software is basically unobtainable. By and large there aren't FOSS or freely available dev tools for high end or even mid range FPGAs (though AMD/Xilinx is considerably more generous with this than Intel/Altera).

These aren't very relevant for PC or gaming enthusiasts, but their usefulness for makers and hardware hackers is undeniable for those with the skill to use them.
I guess the architecture of these FPGAs is not documented in detail, and AMD and Intel prohibit reverse engineering the binary files their tools generate. A situation similar to Nvidia's T&C on compiled CUDA code. Is this the case here?
#11
OSdevr
Wirko: I guess the architecture of these FPGAs is not documented in detail, and AMD and Intel prohibit reverse engineering the binary files their tools generate. A situation similar to Nvidia's T&C on compiled CUDA code. Is this the case here?
To an extent yes. FPGAs are also really different beasts from CPUs.
#12
lilhasselhoffer
OSdevr: Not only are the chips themselves expensive, the software to develop with them costs thousands. It's a major pain point for electronics enthusiasts that even ancient FPGAs you can find on scrap boards can't be used by us because the development software is basically unobtainable. By and large there aren't FOSS or freely available dev tools for high end or even mid range FPGAs (though AMD/Xilinx is considerably more generous with this than Intel/Altera).

These aren't very relevant for PC or gaming enthusiasts, but their usefulness for makers and hardware hackers is undeniable for those with the skill to use them.
Wirko: I guess the architecture of these FPGAs is not documented in detail, and AMD and Intel prohibit reverse engineering the binary files their tools generate. A situation similar to Nvidia's T&C on compiled CUDA code. Is this the case here?
OSdevr: To an extent yes. FPGAs are also really different beasts from CPUs.
So...
1) An FPGA is a field programmable gate array. My interest here is that you could image a processor onto the thing. Think not having to have ray-trace features take up space in your GPU, should you not enable them.
2) Speaking of why this matters for gaming...and the obvious fun for makers, imagine having a locked AMD image for about half a dozen configurations that AMD could tweak for each game. Very heavy on developmental features, but if they made their general performance values available developers could target to have specific features at specific levels enabled and disabled based on series. Think something along the lines of a slider in a game telling the FPGA to make a GPU that excels with the desired settings, no matter how big or small it is.

3) My limited time using FPGAs was focused on garbage software and performance generations old. If AMD is serious about this, and they support it with even a little bit of their open source initiative, I see a beautiful future where you emulate processors with a license, and a single piece of hardware can be your physics processor, NPU, or Raspberry-PI killer based only on whatever you plug into it.
4) FPGAs are not meant to be CPUs. They are meant to be a breadboard to make your own custom gate arrangement. By definition a CPU, like any processor, is a gate arrangement. I say this because an FPGA is the processor that you don't have to have lithography for...which makes it both the most flexible and least efficient (space wise) option. I choose to ignore the obvious downsides because most of them are imposed by having worse processes and poor support, whereas most CPUs are good at whatever they were designed and priced for.


I just have high hopes because, assuming the security is good, this is the future where you can download a licensed image to make the best processor for whatever it is you are doing...which means stuff like the GPU crypto craze will disappear under simply wanting to buy a literal chunk of silicon. Bigger silicon = more gates = better processors...so we don't get situations where artificial limiters determine wild swings in pricing. That may sound idealistic, but I also believe that if we could link this together then there would be less e-waste and you could patch out vulnerabilities by literally rebuilding every single one in existence with a software patch...which makes issues like Intel and AMD have with security due to predictive algorithms disappear without having to significantly reset hardware expectations. Again, dreaming more than a little. That said, having dreams is occasionally nice.
#13
JasBC
R-T-B: I honestly don't see anything wrong with the name.
Look up Qualcomm's naming scheme since they started doing "Snapdragon 8/7/6/4 Gen X" and try to make sense of product groupings with all the Plus- and S-models, which make some 7-series chips better than most 8-series chips and some 6-series chips worse than most entry-level 4-series chips...

It's a product naming scheme meant to confuse the consumer, and quite honestly it's just ugly and hard not to laugh at for its obtuseness.
#14
kapone32
Yet another chip to clearly separate the CPU hierarchy. Whether you're a fanboy or not, you have to admit that AMD has changed the CPU space for the good.
#15
SOAREVERSOR
lilhasselhoffer: So...
1) An FPGA is a field programmable gate array. My interest here is that you could image a processor onto the thing. Think not having to have ray-trace features take up space in your GPU, should you not enable them.
2) Speaking of why this matters for gaming...and the obvious fun for makers, imagine having a locked AMD image for about half a dozen configurations that AMD could tweak for each game. Very heavy on developmental features, but if they made their general performance values available developers could target to have specific features at specific levels enabled and disabled based on series. Think something along the lines of a slider in a game telling the FPGA to make a GPU that excels with the desired settings, no matter how big or small it is.

3) My limited time using FPGAs was focused on garbage software and performance generations old. If AMD is serious about this, and they support it with even a little bit of their open source initiative, I see a beautiful future where you emulate processors with a license, and a single piece of hardware can be your physics processor, NPU, or Raspberry-PI killer based only on whatever you plug into it.
4) FPGAs are not meant to be CPUs. They are meant to be a breadboard to make your own custom gate arrangement. By definition a CPU, like any processor, is a gate arrangement. I say this because an FPGA is the processor that you don't have to have lithography for...which makes it both the most flexible and least efficient (space wise) option. I choose to ignore the obvious downsides because most of them are imposed by having worse processes and poor support, whereas most CPUs are good at whatever they were designed and priced for.


I just have high hopes because, assuming the security is good, this is the future where you can download a licensed image to make the best processor for whatever it is you are doing...which means stuff like the GPU crypto craze will disappear under simply wanting to buy a literal chunk of silicon. Bigger silicon = more gates = better processors...so we don't get situations where artificial limiters determine wild swings in pricing. That may sound idealistic, but I also believe that if we could link this together then there would be less e-waste and you could patch out vulnerabilities by literally rebuilding every single one in existence with a software patch...which makes issues like Intel and AMD have with security due to predictive algorithms disappear without having to significantly reset hardware expectations. Again, dreaming more than a little. That said, having dreams is occasionally nice.
Sigh, this is not for gaming or gamers; it's for serious stuff and serious people. It's going to be expensive as hell to boot.
#16
OSdevr
lilhasselhoffer: So...
1) An FPGA is a field programmable gate array. My interest here is that you could image a processor onto the thing. Think not having to have ray-trace features take up space in your GPU, should you not enable them.
2) Speaking of why this matters for gaming...and the obvious fun for makers, imagine having a locked AMD image for about half a dozen configurations that AMD could tweak for each game. Very heavy on developmental features, but if they made their general performance values available developers could target to have specific features at specific levels enabled and disabled based on series. Think something along the lines of a slider in a game telling the FPGA to make a GPU that excels with the desired settings, no matter how big or small it is.

3) My limited time using FPGAs was focused on garbage software and performance generations old. If AMD is serious about this, and they support it with even a little bit of their open source initiative, I see a beautiful future where you emulate processors with a license, and a single piece of hardware can be your physics processor, NPU, or Raspberry-PI killer based only on whatever you plug into it.
4) FPGAs are not meant to be CPUs. They are meant to be a breadboard to make your own custom gate arrangement. By definition a CPU, like any processor, is a gate arrangement. I say this because an FPGA is the processor that you don't have to have lithography for...which makes it both the most flexible and least efficient (space wise) option. I choose to ignore the obvious downsides because most of them are imposed by having worse processes and poor support, whereas most CPUs are good at whatever they were designed and priced for.


I just have high hopes because, assuming the security is good, this is the future where you can download a licensed image to make the best processor for whatever it is you are doing...which means stuff like the GPU crypto craze will disappear under simply wanting to buy a literal chunk of silicon. Bigger silicon = more gates = better processors...so we don't get situations where artificial limiters determine wild swings in pricing. That may sound idealistic, but I also believe that if we could link this together then there would be less e-waste and you could patch out vulnerabilities by literally rebuilding every single one in existence with a software patch...which makes issues like Intel and AMD have with security due to predictive algorithms disappear without having to significantly reset hardware expectations. Again, dreaming more than a little. That said, having dreams is occasionally nice.
Point 4 largely answers point 1. FPGAs are the king of flexibility, but they are always inferior to custom silicon when it comes to performance and efficiency. Bitcoin accelerators actually used FPGAs before ASICs were made for it. Sure FPGAs cost less than custom silicon in smaller quantities (so the biggest ones are often used for prototyping custom silicon) and tend to perform a specific task better than software on a CPU, but they can't match fully custom silicon when it comes to performance. Most modern FPGAs include "hard blocks" ranging from small blocks of SRAM to entire CPUs, GPUs and other accelerators to even things up a bit. It's rare for all the hard blocks on an FPGA (or even peripherals on a microcontroller) to be used so you're "wasting" silicon space anyway just like raytracing hardware on GPUs is never used by most gamers. It's there because there's someone else who can make use of it. All the Versals include multiple types of ARM cores among other accelerators which is why AMD calls them "Adaptive SoCs."

"Licensable IP cores" are pretty standard in FPGA development, and some of the basic ones (or ones that use chip specific functions) are freely available from the FPGA vendor and others, but the cutting edge stuff is all expensive since it's generally based on companies using them in their designs. They're not designed around or priced per unit. I'd be great if this changed, but it'd require a major overhaul in all sorts of software.

As it stands, FPGAs are beyond the skills of even most makers to make use of, let alone gamers. There's continuous work to make them more approachable, but beyond throwing a premade bitstream on a chip in an FPGA-based "emulator," there's an incredibly steep learning curve. FPGAs haven't had their "Arduino" moment yet.