Tuesday, April 9th 2019

Intel Reveals the "What" and "Why" of CXL Interconnect, its Answer to NVLink

CXL, short for Compute Express Link, is an ambitious new interconnect technology for removable high-bandwidth devices, such as GPU-based compute accelerators, in a data-center environment. It is designed to overcome many of the technical limitations of PCI-Express, not the least of which is bandwidth. Intel sensed that its upcoming family of scalable compute accelerators under the Xe brand needs a specialized interconnect, which Intel wants to push as the next industry standard. The development of CXL is also triggered by compute accelerator majors NVIDIA and AMD already having similar interconnects of their own, NVLink and InfinityFabric, respectively. At a dedicated event dubbed "Interconnect Day 2019," Intel put out a technical presentation that spelled out the nuts and bolts of CXL.

Intel began by describing why the industry needs CXL, and why PCI-Express (PCIe) doesn't suit its use-case. For a client-segment device, PCIe is perfect, since client-segment machines don't have too many devices or too much memory, and their applications don't have a very large memory footprint or scale across multiple machines. PCIe falls short in the data-center, when dealing with multiple bandwidth-hungry devices and vast shared memory pools. Its biggest shortcomings are isolated memory pools for each device and inefficient access mechanisms. Resource-sharing is almost impossible, and sharing operands and data between multiple devices, such as two GPU accelerators working on a problem, is very inefficient. And lastly, there's latency, lots of it. Latency is the biggest enemy of shared memory pools that span multiple physical machines. CXL is designed to overcome many of these problems without discarding the best part of PCIe: the simplicity and adaptability of its physical layer.
CXL uses the PCIe physical layer, and has a raw on-paper bandwidth of 32 Gbps (32 GT/s) per lane, per direction, which aligns with the PCIe gen 5.0 standard. The link layer is where all the secret sauce is. Intel worked on new handshake, auto-negotiation, and transaction protocols that replace those of PCIe, designed to overcome the shortcomings listed above. With PCIe gen 5.0 already standardized by the PCI-SIG, Intel could share CXL IP back to the SIG with PCIe gen 6.0. In other words, Intel admits that CXL may not outlive PCIe, and until the PCI-SIG can standardize gen 6.0 (around 2021-22, if not later), CXL is the need of the hour.
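For a sense of scale, here is a quick back-of-the-envelope calculation of what those per-lane numbers add up to. This is only a sketch: the 128b/130b encoding factor is the one PCIe 5.0 uses at the physical layer, and protocol framing overhead is ignored, so real-world throughput will be somewhat lower.

```python
# Rough per-direction bandwidth of a CXL / PCIe gen 5.0 link.
# Assumption: 32 GT/s line rate per lane with 128b/130b encoding;
# flit/framing overhead is ignored, so real throughput is lower.

LINE_RATE = 32e9          # 32 GT/s per lane, per direction
ENCODING = 128 / 130      # 128b/130b encoding efficiency

def link_bandwidth_gb_s(lanes: int) -> float:
    """Approximate payload bandwidth in GB/s for a `lanes`-wide link."""
    return lanes * LINE_RATE * ENCODING / 8 / 1e9  # bits -> bytes -> GB

for lanes in (1, 4, 8, 16):
    print(f"x{lanes:<2}: ~{link_bandwidth_gb_s(lanes):.1f} GB/s per direction")
# x16: ~63.0 GB/s per direction
```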
The CXL transaction layer consists of three multiplexed sub-protocols that run simultaneously on a single link. They are: CXL.io, CXL.cache, and CXL.memory. CXL.io deals with device discovery, link negotiation, interrupts, register access, etc., which are basically the tasks that get a machine to work with a device. CXL.cache deals with a device's access to the local processor's memory. CXL.memory deals with the processor's access to non-local memory (memory controlled by another processor or another machine).
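To illustrate how three sub-protocols can share one physical link, here is a deliberately simplified sketch. Only the three sub-protocol names come from Intel's presentation; the Message format and the round-robin arbitration are invented for illustration and do not reflect the actual CXL flit layout or arbitration rules.

```python
from dataclasses import dataclass
from enum import Enum
from itertools import chain
from typing import Iterator

class SubProtocol(Enum):
    IO = "CXL.io"        # discovery, link negotiation, interrupts, register access
    CACHE = "CXL.cache"  # device access to the local processor's memory
    MEM = "CXL.memory"   # processor access to non-local memory

@dataclass
class Message:
    proto: SubProtocol   # which sub-protocol this message belongs to
    payload: bytes

def multiplex(*queues: Iterator[Message]) -> Iterator[Message]:
    """Interleave messages from each sub-protocol queue onto one link.
    (Real CXL packs traffic into fixed-size flits under its own
    arbitration rules; round-robin here is purely illustrative.)"""
    yield from chain.from_iterable(zip(*queues))

# One message per sub-protocol, all travelling over the same link:
link = multiplex(
    iter([Message(SubProtocol.IO, b"config-read")]),
    iter([Message(SubProtocol.CACHE, b"read-shared")]),
    iter([Message(SubProtocol.MEM, b"mem-write")]),
)
for msg in link:
    print(msg.proto.value, msg.payload)
```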

Intel listed out use-cases for CXL, beginning with accelerators with memory, such as graphics cards, GPU compute accelerators, and high-density compute cards. All three CXL transaction-layer protocols are relevant to such devices. Next up are FPGAs and NICs; CXL.io and CXL.cache are relevant here, since network stacks are processed by processors local to the NIC. Lastly, there are the all-important memory buffers. You can imagine these devices as "NAS, but with DRAM sticks." Future data-centers will consist of vast memory pools shared between thousands of physical machines and accelerators; here, CXL.memory and CXL.cache are relevant. This mapping is summarized in the sketch below. Much of what makes the CXL link-layer faster than PCIe is its optimized stack, which reduces processing load on the CPU. The CXL stack is built from the ground up with low latency as a design goal.
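The use-case-to-protocol mapping above can be written down as data. A minimal sketch, using only the pairings as Intel presented them (the dictionary keys are descriptive labels, not CXL terminology):

```python
# Which CXL sub-protocols each device class exercises, per Intel's
# "Interconnect Day 2019" breakdown.
CXL_USE_CASES = {
    "accelerators with memory": {"CXL.io", "CXL.cache", "CXL.memory"},  # GPUs, compute cards
    "FPGAs and NICs":           {"CXL.io", "CXL.cache"},                # NIC-local processing
    "memory buffers":           {"CXL.memory", "CXL.cache"},            # pooled "NAS with DRAM sticks"
}
```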
Source: Serve the Home

37 Comments on Intel Reveals the "What" and "Why" of CXL Interconnect, its Answer to NVLink

#2
R0H1T
You don't even have PCIe gen4 yet & you're gunning for gen6, reminds me of that "10nm coming soon" promise :rolleyes:
#3
londiste
R0H1T: You don't even have PCIe gen4 yet & you're gunning for gen6, reminds me of that "10nm coming soon" promise :rolleyes:
Current estimate is that, due to 4.0 and 5.0 being less than two years apart, 4.0 (officially announced in June 2017) will get overwhelmed by 5.0 (the final spec is expected to be ratified in Q1 2019). Compare this to 3.0 being from November 2010.

There have been rumors that Intel intends to skip PCI-Express 4.0 completely.
#4
R0H1T
londiste: Current estimate is that, due to 4.0 and 5.0 being less than two years apart, 4.0 (officially announced in June 2017) will get overwhelmed by 5.0 (the final spec is expected to be ratified in Q1 2019). Compare this to 3.0 being from November 2010.

There have been rumors that Intel intends to skip PCI-Express 4.0 completely.
Yes, & looking at that S/A article, Intel seems to want to lock people into CXL, a proprietary lookalike of CCIX. Besides, we already have PCIe 4.0 CPUs, GPUs, SSDs(?) & accelerators out there. Yet Intel does what it knows best, only for themselves!
#5
londiste
R0H1T: Yes, & looking at that S/A article, Intel seems to want to lock people into CXL, a proprietary lookalike of CCIX. Besides, we already have PCIe 4.0 CPUs, GPUs, SSDs(?) & accelerators out there.
First, S/A and Charlie are not exactly objective about anything Intel ;)
CXL is a protocol on top of PCI-e 5.0, similarly to CCIX on top of PCI-e 4.0 (at least in its current iteration). Whether Intel has something nefarious in mind, we will have to wait and see. They make it sound like an evolution of CXL, or something similar, is what they would like to eventually see in PCI-e 6.0 proper.

What we already have is not exactly optimal for the purpose. Intel does talk about why they want a new interconnect. Putting this on Intel is a bit strange, as CCIX quite literally comes from the same points, but from AMD, ARM, Qualcomm, Xilinx, etc. There are also other interconnects like IF or NVLink.
#6
nemesis.ie
SA may have a good dose of anti-Intel bias, but that does not mean they are wrong. ;)

So if CCIX is so similar, why are they "modifying it" themselves rather than joining the party with everyone else?
#7
R0H1T
CCIX is an open standard, likewise GenZ IIRC, lest you've forgotten what happened to FireWire, TB, G-Sync & so many others before these. Firstly, CXL will lock users into the Intel ecosystem; second, there will be a CXL "tax"; & lastly, with Intel controlling pretty much the entire consortium, it's their way or the highway. I'm sure there are other technical differences, but on the face of it I see no reason why CXL should be preferred over GenZ or CCIX atm.
#8
londiste
nemesis.ie: SA may have a good dose of anti-Intel bias, but that does not mean they are wrong. ;)
So if CCIX is so similar, why are they "modifying it" themselves rather than joining the party with everyone else?
You are right about S/A and Charlie being right sometimes. Just not all the time, and they are clickbait-y with their headlines.

I have not had a chance to read the entire CCIX spec (a simple search doesn't turn it up, and I have not jumped through enough hoops to get the full document), and the CXL spec is not public AFAIK. While having their own version of everything is probably part of it, from what has been revealed the solution seems to be somewhat different. Intel's approach is no doubt geared and optimized to their specific needs.
#9
jabbadap
londiste: First, S/A and Charlie are not exactly objective about anything Intel ;)
CXL is a protocol on top of PCI-e 5.0, similarly to CCIX on top of PCI-e 4.0 (at least in its current iteration). Whether Intel has something nefarious in mind, we will have to wait and see. They make it sound like an evolution of CXL, or something similar, is what they would like to eventually see in PCI-e 6.0 proper.

What we already have is not exactly optimal for the purpose. Intel does talk about why they want a new interconnect. Putting this on Intel is a bit strange, as CCIX quite literally comes from the same points, but from AMD, ARM, Qualcomm, Xilinx, etc. There are also other interconnects like IF or NVLink.
Maybe they feel the pressure from Big Blue.
#10
londiste
jabbadap: Maybe they feel the pressure from Big Blue.
IBM is part of all these, and a crucial part of some. They are definitely a player here. The OpenFabrics Alliance is probably the larger body for this stuff: www.openfabrics.org/
#11
kastriot
So basically for desktop users this is not important in the next 5-10 years?
#12
londiste
kastriot: So basically for desktop users this is not important in the next 5-10 years?
It is never going to be relevant to desktop users.
#13
sergionography
londiste: First, S/A and Charlie are not exactly objective about anything Intel ;)
CXL is a protocol on top of PCI-e 5.0, similarly to CCIX on top of PCI-e 4.0 (at least in its current iteration). Whether Intel has something nefarious in mind, we will have to wait and see. They make it sound like an evolution of CXL, or something similar, is what they would like to eventually see in PCI-e 6.0 proper.

What we already have is not exactly optimal for the purpose. Intel does talk about why they want a new interconnect. Putting this on Intel is a bit strange, as CCIX quite literally comes from the same points, but from AMD, ARM, Qualcomm, Xilinx, etc. There are also other interconnects like IF or NVLink.
Charlie Demerjian is one of the best tech analysts, in my opinion. Dude sure knows his stuff. Sure, he is pretty critical of Intel, though I truly think it is well justified, as Intel has proved over and over again that they are unethical as hell. Though to be fair, I am liking the new Intel better, as they seem to be getting more streamlined under the new management.
londiste: You are right about S/A and Charlie being right sometimes. Just not all the time, and they are clickbait-y with their headlines.
This is also not exactly true, because they are subscription-based, so they hardly rely on clickbait; that type of traffic doesn't net them anything.
#14
bug
R0H1T: You don't even have PCIe gen4 yet & you're gunning for gen6, reminds me of that "10nm coming soon" promise :rolleyes:
Yes, because this news piece is totally about PCIe 6.0 :rolleyes:
#15
R0H1T
I guess you don't see the slides, nor the promise of making CXL open (standard?) by gen 6.0?
"With PCIe gen 5.0 already standardized by the PCI-SIG, Intel could share CXL IP back to the SIG with PCIe gen 6.0."
But of course you didn't.
So if Intel doesn't get their way, this will likely end up like TB, without the USB bailout :rolleyes:
#16
Steevo
Remind me again how the couple-percent difference in our own resident PCIe bandwidth testing from W1zz shows we don't yet need more bandwidth to GPUs, unless you want to stack a bunch together, which has never really scaled well, but that's more about resources and management than bandwidth...

Sounds like Intel wants to make standards that offer little benefit but cost a lot to license.
#17
SoNic67
Latency is not the same as bandwidth. In many applications that use the GPU to accelerate CPU calculations, even at the desktop level, I am already seeing latency effects: CPU, GPU, and memory usage are not maxed out at 100%, but some apps cannot push utilization any higher.
That's why we "don't need more bandwidth": latency kills any speed we could gain from it.

Intel proposing this to be incorporated into the PCIe standard is nothing nefarious, since they are already members of the PCI-SIG consortium:
pcisig.com/membership/member-companies?combine=intel
I don't see how this translates into Intel "wanting to get a fee".

As for people that bash Intel just because they feel it's "cool" and they think they "know better"... whatever inflates your ego is fine to put online, for everyone to see.
#18
R0H1T
This isn't going to be incorporated into PCIe anytime soon; at the earliest gen 6.0, & only if Intel feels generous. This is as proprietary as TB was at launch, and there are also competing standards which are in fact open.
#19
bug
R0H1T: This isn't going to be incorporated into PCIe anytime soon; at the earliest gen 6.0, & only if Intel feels generous. This is as proprietary as TB was at launch, and there are also competing standards which are in fact open.
It's as open as the NVLink and InfinityFabric it competes with ;)
#20
R0H1T
NVLink & IF aren't CXL's direct competitors; it's CCIX & GenZ, though the point about proprietary is 100% valid.
#21
eidairaman1
The Exiled Airman
As the others have interconnects that work well with existing standards, Intel wants to abandon PCIe...
#22
SoNic67
There is some level of anti-Intel obsession here. Like Intel owes something to anybody, meanwhile NVIDIA and AMD proprietary solutions are looked at as "meh, nothing to see, look away". Yes, CCIX is AMD's baby, and others are "contributors".
CXL, besides Intel, has already gained a lot of support from other big names interested in computing, so put that in perspective:
www.computeexpresslink.org/members

ARM, Google, Cisco, Facebook, Alibaba, Dell, HP, Huawei, Lenovo, Microsoft, Microchip... they are all into giving Intel free money???
A standard is as strong as the money behind it and its adoption by industry. The better standard (by all measures) will win.
#23
Patriot
Steevo: Remind me again how the couple-percent difference in our own resident PCIe bandwidth testing from W1zz shows we don't yet need more bandwidth to GPUs, unless you want to stack a bunch together, which has never really scaled well, but that's more about resources and management than bandwidth...

Sounds like Intel wants to make standards that offer little benefit but cost a lot to license.
This is not for your desktop, Steevo; this is for servers, where the bandwidth isn't as much for single-device performance as for device-to-device performance. x8 may be fine for a single GPU to not lose performance, but not if it wants to work with 15 others and compete against NVLink. This is also Intel railroading and not joining the other consortiums... which are already open standards now... not to be opened in 2nd gen. This is a desperate lock-in attempt for their Cascade Lake failings.
#24
eidairaman1
The Exiled Airman
SoNic67: There is some level of anti-Intel obsession here. Like Intel owes something to anybody, meanwhile NVIDIA and AMD proprietary solutions are looked at as "meh, nothing to see, look away". Yes, CCIX is AMD's baby, and others are "contributors".
CXL, besides Intel, has already gained a lot of support from other big names interested in computing, so put that in perspective:
www.computeexpresslink.org/members

ARM, Google, Cisco, Facebook, Alibaba, Dell, HP, Huawei, Lenovo, Microsoft, Microchip... they are all into giving Intel free money???
A standard is as strong as the money behind it and its adoption by industry. The better standard (by all measures) will win.
Plenty of AMD bias here too.
#25
bug
SoNic67: There is some level of anti-Intel obsession here. Like Intel owes something to anybody, meanwhile NVIDIA and AMD proprietary solutions are looked at as "meh, nothing to see, look away". Yes, CCIX is AMD's baby, and others are "contributors".
CXL, besides Intel, has already gained a lot of support from other big names interested in computing, so put that in perspective:
www.computeexpresslink.org/members

ARM, Google, Cisco, Facebook, Alibaba, Dell, HP, Huawei, Lenovo, Microsoft, Microchip... they are all into giving Intel free money???
A standard is as strong as the money behind it and its adoption by industry. The better standard (by all measures) will win.
I will add to that: a standard is an agreed-upon way of doing things. The trouble is, it's hard to do stuff for the first(ish) time and agree with everybody else.
So oftentimes, when companies decide to go at it by themselves, it's not because they're after your cash (well, they are, in the end), but because they need a product out there.