Wednesday, August 2nd 2023

PCI-SIG Exploring an Optical Interconnect to Enable Higher PCIe Technology Performance

PCI-SIG today announced the formation of a new workgroup to deliver PCI Express (PCIe) technology over optical connections. The PCI-SIG Optical Workgroup intends to be optical technology-agnostic, supporting a wide range of optical technologies, while potentially developing technology-specific form factors.

"Optical connections will be an important advancement for PCIe architecture as they will allow for higher performance, lower power consumption, extended reach and reduced latency," said Nathan Brookwood, Research Fellow at Insight 64. "Many data-demanding markets and applications such as Cloud and Quantum Computing, Hyperscale Data Centers and High-Performance Computing will benefit from PCIe architecture leveraging optical connections."
"We have seen strong interest from the industry to broaden the reach of the established, multi-generational and power-efficient PCIe technology standard by enabling optical connections between applications," said PCI-SIG President and Chairperson Al Yanes. "PCI-SIG welcomes input from the industry and invites all PCI-SIG members to join the Optical Workgroup, share their expertise and help set specific workgroup goals and requirements."

Existing PCI-SIG workgroups will continue their generational march towards a 128GT/s data rate in the PCIe 7.0 specification, while this new optical workgroup will work to make the PCIe architecture more optical-friendly.
Source: PCI-SIG

32 Comments on PCI-SIG Exploring an Optical Interconnect to Enable Higher PCIe Technology Performance

#1
TheLostSwede
News Editor
The picture is not related to what the PCI-SIG is working on and was just added as an illustration.
Posted on Reply
#2
LabRat 891
This would circumvent the escalating attenuation issues with Gen5-onwards.

Interested in if they intend some kind of standard adapter/backwards compatibility, or if this is expected to be 'industry-use only'?
Posted on Reply
#3
Assimilator
LabRat 891This would circumvent the escalating attenuation issues with Gen5-onwards.
You assume that optical technology won't have its own issues.
LabRat 891Interested in if they intend some kind of standard adapter/backwards compatibility, or if this is expected to be 'industry-use only'?
Optical is the transport mechanism, PCIe is the protocol; there is no backwards compatibility concern.
Posted on Reply
#4
lemonadesoda
Interesting. If one fibre optic could carry multiple PCIe lanes... imagine what you could "dock" a laptop on to... Or reimagine what a mainboard or PC looks like if all you needed was one fibre optic link to carry 8x PCIe v4 lanes (8 × 16 GT/s for v4 ≈ 128 GT/s for a single v7 lane).
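As a rough back-of-the-envelope check on that comparison, here is a minimal sketch. The per-lane rates are the headline GT/s figures, and the encoding efficiencies (128b/130b for PCIe 4.0, an assumed ~94% FLIT efficiency for PCIe 7.0) are approximations; PCIe 7.0 numbers are pre-release.

```python
# Rough per-direction bandwidth comparison: 8 lanes of PCIe 4.0 vs one lane of
# PCIe 7.0. Encoding efficiencies are approximate: PCIe 4.0 uses 128b/130b line
# coding; PCIe 6.0/7.0 use PAM4 signalling with FLIT framing (~94% assumed here).

def usable_gb_per_s(lanes: int, gt_per_s: float, efficiency: float) -> float:
    """Approximate usable bandwidth per direction, in GB/s."""
    return lanes * gt_per_s * efficiency / 8  # 8 bits per byte

pcie4_x8 = usable_gb_per_s(lanes=8, gt_per_s=16, efficiency=128 / 130)
pcie7_x1 = usable_gb_per_s(lanes=1, gt_per_s=128, efficiency=0.94)

print(f"PCIe 4.0 x8: ~{pcie4_x8:.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 7.0 x1: ~{pcie7_x1:.1f} GB/s")  # ~15.0 GB/s
```

So, in raw terms, eight PCIe 4.0 lanes and a single 128 GT/s lane land in roughly the same ballpark per direction.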
Posted on Reply
#5
Panther_Seraphin
lemonadesodaInteresting. If one fibre optic could carry multiple PCIe lanes... imagine what you could "dock" a laptop on to... Or reimagine what a mainboard or PC looks like if all you needed was one fibre optic link to carry 8x PCIe v4 lanes (8 × 16 GT/s for v4 ≈ 128 GT/s for a single v7 lane).
Optical fibre transceivers aren't cheap!! So expect this to be a data-center-only thing for the short/mid term.

For example, Corning's optical 5 m Thunderbolt cable is ~£400.

A passive 2 m cable from a reputable brand is ~£30.
Posted on Reply
#6
LabRat 891
AssimilatorYou assume that optical technology won't have its own issues.


Optical is the transport mechanism, PCIe is the protocol; there is no backwards compatibility concern.
You assume, I assumed. Optical has its own 'issues', yes. However, we're about at the edge of what traditional copper interconnects can do with modern signalling methodology.
Fibre Optics are fairly well developed, but there's immense 'headroom' for faster and faster transfer rates.


...and by PCI-SIG spec, PCIe is backwards compatible with 32-bit and 64-bit PCI and PCI-X too. 'Parallel' was just the transport mechanism, right?

Still requires a bridge chip for the signals. That's what I'm curious about.
Posted on Reply
#7
evernessince
LabRat 891You assume, I assumed. Optical has its own 'issues', yes. However, we're about at the edge of what traditional copper interconnects can do with modern signalling methodology.
Fibre Optics are fairly well developed, but there's immense 'headroom' for faster and faster transfer rates.


...and by PCI-SIG spec, PCIe is backwards compatible with 32-bit and 64-bit PCI and PCI-X too. 'Parallel' was just the transport mechanism, right?

Still requires a bridge chip for the signals. That's what I'm curious about.
We can all stop assuming and just read this scholarly paper on the challenges of optical interconnects for electronic chips (Note that the following link is a direct download of the PDF): citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=d9dab715629bb9672886e3f2b833b121d583e6bc
Posted on Reply
#9
Nanochip
With optical PCIe, will that be a boon for external connections as well, and replace the likes of OCuLink and Thunderbolt?
Posted on Reply
#10
Darmok N Jalad
This could change the way we build PCs, which are facing the limits of power and cooling and even mounting in existing form factors. I wonder if they could potentially separate the main components more effectively so that nothing is dumping heat directly into the case. The PSU, CPU and GPU could each get their own sub-module, or even be separated entirely from each other. The GPU could be the monitor base with some IO ports or something like that.
Posted on Reply
#11
Dristun
Darmok N JaladThis could change the way we build PCs, which are facing the limits of power and cooling and even mounting in existing form factors. I wonder if they could potentially separate the main components more effectively so that nothing is dumping heat directly into the case. The PSU, CPU and GPU could each get their own sub-module, or even be separated entirely from each other. The GPU could be the monitor base with some IO ports or something like that.
Most of this can be done now with standard cables and connectors but requires a move away from ATX, which nobody seems to be able to try and force.
Posted on Reply
#12
Six_Times
Intel worked on the same thing a decade ago. PCI-SIG should team up with them to possibly develop at a faster pace.
Posted on Reply
#13
InVasMani
I had proposed the use of optical for the PCIe interconnect on forums back in 2020. It's interesting to see PCI-SIG exploring the idea. I don't know how effective it'll be on cost for now, with optical not being widely adopted, which doesn't really help in bringing prices down. Relative costs will still probably improve a bit over time, the way most things do. I see a bright future ahead for optical technology.

Something that could be done as well is better use of 3D space in regard to PCBs and interconnects. Running a PCIe slot 6 to 7 inches away isn't nearly as optimal as a quarter-inch interconnect between two short, stacked PCB planes. Re-thinking how PCs are built and laid out is one area for notable improvement in trace layout and performance: stacking PCBs cleverly reduces the overall trace lengths between source and destination.
Posted on Reply
#14
TheinsanegamerN
DristunMost of this can be done now with standard cables and connectors but requires a move away from ATX, which nobody seems to be able to try and force.
Intel did try, with BTX, and failed miserably.

It's a monumental task to force a different standard into an established market. ATX is long in the tooth, but somebody will need to shoulder the cost of retooling, well, everything. New cases, PSU designs, PCB designs, etc.
Posted on Reply
#15
Darmok N Jalad
TheinsanegamerNIntel did try, with BTX, and failed miserably.

It's a monumental task to force a different standard into an established market. ATX is long in the tooth, but somebody will need to shoulder the cost of retooling, well, everything. New cases, PSU designs, PCB designs, etc.
Just put RGB on it and it will pay for itself. :D

In all seriousness now, BTX was too soon and was seen as Intel trying to push a change so they could sell their failing Netburst designs. Power and cooling needs were quite modest compared to today’s stuff. Now we’re starting to max out air cooling in the space we have to work with. Even then, OEMs are already taking some liberties with the ATX form factor. I’m really surprised the aftermarket world hasn’t pushed for something different. I guess it does take cooperation from AMD and Intel on designing something around the CPU.
Posted on Reply
#16
Solaris17
Super Dainty Moderator
Darmok N JaladJust put RGB on it and it will pay for itself. :D
Can't wait to stare at my new blinky Class 1 laser product!!

I am curious about heat. Transceivers get prettttttyyyy warm.
Posted on Reply
#17
Frick
Fishfaced Nincompoop
I just want to sell optical chips on Leeds.
Posted on Reply
#18
LabRat 891
FrickI just want to sell optical chips on Leeds.
You can trust Richard Winston Tobias to make you the best deal anywhere!


Geeze, that takes me back. o_O
Posted on Reply
#19
ymdhis
InVasManiSomething that could be done as well is better use of 3D space in regard to PCBs and interconnects. Running a PCIe slot 6 to 7 inches away isn't nearly as optimal as a quarter-inch interconnect between two short, stacked PCB planes. Re-thinking how PCs are built and laid out is one area for notable improvement in trace layout and performance: stacking PCBs cleverly reduces the overall trace lengths between source and destination.
We already have some boards that use clever dongles to mount several things in 3D, like a daughterboard that plugs into the motherboard and has both 90-degree SATA and USB ports on it. Some boards have NVMe slots stacked on top of each other (can't imagine it does the heat output any good).
It's mostly a thing with ITX boards since they are the most space-limited.
Posted on Reply
#20
Darmok N Jalad
I wonder how practical something like putting the motherboard in the middle of a wider but shorter case would be, where the CPU, RAM, and NVMe drives are on one side, and the PCIe slots are on the other. The motherboard itself could act as a natural barrier, and cooling could be approached from pretty much all sides. I suppose if GPUs could be mounted vertically, it may not even take up more width than a traditional case. They might even be able to solve the issue of trace length for high-speed PCIe, at least in the short term. We could probably also get away with much shorter cabling.
Posted on Reply
#21
chrcoluk
If not backwards compat, forget it.
Posted on Reply
#22
zlobby
TheLostSwedeThe picture is not related to what the PCI-SIG is working on and was just added as an illustration.
It irks me that you need to clarify that. :D
lemonadesodaInteresting. If one fibre optic could carry multiple PCIe lanes...
It's called DWDM, although another form of muxing could be adopted for that (rough sketch at the end of this post).
Solaris17I am curious about heat. Transceivers get prettttttyyyy warm.
Ask nvidia to hook you up with a cooler. Theirs seem to be a hot topic these days.
AssimilatorYou assume that optical technology won't have its own issues.


Optical is the transport mechanism, PCIe is the protocol; there is no backwards compatibility concern.
Someone didn't know about ISO's OSI model. :)
Panther_SeraphinOptical fibre transceivers aren't cheap!! So expect this to be a data-center-only thing for the short/mid term.

For example, Corning's optical 5 m Thunderbolt cable is ~£400.

A passive 2 m cable from a reputable brand is ~£30.
I have heard of 800 Gbps FO transceivers and those cost bonkers! And they can work at lengths no greater than Mr. Leather Jacket's pp.
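As a toy sketch of the DWDM idea above, assuming a hypothetical 4-channel grid where each wavelength carries one PCIe lane (channel spacing and per-channel rates are invented for the example):

```python
# Toy sketch of wavelength-division multiplexing: several PCIe lanes share one
# fibre by riding on different wavelengths. All figures here are hypothetical
# and only illustrate how aggregate capacity scales with the channel count.
from dataclasses import dataclass

@dataclass
class WdmChannel:
    wavelength_nm: float  # carrier wavelength for this channel
    rate_gtps: float      # per-channel signalling rate, in GT/s

# Hypothetical 4-channel grid, each channel carrying one PCIe lane.
channels = [WdmChannel(wavelength_nm=1310.0 + i * 0.8, rate_gtps=64.0)
            for i in range(4)]

aggregate = sum(ch.rate_gtps for ch in channels)
print(f"{len(channels)} lanes over one fibre, ~{aggregate:.0f} GT/s aggregate")
```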
Posted on Reply
#23
InVasMani
Darmok N JaladI wonder how practical something like putting the motherboard in the middle of a wider but shorter case would be, where the CPU, RAM, and NVMe drives are on one side, and the PCIe slots are on the other. The motherboard itself could act as a natural barrier, and cooling could be approached from pretty much all sides. I suppose if GPUs could be mounted vertically, it may not even take up more width than a traditional case. They might even be able to solve the issue of trace length for high-speed PCIe, at least in the short term. We could probably also get away with much shorter cabling.
Two motherboards that connect along the edge with a short interconnect, inside a newly designed case format, would make the most sense to me. Not only do you shorten trace lengths, but you increase PCB space significantly, which has big ramifications from a design standpoint. You could easily have a second connection along the memory edge as well if more PCIe slot bandwidth were needed to the daughter motherboard. A slight deviation from that would allow for some serious stacking of M.2 drives on a PCB for a dense RAID of them, provided you have enough PCIe lanes. Instead, let's stick to the same dated formula with long PCB trace lengths that end up increasing design costs as well.

I think we're overdue for approaching it a bit more optimally. In a good case design you could probably squeeze in 4 full-size ATX or micro-ATX boards with pretty short connections between them, especially with optical. Some kind of PCIe connection along each of the 3 upper board edges, and you could connect each one to another board for 4 boards in total. That certainly gives you a lot of PCB space to work with. It complicates cooling more depending on how dense you get with it, and depending on how spread apart the boards are, at least without optical, you're kind of back to square one on long trace lengths.
Posted on Reply
#24
A Computer Guy
Solaris17Can't wait to stare at my new blinky Class 1 laser product!!

I am curious about heat. Transceivers get prettttttyyyy warm.
Maybe new reasons to get into water cooling. A full-nickel EK transceiver block; can't wait for it, except for the price tag.
Posted on Reply
#25
mouacyk
Really, is this where the bottleneck is?
Posted on Reply