Friday, March 21st 2025

PCI-SIG Ratifies PCI Express 7.0 Specification to Reach 128 GT/s

The AI data center buildout requires massive bandwidth from accelerator to accelerator and from accelerator to CPU. At the core of that bandwidth bridge is PCIe technology, which must constantly evolve to satisfy those requirements. Today, PCI-SIG, the working group behind the PCI and PCIe standards, is releasing details about the nearly complete 0.9 version of the PCIe 7.0 specification and its final parameters. PCIe 7.0 will bring 128 GT/s speeds, with bi-directional bandwidth of 512 GB/s in the x16 lane configuration. It targets applications such as 800G Ethernet, AI/ML, cloud and hyperscale computing, quantum computing, and military/aerospace, all of which need massive bandwidth for their respective use cases to work flawlessly.

Interestingly, as PCIe doubles bandwidth on its traditional three-year cadence, high bandwidth for things like storage is becoming available on fewer and fewer lanes. For example, PCIe 3.0 with x16 lanes delivers 32 GB/s of bi-directional bandwidth; PCIe 7.0 now delivers that same bandwidth on a single x1 lane. Other goals of PCIe 7.0 include significant improvements in channel parameters and signal integrity, enhanced power efficiency, and preservation of the protocol's low-latency characteristics, all while ensuring complete backward compatibility with previous generations of the standard. Notably, the PCIe 7.0 standard uses PAM4 signaling, first introduced with PCIe 6.0; a PAM4 signaling primer is worth a read for those who want to learn more. Below are the specifications of the PCIe generations and their respective characteristics. We expect to see the final v1.0 specification by the end of the year, and the first PCIe 7.0 accelerators next year.
[Images: PCIe 7.0 specification slides]
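The lane-count equivalence described above follows from per-lane transfer rates doubling each generation. Here is a minimal sketch of the arithmetic, using the simplified headline numbers (real throughput differs slightly because PCIe 3.0-5.0 use 128b/130b encoding while 6.0 and 7.0 use PAM4 with FLIT mode):

```python
# Headline per-lane transfer rates (GT/s) by PCIe generation.
# Simplified math: ~1 bit of data per transfer, so GB/s ~ GT/s / 8
# per lane per direction; encoding overhead is ignored here.
RATES_GT_S = {"3.0": 8, "4.0": 16, "5.0": 32, "6.0": 64, "7.0": 128}

def bidir_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Approximate bi-directional bandwidth in GB/s for a lane configuration."""
    return RATES_GT_S[gen] / 8 * lanes * 2

for gen in RATES_GT_S:
    print(f"PCIe {gen} x16: {bidir_bandwidth_gbs(gen, 16):.0f} GB/s bi-directional")

# A single PCIe 7.0 lane matches a full PCIe 3.0 x16 link:
assert bidir_bandwidth_gbs("7.0", 1) == bidir_bandwidth_gbs("3.0", 16) == 32.0
```

This also reproduces the article's headline figure: 128 GT/s across 16 lanes in both directions works out to 512 GB/s.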
Sources: PCI-SIG, via VideoCardz

18 Comments on PCI-SIG Ratifies PCI Express 7.0 Specification to Reach 128 GT/s

#1
BSim500
I'm holding out for PCI Express 14.0. Should be due by Easter...
#2
Daven
One lane of PCIe 7 is enough for a graphics card (same as 16 PCIe 3 lanes).
#3
londiste
BSim500: I'm holding out for PCI Express 14.0. Should be due by Easter...
What's wrong with progress? If you look at the graph provided, there has always been at least a 3-4 year gap between final spec and actual implementations appearing. The envelope has to be pushed somehow and advancing the spec - which, in the case of PCIe, has real tech behind it - is the way to do it.
#4
Testsubject01
londiste: What's wrong with progress? If you look at the graph provided, there has always been at least a 3-4 year gap between final spec and actual implementations appearing. The envelope has to be pushed somehow and advancing the spec - which, in the case of PCIe, has real tech behind it - is the way to do it.
Nothing, there is tech utilizing high bandwidth. Just not in the gaming enthusiast segment.

And instead of mocking those advances, I would rather lament the lack of progress and stagnation in gaming GPUs. Each to their own.
#5
piloponth
And once again they didn't go optical with the new standard. PCI-SIG should really take the courage to lead us into another era.
#6
Mysteoa
This is what I like, more expensive future MB.
#7
randomUser
I would not call this progress; this kind of progress requires regress in other areas.
First of all, increased power draw; second, additional components and more material.
It's like saying "sure, fine, we can make a 6L engine, it's progress over a 2L engine, right?" while ignoring that it requires more fuel and more components, which increases maintenance costs.

Real progress is often found in the GPU, CPU, and RAM departments, where the same amount of work can be done at lower power while using about the same amount of resources.
NVMe with PCIe 4+ I also call a regression, because it only progresses in a single area (sequential transfers) while introducing significant power draw and a requirement for cooling. They should have stopped with PCIe 3, where a drive can still run with just a factory sticker on it and draws a max of 5 W at load (like HDDs do).
#8
Wirko
There is work being done. The first two Google hits show that Synopsys and Cadence demonstrated their systems almost a year ago. Each technology will find its market niche, here's a rough guess: electrical PCIe for very short distances, networking standards such as optical Ethernet for long distances, and optical PCIe may be cost effective somewhere in between. Motherboard to storage backplane, or between adjacent nodes in a rack.
#9
BSim500
londiste: What's wrong with progress? If you look at the graph provided, there has always been at least a 3-4 year gap between final spec and actual implementations appearing. The envelope has to be pushed somehow and advancing the spec - which, in the case of PCIe, has real tech behind it - is the way to do it.
The fact it's often fake? E.g., "PCIe 4.0 has double the bandwidth of PCIe 3.0" - yet as soon as PCIe 4.0 GPUs hit the market, NVIDIA and AMD started cutting cards that once had 3.0 x16 down to 4.0 x8 lanes, on $100-$200 GPUs and even on $300-$450 GPUs. "To save money" doesn't pass the smell test when even the lowly GTX 1050 2 GB (non-Ti) managed x16 lanes for $99. Meanwhile, the cost of motherboards has soared in large part due to PCIe 5.0 - fine for flagship GPUs that need it, but just price inflation for completely artificial reasons on lower-end components that still don't use much more bandwidth than 3.0 x16.
#10
Panther_Seraphin
BSim500: The fact it's often fake? E.g., "PCIe 4.0 has double the bandwidth of PCIe 3.0" - yet as soon as PCIe 4.0 GPUs hit the market, NVIDIA and AMD started cutting cards that once had 3.0 x16 down to 4.0 x8 lanes, on $100-$200 GPUs and even on $300-$450 GPUs. "To save money" doesn't pass the smell test when even the lowly GTX 1050 2 GB (non-Ti) managed x16 lanes for $99. Meanwhile, the cost of motherboards has soared in large part due to PCIe 5.0 - fine for flagship GPUs that need it, but just price inflation for completely artificial reasons on lower-end components that still don't use much more bandwidth than 3.0 x16.
And as mentioned previously, gaming PCs unfortunately are not the target audience for this tech.

If you have your eyes/ears in the server space, you know that high-speed networking and storage is where a lot of the focus has been in recent times. I mean, go and look at Linus with his "million dollar server" series. That is this on a smaller scale, and with the scale that LLMs and AI want to use, the demand is only going to grow.
#11
Six_Times
londiste: What's wrong with progress? If you look at the graph provided, there has always been at least a 3-4 year gap between final spec and actual implementations appearing. The envelope has to be pushed somehow and advancing the spec - which, in the case of PCIe, has real tech behind it - is the way to do it.
exactly. :)
#12
Philaphlous
Can't wait to see full-sized CPU/GPU heatsinks trying to cool NVMe SSDs in a PCIe 7.0 configuration...
#13
evernessince
londiste: What's wrong with progress? If you look at the graph provided, there has always been at least a 3-4 year gap between final spec and actual implementations appearing. The envelope has to be pushed somehow and advancing the spec - which, in the case of PCIe, has real tech behind it - is the way to do it.
For the average consumer, there is no benefit and many downsides. Increased cost of production, an increasing number of signal integrity issues (it's a PITA to extend PCIe 5.0 over any distance, let alone 7.0), higher power consumption, more die space required to drive the PCIe lanes, etc.

These newer PCIe standards aren't designed with consumers in mind; they are squarely aimed at pushing AI along. They aren't focused on reducing costs or accessibility / ease of use. A great example of this is in SSDs. SATA is effectively dead and there's really no equivalent replacement for it in the consumer space. Desktop users either have to go with M.2, which carries a number of downsides like higher price and capacity limitations due to the form factor's size constraints (it wasn't really designed for desktops in the first place), or go enterprise U.2 (which itself is becoming increasingly harder, as anything above PCIe 3.0 is not guaranteed to run due to signaling issues).

I wouldn't call paying more and using more power progress. Real progress IMO would be doing more with the same amount or less, whether that be price, power consumption, or die space. That is ultimately how technology advances.
#14
dyonoctis
BSim500: The fact it's often fake? E.g., "PCIe 4.0 has double the bandwidth of PCIe 3.0" - yet as soon as PCIe 4.0 GPUs hit the market, NVIDIA and AMD started cutting cards that once had 3.0 x16 down to 4.0 x8 lanes, on $100-$200 GPUs and even on $300-$450 GPUs. "To save money" doesn't pass the smell test when even the lowly GTX 1050 2 GB (non-Ti) managed x16 lanes for $99. Meanwhile, the cost of motherboards has soared in large part due to PCIe 5.0 - fine for flagship GPUs that need it, but just price inflation for completely artificial reasons on lower-end components that still don't use much more bandwidth than 3.0 x16.
Those lower-end GPUs wouldn't be able to make use of that bandwidth anyway. PCIe 4.0 is when trouble started to appear, with things like risers and some boards that couldn't handle the speed. Higher speeds mean more time spent designing and optimizing the PCB traces.
GIGABYTE Announcement Regarding Z690I AORUS ULTRA Motherboard Issue | News - GIGABYTE Global
evernessince: For the average consumer, there is no benefit and many downsides. Increased cost of production, an increasing number of signal integrity issues (it's a PITA to extend PCIe 5.0 over any distance, let alone 7.0), higher power consumption, more die space required to drive the PCIe lanes, etc.

These newer PCIe standards aren't designed with consumers in mind; they are squarely aimed at pushing AI along. They aren't focused on reducing costs or accessibility / ease of use. A great example of this is in SSDs. SATA is effectively dead and there's really no equivalent replacement for it in the consumer space. Desktop users either have to go with M.2, which carries a number of downsides like higher price and capacity limitations due to the form factor's size constraints (it wasn't really designed for desktops in the first place), or go enterprise U.2 (which itself is becoming increasingly harder, as anything above PCIe 3.0 is not guaranteed to run due to signaling issues).

I wouldn't call paying more and using more power progress. Real progress IMO would be doing more with the same amount or less, whether that be price, power consumption, or die space. That is ultimately how technology advances.
Watch them bring PCIe 7.0 to the mainstream market and say that those 128 GB/s SSDs and PCIe 7.0 x16 slots are the best thing ever for gamers, when it won't make a tangible difference.

Also worth noting how SSDs essentially stalled when it comes to capacity. I would have imagined that 4 TB would have become the new sweet spot, yet it's still firmly rooted as a premium option.
#15
TechBuyingHavoc
Daven: One lane of PCIe 7 is enough for a graphics card (same as 16 PCIe 3 lanes).
The interesting question for me is if it is cheaper for the manufacturer to include 8 PCIe 4 lanes, 4 PCIe 5 lanes, 2 PCIe 6 lanes, or the single PCIe 7 lane. What is the price sweet spot here?
#16
Athena
dyonoctis: Also worth noting how SSDs essentially stalled when it comes to capacity. I would have imagined that 4 TB would have become the new sweet spot, yet it's still firmly rooted as a premium option.
Stalled? Some big data centers are already using 2 Tb QLC NAND chips now, and a bit more than a year ago they came out with 1 Tb QLC NAND chips, so that is pretty fast in the tech world.

Price is another story. Once they get economies of scale going, it will become cheaper and cheaper, so sometime around the end of '25 or Q1 '26 would be a good guess for when 4 TB SSDs will start to trickle out at more reasonable prices for the average consumer.
#17
Panther_Seraphin
TechBuyingHavoc: The interesting question for me is if it is cheaper for the manufacturer to include 8 PCIe 4 lanes, 4 PCIe 5 lanes, 2 PCIe 6 lanes, or the single PCIe 7 lane. What is the price sweet spot here?
For graphics cards it's probably between x4 and x8 in terms of die-space efficiency and board complexity/tolerances.

The problem is that for motherboards to accommodate such high speeds, the wiring from the CPU to the PCIe slot gets tighter and tighter every generation, which is what drove up much of the cost of the latest generation of boards: more copper layers are needed to meet these ever-tighter requirements.
#18
evernessince
TechBuyingHavoc: The interesting question for me is if it is cheaper for the manufacturer to include 8 PCIe 4 lanes, 4 PCIe 5 lanes, 2 PCIe 6 lanes, or the single PCIe 7 lane. What is the price sweet spot here?
It's hard to say. It could save expensive die space by reducing the size of the IO block on the chip. Costs are mostly going to fall on motherboard manufacturers in the form of a more expensive PCB to maintain signal integrity and redrivers.

If everyone starts putting IO on an older chiplet though, the potential cost savings shrink while the burden on motherboard manufacturers continues to increase each passing PCIe gen.

It's not like the cost savings for GPU and SSD manufacturers who have implemented cut-down lane counts have been passed on to customers either. On paper they might be saving money, but from a customer standpoint GPUs are hella expensive and motherboards have greatly increased in price as well. It's a lose-lose as of 2025 for customers.