Monday, September 12th 2022

NVIDIA's Third Largest Ada GPU, the AD106, Features PCIe x8 Interface

It looks like NVIDIA is finally taking AMD's route in the mid-range by giving the third-largest silicon in its next-generation GeForce "Ada" RTX 40-series a narrower PCI-Express host interface. The AD106 will be NVIDIA's third-largest client GPU based on the "Ada" architecture, succeeding the GA106 that powers the likes of the GeForce RTX 3060. The chip reportedly features a PCI-Express x8 host interface. At this point we don't know whether the AD106 supports PCI-Express Gen 5 or Gen 4. Regardless, an x8 link could impact the GPU's performance on systems limited to PCI-Express Gen 3, such as 10th Gen Intel "Comet Lake" or AMD's Ryzen 7 5700G APU.

Interestingly, the same leak also claims that the AD107, the fourth-largest silicon powering lower mid-range SKUs and the successor to the GA107, features the same x8 PCIe lane count. This is unlike AMD, which gives the "Navi 24" silicon a PCI-Express 4.0 x4 interface. Lowering the PCIe lane count simplifies PCB design, since fewer lanes have to be wired out with length-matched traces to keep them in sync, and it also reduces the pin count of the GPU package. NVIDIA's calculation here is that there are now at least two generations of Intel and AMD platforms with PCIe Gen 4 or later (Intel "Rocket Lake" and "Alder Lake"; AMD "Zen 2" and "Zen 3"), so it makes sense to lower the lane count.
Source: kopite7kimi (Twitter)
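
As a rough back-of-the-envelope illustration of the bandwidth at stake (a minimal sketch using the nominal per-lane rates and encoding overheads from the PCIe specifications, not measurements of any particular card), the following Python snippet compares the link configurations being discussed:

# Approximate one-way PCIe link bandwidth from line rate and encoding overhead.
# Gen 1/2 use 8b/10b encoding; Gen 3/4/5 use 128b/130b.
GT_PER_SEC = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130, 5: 128 / 130}

def link_bandwidth_gbs(gen, lanes):
    """Nominal one-way bandwidth of a PCIe link in GB/s."""
    return GT_PER_SEC[gen] * ENCODING[gen] / 8 * lanes

for gen, lanes in [(3, 16), (3, 8), (4, 8), (5, 8)]:
    print(f"PCIe Gen {gen} x{lanes}: ~{link_bandwidth_gbs(gen, lanes):.1f} GB/s")

The upshot: Gen 4 x8 roughly matches Gen 3 x16 at about 15.8 GB/s each way, while the same x8 card dropped into a Gen 3 slot gets roughly half that, about 7.9 GB/s, which is where the concern about older platforms comes from.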

40 Comments on NVIDIA's Third Largest Ada GPU, the AD106, Features PCIe x8 Interface

#26
Solidstate89
This will have no effect on performance. Current graphics cards still can't even max out PCI-e 3.0 x16 bandwidth, much less 4.0 or 5.0.
#27
Bomby569
Solidstate89: This will have no effect on performance. Current graphics cards still can't even max out PCI-e 3.0 x16 bandwidth, much less 4.0 or 5.0.
this one is x8 not x16
#28
ppn
AD102
AD103
AD104
AD106
AD107

Third largest would be AD104. AD106 at ~203 mm² is one third of the 4090, but sadly has only about a quarter of the CUDA cores: 4,608, or maybe even fewer, 3,840. Still, considering how late into the cycle the 3050 was released, why are they even talking about AD106 so soon? The 4050 is at least a 3060/3060 Ti if true.
#29
Valantar
Wirko: By the way, is PCIe x12 dead forever? It's part of the standard and it would be useful if bifurcation to 12 + 4 were possible, so one more M.2 port could be added.
Is x12 part of the standard? I've looked specifically for any mention of that across various sources previously, and found none (including in PCI-SIG documentation). I did discover the existence of PCIe x24 while looking, but from what I could find, the supported lane widths were 1, 4, 8, 16 and 24.
#30
catulitechup
Dirt Chip: They should also make an x4 one, at a lower price, for those with PCIe 5. So whoever has 'older' PCIe 4 will pay more in order to get the full passthrough.

I have more bad suggestions for NV, but everything in due time.
I'm still waiting for an RX 7400/7500 with a PCIe Gen 5 x2 interface; with any luck they'll decide this time to offer users PCIe Gen 5 x1 (that would be an outstanding product :roll:)

:)
#31
defaultluser
The 3050 doesn't care if you're running it at x8 @ 3.0.

Can't imagine why the falloff for the 4050 would be any higher, while the 4060 may be around 5% slower? At least they aren't castrating the AD107 GPU like AMD continues to do with the RX 7500!
#32
Sisyphus
Narrower memory bus, and now x8 PCIe. It seems they had problems with higher clock speeds on consumer-grade PCBs.
#33
Wirko
Valantar: Is x12 part of the standard? I've looked specifically for any mention of that across various sources previously, and found none (including in PCI-SIG documentation). I did discover the existence of PCIe x24 while looking, but from what I could find, the supported lane widths were 1, 4, 8, 16 and 24.
I saw it first on Wikipedia: "Up to and including PCIe 5.0, x12 and x32 links were defined as well but never used." Then in various other sources, old rather than new, with passing mention:
www.oreilly.com/library/view/pci-express-system/0321156307/0321156307_ch02lev1sec8.html
www.manualslib.com/manual/1183617/Idt-89hpes64h16g2.html?page=225&term=x12&selected=9#manual
arstechnica.com/features/2004/07/pcie/5/
knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019OddSAE

I can't access the original PCI-SIG documents, but here is another source for the PCIe 2.0 base specification, and it mentions x12:
www.cl.cam.ac.uk/~djm202/pdf/specifications/pcie/PCI_Express_Base_Rev_2.0_20Dec06a.pdf

What version of the documentation were you looking at? Perhaps the 6.0 spec leaves out x12 and introduces x24.

Edit: Surprise, you can buy PCIe 5.0 x24 connectors from Amphenol; they have 230 pins and would hang over the edge of a mini-ITX motherboard:
cdn.amphenol-cs.com/media/wysiwyg/files/documentation/datasheet/ssio/ssio_cooledge_1_00mm.pdf
Does anyone know where these are used?
#34
trsttte
Valantar: Is x12 part of the standard? I've looked specifically for any mention of that across various sources previously, and found none (including in PCI-SIG documentation). I did discover the existence of PCIe x24 while looking, but from what I could find, the supported lane widths were 1, 4, 8, 16 and 24.
Aren't lanes just lanes? There's no mechanical x12 slot but I don't see why they can't just handshake with 12 lanes just like they would if on a lower/higher lane count slot.

Boards will waste lanes, but nvidia still gets the space savings on die from a smaller pcie phy.
#35
Solidstate89
Bomby569: this one is x8 not x16
I think you misunderstood my point. Current GPUs can't even max out the bandwidth of 16 lanes of PCI-e 3.0 so 8 lanes of 4.0 (which is the same as 16 lanes of 3.0) or 5.0 will not even be close to a problem or cause any kind of bandwidth bottlenecking. This is a non-issue.
#36
Bomby569
Solidstate89: I think you misunderstood my point. Current GPUs can't even max out the bandwidth of 16 lanes of PCI-e 3.0 so 8 lanes of 4.0 (which is the same as 16 lanes of 3.0) or 5.0 will not even be close to a problem or cause any kind of bandwidth bottlenecking. This is a non-issue.
you're assuming everyone has a 4.0 mobo. That's the issue here.
#37
Wirko
trsttte: nvidia still gets the space savings on die from a smaller pcie phy.
This! The PCIe PHY is a big chunk of exceedingly complex analogue electronics, even more so if it's 5.0. Pretty sure it takes up a significant part of the die, and may even be a cause of lower yields.
#38
Valantar
Wirko: I saw it first on Wikipedia, then in various other sources, old rather than new, with passing mention:
www.oreilly.com/library/view/pci-express-system/0321156307/0321156307_ch02lev1sec8.html
www.manualslib.com/manual/1183617/Idt-89hpes64h16g2.html?page=225&term=x12&selected=9#manual
arstechnica.com/features/2004/07/pcie/5/
knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019OddSAE

I can't access the original PCI-SIG documents, but here is another source for the PCIe 2.0 base specification, and it mentions x12:
www.cl.cam.ac.uk/~djm202/pdf/specifications/pcie/PCI_Express_Base_Rev_2.0_20Dec06a.pdf

What version of the documentation were you looking at? Perhaps the 6.0 spec leaves out x12 and introduces x24.

Edit: Surprise, you can buy PCIe 5.0 x24 connectors from Amphenol; they have 230 pins and would hang over the edge of a mini-ITX motherboard:
cdn.amphenol-cs.com/media/wysiwyg/files/documentation/datasheet/ssio/ssio_cooledge_1_00mm.pdf
Does anyone know where these are used?
x24 has some adoption in servers of various kinds.
trsttte: Aren't lanes just lanes? There's no mechanical x12 slot but I don't see why they can't just handshake with 12 lanes just like they would if on a lower/higher lane count slot.

Boards will waste lanes, but nvidia still gets the space savings on die from a smaller pcie phy.
No. Lanes come from controllers, which group lanes in various ways. Very few PCIe controllers consist of a collection of individually addressable lanes, as that's rather inefficient in terms of die space when you'll be running them grouped. Bifurcation support depends on how these lanes are grouped in hardware, and how the controller(s) are able to sync and split these configurations. Starting with current consumer CPU PEG lanes as an example, that's "a x16 controller" that's internally made up of two x8 hardware blocks in order to facilitate CF/SLI bifurcation. On modern platforms these x8 blocks can again be split into x4+x4. My guess is that there's some problem with half the lanes from one controller being paired with the lanes from the other one, rather than them running fully synced or not at all. Though it might just be a case of "this is such a niche use case, we can't spend our budget on R&D and QC for this".
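
To make the grouping argument above concrete, here is a toy Python sketch; it is purely a restatement of the description in the previous comment (two x8 hardware blocks that can fuse into one x16 link, run as independent x8 links, or each split into x4 + x4), not anything taken from vendor documentation:

# Toy enumeration of bifurcation options under the assumption above:
# two x8 hardware blocks that either fuse into a single x16 link,
# run as independent x8 links, or each split into x4 + x4.
def peg_configurations():
    block_modes = [(8,), (4, 4)]      # what one x8 block can do on its own
    configs = {(16,)}                 # both blocks fused into one x16 link
    for left in block_modes:
        for right in block_modes:
            configs.add(tuple(sorted(left + right, reverse=True)))
    return sorted(configs, reverse=True)

for cfg in peg_configurations():
    print(" + ".join(f"x{width}" for width in cfg))
# -> x16, x8 + x8, x8 + x4 + x4, x4 + x4 + x4 + x4 -- no x12 anywhere,
#    since 12 lanes would need one whole block plus half of the other.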
#39
Wirko
Valantar: On modern platforms these x8 blocks can again be split into x4+x4. My guess is that there's some problem with half the lanes from one controller being paired with the lanes from the other one, rather than them running fully synced or not at all. Though it might just be a case of "this is such a niche use case, we can't spend our budget on R&D and QC for this".
One more thing. PCIe x12 would send 3 bytes in 2 transfers, and everything is more difficult (if only a little bit) in the binary world if you have to split anything into units whose size isn't a power of 2.
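
For anyone who wants that arithmetic spelled out, a trivial sketch (assuming nothing beyond one bit per lane per transfer):

from math import lcm  # Python 3.9+

# With one bit per lane per transfer, a link returns to a whole-byte boundary
# every lcm(lanes, 8) bits, i.e. every lcm(lanes, 8) / lanes transfers.
for lanes in (1, 4, 8, 12, 16, 32):
    bits = lcm(lanes, 8)
    print(f"x{lanes:<2}: {bits // 8} byte(s) every {bits // lanes} transfer(s)")
# x12 comes out as "3 bytes every 2 transfers" -- the only width here whose
# repeating unit isn't a power-of-2 number of bytes.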
#40
paul7
"Regardless, having a PCIe lane count of 8 could possibly impact performance of the GPU on systems with PCI-Express Gen 3"
That's absolute nonsense. There isn't a GPU on the market, nor will there be anytime in the near future, that could possibly be bottlenecked by a PCIe Gen3 x8 slot. There are countless people right now that are using the highest of high end GPU's in a Gen3 X8 slot with zero performance loss, so there's no chance a next gen mid-range card could be bottlenecked by doing the same.