Monday, September 12th 2022

NVIDIA's Third Largest Ada GPU, the AD106, Features PCIe x8 Interface

It looks like NVIDIA is finally taking AMD's route in the mid-range by giving the third-largest silicon in its next-generation GeForce "Ada" RTX 40-series a narrower PCI-Express host interface. The AD106 will be NVIDIA's third-largest client GPU based on the "Ada" architecture, succeeding the GA106 that powers the likes of the GeForce RTX 3060. The chip reportedly features a narrower PCI-Express x8 host interface. At this point we don't know whether the AD106 comes with PCI-Express Gen 5 or Gen 4. Either way, a lane count of just 8 could impact the GPU's performance on systems limited to PCI-Express Gen 3, such as those based on 10th Gen Intel "Comet Lake" processors, or even AMD's Ryzen 7 5700G APU, since the link would then operate at Gen 3 x8, with half the bandwidth of a Gen 3 x16 slot.

Interestingly, the same leak also claims that the AD107, the fourth-largest silicon, which powers lower mid-range SKUs and succeeds the GA107, features the same x8 PCIe lane count. This is unlike AMD, which cut the comparable "Navi 24" silicon down even further, to a PCI-Express 4.0 x4 interface. Lowering the PCIe lane count simplifies PCB design, since there are fewer PCIe lanes to be wired out with matched trace lengths to avoid signal skew. It also reduces the pin count of the GPU package. NVIDIA's calculation here is that there are now at least two generations of Intel and AMD platforms with PCIe Gen 4 or later (Intel "Rocket Lake" and "Alder Lake"; AMD "Zen 2" and "Zen 3"), so it makes sense to lower the PCIe lane count.
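Per-lane PCIe bandwidth roughly doubles with each generation, which is the arithmetic behind that calculation. A back-of-the-envelope sketch, assuming the standard per-lane rates after 128b/130b encoding overhead:

# Rough per-direction PCIe link bandwidth in GB/s.
# Per-lane rates after 128b/130b encoding overhead:
# Gen 3: 8 GT/s -> ~0.985 GB/s, Gen 4: 16 GT/s -> ~1.969 GB/s,
# Gen 5: 32 GT/s -> ~3.938 GB/s
PER_LANE_GBS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    return PER_LANE_GBS[gen] * lanes

for gen, lanes in [(3, 16), (4, 8), (5, 8), (3, 8)]:
    print(f"PCIe Gen {gen} x{lanes}: {link_bandwidth(gen, lanes):5.1f} GB/s")

# Gen 4 x8 (~15.8 GB/s) matches Gen 3 x16, so nothing is lost on a
# Gen 4 or newer platform; but on a Gen 3 board the card falls back to
# Gen 3 x8 (~7.9 GB/s), half the bandwidth the GA106 gets from x16.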
Source: kopite7kimi (Twitter)

40 Comments on NVIDIA's Third Largest Ada GPU, the AD106, Features PCIe x8 Interface

#1
hat
Enthusiast
A mid-range card with an 8-pin and a 6-pin power connector... excellent.
#2
taka
Expecting 3070 perf for the 4060 and a 220-250 W TDP. What I'm not sure about just yet is the price, but it will be $400+ for sure. Let's see what AMD brings to the table this time in terms of price/performance.
And whether Ngreedia goes with the new power connector for all cards or not.
#3
tfdsaf
I can understand it on lower-end products that cost $180 or less, but having a narrower PCIe bus on a $400 mid-range card is absurd!
#4
Leshy
tfdsafI can understand it on lower-end products that cost $180 or less, but having a narrower PCIe bus on a $400 mid-range card is absurd!
Try to read and you'll understand... "Lowering the PCIe lane count simplifies PCB design, since there are fewer PCIe lanes to be wired out with matched trace lengths to avoid signal skew. It also reduces the pin count of the GPU package."

So it's all about cutting costs.
#5
PapaTaipei
LeshyTry to read and you'll understand... "Lowering the PCIe lane count simplifies PCB design, since there are fewer PCIe lanes to be wired out with matched trace lengths to avoid signal skew. It also reduces the pin count of the GPU package."

So it's all about cutting costs.
Woah! They're gonna save us 0.5 dollars per PCB!
#6
AusWolf
PapaTaipeiWoah! They're gonna save us 0.5 dollars per PCB!
Nah... it'll save nvidia some money, but we'll pay the same regardless. Whether it matters or not, we'll see when the products come out.
#7
Tsukiyomi91
Guess the 30 series will be the better choice, since it uses the full x16 PCIe Gen 4 interface, compared to the 40 series' supposed x8 Gen 4/5 lanes.
#8
konga
These are the fourth and fifth largest GPUs. AD102 (4090), AD103 (4080), and AD104 (4070) are first, second, and third.
#9
Leshy
PapaTaipeiWoah! They're gonna save us 0.5 dollars per PCB!
You did the math quick... some kind of electrical engineer you are... lol
Tsukiyomi91Guess the 30 series will be the better choice, since it uses the full x16 PCIe Gen 4 interface, compared to the 40 series' supposed x8 Gen 4/5 lanes.
What's the point of having x16 Gen 4 if you don't lose performance with x8?
#10
konga
Tsukiyomi91Guess the 30 series will be the better choice, since it uses the full x16 PCIe Gen 4 interface, compared to the 40 series' supposed x8 Gen 4/5 lanes.
Read the reviews and get the card that makes the most sense for you from a price-to-performance standpoint. Basing a purchasing decision on this alone would be very dumb.
#11
ExcuseMeWtf
PapaTaipeiWoah! They're gonna save us 0.5 dollars per PCB!
They're not saving YOU anything.
They're saving it for THEMSELVES and slapping you with the same if not a higher price, depending on what they think they can get away with.
#12
ixi
If performance ends up above the 3070 Ti, then the 4060 is a good GPU. If not, then meh.
#13
TheDeeGee
hatA mid-range card with an 8-pin and a 6-pin power connector... excellent.
You know that's not a 4000-series card in the picture, right?
#14
Bomby569
If it doesn't hinder the card, it's fine; if it does, it's absurd.
#15
Tigerfox
I don't expect any performance impact on Gen 5 boards, but as the article mentions, these "mid-range" cards are often bought as an upgrade for older systems and could end up even in Gen 3 boards, where the performance impact could be very noticeable.

Together with a steep rise in board power, and supposedly in price too, a smaller GPU with a narrower memory interface (256-bit instead of 320/384-bit on the x080, 192/160-bit instead of 256-bit on the x070/x060 Ti), and still only a 50% average increase in VRAM capacity instead of 100%, this again gives the impression that NV is giving us less for our money than in past generations.
#16
Wirko
btarunrLowering the PCIe lane count simplifies PCB design, since there are fewer PCIe lanes to be wired out with matched trace lengths to avoid signal skew.
PCIe is not very strict about trace lengths; the tolerances are quite broad to account for different lengths. So is that really an issue?
I don't see how Nvidia or AMD could cut costs by more than a couple of dollars this way.
#17
Vayra86
Oh man, I'm really going to wait and see on this one, it seems, once again.

So far there isn't a single product in the Nvidia Ada stack I'm really excited for. It all looks... handicapped.
#18
Wirko
By the way, is PCIe x12 dead forever? It's part of the standard, and it would be useful if bifurcation into x12 + x4 were possible, so one more M.2 port could be added.
#19
AusWolf
Vayra86Oh man, I'm really going to wait and see on this one, it seems, once again.

So far there isn't a single product in the Nvidia Ada stack I'm really excited for. It all looks... handicapped.
... and overpriced, not to mention hungry as hell.
#20
Haile Selassie
takaExpecting 3070 perf for the 4060 and a 220-250 W TDP. What I'm not sure about just yet is the price, but it will be $400+ for sure. Let's see what AMD brings to the table this time in terms of price/performance.
And whether Ngreedia goes with the new power connector for all cards or not.
The 3070 is already a ~215 W TDP card. If this is true, then you're getting the same card years later, hence a worse product.
#21
taka
Haile SelassieThe 3070 is already a ~215 W TDP card. If this is true, then you're getting the same card years later, hence a worse product.
If the price is right, which I doubt, it could be an interesting upgrade for me, at least coming from a 2060.
I will only upgrade my 2060 for: >= 3070 perf, 200 W TDP max, and $300-350.
But Nvidia's tactics over the last two years kinda make me turn to the red team again. Not that the red team is a saint...
I'm waiting for reviews from both vendors, and will make a decision based on my expectations.

P.S. The 2060 is a 170 W card, but it runs just fine at 125 W with minimal performance loss.
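For anyone who wants to try the same cap, a minimal sketch using the NVML Python bindings (assuming the nvidia-ml-py package; needs admin/root rights, and the supported limit range varies per card):

# Minimal sketch: cap the board power limit via NVML.
# Assumes the nvidia-ml-py (pynvml) bindings; limits are in milliwatts.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerManagementLimitConstraints,
    nvmlDeviceSetPowerManagementLimit,
)

nvmlInit()
gpu = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
min_mw, max_mw = nvmlDeviceGetPowerManagementLimitConstraints(gpu)
target_mw = 125_000                  # 125 W
if min_mw <= target_mw <= max_mw:
    nvmlDeviceSetPowerManagementLimit(gpu, target_mw)  # needs admin/root
nvmlShutdown()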
#22
Bomby569
TigerfoxI don't expect any performance impact on Gen 5 boards, but as the article mentions, these "mid-range" cards are often bought as an upgrade for older systems and could end up even in Gen 3 boards, where the performance impact could be very noticeable.
That depends on whether the card is Gen 5 or Gen 4; you can't draw those conclusions just yet.
#23
Dirt Chip
They should also make an x4 one, at a lower price, for those with PCIe 5.
That way, whoever has an "older" PCIe 4 board will pay more in order to get the full throughput.

I have more bad suggestions for NV, but everything in due time.
#24
docnorth
For mid-range GPUs, PCIe 4.0 x16 or maybe even PCIe 3.0 x16 should be far more useful than PCIe 5.0 x8. I don't know how many of these cards will be combined with top and/or last-gen CPUs and motherboards. Most systems waiting (and waiting and waiting... :banghead:) for a GPU upgrade are still on PCIe 3.0.
#25
ModEl4
Regarding raster, a 384-bit bus with 24 Gbps GDDR6X supports AD102 to the same degree that a 128-bit bus with 18 Gbps GDDR6 supports AD106.
If the 4060 is full AD106 with 18 Gbps GDDR6, it will be 200 W in the worst case (like the 3060 Ti), and FHD performance should be at 3070 Ti level at least!
We should wait and see regarding x8 PCI-Express scaling, but performance will essentially be at OC RTX 2080 Ti level, a card on PCI-Express 3.0 x16, which offers bandwidth similar to 4.0 x8 (not that we can conclude anything concrete from that, though).
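The ratios behind both claims check out on paper; a quick sketch (the 144 vs. 36 SM counts for AD102/AD106 come from the same round of leaks, so treat them as rumors):

# Memory bandwidth = bus width (bits) x data rate (Gbps per pin) / 8.
def mem_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

ad102 = mem_bandwidth_gbs(384, 24)  # 1152 GB/s
ad106 = mem_bandwidth_gbs(128, 18)  # 288 GB/s
print(ad102 / ad106)                # exactly 4.0, matching the rumored
                                    # 144 vs. 36 SM ratio of the two chips

# PCIe per-direction bandwidth, after 128b/130b encoding overhead:
print(0.985 * 16)  # 3.0 x16: ~15.8 GB/s
print(1.969 * 8)   # 4.0 x8:  ~15.8 GB/s -- effectively identical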