Wednesday, September 21st 2022

Jensen Confirms: NVLink Support in Ada Lovelace is Gone

NVIDIA CEO Jensen Huang, in a call with the press today, confirmed that Ada loses the NVLink connector. This marks the end of any possibility of explicit multi-GPU over a dedicated physical interface, and the complete demise of SLI. Jensen stated that the NVLink connector was removed because the I/O was needed for "something else," and the company decided against spending the resources to wire out an NVLink interface. NVIDIA's engineers also wanted to make the most of the silicon area at their disposal to "cram in as much AI processing as we could". Jen-Hsun continued: "and also, because Ada is based on Gen 5, PCIe Gen 5, we now have the ability to do peer-to-peer cross-Gen 5 that's sufficiently fast that it was a better tradeoff". We reached out to NVIDIA to confirm whether Ada really supports PCIe Gen 5, and their answer is:
NVIDIA: Ada does not support PCIe Gen 5, but the Gen 5 power connector is included.

PCIe Gen 4 provides plenty of bandwidth for graphics usages today, so we felt it wasn't necessary to implement Gen 5 for this generation of graphics cards. The large framebuffers and large L2 caches of Ada GPUs also reduce utilization of the PCIe interface.
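To put rough numbers on that claim, the theoretical bandwidth of a x16 slot can be worked out from the per-lane transfer rates in the PCIe specifications. The short Python sketch below is our own back-of-the-envelope illustration, not anything NVIDIA provided, and it only covers theoretical one-directional link rates, not real-world utilization:

```python
# Theoretical one-directional bandwidth of a PCIe x16 link per generation.
# Per-lane rates (in GT/s) are from the PCIe 3.0/4.0/5.0 specifications;
# all three generations use 128b/130b line coding. Real-world throughput
# is lower and workload-dependent.
LANE_RATE_GTPS = {3: 8.0, 4: 16.0, 5: 32.0}
ENCODING = 128 / 130  # 128b/130b coding efficiency

def x16_bandwidth_gb_s(gen: int, lanes: int = 16) -> float:
    """Approximate theoretical link bandwidth in GB/s, one direction."""
    return LANE_RATE_GTPS[gen] * ENCODING * lanes / 8  # Gb/s -> GB/s

for gen in (3, 4, 5):
    print(f"PCIe Gen {gen} x16: ~{x16_bandwidth_gb_s(gen):.1f} GB/s per direction")
# Prints roughly 15.8, 31.5 and 63.0 GB/s for Gen 3, 4 and 5 respectively.
```

Put differently, Gen 4 already offers roughly 31.5 GB/s per direction to a x16 card, which is the headroom NVIDIA is pointing to when it calls Gen 4 "plenty of bandwidth for graphics usages today".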

20 Comments on Jensen Confirms: NVLink Support in Ada Lovelace is Gone

#1
Daven
I wonder why the bus connector is still PCIe Gen 4. The Gen 5 power connector has nothing technical to do with the bus connection and is called Gen 5 purely for marketing reasons, but why not just go all in with Gen 5 for both the PCIe bus and the power supply connector?
#2
oxrufiioxo
Daven: I wonder why the bus connector is still PCIe Gen 4. The Gen 5 power connector has nothing technical to do with the bus connection and is called Gen 5 purely for marketing reasons, but why not just go all in with Gen 5 for both the PCIe bus and the power supply connector?
Probably strictly cost, plus them likely seeing no benefit of Gen 5 over Gen 4 for GPUs currently.
#3
BorisDG
Even Gen 3 will be enough.
#4
Legacy-ZA
BorisDG: Even Gen 3 will be enough.
You already lose some performance on Gen 3 with the RTX 3000 series; not much, but it's still there.
#5
Tek-Check
Could tech journalists and other enthusiasts officially ask Nvidia why they did not include a DisplayPort 2.0 interface at the end of 2022?
We know that Intel and AMD will support this new technology on their GPUs; I am curious how Nvidia responds.
Was it too expensive to install those ports on $1600 cards?
Will they keep consumers waiting for DP 2.0 ports until 2024, when the 5000 series is released?
#6
Legacy-ZA
Tek-Check: Could tech journalists and other enthusiasts officially ask Nvidia why they did not include a DisplayPort 2.0 interface at the end of 2022?
We know that Intel and AMD will support this new technology on their GPUs; I am curious how Nvidia responds.
Was it too expensive to install those ports on $1600 cards?
Will they keep consumers waiting for DP 2.0 ports until 2024, when the 5000 series is released?
Yes, I agree on this; I was expecting to see DP 2.0. nGreedia, maximizing their profits, lol
#7
zlobby
Phook the Leather Jacket, man!
#8
TheoneandonlyMrK
His explanation reeks of BS; the I/O was needed for something else, on a monolithic chip?

All his other reasons are sound, but that one isn't.
And he obviously couldn't mention market segregation.
#9
BorisDG
Legacy-ZA: You already lose some performance on Gen 3 with the RTX 3000 series; not much, but it's still there.
Too minimal to even be worth discussing.
#10
Sabotaged_Enigma
"Ada does not support PCIe Gen 5, but the Gen 5 power connector is included. PCIe Gen 4 provides plenty of bandwidth for graphics usages today, so we felt it wasn't necessary to implement Gen 5 for this generation of graphics cards."

Yeah, of course the Gen 5 power connector is included, because so much electricity is needed to power the card. Totally insane. What's the 4090 Ti gonna be? Two Gen 5 connectors, maybe?
Tek-Check: Could tech journalists and other enthusiasts officially ask Nvidia why they did not include a DisplayPort 2.0 interface at the end of 2022?
I could list a bunch of questions.
1. Why is RTX 40 so damn expensive? Do they think consumers are just stupid?
2. What's the actual performance uplift if DLSS is turned off, esp. rasterisation performance? I personally don't really care about ray-tracing BS.
3. How much RTX 30 stock is there, really? And what about the claims of shortages of resources, supplies and everything? (Although we already know the answer damn well)
4. How many cards were sold to miners? (They of course dare not make this public)
...
#11
TheinsanegamerN
Legacy-ZA: You already lose some performance on Gen 3 with the RTX 3000 series; not much, but it's still there.
But... that comes down to signaling, not bandwidth. You also lose 1% perf on RTX 2000, GTX 1000, and GTX 900 series.

The only time PCIe 4.0 is noticeable is when your GPU is gimped on memory; see the RX 6400 and the 4 GB RX 550.
Яid!culousOwO: "Ada does not support PCIe Gen 5, but the Gen 5 power connector is included. PCIe Gen 4 provides plenty of bandwidth for graphics usages today, so we felt it wasn't necessary to implement Gen 5 for this generation of graphics cards."

Yeah, of course the Gen 5 power connector is included, because so much electricity is needed to power the card. Totally insane. What's the 4090 Ti gonna be? Two Gen 5 connectors, maybe?

I could list a bunch of questions.
1. Why is RTX 40 so damn expensive? Do they think consumers are just stupid?
2. What's the actual performance uplift if DLSS is turned off, esp. rasterisation performance? I personally don't really care about ray-tracing BS.
3. How much RTX 30 stock is there, really? And what about the claims of shortages of resources, supplies and everything? (Although we already know the answer damn well)
4. How many cards were sold to miners? (They of course dare not make this public)
...
Well number 1 is really easy to answer. Gamers will CONSOOM this product at its elevated price and make bank for nvidia. Happens every gen.
#12
N3utro
Google translation
From marketing mumbo jumbo: they needed the I/O for "something else" and to "cram in as much AI processing as we could"
To real English: NVLink was so bad we preferred removing it
#13
trsttte
Daven: I wonder why the bus connector is still PCIe Gen 4. The Gen 5 power connector has nothing technical to do with the bus connection and is called Gen 5 purely for marketing reasons, but why not just go all in with Gen 5 for both the PCIe bus and the power supply connector?
PCIe gen 5 would almost certainly cost them extra precious die area that they can instead dedicate to more compute.
N3utro: Google translation
From marketing mumbo jumbo: they needed the I/O for "something else" and to "cram in as much AI processing as we could"
To real English: NVLink was so bad we preferred removing it
I mean, I don't think NVLink was bad; the thing is that the technology didn't really work well enough for gaming (especially when the performance of a single card is already so high that there's really no need for a second one, lol) and was simply not necessary for compute applications.
#14
bonehead123
Well, at least jacket man finally got 1 thing right....

Unfortunately, that still leaves 1599 other things that he f*cked up on :roll:
#15
R-T-B
TheinsanegamerN: You also lose 1% perf on RTX 2000, GTX 1000, and GTX 900 series.
No, you don't. We have a whole article series here documenting this.
#16
steen
Legacy-ZA: You already lose some performance on Gen 3 with the RTX 3000 series; not much, but it's still there.
As GPU performance scales, it puts more load on other subsystems.
BorisDG: Too minimal to even be worth discussing.
Let's see how accelerated DirectStorage API affects this.
trsttte: PCIe gen 5 would almost certainly cost them extra precious die area that they can instead dedicate to more compute.
Additional power & signal integrity issues, too.
#17
Jimmy_
Leather jacket UNCLE :)
#18
stimpy88
Almost like these chips were done a year ago and were simply waiting on TSMC to be able to mass-produce them... Very odd to have I/O specs the same as a previous-gen release that's over three years old...

It will be embarrassing for nGreedia to see AMD launch their new cards with all the latest I/O! But it's refreshing to hear the fanboys saying that none of this is necessary, and that it's a waste of die area, blah blah... Maybe these features are not badly missed right now, but these cards will be on the market for at least three more years, and maybe people will miss them long before then.
#19
Lycanwolfen
So basically Nvidia wants you to use DLSS now to get back the performance lost with SLI, which, btw, is not the same performance. I'm still running 1070 Tis in SLI and can get over 120 FPS at 4K in games today. I still see no reason to upgrade to some AI-tweaking, jerry-rigged DLSS platform to make FPS faster.

Maybe if we all boycott Nvidia and don't buy a single card, we can tell them that too big is too much. Honestly, they are making the same mistake twice over. Like when the 8800 GTX and 9700 vacuum cleaners came out, people lost interest in Nvidia, so they went back to smaller cards with less power draw and more GPU power. The 10 series was a perfect example of good engineering: simple blower coolers, SLI on all the 1080s and 1070s, low power but high output. The 20 series saw some small increases in performance, but SLI did not come until the Supers came out. Then when the 30 series came out, they went in the same crappy direction as the 8800s and 9700s again: massive single cards, no SLI on any of the smaller models, and more and more power. The 40 series is now this all over again. A 4-slot cooler, wow! So I have no room for my PCIe audio card or anything else, like my RAID SSD M.2 setup. The video card can draw 450 watts; that's like a whole computer inside another computer. I mean, for 1080p and 1440p a pair of 660 Tis in SLI performed quite well. Once you hit 2160p, the whole world changes. My 660 Tis could not keep up, but my 1070 Tis handle 4K with no sweat. Imagine when 8K comes out; a single 40-series card is not going to be able to handle that. DLSS will just try to fake it, but it's still not true hardware performance.

DLSS is nothing more than a software enhancer to make it seem faster.
#20
vmarv
The explanation for their decision to remove NVLink from the 40 series and from the Quadro RTX cards is kinda disappointing.
One of the major benefits of NVLink was the ability to share the memory of the two cards in scenarios where this was needed, and this doesn't seem possible anymore.
With this move, anyone who needs 48 GB of VRAM is forced to buy a single RTX 6000 for, I guess, $6,000+. Ridiculous.
To me, they decided to remove NVLink to force professional customers to buy their more expensive professional cards.
But I bet that this time more than one professional studio or research lab will stick with the 30 series and NVLink.
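For anyone curious what remains without the bridge, peer-to-peer access between two cards can still be exposed over PCIe; it just no longer comes with NVLink's bandwidth or the memory sharing vmarv describes. The minimal sketch below, which assumes a working PyTorch + CUDA install on a dual-GPU system, only queries whether peer access is reported at all and says nothing about the bandwidth you would actually get:

```python
import torch

# Minimal sketch: report whether each pair of CUDA devices can access the
# other's memory directly (peer-to-peer). Over NVLink this path also enabled
# the memory sharing discussed above; without the bridge, any peer traffic
# has to travel over PCIe instead. This does not measure bandwidth.
n = torch.cuda.device_count()
if n < 2:
    print("Fewer than two CUDA devices detected; nothing to check.")
for a in range(n):
    for b in range(n):
        if a != b:
            ok = torch.cuda.can_device_access_peer(a, b)
            print(f"GPU {a} -> GPU {b}: peer access {'available' if ok else 'unavailable'}")
```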