Wednesday, September 21st 2022
Jensen Confirms: NVLink Support in Ada Lovelace is Gone
NVIDIA CEO Jensen Huang, in a call with the press today, confirmed that Ada loses the NVLink connector. This marks the end of any possibility of explicit multi-GPU, and the complete demise of SLI over a separate physical interface. Jensen stated that the NVLink connector was removed because they needed the I/O for "something else," and they decided against spending the resources to wire out an NVLink interface. NVIDIA's engineers also wanted to make the most of the silicon area at their disposal to "cram in as much AI processing as we could." He continued: "and also, because Ada is based on Gen 5, PCIe Gen 5, we now have the ability to do peer-to-peer cross-Gen 5 that's sufficiently fast that it was a better tradeoff." We reached out to NVIDIA for confirmation, and their answer is:
NVIDIA: Ada does not support PCIe Gen 5, but the Gen 5 power connector is included.
PCIe Gen 4 provides plenty of bandwidth for graphics usage today, so we felt it wasn't necessary to implement Gen 5 for this generation of graphics cards. The large framebuffers and large L2 caches of Ada GPUs also reduce utilization of the PCIe interface.
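For a rough sense of the bandwidth figures behind that statement, per-direction PCIe throughput can be estimated from the transfer rate, the lane count, and the 128b/130b line encoding used since PCIe 3.0. This is a back-of-the-envelope sketch; real-world throughput is lower once packet and protocol overhead are accounted for:

```python
def pcie_bw_gb_s(transfer_rate_gt_s: float, lanes: int) -> float:
    """Approximate per-direction PCIe bandwidth in GB/s.

    Assumes 128b/130b line encoding (PCIe 3.0 and later) and
    ignores packet/protocol overhead, so this is an upper bound.
    """
    return transfer_rate_gt_s * (128 / 130) * lanes / 8

print(f"PCIe 4.0 x16: {pcie_bw_gb_s(16, 16):.1f} GB/s")  # ~31.5 GB/s
print(f"PCIe 5.0 x16: {pcie_bw_gb_s(32, 16):.1f} GB/s")  # ~63.0 GB/s
```

So a Gen 5 x16 slot would roughly double the ~31.5 GB/s a Gen 4 x16 slot offers each way, which is the headroom NVIDIA is arguing current graphics workloads don't need.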
20 Comments on Jensen Confirms: NVLink Support in Ada Lovelace is Gone
We know that Intel and AMD would support this kind of technology on their GPUs; I am curious how Nvidia responds.
Was it too expensive to install those ports on $1600 cards?
Will they keep consumers waiting for DP 2.0 ports until 2024, when the 5000 series is released?
All his other reasons are sound, but that one isn't.
And he obviously couldn't mention market segregation.
Yeah, of course the Gen 5 power connector is included, because so much electricity is needed to power the card. Totally insane. What's the 4090 Ti gonna be? Two Gen 5 connectors, maybe? I could list a bunch of questions.
1. Why is RTX 40 so damn expensive? Do they think consumers are just stupid?
2. What's the actual performance uplift with DLSS turned off, especially in rasterisation? I personally don't really care about the ray-tracing BS.
3. How much RTX 30 stock is actually out there? And what about the claims of shortages of resources and supplies and everything? (Although we already know the answer all too well)
4. How many cards were sold to miners? (Of course they dare not make this public)
...
The only time PCIe 4.0 is noticeable is when your GPU is gimped on memory, see also the RX 6400 and 4GB RX 550. Well number 1 is really easy to answer. Gamers will CONSOOM this product at its elevated price and make bank for nvidia. Happens every gen.
From: marketing mumbo jumbo → To: real English
"they needed the I/O for 'something else'" → NVLink was so bad we preferred removing it
and "cram in as much AI processing as we could"
Unfortunately, that still leaves 1599 other things that he f*cked up on :roll:
It will be embarrassing for nGreedia to see AMD launch their new cards with all the latest I/O! But it's refreshing to hear the fanboys saying that none of this is necessary, die area waste, blah blah... Maybe these features aren't badly missed right now, but these cards will be on the market for at least three more years, and people may miss them long before then.
Maybe we should all boycott Nvidia, not buy a single card, and tell them too big is too much. Honestly, they are making the same mistake twice over. When the 8800 GTX and 9700 "vacuum cleaners" came out, people lost interest in Nvidia, so they went back to smaller, lower-power cards with more GPU power. The 10 series was a perfect example of good engineering: simple blower coolers, SLI on the 1080s and 1070s, low power draw but high output. The 20 series saw some small performance increases, but SLI didn't come until the Supers arrived. Then with the 30 series they went in the same crappy direction as the 8800s and 9700s again: massive single cards, no SLI on any smaller models, and more and more power. The 40 series is this all over again. A four-slot cooler, wow! So I have no room for my PCIe audio card or anything else, like my RAID SSD M.2 setup. The video card can draw 450 W; that's like a whole computer inside another computer.

For 1080p and 1440p, a pair of 660 Tis in SLI performed quite well. Once you hit 2160p, the whole world changes: my 660 Tis couldn't keep up, but my 1070 Tis handled 4K with no sweat. Imagine when 8K comes out; a single 40-series card isn't going to handle that. DLSS will just try to fake it, but that's still not true hardware performance.
DLSS is nothing more than a software enhancer that makes it seem faster.
One of the major benefits of NVLink was the ability to pool the memory of the two cards in scenarios where that was needed, and this doesn't seem possible anymore.
With this move, anyone who needs 48 GB of VRAM is forced to buy a single RTX 6000 for (I'd guess) $6,000+. Ridiculous.
To me, they removed NVLink to force professional customers to buy their more expensive professional cards.
But I bet that this time more than one professional studio or research lab will stick with the 30 series and NVLink.