Tuesday, September 20th 2022

NVIDIA's New Ada Lovelace RTX GPU Arrives for Designers and Creators

Opening a new era of neural graphics that marries AI and simulation, NVIDIA today announced the NVIDIA RTX 6000 workstation GPU, based on its new NVIDIA Ada Lovelace architecture. With the new NVIDIA RTX 6000 Ada Generation GPU delivering real-time rendering, graphics and AI, designers and engineers can drive cutting-edge, simulation-based workflows to build and validate more sophisticated designs. Artists can take storytelling to the next level, creating more compelling content and building immersive virtual environments. Scientists, researchers and medical professionals can accelerate the development of life-saving medicines and procedures with supercomputing power on their workstations—all at up to 2-4x the performance of the previous-generation RTX A6000.

Designed for neural graphics and advanced virtual world simulation, the RTX 6000, with Ada generation AI and programmable shader technology, is the ideal platform for creating content and tools for the metaverse with NVIDIA Omniverse Enterprise. Incorporating the latest generations of render, AI and shader technologies and 48 GB of GPU memory, the RTX 6000 enables users to create incredibly detailed content, develop complex simulations and form the building blocks required to construct compelling and engaging virtual worlds.
"Neural graphics is driving the next wave of innovation in computer graphics and will change the way content is created and experienced," said Bob Pette, vice president of professional visualization at NVIDIA. "The NVIDIA RTX 6000 is ready to power this new era for engineers, designers and scientists to meet the need for demanding content-creation, rendering, AI and simulation workloads that are required to build worlds in the metaverse."

Global Leaders Turn to NVIDIA RTX 6000
"NVIDIA's professional GPUs helped us deliver an experience like none other to baseball fans everywhere by bringing legends of the game back to life with AI-powered facial animation," said Michael Davies, senior vice president of field operations at Fox Sports. "We're excited to take advantage of the incredible graphics and AI performance provided by the RTX 6000, which will help us showcase the next chapter of live sports broadcast."

"Broadcasters are increasingly adopting software and compute to help build the next generation of TV stations," said Andrew Cross, CEO of Grass Valley. "The new workstation GPUs are truly game changing, providing us with over 300% performance increases—allowing us to improve the quality of video and the value of our products."

"The new NVIDIA Ada Lovelace architecture will enable designers and engineers to continue pushing the boundaries of engineering simulations," said Dipankar Choudhury, Ansys Fellow and HPC Center of Excellence lead. "The RTX 6000 GPU's larger L2 cache, significant increase in number and performance of next-gen cores and increased memory bandwidth will result in impressive performance gains for the broad Ansys application portfolio."

Next-Generation RTX Technology
Powered by the NVIDIA Ada architecture, the world's most advanced GPU architecture, the NVIDIA RTX 6000 features state-of-the-art NVIDIA RTX technology. Features include:
  • Third-generation RT Cores: Up to 2x the throughput of the previous generation with the ability to concurrently run ray tracing with either shading or denoising capabilities.
  • Fourth-generation Tensor Cores: Up to 2x faster AI training performance than the previous generation with expanded support for the FP8 data format.
  • CUDA cores: Up to 2x the single-precision floating point throughput compared to the previous generation.
  • GPU memory: Features 48 GB of GDDR6 memory for working with the largest 3D models, render images, simulation and AI datasets.
  • Virtualization: Will support NVIDIA virtual GPU (vGPU) software for multiple high-performance virtual workstation instances, enabling remote users to share resources and drive high-end design, AI and compute workloads.
  • XR: Features 3x the video encoding performance of the previous generation, for streaming multiple simultaneous XR sessions using NVIDIA CloudXR.
Availability
The NVIDIA RTX 6000 workstation GPU will be available from global distribution partners and manufacturers starting in December.
Source: NVIDIA

24 Comments on NVIDIA's New Ada Lovelace RTX GPU Arrives for Designers and Creators

#1
Tek-Check
There is no DisplayPort 2.0 even on creators' cards. Wow!
Posted on Reply
#2
cvaldes
Tek-Check: "There is no DisplayPort 2.0 even on creators' cards."
Again, these are probably somewhere in the DP 2.0 certification process.

Moaning about this in thread after thread isn't going to speed the certification process up for you or anyone else.

In fact, if the actual engineers doing the certification were reading all these threads, they wouldn't be doing their jobs in the most efficient manner, would they?

Feel free to keep prattling on about this, but either I or someone else will just echo the same statement. They don't have the certification yet. That doesn't mean it can't be offered in the future.

Remember, these cards aren't being loaded onto a FedEx delivery truck right now.
Posted on Reply
#3
trsttte
What are we supposed to use to distinguish these new GPUs from the previous ones? The previous one was called RTX A6000, and now this is RTX 6000? Right, because that's not confusing at all... and whatever happened to Quadro!?
cvaldes: "Again, these are probably somewhere in the DP 2.0 certification process.

Moaning about this in thread after thread isn't going to speed the certification process up for you or anyone else.

In fact, if the actual engineers doing the certification were reading all these threads, they wouldn't be doing their jobs in the most efficient manner, would they?

Feel free to keep prattling on about this, but either I or someone else will just echo the same statement. They don't have the certification yet. That doesn't mean it can't be offered in the future.

Remember, these cards aren't being loaded onto a FedEx delivery truck right now."
Hmm, I haven't seen all the news from today's announcement, but I'm gonna press doubt on that. If they were going to support DP 2.0, the certification should already have been done and/or they would advertise it (it's not like certification is the first spec test done).

In reality it's not like Quadro (or whatever name we should use to distinguish workstation GPUs now, like what the fuck NVIDIA!?) needs the higher bandwidth when they'll be used in virtualized scenarios or at lower refresh rates and can leverage DSC, but it would still have been nice to see it implemented.
Posted on Reply
#4
cvaldes
I don't know what VESA has on their plate right now. For sure, they don't just accept a bunch of hardware submissions and press "Approved" to clear everything at once.

There's also the possibility that DP 2.0 certification hinges on some sort of software support (firmware or driver) that NVIDIA must provide.

That might explain why no AIB partner cards have any mention of DP 2.0 either.
Posted on Reply
#5
ncrs
cvaldes: "I don't know what VESA has on their plate right now. For sure, they don't just accept a bunch of hardware submissions and press "Approved" to clear everything at once.

There's also the possibility that DP 2.0 certification hinges on some sort of software support (firmware or driver) that NVIDIA must provide.

That might explain why no AIB partner cards have any mention of DP 2.0 either."
Intel managed to get it even for A380. The lack of support on Ada-based cards is suspicious. If it was under active certification I'm sure NVIDIA would have mentioned it in the PR materials.
Posted on Reply
#6
cvaldes
But DP 2.0 certification isn't final for Intel A770.

There's a small chance that NVIDIA completely forgot about DisplayPort 2.0 and neglected to include that technology on their Ada generation cards despite having enough presence of mind to include HDMI 2.1.

What do you think the odds are that NVIDIA thought that they could just skip DisplayPort 2.0 with their 40 series cards?
Posted on Reply
#7
ncrs
cvaldes: "But DP 2.0 certification isn't final for Intel A770."
You're right, but what Intel wrote in their specification ("Designed for DP2.0, certification pending VESA CTS Release") is what I expected NVIDIA to do if they were also in the certification process, but they didn't.
cvaldes: "There's a small chance that NVIDIA completely forgot about DisplayPort 2.0 and neglected to include that technology on their Ada generation cards despite having enough presence of mind to include HDMI 2.1.

What do you think the odds are that NVIDIA thought that they could just skip DisplayPort 2.0 with their 40 series cards?"
That's why I wrote it was suspicious ;) I guess we'll have to wait for an official NVIDIA response to this issue.
Posted on Reply
#8
Eiswolf93
No NVLink even on the professional cards?
Posted on Reply
#9
Tek-Check
ncrs: "Intel managed to get it even for A380. The lack of support on Ada-based cards is suspicious. If it was under active certification I'm sure NVIDIA would have mentioned it in the PR materials."
True that. You don't omit such an important tech development from marketing, or at least from a teaser.
ncrs: "You're right, but what Intel wrote in their specification ("Designed for DP2.0, certification pending VESA CTS Release") is what I expected NVIDIA to do if they were also in the certification process, but they didn't."
Exactly! The least they could have done is inform people, like Intel did, that the technology will be included, to assure prospective buyers that DP 2.0 is baked into the hardware regardless of the formal certification process.

With or without a VESA certificate, if the hardware is capable, it should be mentioned without second thought, just like the HDMI 2.1 port worked from day one. Certification simply brings formal recognition that the industry standard was implemented; the port itself should work anyway, provided NVIDIA supports it in software.

No one sane would hide such an important video capability of a GPU that is expected to work once VESA's blessing arrives. This makes me think that nothing was submitted for DP 2.0 certification and the 4000-series cards will run on the older DP 1.4a.
Posted on Reply
#10
lexluthermiester
Tek-Check: "There is no DisplayPort 2.0 even on creators' cards."
Right? Seriously with the no DP2.0? I'm beginning to understand why EVGA dropped out...
Posted on Reply
#11
Tek-Check
lexluthermiester: "Right? Seriously with the no DP2.0? I'm beginning to understand why EVGA dropped out..."
Exactly. Here is the spec for the 6000. They did not bake DP 2.0 hardware support into the boards of the most expensive cards on the market.
Intel's lowest card, the A380, has a DP 2.0 port at 40 Gbps...
Posted on Reply
#12
trsttte
Tek-Check: "Intel's lowest card, the A380, has a DP 2.0 port at 40 Gbps..."
And before anyone points out that the bandwidth increase doesn't look that great (32 Gbit/s to 40), the encoding scheme also changed, so effective bandwidth goes from roughly 26 to 38.6 Gbit/s, which is pretty massive once you account for 4K 10-bit color, HDR or daisy chaining.
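For anyone who wants to sanity-check those numbers, here's a rough back-of-the-envelope sketch in C++. The link rates are the published HBR3/UHBR10 figures; the 4K refresh rate and the omission of blanking overhead are illustrative assumptions, not anything from the announcement:

```cpp
#include <cstdio>

int main() {
    // DisplayPort 1.4a (HBR3): 4 lanes x 8.1 Gbit/s with 8b/10b encoding (80% efficient)
    const double hbr3_raw = 4 * 8.1;                      // 32.4 Gbit/s on the wire
    const double hbr3_eff = hbr3_raw * 8.0 / 10.0;        // ~25.9 Gbit/s of video payload

    // DisplayPort 2.0 (UHBR10): 4 lanes x 10 Gbit/s with 128b/132b encoding (~97% efficient)
    const double uhbr10_raw = 4 * 10.0;                   // 40 Gbit/s on the wire
    const double uhbr10_eff = uhbr10_raw * 128.0 / 132.0; // ~38.8 Gbit/s before remaining link overhead

    // Illustrative payload: 4K at 144 Hz, 10-bit RGB (30 bits per pixel), ignoring blanking
    const double need = 3840.0 * 2160.0 * 144.0 * 30.0 / 1e9;

    std::printf("DP 1.4a effective: %.1f Gbit/s\n", hbr3_eff);    // ~25.9
    std::printf("DP 2.0  effective: %.1f Gbit/s\n", uhbr10_eff);  // ~38.8
    std::printf("4K144 10-bit RGB:  %.1f Gbit/s needed\n", need); // ~35.8
    return 0;
}
```

So a signal that DP 1.4a can only carry with DSC fits, just barely and uncompressed, on a 40 Gbps DP 2.0 link, before even considering UHBR13.5 or UHBR20.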
Posted on Reply
#13
Tek-Check
trsttte: "And before anyone points out that the bandwidth increase doesn't look that great (32 Gbit/s to 40), the encoding scheme also changed, so effective bandwidth goes from roughly 26 to 38.6 Gbit/s, which is pretty massive once you account for 4K 10-bit color, HDR or daisy chaining."
Exactly. The only way to speed up improvements in monitors and bring more 4K/5K 10-bit panels into the mainstream is to install DP 2.0 ports and be free of the HDMI 2.1 FRL that brought so many giant OLED TVs into the PC space. When I look at high-quality 4K HDR monitors, I am horrified by the prices of the Asus ProArt line. It cannot be the case that a giant TV with a high-quality OLED panel still costs as little as a third of a similarly well-specced, truly HDR monitor. As HDR requires roughly 25% more bandwidth, it's time DP 2.0 hit the ground running and pushed monitor vendors to speed up mainstream innovation and image quality.
Posted on Reply
#14
Lycanwolfen
Ahhh, single-slot blower cards. Wish we could go back to those.
Posted on Reply
#15
wolf
I'll be keen to see if they make an RTX 2000 like the A2000: low-profile, 75 W max. Let's see the extent of the perf/watt improvement.
Posted on Reply
#16
Steevo
cvaldes: "In fact, if the actual engineers doing the certification were reading all these threads, they wouldn't be doing their jobs in the most efficient manner, would they?"
How does it feel to be a living straw man?
Posted on Reply
#17
cvaldes
Very hollow obviously. :D

It will be interesting to see how NVIDIA navigates through the next few weeks before the first shipments start. Perhaps more interesting will be how they react after AMD and Intel make their next moves.
Posted on Reply
#18
vmarv
Eiswolf93: "No NVLink even on the professional cards?"
That would be really weird. They may have removed NVLink from the 4090 to reserve a certain level of performance for the workstation GPUs and force people to buy those instead of the GeForce cards.
But removing NVLink from the RTX 6000 could convince the people who benefit from that technology to stay with the old-gen cards, and the same can be said for those who used it with the 3090.
Unless they found another way to scale the memory of multiple GPUs the way NVLink does.
Posted on Reply
#19
AdmiralThrawn
Eiswolf93: "No NVLink even on the professional cards?"
Nobody uses it, and it requires a massive amount of time to create drivers for. Like 10 people use NVLink.
Posted on Reply
#20
trsttte
vmarv: "That would be really weird. They may have removed NVLink from the 4090 to reserve a certain level of performance for the workstation GPUs and force people to buy those instead of the GeForce cards.
But removing NVLink from the RTX 6000 could convince the people who benefit from that technology to stay with the old-gen cards, and the same can be said for those who used it with the 3090.
Unless they found another way to scale the memory of multiple GPUs the way NVLink does."
Professional applications don't need NVLink because they don't need the level of synchronization it provides (which games require, for example). They can just share resources through their regular PCIe connection; it's good enough for the types of loads they'll be running, which are heavily parallelized and easily distributed across multiple processors.
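For context, "sharing resources over PCIe" is already a standard mechanism in CUDA, independent of NVLink. Here's a minimal C++ sketch using the CUDA runtime's peer-to-peer API; the buffer size and device IDs are just placeholders, and nothing here is specific to Ada:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int canAccess = 0;
    // Ask the driver whether GPU 0 can read/write GPU 1's memory directly over the bus
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);

    const size_t bytes = 256 * 1024 * 1024;  // 256 MB test buffer (arbitrary)
    void *buf0 = nullptr, *buf1 = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    if (canAccess) {
        // Map GPU 1's memory into GPU 0's address space; kernels on GPU 0 can then
        // dereference pointers into GPU 1 memory, with traffic going over PCIe
        // (or NVLink, where it exists - the API is the same either way).
        cudaDeviceEnablePeerAccess(1, 0);
    }

    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    // Explicit device-to-device copy; the driver routes it peer-to-peer when it can,
    // otherwise it bounces through host memory.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    std::printf("peer access GPU0 -> GPU1: %s\n", canAccess ? "yes" : "no");

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```

Whether that's "good enough" mostly comes down to bandwidth: PCIe 4.0 x16 tops out around 64 GB/s bidirectional, while the previous generation's NVLink bridge was rated at about 112 GB/s bidirectional, so it depends on how often the GPUs need to touch each other's memory.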
Posted on Reply
#21
vmarv
trsttte: "Professional applications don't need NVLink because they don't need the level of synchronization it provides (which games require, for example). They can just share resources through their regular PCIe connection; it's good enough for the types of loads they'll be running, which are heavily parallelized and easily distributed across multiple processors."
NVLink was introduced for a reason: there are scenarios where a program needs fast GPU calculations and CUDA cores and, at the same time, a massive amount of GPU memory. Rendering is one of those situations. Very complex animations can need more than 24 GB of VRAM. When you consider that rendering such a scene can take many hours even on GPUs, you can imagine that the author would do anything possible to avoid crashes.
Even if a new card can complete a render faster, it is worthless for some people or studios if it doesn't have enough memory.
By the way, programs like Octane, 3ds Max, Maya, Blender, Redshift, DaVinci Resolve and so on can use NVLink. Considering that programs like Max and Maya are the standard tools of the video game and movie industries, it's easy to understand that this technology has its benefits and can be a must-have for some users.
For scientific calculations it can be even more valuable.

So, I can't believe that they are ditching it like that without explaining why. I'm curious to know what is going on.
Posted on Reply
#22
lexluthermiester
vmarv: "So, I can't believe that they are ditching it like that without explaining why. I'm curious to know what is going on."
My theory is what Jensen hinted at elsewhere: instead of NVLink, developers can do what wasn't effective until now and use the card bus for direct data transfers and inter-GPU communication. The PCIe bus has had a truly massive amount of available bandwidth since PCIe 4.0, more than any single GPU alone can saturate. So dropping two (or more) GPUs into a system and then connecting them in tandem in software is now a doable option.

Hardware SLI no longer needs to be a thing, as it can now be done in software over the existing card bus.
Posted on Reply
#23
vmarv
lexluthermiester: "My theory is what Jensen hinted at elsewhere: instead of NVLink, developers can do what wasn't effective until now and use the card bus for direct data transfers and inter-GPU communication. The PCIe bus has had a truly massive amount of available bandwidth since PCIe 4.0, more than any single GPU alone can saturate. So dropping two (or more) GPUs into a system and then connecting them in tandem in software is now a doable option.

Hardware SLI no longer needs to be a thing, as it can now be done in software over the existing card bus."
Well, if they can share their memory like that, it would be great. I guess we'll wait and see.
Posted on Reply
#24
lexluthermiester
vmarv: "Well, if they can share their memory like that, it would be great. I guess we'll wait and see."
There's no reason why that cannot be configured in software, but yeah, we'll have to wait and see if/how it's done.
Posted on Reply