Tuesday, August 14th 2018

NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.

These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.
The company also unveiled its initial Turing-based products - the NVIDIA Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000 GPUs - which will revolutionize the work of some 50 million designers and artists across multiple industries.

"Turing is NVIDIA's most important innovation in computer graphics in more than a decade," said Jensen Huang, founder and CEO of NVIDIA, speaking at the start of the annual SIGGRAPH conference. "Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."

NVIDIA's eighth-generation GPU architecture, Turing enables the world's first ray-tracing GPU and is the result of more than 10,000 engineering-years of effort. By using Turing's hybrid rendering capabilities, applications can simulate the physical world at 6x the speed of the previous Pascal generation.

To help developers take full advantage of these capabilities, NVIDIA has enhanced its RTX development platform with new AI, ray-tracing and simulation SDKs. It also announced that key graphics applications addressing millions of designers, artists and scientists are planning to take advantage of Turing features through the RTX development platform.

"This is a significant moment in the history of computer graphics," said Jon Peddie, CEO of analyst firm JPR. "NVIDIA is delivering real-time ray tracing five years before we had thought possible."

Real-Time Ray Tracing Accelerated by RT Cores
The Turing architecture is armed with dedicated ray-tracing processors called RT Cores, which accelerate the computation of how light and sound travel in 3D environments at up to 10 GigaRays a second. Turing accelerates real-time ray tracing operations by up to 25x that of the previous Pascal generation, and GPU nodes can be used for final-frame rendering for film effects at more than 30x the speed of CPU nodes.
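RT Cores offload the two inner loops of ray tracing, bounding volume hierarchy (BVH) traversal and ray-triangle intersection testing, from the programmable shader cores. As a rough illustration of the per-ray arithmetic being moved into fixed-function hardware, here is a standard Moller-Trumbore ray-triangle test written as plain CUDA C++; this is an illustrative sketch only, not NVIDIA's implementation, and the small float3 helpers are defined locally to keep it self-contained.

// The kind of per-ray test RT Cores evaluate in hardware while walking a BVH.
// Illustrative sketch only, written as ordinary CUDA C++.
#include <cuda_runtime.h>
#include <math.h>

__host__ __device__ inline float3 sub3(float3 a, float3 b)   { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
__host__ __device__ inline float3 cross3(float3 a, float3 b) { return make_float3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x); }
__host__ __device__ inline float  dot3(float3 a, float3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore: returns true and writes the hit distance t if the ray
// (orig, dir) intersects the triangle (v0, v1, v2).
__host__ __device__ bool intersect_triangle(float3 orig, float3 dir,
                                            float3 v0, float3 v1, float3 v2,
                                            float* t)
{
    const float EPS = 1e-7f;
    float3 e1 = sub3(v1, v0);
    float3 e2 = sub3(v2, v0);
    float3 p  = cross3(dir, e2);
    float det = dot3(e1, p);
    if (fabsf(det) < EPS) return false;        // ray is parallel to the triangle
    float inv = 1.0f / det;
    float3 s = sub3(orig, v0);
    float u = dot3(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;    // hit lies outside the triangle
    float3 q = cross3(s, e1);
    float v = dot3(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot3(e2, q) * inv;                    // distance along the ray
    return *t > EPS;
}

In a full renderer this kind of test runs millions to billions of times per frame, which is why dedicating silicon to it, and to the BVH traversal that feeds it, is what makes a figure like 10 GigaRays per second plausible.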

"Cinesite is proud to partner with Autodesk and NVIDIA to bring Arnold to the GPU, but we never expected to see results this dramatic," said Michele Sciolette, CTO of Cinesite. "This means we can iterate faster, more frequently and with higher quality settings. This will completely change how our artists work."

AI Accelerated by Powerful Tensor Cores
The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations a second.
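Tensor Cores operate on small matrix tiles, performing fused multiply-accumulate operations of the form D = A x B + C across a warp in one hardware operation. A minimal sketch using CUDA's public WMMA API follows; the 16x16x16 tile shape, FP16/FP32 precisions and single-warp launch are illustrative choices for this example, not a statement of how NVIDIA's own libraries drive the hardware.

// One warp computes a single 16x16 tile of D = A * B + C on the Tensor Cores.
// Requires a Tensor Core capable GPU (sm_70 or newer); launch as <<<1, 32>>>.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmma_16x16_tile(const half* A, const half* B, const float* C, float* D)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::load_matrix_sync(a_frag, A, 16);                        // leading dimension 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::load_matrix_sync(acc_frag, C, 16, wmma::mem_row_major);

    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);           // the Tensor Core multiply-accumulate

    wmma::store_matrix_sync(D, acc_frag, 16, wmma::mem_row_major);
}

Real workloads tile large matrices across many warps; the headline trillions-of-operations figure comes from a great many such tiles being processed concurrently.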

This level of performance powers AI-enhanced features for creating applications with powerful new capabilities. These include DLAA (deep learning anti-aliasing, a breakthrough in high-quality motion image generation), denoising, resolution scaling and video re-timing.

These features are part of the NVIDIA NGX software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging and video processing into applications with pre-trained networks.

Faster Simulation and Rasterization with New Turing Streaming Multiprocessor
Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.

Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second.
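In practice, a separate integer pipeline means the address and index arithmetic that accompanies most floating-point kernels no longer competes with the FP math for the same execution units. A contrived CUDA sketch of such a mixed workload follows; it is illustrative only, and how the instructions are actually scheduled and co-issued is up to the compiler and the hardware.

// Integer index/address arithmetic interleaved with floating-point math.
// On Turing the integer unit can execute this kind of work alongside the
// FP datapath rather than serializing with it.
__global__ void scale_gather(const float* __restrict__ in,
                             float* __restrict__ out,
                             const int* __restrict__ indices,
                             int n, float scale)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // integer: thread index math
    if (i >= n) return;
    int src = indices[i] % n;                        // integer: gather address computation
    out[i] = fmaf(in[src], scale, 1.0f);             // floating point: fused multiply-add
}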

Developers can take advantage of NVIDIA's CUDA 10, FleX and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments and special effects.
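For a sense of the per-element work such solvers run on the GPU, here is a toy particle-integration kernel. It is a generic sketch under simplifying assumptions (independent point particles, gravity only, a ground plane), not the FleX or PhysX API.

// Semi-implicit Euler step for a batch of independent particles under gravity.
// A deliberately simple stand-in for the per-particle work inside GPU
// simulation SDKs; not their actual API.
#include <cuda_runtime.h>

__global__ void integrate_particles(float3* pos, float3* vel, int count, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    float3 v = vel[i];
    v.y -= 9.81f * dt;                                // gravity

    float3 x = pos[i];
    x.x += v.x * dt;  x.y += v.y * dt;  x.z += v.z * dt;

    if (x.y < 0.0f) { x.y = 0.0f; v.y *= -0.5f; }     // crude bounce off the ground plane

    vel[i] = v;
    pos[i] = x;
}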

Availability
Quadro GPUs based on Turing will be initially available in the fourth quarter.

88 Comments on NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

#1
Vya Domus
That RT Core sure is interesting; I was convinced Tensor Cores wouldn't be used for ray tracing.
Posted on Reply
#3
Vya Domus
Midland Dog: rip Ayyyymd
RIP insightful and comprehensive opinions.
Posted on Reply
#4
nemesis.ie
So it does indeed look like the "GeForce" (or new name) RTX cards will be cut-down versions of this, which most likely explains the leaked CU counts/RAM sizes.
Vya Domus: That RT Core sure is interesting; I was convinced Tensor Cores wouldn't be used for ray tracing.
I don't think the Tensor Cores are used in the ray tracing itself; what they appear to be doing is using the Tensor Cores' DL/ML to reduce the amount of work the RT engines need to do, by working out things like what will actually be visible and "guessing" at the likely outcome to speed up the ray tracing.

So you may end up with a slightly less accurate render than a pure RT scene, but it's much, much faster. Obviously, if you are moving around quickly in real time, like in a game, a little inaccuracy is likely not noticed, and likewise, when the camera (render port) is static, any lost detail can be added back in.

Something like that anyway I think.
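In other words, the pipeline reduces to tracing only a few rays per pixel and letting a trained network reconstruct the rest, conceptually something like this (trace_one_sample_per_pixel() and ai_denoise() are hypothetical placeholders for the RT-Core and Tensor-Core stages, not a real API):

// Conceptual hybrid-rendering structure: sparse ray tracing followed by a
// learned denoise/reconstruction pass. The two stage functions are
// hypothetical placeholders, declared but not defined here.
struct Framebuffer {
    float* rgb;     // interleaved RGB, width * height * 3 floats
    int width;
    int height;
};

void trace_one_sample_per_pixel(Framebuffer& noisy);                 // RT Cores: a few rays per pixel
void ai_denoise(const Framebuffer& noisy, Framebuffer& final_image); // Tensor Cores: learned reconstruction

void render_frame(Framebuffer& noisy, Framebuffer& final_image)
{
    // 1. Trace a handful of rays per pixel instead of the hundreds a fully
    //    converged image would need.
    trace_one_sample_per_pixel(noisy);

    // 2. A network trained on high-quality reference renders "guesses" the
    //    converged result from the sparse samples, trading a little accuracy
    //    for a large speedup.
    ai_denoise(noisy, final_image);
}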
Posted on Reply
#5
Hawkster222
Note to the person that buys the above - never ever feed it an apple... as we all know what happened afterwards to Mister Turing.
Posted on Reply
#6
cucker tarlson
nemesis.ie: So it does indeed look like the "GeForce" (or new name) RTX cards will be cut-down versions of this, which most likely explains the leaked CU counts/RAM sizes. I don't think the Tensor Cores are used in the ray tracing itself; what they appear to be doing is using the Tensor Cores' DL/ML to reduce the amount of work the RT engines need to do [...] Something like that anyway I think.
Yes, he said Turing can render at a lower resolution thanks to "training the model on very HQ ground truth" and generate the final image at a much higher rate. He mentioned it being in use in UE4 and Microsoft DXR. It's called DLAA (deep learning anti-aliasing), and it is based on the use of the Tensor Cores.


Looks like it has both dedicated RT cores and AI acceleration; it looks like an absolute RT beast.

I wonder how much of this goes into GeForce.
Posted on Reply
#7
RejZoR
All this tech fluff is nice and fancy, but what really turns me on about new card releases are the features available NOW and in ANY game. Like, for example, utilizing all these fancy Tensor cores to make next-level post-process AA that's light on the GPU but smooths edges better than the crappy FXAA they give us in the NV CP. Like, freaking at least give us the option to switch between FXAA, MLAA and SMAA, ffs. Just give us something new we can use now, not something we might hypothetically see being used in games in 5 years' time...

It's why Pascal was such a boring release for me. Sure, it was fast, but other than that it brought basically nothing new and exciting to the end user. New post-process AA modes would be a nice start, just like that post-process image thing they released some time ago to change game colors, sharpness, tone and so on. That's cool, but you need to use the stupid NVIDIA Experience to have it, which sucks. So that's another problem, thanks to the archaic original NV CP. Anyway, I'm rambling again: give us more features for today so we can more easily wait for the features of tomorrow... I usually buy new graphics cards for these features, purely out of curiosity, even when I don't really have to buy a new one, not for what I might maybe use in 5 years' time.
Posted on Reply
#8
kastriot
Another fairytale which will be reality in 10+ years..
Posted on Reply
#9
Space Lynx
Astronaut
kastriot: Another fairytale which will be reality in 10+ years...
Or if nothing else, it'll cost a 20-30 FPS hit when you turn it on, just like "PhysX", lulz.
Posted on Reply
#10
techy1
Am I getting this right? First comes Quadro in late Q4, and only after that can we expect realistic news about the new GeForce (11xx or 20xx)? So no late-Q3 GeForce (11xx or 20xx) :'(
Posted on Reply
#11
Prima.Vera
One thing is for sure. That nVidia guy really loves his black leather jackets.
Posted on Reply
#12
cucker tarlson
techy1: Am I getting this right? First comes Quadro in late Q4, and only after that can we expect realistic news about the new GeForce (11xx or 20xx)? So no late-Q3 GeForce (11xx or 20xx) :'(
No, the GTX 2080 will be announced on Monday.
Posted on Reply
#13
RejZoR
lynx29: Or if nothing else, it'll cost a 20-30 FPS hit when you turn it on, just like "PhysX", lulz.
Remember Half-Life 2, when we had actual gameplay-affecting physics elements? Sure, it was very basic stuff like ripping radiators from walls, throwing paint cans at zombies and adding weight to ramps to raise them, but it was a mechanic you could take advantage of for actual gameplay and progression. Fast forward 14 years and we really haven't moved ANYWHERE, thanks to ZERO standards in game physics. If HW-accelerated physics were a standard, all games could feature gameplay physics on a much higher level than just ragdolls. But because only some percentage of users can experience it, developers can't afford to make it a core element of a game (gameplay) and instead only offer it as eye candy for glass, fog, smoke and litter. And that's it. We all hoped Havok would become GPU accelerated, but that fell through entirely. Bullet also didn't progress anywhere. And PhysX is basically the same useless thing it has been since the beginning. I just wish everyone would stop twiddling with their sausages and do something about it for the sake of evolving gaming into something more than worlds that are still basically entirely static with a few deformable decals. Especially because every game using PhysX even downgrades CPU physics to 2000-era levels for the sake of making the HW version look so much better, which still just looks like a joke in the end compared to 2005-era games and costs tons of performance for no logical reason.
Posted on Reply
#14
iO
techy1: Am I getting this right? First comes Quadro in late Q4, and only after that can we expect realistic news about the new GeForce (11xx or 20xx)? So no late-Q3 GeForce (11xx or 20xx) :'(
GeForce cards should be announced next Monday.
Posted on Reply
#15
cucker tarlson
twitter.com/twitter/statuses/1029164596903337984

Focus on this part: the rest of the date appears in the order 2->0->8->0.
RejZoR: Remember Half-Life 2, when we had actual gameplay-affecting physics elements? [...]

RejZoR: All this tech fluff is nice and fancy, but what really turns me on about new card releases are the features available NOW and in ANY game. [...]
You are just adorable.
Posted on Reply
#16
DeathtoGnomes
Unless the card is right side up, they didn't reinvent crap!
Posted on Reply
#17
cucker tarlson
DeathtoGnomes: Unless the card is right side up, they didn't reinvent crap!
I don't think any professional GPU has had RT cores accelerated by Tensor Cores before.
Posted on Reply
#18
Caring1
The "consumer" card will most likely have the RT core disabled.
Posted on Reply
#19
cucker tarlson
Caring1The "consumer" card will most likely have the RT core disabled.
or nerfed.
Posted on Reply
#20
StrayKAT
So... what then? The consumer card is just more of the same (upgraded)?
Posted on Reply
#21
nemesis.ie
cucker tarlson: No, the GTX 2080 will be announced on Monday.
Or will it be the RTX 2080?
RejZoR: Remember Half-Life 2, when we had actual gameplay-affecting physics elements? [...] If HW-accelerated physics were a standard, all games [...]
Instead of dedicated H/W physics, perhaps this is where the extra CPU cores we are starting to get can be put to good use?
Posted on Reply
#22
Fluffmeister
kastriot: Another fairytale which will be reality in 10+ years...
Isn't that competition?
Posted on Reply
#23
RejZoR
nemesis.ie: Or will it be the RTX 2080?

nemesis.ie: Instead of dedicated H/W physics, perhaps this is where the extra CPU cores we are starting to get can be put to good use?
The problem with cores is that the majority of users only have quad cores at best, with 8 total threads thanks to SMT, which puts them on the "not applicable" list. We might see that in 5 years' time, when around 10-12 cores becomes the standard, but for now not enough players have them.
Posted on Reply
#24
Xzibit
Has anyone else noticed that the RTX 8000 & RTX 6000 have fewer CUDA & Tensor cores than the GV100?
Posted on Reply
#25
Vya Domus
Xzibit: Has anyone else noticed that the RTX 8000 & RTX 6000 have fewer CUDA & Tensor cores than the GV100?
Because it's a smaller GPU compared to the V100; I guess that monstrous ~800 mm² die was a bit too much even for Nvidia and TSMC.
Posted on Reply