
NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

btarunr

Editor & Senior Moderator
NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.

These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.



The company also unveiled its initial Turing-based products - the NVIDIA Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000 GPUs - which will revolutionize the work of some 50 million designers and artists across multiple industries.

"Turing is NVIDIA's most important innovation in computer graphics in more than a decade," said Jensen Huang, founder and CEO of NVIDIA, speaking at the start of the annual SIGGRAPH conference. "Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."

NVIDIA's eighth-generation GPU architecture, Turing enables the world's first ray-tracing GPU and is the result of more than 10,000 engineering-years of effort. By using Turing's hybrid rendering capabilities, applications can simulate the physical world at 6x the speed of the previous Pascal generation.

To help developers take full advantage of these capabilities, NVIDIA has enhanced its RTX development platform with new AI, ray-tracing and simulation SDKs. It also announced that key graphics applications addressing millions of designers, artists and scientists are planning to take advantage of Turing features through the RTX development platform.

"This is a significant moment in the history of computer graphics," said Jon Peddie, CEO of analyst firm JPR. "NVIDIA is delivering real-time ray tracing five years before we had thought possible."

Real-Time Ray Tracing Accelerated by RT Cores
The Turing architecture is armed with dedicated ray-tracing processors called RT Cores, which accelerate the computation of how light and sound travel in 3D environments at up to 10 GigaRays a second. Turing accelerates real-time ray tracing operations by up to 25x that of the previous Pascal generation, and GPU nodes can be used for final-frame rendering for film effects at more than 30x the speed of CPU nodes.
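
For a sense of scale (back-of-the-envelope math of my own, not a figure from NVIDIA): at 10 GigaRays per second, a 1080p frame rendered at 60 FPS has a budget of roughly 80 rays per pixel, which is exactly why the AI denoising described further down still matters for real-time use.

# Napkin math on the 10 GigaRays/s figure; 1080p at 60 FPS is my assumption, not NVIDIA's.
rays_per_second = 10e9
pixels = 1920 * 1080                       # 1080p frame
frames_per_second = 60
rays_per_pixel = rays_per_second / (pixels * frames_per_second)
print(round(rays_per_pixel, 1))            # ~80.4 rays per pixel per frame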

"Cinesite is proud to partner with Autodesk and NVIDIA to bring Arnold to the GPU, but we never expected to see results this dramatic," said Michele Sciolette, CTO of Cinesite. "This means we can iterate faster, more frequently and with higher quality settings. This will completely change how our artists work."

AI Accelerated by Powerful Tensor Cores
The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations a second.

This level of performance powers AI-enhanced features for creating applications with powerful new capabilities. These include DLAA - deep learning anti-aliasing, which is a breakthrough in high-quality motion image generation - denoising, resolution scaling and video re-timing.

These features are part of the NVIDIA NGX software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging and video processing into applications with pre-trained networks.

Faster Simulation and Rasterization with New Turing Streaming Multiprocessor
Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.

Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second.
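
As a sanity check on those numbers (my own arithmetic, assuming the usual two floating-point operations per CUDA core per clock from a fused multiply-add): the quoted 16 TFLOPS across 4,608 cores implies a boost clock of roughly 1.74 GHz.

# Implied clock for 16 TFLOPS from 4,608 CUDA cores, assuming 2 FLOPs (one FMA) per core per clock.
cuda_cores = 4608
flops_per_core_per_clock = 2
target_flops = 16e12
implied_clock_ghz = target_flops / (cuda_cores * flops_per_core_per_clock) / 1e9
print(round(implied_clock_ghz, 2))         # ~1.74 GHz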

Developers can take advantage of NVIDIA's CUDA 10, FleX and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments and special effects.
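
To picture the kind of per-particle work FleX and PhysX offload to the GPU, here is a deliberately tiny CPU-side sketch in plain NumPy. It is not the FleX or PhysX API (whose calls I won't guess at), just the sort of integrate-and-collide step those libraries run across millions of particles in parallel.

# Minimal particle step (semi-implicit Euler) - illustrative only, not library code.
import numpy as np

n = 100_000
pos = np.random.rand(n, 3).astype(np.float32)              # positions in metres
vel = np.zeros((n, 3), dtype=np.float32)                   # velocities in m/s
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)
dt = np.float32(1.0 / 60.0)                                 # one 60 Hz frame

def step(pos, vel):
    vel += gravity * dt            # integrate acceleration
    pos += vel * dt                # integrate velocity
    below = pos[:, 1] < 0.0        # crude ground plane at y = 0
    pos[below, 1] = 0.0
    vel[below, 1] *= -0.5          # bounce with energy loss
    return pos, vel

for _ in range(60):                # simulate one second of motion
    pos, vel = step(pos, vel)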

Availability
Quadro GPUs based on Turing will be initially available in the fourth quarter.

View at TechPowerUp Main Site
 
That RT Core sure is interesting; I was convinced Tensor Cores wouldn't be used for ray tracing.
 
rip Ayyyymd
 
So it does indeed look like the "GeForce" (or new name) RTX cards will be cut-down versions of this, which most likely explains the leaked CU counts/RAM sizes.

That RT Core sure is interesting; I was convinced Tensor Cores wouldn't be used for ray tracing.

I don't think the tensor cores are used in the ray tracing itself; what they appear to be doing is using the tensor cores' DL/ML to reduce the amount of work the RT engines need to do, by working out things like what will actually be visible and "guessing" at the likely outcome to speed up the ray tracing.

So you may end up with a slightly less accurate render than a pure RT scene, but much, much faster. Obviously, if you are moving around quickly in real time, like in a game, a little inaccuracy is likely to go unnoticed, and likewise, when the camera (render viewport) is static, any lost detail can be added back in.

Something like that anyway I think.
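
A crude way to picture that "trace less, reconstruct more" idea (a toy of my own; the real denoisers are trained neural networks, not a box blur): trace only a few samples per pixel, accept the noise, then filter the result back towards something plausible.

# Toy denoising demo: few samples per pixel, then a simple spatial filter.
# Purely illustrative - stands in for the learned denoisers NVIDIA describes.
import numpy as np

h, w, spp = 270, 480, 4                               # low sample count per pixel
y, x = np.linspace(0, np.pi, h), np.linspace(0, np.pi, w)
clean = np.outer(np.sin(y), np.sin(x))                # smooth stand-in for the true radiance
samples = clean[None] + np.random.normal(0.0, 0.3, (spp, h, w))
noisy = samples.mean(axis=0)                          # what a low-ray-count trace gives you

def box_blur(img, k=2):
    acc = np.zeros_like(img)
    for dy in range(-k, k + 1):                       # average a (2k+1) x (2k+1) neighbourhood
        for dx in range(-k, k + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (2 * k + 1) ** 2

denoised = box_blur(noisy)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())  # error drops noticeably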
 
Note to the person that buys the above - never, ever feed it an apple... as we all know what happened afterwards to Mister Turing.
 
So it does indeed look like the "GeForce" (or new name) RTX cards will be cut-down versions of this, which most likely explains the leaked CU counts/RAM sizes.



I don't think the tensor cores are used in the ray tracing itself; what they appear to be doing is using the tensor cores' DL/ML to reduce the amount of work the RT engines need to do, by working out things like what will actually be visible and "guessing" at the likely outcome to speed up the ray tracing.

So you may end up with a slightly less accurate render than a pure RT scene, but much, much faster. Obviously, if you are moving around quickly in real time, like in a game, a little inaccuracy is likely to go unnoticed, and likewise, when the camera (render viewport) is static, any lost detail can be added back in.

Something like that anyway I think.
Yes, he said Turing can render at a lower resolution, thanks to "training the model out of very HQ ground truth", and generate the final image at a much higher rate. He mentioned it being used in UE4 and Microsoft DXR. It's called DLAA (deep learning anti-aliasing), and it is based on the use of the tensor cores.
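
The resolution-scaling part is easy to put rough numbers on (the 1440p-to-4K ratio below is my own example, not something NVIDIA has stated): rendering internally at 2560x1440 and letting the trained model reconstruct a 3840x2160 output means shading only about 44% of the final pixels.

# Pixel-count saving from rendering low and reconstructing high with a learned model.
# Resolutions are my example; actual ratios are up to the developer/driver.
internal_pixels = 2560 * 1440
output_pixels = 3840 * 2160
print(f"{internal_pixels / output_pixels:.0%} of the output pixels are actually shaded")  # 44%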


Looks like it has both dedicated RT cores and AI acceleration; it looks like an absolute RT beast.


NVIDIA-RTX-Software-Stack-1030x512.png


NVIDIA-Turing-RTX-Die-Breakup.png


I wonder how much of this goes into GeForce.
 
All this tech fluff is nice and fancy, but what really sells a new card release to me are the new features available NOW and in ANY game. Like, for example, utilizing all these fancy Tensor cores to make next-level post-process AA that's light on the GPU but smooths edges better than the crappy FXAA they give us in the NV control panel. At the very least, give us the option to switch between FXAA, MLAA and SMAA. Just give us something new we can use now, not something we might hypothetically see being used in games in 5 years' time...

It's why Pascal was such a boring release for me. Sure, it was fast, but other than that it brought basically nothing new and exciting to the end user. New post-process AA modes would be a nice start, just like that post-process image tool they released some time ago to change game colors, sharpness, tone and so on. That's cool, but you need to use the stupid NVIDIA Experience app to get it, which sucks; that's another problem caused by the archaic original NV control panel. Anyway, I'm rambling again: give us more features for today so we can wait more easily for the features of tomorrow... I usually buy new graphics cards out of curiosity, because of features like these, even when I don't really need a new one, not for what I might maybe use in 5 years' time.
 
Another fairy tale that will become reality in 10+ years...
 
Another fairy tale that will become reality in 10+ years...

Or, if nothing else, it'll cost a 20-30 FPS hit when you turn it on, just like "PhysX", lulz.
 
Am I getting this right? First comes Quadro in late Q4, and only after that can we expect realistic news about the new GeForce (11xx or 20xx)? So no late-Q3 GeForce (11xx or 20xx)? :'(
 
One thing is for sure. That nVidia guy really loves his black leather jackets.
 
Am I getting this right? First comes Quadro in late Q4, and only after that can we expect realistic news about the new GeForce (11xx or 20xx)? So no late-Q3 GeForce (11xx or 20xx)? :'(
No, the GTX 2080 will be announced on Monday.
 
Or, if nothing else, it'll cost a 20-30 FPS hit when you turn it on, just like "PhysX", lulz.

Remember Half-Life 2, when we had physics elements that actually affected gameplay? Sure, it was very basic stuff like ripping radiators off walls, throwing paint cans at zombies and adding weight to ramps to raise them, but it was a mechanic you could take advantage of for actual gameplay and progression. Fast forward 14 years and we really haven't moved ANYWHERE, thanks to ZERO standards in game physics. If HW-accelerated physics were a standard, all games could feature gameplay physics on a much higher level than just ragdolls. But because only some percentage of users can experience it, developers can't afford to make it a core element of a game (the gameplay) and instead only offer it as eye candy for glass, fog, smoke and litter. And that's it. We all hoped Havok would become GPU-accelerated, but that fell through entirely. Bullet didn't progress anywhere either. And PhysX has been basically the same useless thing since the beginning. I just wish everyone would stop twiddling their thumbs and do something about it, for the sake of evolving gaming into something more than worlds that are still basically entirely static with a few deformable decals. Especially because every game using PhysX even downgrades CPU physics to 2000-era levels so that the HW version looks so much better, which still looks like a joke compared to 2005-era games and costs tons of performance for no logical reason.
 
Am I getting this right? First comes Quadro in late Q4, and only after that can we expect realistic news about the new GeForce (11xx or 20xx)? So no late-Q3 GeForce (11xx or 20xx)? :'(
GeForce cards should be announced next Monday.
 
https://twitter.com/twitter/statuses/1029164596903337984

Focus on this part:

2080.jpg



The rest of the date appears in the order 2->0->8->0.

Remember Half-Life 2, when we had physics elements that actually affected gameplay? Sure, it was very basic stuff like ripping radiators off walls, throwing paint cans at zombies and adding weight to ramps to raise them, but it was a mechanic you could take advantage of for actual gameplay and progression. Fast forward 14 years and we really haven't moved ANYWHERE, thanks to ZERO standards in game physics. If HW-accelerated physics were a standard, all games could feature gameplay physics on a much higher level than just ragdolls. But because only some percentage of users can experience it, developers can't afford to make it a core element of a game (the gameplay) and instead only offer it as eye candy for glass, fog, smoke and litter. And that's it. We all hoped Havok would become GPU-accelerated, but that fell through entirely. Bullet didn't progress anywhere either. And PhysX has been basically the same useless thing since the beginning. I just wish everyone would stop twiddling their thumbs and do something about it, for the sake of evolving gaming into something more than worlds that are still basically entirely static with a few deformable decals. Especially because every game using PhysX even downgrades CPU physics to 2000-era levels so that the HW version looks so much better, which still looks like a joke compared to 2005-era games and costs tons of performance for no logical reason.

All this tech fluff is nice and fancy, but what really sells a new card release to me are the new features available NOW and in ANY game. Like, for example, utilizing all these fancy Tensor cores to make next-level post-process AA that's light on the GPU but smooths edges better than the crappy FXAA they give us in the NV control panel. At the very least, give us the option to switch between FXAA, MLAA and SMAA. Just give us something new we can use now, not something we might hypothetically see being used in games in 5 years' time...

It's why Pascal was such a boring release for me. Sure, it was fast, but other than that it brought basically nothing new and exciting to the end user. New post-process AA modes would be a nice start, just like that post-process image tool they released some time ago to change game colors, sharpness, tone and so on. That's cool, but you need to use the stupid NVIDIA Experience app to get it, which sucks; that's another problem caused by the archaic original NV control panel. Anyway, I'm rambling again: give us more features for today so we can wait more easily for the features of tomorrow... I usually buy new graphics cards out of curiosity, because of features like these, even when I don't really need a new one, not for what I might maybe use in 5 years' time.

You are just adorable.
 
Unless the card is right side up, they didn't reinvent crap!
 
The "consumer" card will most likely have the RT core disabled.
 
So... what then? The consumer card is just more of the same (upgraded)?
 
No, the GTX 2080 will be announced on Monday.

Or will it be the RTX 2080?

Remember Half-Life 2, when we had physics elements that actually affected gameplay? Sure, it was very basic stuff like ripping radiators off walls, throwing paint cans at zombies and adding weight to ramps to raise them, but it was a mechanic you could take advantage of for actual gameplay and progression. Fast forward 14 years and we really haven't moved ANYWHERE, thanks to ZERO standards in game physics. If HW-accelerated physics were a standard, all games...

Instead of dedicated H/W physics, perhaps this is somewhere the extra CPU cores we are starting to get can be put to good use?
 
Or will it be the RTX 2080?



Instead of dedicated H/W physics, perhaps this is somewhere the extra CPU cores we are starting to get can be put to good use?

The problem with cores is that the majority of users only have quad cores at best, with 8 total threads thanks to SMT, which puts them on the "not applicable" list. We might see that in 5 years' time, when 10-12 cores become the standard, but for now, just not enough players have them.
 