Tuesday, August 14th 2018

NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing, which together make real-time ray tracing possible for the first time.

These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.
The company also unveiled its initial Turing-based products - the NVIDIA Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000 GPUs - which will revolutionize the work of some 50 million designers and artists across multiple industries.

"Turing is NVIDIA's most important innovation in computer graphics in more than a decade," said Jensen Huang, founder and CEO of NVIDIA, speaking at the start of the annual SIGGRAPH conference. "Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."

Turing, NVIDIA's eighth-generation GPU architecture, enables the world's first ray-tracing GPUs and is the result of more than 10,000 engineering-years of effort. By using Turing's hybrid rendering capabilities, applications can simulate the physical world at 6x the speed of the previous Pascal generation.

To help developers take full advantage of these capabilities, NVIDIA has enhanced its RTX development platform with new AI, ray-tracing and simulation SDKs. It also announced that key graphics applications addressing millions of designers, artists and scientists are planning to take advantage of Turing features through the RTX development platform.

"This is a significant moment in the history of computer graphics," said Jon Peddie, CEO of analyst firm JPR. "NVIDIA is delivering real-time ray tracing five years before we had thought possible."

Real-Time Ray Tracing Accelerated by RT Cores
The Turing architecture is armed with dedicated ray-tracing processors called RT Cores, which accelerate the computation of how light and sound travel in 3D environments at up to 10 GigaRays a second. Turing accelerates real-time ray tracing operations by up to 25x that of the previous Pascal generation, and GPU nodes can be used for final-frame rendering for film effects at more than 30x the speed of CPU nodes.
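
As a rough illustration of the per-ray work RT Cores move into fixed-function hardware, the CUDA sketch below (illustrative only, not NVIDIA sample code; all names are hypothetical) brute-forces a ray-sphere intersection test per thread. A real renderer would issue rays through an API such as DXR or OptiX and let the RT Cores also traverse a bounding volume hierarchy instead of looping over every object:

```cuda
// Hypothetical sketch of the intersection math RT Cores accelerate.
#include <cuda_runtime.h>
#include <math.h>

struct Ray    { float3 o, d; };        // origin, normalized direction
struct Sphere { float3 c; float r; };  // center, radius

// Return the nearest positive hit distance, or -1 on a miss.
__device__ float hitSphere(const Ray &ray, const Sphere &s)
{
    // Solve |o + t*d - c|^2 = r^2 for t (with d normalized).
    float3 oc = make_float3(ray.o.x - s.c.x, ray.o.y - s.c.y, ray.o.z - s.c.z);
    float b = oc.x * ray.d.x + oc.y * ray.d.y + oc.z * ray.d.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.r * s.r;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;     // ray misses the sphere
    float t = -b - sqrtf(disc);
    return (t > 0.0f) ? t : -1.0f;
}

__global__ void traceRays(const Ray *rays, int numRays,
                          const Sphere *spheres, int numSpheres, float *hitT)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numRays) return;
    float nearest = 1e30f;                 // "no hit" sentinel
    for (int s = 0; s < numSpheres; ++s)   // brute force; RT Cores walk a BVH
    {
        float t = hitSphere(rays[i], spheres[s]);
        if (t > 0.0f && t < nearest) nearest = t;
    }
    hitT[i] = nearest;
}
```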

"Cinesite is proud to partner with Autodesk and NVIDIA to bring Arnold to the GPU, but we never expected to see results this dramatic," said Michele Sciolette, CTO of Cinesite. "This means we can iterate faster, more frequently and with higher quality settings. This will completely change how our artists work."

AI Accelerated by Powerful Tensor Cores
The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations a second.
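
For context on what a "tensor operation" is here: each Tensor Core executes a small matrix multiply-accumulate (D = A x B + C) as a single hardware operation. Below is a minimal sketch using CUDA's warp-level WMMA API, which exposes these units to developers; it uses the standard 16x16x16 FP16 tile and must be launched with one full warp of 32 threads (a sketch, not a tuned production kernel):

```cuda
// Minimal sketch of one Tensor Core matrix multiply-accumulate via WMMA.
// Requires a Tensor Core-capable GPU (sm_70 or newer).
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void tensorCoreTile(const half *a, const half *b, float *d)
{
    // Fragments are opaque register tiles distributed across the warp.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> accFrag;

    wmma::fill_fragment(accFrag, 0.0f);             // C = 0
    wmma::load_matrix_sync(aFrag, a, 16);           // 16x16 FP16 tile of A
    wmma::load_matrix_sync(bFrag, b, 16);           // 16x16 FP16 tile of B
    wmma::mma_sync(accFrag, aFrag, bFrag, accFrag); // D = A*B + C in hardware
    wmma::store_matrix_sync(d, accFrag, 16, wmma::mem_row_major);
}
```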

This level of performance powers AI-enhanced features for creating applications with powerful new capabilities. These include DLAA (deep learning anti-aliasing, a breakthrough in high-quality motion image generation), denoising, resolution scaling and video re-timing.

These features are part of the NVIDIA NGX software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging and video processing into applications with pre-trained networks.

Faster Simulation and Rasterization with New Turing Streaming Multiprocessor
Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.

Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second.
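
To illustrate why the concurrent integer unit matters, consider a kernel like the sketch below (illustrative, not NVIDIA sample code): the index and addressing arithmetic is integer work, while the multiply-adds are floating-point work, and on Turing these two independent streams can issue in parallel rather than competing for a single datapath:

```cuda
// Illustrative mix of INT (addressing) and FP32 (shading-style) work.
#include <cuda_runtime.h>

__global__ void mixedPipes(const float *in, float *out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // INT pipe: index math
    if (i >= n) return;

    float acc = 0.0f;
    for (int k = 0; k < 8; ++k)
    {
        int idx = (i + k * stride) % n;             // INT pipe: address math
        acc = fmaf(in[idx], 0.5f, acc);             // FP pipe: fused multiply-add
    }
    out[i] = acc;
}
```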

Developers can take advantage of NVIDIA's CUDA 10, FleX and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments and special effects.
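
As a flavor of what such a simulation kernel looks like, here is a minimal sketch in plain CUDA (not the FleX or PhysX APIs; all names and constants are illustrative) that integrates a set of particles under gravity each frame, with a simple ground-plane bounce:

```cuda
// Minimal particle step: semi-implicit Euler integration plus a ground bounce.
#include <cuda_runtime.h>

struct Particle { float3 pos, vel; };

__global__ void stepParticles(Particle *p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    p[i].vel.y -= 9.81f * dt;        // apply gravity to velocity
    p[i].pos.x += p[i].vel.x * dt;   // integrate position
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;

    if (p[i].pos.y < 0.0f)           // collide with the ground plane
    {
        p[i].pos.y = 0.0f;
        p[i].vel.y *= -0.5f;         // bounce, losing half the speed
    }
}
```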

Availability
Quadro GPUs based on Turing will initially be available in the fourth quarter.

88 Comments on NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

#76
FordGT90Concept
"I go fast!1!11!1!"
Judging by the picture, less than half of the die real estate is useful for gaming. If everything else were equal (clock speeds, shader architecture, etc.), Pascal would be a better gaming card.

NVIDIA has to invent new software to use the hardware features in order to justify the waste of all that silicon in the gaming segment. It's PhysX all over again: instead of "buy a second GeForce for PhysX," it's going to be "buy a GeForce RTX for ray tracing!"
Posted on Reply
#77
cucker tarlson
FordGT90Concept: Judging by the picture, less than half of the die real estate is useful for gaming. If everything else were equal (clock speeds, shader architecture, etc.), Pascal would be a better gaming card.

NVIDIA has to invent new software to use the hardware features in order to justify the waste of all that silicon on them in the gaming segment. It's PhysX all over; an argument NVIDIA can't win.
you mean Microsoft's DXR or Vulkan RTX?

Radeon RXT coming 2019 :laugh:
Posted on Reply
#78
jabbadap
cucker tarlson: I get it now, but it's very stupid.
Yeah, agreed. It's very stupid and even impossible when you add Tensor cores into the equation. But it's the only way to get 3072 and 4608 to have a common divisor.
Posted on Reply
#79
FordGT90Concept
"I go fast!1!11!1!"
cucker tarlson: you mean Microsoft's DXR or Vulkan RTX?

Radeon RXT coming 2019 :laugh:
Or not, because graphics cards need to be about eight times more powerful to realistically do real-time raytracing. By the time hardware is eight times more powerful, the bar will have moved in terms of scene complexity, so they'll need hardware eight times more powerful than that.

Lighting isn't something game developers want to budget a lot of compute time for.

Real time ray tracing has been a carrot dangling from a stick for decades now and there's no sign of that changing.
Posted on Reply
#80
cucker tarlson
That's why they utilize specialized RT hardware like RT cores, accelerated by Tensor cores; that'll do the job better than just adding more FP32 cores. Plus, I don't think RT is that complicated after all, I've seen people say it's not, it's just compute heavy.
jabbadap: Yeah, agreed. It's very stupid and even impossible when you add Tensor cores into the equation. But it's the only way to get 3072 and 4608 to have a common divisor.
;)
Posted on Reply
#81
FordGT90Concept
"I go fast!1!11!1!"
All that's different with RTX is approximation. Even with cutting corners, fairly simple scenes still render at less than 1 fps on a $3000 card. Unless you're making a game like Myst, that's worthless.

I'll let you in on a secret: the numbers were more or less the same two decades ago. The only difference is that the scene is more complex now and still unattainable, because the goal post keeps moving.

This is a pretty big deal for the digital animation industry, because instead of buying racks of servers, they can do the same thing with a single rack of graphics cards. Not useful for consumers.
Posted on Reply
#83
FordGT90Concept
"I go fast!1!11!1!"
Much, much worse than half. Traditional techniques look a lot better than RTRT forced to run at 15 fps.
Posted on Reply
#85
Octopuss
RejZoR: All this tech fluff is all nice and fancy, but what really turns me on in new card releases are new features available NOW and in ANY game. Like, for example, utilizing all these fancy Tensor cores to make next-level post-process AA that's light on the GPU but smooths edges better than the crappy FXAA they give us in NV CP. Like, freaking at least give us the option to switch between FXAA, MLAA and SMAA, ffs. Just give us something new we can use now, not something we might hypothetically see being used in games in 5 years' time...

It's why Pascal was such a boring release for me. Sure, it was fast, but other than that, it brought basically nothing new and exciting to the end user. New post-process AA modes would be a nice start, just like that post-process image thing they released some time ago to change game colors, sharpness, tone and so on. That's cool, but you need to use the stupid NVIDIA Experience to have it, which sucks. So that's another problem, thanks to the archaic original NV CP. Anyway, I'm rambling again: give us more features for today so we can more easily wait for the features of tomorrow... I usually buy new graphics cards because of these features, totally out of curiosity, even when I don't really have to buy a new one; not for what I might use them for in 5 years' time, maybe.
Perhaps I am missing something, but isn't this about development cards and not gaming cards?
Posted on Reply
#86
cucker tarlson
Octopuss: Perhaps I am missing something, but isn't this about development cards and not gaming cards?
He paid so much for his card he's got a right to rant about anything, whether it makes sense or not.
Posted on Reply
#87
Anymal
cucker tarlson: He paid so much for his card he's got a right to rant about anything, whether it makes sense or not.
He is always waiting for an idiot to call him an AMD fanboy, but he IS that idiot AMD fanboy who paid 790 EUR for a top gaming GPU from the not-so-beloved NVIDIA.
Posted on Reply
#88
efikkan
FordGT90Concept: Or not, because graphics cards need to be about eight times more powerful to realistically do real-time raytracing. By the time hardware is eight times more powerful, the bar will have moved in terms of scene complexity, so they'll need hardware eight times more powerful than that.

Lighting isn't something game developers want to budget a lot of compute time for.

Real time ray tracing has been a carrot dangling from a stick for decades now and there's no sign of that changing.
As I said earlier too, I think it will take multiple iterations before this becomes useful in gaming. But even then, let's say eight times as fast as you suggest, it will still have to "fake it" by doing low-sample raytracing and either blurring or "denoising" the result. Full-scene raytracing per pixel would require something about a thousand times faster than this. Studying the videos from Nvidia, it becomes apparent to me that they are pretty cleverly crafted to look impressive; once you move to typical game scenes with more objects, high-contrast grainy textures, and large scenes with distant lighting, the limitations will become much more apparent.

Using low-sample raytracing might be enough for many situations, especially if you don't need water or glass reflections, and just having more realistic soft shadows and color reflections will do a lot for realism (or cool effects). But that "denoising" thing is a gimmick: it will certainly create a lot of artifacts and struggle in various situations, especially with fast animations and challenging textures.
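
To put a rough number on the sampling argument: Monte Carlo noise only falls off as the square root of the sample count, so going from ~1000 samples per pixel down to the handful a game can afford leaves roughly 16-32x more noise for the denoiser to hide. A toy CUDA sketch (purely illustrative, not from any real renderer):

```cuda
// Toy model: average N noisy radiance samples per pixel.
// Error after averaging shrinks only as 1/sqrt(N).
#include <cuda_runtime.h>
#include <curand_kernel.h>

__global__ void shadePixels(float *img, int numPixels, int spp,
                            unsigned long long seed)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    curandState rng;
    curand_init(seed, i, 0, &rng);   // one RNG stream per pixel

    float sum = 0.0f;
    for (int s = 0; s < spp; ++s)
    {
        // Stand-in for one ray's radiance estimate: true value 0.5
        // plus uniform per-sample noise.
        sum += 0.5f + (curand_uniform(&rng) - 0.5f);
    }
    img[i] = sum / spp;  // 1 spp: very noisy; 1024 spp: ~32x less noise
}
```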
Posted on Reply