NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing, which together make real-time ray tracing possible for the first time.
These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.
The company also unveiled its initial Turing-based products - the NVIDIA Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000 GPUs - which will revolutionize the work of some 50 million designers and artists across multiple industries.
"Turing is NVIDIA's most important innovation in computer graphics in more than a decade," said Jensen Huang, founder and CEO of NVIDIA, speaking at the start of the annual SIGGRAPH conference. "Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."
NVIDIA's eighth-generation GPU architecture, Turing enables the world's first ray-tracing GPU and is the result of more than 10,000 engineering-years of effort. By using Turing's hybrid rendering capabilities, applications can simulate the physical world at 6x the speed of the previous Pascal generation.
To help developers take full advantage of these capabilities, NVIDIA has enhanced its RTX development platform with new AI, ray-tracing and simulation SDKs. It also announced that key graphics applications addressing millions of designers, artists and scientists are planning to take advantage of Turing features through the RTX development platform.
"This is a significant moment in the history of computer graphics," said Jon Peddie, CEO of analyst firm JPR. "NVIDIA is delivering real-time ray tracing five years before we had thought possible."
Real-Time Ray Tracing Accelerated by RT Cores
The Turing architecture is armed with dedicated ray-tracing processors called RT Cores, which accelerate the computation of how light and sound travel in 3D environments at up to 10 GigaRays a second. Turing accelerates real-time ray tracing operations by up to 25x over the previous Pascal generation, and GPU nodes can be used for final-frame rendering of film effects at more than 30x the speed of CPU nodes.
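To put the 10 GigaRays-per-second figure in perspective, a rough per-frame ray budget can be sketched. The frame rate and resolution below are our own illustrative assumptions, not NVIDIA specifications:

```python
# Back-of-the-envelope: what does 10 GigaRays/s buy per frame?
# The 60 fps target and 1080p resolution are illustrative assumptions.

gigarays_per_second = 10e9   # Turing's quoted peak ray throughput
fps = 60                     # assumed real-time frame-rate target
width, height = 1920, 1080   # assumed render resolution

rays_per_frame = gigarays_per_second / fps
pixels = width * height
rays_per_pixel = rays_per_frame / pixels

print(f"rays per frame: {rays_per_frame:,.0f}")   # ~167 million
print(f"rays per pixel: {rays_per_pixel:.1f}")    # ~80
```

Roughly 80 rays per pixel at 60 fps sounds generous, but secondary bounces, shadow rays and multiple samples per pixel consume that budget quickly, which is why the AI denoising mentioned below matters for real-time hybrid rendering.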
"Cinesite is proud to partner with Autodesk and NVIDIA to bring Arnold to the GPU, but we never expected to see results this dramatic," said Michele Sciolette, CTO of Cinesite. "This means we can iterate faster, more frequently and with higher quality settings. This will completely change how our artists work."
AI Accelerated by Powerful Tensor Cores
The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations a second.
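A quick sanity check shows what 500 trillion tensor operations a second means in practice. The matrix size and the assumption of sustained peak throughput below are ours, purely for illustration:

```python
# Illustrative arithmetic only: how fast could the quoted peak of
# 500 trillion tensor ops/s complete one large FP16 matrix multiply?
# The 4096x4096 size and sustained-peak assumption are illustrative.

tensor_ops_per_second = 500e12  # quoted peak tensor throughput
n = 4096                        # assumed square matrix dimension

ops = 2 * n**3                  # multiply-add count for an n x n GEMM
seconds = ops / tensor_ops_per_second
print(f"theoretical time for a {n}x{n} FP16 GEMM: {seconds * 1e3:.2f} ms")
```

At that theoretical rate, a 4096x4096 half-precision matrix multiply would take well under a millisecond, which is the scale of headroom behind the AI-enhanced features described next.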
This level of performance powers AI-enhanced features for creating applications with powerful new capabilities. These include DLAA (deep learning anti-aliasing), a breakthrough in high-quality motion image generation, as well as denoising, resolution scaling and video re-timing.
These features are part of the NVIDIA NGX software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging and video processing into applications with pre-trained networks.
Faster Simulation and Rasterization with New Turing Streaming Multiprocessor
Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.
Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second.
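The benefit of issuing integer and floating-point work concurrently can be sketched with a simple model. The 36:100 instruction mix below is an illustrative assumption about typical shader code, not a measured figure from this announcement:

```python
# Sketch of why a separate integer pipe helps: if shader code issues
# roughly 36 integer instructions (addressing, indexing) per 100 FP
# instructions, running the two streams concurrently instead of
# serially shortens the critical path.
# The 36:100 mix is an illustrative assumption.

fp_instructions = 100
int_instructions = 36

serial_cost = fp_instructions + int_instructions          # one shared pipe
concurrent_cost = max(fp_instructions, int_instructions)  # two pipes
speedup = serial_cost / concurrent_cost

print(f"idealized speedup from concurrent issue: {speedup:.2f}x")
```

Under this idealized model the dual datapath buys about a 1.36x shortening of the instruction stream; real gains depend on the actual integer fraction of the workload.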
Developers can take advantage of NVIDIA's CUDA 10, FleX and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments and special effects.
Availability
Quadro GPUs based on Turing will be initially available in the fourth quarter.
88 Comments on NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000
Considering the move from 14nm to GloFo's 12nm allowed AMD to gain around 300MHz, I'm expecting TSMC's 12nm to give NVIDIA around a 500MHz boost. I believe we will definitely see higher clocks with the gaming series.
As for the other part, lol, you could at least bother clicking my specs if anything. Surprise, surprise, I own the Pascal. Highest tier one in fact, if we exclude the Titan models. Last time I checked, GTX 1080Ti is a Pascal based card... Funniest smearing attempt I've seen in a while. Now I'm just waiting for some idiot to lash out and call me an AMD fanboy somehow because I didn't absolutely piss on Vega at every possible occasion...
lol, the 1080 Ti is only 471mm2, and yet the 2070/2080 will carry fewer CUDA cores and a 256-bit bus with 14 Gbps memory.
See, the second part is a prime example of the made-up bullshit that is circulating around. Where did you get the idea I get anything cheaper, lol? I bought it for 790€ just like anyone else at that time. Just like I do with ALL the hardware I own. I'm not paid or sponsored by anyone. I wish I were, but I spend my own money without any special treatment. As for badmouthing NVIDIA, they only recently got their shit together with their drivers not being total stinkers. After weeks and months of bitching over broken Adaptive and Fast V-Sync features, they finally fixed that nonsense. Thank F god. Still not sure if they fixed anything regarding DSR and 144Hz on the output when running 4K on a 1080p 144Hz screen. It was so annoying I stopped using DSR entirely, because I can't play at 60Hz and the damn thing insisted on it unless the game explicitly enforced 144Hz (which basically none did, except Deus Ex Human Revolution and Mankind Divided). And when you pay 800€ for a graphics card, you're justifiably angry when shit doesn't work as it should. If you aren't, then you're a very uncritical consumer, the kind companies like the most, but also the kind that does the most damage to quality, because companies become lazy and stop giving a F when you just gobble up anything they serve. Well, I'm loud and obnoxious because I never want to be like that.
If the 2080 is really 500mm2 due to Tensor and RT cores, we are not getting TU102 on a 2080 Ti, Titan RTX only. How did you get that 500mm2 number anyway, ppn?
His calculations are pure guessing though; we have no idea how big the 2080 is. It may be 500mm2, it may be 400mm2. You can't just cut the RTX 8000 down by a third and get the size of the GeForce 2080 :laugh: The CUDA cores themselves are revamped and more efficient, so it's a moot point: the 980 with 2048 cores and a 256-bit bus strolled over the 780 Ti with 40% more cores and a 384-bit bus.