Tuesday, August 14th 2018
NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000
NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.
These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.
The company also unveiled its initial Turing-based products - the NVIDIA Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000 GPUs - which will revolutionize the work of some 50 million designers and artists across multiple industries.
"Turing is NVIDIA's most important innovation in computer graphics in more than a decade," said Jensen Huang, founder and CEO of NVIDIA, speaking at the start of the annual SIGGRAPH conference. "Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."
NVIDIA's eighth-generation GPU architecture, Turing enables the world's first ray-tracing GPU and is the result of more than 10,000 engineering-years of effort. By using Turing's hybrid rendering capabilities, applications can simulate the physical world at 6x the speed of the previous Pascal generation.
To help developers take full advantage of these capabilities, NVIDIA has enhanced its RTX development platform with new AI, ray-tracing and simulation SDKs. It also announced that key graphics applications addressing millions of designers, artists and scientists are planning to take advantage of Turing features through the RTX development platform.
"This is a significant moment in the history of computer graphics," said Jon Peddie, CEO of analyst firm JPR. "NVIDIA is delivering real-time ray tracing five years before we had thought possible."
Real-Time Ray Tracing Accelerated by RT Cores
The Turing architecture is armed with dedicated ray-tracing processors called RT Cores, which accelerate the computation of how light and sound travel in 3D environments at up to 10 GigaRays a second. Turing accelerates real-time ray tracing operations by up to 25x that of the previous Pascal generation, and GPU nodes can be used for final-frame rendering for film effects at more than 30x the speed of CPU nodes.
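To make concrete what is being accelerated, the hedged CUDA sketch below brute-forces the ray-triangle intersection test (Moller-Trumbore) that sits at the heart of ray tracing. The kernel, buffer layout and names are assumptions for illustration only; RT Cores replace the inner loop with fixed-function BVH traversal and intersection in hardware.
```cuda
// Illustrative only: a brute-force ray/triangle intersection kernel showing the
// arithmetic that RT Cores move into dedicated hardware (BVH traversal plus
// intersection). Buffer layout and names are assumptions for this sketch.
#include <cuda_runtime.h>
#include <math.h>

struct Ray { float3 o, d; };          // origin, normalized direction
struct Tri { float3 v0, v1, v2; };    // triangle vertices

__device__ float3 f3sub(float3 a, float3 b) { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
__device__ float3 f3cross(float3 a, float3 b) {
    return make_float3(a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x);
}
__device__ float f3dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore: returns hit distance t, or -1 if the ray misses the triangle.
__device__ float intersect(const Ray& r, const Tri& tri) {
    const float EPS = 1e-7f;
    float3 e1 = f3sub(tri.v1, tri.v0), e2 = f3sub(tri.v2, tri.v0);
    float3 p = f3cross(r.d, e2);
    float det = f3dot(e1, p);
    if (fabsf(det) < EPS) return -1.0f;          // ray parallel to triangle plane
    float inv = 1.0f / det;
    float3 s = f3sub(r.o, tri.v0);
    float u = f3dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    float3 q = f3cross(s, e1);
    float v = f3dot(r.d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    float t = f3dot(e2, q) * inv;
    return (t > EPS) ? t : -1.0f;
}

// One thread per ray: record the nearest hit over all triangles.
__global__ void trace(const Ray* rays, int nRays,
                      const Tri* tris, int nTris, float* hitT) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nRays) return;
    float best = 1e30f;
    for (int j = 0; j < nTris; ++j) {            // RT Cores replace this loop
        float t = intersect(rays[i], tris[j]);   // with hardware BVH traversal
        if (t > 0.0f && t < best) best = t;
    }
    hitT[i] = (best < 1e30f) ? best : -1.0f;
}
```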
"Cinesite is proud to partner with Autodesk and NVIDIA to bring Arnold to the GPU, but we never expected to see results this dramatic," said Michele Sciolette, CTO of Cinesite. "This means we can iterate faster, more frequently and with higher quality settings. This will completely change how our artists work."
AI Accelerated by Powerful Tensor Cores
The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations a second.
This level of performance powers AI-enhanced features for creating applications with powerful new capabilities. These include DLAA - deep learning anti-aliasing, which is a breakthrough in high-quality motion image generation - denoising, resolution scaling and video re-timing.
These features are part of the NVIDIA NGX software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging and video processing into applications with pre-trained networks.
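As a concrete, if simplified, picture of what a tensor operation means here: CUDA's warp-level WMMA API (part of CUDA since version 9) lets ordinary kernels drive the Tensor Cores on small mixed-precision matrix tiles. The sketch below multiplies a single 16x16 tile; the pointer names and fixed tile size are assumptions made for brevity, and this is not the NGX SDK itself.
```cuda
// Minimal sketch of a warp-level Tensor Core matrix multiply using the CUDA
// WMMA API. One warp computes a single 16x16 tile of D = A*B + C in mixed
// precision (half inputs, float accumulation). Pointer names and the
// 16x16x16 problem size are assumptions for this illustration.
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void tile_gemm(const half* A, const half* B, float* D) {
    // Fragments live in registers and are owned collectively by the warp.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);          // start from C = 0
    wmma::load_matrix_sync(a, A, 16);        // leading dimension 16
    wmma::load_matrix_sync(b, B, 16);
    wmma::mma_sync(acc, a, b, acc);          // one Tensor Core tile operation
    wmma::store_matrix_sync(D, acc, 16, wmma::mem_row_major);
}
// Launch with one full warp, e.g. tile_gemm<<<1, 32>>>(dA, dB, dD);
```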
Faster Simulation and Rasterization with New Turing Streaming Multiprocessor
Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.
Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second.
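To illustrate why the concurrent datapaths matter, the hedged sketch below is an ordinary CUDA kernel of the kind found in image-processing and shading workloads: the index and bounds arithmetic is integer work that a Turing SM can issue alongside the floating-point math, with no change to the source code. The kernel and its parameters are assumptions for illustration.
```cuda
// Illustrative kernel: integer work (index/address math, bounds tests) is
// interleaved with floating-point math. On Turing the SM can issue the two
// on separate datapaths concurrently; the source does not change, only how
// the hardware schedules it.
__global__ void shade(const float* __restrict__ in, float* __restrict__ out,
                      int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // integer datapath
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // integer datapath
    if (x >= width || y >= height) return;           // integer compare
    int idx = y * width + x;                         // integer multiply-add
    float v = in[idx];
    out[idx] = v * 0.5f + sqrtf(v) * 0.25f;          // floating-point datapath
}
```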
Developers can take advantage of NVIDIA's CUDA 10, FleX and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments and special effects.
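As a simplified example of the per-particle work such simulations parallelize, the sketch below performs a naive Euler integration step in plain CUDA. It is not the FleX or PhysX API; the kernel and buffer names are assumed for illustration.
```cuda
// Plain CUDA sketch of a naive particle update (gravity, damping, Euler
// integration, ground bounce). Not the FleX or PhysX API; it only shows the
// kind of per-particle work such simulations run in parallel on the GPU.
#include <cuda_runtime.h>

__global__ void step_particles(float3* pos, float3* vel, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 v = vel[i];
    v.y -= 9.81f * dt;                               // gravity
    v.x *= 0.999f; v.y *= 0.999f; v.z *= 0.999f;     // simple damping

    float3 p = pos[i];
    p.x += v.x * dt; p.y += v.y * dt; p.z += v.z * dt;

    if (p.y < 0.0f) { p.y = 0.0f; v.y = -0.5f * v.y; }  // bounce off ground plane

    pos[i] = p; vel[i] = v;
}
```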
Availability
Quadro GPUs based on Turing will be initially available in the fourth quarter.
88 Comments on NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000
So the 256-bit RTX card is very likely 503 mm². Unbelievable, but the RTX 2080 will shrink to 256 mm² on 7 nm, perhaps as a "2085", and remain 256-bit. NVIDIA will make this transition seamlessly. Yes, I saw this typo on their website too.
Plus, they compared the 102 to GP102 and not to GP100 or GV100.
Also, they slipped and gave away the real transistor count of GP102: originally 12B, now it's 11.8B. xD
The same thing applies to the small RTX card whose bare PCB we have also seen: not 20x20 but 22x22.
I thought you were paying enough attention to know this problem applies to most new achievements, including new API versions, more CPU cores, etc. The sooner it gets out there, the sooner software will utilize it, but that doesn't mean you have to rush out and buy it. I believe the "launch hardware" of each of the last three Direct3D versions has been "outdated" before we saw decent games using them, especially the last version, for which we are still waiting for good games.
Ray tracing, at least in some form, is the future of computer graphics. And the sooner it gets out there, the sooner game developers and artists start using it, and the sooner AMD will also add support. I honestly think it will take at least two more generations before it becomes powerful enough to be useful in a good selection of games. Like all of those investing in those "future-proof" GCN-based cards, I bet that investment is going to pay off any day now!
Anyway, care to explain where you are pulling 3072 CUDA cores from?
www.nvidia.com/en-us/design-visualization/quadro-desktop-gpus/
QUADRO RTX 5000 QUICK SPECS
CUDA Parallel-Processing Cores: 3,702
NVIDIA Tensor Cores: 384
GPU Memory: 16 GB GDDR6
RT Cores: Yes
Graphics Bus: PCI Express 3.0 x 16
NVLink: Yes
Display Connectors: DP 1.4 (4), VirtualLink (1)
Form Factor: 4.4" (H) x 10.5" (L) Dual Slot
Max core counts are:
RT104 - 3072
RT102 - 4608
RT100 - 4608
If 3072 is the full TU104, then 2944 on the 2080 (supposedly) is the biggest one we get on a TU104 GeForce card, and the 2080 Ti will be TU102. People said that if the TU104 on the 2080 is cut down, then the 2080 Ti would be a full TU104.
Each SM has warp schedulers and registers; you want to keep it small in order for it to be efficient. You want to simplify scheduling for better performance and efficiency, not complicate it. And they are arranged in pairs: a 128-CUDA-core block on Pascal cards consists of two 64-CUDA-core units.
314 squared is 98,596 and 471 squared is 221,841; that is a ratio of about 2.25, a 125% increase.