Tuesday, August 14th 2018

NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.

These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.
The company also unveiled its initial Turing-based products - the NVIDIA Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000 GPUs - which will revolutionize the work of some 50 million designers and artists across multiple industries.

"Turing is NVIDIA's most important innovation in computer graphics in more than a decade," said Jensen Huang, founder and CEO of NVIDIA, speaking at the start of the annual SIGGRAPH conference. "Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."

NVIDIA's eighth-generation GPU architecture, Turing enables the world's first ray-tracing GPU and is the result of more than 10,000 engineering-years of effort. By using Turing's hybrid rendering capabilities, applications can simulate the physical world at 6x the speed of the previous Pascal generation.

To help developers take full advantage of these capabilities, NVIDIA has enhanced its RTX development platform with new AI, ray-tracing and simulation SDKs. It also announced that key graphics applications addressing millions of designers, artists and scientists are planning to take advantage of Turing features through the RTX development platform.

"This is a significant moment in the history of computer graphics," said Jon Peddie, CEO of analyst firm JPR. "NVIDIA is delivering real-time ray tracing five years before we had thought possible."

Real-Time Ray Tracing Accelerated by RT Cores
The Turing architecture is armed with dedicated ray-tracing processors called RT Cores, which accelerate the computation of how light and sound travel in 3D environments at up to 10 GigaRays a second. Turing accelerates real-time ray tracing operations at up to 25x the rate of the previous Pascal generation, and GPU nodes can be used for final-frame rendering for film effects at more than 30x the speed of CPU nodes.
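At its core, the ray tracing RT Cores accelerate reduces to billions of ray-primitive intersection tests per second. Purely as an illustration of what one such test computes (not NVIDIA's implementation), a minimal ray-sphere intersection in Python:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance to the nearest ray-sphere intersection, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2 for t >= 0,
    assuming a normalized direction and an origin outside the sphere.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c        # a == 1 for a unit direction
    if disc < 0:
        return None               # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None

# A ray fired down -z hits a unit sphere centered 5 units away at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # → 4.0
```

An RT Core performs this kind of test (against triangles, traversing a bounding-volume hierarchy) in fixed-function hardware; at 10 GigaRays a second, that is billions of such evaluations per frame.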

"Cinesite is proud to partner with Autodesk and NVIDIA to bring Arnold to the GPU, but we never expected to see results this dramatic," said Michele Sciolette, CTO of Cinesite. "This means we can iterate faster, more frequently and with higher quality settings. This will completely change how our artists work."

AI Accelerated by Powerful Tensor Cores
The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations a second.
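The unit being counted here is a small matrix fused multiply-add: each Tensor Core computes D = A×B + C on small matrix tiles (4×4 in Volta's case). A plain-Python sketch of that operation, for illustration only:

```python
def tensor_fma(A, B, C):
    """D = A @ B + C for small square matrices (lists of rows) — the fused
    multiply-add that a single Tensor Core performs on one tile."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j]
             for j in range(n)] for i in range(n)]

identity = [[1, 0], [0, 1]]
print(tensor_fma(identity, [[2, 3], [4, 5]], [[1, 1], [1, 1]]))  # → [[3, 4], [5, 6]]
```

Deep learning training and inference are dominated by exactly these matrix multiply-accumulate operations, which is why dedicating hardware to them yields the quoted hundreds of trillions of tensor operations per second.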

This level of performance powers AI-enhanced features for creating applications with powerful new capabilities. These include DLAA - deep learning anti-aliasing, which is a breakthrough in high-quality motion image generation - denoising, resolution scaling and video re-timing.

These features are part of the NVIDIA NGX software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging and video processing into applications with pre-trained networks.

Faster Simulation and Rasterization with New Turing Streaming Multiprocessor
Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.

Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second.
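The 16-TFLOPS figure is consistent with each CUDA core retiring one fused multiply-add (two floating-point operations) per clock. A back-of-envelope check in Python, where the ~1.74 GHz boost clock is an assumption, since the release does not state clocks:

```python
cores = 4608           # CUDA cores in the top Turing part
ops_per_core = 2       # one fused multiply-add per clock = 2 FLOPs
boost_ghz = 1.74       # assumed boost clock; not stated in the release

tflops = cores * ops_per_core * boost_ghz * 1e9 / 1e12
print(round(tflops, 1))  # → 16.0
```

The parallel integer figure follows the same arithmetic, since the new SM gives the integer datapath its own execution units running alongside the floating-point ones.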

Developers can take advantage of NVIDIA's CUDA 10, FleX and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments and special effects.
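The FleX and PhysX APIs themselves are not shown in the announcement; purely as a sketch of the kind of particle dynamics these SDKs accelerate on the GPU, here is a minimal semi-implicit Euler integrator (all names and constants are illustrative):

```python
GRAVITY = -9.81  # m/s^2 along the y axis

def step(particles, dt):
    """Advance (height, velocity) pairs one timestep with semi-implicit Euler."""
    out = []
    for y, v in particles:
        v += GRAVITY * dt   # integrate velocity first...
        y += v * dt         # ...then position from the *new* velocity
        if y < 0.0:         # crude ground plane: reflect and damp the bounce
            y, v = -y, -v * 0.5
        out.append((y, v))
    return out

particles = [(10.0, 0.0)]          # one particle dropped from 10 m
for _ in range(10):                # simulate 1 s in 0.1 s steps
    particles = step(particles, 0.1)
print(round(particles[0][0], 2))   # height after 1 s of free fall → 4.6
```

A production solver like FleX runs this integrate-then-resolve-constraints loop over millions of particles per frame on the GPU; the structure is the same, only massively parallel.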

Availability
Quadro GPUs based on Turing will be initially available in the fourth quarter.

88 Comments on NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

#26
cucker tarlson
Xzibit: Anyone else noticed the RTX 8000 & RTX 6000 have fewer CUDA & Tensor cores than the GV100?
But higher fp32. They haven't specified clocks but they must be higher. They had to shave off some cuda and tensor for the inclusion of rt cores, otherwise that die would be friggin 1000mm2.
#27
Fluffmeister
Still pretty big at 754mm2, but yes, this thing has even big Volta Donald J Trumped.
#28
Xzibit
Refined Volta with RT cores. Likely to see the same for the GTX cards, Titan V.
#29
kings
cucker tarlson: But higher fp32. They haven't specified clocks but they must be higher. They had to shave off some cuda and tensor for the inclusion of rt cores, otherwise that die would be friggin 1000mm2.
Clocks are one part of it, but the Stream Processors seem more powerful too. At least, that's what Nvidia is saying.
"Faster Simulation and Rasterization with New Turing Streaming Multiprocessor
Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.

Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second."
#30
bug
RejZoR: All this tech fluff is all nice and fancy, but what new releases of cards really turn me on are the new features available NOW and in ANY game. Like, for example, utilizing all these fancy Tensor cores to make next level post-process AA that's light on GPU but smooths edges better than crappy FXAA they give us in NV CP. Like, freaking at least give us option to switch between FXAA, MLAA and SMAA ffs. Just give us something new we can use now, not something we might hypothetically see being used in games in 5 years time...
Because you just got into hardware and you are totally unaware that any advancement is a chicken and egg problem at the beginning.
RejZoR: It's why Pascal was such boring release for me. Sure, it was fast, but other than that, it brought basically nothing new and exciting to the end user. New post process AA modes would be a nice start, just like it is that post process image thing they released some time ago to change game colors, sharpness, tone and so on. That's cool, but you need to use stupid NVIDIA Experience to have it which sucks. So, that's another problem thanks to archaic original NV CP. Anyway, I'm rambling again, give us more features for today so we can easier wait for the features of tomorrow... I usually buy new graphic cards because of these features even when I don't really have to buy new one totally out of curiosity, not for what I might use it in 5 years time, maybe.
For completeness, after whining Turing has hardware that's not put to good use yet, he goes on to slam Pascal because it doesn't have such hardware. Nicely done.
#31
Assimilator
Those are pretty coolers. Nice and understated, no LEDs, just classy. I wish OEMs would take the hint instead of slapping OMGWTFBBQ massive coolers with bling on everything.
#32
T4C Fantasy
CPU & GPU DB Maintainer
The RTX 2080 will be either a 2944-core variant of RTX 5000 or a full 5000 with 3072 cores. Yes, it will have tensor cores, 352 to 384.
#33
cucker tarlson
T4C Fantasy: The RTX 2080 will be either a 2944-core variant of RTX 5000 or a full 5000 with 3072 cores. Yes, it will have tensor cores, 352 to 384.
Tensor cores are a given since both Volta and Turing have them; I'm wondering whether or not it will have RT cores.
#34
T4C Fantasy
CPU & GPU DB Maintainer
cucker tarlson: Tensor cores are a given since both Volta and Turing have them; I'm wondering whether or not it will have RT cores.
They will have RT cores; the question is how many, not if.
#35
bug
T4C Fantasy: They will have RT cores; the question is how many, not if.
As a general rule, no first-gen hardware has enough HP to put the tech they're introducing to good use. See DX12, tessellation and even PS2.0 and PS1.4 in the ancient times.
#36
jabbadap
cucker tarlson: But higher fp32. They haven't specified clocks but they must be higher. They had to shave off some cuda and tensor for the inclusion of rt cores, otherwise that die would be friggin 1000mm2.
And probably nerfed FP64 compute too, as they don't mention that at all. I just have a feeling that Nvidia needs to do a die shrink before we see ray tracing for gamers. Maybe they release a Titan RTX from that behemoth of a chip and some smaller GeForce RTX xx80 parts from the RTX 5000 (I presume it has a smaller die due to the 256-bit bus). The rest of the GeForces are most probably GTX family and might even be Volta based.
#37
T4C Fantasy
CPU & GPU DB Maintainer
bug: As a general rule, no first-gen hardware has enough HP to put the tech they're introducing to good use. See DX12, tessellation and even PS2.0 and PS1.4 in the ancient times.
True, AMD is also working on ray tracing tech; they announced it with the Vega 20 chip.
#38
RH92
cucker tarlson: But higher fp32. They haven't specified clocks but they must be higher.
Around 1.73GHz according to AnandTech, which indeed is higher than the 1.45GHz of GV100.

Considering the move from 14nm to GloFo's 12nm allowed AMD to gain around 300MHz, I'm expecting TSMC's 12nm to give NVIDIA around a 500MHz boost. I believe we will definitely see higher clocks with the gaming series.
#39
Midland Dog
RejZoR: All this tech fluff is all nice and fancy, but what new releases of cards really turn me on are the new features available NOW and in ANY game. Like, for example, utilizing all these fancy Tensor cores to make next level post-process AA that's light on GPU but smooths edges better than crappy FXAA they give us in NV CP. Like, freaking at least give us option to switch between FXAA, MLAA and SMAA ffs. Just give us something new we can use now, not something we might hypothetically see being used in games in 5 years time...

It's why Pascal was such boring release for me. Sure, it was fast, but other than that, it brought basically nothing new and exciting to the end user. New post process AA modes would be a nice start, just like it is that post process image thing they released some time ago to change game colors, sharpness, tone and so on. That's cool, but you need to use stupid NVIDIA Experience to have it which sucks. So, that's another problem thanks to archaic original NV CP. Anyway, I'm rambling again, give us more features for today so we can easier wait for the features of tomorrow... I usually buy new graphic cards because of these features even when I don't really have to buy new one totally out of curiosity, not for what I might use it in 5 years time, maybe.
SMAA my friend, SMAA
RH92: Around 1.73GHz according to AnandTech, which indeed is higher than the 1.45GHz of GV100.

Considering the move from 14nm to GloFo's 12nm allowed AMD to gain around 300MHz, I'm expecting TSMC's 12nm to give NVIDIA around a 500MHz boost. I believe we will definitely see higher clocks with the gaming series.
2200MHz is my guess. Golden samples of TSMC's 16nm (I am fortunate enough to be in that club with a 2177MHz GTX 1060) can do around there, so I want to say the average OC will be 2152MHz (taken from the Pascal boost table; they really do go up in weird increments: 2129, 2136, 2152, 2164, 2177, 2190) and golden samples could be 2250MHz.
#40
RH92
Midland Dog: 2200MHz is my guess. Golden samples of TSMC's 16nm (I am fortunate enough to be in that club with a 2177MHz GTX 1060) can do around there, so I want to say the average OC will be 2152MHz (taken from the Pascal boost table; they really do go up in weird increments: 2129, 2136, 2152, 2164, 2177, 2190) and golden samples could be 2250MHz.
Nah, there are already 1080 golden samples hitting 2.2GHz under liquid, so if I had to guess, considering what I mentioned previously, my guess would be 2.3GHz as an average, with golden samples going as far as 2.45GHz if not 2.5GHz.
#41
jabbadap
RH92: Around 1.73GHz according to AnandTech, which indeed is higher than the 1.45GHz of GV100.

Considering the move from 14nm to GloFo's 12nm allowed AMD to gain around 300MHz, I'm expecting TSMC's 12nm to give NVIDIA around a 500MHz boost. I believe we will definitely see higher clocks with the gaming series.
The problem is the die size; it will take too much juice to achieve that kind of boost. If there's some sub-600mm², closer to 500mm², chip in the line-up, then sure, 500MHz is a reasonable expectation. But with a 754mm² die, I think the power required for that kind of clocks is too much to cool in any sane way.
#42
Midland Dog
RH92: Nah, there are already 1080 golden samples hitting 2.2GHz under liquid, so if I had to guess, considering what I mentioned previously, my guess would be 2.3GHz as an average, with golden samples going as far as 2.45GHz if not 2.5GHz.
Remember die size has gone up significantly on GT102 though; Titan V didn't hit the same clocks as Pascal and it was 12nm. GT102 is only 60mm² off of GV100.
#43
RejZoR
bug: Because you just got into hardware and you are totally unaware that any advancement is a chicken and egg problem at the beginning.

For completeness, after whining Turing has hardware that's not put to good use yet, he goes on to slam Pascal because it doesn't have such hardware. Nicely done.
I literally said they should focus more on things we can use NOW instead of tons of features we might one day use. Maybe. I didn't say they should drop new tech for the future entirely, I just said they should focus on new exciting stuff we can use now a bit more.

As for the other part, lol, you could at least bother clicking my specs if anything. Surprise, surprise, I own the Pascal. Highest tier one in fact, if we exclude the Titan models. Last time I checked, GTX 1080Ti is a Pascal based card... Funniest smearing attempt I've seen in a while. Now I'm just waiting for some idiot to lash out and call me an AMD fanboy somehow because I didn't absolutely piss on Vega at every possible occasion...
#44
bug
RejZoR: I literally said they should focus more on things we can use NOW instead of tons of features we might one day use. Maybe. I didn't say they should drop new tech for the future entirely, I just said they should focus on new exciting stuff we can use now a bit more.
Yes, I also can't figure out how they keep churning out hardware without checking with you first what they should focus on next.
RejZoR: As for the other part, lol, you could at least bother clicking my specs if anything. Surprise, surprise, I own the Pascal. Highest tier one in fact, if we exclude the Titan models. Last time I checked, GTX 1080Ti is a Pascal based card... Funniest smearing attempt I've seen in a while. Now I'm just waiting for some idiot to lash out and call me an AMD fanboy somehow because I didn't absolutely piss on Vega at every possible occasion...
Yes, we all know you own high-end Nvidia hardware because you can get it for cheap. We also know that never stopped you badmouthing them at all. But in this instance you were actually incoherent, that's all.
#45
RH92
jabbadap: The problem is the die size; it will take too much juice to achieve that kind of boost. If there's some sub-600mm², closer to 500mm², chip in the line-up, then sure, 500MHz is a reasonable expectation. But with a 754mm² die, I think the power required for that kind of clocks is too much to cool in any sane way.
Midland Dog: Remember die size has gone up significantly on GT102 though; Titan V didn't hit the same clocks as Pascal and it was 12nm. GT102 is only 60mm² off of GV100.
We are primarily talking about the RTX 2080 here (or whatever they name it) and I believe it's safe to assume that GT/RT 104 is going to be nowhere near 754mm2, hence why those clocks are achievable. This being said, yes, obviously the Turing Titan and 2080Ti will clock lower, if that's what you mean. Just as a reminder, we don't know yet if that 754mm2 die is a GT/RT 102; more likely than not it's a GT/RT 100.
#46
ppn
RT104 is Quadro RTX 5000 and soon to be RTX 2070/80: ~500mm², the 754mm² die cut by 1/3.

lol, 1080Ti is only 471mm² and yet 2070/80 will carry fewer CUDA cores and a 256-bit bus with 14Gbps memory.
RH92: We are primarily talking about the RTX 2080 here (or whatever they name it) and I believe it's safe to assume that GT/RT 104 is going to be nowhere near 754mm2, hence why those clocks are achievable. This being said, yes, obviously the Turing Titan and 2080Ti will clock lower, if that's what you mean.
#47
RejZoR
bug: Yes, I also can't figure out how they keep churning out hardware without checking with you first what they should focus on next.


Yes, we all know you own high-end Nvidia hardware because you can get it for cheap. We also know that never stopped you badmouthing them at all. But in this instance you were actually incoherent, that's all.
Yeah, gotta love idiots who salivate over features on new cards that they won't be able to use anyway until they buy a new high-end card in 2 years' time with the same feature set, which will actually be used by then. But what do I know after observing the same thing for basically 2 decades, year after year...

See, the second part is a prime example of the made-up bullshit that circulates around. Where did you get the idea I get anything cheaper, lol? I bought it for 790€ just like anyone else at that time. Just like I do with ALL the hardware I own. I'm not paid or sponsored by anyone. I wish I were, but I spend my own money without any special treatment. As for badmouthing NVIDIA, they only recently got their shit together with their drivers not being total stinkers. After weeks and months of bitching over broken Adaptive and Fast V-Sync features, they finally fixed that nonsense. Thank F god. Still not sure if they fixed anything regarding DSR and 144Hz output when running 4K on a 1080p 144Hz screen. It was so annoying I stopped using DSR entirely, because I can't play at 60Hz and the damn thing insisted on it unless the game explicitly enforced 144Hz (which basically none did, except Deus Ex Human Revolution and Mankind Divided). And when you pay 800€ for a graphics card, you're justifiably angry when things don't work as they should. If you aren't, then you're a very uncritical consumer, the kind companies like the most, but also the kind that does the most damage to quality, because companies become lazy and stop giving a F when you just gobble up anything they serve. Well, I'm loud and obnoxious because I never want to be like that.
#48
cucker tarlson
Is there one post on TPU where you don't mention how expensive your GPU was, rejzor? :laugh: The last one has it mentioned twice, oh God, he's mad...

If the 2080 is really 500mm2 due to tensor and RT cores, we are not getting GT102 on the 2080Ti, Titan RTX only. How did you get that 500mm2 number anyway, ppn?
#49
RH92
ppn: RT104 is Quadro RTX 5000 and soon to be RTX 2070/80: ~500mm², the 754mm² die cut by 1/3.
lol, 1080Ti is only 471mm² and yet 2070/80 will carry fewer CUDA cores and a 256-bit bus with 14Gbps memory.
For your information, RTX 5000 has 3702 cuda cores, which is 118 more cuda cores than the 1080Ti!
#50
cucker tarlson
RH92: For your information, RTX 5000 has 3702 cuda cores, which is 118 more cuda cores than the 1080Ti!
No, it's 3072.
His calculations are pure guessing though; we have no idea how big the 2080 is, it may be 500mm2, it may be 400mm2. You can't just cut RTX 8000 by 1/3 and have the size of the GeForce 2080 :laugh: The CUDA cores themselves are revamped and more efficient, so it's a moot point; the 980 with 2048 cores and a 256-bit bus strolled over the 780Ti with 40% more cores and a 384-bit bus.