Monday, August 20th 2018

NVIDIA's Move From GTX to RTX Speaks to Belief in Revolutionary Change in Graphics
NVIDIA, at its Gamescom-adjacent presentation, finally took the lid off its long-awaited refresh of the GeForce lineup - and there's more than a thousand-point jump in model numbers (and a consonant change) to it. At the Palladium venue in Cologne, Germany (which was chock-full of press and NVIDIA-invited attendees), Jensen Huang took the stage to present a video on the advances in graphics simulation that brought about milestones such as Tron, the first Star Wars, the original Tomb Raider, multi-texturing on the RIVA TNT, and special effects in Hollywood... every incarnation of the pixels and triangles we've grown accustomed to.
We already know the juicy tidbits - the three models being released, when, and their pricing (with a hike to boot on the 2070 graphics card, which sees its price increased by $100 compared to last generation's 1070). We know the cooling solution official NVIDIA cards will sport, and how the company will be pairing efforts with game developers to ensure the extra hardware it has invested time, money, and a name change into bears fruit. But what's behind this change? What brought us to this point in time? What powered the company's impressive Sol demo?

It's been a long road for NVIDIA ever since contributor Turner Whitted's work on multi-bounce recursive ray tracing, which started way back in 1978. Jensen Huang says GPU development and improvement has been moving at ten times the pace Moore's Law demanded of CPUs - 1,000 times every ten years. But ray tracing is - or was - expected to require petaflops of computing power: yet another step that would take some ten years to achieve.

NVIDIA, naturally, didn't want any of that. According to Jensen Huang, that meant the company had to achieve an improvement equivalent to 1,000 times more performance - ten years earlier. The answer to that performance conundrum is RTX: a simultaneous hardware, software, SDK, and library push, united in a single platform. RTX hybrid rendering unifies rasterization and ray tracing, with a first rasterization pass (highly parallel) and a second ray-tracing pass that acts only upon the rendered pixels, yet allows for the materialization of effects, reflections, and light sources that lie outside the scene - and are thus virtually nonexistent with pre-ray-tracing rendering techniques. Now, RT cores can work in tandem with rasterization compute to achieve reasonable rendering times for ray-traced scenes that would, according to Jensen Huang, take ten times longer to render on Pascal-based hardware.

(NVIDIA CEO Jensen Huang quipped that for gamers to get ray tracing before RT cores were added to the silicon and architecture design mix, they'd have to pay $68,000 for the DGX with four Tesla V100 graphics cards. He even offered to let them do so in 3,000 easy payments of $19.95.)

Turing has been ten years in the making, and Jensen Huang says this architecture and its RT Cores are the greatest jump in graphics computing for the company - and he likely meant the industry as well - since CUDA. The pairing of the three new or revised processing engines inside each piece of Turing silicon brings about this jump: the Turing SM, which allows for 14 TFLOPS and 14 TIPS (tera integer operations per second) of concurrent FP and INT execution; the Tensor cores, with their 110 TFLOPS of FP16, 220 TOPS of INT8, and a doubling again to 440 TOPS of INT4 performance; and the RT Core, with its 10 Giga Rays/sec (a figure Jensen Huang loves saying). For comparison, the 1080 Ti can achieve, in peak conditions, 1.21 Giga Rays per second - almost ten times lower performance.

And the overall effect on performance is nothing short of breathtaking, at least in the terms put forward by Jensen Huang: a single Turing chip replaces the four V100 GPUs found within the DGX - and with a lowered render time of just 45 ms against the V100 setup's 55 ms for a ray-traced scene. Pascal, on the other hand, would take 308 ms to render the same scene - and that in its 1080 Ti rendition, no less.
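To make the hybrid-rendering idea above concrete, here is a minimal, purely illustrative sketch - not NVIDIA's RTX pipeline, which runs through RT cores and APIs such as DXR, but a toy CPU example. A first pass resolves primary visibility per pixel (the job rasterization does in the real pipeline), and a second pass traces one secondary ray per covered pixel toward a light placed outside the camera's view - exactly the kind of contribution screen-space tricks cannot capture. The scene, names, and parameters are assumptions made up for the example.

```python
# Toy "hybrid rendering" sketch (not NVIDIA's RTX pipeline): pass 1 resolves
# primary visibility per pixel (the job rasterization does in the real
# pipeline); pass 2 traces one secondary ray per *covered* pixel toward a
# light that sits outside the rendered view.
import math

WIDTH, HEIGHT = 32, 16
SPHERE_C, SPHERE_R = (0.0, 0.0, 3.0), 1.0   # sphere in front of the camera
LIGHT_POS = (5.0, 4.0, 0.0)                 # light off to the side, never on screen

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance of a ray with a sphere, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

image = []
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Pass 1: primary visibility (stand-in for the rasterization pass).
        direction = normalize(((x / WIDTH - 0.5) * 2.0,
                               (0.5 - y / HEIGHT) * 1.0, 1.0))
        t = ray_sphere((0.0, 0.0, 0.0), direction, SPHERE_C, SPHERE_R)
        if t is None:
            row += "."          # background pixel: no second pass needed
            continue
        # Pass 2: trace a secondary (shadow) ray only for covered pixels,
        # toward a light source the camera never sees directly.
        hit = tuple(t * d for d in direction)
        to_light = normalize(tuple(LIGHT_POS[i] - hit[i] for i in range(3)))
        normal = normalize(tuple(hit[i] - SPHERE_C[i] for i in range(3)))
        lit = max(0.0, sum(normal[i] * to_light[i] for i in range(3)))
        row += "#" if lit > 0.5 else ("+" if lit > 0.0 else "-")
    image.append(row)

print("\n".join(image))
```

For scale, the figures quoted in the presentation work out to roughly 10 / 1.21 ≈ 8.3 times the ray throughput of a 1080 Ti, and 308 ms / 45 ms ≈ 6.8 times shorter render time for the demo scene versus Pascal.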
A New Standard of Performance
Ray tracing is being done all the time within a single Turing frame; it happens concurrently with the FP32 shading process - without RT cores, the green ray-tracing bar would be ten times larger. Now, it can be completed within the FP32 shading window, followed by INT shading. And there are resources enough to add some DNN (deep neural network) processing to boot: NVIDIA is looking to generate artificially designed pixels with that DNN processing. Essentially, the 110 TFLOPS powered by the Tensor cores - which in Turing deliver some 10x the equivalent 1080 Ti performance - will be used to fill in some pixels, true to life, as if they had actually been rendered. Perhaps some super-resolution applications will be found - this might well be a way of increasing pixel density by filling in additional pixels in an image.

Perhaps one of the least "sexy" tidbits out of NVIDIA's new-generation launch is one of the most telling. The change from GTX to RTX speaks of years of history being paid their respects, but left behind, unapologetically, for a full push towards ray tracing. It speaks of leaving behind years upon years of pixel-rasterization improvement in search of what was only theoretically possible not so long ago - real-time ray tracing of lighting across multiple, physically based bodies.
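Circling back to the pixel-infill idea above: NVIDIA did not disclose how DLSS works internally at the presentation, so as a rough intuition for "pixels that were never rendered", here is a minimal NumPy sketch that upsamples a low-resolution frame with plain bilinear interpolation. In the real feature, a trained network running on the Tensor cores would replace the interpolation step and infer plausible detail rather than averaging neighbours; the function name, shapes, and sample data below are illustrative assumptions.

```python
# Intuition-builder for "filling in pixels that were never rendered".
# This is NOT DLSS: DLSS uses a trained network on Tensor cores; here plain
# bilinear interpolation stands in for the inference step, purely to show
# where the extra pixels come from.
import numpy as np

def upsample_bilinear(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Upscale an (H, W) grayscale frame by `scale` using bilinear interpolation."""
    h, w = frame.shape
    out_h, out_w = h * scale, w * scale
    # Coordinates of each output pixel, mapped back into the source frame.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]      # vertical blend weights
    wx = (xs - x0)[None, :]      # horizontal blend weights
    top = frame[np.ix_(y0, x0)] * (1 - wx) + frame[np.ix_(y0, x1)] * wx
    bot = frame[np.ix_(y1, x0)] * (1 - wx) + frame[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Render (here: fabricate) a small 4x4 frame, then synthesize an 8x8 version.
low_res = np.arange(16, dtype=float).reshape(4, 4)
high_res = upsample_bilinear(low_res, scale=2)
print(high_res.round(2))
```

The point of the sketch is only where the extra pixels come from: every output pixel that has no counterpart in the rendered frame is synthesized from what was rendered, which is why NVIDIA can pitch the result as added resolution paid for with Tensor-core throughput instead of shader time.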
The move from GTX to RTX means NVIDIA is putting its full weight behind the importance of its RTX platform for product iterations and the future of graphics computing. It manifests in a re-imagined pipeline for graphics production, where costly, intricate, but ultimately faked solutions give way to steady improvements in graphics quality. And it speaks of a dream where AIs can write software (and maybe themselves), where the perfect, ground-truth image is generated via DLSS on deep-learning-powered networks away from your local computing power and sent your way, and where we see true cloud-assisted rendering - of sorts. It's bold, and it's been emblazoned on NVIDIA's vision, professional and gamer alike. We'll be here to see where it leads - with actual ray-traced graphics, of course.
Sources:
Ray Tracing and Global Illumination, NVIDIA Blogs, Image Inpainting
65 Comments on NVIDIA's Move From GTX to RTX Speaks to Belief in Revolutionary Change in Graphics
Actually, unless a YouTube video is associated with a print/web review site, I just don't watch. HardwareCanucks' case reviews are ones I do look at. Looking forward to Wiz's return from Brazil.
With the 7xx series, nVidia crushed AMD's marketing campaign for the 290X by dusting off the 780 Ti design they had sitting on the shelf... and as it turned out, when both were overclocked, the 780 was faster than the 290X. Then the 970 came out and nVidia took the top three tiers, with nVidia selling 2+ times more than all of AMD's 2xx and 3xx series combined. There was an illusory battle for supremacy at the 1060 vs. 480 level, but when OC'ing was figured in, the weak OC ability of AMD's cards left them 10% behind. The 1060 took over as the most popular card in Steam's hardware survey, a position previously held by the 970. Here I think nVidia will be trying to establish a new tier... something that can actually drive a 4K monitor with motion blur reduction. I'm hoping so, because if they push to establish dominance down to the 1050 level, that pushes AMD almost to the point of irrelevancy.
I don't see nVidia doing that, as it could lead to anti-trust or other regulatory concerns... It also doesn't seem like a good idea for AMD. The idea that AMD has something up their sleeve to challenge the Ti is hard to swallow; they haven't competed at the top end since the 7xxx series, and that's 6+ years ago. I think AMD's best move is to start from the bottom up... head off any challenge posed by the 2050 and, instead of putting out a card to challenge the 2070, take that card, price it between the 2060 and 2070, and make it a competitive choice over the 2060. In other words, do what nVidia did with the 970... at a relatively small price increase over the 960 and AMD's offering, it killed. And that's where the volume sales are.
Looks like the 2080 Ti is running Tomb Raider and drops to 30+ FPS in heavy scenarios - still an early version of the game, though.
You mistake theory for practice. The current iteration isn't practical, and economically it won't work.
Do you seriously believe Nvidia didn't drop a huge bag of money on each of those devs? No sane mind optimizes a game for the top 5% and considers that good business. You can look at Crytek to see how those developers fare...
Nvidia have a habit of making something that is only fully featured on games that suck ass. Arkham anyone? PhysX en.wikipedia.org/wiki/List_of_games_with_hardware-accelerated_PhysX_support
Let's call it something great when we actually have a good set of games using it and everyone can use it. Till then, a few thousand people with hardware capable of something not yet implemented - something that will likely demand more performance than a first-generation card can produce - is worthless.
Dumb people see fewer CUDA cores/TFLOPS and think it must be slower. 780 Ti (2,880 cores / 5.1 TFLOPS) vs. 980 (2,048 cores / 4.6 TFLOPS) - which is faster?
Wolfenstein 2 beta patch: Async Compute brings 5% extra performance for RX Vega 64
AMD finds 5% more performance with async
Async Compute Only Boosted HITMAN’s Performance By 5-10% on AMD cards
Even with the limited usefulness of ray tracing in the GeForce 20xx series, the cards will still be the most powerful GPUs we've seen by far. It's astounding how all the "wise guys" on YouTube manage to know the performance of this new generation, failing to realize the obvious: the SMs are completely redesigned. What we normally refer to as "cores" in GPUs are not cores at all; they're FPUs (usually FPU/ALU combos). Turing uses a different structure where FPUs and ALUs are separate clusters, allowing much higher throughput. We'll have to wait and see how much of a difference this makes for gaming. I honestly don't know if we will see a large difference across the board, or more of a boost in certain types of games. But I do know that Nvidia wouldn't switch to this unless it was significantly better.
After commenting they go back to watch the new Netflix movie filled with CGI they don't even know is CGI... because of ray-tracing. Ignorance never goes away, does it?
Comparing ray tracing to HairWorks is the new "VR is 3D TV".
No one here is going to make CGI movies with a gaming card.
At this point in time ray-tracing is a gimmick. In two years it will be the best thing ever, now it's worthless for gaming.
This entire series was made as a stopgap until 7nm matures. Hype up people with RT, get them to buy Turing saying it's the best thing since sliced bread and cocaine, and then in mid 2019 come out with a new card, with everything improved, including ray-tracing capabilities. Get money from sheep twice in less than one year. Cupertino style.
This might be one of the most pointless releases in recent history, along with Kaby Lake, the handful of Bulldozer iterations, and the Radeon 2000 series.
store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam
www.roadtovr.com/valve-monthly-active-vr-users-on-steam-are-up-160-year-over-year/
0.07% winning... sure looks funny if it looks like that.
A game needs DX12 support; on top of that, it needs DXR support. Then it has to be supported by Nvidia's GameWorks RT SDK.
How many games include DX12 support? How many games run better on DX12 than on DX11 on Nvidia hardware? How many games include GameWorks without Nvidia backing?
The lack of AMD competing in the high end is what creates this madness. AMD needs to get their stuff together; that 7nm Vega ain't going to cut it.