Wednesday, October 17th 2018
Remedy Shows the Preliminary Performance Cost of NVIDIA RTX Ray Tracing Effects
Real-time ray tracing won't be cheap. NVIDIA's GeForce RTX 20 Series graphics cards are already quite expensive, and even with those resources, the cost of taking advantage of this rendering technique will be high. We didn't know for sure what that cost would be, but the developers at Remedy have shown some preliminary results on that front. The company is working on Control, one of the first games with RTX support, and although they have not provided framerate numbers, what we do know is that enabling ray tracing imposes a clear performance impact.
It does, at least, in these preliminary tests with their Northlight Engine. In an experimental scene with a wet marble floor and a lot of detailed furniture, they were able to evaluate the cost of enabling RTX. There is a 9.2 ms performance overhead per frame in total: 2.3 ms to compute shadows, 4.4 ms to compute reflections, and 2.5 ms for denoising the global illumination. This is not good news for those who enjoy games at 1080p60.
Remedy may be able to reduce that impact in the final version of its engine and in the game, but those 9.2 ms will clearly influence the framerate we can achieve. Playing at 30 fps allows 33 ms per frame, and playing at 60 fps allows about 17 ms per frame. Enabling NVIDIA's RTX effects would therefore translate to a framerate of about 40 fps at 1920x1080 on a GeForce RTX 2080 Ti in a game that otherwise hits 60 fps. The result is visually excellent: sharper shadows and reflections that are independent of the camera angle give the game a photorealistic finish, but the cost is high. Too high, maybe?
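As a back-of-the-envelope check of that arithmetic, here is a minimal sketch. It assumes the worst case, where none of the ray tracing work overlaps the rest of the frame, and a game that would otherwise run at exactly 60 fps; only the per-effect timings come from Remedy's numbers:

```python
# Frame-budget math using Remedy's preliminary per-effect timings.
# Assumption: the RT work does not overlap the regular rendering at all.

RT_SHADOWS_MS = 2.3      # ray-traced shadows
RT_REFLECTIONS_MS = 4.4  # ray-traced reflections
RT_GI_MS = 2.5           # global illumination denoising

rt_overhead_ms = RT_SHADOWS_MS + RT_REFLECTIONS_MS + RT_GI_MS  # 9.2 ms

base_frame_ms = 1000 / 60                    # ~16.7 ms: a game hitting exactly 60 fps
with_rt_ms = base_frame_ms + rt_overhead_ms  # total frame time with RTX enabled

print(f"Frame time with RTX on: {with_rt_ms:.1f} ms")       # ~25.9 ms
print(f"Resulting framerate: {1000 / with_rt_ms:.0f} fps")  # ~39 fps, i.e. "about 40"
```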
Source: Golem
85 Comments on Remedy Shows the Preliminary Performance Cost of NVIDIA RTX Ray Tracing Effects
The 2080 Ti is already a very large GPU, and a hungry one. If they released it with more RT cores in lieu of CUDA cores, the general outlook would only be worse. And let's be real here, they are leading and releasing products that are in competition with themselves alone. Why would they price things keenly? I know I wouldn't! They're not our friends; they don't need to do us a solid. They are a business, they want to make money, and even if some read that as ugly, it's just a fact.
I doubt this gen 1 of RT is going to perform well in RT titles, tbh. But as someone else pointed out, why not look on the positive side: the regular gains are only a third, and if you're currently happy gaming with whatever you are running, then enjoy! If you were hoping for an upgrade when this gen launched, why not grab a second-hand 1080/Ti? Still sweet cards.
Lower resolution does not really defeat the purpose. The point of RTRT is a more accurate/realistic result, not high resolution. Lower resolutions are already used for the rasterized methods of pretty much everything RT is proposed for. Anything specific in that video that strikes you as AMD being far ahead? Isn't that exactly what they did? RTX 2080 replaces GTX 1080 Ti at a similar price, RTX 2070 replaces GTX 1080 at a similar price :D
The problem? They cost die space. TU102 is a massive 750 mm² chip. I don't recall any consumer-grade/gaming card being sold with such a massive die before; these sizes have been exclusive to professional-grade Quadro cards, which sold at much higher prices. The reason has always been yields. This is a chip nearly 3x the size of the one in something like a GTX 980, for instance, and that doesn't equate to 3 times the chip cost; it costs many times more. I'm not saying a $1200 card is not profitable for them, it definitely is, and probably even more profitable than previous generations if we talk margin percentages, but I don't imagine it to be by a huge margin; it's not the rip-off it seems to be. This is a high-end card, and it would always have come at a premium given the lack of competition.
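To illustrate why cost grows faster than die area, here is a minimal sketch using a simple Poisson yield model. The defect density and die sizes are assumed, illustrative values, not actual foundry figures:

```python
import math

# Toy model: the fraction of working dies falls off exponentially with die
# area (Poisson yield model), so cost per *good* die = area / yield, and a
# 3x bigger die costs considerably more than 3x as much to make.

DEFECT_DENSITY = 0.1  # defects per cm^2 (assumed, illustrative)

def relative_cost_per_good_die(area_mm2: float) -> float:
    """Silicon area divided by the fraction of dies that come out working."""
    area_cm2 = area_mm2 / 100
    die_yield = math.exp(-DEFECT_DENSITY * area_cm2)
    return area_mm2 / die_yield

small = relative_cost_per_good_die(250)  # a die one-third the size (illustrative)
big = relative_cost_per_good_die(750)    # a TU102-sized die

print(f"3x the area -> ~{big / small:.1f}x the cost per good die")  # ~5x here
```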
The technology is still in its infancy, and Nvidia wanted to make sure they are first. The decision to include a novel, unproven technology in their high-end cards, leading to higher manufacturing costs due to the massive die, and to pass those costs on to consumers might seem a little premature. However, the timing is perfect given the lack of competition. Nvidia couldn't have afforded to do so if AMD was on top of its game, and I suspect Nvidia saw AMD making a push with its edge in 7 nm tech and the development of Infinity Fabric and MCM cards that were widely expected to be the tech behind Navi up until last June.
For anyone not interested in RT in its current state, Pascal cards are still available. I know they are previous-gen cards still sold at a premium with an extended life cycle, but that's only because there was no real competition from AMD. Progression in chip making has slowed down as Moore's law diminishes, and in that context, it's hard to decide whether AMD performed badly or Nvidia performed extra well in the previous generation.
Ray tracing, or elements of it, has been coming for a while but has been held back by huge performance requirements. Research has been done; hardware had to start somewhere. Profitable, sure, but I am not convinced about their margins being better compared to, say, Pascal. I would say at the same price points Nvidia is making noticeably smaller margins with Turing.
$1200 RTX 2080Ti, maybe. RTX 2080 at the same price point as GTX 1080Ti? RTX 2070 at the same price point as GTX 1080?
In addition to the considerably larger GPU itself, the boards seem to be more complex as well. See the MSI interview from a few days back: www.techpowerup.com/248382/msi-talks-about-nvidia-supply-issues-us-trade-war-and-rtx-2080-ti-lightning - RT cores together with Tensor cores should be 20% or less of the die space cost. This is not that bad. Even if these were left out and they only did the usual GPU, we would still be looking at a 600 mm² chip for the xx102 GPU. There has been no process shrink; Turing is made on effectively the same process node as Pascal (with a minor efficiency bump on the process side). Nvidia definitely knows much more than we do about what AMD is up to. Infinity Fabric and MCM were not going to be behind Navi; this was just a wet dream.
It is not so much that Nvidia wants to be first, but they (and GPUs in general) need somewhere to go, and a technology to sell. Not only is Nvidia completely lacking competition in the high end, there are not many generations left for rasterization as it is today. The GTX 1080 Ti was just shy of 4K at 60 FPS, and it no longer fell off during its lifetime as GPUs have tended to do. The RTX 2080 Ti basically does 4K at 60 FPS and is surprisingly often CPU-limited at 1440p. Another generation or two with 30% improvements - the first of which will quite certainly be the transition to TSMC's 7 nm next year - and there is nowhere left to go for the high end. 4K gaming monitors are only now starting to be a thing. 5K/8K exist but are not that much of a benefit for games given the performance impact and realistic screen sizes. Plus, the platform is becoming more and more the limiting factor.
I get that everyone is disappointed about the new generation not having price points one step down, but that does feel like a very entitled view of things. If Turing is not worth your money, do not buy one. This sounds an awful lot like FineWine™ :laugh::roll:
They should have made a batch in advance for game developers, to pave the road with RT games, and only then sold the feature to the public and asked that price...
It's like a Veyron for which the factory doesn't give you the second key that unlocks the max speed...
You can't optimize it lower unless you cut some effects out or just don't denoise.
Demos are selling an unattainable "promise".
The 4.4 ms is for reflections including denoising, with the reflections taking most of the time there. Of the other mentioned effects, the 2.5 ms for GI includes denoising, and the 2.3 ms for shadows very likely also includes denoising.
1ms or less for denoising here sounds about right.
The images are correct; they are from the Control trailer.
Until then, I'll be happy with games implementing the Vulkan API, like Star Citizen said it will. Hell, I get double the FPS in DOOM with Vulkan compared to OpenGL. Why the hell they are still using OpenGL is beyond me.
You have to remember, you chaps who feel you need high-end gfx cards are the few, not the many. Most computers don't need high-end graphics cards, so it makes sense to recycle tech if you're fighting to recover your company from almost going under.
I personally think AMD doesn't give two turds whether Nvidia has the fastest gfx card, just as long as AMD can compete for people wanting something that's affordable and does the job.
In a climate where most people struggle just to live/pay the bills and anything else is a bonus, AMD have pulled themselves back from the brink, which in itself is an amazing feat.
For AMD, it's about keeping the company alive rather than being the fastest.
It was pretty common knowledge that when DOOM was released, AMD cards would get a good performance boost from Vulkan and Nvidia cards would take a perf hit. After a while, with both DOOM patches and driver updates on both sides, things somewhat stabilized, but AMD cards still get a boost and Nvidia cards are at about the same level with both APIs. There are differences here and there (for example in CPU-limited situations), and DOOM runs better on AMD cards, but when it comes to the APIs, that's how it is.
Just as an example, this is about 2 months after DOOM release:
www.gamersnexus.net/game-bench/2510-doom-vulkan-vs-opengl-benchmark-rx-480-gtx-1080
Edit:
Sorry for the off-topic. I wanted to delete my previous post, but you had already replied :)
4K gaming will become a reality when devs actively target it.
Otherwise, you can keep resolution/fps lower and add complexity to the scene.
The times are only for each of these specific effects; there is the normal rendering time in addition, overlapping with the RT effects. Even if none of the effects can run concurrently with each other, the rendering definitely can. To what degree, we do not know.
9.2 milliseconds translates to 108 FPS. Again, we do not know how much the rest of the rendering adds to that. Assuming none of what the game does happens concurrently (which is quite surely not the case), there are about 7 milliseconds left in the time budget for the game to run at 60 FPS. Translating that to FPS: if the game runs at about 140 FPS without these three effects, it can run at 60 FPS with them.
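Running the same budget math from the other direction (again assuming, worst case, no overlap between the RT effects and the rest of the frame):

```python
RT_OVERHEAD_MS = 9.2  # Remedy's combined figure for the three effects

def base_fps_needed(target_fps: float, rt_overhead_ms: float = RT_OVERHEAD_MS) -> float:
    """Framerate a game must hit without RT to still reach target_fps with RT on."""
    budget_ms = 1000 / target_fps              # ~16.7 ms at 60 FPS
    remaining_ms = budget_ms - rt_overhead_ms  # ~7.5 ms left for everything else
    return 1000 / remaining_ms

print(f"{base_fps_needed(60):.0f} FPS without RT -> 60 FPS with RT")  # ~134 FPS
```

With the exact 16.7 ms budget the threshold comes out around 134 FPS, so the ~140 figure above is just a slightly conservative round number.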
The benefits of RTX will be in low-end and indie games. Low-end and indie games don't use the graphics card a lot, and RTX allows them to implement global illumination and great shadows at low production cost. So they don't need a team of 1000 artist slaves to create a game, and can instead make a game that looks close to a current AAA title with a team of 10 people.