Friday, December 13th 2019

Ray Tracing and Variable-Rate Shading Design Goals for AMD RDNA2
Hardware-accelerated ray tracing and variable-rate shading will be the design focal points for AMD's next-generation RDNA2 graphics architecture. Microsoft's reveal of its Xbox Series X console attributed both features to AMD's "next generation RDNA" architecture (which logically happens to be RDNA2). The Xbox Series X uses a semi-custom SoC that features CPU cores based on the "Zen 2" microarchitecture and a GPU based on RDNA2. It is highly likely that the SoC will be fabricated on TSMC's 7 nm EUV node, as the RDNA2 graphics architecture is optimized for it; this would mean an optical shrink of "Zen 2" to 7 nm EUV. Besides the SoC that powers the Xbox Series X, AMD is expected to leverage 7 nm EUV in 2020 for its RDNA2 discrete GPUs and for CPU chiplets based on its "Zen 3" microarchitecture.
Variable-rate shading (VRS) is an API-level feature that lets GPUs conserve resources by shading certain areas of a scene at a lower rate than others, without perceptible difference to the viewer. Microsoft developed two tiers of VRS for its DirectX 12 API: tier 1 is currently supported by NVIDIA's "Turing" and Intel's Gen11 architectures, while tier 2 is supported only by "Turing." The current RDNA architecture doesn't support either tier. Hardware-accelerated ray tracing is the cornerstone of NVIDIA's "Turing" RTX 20-series graphics cards, and AMD is catching up to it; Microsoft has already standardized it on the software side with the DXR (DirectX Raytracing) API. A combination of VRS and dynamic render resolution will be crucial for next-gen consoles to achieve playability at 4K, and to even boast of being 8K-capable.
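For a rough sense of what tier-1 VRS looks like from the developer's side, the sketch below queries the supported tier and sets a coarser per-draw shading rate through Direct3D 12 (tier 2 additionally allows per-primitive rates and a screen-space shading-rate image). The capability query and shading-rate constants come from the public DirectX 12 VRS API; the device and command-list setup is assumed to already exist, and this is an illustrative sketch rather than production code.

```cpp
#include <d3d12.h>

// Check which VRS tier the GPU/driver exposes.
bool QueryVrsTier(ID3D12Device* device, D3D12_VARIABLE_SHADING_RATE_TIER* tierOut)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options6, sizeof(options6))))
        return false;
    *tierOut = options6.VariableShadingRateTier;
    return options6.VariableShadingRateTier != D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED;
}

// Tier-1 style usage: shade a low-importance draw (e.g. distant or heavily
// motion-blurred geometry) at one quarter rate, then restore full rate.
void DrawWithCoarseShading(ID3D12GraphicsCommandList5* cmdList)
{
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr); // one shade per 2x2 pixels
    // cmdList->DrawIndexedInstanced(...);                      // the low-detail draw call
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr); // back to full rate
}
```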
119 Comments on Ray Tracing and Variable-Rate Shading Design Goals for AMD RDNA2
Nvidia doesn't cut prices even more because it doesn't have to, really. They currently have about 73% market share in dedicated GPUs. It is AMD that needs to gain market share and therefore has to accept earning less per card.
Huh. Cutting prices to offer better chips at the same value? I thought that was just the natural way new graphics cards evolve with each release, but I guess it has now been chalked up to the goodness of NVIDIA.
You are missing the rest of the conversation if you say NV doesn't have to cut prices. It will have to cut prices, and you will soon see why. AMD, my dear foreign friend, doesn't need to do anything right now, and that is clearly evident today. It is NV that is running around town like a boogeyman, trying to scare people with the idea of not having RT cores.
Second, Neon Noir runs at 1080p 30FPS on Vega 56 and about the same on GTX 1080. For comparison, Battlefield V with DXR on (High, not Ultra) can be run on GTX 1080 at very similar 1080p 30FPS.
Jesus, the red team fans and their theories :rolleyes: Neon Noir only has reflections at 1 ray per 4 pixels and it already suffers immensely. This is worse than RTX low, which does 1 ray per 2 pixels in the worst case, and it's a synthetic benchmark, not a game.
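(For scale, here is the rough arithmetic behind those two ray rates, applied to a 1920×1080 frame, the resolution quoted for Neon Noir above; it is only the quoted rates multiplied out, nothing more.)

```cpp
#include <cstdio>

int main()
{
    const long pixels = 1920L * 1080L;  // 2,073,600 pixels at 1080p
    std::printf("1 ray per 4 pixels: %ld rays/frame\n", pixels / 4); // 518,400
    std::printf("1 ray per 2 pixels: %ld rays/frame\n", pixels / 2); // 1,036,800
    return 0;
}
```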
lol, you're in a big bubble sir. lel, just like the 5500 XT
Besides, I'm not a red team fan, so quit that. Who's being a prick now? :p We will see who is in a big bubble (whatever that means) in time. The new engine will be available in full soon, and there will definitely be games using it; that will be a good indication of what is actually needed. Those rays per pixel can be increased, you know. It is a demo showcase meant to show what the engine can do, like a CPU engineering sample. It is not the released product, so be patient. You just don't see it yet, and if I'm supposed to be a red team fan with theories, then you are a blind green fan without any theories or reasoning for that matter. :)
The difference in performance between the 2080 and the 1080 is more or less the same in ray-traced and non-ray-traced scenarios. So how are the RT cores supposed to speed things up for ray tracing?
This would mean the 2080S is simply a faster graphics card overall.
I will follow up on this a bit more to evaluate whether it really holds up. I suggest you do the same.
I don't think he gets rasterized vs ray traced.
2080 Ti with 13.5 TFLOPS and RT + Tensor cores: 40 FPS
Titan V with 15 TFLOPS and no RT cores: 28 FPS
When using simpler forms of RT, like shadows only, the 1080 Ti is closer to the 2060, but still loses by 40%.
www.purepc.pl/karty_graficzne/call_of_duty_modern_warfare_2019_test_wydajnosci_raytracingu?page=0,5
Interestingly, the performance penalty is over 100% on the 1080 Ti, 80% on the 1660 Ti (tensor), but only 19% on the RTX 2060.
The 1080 Ti tends to produce a noisier image, too.
As can be seen in the Steam Hardware Survey, it has done little to impact AMD's market share and is still outsold by Nvidia's comparable products:
AMD Radeon RX 5700 XT 0.22% (+0.07%)
NVIDIA GeForce RTX 2060 1.95% (+0.41%)
NVIDIA GeForce RTX 2070 1.60% (+0.19%)
NVIDIA GeForce RTX 2070 SUPER 0.42% (+0.17%)
NVIDIA GeForce RTX 2060 SUPER 0.25% (+0.10%)
As you can see, in this segment Nvidia is outselling them ~10:1. You're not even trying to be serious. Grow up or go play somewhere else!
Anyone with a basic understanding of 3D graphics knows ray tracing to be necessary to get good lighting.
I am serious the same way I see you being serious.
The fact is that AMD's market share among gamers has stayed stagnant at 15%, and that figure also includes AMD's APUs. For the past three years AMD have not been present in the high end and have stayed at ~10% or less of the mid-range, while many have been touting Polaris, Vega and now Navi as "great successes". In overall sales AMD have about 20-25% of discrete GPUs, but most people forget that a lot of this comes from OEM sales of low-end GPUs that are not used for gaming. Steam is the most dominant platform among PC gamers and is very much representative of the PC gaming market; anyone who understands representative samples would understand this. There is nothing more representative than the Steam statistics at this point. Over the past five years AMD have been making way more noise over "their stuff" than anyone else, including Mantle, the myth of "better" Direct3D 12 performance, FreeSync being "free", etc.
While RT may not be super useful yet, it will be at some point. All hardware support has to start somewhere, and hardware support has to come before software support. Just in the past month, nearly twice as many RTX 2060s were added as there are RX 5700 XTs in total. If you add up the percentage-point gains for these Nvidia cards, it comes to 0.87%, compared to the RX 5700 XT's 0.07% gain.
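Just to show the tally behind those figures, here is the sum of the survey deltas quoted earlier in the thread; the numbers are only the ones already listed above.

```cpp
#include <cstdio>

int main()
{
    // Monthly percentage-point gains from the Steam survey figures quoted above.
    const double rtxGains = 0.41 + 0.19 + 0.17 + 0.10; // RTX 2060/2070/2070S/2060S = 0.87
    const double naviGain = 0.07;                      // RX 5700 XT
    std::printf("RTX gains: %.2f%%, RX 5700 XT gain: %.2f%%, ratio ~%.0f:1\n",
                rtxGains, naviGain, rtxGains / naviGain);
    return 0;
}
```

That works out to roughly 12:1, in the same ballpark as the ~10:1 figure mentioned above.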
You must think that the people who work at Nvidia are all stupid and make business decisions that involve millions and millions of dollars, just for the bragging rights.
www.techpowerup.com/forums/threads/cryteks-neon-noir-raytracing-benchmark-results.261155/
The problem Neon Noir has is accuracy, but RT isn't all that accurate yet either; it just resolves the lack of detail differently. I'll take the software box of tricks in Neon Noir over BFV's RT implementation any day of the week.
Really, the debate is still ongoing on what the best solution is. Some hardware for it, sure. Large sections of a die? Not so sure; this will probably get integrated in a different way eventually, and Turing is just an early PoC.
Essentially, Nvidia was looking for a new buyer's incentive/upgrade incentive and found it in RT. Marketing then made us believe the world is ready for it. That is how these things go :)
So really, lacking the content, Nvidia surely released Turing cards with the idea of bragging about it. It is what Jensen has been doing since day one. It just works, right? We were going to buy more to save more, because dev work was going to become so easy that if you winked at the GPU it'd do the work for you. Or something vague like that. And then there is reality: a handful of titles with so-so implementations at a massive FPS hit ;)
This also explains why AMD cares a lot less and is only now starting to push it to consoles. Their target market doesn't really care and represents the midrange. AMD has no urge to push this forward other than to tell the world they still play along.
Neon Noir has cool optimizations that benefit performance. Things like only doing RT for short range and falling back to Voxels when it is beneficial.
By the way, CryTek should (and plans to) use assistance from DXR or Vulkan's RT extensions in their engine.
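To make that idea concrete, here is a purely conceptual sketch of the short-range-ray-plus-voxel-fallback switch; none of these types or functions are CryEngine's actual code, and the names and the 10-metre cutoff are arbitrary stand-ins.

```cpp
#include <optional>

// All names here are hypothetical stand-ins, not CryEngine APIs.
struct Vec3  { float x, y, z; };
struct Ray   { Vec3 origin, direction; };
struct Color { float r, g, b; };

// Pretend short-range ray trace: returns a hit colour, or nothing beyond maxDistance.
std::optional<Color> traceReflectionRay(const Ray&, float /*maxDistance*/)
{
    return std::nullopt; // stub: imagine a real BVH traversal / ray march here
}

// Pretend voxel GI lookup: cheap, coarse, always available.
Color sampleVoxelGI(const Vec3&, const Vec3& dir)
{
    return { 0.1f * dir.x, 0.1f * dir.y, 0.1f * dir.z }; // stub
}

// The conceptual hybrid: spend a real ray only on nearby, high-impact
// reflections, and fall back to the coarser voxel data everywhere else.
Color shadeReflection(const Ray& ray, const Vec3& surfacePos)
{
    constexpr float kShortRange = 10.0f; // arbitrary cutoff for illustration
    if (auto hit = traceReflectionRay(ray, kShortRange))
        return *hit;                                 // accurate near-field result
    return sampleVoxelGI(surfacePos, ray.direction); // approximate far-field fallback
}
```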
In comparison to Neon Noir, what exactly makes you dislike BFV's RT implementation?
Best solution is relative. RT cores are not an RT solution; they are a hardware assist for casting rays. The exact algorithm and optimizations are up to the developer. On the Nvidia side of things, any DXR game will give an idea of the RT performance differences between Pascal, Turing, and Turing with RT cores.
If we want to compare AMD vs Nvidia, we cannot. AMD cards/drivers have no DXR implementation.
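That "hardware assist, not a solution" point shows up on the API side: a DXR dispatch looks the same whether the GPU has RT cores or falls back to compute traversal, and the ray generation/hit/miss shaders (the actual algorithm) belong to the developer. A minimal host-side sketch, assuming the ray tracing pipeline state and shader tables have already been built elsewhere:

```cpp
#include <d3d12.h>

// Sketch only: assumes a DXR-capable device and an open ID3D12GraphicsCommandList4.
// Whether rays are traversed by dedicated RT cores or by a compute fallback is the
// driver/hardware's business; the call is identical either way.
void DispatchPrimaryRays(ID3D12GraphicsCommandList4* cmdList,
                         ID3D12StateObject* rtPipeline,
                         D3D12_GPU_VIRTUAL_ADDRESS rayGenRecord, UINT64 rayGenSize,
                         D3D12_GPU_VIRTUAL_ADDRESS missTable,   UINT64 missSize, UINT64 missStride,
                         D3D12_GPU_VIRTUAL_ADDRESS hitTable,    UINT64 hitSize,  UINT64 hitStride,
                         UINT width, UINT height)
{
    cmdList->SetPipelineState1(rtPipeline);

    D3D12_DISPATCH_RAYS_DESC desc = {};
    desc.RayGenerationShaderRecord = { rayGenRecord, rayGenSize };
    desc.MissShaderTable           = { missTable, missSize, missStride };
    desc.HitGroupTable             = { hitTable,  hitSize,  hitStride  };
    desc.Width  = width;   // typically the render resolution,
    desc.Height = height;  // or less when casting fewer rays per pixel
    desc.Depth  = 1;

    cmdList->DispatchRays(&desc); // rays per pixel, recursion depth, denoising etc.
                                  // are all decided by the developer's shaders
}
```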
Also, I don't believe games need that high accuracy at all. Especially in motion, the cost of that detail just isn't worth it. On top of that, games are an artistic product, even those that say they want to 'look real'. It's still a scene and it still has its limitations, and therefore still needs tweaking, because RT lighting alone makes lots of stuff unplayable.
RT effects, including DXR support, are there or coming to the large engines. Unreal has them; Unity has them (not sure if still in preview or in the production build); CryEngine has RT but no DXR support yet. Others will not be far behind.
Ray tracing has been requested by graphics developers for over a decade. Every new GPU generation has given us more performance and memory, easily allowing developers to throw in larger meshes, finer-grained animations, and higher-detailed textures, which is easy since most assets are modeled in higher detail anyway. But lighting and shadows have been a continuous problem. Simple stencil shadows and pre-rendered shadow maps are not cutting it any more as the other details of games keep increasing. Pretty much every lighting effect you see in games is just a cheap, clever trick to simulate the real thing, and quite often these tricks only "work well" under certain conditions and may produce unwanted side effects. Programming all these effects is also quite challenging, and they may have to be adapted to all the various scenes of a game.
Simply put, developers want RT more than Nvidia does. But we are only in the infant stages of RT so far; it is still too slow to be used to the extent developers want, so for now it has to be used in a limited fashion.
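As a toy illustration of the "cheap clever tricks" point: a classic shadow-map test only behaves after per-scene tweaking (a bias to hide shadow acne, enough resolution to hide aliasing), while the ray-traced equivalent is conceptually just a visibility query. Both functions below are simplified stand-ins, not any engine's real code.

```cpp
// The rasterization-era trick: compare the fragment's light-space depth against
// a pre-rendered shadow map. It needs a hand-tuned bias per scene to hide
// "shadow acne", and its quality is capped by the map's resolution (aliasing).
bool inShadowViaShadowMap(float shadowMapDepth, float fragmentDepthInLightSpace,
                          float bias /* tweaked per scene, e.g. 0.005f */)
{
    return fragmentDepthInLightSpace - bias > shadowMapDepth;
}

// The ray-traced equivalent is conceptually a visibility query: trace a ray
// toward the light and check whether anything is hit before reaching it.
// occluderDistance would come from the scene's acceleration structure.
bool inShadowViaShadowRay(float occluderDistance, float distanceToLight)
{
    return occluderDistance < distanceToLight;
}
```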
Even if just for sanity-check purposes... because when I hear 'developers have wanted this for a decade', all I really hear is 'we've been working on this for 10 years, and finally, here it is' (Huang himself @ SIGGRAPH). I've seen too much spin in my life to take this at face value. There is always an agenda, and it's always about money.
Raytracing has been kind of coming for a while. The theory is there, the research is there, but the performance simply has not been there for anything real-time. Now Nvidia has pushed the issue to the brink of being usable. Someone has to push new things for them to get implemented and become widespread enough, especially when it comes to hardware features.
@Vayra86 just look at how lighting and shadowing methods have evolved: shadow maps, dynamic stencil shadows, soft/hard shadows, and ever more complex variants of these. Similarly and closely related: GI methods. The latest wave of GI methods, SVOGI (which CryEngine uses as the fallback in Neon Noir and which their RT solution evolved from) and Nvidia's VXGI, are both voxel-based and come with a very noticeable performance hit. In principle, both get closer and closer to ray tracing. Also keep in mind that rasterization will apply several different methods on top of each other, complicating things.
If you think Nvidia is doing this out of the blue - they are definitely not. A decade of experience in OptiX gives them a good idea about where the performance issues are. Of course, same applies to AMD and Intel.