Aren't we mixing 2 things here? Or maybe 3?
1) Ray tracing (RT) is a basic rendering technique. It's been around for decades and is fundamental - not useless or unusable, as some AMD fanboys claim.
2) Real-time ray tracing (RTRT) is... RT in real time ;-), i.e. fast enough (whatever that means).
It's been around for a while, but it's used for previews - not final renders. Previews are greatly simplified - they ignore some materials and some effects - and the resulting live render is usually low-res and runs under 30fps.
3) RTRT in games means it has to be efficient enough to process all effects, at high resolution (1080p+) and at a frame rate acceptable for gaming, i.e. maybe 1080p@60fps or 4K@30fps... (see the sketch right after this list).
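To put some numbers behind points 1 and 3: below is a minimal, purely illustrative Python sketch - the classic ray-sphere intersection test that every ray tracer is built on, plus a back-of-envelope ray budget for 1080p@60fps. The rays-per-pixel count is an assumption picked for illustration, not a figure from any real engine.

```python
def ray_sphere_hit(origin, direction, center, radius):
    """The core RT primitive: does a ray hit a sphere? (direction assumed normalized)"""
    # Solve |origin + t*direction - center|^2 = radius^2 for t;
    # with a normalized direction, the quadratic's a-term is 1.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4.0 * c >= 0.0  # hit iff the discriminant is non-negative

# Back-of-envelope budget for "RTRT in games" (all numbers are assumptions):
width, height, fps = 1920, 1080, 60
rays_per_pixel = 4  # primary ray plus a few shadow/reflection rays
rays_per_second = width * height * fps * rays_per_pixel
print(f"{rays_per_second / 1e9:.1f} Grays/s needed")  # ~0.5 Grays/s
```

For scale: Nvidia quoted roughly 10 Gigarays/s for the 2080 Ti, so the raw budget looks comfortable on paper - but real effects need far more rays per pixel plus full shading, which is exactly why point 3 is the hard one.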
You're talking about a general-purpose implementation, i.e. what standard GPU cores do (GPGPU).
Nvidia used an ASIC and it's just way faster - just like tensor cores are way faster for neural networks.
Everything else you've said is more or less correct.
If one wants to combine RTRT with 4K@60fps, then doing that on GPGPU is 10 years away. But on an ASIC it should be possible within 1-2 generations, i.e. 4 years tops.
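A rough sanity check on that timeline, assuming RT cost scales with pixel count times frame rate (a simplification, but roughly true for primary rays):

```python
# Naive scaling check - all uplift numbers are guesses, not datasheet figures.
pixels_1440p = 2560 * 1440   # ~3.7 MP
pixels_4k    = 3840 * 2160   # ~8.3 MP

# Going from 1440p@60fps to 4K@60fps needs roughly this much more RT throughput:
print(pixels_4k / pixels_1440p)   # 2.25x

# Two generations at an assumed ~1.5x RT uplift each would cover it:
print(1.5 ** 2)                   # 2.25x - hence "1-2 generations, 4 years tops"
```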
But thanks to RTX cards, you don't have to wait 10 years. For a mere $1200 you can already make your games look as if it's 2028 (at 1440p tops, though).
And when you buy your next RTX card in 2021 for another $1200, it should be OK for 4K@60fps.
There's just no way around it. AMD will have to respond with similar tech, ignore RTRT ("Who needs realism? We're so romantic!"), or magically make Navi 4x faster than Vega.
No need to argue semantics with me. You know exactly what I'm getting at. All RT that is not done on the GPU in real time is not the ray tracing we're talking about when it comes to RTX / DXR. We already have pre-cooked lighting, and that is what any kind of non-realtime RT boils down to - it's the same as saying 'AI' when in fact it's nothing more than lots of lines of code and data to cover every possibility.
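To illustrate the "pre-cooked lighting" point: baked lighting means the expensive ray tracing already happened offline, so at runtime it's nothing more than a data lookup - while RTRT actually traces rays every frame. A minimal, purely illustrative sketch (the names and data here are made up):

```python
# "Pre-cooked" lighting: the rays were traced offline; at runtime it's a lookup.
baked_lightmap = {(x, y): 0.5 for x in range(4) for y in range(4)}  # offline result

def shade_baked(x, y):
    return baked_lightmap[(x, y)]   # O(1) lookup - zero rays traced at runtime

# Real-time RT: the ray query happens per pixel, per frame, right now.
def shade_realtime(x, y, trace_ray):
    return trace_ray(x, y)          # actual ray cast, every single frame

# Same result for a static scene, completely different cost model:
print(shade_baked(1, 2))
print(shade_realtime(1, 2, lambda x, y: 0.5))  # stand-in for a real ray tracer
```

That's the difference in one line: the lightmap can't react to moving lights or objects, because all its answers were computed in advance.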