> > You are missing the point. I'm not arguing about the usefulness of RT. Let's, for the sake of argument, agree that it's completely garbage and no game should ever use it. Even then, when you want to compare architectural efficiency you need to take that into account. If half of the 4090 were used for RT acceleration (it's not, just an example), then comparing transistors to raster performance alone would give you completely flawed results. There are purely RT workloads/benchmarks you can use for such a comparison; 3DMark has such a test, for example.
>
> The problem with RT is that we have no sane metric for it. The performance drop you get - in both camps - depends heavily on the amount of RT used. So we can compare architectures, and sure, Nvidia HAS more perf on offer there, but it's misty as fuck. Fact is, every single GPU chokes hard on RT today, with a perf loss north of 40% just for mild IQ improvements. It's not sustainable 'as is'.
>
> The reason I say this is that we're still in the early-adopter days. There is no transparency here wrt which architecture will remain the best fit going forward. What we are seeing, though, is that engines deploy lighter / more adaptive technologies to cater to different performance levels. Nvidia is capable of leveraging their proprietary approach today, but that advantage is going to end soon - and it's not an architectural advantage so much as an optimization exercise with developers.
>
> When RT gets more mainstream and there is more consensus on how it gets used, we can get a much better view of the architectural impact on RT perf. Today it is simply a bonus, at best, and many gamers will be turning it off, especially lower down the stack.
>
> The analogy to core counts on CPUs though... lol. There is a lot wrong with that - if games were primarily single-threaded, guess what, the 8-core CPU is definitely the more competitive product, and if it has, say, a fast cache to accelerate gaming workloads and tops out at 8 cores, while beating the high-core-count adversaries in the other camp on a smaller footprint, then yes, it is architecturally stronger, at least for that use case. So I guess if you buy into the RT fairy tale, you can argue Ada is stronger for your use case right now.
>
> But there is the long-term perspective here, and with architectures that one matters. Nvidia I'm sure has a plan going forward, but the fact is, ~600 mm² is about the end of the road for GeForce products, and they're on it, while giving that same SKU a very high power limit on a very efficient node. All I see here is another Intel, to be fair, with the added twist of using RT/DLSS to stave off the end of monolithic dies instead of going for a big.LITTLE-style approach.
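To put a number on the perf-per-transistor point in the quote above, here's a minimal back-of-the-envelope sketch in Python. Everything in it - the scores, the transistor counts, and the 20% "RT share" - is invented purely to illustrate the arithmetic; it doesn't reflect any real die-area breakdown.

```python
# Rough sketch of the quoted argument: if some share of a chip is spent on RT
# hardware, ranking GPUs by raster performance per transistor alone is skewed.
# Every number here is hypothetical, chosen only to illustrate the arithmetic.

def raster_per_transistor(raster_score, transistors_bn, rt_share=0.0):
    """Raster score per billion transistors, optionally excluding the share
    of the chip assumed to be dedicated to RT acceleration."""
    return raster_score / (transistors_bn * (1.0 - rt_share))

gpu_a = {"raster": 100.0, "transistors_bn": 76.0}  # hypothetical RT-heavy design
gpu_b = {"raster": 95.0, "transistors_bn": 58.0}   # hypothetical raster-focused design

# Naive metric: charge every transistor against raster performance.
print(raster_per_transistor(gpu_a["raster"], gpu_a["transistors_bn"]))   # ~1.32
print(raster_per_transistor(gpu_b["raster"], gpu_b["transistors_bn"]))   # ~1.64

# Adjusted metric: assume (hypothetically) 20% of GPU A's transistors are RT-only.
print(raster_per_transistor(gpu_a["raster"], gpu_a["transistors_bn"], rt_share=0.2))  # ~1.64
```

Under the naive metric GPU B looks roughly 25% more "efficient"; once the hypothetical RT share is excluded, the two come out about even, which is exactly the "flawed results" the quoted poster is describing.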
Regarding your comment about CPUs: your argument leads to the 7700X being architecturally better than a 7950X, since they perform similarly in games. That's just not true; it's a flawed comparison, exactly like judging the 4090 purely on raster is flawed.
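A quick sketch of why the gaming-fps-only metric misleads, using invented numbers shaped roughly like an 8-core vs 16-core part from the same family (not measured results):

```python
# Why "similar gaming fps" doesn't mean "architecturally equal":
# a gaming-only metric ignores the workloads the extra silicon exists for.
# All scores and transistor counts below are invented for illustration.

cpus = {
    "8-core":  {"transistors_bn": 6.5,  "gaming_fps": 200, "nT_score": 100},
    "16-core": {"transistors_bn": 13.0, "gaming_fps": 205, "nT_score": 190},
}

for name, c in cpus.items():
    per_bn_gaming = c["gaming_fps"] / c["transistors_bn"]
    per_bn_nt = c["nT_score"] / c["transistors_bn"]
    print(f"{name}: gaming/bn-transistors={per_bn_gaming:.1f}, nT/bn-transistors={per_bn_nt:.1f}")

# Judged on games alone, the 8-core looks about twice as "efficient"; judged on
# multithreaded throughput, the two are nearly identical. The metric changed,
# not the architecture.
```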
Anyway, we have the 4080 vs the XTX comparison: AMD is achieving much less with much more. Bigger die, wider bus, more VRAM, more of everything, and it only ties the 4080 in raster and loses in RT.
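For a rough sense of scale, here's the same arithmetic with approximate published figures for the two cards (die sizes and transistor counts as commonly reported; raster performance treated as equal, which is itself a simplification). A sketch, not a benchmark:

```python
# Rough perf-per-silicon comparison of the two cards discussed above.
# Die sizes / transistor counts are approximate published figures; raster
# performance is normalized to 1.0 for both, which is a simplification.
# Note: Navi 31 is chiplet-based (one N5 GCD plus six N6 MCDs), so raw area
# isn't apples-to-apples across nodes.

cards = {
    "RTX 4080 (AD103)":      {"die_mm2": 379, "transistors_bn": 45.9, "bus_bits": 256, "vram_gb": 16},
    "RX 7900 XTX (Navi 31)": {"die_mm2": 529, "transistors_bn": 57.7, "bus_bits": 384, "vram_gb": 24},
}

raster = 1.0  # treat raster performance as roughly equal between the two

for name, c in cards.items():
    print(f"{name}: {raster / c['die_mm2'] * 1000:.2f} raster per 1000 mm2, "
          f"{raster / c['transistors_bn']:.3f} raster per bn transistors, "
          f"{c['bus_bits']}-bit bus, {c['vram_gb']} GB")
```

Whether counting total silicon like this is a fair architectural comparison is, of course, exactly what the thread is arguing about, since both the chiplet split and the RT/ML hardware muddy per-area metrics.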