I believe Nvidia when they say they've done XYZ to the RT cores, but where are the benefits? It's like AMD with the chiplet design in RDNA 3: highly advanced tech, but did it make the cards faster? Or cheaper? No. It didn't even help the margins, as we all saw when they lost massive amounts of cash on Radeon last year. A technology is only as good as its implementation. AMD proved that with FX, then with chiplets on RDNA 3, and now Nvidia seems to be proving it with Blackwell.
I also don't think Blackwell is "forward thinking" in any way. Looking at the diagrams, it's still the same architecture as Turing, only with the INT/FP roles of the shader cores reshuffled to be more versatile, which again shows no real-world gains.
What gain in RT performance? Do you mean gain in performance in general? RT performance alone hasn't improved much since Turing (see my explanation above).
All I was pointing out is that if you compare how many RT cores they've added against the actual performance uplift in RT-heavy gaming, the uplift is always higher. If RT performance per core were the same, you'd see a regression, because the number of RT cores isn't going up significantly.
It's likely why you see much larger gains in RT with the 4090 vs the 3090 than in raster alone.
But even path-traced games still have a ton of rasterization, so it's impossible to know exactly how much better each RT core actually is.
I'm talking 20, 30, 40 series; 5000 series, who knows lol.
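Just to put rough numbers on that idea, here's a quick back-of-the-envelope sketch. The RT core counts are the real per-card values (one RT core per SM); the uplift figures are purely hypothetical placeholders, not measured results:

```python
# Rough sketch of the point above: compare how much the RT core count grew
# against how much RT-heavy performance grew between generations.
# RT core counts are the known per-card values (one RT core per SM);
# the uplift figures are made-up placeholders, NOT benchmark results.

rt_cores = {
    "RTX 2080 Ti": 68,
    "RTX 3090": 82,
    "RTX 4090": 128,
}

# Hypothetical generational uplift in an RT-heavy game (1.60 = +60%).
hypothetical_rt_uplift = {
    ("RTX 2080 Ti", "RTX 3090"): 1.60,
    ("RTX 3090", "RTX 4090"): 1.80,
}

for (old, new), uplift in hypothetical_rt_uplift.items():
    core_growth = rt_cores[new] / rt_cores[old]
    per_core = uplift / core_growth  # > 1.0 means each RT core is doing more work
    print(f"{old} -> {new}: cores x{core_growth:.2f}, "
          f"perf x{uplift:.2f}, per-core x{per_core:.2f}")
```

If the performance ratio outpaces the core-count ratio, the per-core figure lands above 1.0; if the cores were doing exactly the same work each generation, it would sit at 1.0 or below.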
It's not 18%, it's less than 10%. If you compared cards with equal rasterization, shader count, and ROPs, Nvidia has gained at most a 6% increase in RT efficiency each generation. The 3080 Ti has much higher rasterization than a 2080 Ti. Every one of the comparisons you made here also increased rasterization, so it's very hard to compare Nvidia's generations to each other when Nvidia keeps changing the rasterization hardware while claiming only that their RT is better.
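That normalization can be written out directly. The inputs below are purely illustrative placeholders, not measurements, but they show how a big headline RT uplift can shrink to a single-digit RT-specific gain once the raster uplift is divided out:

```python
# Illustrative only: the inputs are made-up numbers, not benchmark data.
# Idea: if the newer card is already ~30% faster in pure raster, only the
# uplift beyond that can be credited to better RT hardware.

raster_uplift = 1.30   # hypothetical raster-only uplift between two cards
rt_on_uplift = 1.38    # hypothetical uplift with RT enabled

rt_specific_gain = rt_on_uplift / raster_uplift
print(f"RT-specific gain once raster is factored out: "
      f"{(rt_specific_gain - 1) * 100:.1f}%")
# ~6.2% with these placeholder inputs
```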
You definitely can't isolate them, but I've noticed they haven't upped the RT core count much each generation, likely due to cost. They probably want to keep the same ratio of RT to raster performance so people stay reliant on their features.