20 percent is just a "you'd better have done nothing" kind of uplift. NV, with major help from AMD, are doing their best to gaslight us into thinking it's okay for a next-gen $600 GPU to only outperform last-gen $600 GPUs by a dozen percent. Twenty years ago, it was normal for $100 GPUs to outperform last-gen $200 GPUs by 100 to 300 percent, depending on the game.
Sure, it's partially just an attempt to extract maximum profits. That became really noticeable with how, over time, NV shifted the die stack upward: what used to be considered mid-range became high-end. The 4060 is the culmination of this approach so far, blatantly a $200-class 4050 being sold as a 4060 for a hundred more. On the other hand, we obviously will never see the bonkers improvements of the early GPU days; that always happens as tech matures. I don't think NV is necessarily sandbagging; they are just pricing each gen-over-gen gain as high as they think they can get away with. So far it works for them, unfortunately, especially with the consumer cards being less of a priority. For all the power the 4090 has, it's still a cut-down reject of a card. Funny how, technically, the best Ada silicon quality you can get as a consumer is probably the 4080S and not the halo card.
The 1080 Ti is faster than the 980 Ti by about two-thirds, roughly 50 percent faster per dollar.
The 2080 Ti is faster than the 1080 Ti by about a third while also being more expensive. It was, and still is, a horrible $-per-FPS release.
The 3080 Ti is faster than the 2080 Ti by about 50 to 60 percent. Mediocre at best considering the further increased price: roughly a 40% FPS-per-$ improvement.
The 4090 is faster than the 3080 Ti by roughly the same 50 to 70 percent. Nothing impressive there either, since the 4090 is so ridiculously expensive (rough per-dollar math below).
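Rough per-dollar math, if anyone wants to sanity-check it. The MSRPs below are my assumptions (launch list prices as I remember them; street prices were often worse), and the relative-performance numbers are the ballpark estimates above rather than benchmark data, so swap in your own figures:

```python
# Back-of-envelope FPS-per-dollar change between generations.
# Prices are assumed launch MSRPs (USD); "perf" is relative performance
# with the 980 Ti as 1.0, built from the rough estimates quoted above.
cards = [
    ("980 Ti",   649, 1.00),
    ("1080 Ti",  699, 1.00 * 1.66),          # ~two-thirds faster
    ("2080 Ti",  999, 1.00 * 1.66 * 1.33),   # ~a third faster
    ("3080 Ti", 1199, 1.00 * 1.66 * 1.33 * 1.55),
    ("4090",    1599, 1.00 * 1.66 * 1.33 * 1.55 * 1.60),
]

prev_value = None
for name, price, perf in cards:
    value = perf / price                     # relative FPS per dollar
    if prev_value is not None:
        change = value / prev_value - 1.0    # gen-over-gen FPS-per-$ change
        print(f"{name}: {change:+.0%} FPS per dollar vs the previous card")
    prev_value = value
```

With those inputs you get the same shape either way: a healthy per-dollar jump at Pascal, a regression at Turing, and only a modest gain at the top of the Ada stack. Change the prices and the exact percentages move, but not the overall picture.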
This seems to be an attempt at framing information negatively. There is a 4080 Super, but you jumped to the 4090. If you were just referring to the highest-end card of each consumer series, the 3090 Ti existed and was more expensive than the 4090, but you skipped that one. Not sure what the point was besides denigrating Nvidia for perceived misconduct in pricing.
As expected, the argument just boils down to RT performance; fill rates and compute are not particularly related to RT. Worse RT isn't indicative of "serious issues with their architecture". I'm sure most of these deficiencies come down to a few operations in the RT pipeline that are a lot slower on RDNA3. Hardware-related problems are also obfuscated by the tons of Nvidia-sponsored titles, where I have no doubt developers spend most of their time, if not all of it, profiling code and optimizing for Nvidia hardware. If you look at the PS5, it's quite impressive that developers can squeeze decent RT effects out of what's otherwise laughably underpowered RDNA2 hardware. Does anyone spend that much effort when they port their games to PC? It's speculation, but I seriously doubt it.
Which is only technically not a "normal" 4080. We are yet to see a game where it's faster than a 4080 by more than 10 percent. This is still, however, an xx70-class GPU sold for double the price and praised like it's the CEO of innovation, whilst in reality it only shows a mild decrease in NV's greed.
4090 is a more cut-down Ada than 3080 Ti is a cut-down Ampere. Much more so.
My point stands: a 20 percent gen-to-gen FPS-per-$ improvement is bad, horrid, putrid, you name it. You can't justify it. The only reason it's a thing is that we don't have real competition. AMD do not try to compete (there was absolutely no reason to ask $1000 for a lame-ass 7900 XTX other than "oh, these greens asked even more for their quote-unquote 4080"). The market proved it: despite being cheaper, the 7900 XTX sold in five-plus times lower numbers worldwide, and it didn't affect NV's SKU pricing whatsoever. Not to mention AMD still use Turing GPUs as their reference point on RDNA3 presentation slides. That alone proves they are just ambient noise.
Intel are trying, but they currently can't make NV sweat. I don't see how they will this year or next, yet fingers crossed they do.
...Dro's point. He's saying the 7900 XTX underperforms in general, and by a lot. A properly tuned RDNA3 would've allowed the 96 CUs of the 7900 XTX to obliterate the RTX 4080 in pure raster while maybe trailing a tad in RT, but nothing all too crazy. And by obliterate, we mean a 30+ percent difference, not these puny 3 to 15 percent wins in AMD-favouring games.
I also disagree on scaling being a real issue here. The 7800 XT has 60 CUs, the 7900 XTX has 96 CUs, and the latter is meant to be about 60 percent faster...
The problem isn't scaling. The problem is RDNA3 itself. It's underdelivering no matter how many CUs there are.
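Quick sanity check on that, for anyone who wants to run the numbers themselves. The observed speedup is an assumption (plug in whatever your preferred review average shows for the 7900 XTX over the 7800 XT at 4K), and clock and bandwidth differences are ignored:

```python
# CU scaling check: RX 7800 XT (60 CUs) vs RX 7900 XTX (96 CUs).
# observed_speedup is an assumed review-style average, not a measured number;
# clock speed and memory bandwidth differences are ignored for simplicity.
cu_small, cu_big = 60, 96
theoretical = cu_big / cu_small - 1.0      # +60% compute units
observed_speedup = 0.45                    # assumed ~45% faster at 4K raster

efficiency = (1 + observed_speedup) / (1 + theoretical)
print(f"Theoretical uplift: {theoretical:.0%}")
print(f"Scaling efficiency: {efficiency:.0%} of perfect linear scaling")
```

Roughly 90% of perfect linear scaling with those inputs, which is exactly why I say scaling isn't the problem.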
The 7900 XTX was never going to "obliterate" a 4080; it has 25% more shading power, which doesn't scale linearly anyway. The scaling is much worse going from AD103 to AD102, for example.
AD102 has about 70% more shading power than AD103, yet the 4090's lead over the 4080 isn't even half of what you'd expect from that, not even in RT workloads. It's obvious to me that, of the two, Ada is actually the one with the much bigger architectural problem; it's comically bad when you realize the 4090 doesn't even use a fully enabled AD102 chip. RDNA3 is doing alright.
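Run the same math on Ada and you can see what I mean. The shader counts are the public specs as far as I recall them (9,728 for the 4080, 16,384 for the 4090); the ~30% lead is an assumed review-style 4K average, so adjust it to taste:

```python
# Same scaling check for Ada: RTX 4080 (AD103) vs RTX 4090 (cut-down AD102).
shaders_small, shaders_big = 9728, 16384
theoretical = shaders_big / shaders_small - 1.0   # ~ +68% shading power
observed_speedup = 0.30                           # assumed ~30% faster at 4K

efficiency = (1 + observed_speedup) / (1 + theoretical)
print(f"Theoretical uplift: {theoretical:.0%}")
print(f"Scaling efficiency: {efficiency:.0%} of perfect linear scaling")
```

That lands well below the RDNA3 figure from the earlier post, which is the point: AD103 to AD102 scales noticeably worse than Navi 32 to Navi 31.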
Some might still wonder why there was no 4090 Ti; probably because it would most likely struggle to be even 5% faster than a 4090.
No, it's not even close to 50%, more like 30%, and that doesn't scale linearly either. RDNA3 is functioning as expected; you're just a troll, as usual, who can't even get his numbers right. This argumentation is completely meaningless anyway: comparing FP32 performance, ROPs, fill rates and whatever else can only give you rough performance expectations, and variations of ±10% are perfectly in line. Claims such as "there's something clearly wrong blah blah blah" are founded on nothing.
Ampere famously doubled FP32 performance as well, and it obviously wasn't twice as fast. I do not recall any "there is something very wrong with Ampere hur dur" comments at all; people just love clowning on AMD again for literally no reason.
By whom? All I see is an X dollar RDNA GPU doing a worse job overall than an X dollar Ada GPU, also being 25 percent more complicated. And it's not my perception, you can see the numbers for yourself.
With the higher G6X latency added into the mix, it's closer to 50% than you think it is. Of course it's only 35% on paper, yet in practice it's more. But RDNA3 can't make use of it regardless.
Latency has nothing to do with bandwidth. GPU workloads are also notoriously tolerant of high latency; it simply doesn't matter as much, which is why GPUs can make use of much faster memory chips than CPUs in the first place, chips that always come with higher latency, in case you didn't know.
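For reference, the on-paper number is nothing more than bus width times per-pin data rate; latency doesn't enter into it anywhere. The specs below are the published figures as far as I recall them (384-bit GDDR6 at 20 Gbps on the XTX, 256-bit GDDR6X at 22.4 Gbps on the 4080):

```python
# Peak memory bandwidth is just bus width times per-pin data rate;
# latency does not appear anywhere in this calculation.
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin   # GB/s

xtx = bandwidth_gb_s(384, 20.0)        # RX 7900 XTX: 960 GB/s
rtx_4080 = bandwidth_gb_s(256, 22.4)   # RTX 4080: 716.8 GB/s
print(f"7900 XTX: {xtx:.0f} GB/s, RTX 4080: {rtx_4080:.1f} GB/s "
      f"(+{xtx / rtx_4080 - 1:.0%} for the XTX)")
```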
"It's more than you think", dude give me a brake, stop talking about this stuff like like you have a clue. This is my last comment on the matter, this is getting dumb.
By whom? All I see is an X dollar RDNA GPU doing a worse job overall than an X dollar Ada GPU, also being 25 percent more complicated. And it's not my perception, you can see the numbers for yourself.
How are they both X-dollar GPUs? One is X, the other is more than 1.6X. So, in your tiny mind, a GPU that sells for well under $1K should perform the same as the $1.7K one, and it's only because RDNA3 sucks that it doesn't?
Damn, y'all really turned a Battlemage thread into Nvidia vs AMD. The inability to stay on topic is astounding. I will pray all of you manage to get your blankets over you tonight.