It's not low performance.
It's ok for a 70-series card to match the Ti/90 of the previous generation.
The 3070 did the same to the 2080 Ti.
The 4070 even has the advantage of actually beating the 3090 Ti in newer games, which the 3070 couldn't claim against the 2080 Ti.
For $900? Absolutely it would be low performance.
The problem is that NVIDIA appears to be going backwards in terms of perf/dollar. It looks more and more credible to me that they frontloaded the 4090 because the 4090 is where they put the bulk of their generational performance increase. So sure, the 4090 looks good if you compare it to previous halo cards that were an absolutely terrible value--and no one ever pretended otherwise, by the way. Everyone acknowledged that 80 and below were where the value lay, yet all of a sudden the 4090 releases and we're supposed to fall all over ourselves congratulating NVIDIA for releasing a $1600 Ada card that handily beats a ludicrously overpriced Ampere card whose MSRP was based on a concurrent and unprecedented GPU shortage.
But the 3090 and its Ti used the same die as the 3080 (GA102), whereas this time the 4080 sits on a smaller die (AD103) than the 4090's AD102. Any projections we discuss must take that into account; in other words, we can't count on the lower-tier cards performing at traditional levels relative to the halo product. NVIDIA's own charts bear me out on that, at least so far.
I'm glad they relented on the goofy naming scheme for the $900 "4080," but it isn't obvious that this card won't come back with a different name at an equally ridiculous price--or, as some in this thread have suggested, perhaps this card will just disappear entirely for a long time while NVIDIA burns off its remaining stock of Ampere. Either way, the $1200 version of the 4080 doesn't exactly look like a value king either, ATM. It looks like it will give you fewer frames per dollar than its predecessor once you strip away the hocus pocus about "DLSS 3.0"--which is nothing of the kind, incidentally; 3.0 has nothing to do with the extant versions of DLSS, and it's far less useful. The name implies that 3.0 (interpolation, "fake" or visual-only FPS) is better than 2.0 (upscaling, real FPS), which is nonsense.
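As a rough sanity check on that frames-per-dollar claim, here's a trivial sketch using launch MSRPs only--the $699 figure for the 3080 10GB is from memory, the 100 FPS baseline is purely hypothetical, and street prices and actual benchmarks will obviously move the numbers around:

```python
# Frames per dollar as a simple ratio. Prices are launch MSRPs ($1,199 for the
# 4080 16GB, $699 for the 3080 10GB -- my recollection, not figures from this
# thread), and the 100 FPS baseline is an arbitrary stand-in for a raster suite.
def frames_per_dollar(avg_fps: float, price_usd: float) -> float:
    return avg_fps / price_usd

baseline = frames_per_dollar(100, 699)   # 3080: ~0.143 FPS per dollar
uplift = 1199 / 699                      # ~1.72x
print(f"4080 needs {100 * uplift:.0f} FPS (~{(uplift - 1) * 100:.0f}% faster) "
      f"just to match the 3080's {baseline:.3f} FPS per dollar")
```

In other words, the 4080 would need roughly 70% more raster performance than the 3080 just to tie its predecessor's value--and that's before you ask whether the generated frames should count at all.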
NVIDIA isn't just comparing apples to oranges in this case; they're selling you a single grape and telling you that it's the hottest new thing in orange technology.
That could be another reason NVIDIA led with the 90 card this time--DLSS 3.0 looks better visually at higher frame rates. Essentially, they're selling you a feature that purports to raise frame rates dramatically, and thus a feature that would logically appeal most to lower-end consumers, but the feature really only works well when you already have high frame rates (and when you have an extremely high-refresh-rate monitor to take advantage of the extra AI-generated frames; since those frames don't reduce input latency, any of them above your monitor's max refresh rate are pointless).
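To put some made-up numbers on that last point, here's a minimal sketch; the 2x multiplier is the rough best case for frame generation, and everything else is assumed for illustration:

```python
# Minimal sketch of the refresh-rate ceiling: frame generation multiplies what
# gets presented, but input latency still tracks the rendered frame rate, and
# anything beyond the monitor's refresh rate never reaches the screen.
# (All numbers here are assumptions for illustration, not measurements.)
def displayed_and_wasted(rendered_fps: float, gen_multiplier: float, monitor_hz: float):
    generated = rendered_fps * gen_multiplier
    displayed = min(generated, monitor_hz)
    wasted = max(generated - monitor_hz, 0)
    return displayed, wasted

print(displayed_and_wasted(120, 2, 144))  # (144, 96): 96 of 240 frames are never shown
print(displayed_and_wasted(120, 2, 240))  # (240, 0): only a 240 Hz panel uses them all
```

So the buyers best positioned to benefit are exactly the ones who already have a 4090-class card and a very fast panel, not the lower-end crowd the marketing implies.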
And I'm not fanboying for AMD here, either. You'd think this situation would present a great opportunity for AMD to seize the day with aggressively priced products, but I'll believe it when I see it. NVIDIA sure doesn't seem to be worried. For the moment, I suppose at least Intel's getting a breather to refine Arc's drivers. Remember when we all lamented the timing of Intel's launch? Well, Team Blue may not have to worry about next-gen competition in their current price bracket any time soon. A third credible player in this market can't come soon enough.