How much of an issue would the 192 bit bus and only 12 GB VRAM be in the future? I cannot afford a 4080 or 7900 XTX; the most I can go is the 7900 XT. That one has a 320 bit bus and a whopping 20 GB of VRAM. The 4070 is cheaper, and thus the more attractive option. I don't care for RT and FSR is good enough for me, so those are not a factor in my choice. But with only 12 GB of VRAM (which is already borderline at 4K) and only 192 bit, I fear that the 4070 is a great performer today, but as newer, more advanced games release, it won't have the lasting power of the 7900. (I remember the old days, when the rule of thumb was: avoid the 128 bit cards, always go for the 256 bit ones.)
The 12 GB of VRAM wouldn't be an issue (not in realistic workloads, anyway), as rasterization workloads will be bottlenecked by memory bandwidth or computational power first. (RT workloads tend to be bottlenecked by computational power first.)
The one thing that nearly no one gets is that given a fixed memory bandwidth, the amount of VRAM which can be used within a given timeframe is also fixed. This is regardless of the game, algorithm or API. So if your GPU has 504 GB/s and your desired frame rate is 120 FPS, then the maximum theoretical utilized VRAM in a single frame is 504 / 120 = 4.2 GB. But this assumes the GPU accesses each byte of memory only once (which it doesn't) and does so at 100% efficiency, so the real usage is probably less than half of this. The next logical deduction from this is that if you mod a game with a giant texture pack with 16x the normal texture size, the frame rate will fall sharply simply because the memory bus can't deliver, long before you've fully utilized the VRAM. You'll quickly fall below 30 FPS because of the memory bottleneck.
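To make the arithmetic above concrete, here is a minimal sketch of the bound. The function name and the card figures are my own illustration (504 GB/s is the 4070's quoted bandwidth; 800 GB/s the 7900 XT's), not measured data:

```python
def max_vram_touched_per_frame(bandwidth_gb_s: float, fps: float) -> float:
    """Theoretical upper bound on unique VRAM (in GB) touched in one frame,
    assuming every byte is accessed exactly once at 100% bus efficiency.
    Real usage is far lower, since data is re-read many times per frame."""
    return bandwidth_gb_s / fps

# RTX 4070 (~504 GB/s) at a 120 FPS target:
print(max_vram_touched_per_frame(504, 120))  # 4.2

# RX 7900 XT (~800 GB/s) at the same target:
print(max_vram_touched_per_frame(800, 120))  # ~6.67
```

The point of the bound is that raising the FPS target shrinks the per-frame budget proportionally, so bandwidth, not capacity, is what you run out of first.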
So the next big question then: what about the RX 7900 XT with its tempting 20 GB and impressive 800 GB/s bandwidth? Well, if this card were better balanced, it would completely outclass the RTX 4070 Ti even today, but it doesn't, because it's limited on the computational side*. Not only in pure TFlops and so on; there could be GPU scheduling and numerous tiny architectural details which "hold back" performance, and this is always the case with any GPU design. So for this reason, base your choice on a wide set of benchmarks (like this review), and the conclusions drawn from them are likely to hold true for the useful lifespan of the products. Both 2 years and even 5 years from now, the relative performance between the products is likely to remain the same. And this has held true in the past: looking back at Polaris, Vega and Navi, they didn't stand the test of time any better than their green counterparts.
So pick the card that fits your situation best now, and it's likely to be the best choice in the long term.
*)
A texture mod pack would probably be the exception here, as these are unbalanced workloads, where this card might get an edge.
1. Well, it is borderline. I am hitting a wall with most games at 8 GB at 3440*1440 (Shadow of the Tomb Raider hits 7 GB at 2560*1440 already). 4K is going to push that past 10 GB easily. So in 2 years' time, do you think 12 GB will be enough?
Don't forget, memory allocated isn't the same as memory used. GPUs also compress buffers and some textures heavily.
Secondly, to reiterate my point above: if your future games in two years are going to allocate more, then they will also demand more bandwidth and likely more computational power too, so inevitably you are going to sacrifice FPS or detail levels, and in either case you are not likely to run out of VRAM first.