That's speculation. One shouldn't buy any hardware with only the future in mind, as the future is fluid and you never know what it brings. There might come some revolutionary new technology that makes every current gen GPU obsolete. Or we might get another episode of Nvidia 50-series stagnation. Who knows. God forbid, you could also die in 6 months, and then you've spent your last 6 months speculating about future GPUs instead of playing your games. Crude example, but not totally unrealistic.
At Scan UK:
View attachment 388816
The only RDNA 4 cards in stock at the moment.
Edit: The only 5070 in stock is the MSi Gaming Trio OC for £709.98.
"One should not buy hardware with the future in mind" is...well...it's an opinion! You and I very much disagree on that. FWIW, I do play games some, still, but I actually enjoy how they work more than using them.
I also had a 2080 ti/6800xt that lived pretty damn well for going on a bajillion years. What other hardware has done that? That is what type of person I am. 1080p/1440p hardware that lives forever.
And the 9070 will not hold up in the next generation of gaming. I think the 9070 XT will...but again, at the low end of it (which is fine)! People that buy that now can probably have at least an okay experience for a long time.
Like the aforementioned parts did. I suppose people could have done the same with a 4080/4090 (at the appropriate tiers), but this is for a new tier of buyer (more in the realm of what most people would actually spend).
And again, next gen it will be even lower, including the former '90' class, and bring in more people.
I know that it is weird to analyze this stuff to death. But, you know, guy has to have a hobby. Also, I do this so other people don't have to, and you can spend your time doing whatever while I figure it out.

I told people that what happened with the 4070 series would happen (and it did), and why the 5070 would be bad (and it was), and now I'm telling you the problems with the 9070, and the solutions of the 9070 XT.
It's not really speculation as much as you think.
nVIDIA's bandwidth requirements are very much solvable; they have now used these configurations for multiple generations, and they are one of the only configurations that make any sense.
It is also said Blackwell is back-ported, meaning Rubin is likely the fulfillment of that actual architecture.
Furthermore, we can *assume* that nVIDIA will first use the dense process (aimed at ~3700-4000 MHz) because there is a Rubin+.
For those that don't know what "+" means, it is essentially the same as going from G92 to G92B: a similar architecture but with an enhanced process (Ada -> Blackwell is this too).
Hence the above assumption, which really isn't that speculative. It's possible nVIDIA could jump straight to a config similar to AMD's, or that AMD might first release slower chips like nVIDIA.
But it makes sense for both companies to do as I've stated, both because it follows their M.O. and because AMD needs to make a move like this (while nVIDIA will do whatever it can to save money).
nVIDIA can rest on their laurels and reputation, while AMD cannot. Since what I am proposing is possible, they not only *should* do it, they likely are.
This is shown even in the possible clocks of N48, which reach up to 3.9+ GHz. If you add the capabilities of 3nm (16.4%), that puts clocks at a *capability* of 4500-4600 MHz. Apple M4 is 4.4-4.51 GHz on N3E.
It also leaves room to improve the chips further on N3P (or any node past N3E), perhaps buffing cache and/or scaling memory slightly beyond 40 Gbps.
I do expect power to be a concern around 4400 MHz on N3E, but if you figure in the potential improvements of N3P (~4% performance), that preparation with N48 lines up PERFECTLY.
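To show that clock math isn't hand-waving, here's the back-of-the-envelope version as a quick Python sketch. The ~3.9 GHz N48 ceiling, the 16.4% N3E uplift, the ~4% N3P uplift, and the ~4.4 GHz power-limited figure are just the numbers from above; none of this is a spec.

```python
# Rough clock-scaling sketch using the figures quoted above (assumptions, not specs).
n48_peak_ghz = 3.9        # reported peak clock capability of N48
n3e_uplift   = 0.164      # quoted frequency gain of N3E
n3p_uplift   = 0.04       # quoted N3P performance gain over N3E

n3e_capability = n48_peak_ghz * (1 + n3e_uplift)    # ~4.54 GHz -> the 4500-4600 MHz range
n3e_practical  = 4.4                                 # assumed power-limited clock on N3E
n3p_practical  = n3e_practical * (1 + n3p_uplift)    # ~4.58 GHz with N3P headroom

print(f"N3E capability:    ~{n3e_capability:.2f} GHz (Apple M4 is 4.4-4.51 GHz on N3E)")
print(f"N3E power-limited: ~{n3e_practical:.1f} GHz -> N3P: ~{n3p_practical:.2f} GHz")
```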
I think AMD will very much try to make SKUs that compete one level up from nVIDIA. I think the cut-down SKU will be similar to nVIDIA's comparable high-end chips, just at lower clocks.
Both faster than the 9070 at the bottom tier (of actual gaming GPUs).
This can be ascertained from the idea that AMD is using chiplets, allowing higher yields and easier binning for clocks/leakage, as opposed to nVIDIA perhaps staying monolithic, which would also explain their situation.
All of this literally follows what AMD and nVIDIA have done for generations, follows the capability of the n3b/n3e/n3p process nodes, available bandwidth configurations, and current trends in game requirements.
It also would allow cheaper SKUs to replace the current ones, but this time no longer limited by constraints such as <16GB or 45 TF, since for both companies those configurations would solve them.
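As for the 45 TF line, the standard FP32 throughput formula (2 FLOPs per clock per shader x shader count x clock) shows how easily next-gen configurations would clear it; the shader counts and clocks below are purely illustrative, not leaked SKUs.

```python
# FP32 TFLOPS = 2 FLOPs/clock/shader * shaders * clock (GHz) / 1000
# (dual-issue designs effectively double the per-clock figure; ignored here for simplicity)
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000

# Hypothetical configurations, just to show where 45 TF sits:
for shaders, clock in [(4096, 4.5), (5120, 4.5), (6144, 4.0)]:
    print(f"{shaders} shaders @ {clock} GHz -> {fp32_tflops(shaders, clock):.1f} TF")
# 36.9 TF, 46.1 TF, 49.2 TF
```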
You think I'm just throwing shit at the wall, but I'm really not. I can explain it all day, and I accept heart eyes emoji bets at any time.
If at any time what I am saying does not make sense and/or conflicts with any known information or anything that can dispute it, please let me know, as I certainly am fallible. You're right, it's hypothesis.
Reality may turn out to be slightly different, but likely still a similar rough parity.
I understand you are not a futurist; that you live the life in front of you. That is fine, not everyone dissects everything to pieces and prepares. I do. That way I don't get blind-sided.
If you're okay with that possibility, if not probability, that again is your prerogative, but then don't be the type (not necessarily you) to complain when that future arrives all the same.
