The way I view a real 4K card (I think the same way most people view it) is that it can run the newest AAA games in 4K today. None of that nonsense you mentioned that some people may view as 4K. I could tell you I'm running 4K on Minecraft with a reduced field of view on my old R9 280X. That doesn't make my 280X a true 4K card. Sure, AAA games get more demanding over time, and people will argue about the perfect frame rate a card should output for them, but all in all, hitting at least 60 fps on high or ultra settings in all the new AAA titles that are out today is where it's at with 4K for most people. I think the 3090 could be the first card that will actually offer that, which is why I called it the first true 4K card.

There are no 4K cards and there never will be. There still isn't a 1080p card. The goalposts move, and viewport resolution is already far from the only major influence. What really matters is how you render within the viewport. We already have many forms of pseudo 4K with an internal render resolution that is far lower, and even dynamic scaling, on top of all the usual LOD stuff etc. etc. On top of even that, we get tech like DLSS and obviously RT.
Basically, with all of that piled up, anyone can say they're running 4K, or 1080p, or whatever seems opportune at the time.
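To put rough numbers on the pseudo-4K point, here is a minimal sketch (the render scales are hypothetical, roughly in line with common upscaler presets) of how a 3840x2160 output can hide a much smaller internal render target:

```python
# Minimal sketch (hypothetical render scales): the same 3840x2160 "4K" output
# can hide very different internal render resolutions once render scale,
# dynamic resolution, or upscalers enter the picture.
OUTPUT_W, OUTPUT_H = 3840, 2160

def internal_resolution(render_scale: float) -> tuple[int, int]:
    """Internal render target size for a given per-axis render scale."""
    return int(OUTPUT_W * render_scale), int(OUTPUT_H * render_scale)

for scale in (1.0, 0.67, 0.5):  # native, ~quality upscaling, ~performance upscaling
    w, h = internal_resolution(scale)
    pixel_share = (w * h) / (OUTPUT_W * OUTPUT_H) * 100
    print(f"render scale {scale:.2f}: {w}x{h} internal ({pixel_share:.0f}% of native 4K pixels)")
```

At a 0.5 render scale, the "4K" frame is internally only 1920x1080 before upscaling, i.e. a quarter of the native pixels.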
CPUs are much different from GPUs when it comes to parallel processing. You can do a lot more with GPUs by just increasing the core counts and improving the memory bandwidth. Graphics is the best-known use case for parallelism, and scaling up memory bandwidth alongside it is exactly what they are doing; it's actually what they have been doing forever. So yeah, if you keep increasing core counts, at some point the cards have to become bigger no matter what, and they have already been getting gradually bigger throughout the years. Nothing new. The new part is this massive jump in size from the last generation, which means they are going to increase the cores by more than usual and clock them much higher. They may even bake in some new stuff that takes advantage of all that bandwidth (over 1 TB/s), which is also a huge jump from the 2080 Ti (~620 GB/s).
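For context on those bandwidth figures, peak memory bandwidth is roughly data rate per pin times bus width; a quick sketch (the >1 TB/s configuration is hypothetical, just one combination that would clear it):

```python
# Rough peak-bandwidth math: GB/s = data rate per pin (Gbps) * bus width (bits) / 8.
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

# 2080 Ti: 14 Gbps GDDR6 on a 352-bit bus
print(peak_bandwidth_gbs(14, 352))   # 616.0 GB/s (the ~620 GB/s mentioned above)

# One hypothetical combination that clears 1 TB/s: 21 Gbps on a 384-bit bus
print(peak_bandwidth_gbs(21, 384))   # 1008.0 GB/s
```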
It's not really the right perspective, perhaps. If Nvidia has deemed it necessary to make a very big, power-hungry GPU that even requires new connectors, it steps royally outside their otherwise very sensible product stack. If that is something they iterate on further, it only spells that Nvidia can't get a decent performance boost from node or architecture any more while doing a substantial RT push. It means Turing is the best we'll get on the architecture side, give or take some minor tweaks. I don't consider that unlikely, tbh. As with CPUs, there is a limit to the low-hanging fruit.
This is not good news. It is really quite bad, because it spells stagnation more than it does progress. The fact that it is called the 3090 and is supposed to carry a $2K price tag tells us they want to ride that top end for quite a while and that this is already their big(gest) chip. None of that is good news if you ask me.
Another option, though, is that they could not secure the optimal node for this, or that the overall state of the nodes isn't quite up to what they had projected just yet. After all, 7nm has been problematic for quite some time.
But none of this really tells you anything about the architectural improvement of Ampere. None of the info on power consumption or die size really shows you anything about that, and the price means very little here either. It could still be a totally crazy performance increase per watt; we just don't know. People like to speculate and be negative without actual info.
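A made-up example of that last point, just to show that higher board power and a big perf-per-watt gain are not mutually exclusive:

```python
# Made-up numbers, purely illustrative: a higher board power does not rule out
# a large perf-per-watt improvement.
old_perf, old_power_w = 1.0, 250   # normalized performance, watts (hypothetical "last gen")
new_perf, new_power_w = 1.9, 350   # hypothetical next-gen figures
gain = (new_perf / new_power_w) / (old_perf / old_power_w)
print(f"perf/W improvement: {gain:.2f}x")   # ~1.36x despite drawing 100 W more
```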