Fetching only happens when data the GPU needs is not already in cache. Typically, cache designs and memory management include look-ahead logic that checks what is needed next and fetches or swaps accordingly, so that data is available via the cache. The cache control logic only triggers a fetch from GPU RAM or system RAM when a miss occurs.
The number of clock cycles needed to fetch data increases at each layer of the memory hierarchy: cache is fastest, GPU RAM requires more cycles, and system RAM more still. And pulling data in real time from whatever storage medium is available is worse by a lot.
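To make the hierarchy point concrete, here's a rough Python sketch of the idea. The cycle counts are made-up, order-of-magnitude placeholders chosen only to show the relative ordering, not measurements from any real GPU:

```python
# Toy model of the memory hierarchy described above. The cycle counts are
# hypothetical placeholders; only the relative ordering matters.
LATENCY_CYCLES = {
    "cache":      200,        # hit in on-die cache
    "gpu_ram":    600,        # cache miss, data sits in GDDR on the card
    "system_ram": 2_000,      # miss that has to go over PCIe to host RAM
    "storage":    3_000_000,  # fetching from the storage medium in real time
}

def fetch_cost(resident_in: str) -> int:
    # Each level is only consulted after a miss in the level above it, so
    # the dominant cost is the latency of the level that finally hits.
    return LATENCY_CYCLES[resident_in]

for level in LATENCY_CYCLES:
    print(f"data resident in {level:10s} -> ~{fetch_cost(level):>9,} cycles")
```

Prefetching is exactly an attempt to make the first case the common one: move data up the hierarchy before the miss ever happens.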
Good points. But now we also have consoles that can access storage faster than a PC can. Effectively, they might be fetching faster than any normal PC with storage over SATA 600. They will use that to their advantage, and a regular GPU just can't. Nvidia said something about RTX I/O to counteract that. I'm looking forward to seeing more of it, but if you believe that technology is there to save them, then you also have to believe that the VRAM that's there isn't enough for the next gen of games. And what other conclusion can you draw anyway? They don't develop it just because they're bored and want to bleed money.
I just can't conclude that the current capacities are sufficient going forward; there are too many writings on the wall, and if you want to stay current you need Nvidia's support. Not just for DLSS or RTX... but also for things like RTX I/O. They are creating lots of boundaries by offering lower VRAM and intend to fix it with proprietary stuff. I'm not buying that shit, just as I didn't buy into G-Sync until it no longer carried a premium... and honestly, even now that I have it, I'm not using it, because every single time there's a small subset of situations where it's not flawless and worry-free.
This is all more of the same. Use the basic, unaltered, open-to-everyone tech or you're fucked and essentially just early-adopting forever. I'm not doing that with my hard-earned money. Make it work everywhere and make it no-nonsense, or I'm not a fan. History shows it's impossible to tailor the driver to every single game coming out, and waiting for support is always nasty.
"Feel" is the key word here. Such discussions should be based on rational arguments, not feelings.
Cards manage memory differently, and newer cards do it even better than the RX 580 does.
While there may be outliers that manage memory badly, games in general have a similar usage pattern across memory capacity (used within a frame), bandwidth, and computational workload. Capacity and bandwidth in particular are tied together: if you need to use much more memory in the future, you also need more bandwidth. There is no escaping that, no matter how you feel about it.
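As a back-of-the-envelope illustration of that coupling (the numbers are my own, picked only for the arithmetic): if a frame touches a given amount of unique data, the bus has to move at least that much every frame:

```python
# Why capacity and bandwidth are tied together: the more memory a frame
# actually touches, the more bytes per second the bus must deliver.
def required_bandwidth_gbs(working_set_gb: float, fps: float) -> float:
    """Minimum bandwidth to stream the per-frame working set once."""
    return working_set_gb * fps

# Example: touching 4 GB of unique data per frame at 60 fps already needs
# 240 GB/s, before any re-reads, overdraw or writes are counted.
for working_set_gb in (4, 6, 8):
    gbs = required_bandwidth_gbs(working_set_gb, 60)
    print(f"{working_set_gb} GB touched per frame @ 60 fps -> {gbs:.0f} GB/s minimum")
```

So a card that doubled its capacity without touching bandwidth would just sit on data it can't feed to the shaders fast enough.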
You're not wrong, but nobody can look into the future with 100% certainty. We're all predicting here. And I think, this time around, the predictions on both sides have rational arguments behind them that deserve to be taken seriously. It's absolutely NOT certain the current amount is sufficient going forward.
Do I trust Nvidia and AMD to manage memory allocation correctly? Sure. But they are definitely going to deploy workarounds to compensate for whatever tricks the consoles do. Even AMD now has tech that accelerates memory access. And in Nvidia's case, I reckon those workarounds are going to cost performance. Or, worded differently: they won't age well.
Firstly, people like you have complained continuously about Nvidia not being "future proof" at least since Maxwell, because AMD often offered ~33-100% more VRAM on their counterparts. This argument is not new with Ampere; it comes around every single time. Yet history has shown Nvidia's cards have been pretty well balanced over the last 10 years. I'll wait for the evidence that Ampere is any less capable.
To this I can agree, although I don't complain about future proofing on most cards at all; it's just the last two Nvidia gens that lack it. Most cards are well balanced, and I think Turing also still has its capacities in order, even if slightly reduced already. But I also know Nvidia is right on the edge of what the memory can handle versus what the core can provide. It echoes in their releases: the 9 and 11 Gbps memory updates for Pascal, for example, earned same-shader-count cards about 3-4% in performance.
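For reference, the bandwidth math behind those Pascal refreshes (the 1060's 8 to 9 Gbps and the 1080's 10 to 11 Gbps variants) is simple; a quick sketch:

```python
# Peak GDDR bandwidth: per-pin data rate (Gbps) times bus width, in GB/s.
def bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

# GTX 1060: 192-bit bus, 8 -> 9 Gbps. GTX 1080: 256-bit bus, 10 -> 11 Gbps.
for name, old, new, bus in (("GTX 1060", 8, 9, 192), ("GTX 1080", 10, 11, 256)):
    before, after = bandwidth_gbs(old, bus), bandwidth_gbs(new, bus)
    print(f"{name}: {before:.0f} -> {after:.0f} GB/s (+{after / before - 1:.0%})")
```

A roughly 10-12% bandwidth bump translating into 3-4% real performance on identical shader counts is what you'd expect from a card that is partially, but not entirely, bandwidth-bound.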
With the 3080 most of all, but in some ways also the 3070, I believe they crossed the line. In the same way as they did with the 970. I reckon if we reviewed it now against a number of current cards with 4GB and up, the 970 would punch below its weight compared to launch. The same goes for the Fury X example I gave against the 980 Ti; that one WAS actually benched, and the results are in some cases absolutely ridiculous. It's not a one-off. Need more? The 660 Ti was badly balanced and only saved by a favorable price point... the 680 vs the 7970; hell, even the faster-VRAM 770 with 2GB can't keep up.
Evidence for Ampere will have to come in on its own; I'd be truly interested in a topic or deep dive like that three years down the line.
Now you're approaching conspiracy territory.
You know very well that TSMC's 7nm capacity is maxed out, and Nvidia had to spread their orders to get enough capacity for launching consumer products (yet it's still not enough).
Double VRAM versions (assuming they arrive) will come because the market demands it, not because Nvidia has evil intentions.
Eh, 'motive' is meant in the sense that Nvidia offers less VRAM to leave more headroom for the core clock. Not some nefarious plan to rule the world, just the thought that yields weren't great and might be improving now, making the upgrades feasible. And yes, an upside of offering low VRAM caps is that the cards won't last as long. It's just a fact, and shareholders like that.
I'm reading through this thread trying to figure out who to troll, but it's not easy, as both sides have valid points and the reality is probably somewhere in the middle.
For the pro-VRAM crowd: yeah, having more VRAM can't hurt; worst case it performs the same, and you can sleep soundly knowing your "future" is safe.
For the anti-VRAM crowd: for most games at most settings, at least for now, it would be on par with the higher-VRAM version, and when the future happens, the performance hit might be smaller than the percentage of money saved.
Let's say you save 20% by buying the 6GB version. If it has the same performance today and ends up 20% behind in the future, you got the same performance in the present and exactly what you paid for in the future, so you can still feel good about the purchase.
As an example I'll use the 1060 3GB vs 6GB, which isn't a perfect comparison since the two don't have the same shader count, but it's close enough that something can still be learned.
In the initial review, the 1060 6GB was 14% faster at 4K than the 3GB, and looking at today's data for the same cards, the 1060 6GB is 31% faster at 4K than the 3GB version.
So call it a loss of 17% of performance over time, but that's only at a resolution/detail level where neither the 3GB nor the 6GB can produce playable fps; and if you look at the new reviews at 1080p, the difference between the 3GB and 6GB is similar, at 17%.
With that in mind, if I had saved 20% by getting the 3GB model, I definitely got the better deal, both at the time I bought the card and for the future.
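You can put that argument into numbers. A small sketch, with a hypothetical 100/80 price split and the 17% long-term gap from the reviews above:

```python
# Price-per-performance comparison, now and later. Prices are hypothetical;
# the 17% gap is the long-term 1080p difference quoted from the reviews.
def price_per_perf(price: float, relative_perf: float) -> float:
    """Money paid per unit of relative performance (lower is better)."""
    return price / relative_perf

price_6gb, price_3gb = 100.0, 80.0   # assume the 3GB card was 20% cheaper

# At launch, both delivered the same performance at playable settings:
print("launch:", price_per_perf(price_6gb, 1.0), "vs", price_per_perf(price_3gb, 1.0))

# Today the 6GB card is ~17% faster, i.e. the 3GB sits at 1/1.17 of it:
print("today: ", price_per_perf(price_6gb, 1.0), "vs",
      round(price_per_perf(price_3gb, 1 / 1.17), 1))
```

Even with the late-life gap priced in, the 3GB card comes out at about 93.6 cost per unit of performance against the 6GB's 100, which is exactly the point being made.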
If the 3050 Ti 6GB has exactly the same specs as the 12GB and costs 20% less, from a price/performance standpoint one could make a good argument that it's the better purchase than the 12GB version. And if the 12GB 3050 Ti's price is close to the 3060 Ti's, the 3060 Ti will be the better purchase all day, every day, even though it has less VRAM.
So now that I've had time to chew on this and make an educated guess: if anybody questions the existence of either the 6GB or the 12GB 3050 Ti, I would say the 12GB makes less sense from a current and future price/performance standpoint.
If you make an absolute argument, yeah, the 12GB will always be the better card than the 6GB, but that's like being offered both at the same price, where you'd be a fool to pick the 6GB. That's not the reality of things; there is something called price, and that together with fps are the only valid metrics. Everything else (model number, shader count, amount of VRAM, etc.) is all a fart in the wind.
And for people who can't accept the 6GB version, it's nothing but a mental barrier: "how can a 6GB card match an 11GB card like the 1080 Ti? It has to have 12GB of VRAM"... well, it can.
Very fair! Now include the relevance of resale value. That 6GB card has gained about 17% of it over time; the other one lost about as much and has less life left in it going forward. That makes the 6GB one fit for resale, and the 3GB one ready for some low-end rig you might have around the house.