Sunday, June 7th 2020
AMD Declares That The Era of 4GB Graphics Cards is Over
AMD has declared that the era of 4 GB graphics cards is over and that users should "Game Beyond 4 GB". AMD has tested its 4 GB and 8 GB Radeon RX 5500 XT to see how much of a difference VRAM can make to gaming performance. The cards were run through a variety of games at 1080p high/ultra settings on a Ryzen 5 3600X with 16 GB of DDR4-3200 RAM; on average, the 8 GB model performed ~19% better than its 4 GB counterpart. With next-generation consoles featuring 16 GB of combined memory and developers showing no sign of slowing down, it will be interesting to see what happens.
Source: AMD
31 Comments on AMD Declares That The Era of 4GB Graphics Cards is Over
What's the point of showing the benefits of 8 GB of memory on a low(er)-end card?
So then, will entry-level and mid-range cards always have more than 4 GB? I doubt it.
Here I am playing Doom Eternal on a 2 GB 1050 on my other machine.
Likewise, aside from many budget gamers simply not caring about the latest crappily optimized AAAs, most low-end gamers in general also tend to have more common sense than benchmarkers. E.g., rather than artificially crippling frame rates down to 20 fps with Mirror's Edge Catalyst-style "Ultra Mega Super Hyper" presets, they'll just bump the presets down a notch or two and enjoy the game. For most games, High is where actual optimization starts, with Ultra being more like "let's see how much over-exaggerated post-processing cr*p we can fill this with", and personally I turn half that junk off anyway, even without performance/VRAM limitations, simply because I want to actually see what I'm playing...
As you can see in the following analyses, the maximum is 8 GB of VRAM at Ultra 3840x2160.
11 GB or 12 GB would suffice.
www.techpowerup.com/review/red-dead-redemption-2-benchmark-test-performance-analysis/4.html
www.techpowerup.com/review/gears-tactics-benchmark-test-performance-analysis/5.html
www.techpowerup.com/review/resident-evil-3-benchmark-test-performance-analysis/4.html
www.techpowerup.com/review/control-benchmark-test-performance-nvidia-rtx/5.html
Most users don't get near their maximum system RAM, so my personal opinion is that VRAM should be treated the same. There's also not enough memory to go beyond 4K in Resident Evil 3.
A user posted a screenshot some time ago of Resident Evil 2 needing nearly 14 GB of VRAM. There's a thread on it here on TPU.
I would have to agree with AMD, but only for most future games or for people maxing out their games.
With the settings I play at on the 1650 Super, there is only a small share of games where I had to drop texture quality.
The one I remember is Doom Eternal, where I dropped the texture quality from Ultra Nightmare to High so that it would fit inside the VRAM budget; all the rest of the settings were maxed and the experience was 1080p at >60 FPS.
With the new consoles arriving, VRAM and RAM usage will go up, logically!
Cheers
In theory I wouldn't mind if consoles had 4x the amount of VRAM, if it didn't drive up cost significantly and games utilized it in a sensible way. It is possible to use more memory to add more detail in backgrounds etc., but this is the chicken-and-egg problem once again. If the game is at the point where it's swapping heavily, even 16x wouldn't save it, as latency would also be a huge problem, forcing the framerate to a crawl.
8x is enough for resource streaming, though, if it's done properly. Why should you pay for VRAM that you don't need?
Granted, you should have a little margin, but beyond that, what's the point? There is a difference between allocating and actually needing; some games allocate huge buffers.
Also, was that a measurement of the game's own usage, or of total usage? Background tasks can consume a lot, especially Chrome. Buying extra VRAM for "future proofing" has rarely, if ever, paid off in the past. Generally speaking, the need for performance increases just as much (at least the way games are commonly balanced), so the card is obsolete long before you get to enjoy that extra VRAM for gaming.
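For what it's worth, the allocated-vs-used distinction is easy to poke at yourself. The sketch below assumes an NVIDIA card and the nvidia-ml-py (pynvml) bindings; it prints the total VRAM in use versus what each running process has claimed, and even the per-process figure is still an allocation number, not what a game actually touches each frame.

```python
# Minimal sketch: total VRAM usage vs. per-process allocation via NVML.
# Assumes an NVIDIA GPU and the nvidia-ml-py (pynvml) package.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetMemoryInfo, nvmlDeviceGetGraphicsRunningProcesses,
)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)              # first GPU in the system

    mem = nvmlDeviceGetMemoryInfo(handle)
    print(f"Total VRAM : {mem.total / 2**30:.1f} GiB")
    print(f"Used (all) : {mem.used / 2**30:.1f} GiB")   # every process + driver overhead

    # Per-process breakdown: what each application has *allocated*,
    # which is not the same as what it actually needs per frame.
    for proc in nvmlDeviceGetGraphicsRunningProcesses(handle):
        used = proc.usedGpuMemory
        label = f"{used / 2**30:.1f} GiB" if used is not None else "n/a"
        print(f"  pid {proc.pid}: {label}")
finally:
    nvmlShutdown()
```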
I, for instance, have a GTX 680 4 GB in one machine and a GTX 1060 3 GB in another. Guess which one plays games better? It will be interesting to see what they utilize it for.
But if a game is going to have ~50 GB of data per level (uncompressed), just a couple of such games will eat up that entire SSD.
Generally it would make more sense to store the assets with lossy compression at about a 1:10 ratio, which usually retains good enough detail for grainy textures, then decompress on the CPU and send uncompressed data to the GPU. The data needs to be prefetched and ready, but that's not a problem for a well-crafted game engine.
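At a 1:10 ratio, that ~50 GB level would be roughly 5 GB on disk. Below is a toy sketch of the prefetch idea, with zlib standing in for whatever (lossy) codec a real engine would use and made-up asset names and sizes: a background worker decompresses upcoming assets on the CPU while the consumer only ever touches data that is already unpacked.

```python
# Toy sketch of CPU-side decompression with prefetch: a worker thread unpacks
# upcoming assets while the "render loop" consumes already-decompressed data.
# zlib and the asset names/sizes are placeholders for illustration only.
import queue
import threading
import zlib

# Pretend these are compressed assets sitting on the SSD (~1:10 in practice).
compressed_assets = {
    f"level1/texture_{i:03d}": zlib.compress(bytes(64 * 1024), level=6)
    for i in range(8)
}

ready = queue.Queue(maxsize=4)   # small staging buffer of unpacked assets

def prefetcher(names):
    """Decompress assets ahead of time and hand them to the consumer."""
    for name in names:
        data = zlib.decompress(compressed_assets[name])  # CPU decompression
        ready.put((name, data))                          # blocks if staging is full
    ready.put(None)                                      # end-of-stream marker

def render_loop():
    """Stand-in for the GPU upload: only ever sees uncompressed data."""
    while (item := ready.get()) is not None:
        name, data = item
        print(f"uploading {name}: {len(data) // 1024} KiB uncompressed")

threading.Thread(target=prefetcher, args=(list(compressed_assets),), daemon=True).start()
render_loop()
```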
6 GB is now the safe zone, I'd say, for 1080p ultra, which is what even mid-range cards can easily push now. Ampere/RDNA2 might push that up to 8 GB by the end of their generation. At that point, 8 GB will still be the norm for any resolution up to 1080p and even most of 1440p, which is still where the vast majority game, no matter how fat their GPUs are. Many, many years go by before these bars move, and it just follows the common denominator in both the games and the GPUs available. The new consoles and GPUs will push the envelope... but first you need content to utilize it, and only AFTER that is a new baseline established.
Turing is a good example of that. We're still not looking at a lot of RT content, but the next gen will have the hardware widely available. These things happen slowly and in generational leaps.
Another thing: this isn't system RAM, where you need to double up immediately. 8 GB > 16 GB makes for an expensive GPU, and we have 1080 Tis with 11 GB; there are MANY steps between 8 and 16 with coherent VRAM buses. 4K is also not standard, despite the push from manufacturers/market. A few unhappy souls jumped into it and are now constantly tweaking to get the desired performance and pixel density :) It's not really gaining traction just yet.
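(Those "many steps" fall out of how capacity is tied to the memory bus: roughly one chip per 32-bit channel, with 1 GB and 2 GB chips being the common densities. Ignoring clamshell and mixed-density layouts, a quick enumeration under those assumptions looks like this.)

```python
# Rough illustration of why VRAM sizes come in the steps they do: one memory
# chip per 32-bit channel and common 1 GB / 2 GB chip densities fix the
# available capacities for a given bus width (clamshell / mixed-density
# configurations ignored for simplicity).
CHIP_DENSITIES_GB = (1, 2)   # 8 Gb and 16 Gb chips

for bus_width in (128, 192, 256, 320, 352, 384):
    chips = bus_width // 32
    small, large = (chips * d for d in CHIP_DENSITIES_GB)
    print(f"{bus_width:>3}-bit bus -> {chips:>2} chips -> {small} GB or {large} GB")

# 128-bit ->  4 chips ->  4 GB or  8 GB   (e.g. the RX 5500 XT pair above)
# 352-bit -> 11 chips -> 11 GB or 22 GB   (e.g. the 11 GB 1080 Ti)
```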
@John Naylor are you watching this? ;) Live and learn...
The performance numbers beside that single statement are marked as:
Testing done by AMD performance labs 11/29/2019 on Ryzen 5 3600X, 16GB DDR4-3200MHz, ASROCK X570 TAICHI
and the whole context of the blog post seems to only tackle 1080p.
So only 8 lanes, but PCIe 4.0 instead of PCIe 3.0, and still the crippling to only 8 lanes affects the 4 GB card more than the 8 GB card.
Which is to be expected, because the smaller overall buffer leads to more frequent data movement even if all the textures actually needed fit within the 4 GB; the moment new or refreshed data is required, the PCIe x8 link can become an early bottleneck.
On the 8 GB card, the driver and/or game can make use of the bigger buffer by transferring possibly-needed data in advance while the PCIe interface has spare time/capacity, whereas the 4 GB card is already saturated with the data needed at that moment.
This also underlines the sometimes obvious advantage of the cleverer NVIDIA driver when operating at or near VRAM saturation; AMD's data movement strategy sometimes doesn't seem as clever.
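A back-of-the-envelope way to see that effect (purely a toy model: made-up asset sizes, an LRU buffer standing in for the driver's residency management, and ~16 GB/s assumed for PCIe 4.0 x8): when the per-frame working set doesn't fit in VRAM, data keeps getting re-moved over the bus every frame, while the bigger buffer only pays that cost once.

```python
# Toy model: a VRAM-sized LRU cache of equally sized assets. If the per-frame
# working set exceeds the cache, every frame re-fetches data over PCIe; if it
# fits, only the first frame pays the transfer cost. Figures are illustrative.
from collections import OrderedDict

PCIE4_X8_GBPS = 16.0                        # ~16 GB/s usable on PCIe 4.0 x8

def bus_traffic_per_frame(vram_gb, working_set, frames=10, asset_gb=0.25):
    """Return average GB moved over the bus per frame for a given VRAM size."""
    capacity = int(vram_gb / asset_gb)      # how many assets fit in VRAM
    cache, transferred = OrderedDict(), 0.0
    for _ in range(frames):
        for asset in working_set:
            if asset in cache:
                cache.move_to_end(asset)    # hit: mark as recently used
            else:
                transferred += asset_gb     # miss: fetch over the PCIe link
                cache[asset] = True
                if len(cache) > capacity:
                    cache.popitem(last=False)   # evict least recently used
    return transferred / frames

working_set = range(22)                     # ~5.5 GB of assets touched per frame
for vram in (4, 8):
    gb = bus_traffic_per_frame(vram, working_set)
    print(f"{vram} GB card: ~{gb:.1f} GB/frame over the bus "
          f"(~{gb / PCIE4_X8_GBPS * 1000:.0f} ms at x8)")
```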
Jokes aside, you're comparing a GPU that's how many generations older against a newer one? :pimp: Not exactly a fair comparison. GTX 680 still chopping le lumbers, impressive!