Wednesday, April 12th 2023
AMD Plays the VRAM Card Against NVIDIA
In a blog post, AMD has pulled the VRAM card against NVIDIA, telling potential graphics card buyers that they should consider AMD over NVIDIA because current and future games will require more VRAM, especially at higher resolutions. It's no secret that at least part of the PC gaming crowd feels NVIDIA is being too stingy with VRAM on its graphics cards, and AMD is clearly trying to cash in on that sentiment with its latest blog post. AMD shows the VRAM usage in games such as Resident Evil 4 (with and without ray tracing, at that), The Last of Us Part I and Hogwarts Legacy, all of which use 11 GB of VRAM or more.
AMD does have a point here, but as the company has yet to launch anything below the Radeon RX 7900 XT in the 7000 series, AMD is mostly comparing its 6000-series cards with NVIDIA's 3000-series cards, most of which are getting hard to purchase and are potentially less interesting for those looking to upgrade their system. That said, AMD also compares its two 7000-series cards to the NVIDIA RTX 4070 Ti and the RTX 4080, claiming up to a 27 percent lead over NVIDIA in performance. Based on TPU's own tests of some of these games, albeit most likely using different test scenarios, the figures provided by AMD don't seem to reflect real-world performance. It's also surprising to see AMD claim its RX 7900 XTX beats NVIDIA's RTX 4080 in ray tracing performance in Resident Evil 4 by 23 percent, where our own tests show NVIDIA ahead by a small margin. Make of this what you will, but one thing is fairly certain: future games will require more VRAM, and the need for a powerful GPU most likely isn't going away either.
Source: AMD
218 Comments on AMD Plays the VRAM Card Against NVIDIA
Anything to keep the Nvidia Cult pure and untainted by criticism really. Anything that circles back to "Nvidia is never wrong, so they couldn't possibly have cheaped out on VRAM, and you won't need VRAM".
And considering how the Cult works, they'll clearly keep singing that tune until the 5000 series, when Nvidia finally corrects course and ships only cards with 16 GB and above, save maybe for the lowest end.
AMD called the VRAM rise correctly, that's all there is to it. The ultimate irony is that it comes now, as Nvidia is pushing an extremely severe price hike. Being cheap with VRAM while asking for massive price hikes is going to rub a lot of customers the wrong way. I'm already seeing a small but very visible trend of cultists openly questioning how Nvidia can seriously ship a $1000 4070 Ti with too little VRAM for it, or regretting buying 3080s. The 3070 Ti buyers are some sort of first-line cannon fodder, but even the others can feel the breath down their necks.
I expect the damage to be quite extensive by the time the 5000 series comes out. AMD may even have a shot at not being totally terrible, although RDNA 3 started really poorly. All they need to do is release cards at decent prices with a decent VRAM buffer (16 GB, nothing below). I expect their Navi 33 will be total ass because it's still on a 128-bit bus, so unless they double up the VRAM on that bus, which is unlikely, it's going to be a bunch of pointless cards; might as well just get a 6600 as it'll do just as well in the long run. But for Navi 32, and obviously 31, there is no real shortfall, so all that matters will be price. And I'm not too worried about AMD actually releasing good enough prices.
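For context on the 128-bit bus point: with GDDR6 using a 32-bit interface per chip and today's common 16 Gb (2 GB) chips, the bus width largely dictates the capacity options. A minimal sketch of that arithmetic (the helper function is purely illustrative, not anything from AMD or NVIDIA):

# Why a 128-bit bus caps a card at 8 GB with 16 Gb (2 GB) GDDR6 chips,
# unless the vendor doubles up in clamshell mode (two chips per channel).
def possible_capacities_gb(bus_width_bits: int, chip_density_gb: int = 2) -> dict:
    """Return normal and clamshell capacities for a given memory bus."""
    chips = bus_width_bits // 32          # one 32-bit chip per 32 bits of bus
    normal = chips * chip_density_gb      # one chip per channel
    clamshell = normal * 2                # two chips share each channel
    return {"chips": chips, "normal_gb": normal, "clamshell_gb": clamshell}

for bus in (128, 192, 256, 384):
    print(bus, possible_capacities_gb(bus))
# 128-bit -> 8 GB (16 GB clamshell), 192-bit -> 12 GB (24 GB), and so on.

So a 16 GB Navi 33 card would indeed require the unlikely clamshell layout; the wider Navi 32 and 31 buses get there with a standard configuration.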
I am worried that these guys are in such a bad spot that five months later, there's still not a peep about any new Navis, or any kind of leak, or anything really, and the driver work on Navi 31 seems to be a very painful, slow grind. You should see the power usage curve on my 7900 XT; it sometimes looks like a seismograph with how much it peaks and falls, over and over.
Some other games were tested and he confirmed that, of course, the game handles it automatically by not loading the proper textures, or, where it doesn't, the game crashes. I think this video finally shows the actual problems, and reviewers should update how they assess VRAM when reviewing cards in the future. Don't leave benchmarks unattended, for example, as you have no idea what's going on (a minimal logging sketch follows below).
Bear in mind it might be that, for example, loading the proper textures needs 9 gigs; the 9 gigs isn't there, so the game falls back to using 6.39 gigs of lower-quality textures.
I am simply looking at it through a gamer's eyes: it's there right in front of you, muddy textures and momentary freezes due to a lack of VRAM.
Do you go on a campaign to get thousands of devs to code games differently (lowering the quality available), or do you simply add more VRAM to the hardware?
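On the point about not leaving benchmarks unattended, here is a minimal sketch of how one could log VRAM usage during a run, assuming the nvidia-ml-py (pynvml) bindings are installed. Note that NVML reports memory allocated on the device, which is an upper bound on what the game actually needs:

# Minimal VRAM logger for an unattended benchmark run (NVIDIA only).
# Requires: pip install nvidia-ml-py  (imported as pynvml)
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"{time.strftime('%H:%M:%S')}  used: {mem.used / 2**30:.2f} GiB "
              f"of {mem.total / 2**30:.2f} GiB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()

A log like this at least shows when allocation hits the ceiling, though it still won't tell you whether the game quietly swapped in lower-quality textures.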
Also, what you're describing would only happen if developers were all noobs. That's not how textures are handled (look up mipmaps, if you're curious).
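To put rough numbers on the mipmap point: a full mip chain costs about a third extra on top of the base level, and streaming in everything except the highest-resolution level cuts the footprint to roughly a quarter, which is how engines degrade texture quality gracefully instead of crashing. A back-of-the-envelope sketch (the 4K texture size and BC1-style compression rate are illustrative assumptions, not figures from any particular game):

# Approximate size of a texture with a full mip chain, optionally
# streaming in only the lower mip levels. Numbers are illustrative only.
def texture_bytes(width: int, height: int, bytes_per_texel: float,
                  drop_top_levels: int = 0) -> int:
    total = 0
    w, h, level = width, height, 0
    while True:
        if level >= drop_top_levels:          # skip the highest-resolution levels
            total += int(w * h * bytes_per_texel)
        if w == 1 and h == 1:
            break
        w, h = max(w // 2, 1), max(h // 2, 1)
        level += 1
    return total

full = texture_bytes(4096, 4096, 0.5)                       # ~10.7 MiB with full chain
reduced = texture_bytes(4096, 4096, 0.5, drop_top_levels=1) # ~2.7 MiB without top mip
print(f"full chain: {full / 2**20:.1f} MiB, without top mip: {reduced / 2**20:.1f} MiB")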
Reminds me of that Intel engineer that summarised the entirety of the tech industry some ten years ago:
"No matter how much more power we build, they manage to use it!"
"letting developers off the hook" is what we've been doing since 1970. It's what we'll be doing until 2070. It's why Lisa Su talks about Zettascale.
Whatever line you think to hold on to will be broken, that's the very point of this industry.
It's TechPowerUp, not TechPowerStay!
I can buy a 16 GB GPU. I cannot give a dev a poor employee review for shoddy code.
It's not uncommon for people to finish games in their first buggy version(s) instead of waiting for patches that improve quality and performance. It's probably the rule, not the exception. People with the patience to wait for proper patches and price cuts are not the majority. If they were, companies wouldn't be rushing games out in what one could fairly describe as beta versions.
As for VRAM and developers: considering that system requirements are what pushes people to upgrade, I bet that titles sponsored by AMD, Intel or Nvidia don't come with any obligation for developers to do their best on optimization and memory management.
Card manufacturers are, and have been for years, hampering "progress" with curtailed VRAM rations for the lower/mid-segment performance minions. It's 2023 and in some supposedly graphically immersive titles we're still getting 10-year-old polished texturisation... rotten to the bone! Slap "ultra" on it and people get hypnotised by it.
Game devs simply aren't getting a large enough code-artist paint palette to push the richer or more consistent visual fidelity that is "readily accessible". In return, the 8 GB-limited crowd is compelled to settle for asset substitutions and now what seems to be heavy optimisation work that spells out "visual quality impairment". Some of us have become accustomed to these constraints, with just about acceptable high/ultra graphics eye candy, without realising the missed upper-class graphics opportunities. That's the nature of VRAM limitations: too many bottom-of-the-barrel compromises.
Another thing to consider: for some time now, we the consumers have been paying VRAM-taxing RT/PT levies. It's completely baffling to pay these extortionately high premiums and still be presented with memory bottlenecks. Even with RT disabled, the bottleneck for premium-quality graphics (textures playing a big part) is extremely evident. To offer a little relief, dynamic texture asset substitutions are highly observable, especially in denser graphics environments. This is hardly a solution, more of a temporary patch before the bigger and better bleeds you to death.
In the grand scheme of things, it's not just whether games can comfortably sit in the inferior-quality 8 GB VRAM bracket, but why we the consumers should encourage that "inferiority" to exist in the first place. Forget the recent outcry from AMD or independent reviewers; more VRAM has always been something game developers longed for, to push some of that higher visual fidelity down to the mainstream budget/mid-range gaming segment. There's a lot of "but I can still manage great FPS and smooth gameplay"... of course you can, at the cost of less-than-desirable memory-induced compromises. I want a boatload of frames per second, but each frame would be way nicer with consistent eye candy rather than dynamically repulsive substitutions.
Just a reminder: excluding the 8 GB R9 290/X models and starting with the RX 480 that came out in 2016, we will be closing in on 10 years, and probably counting many more, before the sub-$300 segment moves past 8 GB of VRAM.
To think of AMD complaining about nVidia's VRAM stinginess and then air-dropping 8 GB poop-bombs on its own mid-segment cards is just revolting. This is a perfect, long-awaited opportunity to size up on VRAM at all performance tiers, and it would be a downright shame if the largest consumer group (low/mid-range performance) were once again consigned to oblivion.
It's just puzzling to see a discussion around entertainment turning into hatred towards one side because supposedly they're holding back our entertainment rights.
It's also puzzling that you talk about not convincing anyone, when above you said that people should play something else instead. Well, guess what: that does mean convincing someone, and you know it, because you follow up that suggestion with the argument that devs won't get the compensation they expect. Arguments are what we use when we try to convince someone of something, even when we don't intend to insist on our suggestion.
In any case, convincing the public about game optimization is a prerequisite for forcing developers to make optimization a priority. As long as games sell millions of copies in their original buggy versions, optimization will not be a top priority during development.
But saying "You play something else instead" or "Many developers receive compensation based on how their title fares" both those to happen does mean convincing the absolute majority of gamers to avoid games that need more than 8GBs of VRAM, because of bad optimization and not because of absolute need for more VRAM. Because there is also the case where the extra VRAM is needed no matter how much optimization someone does to a game.
Now, realistically, most people waiting for a game don't wait an extra 6-12 months for it to get optimized, and games with a gazillion bugs do sometimes sell extremely well. Meaning the extra VRAM is the only workaround available to consumers.