Wednesday, April 12th 2023
AMD Plays the VRAM Card Against NVIDIA
In a blog post, AMD has pulled the VRAM card against NVIDIA, telling potential graphics card buyers that they should consider AMD over NVIDIA because current and future games will require more VRAM, especially at higher resolutions. It's no secret that at least part of the PC gaming crowd has reached something of a consensus that NVIDIA is being too stingy with VRAM on its graphics cards, and AMD is clearly trying to cash in on that sentiment with its latest blog post. AMD is showing the VRAM usage in games such as Resident Evil 4—with and without ray tracing at that—The Last of Us Part I and Hogwarts Legacy, all of which use 11 GB of VRAM or more.
AMD does have a point here, but as the company has yet to launch anything below the Radeon RX 7900 XT in the 7000-series, AMD is mostly comparing its 6000-series cards with NVIDIA's 3000-series cards, most of which are getting hard to purchase and are potentially less interesting for those looking to upgrade their system. That said, AMD also compares its two 7000-series cards to the NVIDIA RTX 4070 Ti and the RTX 4080, claiming up to a 27 percent performance lead over NVIDIA. Based on TPU's own tests of some of these games, albeit most likely using different test scenarios, the figures provided by AMD don't seem to reflect real-world performance. It's also surprising to see AMD claim its RX 7900 XTX beats NVIDIA's RTX 4080 in ray tracing performance in Resident Evil 4 by 23 percent, where our own tests show NVIDIA in front by a small margin. Make of this what you will, but one thing is fairly certain: future games will require more VRAM. The need for a powerful GPU, however, isn't going away either.
Source:
AMD
218 Comments on AMD Plays the VRAM Card Against NVIDIA
Where are your 500-700 dollar options? Where's RX 7800 series?
Seems like the reason the cards are disappointing is that a serious artifacting problem showed up after a few hours of benchmarking/usage. Serious enough to warrant a sort of slowdown that gimped the cards 10% or more below their promised target.
Navi 32 shouldn't have to do that. Nor 40.
Odds of AMD actually working on drivers for months yet to fix the defect in the XT/XTX: zero, if you ask me. I bought a gimped card. And I still feel like NVIDIA would've been more of a scam.
I would be really curious to know how the 2080 Ti, 1080 Ti, or even the RTX A4000 (a 3070 with lower clocks but 16 GB of VRAM) handle some of the games the 3070 is struggling with. Does a 1080 Ti allow for a better gameplay experience, or is the GPU itself just too slow, so it becomes compute-limited before being VRAM-limited?
RE4 works fine unless you go and specify a texture pool that exceeds your VRAM size.
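For context on why an oversized texture pool blows past a card's VRAM: texture memory scales with resolution, bytes per pixel, and the mip chain. This is a minimal back-of-the-envelope sketch (my own illustration, not anything from RE4's engine; the function name and 4/3 mip factor are assumptions for uncompressed RGBA textures):

```python
def texture_vram_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Rough VRAM footprint of one uncompressed texture."""
    base = width * height * bytes_per_pixel
    # A full mip chain adds about one third on top of the base level:
    # 1 + 1/4 + 1/16 + ... converges to 4/3.
    return int(base * 4 / 3) if mipmaps else base

# A single 4K x 4K RGBA8 texture with mips:
size = texture_vram_bytes(4096, 4096)
print(f"{size / 2**20:.0f} MiB")  # ~85 MiB
```

At those per-texture sizes, a texture pool set a few gigabytes too high fills an 8 GB card quickly, which is why capping the pool below the physical VRAM size matters.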
We'll see what they do to TLOU with patches.
If anything, AMD didn't hit the clock/power target they wanted to. With hardware voltage and power limit overrides, Navi 31 can clock much higher. If I'm not mistaken, the people who have gone the route of hardware modifications are seeing +15-20% performance over stock, the main issue being they're drawing 600+ watts at that point.
The prevention of this artifacting is the reason why the cards aren't properly respecting AMD's promised performance target in that famous RDNA 3 presentation. That's also why the clocks are lower, etc.
And again, the people who have hardware control over voltages and power limits have gone above 3300, and I've heard nothing about throttling behavior or artifacts related to a hardware defect we will never have proof of.
No shot they were going to release a 7900 XTX that was 10% below the 4090 while drawing 500-600 W.
You have no idea exactly what your card does beyond monitoring some frequencies and voltages, and neither do I.
What I know with a reasonable amount of certainty is that AMD hasn't hit their performance target, that they worked hard on drivers to fix that but the issues persist, and that they might be working on a hardware remediation, hence the absolute radio silence on SKUs below their high end. Could it be the artifact issue mentioned before? Maybe. I cannot know for sure, and neither can you, and no amount of playing around with your card will let you find out, unless you somehow have the source code of the AMD driver on your desk.
I'll get off my soapbox now.
I'm going to explain this in the simplest sequence of events possible.
AMD designed RDNA 3.
They tested RDNA 3.
They found a serious artifacting problem that showed up hours into using the cards.
They made a "shitty fix" that stopped the artifacting problem.
The shitty fix stayed in, and the cards went on sale and out to reviewers with it in place.
The shitty fix is now in your card.
Navi 32, Navi 40, and all other RDNA cards will either have a proper fix or won't need one (Navi 33 is monolithic).
The shitty fix could be throttling, a re-sync between the MCD and GCD, this, that, or something else entirely—we don't know. Short of entering the AMD building, taking a dev hostage at knifepoint, and demanding answers, we won't. We're not privy to the details of a big corporation's back kitchen.
What the rumour says is that the shitty fix is in for Navi 31 alone and will not be necessary for later Navis.
Which makes me think that, since months have passed, that elusive "Big FineWine moment"—where we expected a driver update to take the cards from a somewhat paltry 35% performance improvement to the promised 50%—will probably never come. It's been months, and it may be months yet, and AMD has very little incentive to invest however many more months fixing a problem that ultimately hits only certain cards.
I was under the original impression that Navi 31 had some driver issues and would HAVE to fix them because whatever wasn't fixed with the first chiplet design would go into later gens.
I have now been told that actually, no. So I'm surmising that AMD will probably not fix it and just happily move on to RDNA 4 with the fix in the silicon itself.
You can tinfoil-hat all you like, but there are more believable scenarios than immediately jumping to hardware defects.
1) They missed their power/clock target by a large margin (they absolutely did).
2) They're trying to clear excess inventory, similar to NVIDIA, after massively over-producing thanks to the mining nonsense of the previous two years.
3) They don't want to cannibalize the price of remaining 6000-series stock.
4) Outside of updated I/O and hardware encode/decode, there isn't much Navi 32 is going to do price/performance-wise given the supposedly high manufacturing costs.
They're a company, they exist to make money, and $$$ is going to pave the way on what gets released and how. But sure, armchair yourself down a tinfoil-hat route when you have no technical knowledge to prove or confirm any of it, let alone a way to do so.
The PC versions have higher-resolution textures, lighting, and so on. That's going to use VRAM.
People didn't whine when consoles made 6-8 cores relevant, but when it's big VRAM pools…