Wednesday, April 12th 2023

AMD Plays the VRAM Card Against NVIDIA

In a blog post, AMD has pulled the VRAM card against NVIDIA, telling potential graphics card buyers that they should consider AMD over NVIDIA, because current and future games will require more VRAM, especially at higher resolutions. It's no secret that at least part of the PC gaming crowd considers NVIDIA too stingy when it comes to VRAM on its graphics cards, and AMD is clearly trying to cash in on that sentiment with its latest blog post. AMD shows the VRAM usage in games such as Resident Evil 4—with and without ray tracing, at that—The Last of Us Part I and Hogwarts Legacy, all games that use 11 GB of VRAM or more.

AMD does have a point here, but as the company has yet to launch anything below the Radeon RX 7900 XT in the 7000 series, AMD is mostly comparing its 6000-series cards with NVIDIA's 3000-series cards, most of which are getting hard to purchase and are potentially less interesting for those looking to upgrade their systems. That said, AMD also compares its two 7000-series cards to the NVIDIA RTX 4070 Ti and the RTX 4080, claiming up to a 27 percent lead over NVIDIA in performance. Based on TPU's own tests of some of these games, albeit most likely using different test scenarios, the figures provided by AMD don't seem to reflect real-world performance. It's also surprising to see AMD claim its RX 7900 XTX beats NVIDIA's RTX 4080 in ray tracing performance in Resident Evil 4 by 23 percent, where our own tests show NVIDIA ahead by a small margin. Make of this what you will, but one thing is fairly certain: future games will require more VRAM, and the need for a powerful GPU most likely isn't going away either.
Source: AMD

218 Comments on AMD Plays the VRAM Card Against NVIDIA

#51
Starks
Shame on AMD - these games are poorly optimized and are AMD-sponsored titles, so AMD is exaggerating the use of VRAM on purpose. Things like this just make me more angry at AMD
Posted on Reply
#52
dj-electric
Hey, AMD...
Where are your 500-700 dollar options? Where's RX 7800 series?
Posted on Reply
#53
freeagent
They are both out to exploit the consumer.. pretty scummy if you ask me :)
Posted on Reply
#54
Mahboi
dj-electricHey, AMD...
Where are your 500-700 dollar options? Where's RX 7800 series?
Apparently, delayed to fix whatever MCD to GCD defect is in the XT/XTX.
Seems like the reason the cards are disappointing is because a serious artifacting problem showed up after a few hours of benchmarking/usage. Serious enough to warrant putting a sort of slowdown that gimped the cards 10% or more below their promised target.
Navi 32 shouldn't have to do that. Nor 40.
Odds of AMD actually working on drivers for months yet to fix the defect in the XT/XTX : Zero if you ask me. I bought a gimped card. And I still feel like Nvidia would've been more of a scam.
Posted on Reply
#55
btk2k2
MahboiProbably...and honestly it's lower than that. AMD again with the top tier marketing of lowering prices, but telling nobody, lest they may buy.
The 4070 Ti can scarcely be found at the same price as the XT in my country. And the only models that can be found are Zotac/PNY.


So you think 16GB will not suffice? Why? The PS5 has 16GB of unified RAM. Accounting for the PS5 OS and obvious CPU-side needs, I'd expect that even on a PC with multiple monitors, 16GB would be enough.
16GB is fine until the next VRAM jump, which will come when the PS6 and whatever the Xbox is called that gen come out of the cross-gen phase. So maybe in 7-8 years' time. Just pointing out that the 20/24GB cards of now are like the 11/12GB cards of back then. Those with a 900-series Titan 12GB, a 1080 Ti 11GB, or especially a 2080 Ti 11GB will see their cards last longer before becoming utterly unplayable. The 3070 and other 8GB cards are getting to the point that in some games they are simply unplayable without making huge IQ sacrifices.

I would be really curious to know how the 2080 Ti, 1080 Ti, or even the RTX A4000 (a 3070 with lower clocks but 16GB of VRAM) handle some of the games the 3070 is struggling with. Does a 1080 Ti allow for a better gameplay experience, or is the GPU itself just too slow, becoming compute limited before being VRAM limited?
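For anyone wanting to probe that themselves on an NVIDIA card: log VRAM use and GPU utilization together while playing. A rough sketch in Python (the nvidia-smi query fields are standard, but the thresholds for guessing "VRAM-bound" vs "compute-bound" are just my own heuristic, and this assumes a single GPU):

import subprocess
import time

def sample():
    # with these flags nvidia-smi prints e.g. "7950, 8192, 97"
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total,utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True)
    return [int(v) for v in out.strip().split(", ")]

for _ in range(30):                 # sample once a second while in-game
    used, total, util = sample()
    vram_pct = 100 * used / total
    # near-full VRAM with sagging GPU load hints at PCIe texture swapping;
    # pegged GPU load with VRAM headroom hints at a plain compute limit
    verdict = ("VRAM-bound?" if vram_pct > 95 and util < 90
               else "compute-bound?" if util > 95 else "")
    print(f"VRAM {used}/{total} MiB ({vram_pct:.0f}%), GPU {util}% {verdict}")
    time.sleep(1)

If the 1080 Ti holds playable frame rates where the 3070 collapses, that would point at VRAM rather than compute being the wall in those titles.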
Posted on Reply
#56
Mister300
SOAREVERSORPC has never been the most important platform and it has not been the showcase platform either for decades now. The showcase platform is the consoles. The PC is the 1080p 60hz low details platform dominated by x060 cards and then stuff like 1660s. People need to accept that.
Yep, all you need to do is look at the MW2 lobbies; almost everyone plays on Xbox or PS nowadays. Even my business partner prefers his Switch over his 3080 rig. PC gaming is dying fast unless developers get shitty ports under control.
Posted on Reply
#57
BoboOOZ
btk2k2I would be really curious to know how the 2080 Ti, 1080 Ti, or even the RTX A4000 (a 3070 with lower clocks but 16GB of VRAM) handle some of the games the 3070 is struggling with. Does a 1080 Ti allow for a better gameplay experience, or is the GPU itself just too slow, becoming compute limited before being VRAM limited?
There are comparisons between the 3060 (12GB) and the 3070, and there's no surprise: when VRAM limited, the 3060 wins, even though it's less powerful and has slower memory.
Posted on Reply
#58
londiste
Hecate91The only port that is shit is TLOU Part 1, games like RE4 and HWL are fine.
Nvidia has been pushing planned obsolescence on cards with only 8GB of VRAM since the RTX 20 series, newer games are only going to continue to use more VRAM.
HWL got some patches and 8-10-12GB cards are quite OK now.
RE4 works fine unless you go and specify a texture pool that exceeds your VRAM size.

We'll see what they do to TLOU with patches.
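For anyone wondering what that texture pool setting actually controls: it's a streaming cache with a fixed VRAM budget, and the engine evicts the least-recently-used textures when it fills. A minimal sketch of the general idea (hypothetical names, not RE4's actual code):

from collections import OrderedDict

class TexturePool:
    """LRU texture streaming cache with a fixed VRAM budget (bytes)."""
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.resident = OrderedDict()            # texture_id -> size

    def request(self, texture_id, size):
        if texture_id in self.resident:          # already in VRAM: mark hot
            self.resident.move_to_end(texture_id)
            return
        while self.used + size > self.budget and self.resident:
            _, freed = self.resident.popitem(last=False)   # evict coldest
            self.used -= freed                   # re-streaming it later = pop-in
        self.resident[texture_id] = size
        self.used += size

pool = TexturePool(budget_bytes=6 * 2**30)       # 6 GB pool on an 8 GB card
pool.request("castle_wall_4k", 64 * 2**20)

Set the pool larger than physical VRAM and this model never evicts; the real allocator just overcommits, the driver starts paging over PCIe, and that's where the crashes and stutter come from.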
Posted on Reply
#59
Mister300
With respect to e-peen, memory has two factors: length (bus width; 512-bit is the max now?) and girth (bandwidth; HBM is ~1 TB/s). Even with HBM, AMD cards were not the 4K killers the Fury X was claimed to be. This is why I don't buy the lack-of-memory excuse for poor performance; there are too many other factors.
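Those two factors multiply straight out: bandwidth = bus width / 8 × per-pin data rate. A quick sanity check on some familiar cards (the specs below are public figures, worth double-checking against the spec sheets):

def bandwidth_gb_s(bus_bits, pin_gbps):
    # bus width (bits) / 8 = bytes per transfer; times per-pin rate (Gbps)
    return bus_bits / 8 * pin_gbps

cards = {
    "Fury X (HBM1, 4096-bit @ 1 Gbps)":       (4096, 1.0),   # 512 GB/s
    "Radeon VII (HBM2, 4096-bit @ 2 Gbps)":   (4096, 2.0),   # the ~1 TB/s case
    "RX 7900 XTX (GDDR6, 384-bit @ 20 Gbps)": (384, 20.0),   # 960 GB/s
    "RTX 4080 (GDDR6X, 256-bit @ 22.4 Gbps)": (256, 22.4),   # ~717 GB/s
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gb_s(bus, rate):.0f} GB/s")

Which is the point: the Fury X's 4096-bit bus only bought 512 GB/s at HBM1's low clocks, so raw bus width alone never made it a 4K card.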
Posted on Reply
#60
BoboOOZ
londisteHWL got some patches and 8-10-12GB cards are quite OK now.
Saw a video from Daniel Owen the other day; the game doesn't stutter anymore, but on 8GB cards you constantly get missing textures that look very bad. So you still have to lower texture settings to avoid glitches.
Posted on Reply
#61
wolf
Better Than Native
In today's news, AMD plays the only card it has. More at 10.
Posted on Reply
#62
rv8000
MahboiApparently, delayed to fix whatever MCD to GCD defect is in the XT/XTX.
Seems like the reason the cards are disappointing is because a serious artifacting problem showed up after a few hours of benchmarking/usage. Serious enough to warrant putting a sort of slowdown that gimped the cards 10% or more below their promised target.
Navi 32 shouldn't have to do that. Nor 40.
Odds of AMD actually working on drivers for months yet to fix the defect in the XT/XTX : Zero if you ask me. I bought a gimped card. And I still feel like Nvidia would've been more of a scam.
I’ve benchmarked and played games for 100s of hours at this point, not getting artifacts while pushing the card as hard as it can go within the defined PL. Where’s the proof on this?

If anything AMD didn’t hit the clock/power target they wanted to. With hardware voltage and pl overrides, clock wise Navi 31 can push much higher. If I’m not mistaken the people who have gone the route of hardware modifications are seeing +15-20% performance over stock, the main issue being they’re drawing 600+ watts at that point.
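Back-of-envelope on why that's a non-starter as a stock config (355 W is the official 7900 XTX board power; the +15-20% at 600+ W is the claim above, midpoint assumed):

stock_perf, stock_watts = 1.00, 355      # 7900 XTX reference board power
modded_perf, modded_watts = 1.175, 600   # midpoint of the claimed +15-20%

ratio = (modded_perf / modded_watts) / (stock_perf / stock_watts)
print(f"perf per watt vs stock: {ratio:.2f}x")   # ~0.70x, i.e. ~30% worse

Roughly 30% worse perf-per-watt for 15-20% more performance is exactly the kind of trade you leave to extreme overclockers rather than ship.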
Posted on Reply
#63
Mahboi
rv8000I’ve benchmarked and played games for 100s of hours at this point, not getting artifacts while pushing the card as hard as it can go within the defined PL. Where’s the proof on this?

If anything AMD didn’t hit the clock/power target they wanted to. With hardware voltage and pl overrides, clock wise Navi 31 can push much higher. If I’m not mistaken the people who have gone the route of hardware modifications are seeing +15-20% performance over stock, the main issue being they’re drawing 600+ watts at that point.
What part of "they have put something to stop the artifacting" is unclear to you? Obviously they didn't release it in that state.
The prevention of this artifacting is the reason why the cards aren't properly respecting AMD's promised performance target in that famous RDNA 3 presentation. That's also why the clocks are lower, etc.
Posted on Reply
#64
BoboOOZ
rv8000I’ve benchmarked and played games for 100s of hours at this point, not getting artifacts while pushing the card as hard as it can go within the defined PL. Where’s the proof on this?
It's a leak, so there can be no proof. And basically, since the idea is that the current driver fixes the problem by throttling the GPU frequently, you cannot replicate the issue unless you have development drivers.
Posted on Reply
#65
rv8000
BoboOOZIt's a leak, so there can be no proof. And basically, since the idea is that the current driver fixes the problem by throttling the GPU frequently, you cannot replicate the issue unless you have development drivers.
So it’s all hearsay? My card doesn’t “frequently” throttle, it always stays between 2800-2850.

And again the people who have hardware control over voltages and PL have gone above 3300, and I’ve heard of nothing about throttling behavior and artifacts related to a hardware defect we will never have proof of.

No shot they were going to release a 7900XTX that was 10% below 4090 while drawing 500-600w
Posted on Reply
#66
BoboOOZ
rv8000So it’s all hearsay? My card doesn’t “frequently” throttle, it always stays between 2800-2850.
Right because you 've already decompiled and reverse engineered your scheduler and you know exactly what you card does.

You have no idea exactly your card does, besides some monitoring frequencies and voltages, and I don't either.

What I know with a reasonable amount of certainty is that AMD hasn't hit their perf target, that they worked hard on drivers to fix that but the issues persist, and that they might be working on a hardware remediation, hence the absolute radio silence on SKUs lower than their high end. Could it be the artifact issue mentioned before? Maybe, I cannot know for sure, and you cannot either, and no amount of playing around with your card will allow you to know, unless you somehow have the source code of the AMD driver on your desk.
Posted on Reply
#67
Aquinus
Resident Wat-man
Vayra86Yep... that alone proves the point either way, Fury X fell off much faster than the 6GB 980ti while they were about equal on launch.

Now the tables are turned.
HBM was cool on paper, but the reality is that it was far more bandwidth than the GPUs of the time needed. My Vega 64 is a great example of that: who cares if you can overclock the memory by 25% if it quite literally gets you nothing? That said, HBM does use a lot less power compared to GDDR5 and 6. AMD should have focused on using HBM for mobile GPUs, because it lets them dedicate more power to the GPU itself. The Radeon Pro 5600M in my laptop is a great example of that. If you compare it to the 5500M with GDDR6, which has the same TDP, you can really see how it shines: compute-wise, it's about 25% faster on paper at the same power. The non-HBM2 MacBook Pros were also known for the fans spinning up when using external displays, because the VRAM had to clock up to handle them. So there are a lot of really good reasons to use HBM and its successors, just maybe not in the desktop space. It's ideal when power budget and physical space are constraints.

I'll get off my soapbox now.
Posted on Reply
#68
Mahboi
rv8000So it’s all hearsay? My card doesn’t “frequently” throttle, it always stays between 2800-2850.

And again the people who have hardware control over voltages and PL have gone above 3300, and I’ve heard of nothing about throttling behavior and artifacts related to a hardware defect we will never have proof of.

No shot they were going to release a 7900XTX that was 10% below 4090 while drawing 500-600w
Sigh

I'm going to explain this in the simplest sequence of events possible.
AMD designed RDNA 3.
They tested RDNA 3.
They found a serious artifacting problem that showed up hours into using the cards.
They made a "shitty fix" that stopped the artifacting problem.
The shitty fix stayed in, and the cards went on sale and out to reviewers with it.
The shitty fix is now in your card.
Navi 32, Navi 40, and all other RDNA cards will either have a proper fix or won't need one (Navi 33 is monolithic).

The shitty fix could be throttling, a re-sync between the MCD and GCD, a this, a that, a those; we don't know. Enter the AMD building, take a dev hostage at knifepoint, and demand answers. We're not aware of the details of a big corporation's back kitchen.
What the rumour says is that the shitty fix is in for Navi 31 alone, and will not be necessary for later Navis.
Which makes me say that the elusive "Big FineWine moment," where we expected a driver update to take the cards from a somewhat paltry 35% perf improvement to the promised 50%, will probably never come. It's been months, it may be months yet, and AMD has really little to gain from investing however many more months into fixing a problem that ultimately hits only certain cards.

I was originally under the impression that Navi 31 had some driver issues and that AMD would HAVE to fix them, because whatever wasn't fixed with the first chiplet design would carry over into later gens.
I have now been told that, actually, no. So I'm surmising that AMD will probably not fix it and will just happily move on to RDNA 4 with the fix in the silicon itself.
Posted on Reply
#69
rv8000
BoboOOZRight, because you've already decompiled and reverse engineered your scheduler and you know exactly what your card does.

You have no idea what exactly your card does, beyond monitoring some frequencies and voltages, and I don't either.

What I know with a reasonable amount of certainty is that AMD hasn't hit their perf target, that they worked hard on drivers to fix it but the issues persist, and that they might be working on a hardware remediation, hence the absolute radio silence on SKUs below their high end. Could it be the artifact issue mentioned before? Maybe. I can't know for sure, and you can't either, and no amount of playing around with your card will let you know, unless you somehow have the source code of the AMD driver on your desk.
My point being, now that there are people with hardware control pushing above stock limits, we know it's not related to a hardware defect limiting clock speeds like you and the other bozo are claiming.

You can tinfoil-hat all you like, but there are more believable scenarios than immediately jumping down the route of hardware defects.

1) They missed the power/clock target by a large margin (they absolutely did)
2) They are trying to clear excess inventory, similar to NVIDIA, after massively overproducing thanks to the mining nonsense of the previous two years
3) They don't want to cannibalize pricing of the remaining 6000 series
4) Outside of updated I/O and hardware encode/decode, there isn't much Navi 32 is going to do price/performance-wise given the supposedly high manufacturing costs

They're a company, they exist to make money, and $$$ is going to pave the way for what gets released and how. But sure, armchair yourself down a tinfoil-hat route when you have no technical knowledge to prove or confirm any of it, let alone the means to do so.
Posted on Reply
#70
TheLostSwede
News Editor
dj-electricHey, AMD...
Where are your 500-700 dollar options? Where's RX 7800 series?
I'd guess Computex at the end of May.
Posted on Reply
#71
InVasMani
Low on VRAM capacity in an open-world game, with the engine throttling image quality as a trade-off, at higher resolution targets than one should rightfully be running on sh*tty antique GPU hardware...
Posted on Reply
#72
TheDeeGee
dj-electricHey, AMD...
Where are your 500-700 dollar options? Where's RX 7800 series?
Maybe this news is a hint they want to milk people with a 7900 first before other models come.
Posted on Reply
#73
TheinsanegamerN
Hecate91The only port that is shit is TLOU Part 1, games like RE4 and HWL are fine.
Nvidia has been pushing planned obsolescence on cards with only 8GB of VRAM since the RTX 20 series, newer games are only going to continue to use more VRAM.
None of the ports are bad TBH. The consoles have 16GB of RAM now. Devs are using 10-12GB just for graphics.

The PC versions have higher-res textures, lighting, etc. That's gonna use VRAM.

People didn't whine when consoles made 6-8 cores relevant, but when it's big VRAM pools...
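The back-of-envelope math behind that, with rough figures (the ~3.5 GB PS5 OS reservation is a widely reported estimate, not an official number, and the CPU-side share is my assumption):

total_gb = 16.0
os_reserved_gb = 3.5                      # reported estimate, not official
cpu_side_gb = 2.5                         # assumed sim/audio/game-logic data

graphics_gb = total_gb - os_reserved_gb - cpu_side_gb
print(f"left for graphics: {graphics_gb:.0f} GB")   # ~10 GB

So a console title can comfortably budget around 10 GB for graphics, and a PC port stacking higher-res textures on top of that lands right where 8GB cards start to choke.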
Posted on Reply
#74
kondamin
TheLostSwedeDRAM and VRAM are not the same though, so the pricing doesn't correlate.
Yeah, still, GDDR6 dropped 20% just like regular DRAM
Posted on Reply
#75
Crylune
Very bold of them to make another VRAM-related blog post after the whole fiasco with the 6500 XT launch, where they removed their blog post stating that 4 GB VRAM GPUs are bad.
Posted on Reply