
12GB GPUs already obsolete, brand new game takes up to 18GB of VRAM at 1440p

Not sure what you're getting at. What is wrong with this type of test methodology? Best card, higher memory bandwidth, and more than sufficient VRAM to measure memory usage. Why do we need results from lower-specced GPUs (with hardware limitations) to identify max VRAM utilization? That's almost like testing how fast a car can go on a road with a 60 mph speed limit.

Because it doesn't work like that. A card with 24GB will ALLOCATE more; that doesn't identify actual utilisation.
 
"Future Proof" is a myth in PC hardware. It has never existed and most likely never will.
I think back to a member here who, around 8 years ago, proposed a build with 4 Maxwell Titan X cards, which was almost $5,000 in GPUs alone. He said he wanted to build a beast that would be future-proof for at least 7 years. SLI died years ago; most people didn't see it coming, but it did. Another member bought a Titan Z for $3,000 and complained about the lack of support from Nvidia even during the Maxwell series. He said he expected Nvidia to give the best quality of support for his Titan Z for at least 10 years because of the price he paid.

Who knows what the future will bring, but whatever it brings, it is certain to make today's hardware look obsolete and puny in comparison.
$2,999. I can hardly believe that's how much the Titan Z went for. Amazing. According to the CPI Inflation Calculator, that's $3,797.91 in today's dollars. I had always wanted a Titan Z or an AMD R9 295X2, but they were crafted of unaffordium back then. Interestingly enough, the 4090 I paid $1,800 for in April would've cost $1,422 in 2014 dollars (cheaper even than the AMD R9 295X2!), so even top-tier GPU prices really have gone down, although the Titan Black was cheaper in cost-adjusted dollars than a 4090 is today (but not by much, maybe 100 bones).
 
The way this trend is going, the next generation of GPUs will come with 32/64GB.
 
The way this trend is going, the next generation of GPUs will come with 32/64GB.

If you give lazy developers/engineers more, why should they optimise? A dangerous path.
 
Because it doesn't work like that. A card with 24GB will ALLOCATE more; that doesn't identify actual utilisation.

Well, if it's allocating more than it requires, either the title is poorly optimised, or more is genuinely needed to keep up with larger asset management/compression (etc.), or a mix of both. But let's not make assumptions to push the idea that less is adequate when it clearly isn't, "especially going forward".

Anyway, it doesn't make sense to test this on a configuration with less VRAM or any memory-confined environment (a bottleneck). Smart game engines will "dynamically" compromise on asset/graphics quality if VRAM is cramped. We're already seeing graphics fidelity wane when titles get close to topping out VRAM, especially dynamic asset swapping, which is observably evident in real-time gameplay. More refined, lighter-weight optimisations have been seen to work nicely, but in some cases they trade the "optimal" state for a feasible but lesser-quality compromise (the illusion of makeshift "ultra", I suppose). To make matters worse, the slower mainstream bandwidths devs are contracted to target come paired with pre-designated, shrivelled VRAM utilisation, often leaving the impression that more VRAM is unavailing. The same applies to badly ported titles. Just because some heavy lifters in the memory department haven't succumbed to stuttering, crashing, artifacts, etc., it doesn't mean we're getting the full produce of a game's rendered resources at the highest quality setting.

On top of that, with RT/PT on the mainstream horizon, surely it's time to break open the VRAM seal across all performance segments (incl. 1080p hardware). If we're paying the global illumination tax, at the very least memory provisioning should be well balanced enough to support it.

I'm still trying to work out what this consumer-driven fascination is with preserving current VRAM capacities, and at lower/slower bandwidths at that. Hypothetically speaking, even if VRAM presented zero compromises/bottlenecks, when did "more VRAM" become a bad idea? I would much rather have developers unrestricted by a smaller palette, working less strenuously at cramming everything in, which always ends up in a shit-show. Seeing that consoles have adopted ~16GB, that alone is a pretty good indication developers will unload greater graphics eye-candy across the board. So far the 6-8GB mass market, which developers foremost respond to, has served as a limiting factor on graphics prowess. So which is it: preserve lower VRAM capacities, or increase them and unlock easily attainable graphics galore alongside what are already copiously fast modern graphics cards and CPUs?

It's either more VRAM or a game-changing 2x/4x compression innovation. Even Nvidia identifies where things are heading with VRAM exploitation: https://research.nvidia.com/labs/rtr/neural_texture_compression/

Oddly enough, we're all jumping on the trampoline at the sound of DirectStorage incorporation... but no, please, not the VRAM. What gives?
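
To make the "dynamic compromise" point concrete, here's a minimal sketch (my own illustration in C++, not taken from any real engine) of the kind of fallback logic an engine might run when the process creeps up on its VRAM budget: textures quietly lose their top mip levels, the frame rate stays intact, and the settings menu still says "ultra". The VramInfo struct and the thresholds are assumptions for illustration only.

```cpp
// Hypothetical sketch: choose a global texture mip bias from VRAM pressure.
// The budget/usage figures are assumed to come from the OS (see the DXGI
// query further down the thread); the thresholds are purely illustrative.
#include <cstdint>

struct VramInfo {
    uint64_t budget;        // bytes the OS currently lets this process commit
    uint64_t currentUsage;  // bytes this process has actually committed
};

// Returns how many mip levels to drop from every streamed texture.
// 0 = full quality; higher values = progressively blurrier "ultra".
int ChooseMipBias(const VramInfo& v) {
    const double pressure =
        static_cast<double>(v.currentUsage) / static_cast<double>(v.budget);
    if (pressure < 0.80) return 0;  // plenty of headroom: full-resolution mips
    if (pressure < 0.90) return 1;  // getting tight: drop the top mip (~75% of a texture's memory)
    if (pressure < 0.95) return 2;  // cramped: drop two mips
    return 3;                       // nearly out: degrade hard rather than stutter or crash
}
```

The point being: a card that never stutters or crashes can still be showing you mip-biased textures, which is exactly the quiet quality loss described above.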

If you give lazy developers/engineers more, why should they optimise? A dangerous path.

When did giving less require less optimisation?
 
But let's not make assumptions to push the idea that less is adequate when it clearly isn't, "especially going forward".

We have to make assumptions; after all, it's apparently impossible to test how much VRAM a game ALLOCATES on different cards. Because of reasons and stuff. It's far better to just use a 4090 at 560p, conclude nothing, and assume everything. Maybe we are meant to assume and not conclude; it certainly seems like it.
It does help drive sales of new cards and the drama factor of any "review". I know it's hard to drive views when no one cares about the new cards, so there is that.
 
We have to make assumptions; after all, it's apparently impossible to test how much VRAM a game ALLOCATES on different cards. Because of reasons and stuff. It's far better to just use a 4090 at 560p, conclude nothing, and assume everything. Maybe we are meant to assume and not conclude; it certainly seems like it.
It does help drive sales of new cards and the drama factor of any "review". I know it's hard to drive views when no one cares about the new cards, so there is that.

I'm not sure how varied allocations across limited/unlimited capacities help when determining max VRAM usage. Isn't that the primary objective? If you're just genuinely concerned with whether a given card is capable of running a particular title at a given setting, that's fine. I too wouldn't mind seeing how other cards compare, or what sacrifices/compromises ensue with older or inferior hardware. But max VRAM usage is max VRAM usage without limitations. No level of "assumption" is going to change that.
 
Please enlighten me.

Tested all the cards, but only posts the VRAM usage of one card:

[attached: the review's VRAM usage chart]

Isn't this feeding the fear-mongering?
That's one card tested to show the maximum VRAM usage, so you can tell if it fits inside any GPU.
What part of that is useless or fear mongering?
 
That's one card tested to show the maximum VRAM usage, so you can tell if it fits inside any GPU.
What part of that is useless or fear mongering?

Are you sure an 8GB card will show the same allocation as a 24GB card? Do the test and come back to me. No assumptions this time.
 
Well, if it's allocating more than it requires, either the title is poorly optimised, or more is genuinely needed to keep up with larger asset management/compression (etc.), or a mix of both. But let's not make assumptions to push the idea that less is adequate when it clearly isn't, "especially going forward".
It's not a matter of optimisation. Many games will allocate VRAM depending on what's available, because it can't hurt to have more. This is the default behaviour of UE, for instance. It's also clearly the behaviour of this title, since measured VRAM usage is much lower with the same settings on my 12GB card (around 9GB in UWQHD with RT, no FSR).
Measuring VRAM usage on a 24GB card at ultra is useless for determining how much VRAM is actually needed to run the game. It does nothing to inform on the behaviour of cards with less VRAM or at lower settings, i.e. for 99% of gamers. It's misleading and fuels the current hysteria regarding VRAM.

There is also the fact that the test does not specify how the VRAM usage was measured. The default value shown by Afterburner represents whole-system VRAM usage, for instance. So this graph really is doing a disservice to TPU readers.
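
For what it's worth, the per-process numbers being described here can be read straight from DXGI on Windows. Below is a minimal sketch (my own, not how TPU or Afterburner measure anything) that prints the OS-granted budget and the calling process's actually committed VRAM, assuming a single-GPU system; adapter-wide tools add every other process plus the desktop compositor on top of this, which is why the figures rarely match.

```cpp
// Minimal sketch: per-process VRAM budget/usage via DXGI.
// Build on Windows; requires dxgi1_4.h (Windows 10+) and dxgi.lib.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;   // first GPU only

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO info{};
    if (FAILED(adapter3->QueryVideoMemoryInfo(
            0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info))) return 1;

    // Budget: how much VRAM the OS will currently let this process commit.
    // CurrentUsage: how much this process has actually committed right now.
    printf("budget        : %llu MB\n", info.Budget / (1024ull * 1024ull));
    printf("current usage : %llu MB\n", info.CurrentUsage / (1024ull * 1024ull));
    return 0;
}
```

The gap between "budget" and "current usage", and between either of them and an adapter-wide reading, is exactly why a VRAM chart needs to say which number it reports.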

Are you sure an 8GB card will show the same allocation as a 24GB card? Do the test and come back to me. No assumptions this time.
The test itself proves it does not, at least not in the selected scene. Look at the performance of the 3080 and 4070 for instance.
 
That's one card tested to show the maximum VRAM usage, so you can tell if it fits inside any GPU.
What part of that is useless or fear mongering?
Today, most if not all engines do some sort of texture streaming into a dynamic texture pool in VRAM. While there is a practical minimum size for said pool depending on the game's needs, making it bigger can be beneficial, or at least not harmful. Given a reasonably good texture streaming algorithm, that minimum size might be surprisingly small.

Properly testing this is a real bitch though.
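
For a rough picture of why a bigger card ends up showing bigger numbers, here's a sketch of a clamp-to-available-VRAM pool policy (the constants are made up; Unreal's r.Streaming.PoolSize cvar is the kind of real-world knob this stands in for):

```cpp
// Hypothetical startup sizing for a streaming texture pool.
// All constants are illustrative assumptions, not from any shipping engine.
#include <algorithm>
#include <cstdint>

constexpr uint64_t MiB = 1024ull * 1024ull;

uint64_t SizeTexturePool(uint64_t totalVramBytes) {
    const uint64_t reserved = 2048 * MiB;   // render targets, buffers, OS/compositor headroom
    const uint64_t minPool  = 1536 * MiB;   // practical floor: below this, streaming thrashes
    const uint64_t maxPool  = 12288 * MiB;  // above this, extra pool is cache, not necessity

    const uint64_t pool =
        totalVramBytes > reserved ? totalVramBytes - reserved : minPool;
    return std::clamp(pool, minPool, maxPool);  // C++17
}
```

Under this policy an 8 GB card gets roughly a 6 GB pool while a 24 GB card hits the 12 GB cap, so the bigger card "uses" far more VRAM while rendering the exact same frame; the surplus is just a larger streaming cache, which is why allocation on a 24 GB card says little about the practical minimum.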
 
With my brand-new RX 6600 with 8GB of VRAM, I should be "future-proofed" for at least the next 5 years.
 
When did giving less require less optimisation?
A lesser-quality game, unoptimized to work across several tiers of GPU and CPU hardware, requires more of said hardware to run well. Remember, he said:
If you give lazy developers/engineers more, why should they optimise? A dangerous path.
So yeah, if you give them way overspecced hardware relative to consoles that (in theory) already run well and look good enough, like gobs of unnecessary VRAM, CPU power, GPU compute power, etc., why would they optimize further?

I mean, we're already bordering on some systems having 2x+ the amount of physical hardware a console game needs to run at XYZ settings, potentially feeding it unnecessarily, like the big VRAM boogeyman, and it could be a dangerous path. Nek minnit we might re-enter situations like craptacular budget/low-end cards shipping with 2/3/4x the amount of VRAM they need and can reasonably use relative to their compute power. None of this is to deny that some current cards could use more than they have, but simply advocating for all of them to have shittonnes doesn't actually solve the problem at all; in fact, it exacerbates it.

My solution? Allow AIBs to make double-capacity memory cards at their own markup. Then people who want their GPU to last an almost unreasonable amount of time, like 6+ years or 3+ generations, can spend big on VRAM without buying the top SKU, and run max textures far beyond the point where their GPU can run max anything else. Would a 20GB 3080 be cool? Sure! Would I have bought one, given the choice at launch, for $100-200 USD more than the 10GB? Hell no. Some might have, and I don't begrudge them that, hence why the option could be useful and well received. Then at least they'd get ahead of any boogeyman arguments by giving the everyman the option to have bought double the VRAM for a GPU that can only make purposeful use of it in niche situations: modding, multiple fucking terrible ports in a row, or playing the ultra long game.
 
Imagine how stupid this discussion is retrospectively when you look back every couple of years.

2000: These lazy developers, 32MB is enough.
2005: These lazy developers, 128MB is enough.
2010: These lazy developers, 1GB is enough.
2015: These lazy developers, 4GB is enough.
2023: These lazy developers, 8GB is enough.

You get the picture.
 
Imagine how stupid this discussion is retrospectively when you look back every couple of years.

2000: These lazy developers, 32MB is enough.
2005: These lazy developers, 128MB is enough.
2010: These lazy developers, 1GB is enough.
2015: These lazy developers, 4GB is enough.
2023: These lazy developers, 8GB is enough.

You get the picture.
Wow what a meaningful contribution to the discussion.

18 GB > 8 GB FYI.

18 GB > what almost every other game that offers 4K textures uses.
 
Wow what a meaningful contribution to the discussion.
About as meaningful as 15 pages of people being in denial about the fact that VRAM requirements go up as the years go by.
 
About as meaningful as 15 pages of people being in denial about the fact that VRAM requirements go up as the years go by.
Ah yes, an accurate and non-disingenuous representation from you (as usual :laugh:) of the discussion: a specific game taking much more VRAM than is justified considering its fidelity, when compared to other games released recently, and whether a badly produced game implies that anything under a halo-tier 24 GB card is obsolete.
 
Every generation, you see some games come out that just have shitty coding and optimization. It's a poor port. There's no excuse for not optimizing VRAM routines.
 
With my brand-new RX 6600 with 8GB of VRAM, I should be "future-proofed" for at least the next 5 years.
If you believe buying an entry-level video card will "future-proof" you for five years, then allow me to sell you a WD 256GB SSD, as it will hold your entire PC video game collection for the next five years, plus the OS, apps, pics, etc., etc. Plenty of room!


About as meaningful as 15 pages of people being in denial about the fact that VRAM requirements go up as the years go by.
Most of the comments are not in denial and know VRAM requirements go up. It's when those requirements are A) ludicrous or B) the game is poorly optimized to scale down to market-level GPU performance that people get ticked off, especially when they either just recently purchased a video card or the cost to move up to higher VRAM is absurd compared to what it was just three years ago.
 
If it takes AMD-featured or sponsored titles to push for higher VRAM provisions, I'll back that too! You call it AMD, I call it progress!
For two consecutive days I played with the UHD 770. I call it pleasure. The pleasure of playing.
Is there really a game where you have to count the blades of grass and the apples on the tree?
 
It's when those requirements are A) ludicrous
How exactly do you determine if the requirements are 'ludicrous', whatever that even means? If a game uses more VRAM than any previous game, does that count as ludicrous? Because obviously that line of thought is ridiculous; as VRAM requirements go up, at some point there is always a game that needs more VRAM than any game before it. That's how that works.

B) the game is poorly optimized to scale down to market-level GPU performance that people get ticked off, especially when they either just recently purchased a video card or the cost to move up to higher VRAM is absurd compared to what it was just three years ago.
Is your thought process simply:

low VRAM usage -> good optimization
high VRAM usage -> bad optimization

Are you a game developer? How do you know if something is poorly optimized?
 
How exactly do you determine if the requirements are 'ludicrous', whatever that even means? If a game uses more VRAM than any previous game, does that count as ludicrous? Because obviously that line of thought is ridiculous; as VRAM requirements go up, at some point there is always a game that needs more VRAM than any game before it. That's how that works.


Is your thought process simply:

low VRAM usage -> good optimization
high VRAM usage -> bad optimization

Are you a game developer? How do you know if something is poorly optimized?
Because people who aren't clowns can look at two similar games built on the same engine, see that one has twice or more the VRAM requirements for zero additional, or perhaps even worse, fidelity, and then draw conclusions.
 
I still have to disagree. I have a 6700 XT and I play at 4K60 without any problems.
 
Because people who aren't clowns can look at two similar games built on the same engine, see that one has twice or more the VRAM requirements for zero additional, or perhaps even worse, fidelity, and then draw conclusions.

The biggest clowns by far are those who don't realize that, despite this, those games run just fine even without 24GB.

[attached: benchmark chart]


The 4070 Ti is still faster than a 3090 despite having half the VRAM.
 
Imagine how stupid this discussion is retrospectively when you look back every couple of years.

2000: These lazy developers, 32MB is enough.
2005: These lazy developers, 128MB is enough.
2010: These lazy developers, 1GB is enough.
2015: These lazy developers, 4GB is enough.
2023: These lazy developers, 8GB is enough.

You get the picture.

Quote me another time in the past when every single AAA game was absolute dog shit, and worse, when the improvements were hardly visible over past games; in some cases they look worse, perform worse, play worse, stutter worse.

Not even a $2,000 GPU can get you a decent frame rate with no stutters, and they still don't look that much better than 2-3 year old games.
 