
Is 24 GB of video RAM worth it for gaming nowadays?

  • Yes: 66 votes (41.5%)
  • No: 93 votes (58.5%)
  • Total voters: 159
One thing to note about VRAM usage in benchmarks: these are clean systems, or systems with very little on them. They don't have multiple applications open that consume a wide range of GPU resources. I don't have any games open, but I do have a lot of windows and different applications installed that consume GPU resources. My VRAM utilization right now is 4.3GB. If I close everything or restart, it will drop to 1~1.5GB. But why would I do that? I have seen my VRAM usage go as high as 6GB, so I typically only have 10~12GB of VRAM available for games at 4k output. I would absolutely love to have 24GB of VRAM. 16GB is the absolute minimum for 4k, and maybe even for 1440p. I'd only go for a 12GB card if I had a 1080p monitor, though that would only leave 6~8GB of free VRAM, which I guess is good enough for 1080p? It's been a super long time since I played at that resolution.

Now, if your system is used strictly for gaming, you close your browser when you start a game, and the only other application you have installed is something like Discord, then you could get by with less.
There's also a part of system memory that's reserved to act as a sort of backup VRAM. If your actual VRAM gets full, currently unused things get dumped there (or even more things if the game needs more).

There's also the effect of the OS loading more things into your RAM and VRAM if you have more available. For example, I see around 1 GB VRAM used on the Windows desktop on my 6750 XT. I don't see nearly as much on the 1030 in my HTPC.
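For anyone curious how much VRAM their desktop and background apps already eat before a game even launches, here's a minimal sketch of checking that idle baseline (an illustration only; it assumes an NVIDIA card with nvidia-smi on the PATH):

    import subprocess

    # Ask the driver how much VRAM is in use right now (values are in MiB).
    out = subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=memory.used,memory.total",
            "--format=csv,noheader,nounits",
        ],
        text=True,
    )

    # First line = first GPU, e.g. "4365, 24576"
    used_mb, total_mb = (int(x) for x in out.strip().splitlines()[0].split(", "))
    print(f"{used_mb} MB of {total_mb} MB VRAM already in use "
          f"({total_mb - used_mb} MB left for a game)")

On AMD cards the same idea works with the numbers Task Manager or HWiNFO report; the point is just to see your idle baseline before the game grabs the rest.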

Except it now looks like it isn't several years old, so it totally depends on whether you've already played it, and which version. That's my take on it anyway. I had only played a few small parts of it on my friend's PS4 years ago, and it's still totally worth it to me, especially if the hardest two modes are noticeably harder.

The gameplay feels a bit easier than I thought it would: you only have to kill one Bloater (well, two if you count the one Ellie kills, which goes down a bit easier), you don't need to use many of the weapons, and you don't need a lot of the upgrades, with the ending being such an easy scenario of sneak kills, capped off by an uber-easy final sneak.

That said, I'm hoping Survivor and Grounded modes will make up for the lack of challenge on Hard, because compared to games like Dead Space and The Evil Within 1, it feels more like adventure horror than survival horror so far. It kind of does the reverse of most horror games, in that the story is the most compelling thing about it, and the story really is good.
I only play games for their story and atmosphere, so I'll definitely buy it when it gets cheaper. :)

I've never played the original, but a lot of people have, and there have been a lot of videos, reviews, articles, etc. about it that were simply impossible to avoid, so it still feels like an old game. Besides, I know it's just a PC port, there's no way around it. The developers didn't put as much effort into it as they would have had to when making a new game, and that isn't worth £30, let alone £60 to me.
 
For the end user that is entirely irrelevant tbh. Only the performance matters.
For most end users, sure, but if you are going to have a debate about it, then it's easier to discuss one thing at a time rather than jump around.
A lot has been said about the game being unreasonably CPU-heavy.
The only thing I've seen about it (granted, I'm not looking for it) was the PC Gamer review, where the reviewer stated the game pushed his 9700K to full usage. Most likely the review was done prior to any patches in order to meet a deadline, so CPU performance may be better since the review, judging from your statement (and Frag's).
 
Wait until you spend day two on a tech/gamer forum and we discuss hardware people feel won't be to their liking.

LOL, been there, done that. Tech forums invariably wind up with arguments that go nowhere. I was one of the first on the tech forum we were talking about to say the 8700K was going to be a beast of a CPU when the announcement first hit. Years later I'm still running it, and still loving it, with no need whatsoever for an OC.

For the end user that is entirely irrelevant tbh. Only the performance matters.

A lot has been said about the game being unreasonably CPU-heavy, and I don't see that being the case. My 11700 isn't exactly the fastest CPU, and the performance is stellar.

That's what I keep thinking through all TLoU Part I discussions. Too many people hung up on the numbers instead of the actual performance.

I only play games for their story and atmosphere, so I'll definitely buy it when it gets cheaper. :)

I've never played the original, but a lot of people have, and there have been a lot of videos, reviews, articles, etc. about it that were simply impossible to avoid, so it still feels like an old game. Besides, I know it's just a PC port, there's no way around it. The developers didn't put as much effort into it as they would have had to when making a new game, and that isn't worth £30, let alone £60 to me.

That sounds a lot like you're claiming price gouging, but only because you spoiled the game for yourself. C'mon man, you can do better than that! :D

Seriously though, I get what you're saying: the assets are already there to draw from. However, when you look at the difference in visual quality, all the graphics features and settings added, the RT implementation, and the testing required, it took a fair bit of time just the same. And, like it or not, there are FAR fewer people who buy AAA games on PC compared to on console, so it wouldn't be worth it to them to charge only half the price because of that alone.

Some things about the gaming industry, like the attrition vs. high-demand disparities between platforms, we cannot control, and neither developer, publisher, nor consumer is at fault. It just is what it is.

For most end users, sure, but if you are going to have a debate about it, then it's easier to discuss one thing at a time rather than jump around.

The only thing I've seen about it (granted, I'm not looking for it) was the PC Gamer review, where the reviewer stated the game pushed his 9700K to full usage. Most likely the review was done prior to any patches in order to meet a deadline, so CPU performance may be better since the review, judging from your statement (and Frag's).

Yeah, and I even get a bit cynical about highly regarded sites like Digital Foundry when they start out poking fun at the performance and showing blurry-as-hell brick walls without even saying what build version it was. Then I go in-game, on patch 1.0.1.0 mind you, and find no such blurriness on the same Med Textures. It actually makes me wonder if they somehow got hold of a prerelease version that hadn't gotten ANY patching.

Digital Foundry used to be my number-one source for optimized PC settings in games; now I'm not so sure I can trust that what they are saying and showing is even true.

THAT SAID, game studios should be damn careful about releasing unpatched versions of their games without THOROUGHLY checking them for performance and bug problems. So IMO, if Iron Galaxy did that, bad on them, as it's only asking for TONS of bad feedback and can ruin the reputation of a game at its critical release time.
 
I really don't get how understanding what a CPU bottleneck is, and how to trigger it, is so hard for "some" people. But as you correctly said @fevgatos, you just lower the resolution until you don't see an fps increase by lowering it any further... and that is your CPU performance level.

4k - 82 fps



1440p - 121 fps, but only 92% GPU load, so really we already know that it is CPU limited



1080p - as expected, didn't increase fps any further. So in this particular scene, with an 11700F CPU, the CPU performance level is ~120 fps.



Then one could argue that a game is CPU-heavy if you aren't able to obtain a satisfying fps and the CPU bottleneck (i.e. GPU load below 100%) is apparent all the time - in other words, the CPU performance (or lack thereof) is the culprit behind the poor performance in the game.

Judging CPU performance in a game on anything other than the game's performance seems rather ridiculous.
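To make the method concrete, here's a rough sketch of the same logic in Python. The fps numbers match the screenshots above; the 4K GPU load value and the 3% tolerance are illustrative assumptions, not hard rules:

    # Estimate the CPU performance level from a resolution sweep.
    # Each sample is (resolution, avg_fps, gpu_load_percent), ordered high -> low resolution.
    samples = [
        ("4K",    82, 99),
        ("1440p", 121, 92),
        ("1080p", 120, 70),
    ]

    def cpu_performance_level(samples, fps_tolerance=0.03):
        """Return the fps at which lowering resolution stops helping,
        i.e. the point where the CPU becomes the limit."""
        prev_fps = None
        for res, fps, gpu_load in samples:
            if prev_fps is not None:
                gain = (fps - prev_fps) / prev_fps
                if gain <= fps_tolerance:         # no meaningful fps gain any more
                    return prev_fps               # CPU-limited at this level
            prev_fps = fps
        return prev_fps                           # never hit the CPU limit in this sweep

    print(cpu_performance_level(samples))         # ~120 fps for the 11700F in this scene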
Yeah, but look at your CPU usage to achieve that result. You are hitting 90% on a relatively empty scene, man. You can't tell me it's not a heavy game. Lots of games (Atomic Heart, for example) get similar or much higher framerates with much lower CPU utilization.
 
24 GB VRAM is overkill. Also just because a card is using a lot of VRAM doesn't necessarily mean the game requires that much VRAM. In some cases the engine is loading up the VRAM just because it's there.
 
What I find fascinating is that Plague Tale requires 4.5 to 6GB of VRAM at 4k ultra. TLOU requires 9.5 to 11GB of VRAM at 720p. I'm sure this has nothing to do with lazy devs, it's just greedy Nvidia not offering enough VRAM, right.
 
Yeah, but look at your CPU usage to achieve that result. You are hitting 90% on a relatively empty scene, man. You can't tell me it's not a heavy game. Lots of games (Atomic Heart, for example) get similar or much higher framerates with much lower CPU utilization.

You really don't seem to get it - in a CPU-bottlenecked scenario, you want the usage to be as high as possible. In the opposite scenario, you are leaving a ton of performance on the table when it's just a single thread bottlenecking the performance.

As for other games getting higher fps with less usage - there are many reasons for that, and in this context they are rather irrelevant tbh. But the primary reason is that this game was specifically made for the PS5's decoder engine - something we don't have on PC, so it's done by the CPU instead - much like in the Spider-Man game. The same will be the case with other Sony games that are PS5 ports.
But as said, it's really rather irrelevant, as long as the performance is good, which it is.

What I find fascinating is that Plague Tale requires 4.5 to 6GB of VRAM at 4k ultra. TLOU requires 9.5 to 11GB of VRAM at 720p. I'm sure this has nothing to do with lazy devs, it's just greedy Nvidia not offering enough VRAM, right.

It's a question of how much bandwidth you want to use on streaming textures versus how much VRAM you want to use storing them. Using as much VRAM as possible as a cache usually provides better performance, provided you don't go over your VRAM limit.
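To illustrate the trade-off, here's a toy sketch of a budget-aware texture cache with LRU eviction (all names, sizes and the budget are made up; real engines are far more sophisticated):

    from collections import OrderedDict

    class TextureCache:
        """Keep textures resident in VRAM up to a budget; evict least recently used."""
        def __init__(self, budget_mb):
            self.budget_mb = budget_mb
            self.resident = OrderedDict()   # texture_id -> size_mb, in LRU order

        def request(self, texture_id, size_mb):
            if texture_id in self.resident:
                self.resident.move_to_end(texture_id)   # cache hit: no streaming needed
                return "hit"
            # Over budget: evict until the new texture fits.
            while self.resident and sum(self.resident.values()) + size_mb > self.budget_mb:
                self.resident.popitem(last=False)
            self.resident[texture_id] = size_mb          # miss: stream it in over the bus
            return "miss"

    cache = TextureCache(budget_mb=10_000)       # e.g. ~10 GB usable on a 12 GB card
    print(cache.request("brick_wall_4k", 64))    # "miss" the first time, "hit" afterwards

The bigger the budget, the fewer misses, which is why filling VRAM as a cache tends to help right up until you run out and eviction starts thrashing.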
 
What I find fascinating is that Plague Tale requires 4.5 to 6GB of VRAM at 4k ultra. TLOU requires 9.5 to 11GB of VRAM at 720p. I'm sure this has nothing to do with lazy devs, it's just greedy Nvidia not offering enough VRAM, right.

I think it's about priorities, not laziness.
Console devs make their games that way so they lean first on the CPU, then on the RAM, and only then on the GPU.
The CPUs in consoles are decent enough and the amount of RAM is marginally acceptable, but the GPUs are always the limiting factor.

So they develop the games the exact opposite way to how they would develop them if PC were the lead platform.
 
As for the 6800 having too much VRAM... a GPU can't have "too much VRAM". That's like saying a car has too big a gas tank.
That's a very poor analogy, and actually really great at proving why you're incorrect.

Do you know why planes don't fill up to 100%, and instead get as accurate a weight for passengers and cargo as they can so they carry as little fuel as possible? Because they have to burn fuel to carry the weight of the extra fuel.

Do you know why GPUs don't have heaps of VRAM, cost be damned?

Because they use power and produce heat. My 3090 with its 375W limit loses performance the more VRAM is used, because it can't power all the VRAM and the GPU at full speed and stay under 375W - I need to undervolt or run a 450W BIOS to use it. That's why Nvidia didn't include more VRAM on the other models in the series, except the 3090 Ti, which used half as many VRAM chips at double density once they were available to avoid that problem.

You're saying a car can never have enough fuel, so let's attach a fuel tanker to the back - let's go even further and put a refinery back there!
(You'll inevitably say "that's not the point", "you took it too far", etc., but no, that was your claim. YOU made the claim that you can never have too much, but there are drawbacks to doing so.)
 
24 GB VRAM is overkill. Also just because a card is using a lot of VRAM doesn't necessarily mean the game requires that much VRAM. In some cases the engine is loading up the VRAM just because it's there.

The problem with this logic is that it technically applies every single time there's any advance in capacity. Ten years ago, when the original Titan launched, the only thing one would replace in your post is "24" with "6" to make the exact same argument against it.

Yet... it's been six years since the GTX 1060 made the same 6 GB the "minimum acceptable" VRAM amount, and gaming on a 4 GB card today is an exercise in patience to find which trade-off you're going to go with to play your game, no matter how powerful the GPU may be.

24 GB isn't overkill, IMO. It's an ample and adequate amount for a high end GPU which is expected to do high end things.

We'd eventually have had 24 GB cards in the more popular segments, were it not for the... death of the budget and performance segment GPUs. What an anomaly this generation has been...
 
You really don't seem to get it - in a CPU-bottlenecked scenario, you want the usage to be as high as possible. In the opposite scenario, you are leaving a ton of performance on the table when it's just a single thread bottlenecking the performance.

As for other games getting higher fps with less usage - there are many reasons for that, and in this context they are rather irrelevant tbh. But the primary reason is that this game was specifically made for the PS5's decoder engine - something we don't have on PC, so it's done by the CPU instead - much like in the Spider-Man game. The same will be the case with other Sony games that are PS5 ports.
But as said, it's really rather irrelevant, as long as the performance is good, which it is.



It's a question of how much bandwidth you want to use on streaming textures versus how much VRAM you want to use storing them. Using as much VRAM as possible as a cache usually provides better performance, provided you don't go over your VRAM limit.
You don't ever want a bottleneck.
A CPU bottleneck results in stuttering, while a GPU bottleneck results in the CPU rendering frames ahead and input lag.
 
That's a very poor analogy, and actually really great at proving why you're incorrect.

Do you know why planes don't fill up to 100%, and instead get as accurate a weight for passengers and cargo as they can so they carry as little fuel as possible? Because they have to burn fuel to carry the weight of the extra fuel.

Do you know why GPUs don't have heaps of VRAM, cost be damned?

Because they use power and produce heat. My 3090 with its 375W limit loses performance the more VRAM is used, because it can't power all the VRAM and the GPU at full speed and stay under 375W - I need to undervolt or run a 450W BIOS to use it. That's why Nvidia didn't include more VRAM on the other models in the series, except the 3090 Ti, which used half as many VRAM chips at double density once they were available to avoid that problem.

You're saying a car can never have enough fuel, so let's attach a fuel tanker to the back - let's go even further and put a refinery back there!
(You'll inevitably say "that's not the point", "you took it too far", etc., but no, that was your claim. YOU made the claim that you can never have too much, but there are drawbacks to doing so.)

That's not the point, you took it too far! :p

Seriously though, I did not find the VRAM to be a limiting factor at all on my 3090 :)
 
That's a very poor analogy, and actually really great at proving why you're incorrect.

Do you know why planes don't fill up to 100%, and instead get as accurate a weight for passengers and cargo as they can so they carry as little fuel as possible? Because they have to burn fuel to carry the weight of the extra fuel.

Do you know why GPUs don't have heaps of VRAM, cost be damned?

Because they use power and produce heat. My 3090 with its 375W limit loses performance the more VRAM is used, because it can't power all the VRAM and the GPU at full speed and stay under 375W - I need to undervolt or run a 450W BIOS to use it. That's why Nvidia didn't include more VRAM on the other models in the series, except the 3090 Ti, which used half as many VRAM chips at double density once they were available to avoid that problem.

You're saying a car can never have enough fuel, so let's attach a fuel tanker to the back - let's go even further and put a refinery back there!
(You'll inevitably say "that's not the point", "you took it too far", etc., but no, that was your claim. YOU made the claim that you can never have too much, but there are drawbacks to doing so.)

To be fair, this is a problem specific to the 3090; the 3090 Ti solved it, and the Titan RTX wasn't really affected. The blame lies more with the excessive amount of power-hungry first-generation GDDR6X than with the capacity itself. Twenty-four chips that suck 7 or 8 watts each was a bad idea, but they needed a product to position as the Titan RTX's successor.
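Taking the figures in the posts above at face value (they're the posters' numbers, not datasheet values), the back-of-the-envelope math looks like this:

    chips = 24                  # GDDR6X chips on the 3090, per the post above
    watts_per_chip = 7.5        # midpoint of the "7, 8 watts each" claim above
    board_limit = 375           # stock 3090 power limit quoted above

    vram_power = chips * watts_per_chip
    print(f"VRAM alone: ~{vram_power:.0f} W, leaving ~{board_limit - vram_power:.0f} W "
          f"for the GPU core, fans and conversion losses")

That works out to roughly 180 W for memory against a 375 W board limit, which is the squeeze being described.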
 
You don't ever want a bottleneck.
A CPU bottleneck results in stuttering, while a GPU bottleneck results in the CPU rendering frames ahead and input lag.

Of course you don't - you will have a lot more frametime variance while CPU-bottlenecked. But I'm saying he got it backwards about high overall CPU load being bad - the opposite is what's bad: having only one thread maxed out while the rest of the CPU barely does anything. Sure, it will draw fewer watts, but it will also leave A LOT of potential performance on the table versus if the game had properly parallelized CPU threads.
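A crude way to picture the difference between "high usage because the work is spread out" and "one thread pegged while the rest idle" (the load numbers here are made up for illustration):

    # Per-core load in percent, one entry per core.
    per_core_load = [98, 14, 11, 9, 12, 10, 8, 13]

    average = sum(per_core_load) / len(per_core_load)
    hottest = max(per_core_load)

    if hottest > 90 and average < 50:
        print(f"Likely single-thread limited: one core at {hottest}%, average only {average:.0f}%")
    else:
        print(f"Load is reasonably spread: hottest core at {hottest}%, average {average:.0f}%")

The first case is the "performance left on the table" scenario: you're CPU-bottlenecked while most of the chip sits idle. Overall usage being high in a CPU-bound game is the healthier of the two.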
 
That sounds a lot like you're claiming price gouging, but only because you spoiled the game for yourself. C'mon man, you can do better than that! :D

Seriously though, I get what you're saying: the assets are already there to draw from. However, when you look at the difference in visual quality, all the graphics features and settings added, the RT implementation, and the testing required, it took a fair bit of time just the same. And, like it or not, there are FAR fewer people who buy AAA games on PC compared to on console, so it wouldn't be worth it to them to charge only half the price because of that alone.

Some things about the gaming industry, like the attrition vs. high-demand disparities between platforms, we cannot control, and neither developer, publisher, nor consumer is at fault. It just is what it is.
So basically, look at the game as a remake, not as a resold old product. I get that. :)

It's only that when it comes to remakes, I can't help but think of the ones like Black Mesa, which isn't just a graphical upgrade over Half-Life, but a thoroughly redone game which was free in its early days, then started selling for something like 10 or 15 quid once Valve took on the devs to finish the project with proper funding. Now, that's something I consider worth your money! I also love what the devs did with Homeworld Remastered, not to mention that game wasn't too expensive at launch, either. As a big fan of The Witcher, I can only endorse the recent TW3 upgrade because it comes as a free patch for original owners, not as a new game (which it is definitely not).

The problem is, 9 out of 10 remakes these days are nothing more than slight graphical upgrades at extortionate prices, and TLoU is no exception, I'm afraid. Like I said, I'll buy it once its price drops to about £10, because I'm eager to play it. But that's as much as a remake is worth to me, unfortunately.

Accepting bad prices as "market conditions" and shrugging them off saying "that's just how it is these days" is the wrong thing to do. We have the power to turn trends around when we vote with our wallets, and that's exactly what I'm doing.
 
To be fair, this is a problem specific to the 3090; the 3090 Ti solved it, and the Titan RTX wasn't really affected. The blame lies more with the excessive amount of power-hungry first-generation GDDR6X than with the capacity itself. Twenty-four chips that suck 7 or 8 watts each was a bad idea, but they needed a product to position as the Titan RTX's successor.

They did indeed. The amount of VRAM the 3090 got will also make it the only 30-series card to age well.
 
We have the power to turn trends around when we vote with our wallets, and that's exactly what I'm doing.

Yeah, well, people have been saying that for years, and look what good it's done: very little. If I were to be as nitpicky as you imply we should be, I'd be bored to death with the endless waiting.
 
The problem with this logic is that it technically applies every single time there's any advance in capacity. Ten years ago, when the original Titan launched, the only thing one would replace in your post is "24" with "6" to make the exact same argument against it.

Yet... it's been six years since the GTX 1060 made the same 6 GB the "minimum acceptable" VRAM amount, and gaming on a 4 GB card today is an exercise in patience to find which trade-off you're going to go with to play your game, no matter how powerful the GPU may be.

24 GB isn't overkill, IMO. It's an ample and adequate amount for a high end GPU which is expected to do high end things.

We'd eventually have had 24 GB cards in the more popular segments, were it not for the... death of the budget and performance segment GPUs. What an anomaly this generation has been...
Yeah, and if the original Titan had had 24 GB of VRAM, it would still be borderline useless today, because the VRAM is only useful if the card is powerful enough to use it.

The point of this thread is that there are obvious scenarios where cards have more VRAM than they need or can use. For anything below halo cards today, 24 GB of VRAM is pointless.

I'd take a 4080 over a 3090 any day, because while it has 8 GB less VRAM and is therefore bad according to many in this thread, it's going to deliver a much better experience.
 
Yeah, well, people have been saying that for years, and look what good it's done: very little. If I were to be as nitpicky as you imply we should be, I'd be bored to death with the endless waiting.
So what would be the alternative, hand over your money for things you find overpriced? I believe both you and @AusWolf have a right to spend your money as you please; that doesn't mean your opinions have to align with how you spend it or don't spend it.
 
Yeah, well, people have been saying that for years, and look what good it's done: very little.
That's because not enough people have been saying that. The majority buys anything for any price and puts up with everything that entertainment companies say or do.

If I were to be as nitpicky as you imply we should be, I'd be bored to death with the endless waiting.
I'm not bored. :) I've got more than 500 games on Steam and GOG, including old classics that I'm always happy to play again, and lots of other titles that I bought on sale but haven't tried yet.

Edit: I'm not saying one should be nitpicky about this stuff. I'm just stating that I am.
 
You don't ever want a bottleneck.
A CPU bottleneck results in stuttering, while a GPU bottleneck results in the CPU rendering frames ahead and input lag.

You always have a bottleneck somewhere; there is no such thing as a "bottleneck-free system", otherwise you'd get what? Infinite frames per second? And no, neither of those things has to mean stuttering or input lag; that's not how it works. According to this logic you'd always get either stutter or input lag, which makes no sense.

CPU bottleneck => lower bound on CPU time for each frame
GPU bottleneck => lower bound on GPU time for each frame

That's all these things mean, nothing more nothing less.
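A minimal way to picture it, as a simplified pipelined model where whichever stage is slower sets the pace (the millisecond figures are illustrative, roughly matching the fps numbers posted earlier):

    def frame_rate(cpu_ms_per_frame, gpu_ms_per_frame):
        bottleneck_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)   # slower stage dominates
        return 1000.0 / bottleneck_ms

    print(frame_rate(cpu_ms_per_frame=8.3, gpu_ms_per_frame=12.2))  # ~82 fps, GPU-bound
    print(frame_rate(cpu_ms_per_frame=8.3, gpu_ms_per_frame=6.0))   # ~120 fps, CPU-bound

Neither case implies stutter by itself; it's frame-time variance, not which component is the limit, that you actually feel.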

I'd expect it to be pretty heavy
I don't. Why should it be heavy? Like I said, the game logic is still that of a game that's 10 years old; there is obviously something amiss about CPU performance, and I don't see what it could be other than legacy code.

The aforementioned Crysis Remaster was also updated to CryEngine 3, yet it had the same backend logic (the developers admitted it), so just because a game is ported to a new engine doesn't mean it's actually rewritten from the ground up, and I bet it's the same story here.
 
That's a very poor analogy, and actually really great at proving why you're incorrect.

Do you know why planes don't fill up to 100%, and instead get as accurate a weight for passengers and cargo as they can so they carry as little fuel as possible? Because they have to burn fuel to carry the weight of the extra fuel.
Agreed, and I'll do you one better with the fuel analogy. Think of ultimate automotive performance in race cars: open-wheel cars like F1 are extremely weight-conscious. Put in too big a gas tank and you weigh the car down, preventing it from achieving top speed; too small a gas tank and you create a situation where the car needs more pit stops. You want a just-right-sized tank that lets you achieve top speed yet carries you through the race with minimal stops. Similarly with video cards, adding more RAM than the chip can effectively use just increases power draw, can create more heat internally, and adds cost for the end user. Too little and you can hamper performance, as we have seen. You want the just-right amount.
You don't ever want a bottleneck.
A CPU bottleneck results in stuttering, while a GPU bottleneck results in the CPU rendering frames ahead and input lag.
I know what you are saying, but every system has a bottleneck, be it CPU, GPU, SSD/HDD, monitor, RAM, games/software, even the PSU. Really it's: you don't want a bottleneck that severely prevents your hardware from performing up to its full potential, with "severely" being the key word.
 
You always have a bottleneck somewhere; there is no such thing as a "bottleneck-free system", otherwise you'd get what? Infinite frames per second? And no, neither of those things has to mean stuttering or input lag; that's not how it works. According to this logic you'd always get either stutter or input lag, which makes no sense.

CPU bottleneck => lower bound on CPU time for each frame
GPU bottleneck => lower bound on GPU time for each frame

That's all these things mean, nothing more nothing less.

I think he means that ideally you want to use an fps limiter - which is also what I do.
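For reference, this is all an fps limiter really does conceptually: pad each frame out to a fixed frame-time budget. A toy sketch, not how RTSS or in-game limiters are actually implemented:

    import time

    TARGET_FPS = 117                       # e.g. a few fps under a 120 Hz refresh rate
    frame_budget = 1.0 / TARGET_FPS

    def run_frame(render):
        start = time.perf_counter()
        render()                                    # simulate + render the frame
        elapsed = time.perf_counter() - start
        if elapsed < frame_budget:
            time.sleep(frame_budget - elapsed)      # wait out the rest of the budget

    # Demo: pretend each frame takes 4 ms of real work.
    run_frame(lambda: time.sleep(0.004))

The idea is that the GPU never runs flat out, so frame times stay even and the render queue stays short.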
 
You always have a bottleneck somewhere; there is no such thing as a "bottleneck-free system", otherwise you'd get what? Infinite frames per second? And no, neither of those things has to mean stuttering or input lag; that's not how it works. According to this logic you'd always get either stutter or input lag, which makes no sense.

CPU bottleneck => lower bound on CPU time for each frame
GPU bottleneck => lower bound on GPU time for each frame

That's all these things mean, nothing more nothing less.
I know what you are saying, but every system has a bottleneck, be it CPU, GPU, SSD/HDD, monitor, RAM, games/software, even the PSU. Really it's: you don't want a bottleneck that severely prevents your hardware from performing up to its full potential, with "severely" being the key word.
IMO, ideally, you'd always want a GPU bottleneck in your system, as it only means lower average framerates (whether "lower" means 60 instead of 80, or 200 instead of 250), while a RAM or VRAM bottleneck means stutters or freezes during asset loading, and a CPU bottleneck usually means random stutters at random points, both of which are infinitely more annoying than a generally lower-than-"x" FPS.
 
I think he means that ideally you want to use an fps limiter - which is also what I do.

And what exactly does that achieve? Unless there is some issue with the game engine, there is no reason to artificially limit performance; you're not getting any less stutter, and you're certainly getting more input lag if you do that.
 