
Is 24 GB of video RAM worth it for gaming nowadays?

  • Yes

    Votes: 66 41.5%
  • No

    Votes: 93 58.5%

  • Total voters
    159
why run 144Hz with stutters or tearing, when 120Hz would have neither

Because for many people, playing a game has become synonymous with running a benchmark, and for them one can't be enjoyed without the other. It's not enough to enjoy the game, but rather to enjoy it "at a level" others can't. Is it silly? 100%. Is YouTube filled with videos of people showing benchmarks across various games, with countless views? You bet.
 
When multiple people tell you that you misunderstood, explain how and why, and then post the evidence to back it up, you say they're wrong because they linked the definitive source of information on that topic?
No man, that's all you. You do not understand this topic or how a conversation works. Make a claim and back it up with proof - or don't make that claim.


GPU bottlenecks result in uneven frametimes and higher render latency, as the CPU renders ahead while it waits on the GPU. Frametimes and FPS are the same thing expressed in a different way: 1000 ms in a second, divided by the FPS... that's a frametime. This is not the same as render time.
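To make that conversion concrete, here's a quick sketch (Python, purely illustrative) of the FPS-to-frametime relationship:

```python
# Frametime and FPS are two views of the same number:
# 1000 ms in a second divided by the FPS gives the average frametime.

def frametime_ms(fps: float) -> float:
    """Average frametime in milliseconds for a given frame rate."""
    return 1000.0 / fps

def fps_from_frametime(ms: float) -> float:
    """Frame rate implied by an average frametime in milliseconds."""
    return 1000.0 / ms

for fps in (30, 60, 120, 144, 240):
    print(f"{fps:>3} FPS -> {frametime_ms(fps):.3f} ms per frame")
# 60 FPS -> 16.667 ms, 144 FPS -> 6.944 ms, 240 FPS -> 4.167 ms
```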

CPU bottlenecks result in microstutters at worst, or a lower ceiling on max FPS. That said, you're going to have lower input latency in this situation.

Variable refresh rates allow the system to display new information sooner. If you had a 240Hz display with a VRR range of 1Hz to 240Hz, even a 1FPS signal is still updated at 240Hz granularity - meaning a new frame can be displayed at any of those 240 incremental steps, as soon as it's ready. This allows much faster recovery from any stutter, and offers the reduced input latency of the maximum refresh rate even when the frame rate is lower.
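As a rough back-of-the-envelope (my own numbers, not a measurement), the recovery benefit comes down to how long a finished frame can sit waiting for the next scanout:

```python
# On a fixed 60Hz display, a frame that misses its slot waits up to a full
# 16.7ms cycle before it can be shown. On a 240Hz VRR panel, a new scanout
# can start roughly every 4.17ms, whenever the frame is ready, so a late
# frame is recovered much sooner.

def worst_case_wait_ms(refresh_hz: float) -> float:
    """Longest time a finished frame can wait for the next scanout to begin."""
    return 1000.0 / refresh_hz

print(f"fixed 60Hz : up to {worst_case_wait_ms(60):.2f} ms extra wait")
print(f"240Hz VRR  : up to {worst_case_wait_ms(240):.2f} ms extra wait")
```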


The only reason people use uncapped framerates is that some older game engines like CS gave a competitive advantage to high frame rates due to an engine bug, combined with reducing the input latency of a maxed-out GPU. VRR provides that reduced latency with Vsync on, and an FPS cap gives you the benefits without needing any of it.

You'd know all this - if you weren't too lazy and arrogant to click a damned link.

It takes 10 seconds to fire up Nvidia's monitoring stats and see this in real time; it's not rocket science.

These are render latency values, reported by Nvidia at the GPU stage of the pipeline only, before the monitor is involved.
Every ms of render latency is a delay in you seeing the image, which means you are responding to an older image. It's not input latency from your input device, but it IS an added delay to you reacting to what is actually happening.



60Hz, Vsync on. Oooh yay, I love 31ms of input lag!


Exactly the same, but with a 60FPS cap that prevents the CPU from rendering ahead of the GPU. (The render-ahead queue is 2 frames in DX12, so without the cap it's 60 plus 2 frames rendered ahead by the CPU, just like if the GPU was maxed out.)
5.6ms.

Let's not use an FPS cap and enjoy 5.6x more render latency
(There is some beauty in that it's 5.6ms, and that vsync was 5.6 times higher)


"BUT I WANT VSYNC OFF AND MAXING THE GPU IS BETTER BECAUSE THATS HOW I BENCHMARK"

Sure, we go from 59FPS to 151 on those 99% lows, but... oh wait, the input latency doubled.

Letting my GPU max out its clocks to get that maximum performance managed to... still be worse.
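A very rough sketch of why the cap matters (illustrative only; the 31ms and 5.6ms figures above are measured, not computed): every frame the CPU is allowed to queue ahead of the GPU adds roughly one frametime of render latency on top of the time the GPU spends drawing the current frame.

```python
# Crude render-latency estimate: frames queued ahead of the GPU each add
# about one frametime, plus the GPU's own render time for the current frame.
# The 2-frame queue is the DX12 render-ahead mentioned above; 5.6ms is used
# here as a stand-in for the GPU's own render time.

def approx_render_latency_ms(fps: float, queued_frames: int, gpu_render_ms: float) -> float:
    """Rough render latency: queued frames' worth of frametime + current frame."""
    frametime = 1000.0 / fps
    return queued_frames * frametime + gpu_render_ms

# GPU maxed out / no cap: CPU queues ~2 frames ahead
print(f"queued ahead: ~{approx_render_latency_ms(60, 2, 5.6):.1f} ms")  # ~38.9 ms ballpark
# FPS cap keeping the queue empty: latency is basically the GPU render time
print(f"capped:       ~{approx_render_latency_ms(60, 0, 5.6):.1f} ms")  # ~5.6 ms
```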



Freesync/Vsync is a whole other bag of fun on top of this, because Freesync behaves differently between AMD and Nvidia, and G-Sync is Nvidia-exclusive and different again.

The key is that when they work properly (Vsync off and a maxed-out GPU is not properly), they can update at divisions of the full refresh rate.
Using Samsung's 240Hz displays as an example here, they have a minimum of 80Hz because that's 1/3 of the max refresh rate.

At 1/2 or 1/3 of a supported refresh rate, the highest rate is used - and the request for a new frame is sent at the start of the final repetition, not after it finishes. So 80Hz is 12.5ms, 160Hz is 6.25ms and 240Hz is 4.167ms.

80FPS at 240Hz VRR would give you better input latency and response times because it asks for the new frame at the start of the final duplicate - 4.167/4.167/4.167.
In a GPU-limited situation you lose that benefit: frames rendered ahead end up right back at the higher latency values anyway.

Running at 235Hz and 235FPS would give you 4.255ms input latency. If your GPU were maxed out and the CPU had to render ahead to cover the shortfall, you'd end up at 8.51ms or 12.766ms, despite being at the higher framerate.
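Here's the same arithmetic written out (a small Python sketch of the numbers above, nothing measured):

```python
# Refresh interval in milliseconds for a given refresh rate.
def interval_ms(hz: float) -> float:
    return 1000.0 / hz

# LFC-style repetition: at 1/2 or 1/3 of the max refresh rate, the panel keeps
# scanning at the max rate and repeats frames, so the fresh frame is still
# requested on a max-refresh-rate interval.
for hz in (80, 160, 240):
    print(f"{hz:>3}Hz -> {interval_ms(hz):.3f} ms per refresh")
# 80Hz -> 12.500 ms, 160Hz -> 6.250 ms, 240Hz -> 4.167 ms

# 235FPS on a 235Hz VRR signal: one interval if the frame is on time,
# two or three intervals if the GPU slips and the frame has to repeat.
base = interval_ms(235)
print(f"on time: {base:.3f} ms | doubled: {2 * base:.3f} ms | tripled: {3 * base:.3f} ms")
# 4.255 ms | 8.511 ms | 12.766 ms
```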

Unlimited framerates are not a positive. Maxing your GPU is not a positive. The GPU needs spare performance ready to render the next frame when it's needed, because if it's even 0.000001 milliseconds late, you're getting doubled or tripled input latency - and that's the most common microstutter there is, as input latency values go crazy.

Doing stupid things like running the GPU at 100% all the time forces you into that higher latency state all the time, and people mistake that consistent latency for the best they can get.
It also needs to be said that a framecap doesn't really lower your input latency unless your GPU can actually - comfortably - reach that FPS target. I think that is where Nvidia Reflex comes into play, and it also works without a framecap at all.
 
I had thought Hertz measured cycles per second? I thought wavelength was tied to Hz. But what's so magical about 1kHz?
 
I had thought Hertz measured cycles per second? I thought wavelength was tied to Hz. But what's so magical about 1kHz?
If this was aimed at something I wrote: I do have a habit of stuffing up wording with those things occasionally - it's borderline dyslexia where I mean another word and say or type the wrong one.

This is why I edit my posts a looooot.
 
Digital Combat Simulator can eat up a ton of RAM. I've never seen anything like it before or since!
I often fill up 24 gigabytes of VRAM, and I'm also at my limit of 32 gigabytes of CPU RAM. I've seen benchmarks hitting the ceiling of 64 gigabytes of CPU RAM as well.

16 gigabytes should be more than enough VRAM for native 4K resolutions. I was using 10 gigabytes on an RTX 3080 for a few years. The hardest triple-A game I ran was probably Assassin's Creed Valhalla, as it didn't have DLSS.

Does anyone know of any other titles (other than DCS) that eat up so much VRAM and CPU RAM?
 
I do have a habit of stuffing up wording with those things occasionally
Me too. I suffer from word blindness; that's why my posts are sometimes short or funny, or both. The doc says it's just one of my traits with being an Aspie.
 
For me, sure. At 4K max, Hogwarts was the most on my system at 22GB+; 17GB+ for all other games.
 
For me, sure. At 4K max, Hogwarts was the most on my system at 22GB+; 17GB+ for all other games.
It has more to do with VRAM allocation depending on resolution + settings, or simply allocating all of the VRAM (careless coding), than with actual usage. It's hard to tell what the actual VRAM usage is. Personally, when a game is tested for VRAM usage here on TPU, I would like to see how an RTX 4060 Ti 8GB scales vs. an RTX 4060 Ti 16GB with the same settings, i.e. do you actually lose performance when you're apparently running out of memory, or is it mostly tied to allocation rather than real usage?

Edit: The Hogwarts game wasn't tested with both 4060 Ti versions (8 and 16 GB), because the GPUs were released after the game itself, but the Avatar game was, proving it was a case of VRAM allocation rather than a requirement. There was practically no difference in performance.
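If anyone wants to watch the allocation side for themselves, here's a minimal sketch (assuming the optional pynvml package is installed). Note that it still reports what the driver has allocated, not how much of it the game actually touches every frame, which is exactly the distinction above.

```python
# Minimal VRAM allocation readout via NVML. This shows driver-reported
# allocation, not true per-frame usage, so a game "filling" VRAM here may
# still be a case of generous allocation rather than a hard requirement.
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetMemoryInfo,
)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    mem = nvmlDeviceGetMemoryInfo(handle)
    print(f"VRAM allocated: {mem.used / 1024**3:.1f} GiB of {mem.total / 1024**3:.1f} GiB")
finally:
    nvmlShutdown()
```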
 
It seems, according to DF's 2023 roundup video, that if a sub-16GB-VRAM GPU owner is patient and lucky, the dev might patch the game, as with a few games in the video that had PS3-quality textures at launch but improved significantly in later patches.


Also, does the high quality texture setting always mean high quality? Seems not; it's a max level for the dynamic LOD.

 
Well I for one hope the 5090 gets 32GB of VRAM, because when you do 8K gaming, 24GB is just on the edge - the Witcher 3 remaster used to not be playable at 8K on a 4090 due to VRAM swapping, but I see that with the latest patch they lowered VRAM usage JUST enough for it not to swap.



But you can't use framegen, otherwise you still go over the edge on VRAM, crashing performance entirely.

 
An Nvidia GPU with 32GB of VRAM?? Expect nothing less than $3K.
 
An Nvidia GPU with 32GB of VRAM?? Expect nothing less than $3K.

Lol, so it should be twice as expensive as the 3090 and 4090, purely due to getting 8GB more VRAM? Amazing logic right there...

It will be the natural evolution of the halo card, as the last 2 gens had 24GB, and historically Nvidia never lets the halo card sit at the same amount of VRAM for several gens.

It will be very surprising if the 5090 DOESN'T have more VRAM than the 4090 - the question is whether it will have a 512-bit bus with 32GB or a 384-bit bus with 48GB.
 
The problem is that the green team's GPUs are not only useful for gaming. That's why the price does not increase according to gaming logic.
Did you forget about the Titan RTX?
 
The problem is that the green team's GPUs are not only useful for gaming. That's why the price does not increase according to gaming logic.
Did you forget about the Titan RTX?

How is a product that has been discontinued for 5 years relevant?

And the 4090 IS actually priced according to performance, hence why it had a better price to performance ratio than the 4080.
 
because when you do 8K gaming
Really? 8K gaming is currently unobtainium. Full stop. Unless Nvidia and AMD bring back multi-GPU support, it simply will not happen any time in the next few years. And really, as detailed as 4K is, 8K is not needed. Hell, MOST people are still gaming at 1080p and are perfectly happy with it.

the Witcher 3 remaster used to not be playable at 8K on a 4090 due to VRAM swapping, but I see that with the latest patch they lowered VRAM usage JUST enough for it not to swap.
It still isn't. There are only about a dozen 8K displays being made, and the number sold can be counted in the hundreds. 4320p gaming is not viable today and will NOT be for at least another 6 years. The CPU and GPU compute does not exist at a consumer level and will not for at least several generations of hardware.

And the 4090 IS actually priced according to performance, hence why it had a better price to performance ratio than the 4080.
No it doesn't. The 4090 is a premium product that out-prices itself...
 
Really? 8K gaming is currently unobtainium. Full stop. Unless Nvidia and AMD bring back multi-GPU support, it simply will not happen any time in the next few years
I love you for this :)
 
Really? 8K gaming is currently unobtainium. Full stop. Unless Nvidia and AMD bring back multi-GPU support, it simply will not happen any time in the next few years. And really, as detailed as 4K is, 8K is not needed. Hell, MOST people are still gaming at 1080p and are perfectly happy with it.


It still isn't. There are only about a dozen 8K displays being made, and the number sold can be counted in the hundreds. 4320p gaming is not viable today and will NOT be for at least another 6 years. The CPU and GPU compute does not exist at a consumer level and will not for at least several generations of hardware.


No it doesn't. The 4090 is a premium product that out-prices itself...

Funny, because I just posted screenshots of me playing at 8K, and I can post a lot more if you want...

And 8K is noticeably more detailed than 4K.
 
Funny, because I just posted screenshots of me playing at 8K, and I can post a lot more if you want...

And 8K is noticeably more detailed than 4K.
You might be running the display at 8K, but the game itself does not have 8K assets. Additionally, Witcher 3 is an older, well-optimized engine, and even that is only running in the 50s FPS. And you are likely only getting that by turning most or all of the heavy-hitting settings down or off.

True 8K is NOT possible currently. Full stop. No one is making game content in 8K.

Try running Cyberpunk 2077, Starfield or Hogwarts Legacy at 8K and let's see what you get.
 
You might be running the display at 8K, but the game itself does not have 8K assets. Additionally, Witcher 3 is an older, well-optimized engine, and even that is only running in the 50s FPS. And you are likely only getting that by turning most or all of the heavy-hitting settings down or off.

True 8K is NOT possible currently. Full stop. No one is making game content in 8K.

Try running Cyberpunk 2077, Starfield or Hogwarts Legacy at 8K and let's see what you get.

I turn no settings down - what would be the point of running 8K if you were running reduced settings? It's using ultra ray tracing, and everything maxed.

But yes, I will post all of it for ya after the Man United game is done.
 
It is because those are loose files you see there. Normally they are compressed and thus halved in size. Anyone using texture mods knows that. What's even the debate about needing 32GB? Yeah, sure, 8 isn't enough when using mods and 12 is tight... but more... not yet, Snake.
 
You might be running the display at 8K, but the game itself does not have 8K assets. Additionally, Witcher 3 is an older, well-optimized engine, and even that is only running in the 50s FPS. And you are likely only getting that by turning most or all of the heavy-hitting settings down or off.

True 8K is NOT possible currently. Full stop. No one is making game content in 8K.

Try running Cyberpunk 2077, Starfield or Hogwarts Legacy at 8K and let's see what you get.

Cyberpunk

8K path tracing (screenshot)
8K ultra RT (screenshot)
8K no RT (screenshot)
4K path tracing (screenshot)

Starfield

8K (screenshot)
4K (screenshot)

Hogwarts

8K RT (screenshot)
8K no RT (screenshot)
4K RT (screenshot)

Cyberpunk (like the Witcher 3 remaster) is right on the edge with VRAM at 8K. But you'd have to be blind to say that 8K doesn't have noticeably more detail in the image.

And with 30-50% more performance from the 5090, and a bit more VRAM, you'd even have Cyberpunk at 8K above 60 FPS with path tracing.
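For what it's worth, here's the raw pixel-count arithmetic behind "right on the edge" (framebuffers and render targets scale with resolution; game asset sizes are a separate matter):

```python
# 8K has exactly four times the pixels of 4K, so resolution-dependent
# buffers roughly quadruple even though the game's assets stay the same.
pixels_4k = 3840 * 2160
pixels_8k = 7680 * 4320
print(f"4K: {pixels_4k:,} px | 8K: {pixels_8k:,} px | ratio: {pixels_8k / pixels_4k:.0f}x")
# 4K: 8,294,400 px | 8K: 33,177,600 px | ratio: 4x
```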
 
Cyberpunk (like the Witcher 3 remaster) is right on the edge with VRAM at 8K.
Yeah, that makes sense as the game assets are being scaled up and stored in that scaled up state, which would take up much more VRAM.
But you'd have to be blind to say that 8K doesn't have noticeably more detail in the image.
You're not following along with what I said. No games are being made with assets in native 8K. The game engines and drivers are scaling the existing assets up to the displayed resolution. I'm not saying it looks bad. But it's NOT native 8K.
And with 30-50% more performance from the 5090, and a bit more VRAM, you'd even have Cyberpunk at 8K above 60 FPS with path tracing.
Maybe. It would be interesting to see. However, I'm going to stand firm on the stance that the compute for native 8K doesn't exist yet in consumer-level parts. The only way it's going to happen anytime soon is if AMD and Nvidia bring back multi-GPU support in a way that is transparent to the software.
 