
GeForce RTX 3080 Rips and Tears Through DOOM Eternal at 4K, Over 100 FPS

You shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need this, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games start to use VRAM in a completely different manner all of a sudden.
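A rough back-of-the-envelope way to see the bandwidth side of that argument (the numbers below are illustrative assumptions, not benchmarks): just reading a given amount of unique data every frame already implies a minimum bandwidth.

```python
# Back-of-the-envelope sketch: minimum bandwidth needed just to read a given
# amount of unique VRAM data once per frame at a target frame rate.
# All numbers here are illustrative assumptions, not measurements.

def required_bandwidth_gbs(vram_touched_gb: float, fps: float) -> float:
    """GB/s needed to stream vram_touched_gb of data every frame at fps."""
    return vram_touched_gb * fps

for vram_gb in (6, 10, 16, 20):
    bw = required_bandwidth_gbs(vram_gb, fps=60)
    print(f"{vram_gb:>2} GB touched per frame @ 60 FPS -> at least {bw:.0f} GB/s")

# Touching 20 GB per frame already implies 1200 GB/s, more than the
# RTX 3080's ~760 GB/s, before any overdraw or repeated reads.
```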


Exactly.

There are however reasons to buy extra VRAM, like various (semi-)professional uses. But for gaming it's a waste of money. Anyone who is into high-end gaming will be looking at a new card in 3-4 years anyway.

Wrong, wrong, wrong. AMD put 8 GB of VRAM in their cards because they performed better even when there was VRAM to spare. I had a 290X (Tri-X), and if it only had 4 GB of VRAM, there is zero chance I would have been able to run The Witcher 3 at 4K30 or 1440p60 on that card. Same goes for Doom 2016: at 4K in Vulkan on max settings I could get over 100 FPS, but if I had 4 GB of VRAM? Nope.

So the card was not past its usefulness by the time the memory was needed. I have more examples, but point made.

Also, PC gamers who are REALLY into high-end gaming tend to upgrade their GPU every year unless the performance uplift is only around 15%. I have a 2080 Ti and will likely upgrade to a 3090, although I'll wait for game benchmarks to see how the 3080 performs.
 
Wrong, wrong, wrong. AMD put 8 GB of VRAM in their cards because they performed better even when there was VRAM to spare. I had a 290X (Tri-X), and if it only had 4 GB of VRAM, there is zero chance I would have been able to run The Witcher 3 at 4K30 or 1440p60 on that card. Same goes for Doom 2016: at 4K in Vulkan on max settings I could get over 100 FPS, but if I had 4 GB of VRAM? Nope.
More VRAM doesn't give you more performance.

How on earth would a 290X run Doom (2016) in 4K at ultra at 100 FPS?

The RTX 3070/3080 are carefully tested and have the appropriate amount of VRAM for current games and games in development.
 
More VRAM doesn't give you more performance.

How on earth would a 290X run Doom (2016) in 4K at ultra at 100 FPS?

The RTX 3070/3080 are carefully tested and have the appropriate amount of VRAM for current games and games in development.

Exactly, I also agree. Death Stranding only uses around 4 GB of VRAM at 4K and the detail is awesome.
I'm not an expert, but from what I read some games still use megatextures, like Middle-earth: Shadow of Mordor, an old game but VRAM hungry. The present and future, though, are physically based rendering and variable rate shading: Death Stranding, Doom Eternal and Forza Horizon 4 use them, and they are graphically superior with faster framerates. Even Star Citizen alpha in its current state, with tons of textures and already huge in scale, uses physically based rendering.

I'm curious to see DLSS 2.0 in action, despite so few games using it, and whether the performance boost is a lot higher than the 2080 Ti.
 
I think Nvidia is just doing this to stop SLI. They know there are gamers out there who wanted SLI for lower-end cards, because in the heyday you could buy two lower-end cards, SLI them, and get better performance than forking over a high price for a single card. Now Nvidia is doing their best to stop all that, making you buy the high-end card only. I remember when Nvidia used to love gamers; now they just love to make you pay. I might upgrade my 1070 Tis in SLI to maybe a couple of 2070 Supers, or just wait until Nvidia makes the 4080s and finally makes some 3070s with SLI support.

My two 1070s in SLI outperform a single 1080 any day. I'm still running two 660 Tis in SLI on my 1080p work/gaming computer.

It's just a shame; Nvidia made more sales in the gamer market when they allowed SLI on lower-end cards. I mean, sure, if I had tons of money I might fork out $1,800.00 for a 3090, but it kinda seems like a waste of money for just one card.
 
You shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need this, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games start to use VRAM in a completely different manner all of a sudden.


Exactly.

There are however reasons to buy extra VRAM, like various (semi-)professional uses. But for gaming it's a waste of money. Anyone who is into high-end gaming will be looking at a new card in 3-4 years anyway.

High res textures take up a lot of space in VRAM. Borderlands 3 at 4K nearly maxes out the 8GB I have available on my Vega 56.

The 3080 has a lot more horsepower but is comparatively light on VRAM. It will be an issue.
 
It feels like a lot of people are forgetting that the 3090 and the 3080 will be using GDDR6X, which "effectively double[s] the number of signal states in the GDDR6X memory bus". That increases memory bandwidth to 84 GB/s for each component, which translates to rates of up to 1 TB/s according to Micron. So this should mean that even a 3070 Ti with 16 GB of RAM is still going to be slower than a 3080 if it still uses GDDR6.
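For reference, those Micron figures fall straight out of per-pin data rate and bus width; a quick sketch (the 19 and 21 Gbps rates and the bus widths are the published numbers, the arithmetic is just illustrative):

```python
# Sketch: GDDR6X bandwidth from per-pin data rate and bus width.
# PAM4 signalling doubles the signal states per pin versus GDDR6, which is
# how the per-pin rate reaches 19-21 Gbps.

def bandwidth_gb_per_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    return pin_rate_gbps * bus_width_bits / 8  # bits per second -> bytes per second

print(bandwidth_gb_per_s(21, 32))   # 84.0  GB/s per 32-bit component (Micron's figure)
print(bandwidth_gb_per_s(19, 320))  # 760.0 GB/s, RTX 3080 (19 Gbps on a 320-bit bus)
print(bandwidth_gb_per_s(21, 384))  # 1008.0 GB/s, where "up to 1 TB/s" comes from
```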
 
One should note that Doom Eternal already runs well on various hardware configurations, including much lower-end rigs. The game is well optimised, as was its predecessor.
 
High res textures take up a lot of space in VRAM. Borderlands 3 at 4K nearly maxes out the 8GB I have available on my Vega 56.

No. Unoptimized textures take up a lot of space. There are games with beautiful textures that use 4-5 GiB of VRAM in 4K. The games that fill up the VRAM completely usually have ugly textures and lowering their quality does not make much difference anyway.
 
I seem to be seeing something different about the 8nm Samsung/Nvidia process. Volume should be good, as this isn't shared with anything else like 7nm is. Nvidia's prices alone should be a good indication that they can make the card efficiently; otherwise they wouldn't be at these price points when there isn't competition yet. From my understanding, this Samsung/Nvidia process should turn out better than Turing's 12nm. Guess we'll see. I expect demand for the 30 series to be the biggest issue, especially the 3070.

Every single indicator I've seen is that there'll be virtually no stock, and yields are awful. All the analysis I've seen agrees.

There's no way they'd have such a high % of disabled cores otherwise. 20% would seem to indicate it's at the limits of the process. Power consumption is another giveaway, and will compound issues with defectiveness.

This was chosen because there was spare line capacity on their 8nm process. It's a modified version of what was available. There was no time for a full custom process like the SHP 12nm node that the last couple of NVIDIA generations were on. This was thrown together very quickly when Jensen failed to strong-arm TSMC. It being Samsung's least popular node of recent times does not at all benefit the maturity of the node, or the quality of early dies for Ampere.

They're at the price they are because of NVIDIA's expectations for RDNA2 desktop, and the strength of the consoles.

They probably aren't paying that much for wafers, simply because this was capacity on 8nm (/10nm) that would have otherwise gone unused, which doesn't benefit Samsung. But some of the talk of NVIDIA only paying for working dies is likely nonsense. Certainly on the 3080/3090 wafers. Samsung would most likely lose their shirt on those. Their engineers aren't idiots ... they'd have known as soon as they saw what NVIDIA wanted that decent yields were likely unachievable anywhere near launch (maybe at all). NVIDIA were likely offered an iteration of 7nm EUV HP(ish), but it would have cost a lot more, they wouldn't have had as many wafers guaranteed, and launch likely would have been pushed 1 - 2 quarters. Maybe more. They've gambled on the 8nm ... judging by power consumption and disabled CU count, they have not exactly 'won'.
 
Couldn't agree more; better to get what suits you now and worry less about the future. A 1080 Ti with the best CPU of that period can't play a 4K HDR video on YouTube, let alone 8K.
My issue with the 3080 is that I'm not sure if the VRAM is enough right now; 12 GB would've been better. But yeah, the price is also an issue. Maybe AMD can undercut them with similar performance (minus some features), more VRAM, and a bit cheaper.

You forget with Nvcache and tensor compression (~20% compression) this card is effectively 12GB, so I wouldn’t worry too much
 
You shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need this, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games start to use VRAM in a completely different manner all of a sudden.

I would have agreed if this card were on the level of the 2080 or Radeon VII, but we are talking about a card 30-40% faster than the 2080 Ti, and I believe we kind of expect the 3080 to work well for 4K gaming.
While it is true that the bottleneck due to computational speed will come at some point, with even less VRAM than the 2080 Ti, I have to worry about the VRAM becoming a bottleneck sooner than the processor in this situation.
 
I think at this point it's still too early to tell how much VRAM games are going to use. Future games will be developed with the PS5 and Series X architecture in mind, so games may use more VRAM than we are used to. We're still not sure how efficient Nvidia's new tensor-core-assisted memory compression is, or how RTX IO will perform in future games.
 
What's that :confused:
I believe it was mentioned some time ago that, with each generation, Nvidia has been improving their memory compression algorithm, and this time around they would utilize AI to compress VRAM storage; gotta make more use of those 3rd-gen tensor cores.
 
Memory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
 
I would have agreed if this card were on the level of the 2080 or Radeon VII, but we are talking about a card 30-40% faster than the 2080 Ti, and I believe we kind of expect the 3080 to work well for 4K gaming.
While it is true that the bottleneck due to computational speed will come at some point, with even less VRAM than the 2080 Ti, I have to worry about the VRAM becoming a bottleneck sooner than the processor in this situation.
GPU memory isn't directly managed by the games, and each generation has improved memory management and compression. Nvidia and AMD also manage memory differently, so you can't just rely on specs. Benchmarks will tell whether there are any bottlenecks or not.

With every generation for the past 10+ years, people have raised concerns about Nvidia's GPUs having too little memory, yet time after time they've been shown to do just fine. Never forget that both Nvidia and AMD collaborate closely with game developers; they have a good idea of where the game engines will be in a couple of years.

I think at this point it's still too early to tell how much VRAM games are going to use. Future games will be developed with the PS5 and Series X architecture in mind, so games may use more VRAM than we are used to. We're still not sure how efficient Nvidia's new tensor-core-assisted memory compression is, or how RTX IO will perform in future games.
With the consoles having 16 GB of total memory, split between the OS, the software on the CPU, and the GPU, it's highly unlikely that those games will dedicate more than 10 GB of that to graphics.
If anything, this should mean that few games will use more than ~8 GB of VRAM for the foreseeable future with these kinds of detail levels.
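A rough illustration of that budget (only the 16 GB total is a published spec; the OS and CPU-side figures below are assumptions for the sake of the example):

```python
# Rough console memory budget sketch. Only the 16 GB total is a published
# spec; the OS and CPU-side figures are illustrative assumptions.
total_gb = 16.0
os_reserved_gb = 2.5      # assumed OS/system reservation
cpu_side_game_gb = 3.5    # assumed game logic, audio, streaming buffers
gpu_budget_gb = total_gb - os_reserved_gb - cpu_side_game_gb
print(f"Approximate graphics budget: {gpu_budget_gb:.1f} GB")  # ~10 GB
```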

Memory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
Memory compression has improved with every recent architecture from Nvidia up until now. There are rumors of "tensor compression", but I haven't looked into that yet.

You forget with Nvcache and tensor compression (~20% compression) this card is effectively 12GB, so I wouldn’t worry too much
Compression certainly helps, but it doesn't work quite that way.
Memory compression in GPUs is lossless compression that is transparent to the user. As with any kind of data, the compression rate of lossless compression is tied to information density. While the memory compression has become more sophisticated with every generation, it's still limited to compressing mostly "empty" data.

Render buffers with mostly sparse data are compressed very well, while textures are generally only compressed in "empty" sections. Depending on the game, the compression rate can vary a lot. Games with many render passes especially can see substantial gains, sometimes over 50% I believe, while others see <10%. So please don't think of memory compression as something that expands memory by xx %.
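A quick way to see why the rate swings so much with the data (using generic zlib as a stand-in for the principle; the actual GPU delta colour compression hardware works differently):

```python
import os
import zlib

# Lossless compression has no fixed ratio; it depends on information density.
# zlib here is only a stand-in for the principle; GPU delta colour compression
# is a different (hardware) scheme, but the sparse-vs-dense behaviour is similar.

size = 1 << 20  # 1 MiB test buffers

buffers = {
    "sparse (mostly zeros, render-target-like)": bytes(size),
    "noisy (high entropy, detailed-texture-like)": os.urandom(size),
}

for name, buf in buffers.items():
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: compressed to {ratio:.1%} of original size")

# The sparse buffer shrinks to a fraction of a percent; the noisy one barely shrinks at all.
```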
 
GPU memory isn't directly managed by the games, and each generation has improved memory management and compression. Nvidia and AMD also manage memory differently, so you can't just rely on specs. Benchmarks will tell whether there are any bottlenecks or not.

You are most likely correct, but it still gives me reason to wait and see if AMD can push a card with similar performance while providing more RAM.
If the 3080 had maybe 12-14 GB of RAM, I would have bought it on launch day (I promised it as a gift to my brother, but now we agree to hold out for AMD).
 
You are most likely correct, but it still gives me reason to wait and see if AMD can push a card with similar performance while providing more RAM.
If the 3080 had maybe 12-14 GB of RAM, I would have bought it on launch day (I promised it as a gift to my brother, but now we agree to hold out for AMD).
RTX 3080 "can't" have 12-14 GB. It has a 320-bit memory bus, which means the only balanced configurations are 10 GB and 20 GB. Doing something unbalanced is technically possible, but it created a lot of noise when they last did it on GTX 970.

The same goes for AMD and "big Navi™". If it has a 256-bit memory bus it will have 8/16 GB, for 320-bit: 10/20 GB, or 384-bit: 12/24 GB, etc., unless it uses HBM of course.
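To make the arithmetic behind that explicit, here's a small sketch assuming the common 32-bit-wide GDDR6/GDDR6X chips in 1 GB or 2 GB densities:

```python
# Sketch: balanced VRAM configurations from memory bus width, assuming the
# common 32-bit-wide GDDR6/GDDR6X chips in 1 GB or 2 GB densities.

def balanced_configs(bus_width_bits: int, densities_gb=(1, 2)) -> list[int]:
    chips = bus_width_bits // 32  # one chip per 32-bit channel
    return [chips * d for d in densities_gb]

for bus in (256, 320, 384):
    print(f"{bus}-bit bus -> {balanced_configs(bus)} GB")

# 256-bit -> [8, 16] GB, 320-bit -> [10, 20] GB, 384-bit -> [12, 24] GB
```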
 
Memory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
It was a rumour last time that they would tap into it, but there's not much info on it now or whether it's really true. And yeah, like efikkan mentioned, memory compression improves with every generation, tensor-assisted or not.
 
Results nearly match RTX 2080 Super and RTX 3080 memory bandwidth scaling.
 
So where's the 3090 8k footage?
 
You forget with Nvcache and tensor compression (~20% compression) this card is effectively 12GB, so I wouldn’t worry too much
I'm not convinced by this. In a recent Hardware Unboxed video, the 1080 Ti having more VRAM than the 2080 seemed to matter. I believe the only reason is the price.
Let's wait for reviews.
 
RTX 3080 "can't" have 12-14 GB. It has a 320-bit memory bus, which means the only balanced configurations are 10 GB and 20 GB. Doing something unbalanced is technically possible, but it created a lot of noise when they last did it on GTX 970.

The same goes for AMD and "big Navi™". If it has a 256-bit memory bus it will have 8/16 GB, for 320-bit: 10/20 GB, or 384-bit: 12/24 GB, etc., unless it uses HBM of course.
Oh I missed this, it makes sense. That’s also unfortunate though.
 
More VRAM doesn't give you more performance.
It depends. Note the dive the 2080 is taking at 4K (which also tells you why those nice DF guys ran it that way):

[attached chart: RTX 2080 performance drop at 4K]


"speed up":

[attached chart: relative speed-up]
 