Thursday, September 3rd 2020

GeForce RTX 3080 Rips and Tears Through DOOM Eternal at 4K, Over 100 FPS

NVIDIA on Thursday posted a taste of the performance on offer with its new GeForce RTX 3080 graphics card. In a gameplay video posted on YouTube with performance metrics enabled, the card was shown running "DOOM Eternal" with details maxed out at 4K UHD resolution, where it clocked over 100 frames per second, or roughly 50% higher than the RTX 2080 Super. In quite a few scenes the RTX 3080 manages close to 120 FPS, which should be a treat for high refresh-rate gamers.

Throughout the video, NVIDIA compared the RTX 3080 to the previous-gen flagship, the RTX 2080 Ti, with 20-30% performance gains shown for Ampere. Image quality is identical on both cards, as the settings are constant across both test beds. NVIDIA is positioning the RTX 3080 as a 4K gaming workhorse, while the top-dog RTX 3090 was pitched as an "8K 60 Hz capable" card in its September 1 presentation. The RTX 3090 should also offer 4K gaming at high refresh rates. DOOM Eternal continues to be one of the year's bright spots in PC gaming, with a new DLC expected in October.
The NVIDIA presentation follows.

57 Comments on GeForce RTX 3080 Rips and Tears Through DOOM Eternal at 4K, Over 100 FPS

#26
efikkan
d0x360Wrong wrong wrong. AMD put 8 gigs of VRAM in their cards because they performed better even when there was VRAM to spare. I had a 290X (Tri-X), and if it had only had 4 gigs of VRAM there is zero chance I would have been able to run The Witcher 3 at 4K30 or 1440p60 on that card. Same goes for Doom 2016: 4K in Vulkan on max settings I could get over 100 FPS, but if I had 4 gigs of VRAM? Nope.
More VRAM doesn't give you more performance.

How on earth would a 290X run Doom (2016) in 4K at ultra at 100 FPS?

The RTX 3070/3080 are carefully tested and have the appropriate amount of VRAM for current games and games in development.
Posted on Reply
#27
Hatrix
efikkanMore VRAM doesn't give you more performance.

How on earth would a 290X run Doom (2016) in 4K at ultra at 100 FPS?

The RTX 3070/3080 are carefully tested and have the appropriate amount of VRAM for current games and games in development.
Exactly, I also agree. Death Stranding only uses around 4,000 MB of VRAM at 4K and the detail is awesome.
I'm not an expert, but from what I read some games still use megatextures, like Middle-earth: Shadow of Mordor, an old game but VRAM hungry. The present and future, though, are physically based rendering and variable rate shading; Death Stranding, Doom Eternal and Forza Horizon 4 use them, and they are graphically superior with faster framerates. Even Star Citizen alpha in its current state, with tons of textures and already huge in scale, uses physically based rendering.

I'm curious to see DLSS 2.0 in action, despite so few games using it, and whether the boost is a lot higher than on the 2080 Ti.
Posted on Reply
#28
Lycanwolfen
I think Nvidia is just doing this to stop SLI. They know there are gamers out there who wanted SLI for the lower cards, because in the heyday you could buy two lower cards, SLI them and get better performance than forking over a high price for a single card. Now Nvidia is doing their best to stop all that, making you buy the high-end card only. I remember when Nvidia used to love gamers; now they just love to make you pay. I might upgrade my 1070 Tis in SLI to maybe a couple of 2070 Supers, or just wait till Nvidia makes the 4080s and finally makes some 3070s with SLI support.

My two 1070s in SLI outperform a single 1080 any day. I'm still running two 660 Tis in SLI on my 1080p work/gaming computer.

It's just a shame; Nvidia made more sales in the gamer market when they allowed SLI on lower-end cards. I mean, sure, if I had tons of money I might fork out $1,800.00 for a 3090, but it kinda seems like a waste of money for just one card.
Posted on Reply
#29
HugsNotDrugs
efikkanYou shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need this, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games start to use VRAM in a completely different manner all of a sudden.


Exactly.

There are however reasons to buy extra VRAM, like various (semi-)professional uses. But for gaming it's a waste of money. Anyone who is into high-end gaming will be looking at a new card in 3-4 years anyway.
High res textures take up a lot of space in VRAM. Borderlands 3 at 4K nearly maxes out the 8GB I have available on my Vega 56.

The 3080 has a lot more horsepower but is comparatively light on VRAM. It will be an issue.
Posted on Reply
#30
Near
It feels like a lot of people are forgetting that the 3090 and the 3080 will be using GDDR6X, which is claimed to "effectively double the number of signal states in the GDDR6X memory bus". That increases memory bandwidth to 84 GB/s for each component, which translates to rates of up to 1 TB/s according to Micron. So this should mean that even a 3070 Ti with 16 GB of RAM is still going to be slower than a 3080 if it still uses GDDR6.
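
For context, those figures fall out of simple arithmetic: per-pin data rate times bus width. A minimal sketch in Python, assuming the publicly listed launch data rates and bus widths rather than anything stated in this post:

# Per-pin data rate (Gb/s) x bus width (bits) / 8 bits per byte = bandwidth in GB/s
def bandwidth_gb_s(data_rate_gbps_per_pin, bus_width_bits):
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(bandwidth_gb_s(21, 32))    # 84.0   -> one 32-bit GDDR6X device at 21 Gb/s (the per-component figure)
print(bandwidth_gb_s(19, 320))   # 760.0  -> RTX 3080: 19 Gb/s on a 320-bit bus
print(bandwidth_gb_s(21, 384))   # 1008.0 -> a full 384-bit bus at 21 Gb/s, i.e. the "up to 1 TB/s" headline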
Posted on Reply
#31
Easo
One should note that Doom Eternal already runs well on various hardware configurations, including much lower-end rigs; the game is well optimised, as was its predecessor.
Posted on Reply
#32
THU31
HugsNotDrugsHigh res textures take up a lot of space in VRAM. Borderlands 3 at 4K nearly maxes out the 8GB I have available on my Vega 56.
No. Unoptimized textures take up a lot of space. There are games with beautiful textures that use 4-5 GiB of VRAM in 4K. The games that fill up the VRAM completely usually have ugly textures and lowering their quality does not make much difference anyway.
Posted on Reply
#33
midnightoil
JaypI seem to be seeing something different about the 8nm Samsung/Nvidia process. Volume should be good, as this isn't shared with anything else like 7nm is. Nvidia's prices alone should be a good indication that they can make the card efficiently; otherwise they wouldn't be at these price points when there isn't competition yet. From my understanding, this Samsung/Nvidia process should turn out better than Turing's 12nm. Guess we'll see. I expect demand for the 30 series to be the biggest issue, especially the 3070.
Every single indicator I've seen is that there'll be virtually no stock, and yields are awful. All the analysis I've seen agrees.

There's no way they'd have such a high % of disabled cores otherwise. 20% would seem to indicate it's at the limits of the process. Power consumption is another giveaway, and will compound issues with defectiveness.

This was chosen because there was spare line capacity on their 8nm process; it's a modified version of what was available. There was no time for a full custom process like the SHP 12nm node that the last couple of NVIDIA generations were on. This was thrown together very quickly when Jensen failed to strong-arm TSMC. It being Samsung's least popular node of recent times does nothing for the maturity of the node or the quality of early dies for Ampere.

They're at the price they are because of NVIDIA's expectations for RDNA2 desktop, and the strength of the consoles.

They probably aren't paying that much for wafers, simply because this was capacity on 8nm (/10nm) that would otherwise have gone unused, which doesn't benefit Samsung. But some of the talk of NVIDIA only paying for working dies is likely nonsense, certainly on the 3080/3090 wafers; Samsung would most likely lose their shirt on those. Their engineers aren't idiots ... they'd have known as soon as they saw what NVIDIA wanted that decent yields were likely unachievable anywhere near launch (maybe at all). NVIDIA were likely offered an iteration of 7nm EUV HP(ish), but it would have cost a lot more, they wouldn't have had as many wafers guaranteed, and launch would likely have been pushed 1-2 quarters, maybe more. They've gambled on the 8nm ... judging by power consumption and the disabled CU count, they have not exactly 'won'.
Posted on Reply
#34
Minus Infinity
Xex360Can't agree more; better to get what suits you now and worry less about the future. A 1080 Ti with the best CPU of that period can't play a 4K HDR video on YouTube, let alone 8K.
My issue with the 3080 is that I'm not sure the VRAM is enough right now; 12 GB would've been better. But yeah, the price is also an issue; maybe AMD can undercut them with similar performance (minus some features), more VRAM and a slightly lower price.
You forget that with NVCache and tensor compression (~20% compression) this card is effectively 12 GB, so I wouldn't worry too much
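
Taken at face value, the "effectively 12 GB" figure is just the physical capacity scaled by the claimed ~20% gain; a quick sketch of that napkin math (later replies in this thread explain why compression is not a fixed multiplier):

physical_gb = 10.0       # RTX 3080 VRAM
claimed_gain = 0.20      # the ~20% compression figure quoted above
print(physical_gb * (1 + claimed_gain))   # 12.0 "effective" GB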
Posted on Reply
#35
Sandbo
efikkanYou shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need this, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games start to use VRAM in a completely different manner all of a sudden.
I would have agreed if this card were on the level of the 2080 or Radeon VII, but we are talking about a card 30-40% faster than the 2080 Ti, and I believe we kind of expect the 3080 to work well for 4K gaming.
While it is true that the bottleneck due to computational speed will come at some point, with even less VRAM than the 2080 Ti I have to be worried about the VRAM becoming a bottleneck sooner than the processor in this situation.
Posted on Reply
#36
ViperXTR
I think at this point it's still too early to tell how much VRAM games are going to use. Future games will be developed with the PS5 and Series X architectures in mind, so games may use more VRAM than we are used to. We're still not sure how efficient NVIDIA's new tensor core assisted memory compression is for now, or how RTX IO would perform in future games.
Posted on Reply
#37
R0H1T
ViperXTRnew tensor core assisted memory compression
What's that :confused:
Posted on Reply
#38
ViperXTR
R0H1TWhat's that :confused:
I believe it was mentioned some time ago. With each generation, Nvidia has been improving their memory compression algorithm, and this time around they would utilize AI to compress VRAM storage. Gotta make more use of them 3rd gen tensor cores.
Posted on Reply
#39
R0H1T
Memory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
Posted on Reply
#40
efikkan
SandboI would have agreed if this card were on the level of the 2080 or Radeon VII, but we are talking about a card 30-40% faster than the 2080 Ti, and I believe we kind of expect the 3080 to work well for 4K gaming.
While it is true that the bottleneck due to computational speed will come at some point, with even less VRAM than the 2080 Ti I have to be worried about the VRAM becoming a bottleneck sooner than the processor in this situation.
GPU memory isn't directly managed by the games, and each generation has improved memory management and compression. Nvidia and AMD also manage memory differently, so you can't just rely on specs. Benchmarks will tell whether there are any bottlenecks or not.

With every generation for the past 10+ years, people have raised concerns about Nvidia's GPUs having too little memory, yet time after time they've been shown to do just fine. Never forget that both Nvidia and AMD collaborate closely with game developers; they have a good idea of where game engines will be in a couple of years.
ViperXTRI think at this point it's still too early to tell how much VRAM games are going to use. Future games will be developed with the PS5 and Series X architectures in mind, so games may use more VRAM than we are used to. We're still not sure how efficient NVIDIA's new tensor core assisted memory compression is for now, or how RTX IO would perform in future games.
With the consoles having 16 GB of total memory, split between the OS, the software on the CPU and the GPU, it's highly unlikely that those games will dedicate more than 10 GB of that to graphics.
If anything, this should mean that few games will use more than ~8 GB of VRAM for the foreseeable future at these kinds of detail levels.
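
That estimate is basically subtraction on a shared memory pool. A rough sketch in Python, where the OS and CPU-side figures are illustrative guesses rather than official numbers:

total_gb = 16.0          # shared console memory pool
os_reserved_gb = 2.5     # assumed OS/system reservation (illustrative)
cpu_side_gb = 3.5        # assumed game code, audio, streaming buffers (illustrative)
print(total_gb - os_reserved_gb - cpu_side_gb)   # ~10 GB left over for graphics data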
R0H1TMemory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
Memory compression has improved with every recent architecture from Nvidia up until now. There are rumors of "tensor compression", but I haven't looked into that yet.
Minus InfinityYou forget that with NVCache and tensor compression (~20% compression) this card is effectively 12 GB, so I wouldn't worry too much
Compression certainly helps, but it doesn't work quite that way.
Memory compression in GPUs is lossless compression that is transparent to the user. As with any kind of data, the compression ratio of lossless compression is tied to information density. While memory compression has become more sophisticated with every generation, it is still limited to compressing mostly "empty" data.

Render buffers with mostly sparse data are compressed very well, while textures are generally only compressed in "empty" sections. Depending on the game, the compression ratio can vary a lot. Games with many render passes especially can see substantial gains, sometimes over 50% I believe, while others see <10%. So please don't think of memory compression as something that expands memory by xx %.
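
As a rough illustration of how lossless compression tracks information density, here is a minimal sketch using Python's zlib purely as an analogy (the GPU's transparent delta colour compression is a different hardware mechanism, but the principle is the same):

import os
import zlib

size = 1 << 20                # 1 MiB test buffers
sparse = bytes(size)          # stand-in for a mostly empty render target: all zeros
dense = os.urandom(size)      # stand-in for a detailed texture: random bytes

for name, buf in (("sparse", sparse), ("dense", dense)):
    out = zlib.compress(buf)
    print(name, len(buf), "->", len(out), "bytes")

# Typical result: the all-zero buffer shrinks to a tiny fraction of its size,
# while the random buffer barely compresses at all.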
Posted on Reply
#41
blobster21
efikkanSo please don't think of memory compression as something that expands memory by xx %.
But it's so much more fun to throw numbers out of nowhere, compared to your explanation.
And don't forget you can always download more RAM, if need be.
Posted on Reply
#42
Sandbo
efikkanGPU memory isn't directly managed by the games, and each generation has improved memory management and compression. Nvidia and AMD also manage memory differently, so you can't just rely on specs. Benchmarks will tell whether there are any bottlenecks or not.
You are most likely correct, but it still gives me reason to wait and see if AMD can push a card with similar performance while providing more RAM.
If the 3080 had maybe 12-14 GB of RAM, I would have bought it on launch day (I promised it as a gift to my brother, but now we agree to hold out for AMD).
Posted on Reply
#43
efikkan
SandboYou are most likely correct, but it still gives me reason to wait and see if AMD can push a card with similar performance while providing more RAM.
If the 3080 had maybe 12-14 GB of RAM, I would have bought it on launch day (I promised it as a gift to my brother, but now we agree to hold out for AMD).
RTX 3080 "can't" have 12-14 GB. It has a 320-bit memory bus, which means the only balanced configurations are 10 GB and 20 GB. Doing something unbalanced is technically possible, but it created a lot of noise when they last did it on GTX 970.

The same goes for AMD and "big Navi™". If it has a 256-bit memory bus it will have 8/16 GB, for 320-bit: 10/20 GB, or 384-bit: 12/24 GB, etc., unless it uses HBM of course.
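
Those capacities follow from one memory device per 32-bit channel, with 1 GB or 2 GB reachable per channel (higher-density chips or clamshell mode). A quick sketch of that arithmetic, assuming those per-channel densities:

def balanced_capacities(bus_width_bits, per_channel_gb=(1, 2)):
    channels = bus_width_bits // 32      # one GDDR6/GDDR6X device per 32-bit channel
    return [channels * gb for gb in per_channel_gb]

for bus in (256, 320, 384):
    print(f"{bus}-bit: {balanced_capacities(bus)} GB")
# 256-bit: [8, 16] GB, 320-bit: [10, 20] GB, 384-bit: [12, 24] GB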
Posted on Reply
#44
ViperXTR
R0H1TMemory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
It was a rumour last time that they would tap into it, but there's not much info on it now or whether it's really true. And yeah, like efikkan mentioned, memory compression improves with every generation regardless of whether it's tensor assisted or not.
Posted on Reply
#45
ValenOne
The results nearly match the memory bandwidth scaling between the RTX 2080 Super and the RTX 3080.
Posted on Reply
#47
Unregistered
Minus InfinityYou forget that with NVCache and tensor compression (~20% compression) this card is effectively 12 GB, so I wouldn't worry too much
I'm not convinced by this; in a recent Hardware Unboxed video, the 1080 Ti having more VRAM than the 2080 did seem to matter. I believe the only reason is the price.
Let's wait for reviews.
#48
Sandbo
efikkanRTX 3080 "can't" have 12-14 GB. It has a 320-bit memory bus, which means the only balanced configurations are 10 GB and 20 GB. Doing something unbalanced is technically possible, but it created a lot of noise when they last did it on GTX 970.

The same goes for AMD and "big Navi™". If it has a 256-bit memory bus it will have 8/16 GB, for 320-bit: 10/20 GB, or 384-bit: 12/24 GB, etc., unless it uses HBM of course.
Oh I missed this, it makes sense. That’s also unfortunate though.
Posted on Reply
#49
medi01
efikkanMore VRAM doesn't give you more performance.
It depends. Note the dive the 2080 takes at 4K (it also tells you why those nice DF guys ran it that way) and the resulting "speed up" (charts not included).
Posted on Reply
#50
Unregistered
medi01It depends. Note the dive the 2080 takes at 4K (it also tells you why those nice DF guys ran it that way) and the resulting "speed up" (charts not included).
That's huge, I missed this. It reinforces my doubts about getting the 3080, and rumour has it that RDNA2 would have 16 GB in GDDR6 and HBM2 (the real fastest graphics memory).