Thursday, September 3rd 2020
GeForce RTX 3080 Rips and Tears Through DOOM Eternal at 4K, Over 100 FPS
NVIDIA on Thursday posted a taste of the performance on offer with its new GeForce RTX 3080 graphics card. In a gameplay video posted on YouTube with performance metrics enabled, the card was shown running "DOOM Eternal" with details maxed out at 4K UHD resolution, where it clocked over 100 frames per second, or roughly 50% higher than the RTX 2080 Super. In quite a few scenes the RTX 3080 manages close to 120 FPS, which should be a treat for high refresh-rate gamers.
Throughout the video, NVIDIA compared the RTX 3080 to the previous-gen flagship, the RTX 2080 Ti, with 20-30% performance gains shown for Ampere. Both cards render identical image quality, as the settings are kept constant between the two test beds. NVIDIA is positioning the RTX 3080 as a 4K gaming workhorse, while the top-dog RTX 3090 was pitched as an "8K 60 Hz capable" card in its September 1 presentation; the RTX 3090 should also offer 4K gaming at high refresh rates. DOOM Eternal continues to be one of the year's bright spots in PC gaming, with a new DLC expected to come out in October. The NVIDIA presentation follows.
57 Comments on GeForce RTX 3080 Rips and Tears Through DOOM Eternal at 4K, Over 100 FPS
How on earth would a 290X run Doom (2016) in 4K at ultra at 100 FPS?
The RTX 3070/3080 are carefully tested and have the appropriate amount of VRAM for current games and games in development.
I'm not an expert, but from what I've read some games still use megatextures, like Middle-earth: Shadow of Mordor, an old game but a VRAM-hungry one. The present and future, though, are physically based rendering and variable rate shading; Death Stranding, Doom Eternal and Forza Horizon 4 use them, and they are graphically superior with faster framerates. Even the Star Citizen alpha in its current state, already huge in scale and with tons of textures, uses physically based rendering.
What I'm curious to see is DLSS 2.0 in action, despite so few games using it, and whether the boost is a lot higher than on the 2080 Ti.
My two 1070s in SLI outperform a single 1080 any day. I'm still running two 660 Tis in SLI on my 1080p work/gaming computer.
It's just a shame; NVIDIA made more sales in the gamer market when they allowed SLI on lower-end cards. Sure, if I had tons of money I might fork out $1,800 for a 3090, but it kinda seems like a waste of money for just one card.
The 3080 has a lot more horsepower but is comparatively light on VRAM. It will be an issue.
There's no way they'd have such a high % of disabled cores otherwise. 20% would seem to indicate it's at the limits of the process. Power consumption is another giveaway, and will compound issues with defectiveness.
This was chosen because there was spare line capacity on their 8nm process. It's a modified version of what was available. There was no time for a fully custom process like the SHP 12nm node that the last couple of NVIDIA generations were on. This was thrown together very quickly when Jensen failed to strong-arm TSMC. It being Samsung's least popular node of recent times does nothing for the maturity of the node, or the quality of early dies for Ampere.
They're at the price they are because of NVIDIA's expectations for RDNA2 desktop, and the strength of the consoles.
They probably aren't paying that much for wafers, simply because this was capacity on 8nm (/10nm) that would have otherwise gone unused, which doesn't benefit Samsung. But some of the talk of NVIDIA only paying for working dies is likely nonsense. Certainly on the 3080/3090 wafers. Samsung would most likely lose their shirt on those. Their engineers aren't idiots ... they'd have known as soon as they saw what NVIDIA wanted that decent yields were likely unachievable anywhere near launch (maybe at all). NVIDIA were likely offered an iteration of 7nm EUV HP(ish), but it would have cost a lot more, they wouldn't have had as many wafers guaranteed, and launch likely would have been pushed 1 - 2 quarters. Maybe more. They've gambled on the 8nm ... judging by power consumption and disabled CU count, they have not exactly 'won'.
While it's true that compute performance will become the bottleneck at some point, with even less VRAM than the 2080 Ti I'm worried that the VRAM will become a bottleneck before the processor does.
With every generation for the past 10+ years people have raised concerns about Nvidia's GPUs having too little memory, yet time after time they've shown to do just fine. Never forget that both Nvidia and AMD have close collaboration with game developers, they have a good idea of where the game engines will be in a couple of years. With the consoles having 16 GB of total memory, split between OS, the software on the CPU and the GPU, it's highly unlikely that those games will delegate more than 10 GB of that for graphics.
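A back-of-the-envelope sketch of that split, using assumed figures for the OS and CPU-side reservations (the exact numbers aren't public):

```python
# Rough console memory budget; the OS and CPU-side figures below are
# assumptions for illustration, not published specifications.
TOTAL_MEMORY_GB = 16.0     # unified memory on current-gen consoles
OS_RESERVE_GB = 2.5        # assumed: reserved by system software
CPU_GAME_DATA_GB = 3.5     # assumed: game logic, audio, streaming buffers

gpu_budget_gb = TOTAL_MEMORY_GB - OS_RESERVE_GB - CPU_GAME_DATA_GB
print(f"Rough memory left for graphics: {gpu_budget_gb:.1f} GB")  # ~10 GB
```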
If anything, this should mean that few games will use more than ~8GB of VRAM for the foreseeable future with these kinds of detail levels. Memory compression has improved with every recent architecture from Nvidia up until now. There are rumors of "tensor compression", but I haven't looked into that yet. Compression certainly helps, but it doesn't work quite that way.
Memory compression in GPUs is lossless compression, transparent to the user. As with any kind of data, the compression rate of lossless compression is tied to information density. While memory compression has become more sophisticated with every generation, it is still mostly limited to compressing "empty" data.
Render buffers with mostly sparse data compress very well, while textures are generally only compressed in their "empty" sections. Depending on the game, the compression rate can vary a lot. Games with many render passes in particular can see substantial gains, sometimes over 50% I believe, while others see less than 10%. So please don't think of memory compression as something that expands memory by a fixed xx %.
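As a toy illustration of why the rate depends on the data, here is a small sketch using zlib as a stand-in for the GPU's proprietary lossless compression:

```python
import os
import zlib

SIZE = 1 << 20  # 1 MiB test buffers

# Mostly-empty buffer, like a sparse render target: long runs of zeros.
sparse_buffer = bytearray(SIZE)
for i in range(0, SIZE, 4096):
    sparse_buffer[i] = 255

# High-entropy data, like detailed texture content: nearly incompressible.
dense_texture = os.urandom(SIZE)

for name, data in (("sparse render buffer", bytes(sparse_buffer)),
                   ("noise-like texture", dense_texture)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compressed to {ratio:.1%} of original size")
```

The sparse buffer shrinks to a tiny fraction of its size, while the noise-like data barely compresses at all, which is the same reason GPU memory compression can't be treated as a fixed capacity multiplier.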
And don't forget you can always download more RAM, if need be.
If the 3080 had maybe 12-14 GB of VRAM, I would have bought it on launch day (I promised it as a gift to my brother, but now we've agreed to hold out for AMD).
The same goes for AMD and "big Navi™". If it has a 256-bit memory bus it will have 8/16 GB, for 320-bit: 10/20 GB, or 384-bit: 12/24 GB, etc., unless it uses HBM of course.
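That mapping follows from each GDDR6/GDDR6X chip sitting on its own 32-bit channel, with 1 GB or 2 GB being the common chip densities; a quick sketch of the arithmetic:

```python
# Capacity options implied by bus width: one GDDR6/GDDR6X chip per 32-bit
# channel (ignoring clamshell mode), with 1 GB or 2 GB chip densities.
CHANNEL_BITS = 32
CHIP_DENSITIES_GB = (1, 2)

for bus_width in (256, 320, 384):
    chips = bus_width // CHANNEL_BITS
    low, high = (chips * d for d in CHIP_DENSITIES_GB)
    print(f"{bus_width}-bit bus -> {chips} chips -> {low} or {high} GB")
# 256-bit -> 8 chips -> 8 or 16 GB
# 320-bit -> 10 chips -> 10 or 20 GB
# 384-bit -> 12 chips -> 12 or 24 GB
```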
As efikkan mentioned, memory compression improves with every generation, regardless of whether it's tensor-assisted or not.
Let's wait for reviews.
"speed up":