
Performance Slide of RTX 3090 Ampere Leaks, 100% RTX Performance Gain Over Turing

Joined
Jul 19, 2016
Messages
482 (0.16/day)
Consoles split their RAM between system RAM AND VRAM. The PS5 will have 16GB of GDDR6 total, but a standard PC with a 3080 will have 10GB of VRAM plus 16GB of system RAM (easily 26GB combined).

With Microsoft Flight Sim remaining below 8GB of VRAM, I find it highly unlikely that any other video game in the next few years will come close to those capabilities. The 10+GB of VRAM is useful in many scientific and/or niche use cases (e.g. V-Ray), but I don't think it's a serious issue for any gamer.

No, the next-gen console OS takes only 2.5GB, leaving 13.5GB purely for the GPU (on Series X the memory is split, but it's still 13.5GB for games), which, again, is more than the paltry 10GB on this 3080.

And you can already push past 8GB today; it's going to get much worse once the new consoles are out and they drop PS4/XB1 for multiplatform titles.
 
Joined
Feb 23, 2008
Messages
1,064 (0.17/day)
Location
Montreal
System Name Aryzen / Sairikiki / Tesseract
Processor 5800x / i7 920@3.73 / 5800x
Motherboard Steel Legend B450M / GB EX58-UDP4 / Steel Legend B550M
Cooling Mugen 5 / Pure Rock / Glacier One 240
Memory Corsair Something 16 / Corsair Something 12 / G.Skill 32
Video Card(s) AMD 6800XT / AMD 6750XT / Sapphire 7800XT
Storage Way too many drives...
Display(s) LG 332GP850-B / Sony w800b / Sony X90J
Case EVOLV X / Carbide 540 / Carbide 280x
Audio Device(s) SB ZxR + GSP 500 / board / Denon X1700h + ELAC Uni-Fi 2 + Senn 6XX
Power Supply Seasonic PRIME GX-750 / Corsair HX750 / Seasonic Focus PX-650
Mouse G700 / none / G602
Keyboard G910
Software w11 64
Benchmark Scores I don't play benchmarks...
OK, so yet another confirmation that RTX 20x0 series RTX functionality is simply an expensive paperweight.
 
Joined
Apr 24, 2020
Messages
2,710 (1.62/day)
No, the next-gen console OS takes only 2.5GB, leaving 13.5GB purely for the GPU (on Series X the memory is split, but it's still 13.5GB for games), which, again, is more than the paltry 10GB on this 3080.

And you can already push past 8GB today; it's going to get much worse once the new consoles are out and they drop PS4/XB1 for multiplatform titles.

Microsoft Flight Sim takes up 24+GB of system DDR4 RAM, but less than 8GB of GDDR6 VRAM.

It's weird because consoles don't have a DDR4 vs GDDR6 split. But when a game gets ported to PC, the developers have to decide what lives in DDR4 and what lives in VRAM. Most people will have 8GB of DDR4, maybe 16GB, in addition to the 8GB+ of VRAM on a typical PC.
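
For instance (purely illustrative, with made-up asset names and sizes), the residency decision for a port might look something like this:

```python
# Purely illustrative: splitting a ported game's assets between VRAM and
# system RAM under a fixed VRAM budget. Asset names and sizes are made up.

VRAM_BUDGET_GB = 8.0  # hypothetical card

assets = [
    # (name, size in GB, needs_gpu: sampled by the GPU every frame?)
    ("textures_high",   5.5, True),
    ("geometry",        1.5, True),
    ("audio_banks",     2.0, False),
    ("world_streaming", 6.0, False),
]

vram, sysram, used = [], [], 0.0
for name, size, needs_gpu in assets:
    # GPU-sampled data goes to VRAM while it fits; everything else
    # (and any overflow) stays in system RAM and gets streamed in as needed.
    if needs_gpu and used + size <= VRAM_BUDGET_GB:
        vram.append(name)
        used += size
    else:
        sysram.append(name)

print(f"VRAM ({used} GB used): {vram}")
print(f"System RAM: {sysram}")
```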

Your assumption that everything will be stored in VRAM and VRAM only is... strange and wacky. I don't really know how to argue against it aside from saying "no. It doesn't work like that".
 
Joined
Jul 5, 2013
Messages
27,818 (6.68/day)
Note that all 3 games in the chart are running RTX, so this is just raytracing performance, not general rasterization.
Also this is running with DLSS on, and no idea if Ampere has any performance gains in DLSS over Turing either.
Have to agree here. The bigger picture has yet to be shown; we need a full suite of testing. Simply showing one limited set of tests, while still very eye-opening and interesting, cannot give us the information we need to understand the potential of the new lineup.

NDA is in effect, but it stands to reason that most reviewers (many are hinting at this) have samples with retail-ready (or very nearly ready) drivers. I would be willing to bet @W1zzard has at least one that will be reviewed on release.

Who the heck cares about RT?
You mean RTRT? And almost everyone does, which is why AMD has jumped on the RTRT bandwagon; their new GPUs are said to have full RTRT support for both consoles and PC.

Is it worth the price increase over Maxwell/Pascal? NO
An opinion that most disagree with. Don't like it? Don't buy it.

So there we have it: as soon as I saw the image of NVIDIA comparing the 2080 Ti to the 3090, it became clear that the 3090 was meant to be the 3080 Ti. For some reason they wanted a different name, but in truth the 3090 = 3080 Ti.
That would seem logical. NVidia likes to play with their naming conventions. They always have.

OK, so yet another confirmation that RTX 20x0 series RTX functionality is simply an expensive paperweight.
Fully disagree with this. My 2080 has been worth the money spent. The non-RTRT performance has been exceptional over the 1080 I had before. The RTRT features have been a sight to behold. Quake2RTX was just amazing. Everything else has been beautiful. Even if non-RTRT performance is only 50% better on a per-tier basis, as long as the prices are reasonable, the upgrade will be worth it.
 
Last edited:
Joined
May 3, 2018
Messages
2,881 (1.20/day)
Not that impressive at all, it needed to be more like 200% to really make it useful. And this means the lesser models will have much smaller improvements.
 
Joined
Mar 28, 2020
Messages
1,755 (1.03/day)
Note that all 3 games in the chart are running RTX, so this is just raytracing performance, not general rasterization.
Also this is running with DLSS on, and no idea if Ampere has any performance gains in DLSS over Turing either.

I think this is to be expected. I am pretty sure that Nvidia dedicated a lot of die space to RT and Tensor cores to bump performance up in these areas. If we are looking at actual non-RT, non-DLSS performance, the numbers may not be that impressive, considering the move from 12nm (essentially a refined TSMC 16nm) to 7nm is a full node improvement.
 
Joined
Mar 28, 2020
Messages
1,755 (1.03/day)
Fully disagree with this. My 2080 has been worth the money spent. The non-RTRT performance has been exceptional over the 1080 I had before. The RTRT features have been a sight to behold. Quake2RTX was just amazing. Everything else has been beautiful. Even if non-RTRT performance is only 50% better on a per-tier basis, as long as the prices are reasonable, the upgrade will be worth it.

The RTX 2080 is a good GPU, though marred by a high asking price due to the lack of competition. As for RT, I agree it is a visual treat. However, while you can improve image quality by enabling RT, you generally lose sharpness due to the need to run at a lower resolution. While DLSS is here for this reason, version 1.0 was a mess. 2.0 is a lot better, but it's still not widely available, and I'm not sure how well it will work for games that are not as well optimized. Most of the time, we see how "great" DLSS 2.0 is in selected titles where Nvidia worked very closely with the game developers. To me, it's a proof of concept, but if game developers are left to optimize it themselves, the results may not be that great.

Not that impressive at all, it needed to be more like 200% to really make it useful. And this means the lesser models will have much smaller improvements.

I've never seen a 200% improvement in graphics performance moving from one generation to the next. You may need to manage your expectations here.

As for "lesser" models, I believe you mean the likes of RTX xx70 and xx60 series. If so, it may be too premature to make that conclusion. In fact, I feel it is usually the mid end range that gets the bigger bump in performance because this is where it gets very competitive and sells the most. Consider the last gen where the RTX 2070 had a huge jump in performance over the GTX 1070 that it is replacing. The jump in performance is significant enough to bring the RTX 2070 close to the performance of a GTX 1080 Ti. The subsequent Super refresh basically allow it to outperform the GTX 1080 Ti in almost all games. So if Nvidia is to introduce the RTX 2070 with around the same CUDA cores as the RTX 2080, the improved clockspeed and IPC should give it a significant boost in performance, along with the improved RT and Tensor cores.
 
Last edited:
Joined
Jul 5, 2013
Messages
27,818 (6.68/day)
you generally lose sharpness due to the need to run at a lower resolution.
How so? I run my card on dual 1440p displays and run games generally at the native res. However 1080p is a perfectly acceptable res to run at.
While DLSS is here for this reason, version 1.0 was a mess. 2.0 is a lot better, but it's still not widely available, and I'm not sure how well it will work for games that are not as well optimized. Most of the time, we see how "great" DLSS 2.0 is in selected titles where Nvidia worked very closely with the game developers.
I don't use it (disabled), so I couldn't care less what state it's in. And it'll stay disabled even on an RTX 30xx card.
 
Joined
Apr 24, 2020
Messages
2,710 (1.62/day)
While DLSS is here for this reason

I know variable rate shading is a completely different feature from DLSS... but I'm more hyped about VRS. It's a similar "upscaling" effect, except it actually looks really good.

I don't own an NVidia GPU, but the DLSS samples I've seen weren't as impressive as the VRS samples. NVidia and Intel (iGPUs) implement VRS, so it'd be a much more widespread and widely accepted technique for reducing the resolution (ish) and minimizing GPU compute, while still providing a sharper image where the player is likely to look.

AMD will likely provide VRS in the near future. It's implemented in the PS5 and XBox SeX (XBSX? What's the correct shortcut for this console? I don't want to be saying "Series X" all the time, that's too long).

-----------

In any case, methods like VRS will make all computations (including raytracing) cheaper for GPUs. As long as the 2x2 or 4x4 regions are carefully selected, the player won't even notice (e.g. fast-moving objects can be shaded at 4x4 and will probably be off the screen before the player notices they were rendered at a lower quality, especially if everything else in the world is at the full 4K, 1x1 rate).
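
Just to make the idea concrete, here's a minimal sketch of per-tile rate selection. The motion thresholds and the rate labels are invented for the example; this isn't any vendor's actual VRS API:

```python
# Illustrative sketch only: pick a coarse shading rate for fast-moving
# screen tiles and keep full rate everywhere else.

def pick_shading_rate(avg_motion_px: float) -> str:
    """Pick a coarser rate the faster the tile's contents move."""
    if avg_motion_px > 8.0:
        return "4x4"   # shade once per 4x4 pixel block
    if avg_motion_px > 3.0:
        return "2x2"   # shade once per 2x2 pixel block
    return "1x1"       # full-rate shading

def build_rate_image(motion_per_tile):
    """motion_per_tile: 2D list of average motion magnitude per screen tile."""
    return [[pick_shading_rate(m) for m in row] for row in motion_per_tile]

# A fast-moving region (bottom-right tile) gets 4x4, the rest stays sharp.
print(build_rate_image([[0.5, 0.8], [1.2, 9.7]]))
# -> [['1x1', '1x1'], ['1x1', '4x4']]
```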

To me, it's a proof of concept, but if game developers are left to optimize it themselves, the results may not be that great.

Many game developers are pushing VRS, due to PS5 / XBox support.

It seems highly unlikely that tensor cores for DLSS would be widely implemented in anything aside from NVidia's machine-learning based GPUs. DLSS is a solution looking for a problem: NVidia knows they have tensor cores and they want the GPU to use them somehow. But they don't really make the image look much better, or save much compute. (Those deep learning networks need a heck of a lot of multiplications to work... and those FP16 multiplications are only accelerated on the special tensor cores.)
 
Last edited:
Joined
May 15, 2014
Messages
235 (0.06/day)
NDA is in effect, but it stands to reason that most reviewers (many are hinting at this) have samples with retail-ready (or very nearly ready) drivers. I would be willing to bet @W1zzard has at least one that will be reviewed on release.
Links to NDA drivers went live over the weekend. There was an unsigned driver that also worked for some, but there have been whitelisted/blacklisted drivers for some time.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I'm far less interested in the RTX/DLSS performance; I want to see its pure raw rasterization performance, because that's how about 99% of games run, while those other two things might only be selectively accessible in a handful of newer titles. I'm far more keen on the standard-usage performance.
 
Joined
May 11, 2016
Messages
55 (0.02/day)
System Name Custom
Processor AMD Ryzen 5800X 3D
Motherboard Asus Prime
Cooling Li Lian All in One RGB
Memory 64 GB DDR4
Video Card(s) Gigabyte RTX 4090
Storage Samsung 980 2tb
Display(s) Samsung 8K 85inch
Case Cooler Master Cosmos
Audio Device(s) Samsung Dolby Atmos 7.1 Surround Sound
Power Supply EVGA 120-G2-1000-XR 80 PLUS GOLD 1000 W Power Supply
How soon before we get the 3090TI? Next March? Or as soon as we find out AMD can compete with the 3090 in game reviews? Will it be called the 3090TI or the 3090 Super?
 
Joined
Jul 5, 2013
Messages
27,818 (6.68/day)
I'm far less interested in the RTX/DLSS performance; I want to see its pure raw rasterization performance, because that's how about 99% of games run, while those other two things might only be selectively accessible in a handful of newer titles. I'm far more keen on the standard-usage performance.
Those performance metrics are coming. I believe the NDA lifts tomorrow or Tuesday, so there isn't long to wait.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce ÎĽDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Based on TPU's review of Control, RTX on cuts 2080 Ti performance in half at 1080p.


Without DLSS, the 2080 Ti gets about 100 FPS avg at 1080p, and about 50 FPS avg at 1080p with RTX.

So now the RTX 3090 gets about 100 FPS at 1080p with RTX (>2x from this chart).

That means the RTX 3090 needs at least 200 FPS at regular 1080p, that is, without RTX. Of course, this assumes no RT efficiency improvement. Let's assume there is some major RT efficiency improvement, so instead of a 0.5x performance penalty we have a 0.7x penalty. Then the RTX 3090 would be running 133-150 FPS without RTX, so a 30% to 50% performance uplift in non-RTX games. Also, we have 5248 versus 4352 CUDA cores, so the CUDA core increase by itself should give at least 20% more performance in non-RTX titles.

That is just some quick napkin math. I'm leaning more towards a 30-35% performance increase, but there is a good chance that I'm wrong.
This is a fundamental misunderstanding of how RT performance relates to rasterization performance. An increase in RT performance doesn't necessarily indicate any increase in rasterization performance whatsoever. Don't get me wrong, there will undoubtedly be one, but it doesn't work the way you are sketching out here.
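For reference, here is the quoted napkin math spelled out with its own numbers; it inherits exactly the assumption criticized above, namely that RT and raster performance scale together:

```python
# Re-running the quoted estimate with the numbers it assumes.
fps_2080ti_raster = 100                 # ~100 FPS avg at 1080p without RT (quoted)
fps_2080ti_rt     = 50                  # ~50 FPS avg with RT -> a 0.5x penalty on Turing
fps_3090_rt       = 2 * fps_2080ti_rt   # the leak implies roughly 2x Turing's RT figure

# Scenario A: Ampere keeps the same 0.5x RT penalty.
raster_a = fps_3090_rt / 0.5            # 200 FPS implied without RT
# Scenario B: RT gets more efficient, only a 0.7x penalty.
raster_b = fps_3090_rt / 0.7            # ~143 FPS, inside the quoted 133-150 range

uplift_a  = raster_a / fps_2080ti_raster - 1   # 100% raster uplift
uplift_b  = raster_b / fps_2080ti_raster - 1   # ~43%, inside the quoted 30-50% range
core_only = 5248 / 4352 - 1                    # ~21% expected from CUDA-core count alone

print(f"A: {raster_a:.0f} FPS (+{uplift_a:.0%}), B: {raster_b:.0f} FPS (+{uplift_b:.0%})")
print(f"CUDA-core scaling alone: +{core_only:.0%}")
```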
You don't need to do damage control for them.

I'm saying 10GB on an $800 graphics card is a joke. With that kind of money you should expect to use it for 3-4 years, and 10GB won't be enough when both new consoles have 16GB, RDNA2 will have 16GB cards for the same price or likely cheaper, and Nvidia will nickel-and-dime everyone with a 20GB 3080 model next month. The only people buying a 10GB $800 graphics card are pretty clueless ones.
No games actually use that much VRAM, and with more modern texture-loading techniques (such as MS bringing parts of DirectStorage to Windows), VRAM needs are likely to stand still, if not drop, in the future. Also, the 16GB of system memory on the consoles is (at least on the XSX) split into 2.5GB for the OS, 3.5GB of medium-bandwidth memory, and 10GB of high-bandwidth "GPU optimized" memory. Given that the non-graphics parts of the game will need some memory, we are highly unlikely to see games meant for those consoles exceed 10GB of VRAM at 4K (they might, of course, at higher settings).
Yeah, we all know RT and DLSS are nonsense meant to find a use for datacenter hardware and mask bad performance gains. There are many better ways to get nice visuals. Look at all the upcoming PS5 games like Ratchet and Clank (where RT is basically off and there's no DLSS).
What? Ratchet & Clank is chock-full of RT reflections, on glass, floors, Clank's body, etc.
No, the next-gen console OS takes only 2.5GB, leaving 13.5GB purely for the GPU (on Series X the memory is split, but it's still 13.5GB for games), which, again, is more than the paltry 10GB on this 3080.

And you can already push past 8GB today; it's going to get much worse once the new consoles are out and they drop PS4/XB1 for multiplatform titles.
Entirely agree. Also, as I noted above, of those 13.5GB, some memory needs to be used for the game code, not just graphics assets. For the XSX they have cut some costs here by making 3.5GB of the game memory pool lower bandwidth, which is a pretty clear indication of how they envision the worst-case scenario split between game code and VRAM for this console generation. I sincerely doubt we'll see any games using 12GB VRAM with just 1.5GB system memory ...
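
To put rough numbers on that, here's a quick sketch using the publicly stated Series X pools; treating the whole 3.5GB slow pool as the game-code reservation is just an assumption for illustration:

```python
# Rough Series X memory-budget sketch using the publicly stated pools.
total_gddr6 = 16.0   # GB, single shared pool
os_reserved = 2.5    # GB reserved for the OS
slow_pool   = 3.5    # GB at standard bandwidth, aimed at game code/data
fast_pool   = 10.0   # GB high-bandwidth, "GPU optimized"

game_total = total_gddr6 - os_reserved           # 13.5 GB visible to the game
# Assumption for illustration: the game keeps the slow pool for CPU-side
# code and data, so the fast pool is the practical VRAM ceiling.
vram_ceiling = min(fast_pool, game_total - slow_pool)   # 10.0 GB

print(f"game-visible memory: {game_total} GB")
print(f"practical VRAM ceiling under this assumption: {vram_ceiling} GB")
```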
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
When was it ever, to begin with?
A long time ago I bought a mighty Geforce 256 for 200 USD, and it was head and shoulders above the competition...
 

bug

Joined
May 22, 2015
Messages
13,773 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
OK, so yet another confirmation that RTX 20x0 series RTX functionality is simply an expensive paperweight.
It's a first taste of the tech. Since when does a first iteration get everything right? Was the first iPhone an expensive paperweight? Was 8086? 3dfx Voodoo?
 
Joined
Mar 28, 2020
Messages
1,755 (1.03/day)
Many game developers are pushing VRS, due to PS5 / XBox support.

It seems highly unlikely that tensor cores for DLSS would be widely implemented in anything aside from NVidia's machine-learning based GPUs. DLSS is a solution looking for a problem: NVidia knows they have tensor cores and they want the GPU to use them somehow. But they don't really make the image look much better, or save much compute. (Those deep learning networks need a heck of a lot of multiplications to work... and those FP16 multiplications are only accelerated on the special tensor cores.)

I am no tech person, but looking at how DLSS works, it appears that it requires quite a fair bit of investment from the game developers to make it work properly and affects time to market. And because it is proprietary to Nvidia, there is little motivation for a game developer to spend much time on it, unless Nvidia is willing to put in significant effort to help them optimize it. I don't think this is sustainable. I do agree that VRS seems to make more sense, since future products from AMD and Intel should support it, and it is vendor-agnostic.
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
You mean RTRT? And almost everyone does, which is why AMD has jumped on the RTRT bandwagon; their new GPUs are said to have full RTRT support for both consoles and PC.
The same story as with VR. And where is VR now?
EPIC not using RT in Unreal 5 is quite telling.
 
Joined
Dec 24, 2008
Messages
2,062 (0.35/day)
Location
Volos, Greece
System Name ATLAS
Processor Intel Core i7-4770 (4C/8T) Haswell
Motherboard GA-Z87X-UD5H, Dual Intel LAN, 10x SATA, 16x power phases
Cooling ProlimaTech Armageddon - Dual GELID 140 Silent PWM
Memory Mushkin Blackline DDR3 2400 997123F 16GB
Video Card(s) MSI GTX1060 OC 6GB (single fan) Micron
Storage WD Raptors 73Gb - Raid1 10.000rpm
Display(s) DELL U2311H
Case HEC Compucase CI-6919 Full tower (2003) moded .. hec-group.com.tw
Audio Device(s) Creative X-Fi Music + mods, Audigy front Panel - YAMAHA quad speakers with Sub.
Power Supply HPU-4M780-PE refurbished 23-3-2022
Mouse MS Pro IntelliMouse 16.000 Dpi Pixart Paw 3389
Keyboard Microsoft Wired 600
Software Win 7 Pro x64 ( Retail Box ) for EU
For the past 25 years, every time an Nvidia card gains 7%, it becomes another VGA card entity (model).
I bet that very few have memories from the era of the Nvidia TNT and the TNT2, and so on and on and on, through today's GTX and soon RTX.
How soon are we going to see the VTX? :p
 

bug

Joined
May 22, 2015
Messages
13,773 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
I am no tech person, but looking at how DLSS works, it appears that it requires quite a fair bit of investment from the game developers to make it work properly and affects time to market.
That's not how it works. Nvidia spends time training some neural networks; the developer only has to make some specific API calls.
For DLSS 1.0, Nvidia had to train on a per-title basis. Past a certain point that was no longer necessary, and now we have DLSS 2.0. I've even heard DLSS 3.0 may do away with the API-specific calls, but I'll believe that when I see it (I don't doubt that would be the ideal implementation, but I have no idea how close or far Nvidia is from it).
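
Roughly, the integration being described looks like the sketch below. Every name here is a hypothetical stand-in, not the actual NVIDIA SDK; the point is only that the trained network ships with the driver/SDK and the engine just feeds it buffers it already produces each frame:

```python
# Hypothetical sketch of what "just making some API calls" means here.

class PretrainedUpscaler:
    """Stand-in for a vendor-supplied, already-trained upscaling feature."""

    def __init__(self, render_res, target_res):
        # One-time setup: tell the feature the input and output resolutions.
        self.render_res = render_res
        self.target_res = target_res

    def evaluate(self, color, depth, motion_vectors, jitter):
        # The actual network inference happens inside the SDK/driver;
        # the engine just hands over per-frame data it already has.
        return f"frame upscaled {self.render_res} -> {self.target_res}"

# Engine side: create the feature once, then call it once per frame.
upscaler = PretrainedUpscaler(render_res=(1280, 720), target_res=(2560, 1440))
frame = upscaler.evaluate(color="color_buffer", depth="depth_buffer",
                          motion_vectors="mv_buffer", jitter=(0.25, -0.25))
print(frame)
```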
 
Joined
Nov 4, 2019
Messages
234 (0.13/day)
You see how people's minds work? I couldn't resist mentioning Ratchet and Clank since it is the first next-gen-only game coming. All those visual improvements are in rasterization, but because there is some inconsequential amount of RT in the game, somehow it is already an RT showcase for silly people. nVidia has already won the marketing battle for the silly people, that is for sure. I believe there will be an RT-off setting for 60fps; then we can compare after launch. Prepare to prefer the non-RT version...

Or compare it to PS5 not using "RT features":




I call it fake because it's not NV style to have charts with 0 as the baseline.

View attachment 167341

And, frankly, I don't get what "oddities" of JPEG compression are supposed to indicate.
This kind of "leak" doesn't need to modify an existing image; it's just plain text and bars.

Yeah that is another good example :)
 
Joined
Jul 5, 2013
Messages
27,818 (6.68/day)
The same story as with VR. And where is VR now?
You really gonna go with that comparison? Weak, very weak.
EPIC not using RT in Unreal 5 is quite telling.
No, it isn't. It just says that they aren't developing RTRT yet. Why? Because they use that same engine on more than one platform, which means the engine has to have a common code base for Windows, Android, and iOS. Two of those platforms are not and will not be capable of RTRT anytime soon; therefore Unreal 5 does not support RTRT, and it likely will not.
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
You really gonna go with that comparison? Weak, very weak.
I am going against "see, everyone's doing it, it's gotta be cool".

No, it isn't. It just says that they aren't developing RTRT yet. Why? Because they use that same engine on more than one platform, which means the engine has to have a common code base for Windows, Android, and iOS. Two of those platforms are not and will not be capable of RTRT anytime soon; therefore Unreal 5 does not support RTRT, and it likely will not.
The "why" is quite telling too (and it applies to any game developer, not just game engine developers).
The fact that they didn't need any "RTRT" yet delivered lighting effects that impressive is what I was referring to.
 