
Remedy Shows The Preliminary Cost of NVIDIA RTX Ray Tracing Effects in Performance

Joined
Jul 5, 2013
Messages
28,334 (6.75/day)
Yeah, it's a nice thought, but I suspect it won't be until next year that they can fully compete with Pascal, if at all. Maybe the plan is for RTRT to become a dirty word like tessellation. I mean, there are zero titles and it's already been written off in this very thread.

Shame.
I think the real problem is that the tech is so new that some feel they have no choice but to be skeptical. As for AMD, I really think they have something up their sleeves. You might be right about it being next year, though.
 
Joined
Dec 14, 2011
Messages
275 (0.06/day)
Processor 12900K @5.1all Pcore only, 1.23v
Motherboard MSI Edge
Cooling D15 Chromax Black
Memory 32GB 4000 C15
Video Card(s) 4090 Suprim X
Storage Various Samsung M.2s, 860 evo other
Display(s) Predator X27 / Deck (Nreal air) / LG C3 83
Case FD Torrent
Audio Device(s) Hifiman Ananda / AudioEngine A5+
Power Supply Seasonic Prime TX 1000W
Mouse Amazon finest (no brand)
Keyboard Amazon finest (no brand)
VR HMD Index
Benchmark Scores I got some numbers.
It needs to start somewhere, though.

The 2080 Ti is already a very large GPU, and a hungry one. If they had released it with more RT cores in lieu of CUDA cores, the general outlook would only be worse. And let's be real here, they are leading and releasing products that are in competition with themselves alone. Why would they price things keenly? I know I wouldn't! They're not our friends, they don't need to do us a solid. They are a business, they want to make money, and even if some read that as ugly, it's just a fact.

I doubt this gen 1 of RT is going to perform well in RT titles, to be honest. But as someone else pointed out, why not look at the positive: the regular gains are only a third, and if you're currently happy gaming with whatever you are running, then enjoy! If you were hoping for an upgrade when this gen launched, why not grab a second-hand 1080/Ti? Still sweet cards.
 
Joined
Feb 3, 2017
Messages
3,831 (1.33/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Did they go into detail about what that would be?

The Low, Med, High settings might just be exclusions of features.

Example: GI only being enabled at High, or Shadows on Med and High only. Another way they can go is what the BF5 devs alluded to: lower LOD for RT effects (makes sense, but defeats the purpose of RT since you're back to "faking it"). Could end up with combinations of all these things.
That is up to the developer. Disabling some features is an obvious option; reducing the resolution of GI/shadows/reflections/whatever is another. 1-2 spp is probably all the current gen can do, but that can be played around with to find more optimal solutions.

Lower resolution does not really defeat the purpose. The point of RTRT is a more accurate/realistic result, not high resolution. Lower resolution is currently used for the rasterized methods of pretty much everything RT is proposed for.
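The resolution argument can be put in rough numbers: the cost of an RT effect scales with the number of rays, which is pixel count times samples per pixel (spp). A minimal sketch; the resolutions, spp value, and function name are illustrative assumptions, not figures from the thread.

```python
# Why lowering an RT effect's resolution helps: ray count (and thus cost)
# scales with pixel count * samples per pixel (spp).
# All numbers here are illustrative, not measured.

def rays_per_frame(width, height, spp):
    """Primary rays cast per frame for a single RT effect."""
    return width * height * spp

full = rays_per_frame(1920, 1080, 2)  # effect traced at full 1080p, 2 spp
half = rays_per_frame(960, 540, 2)    # same effect at half resolution per axis

print(full // half)  # halving each axis -> 4x fewer rays for the effect
```

The image still renders at full resolution; only the effect is traced at a quarter of the ray budget and then filtered back up, which is why it does not defeat the purpose.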

I think you might be surprised to know how far ahead AMD is in their plans to get parity, or even leap frog Nvidia :cool:
Anything specific in that video that strikes you as AMD being far ahead?

Nvidia should have also released non-RTX cards to replace the GTX 1xxx series at similar prices. RTX in Quadro makes more sense at this point, and maybe in the flagship 2080 Ti, which is really an RTX Titan, not a 1080 Ti replacement.
Isn't that exactly what they did? RTX2080 replaces GTX1080Ti at similar price, RTX2070 replaces GTX1080 at similar price :D
 

Zendo911

New Member
Joined
Oct 18, 2018
Messages
9 (0.00/day)
So as it stands: a year ago, ray tracing as a technology was exclusive to movie making because of the exceptionally high computing costs required. Nvidia "decided" that ray tracing is the way of the future, and managed to create RT cores and integrate them into their high-end TU102, enabling them to run some RT features at playable frame rates at a decent resolution. (I'm not saying that the technology is exclusive to Nvidia; AMD can incorporate RT in their next-gen cards if it makes sense to them from an economic point of view, and I suspect it doesn't.)

The problem? They cost die space. TU102 is a massive 750 mm² chip. I don't recall any consumer-grade/gaming card ever being sold with such a massive die before; these sizes have been exclusive to professional-grade Quadro cards, which sold at much higher prices. The reason has always been yields. This is a chip that's nearly 3x the size of the one in something like the GTX 980, and that doesn't equate to 3x the chip cost, it costs many times more. I'm not saying a $1200 card is not profitable for them, it definitely is, and probably even more profitable than previous generations if we talk margin percentages, but I don't imagine it to be by a huge margin; it's not the rip-off that it seems to be. This is a high-end card, and it would always have come at a premium given the lack of competition.

The technology is still in its infancy, and Nvidia wanted to make sure they are first. The decision to include a novel, unproven technology in their high-end cards, leading to higher manufacturing costs due to the massive die, and to pass the costs to consumers might seem a little premature. However, the timing is perfect given the lack of competition. Nvidia couldn't have afforded to do so if AMD was on top of its game, and I suspect Nvidia saw AMD making a push with its edge in 7 nm tech and the development of Infinity Fabric and the MCM cards that were widely expected to be the tech behind Navi up until last June.

For anyone not interested in RT in its current state, Pascal cards are still around. I know they are previous-gen cards still sold at a premium with an extended life cycle, but that's only because there was no real competition from AMD. Progression in chip making has slowed down as Moore's law tapers off, and in that context it's hard to decide whether AMD performed badly or Nvidia performed extra well in the previous generation.
 
Joined
Jan 20, 2014
Messages
299 (0.07/day)
System Name gamingPZ
Processor i7-6700k
Motherboard Asrock Z170M Pro4S
Cooling scythe mugen4
Memory 32GB ddr4 2400mhz crucial ballistix sport lt
Video Card(s) gigabyte GTX 1070 ti
Storage ssd - crucial MX500 1TB
Case silverstone sugo sg10
Power Supply Evga G2 650w
Software win10
As it was stated multiple times: by the time (hybrid) ray tracing is a usable option in games, the RTX 2080 Ti will have inadequate performance and will be obsoleted by the next generations (stated not by Nvidia, of course :D)... and some people try to justify RTX 20xx prices just because "some day..." :shadedshu:
 
Joined
Feb 3, 2017
Messages
3,831 (1.33/day)
Nvidia "decided" that Ray Tracing is the way for the future
It is not as simple as that. The research towards this has been going on for a decade or more.
Ray tracing, or elements of it, has been coming for a while but has been held back by huge performance requirements. The research has been done; the hardware had to start somewhere.

The problem? They cost die space. TU102 is a massive 750 mm² chip. I don't recall any consumer-grade/gaming card ever being sold with such a massive die before; these sizes have been exclusive to professional-grade Quadro cards, which sold at much higher prices. The reason has always been yields. This is a chip that's nearly 3x the size of the one in something like the GTX 980, and that doesn't equate to 3x the chip cost, it costs many times more. I'm not saying a $1200 card is not profitable for them, it definitely is, and probably even more profitable than previous generations if we talk margin percentages, but I don't imagine it to be by a huge margin; it's not the rip-off that it seems to be.
Profitable, sure, but I am not convinced their margins are better compared to, say, Pascal. I would say at the same price points Nvidia is making noticeably smaller margins with Turing.
A $1200 RTX 2080 Ti, maybe. But an RTX 2080 at the same price point as the GTX 1080 Ti? An RTX 2070 at the same price point as the GTX 1080?
In addition to the considerably larger GPU itself, the boards seem to be more complex as well. Per the MSI interview from a few days back - https://www.techpowerup.com/248382/...issues-us-trade-war-and-rtx-2080-ti-lightning
- the RTX 2080 Ti uses some 2600 components compared to the GTX 1080 Ti at 1600. For further comparison, the RTX 2080 is said to use 2400 components and the RTX 2070 some 2200.
RT cores together with Tensor cores should cost 20% or less of the die space. That is not that bad: even if these were left out and they only built the usual GPU, we would still be looking at a 600 mm² chip for the xx102 GPU. There has been no process shrink; Turing is made on effectively the same process node as Pascal (with a minor efficiency bump on the process side).
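The die-area claim works out as simple arithmetic. Note the ~20% share is the poster's estimate, not a measured figure from a die shot:

```python
# Back-of-the-envelope check of the die-area claim: if RT and Tensor cores
# take ~20% of TU102's ~750 mm^2, the rasterization-only chip would still
# be huge. The share is an estimate from the post, not a measurement.

tu102_mm2 = 750          # approximate TU102 die size
rt_tensor_share = 0.20   # estimated fraction taken by RT + Tensor hardware

raster_only_mm2 = tu102_mm2 * (1 - rt_tensor_share)
print(round(raster_only_mm2))  # ~600 mm^2 even without the RT hardware
```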

The technology is still in its infancy, and Nvidia wanted to make sure they are first. The decision to include a novel, unproven technology in their high-end cards, leading to higher manufacturing costs due to the massive die, and to pass the costs to consumers might seem a little premature. However, the timing is perfect given the lack of competition. Nvidia couldn't have afforded to do so if AMD was on top of its game, and I suspect Nvidia saw AMD making a push with its edge in 7 nm tech and the development of Infinity Fabric and the MCM cards that were widely expected to be the tech behind Navi up until last June.
Nvidia definitely knows much more than we do about what AMD is up to. Infinity Fabric and MCM were never going to be behind Navi; that was just a pipe dream.

It is not so much that Nvidia wants to be first; they (and GPUs in general) need somewhere to go, and a technology to sell. Not only is Nvidia completely lacking competition in the high end, there are not many generations left for rasterization as it is today. The GTX 1080 Ti was just shy of 4K at 60 FPS, and it no longer fell off during its lifetime as GPUs have tended to do. The RTX 2080 Ti basically does 4K at 60 FPS and is surprisingly often CPU-limited at 1440p. Another generation or two with 30% improvements - the first of which will quite certainly be the transition to TSMC's 7 nm next year - and there is nowhere to go for the high end. 4K gaming monitors are only now starting to be a thing. 5K/8K exist but are not that much of a benefit for games given the performance impact and realistic screen sizes. Plus, the platform starts to be more and more the limiting factor.

I get that everyone is disappointed there is no new generation with price points one step down, but that does feel like a very entitled view of things. If Turing is not worth your money, do not buy one.

and some people try to justify RTX 20xx prices just because "some day..." :shadedshu:
This sounds an awful lot like FineWine™ :laugh::roll:
 
Joined
Jan 11, 2005
Messages
1,491 (0.20/day)
Location
66 feet from the ground
System Name 2nd AMD puppy
Processor FX-8350 vishera
Motherboard Gigabyte GA-970A-UD3
Cooling Cooler Master Hyper TX2
Memory 16 Gb DDR3:8GB Kingston HyperX Beast + 8Gb G.Skill Sniper(by courtesy of tabascosauz &TPU)
Video Card(s) Sapphire RX 580 Nitro+;1450/2000 Mhz
Storage SSD :840 pro 128 Gb;Iridium pro 240Gb ; HDD 2xWD-1Tb
Display(s) Benq XL2730Z 144 Hz freesync
Case NZXT 820 PHANTOM
Audio Device(s) Audigy SE with Logitech Z-5500
Power Supply Riotoro Enigma G2 850W
Mouse Razer copperhead / Gamdias zeus (by courtesy of sneekypeet & TPU)
Keyboard MS Sidewinder x4
Software win10 64bit ltsc
Benchmark Scores irrelevant for me
I think it's great technology, which will one day be usable at the same frame rates we expect today without RT (60 FPS and up). But it's a first generation and not ready for that yet. All those who say non-purchasers of RTX will be left behind are wrong. It will be several generations of cards before this is a huge thing. By then, those who did not upgrade with this first gen will likely have upgraded once already, so the argument is moot.

Those who don't adopt the RTX 20xx series, because of either cost or immature technology, are perfectly OK in not doing so. Likewise, those who want to, by all means do so. RT will see its day in the affordable mainstream because it is great, just not at this time.

Exactly my thoughts when they launched... those who buy now won't use RT features for a while, and by the time games pop up, the card may not be powerful enough to run them...

They should have made a batch in advance for game developers, to pave the road with RT games, and only then sold the feature to the public and asked the price...

It's like a Veyron where the factory doesn't give you the second key that unlocks the max speed...
 
Joined
Apr 30, 2012
Messages
3,881 (0.84/day)
That is up to the developer. Disabling some features is an obvious option; reducing the resolution of GI/shadows/reflections/whatever is another. 1-2 spp is probably all the current gen can do, but that can be played around with to find more optimal solutions.

Lower resolution does not really defeat the purpose. The point of RTRT is a more accurate/realistic result, not high resolution. Lower resolution is currently used for the rasterized methods of pretty much everything RT is proposed for.

Well, they're not even meeting their own expectations.

Nvidia said:
The aim is to reach a denoising budget of ~1 ms or less for 1080p target resolution on gaming class GPUs


Can't optimize lower unless you cut some effects out or just don't denoise

Demos are selling an unattainable "promise".
 
Joined
Feb 3, 2017
Messages
3,831 (1.33/day)
Well, they're not even meeting their own expectations
Can't optimize lower unless you cut some effects out or just don't denoise
Actually, I think they are. None of the RTRT effects work without some denoising/filtering at this stage (if ever).
The 4.4 ms is for reflections including denoising, with the reflections themselves taking most of the time there. Of the other mentioned effects, the 2.5 ms for GI includes denoising, and the 2.3 ms for shadows very likely does as well.
1 ms or less for the denoising part sounds about right.
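Summing the quoted per-effect times and checking the ~1 ms denoising target against them shows why those numbers are consistent. The figures come from the posts above; the script itself is just illustrative arithmetic (and pessimistically assumes no overlap between effects):

```python
# Per-effect frame times quoted in the thread, in milliseconds.
effect_ms = {"reflections": 4.4, "global_illumination": 2.5, "shadows": 2.3}
denoise_budget_ms = 1.0  # Nvidia's stated denoising target at 1080p

rt_total_ms = round(sum(effect_ms.values()), 1)
denoise_share = round(100 * denoise_budget_ms / rt_total_ms)

print(rt_total_ms)    # 9.2 ms total for the three effects if nothing overlaps
print(denoise_share)  # the denoising budget is ~11% of that total
```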
 
Joined
Dec 10, 2015
Messages
545 (0.16/day)
Location
Here
System Name Skypas
Processor Intel Core i7-6700
Motherboard Asus H170 Pro Gaming
Cooling Cooler Master Hyper 212X Turbo
Memory Corsair Vengeance LPX 16GB
Video Card(s) MSI GTX 1060 Gaming X 6GB
Storage Corsair Neutron GTX 120GB + WD Blue 1TB
Display(s) LG 22EA63V
Case Corsair Carbide 400Q
Power Supply Seasonic SS-460FL2 w/ Deepcool XFan 120
Mouse Logitech B100
Keyboard Corsair Vengeance K70
Software Windows 10 Pro (to be replaced by 2025)
clearer shadows and reflections
But from what I can see, the reflections are murky even on things that are supposed to reflect clearly, like the upper part of the trash bin. Only in the second part, which is a brighter scene, do things look slightly better. And remember, you need an RTX 2080 Ti to achieve this.
 
Joined
Jul 5, 2013
Messages
28,334 (6.75/day)
RTX2070 replaces GTX1080 at similar price :D
And outperforms it. I really don't see what everyone is whining about.

Can't optimize lower unless you cut some effects out or just don't denoise
Or change/modify/optimize the way the denoise function works. There might also be a way to change the way rays reflect/refract to limit the level of noise in the first place.
 
Joined
Feb 3, 2017
Messages
3,831 (1.33/day)
By the way, the video in the article is a bad choice. It is from March 2018, back when Turing cards were not a thing. Considering the timing, it was probably running on a Titan V or two.

The images are correct and this is the Control trailer:
 
Joined
Oct 15, 2010
Messages
208 (0.04/day)
I'm personally waiting for a 4K 144 FPS ray-tracing AMD card.
Until then I'll be happy with games implementing the Vulkan API, like Star Citizen said it will. Hell, I get double the FPS in DOOM with Vulkan compared to OpenGL. Why they're still using OpenGL is beyond me.
 
Joined
Oct 6, 2018
Messages
220 (0.10/day)
System Name SALTY
Processor A10-5800K
Motherboard A75
Cooling Air
Memory 10Gig DDR133
Video Card(s) HD 7660D
Storage HDD
Display(s) 4k HDR TV
Power Supply 320 Watt
With AMD, I said years back, when it looked like AMD could go under, that they needed to keep re-branding their graphics cards, not only to maximize the profit from the tech they already have but also to help cut costs until the company has room to breathe.

You have to remember that you chaps who feel you need high-end graphics cards are the few, not the many. Most computers don't need high-end graphics cards, so it makes sense to recycle tech if you're fighting to recover your company from almost going under.

I personally think AMD doesn't give two turds whether Nvidia has the fastest graphics card, just as long as AMD can compete for people wanting something that's affordable and does the job.

In a climate where most people struggle just to live and pay the bills, and anything else is a bonus, AMD have pulled themselves back from the brink, which in itself is an amazing feat.

For AMD, it's about keeping the company alive rather than being the fastest.
 
Joined
Feb 3, 2017
Messages
3,831 (1.33/day)
Until then I'll be happy with games implementing the Vulkan API, like Star Citizen said it will. Hell, I get double the FPS in DOOM with Vulkan compared to OpenGL. Why they're still using OpenGL is beyond me.
You have an AMD card, don't you? AMD's drivers are notoriously bad with OpenGL so Vulkan seems to improve things a lot. In many cases the same applies to DX11 and DX12.
 
Joined
Jul 5, 2013
Messages
28,334 (6.75/day)
I personally don't think AMD gives two turds if Nvidia has the fastest graphics card, just as long as AMD can compete for people wanting something that's affordable and does the job.
I gotta disagree. As much as I like my Intel Xeon and GeForce based PC, credit must be given where it's due. AMD is reaching for the stars. They are handily giving Intel a solid thumping in the CPU arena and show no signs of stopping. I have no doubt they are working on retaking the GPU crown from Nvidia. Maybe they'll do it again like they have in the past, and maybe they'll only come close, who knows. But make no mistake, they are going to forge ahead and fight the good fight.

AMD's drivers are notoriously bad with OpenGL so Vulkan seems to improve things a lot.
Oh, they are not. Like anyone who makes drivers for hardware, they have the odd bugs and glitches. Nvidia has had just as many. Let's not make a mountain out of a molehill.
 
Joined
Feb 3, 2017
Messages
3,831 (1.33/day)
I meant performance, not bugs.

It was pretty common knowledge that when DOOM was released, AMD cards would get a good performance boost from Vulkan while Nvidia cards would take a performance hit. After a while, with both DOOM patches and driver updates on both sides, things somewhat stabilized: AMD cards still get a boost, and Nvidia cards are at about the same level with both APIs. There are differences here and there (for example in CPU-limited situations), and DOOM runs better on AMD cards, but when it comes to the APIs, that's how it is.

Just as an example, this is about 2 months after DOOM release:
https://www.gamersnexus.net/game-bench/2510-doom-vulkan-vs-opengl-benchmark-rx-480-gtx-1080

Edit:
Sorry for offtopic. Wanted to delete my previous post but you had already replied :)
 

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
13,013 (2.49/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV 1950Mhz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | LG 24" IPS 1440p
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Corsair K65 Plus 75% Wireless - USB Mode
Software Windows 11 Pro 64-Bit
By the time there are a good number of games with proper implementations of ray tracing, Nvidia will have a new card out that does it a lot better than this 2000 series. I'm waiting for that.
 
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Wonderful OP, so very kind of you to reference FPS in ms; that certainly won't confuse most of your readers, and it makes a terrible performance impact look less terrible, in the best traditions of FUD.

The selling point with the Pascal gen was finally being able to game at ~60 FPS (often closer to 45-50 FPS, maybe, but close enough) at 4K. That's a pretty major step function in capability for users compared to past gens.

I wish people would spend a bit more time thinking about what "running at 4K" actually means.
4K gaming will become reality when devs actively target it.
Otherwise, you can keep resolution/FPS lower and add complexity to the scene instead.
 
Joined
Mar 21, 2016
Messages
2,508 (0.78/day)
This is like a SEGA-type hardware move, really. Sure, the hardware is great in a lot of ways, but it's not the right time for it, unfortunately. Hell, if you look at a game like Quake or DOOM: had they come out 2-3 years earlier, they'd have failed miserably due to choppy gameplay. The point at which those games were released was the right place and right time, which led to the sort of cult-status nostalgia you see today. Nvidia played this hand a bit too soon.
 
Joined
Feb 3, 2017
Messages
3,831 (1.33/day)
Wonderful OP, so very kind of you to reference FPS in ms; that certainly won't confuse most of your readers, and it makes a terrible performance impact look less terrible, in the best traditions of FUD.
How would you propose the times in the article be represented as FPS? Milliseconds is quite literally the time each of those stages/effects takes. While there are very few details, everything we know suggests these can at least partially overlap via asynchronous/concurrent compute.

The times are only for these specific effects; the normal rendering time comes in addition and overlaps with the RT effects. Even if none of the effects can run concurrently with each other, rendering definitely can. To what degree, we do not know.

9.2 milliseconds translates to 108 FPS. Again, we do not know how much the rest of the rendering adds to that. Assuming none of what the game does happens concurrently (which is quite surely not the case), there are about 7 milliseconds left in the time budget for the game to run at 60 FPS. Translating that to FPS: if the game runs at 140 FPS without these three effects, it can run at 60 FPS with them.
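The arithmetic above is just the reciprocal relationship between frame time and frame rate. A small sketch (the function names are mine; the 9.2 ms figure is the combined effect time quoted in the article):

```python
# ms <-> FPS conversion used in the paragraph above: frame rate is the
# reciprocal of frame time, with 1000 ms in a second.

def ms_to_fps(frame_time_ms):
    """Convert a per-frame time in milliseconds to frames per second."""
    return 1000 / frame_time_ms

def fps_to_ms(fps):
    """Convert a frame rate to a per-frame time budget in milliseconds."""
    return 1000 / fps

rt_ms = 9.2  # combined time of the three RT effects, from the article

print(int(ms_to_fps(rt_ms)))                   # RT effects alone -> 108 FPS
print(round(fps_to_ms(60) - rt_ms, 1))         # left in a 60 FPS budget: 7.5 ms
print(int(ms_to_fps(fps_to_ms(140) + rt_ms)))  # 140 FPS base + RT -> ~61 FPS
```

The last line shows the claim directly: a game that renders at 140 FPS without the effects lands just above 60 FPS with all 9.2 ms added on top, assuming no overlap at all.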
 