# Crytek Shows Off Neon Noir, A Real-Time Ray Tracing Demo For CRYENGINE



## crazyeyesreaper (Mar 15, 2019)

Crytek has released a new video demonstrating the results of a CRYENGINE research and development project. Neon Noir shows how real-time mesh ray-traced reflections and refractions can deliver highly realistic visuals for games. The Neon Noir demo was created with a new, advanced version of CRYENGINE's Total Illumination, showcasing real-time ray tracing. The feature will be added to the CRYENGINE release roadmap in 2019, enabling developers around the world to build more immersive scenes more easily with a production-ready version of the feature.












Neon Noir follows the journey of a police drone investigating a crime scene. As the drone descends into the streets of a futuristic city illuminated by neon lights, we see its reflection accurately displayed in the windows it passes, or scattered across the shards of a broken mirror, while its red-and-blue lighting routine bounces off the different surfaces using CRYENGINE's advanced Total Illumination feature. Demonstrating further how ray tracing can deliver a lifelike environment, neon lights are reflected in the puddles below them, street lights flicker on wet surfaces, and windows accurately reflect the scene opposite them.



 

Neon Noir was developed on a bespoke version of CRYENGINE 5.5, and the experimental ray tracing feature based on CRYENGINE's Total Illumination used to create the demo is both API- and hardware-agnostic, enabling ray tracing to run on most mainstream contemporary AMD and NVIDIA GPUs. However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12.

Ray tracing is a rendering technique that simulates complex lighting behaviors. Realism is achieved by simulating the propagation of discrete fractions of energy and their interaction with surfaces. With contemporary GPUs, ray tracing has become more widely adopted by real-time applications like video games, in combination with traditionally less resource-hungry rendering techniques like cube maps, used where applicable.
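The core idea the paragraph above describes can be sketched in a few lines: a ray carries a fraction of light energy, each surface hit absorbs part of it, and the remainder continues along the mirrored direction. This is a minimal toy sketch with an illustrative plane-based scene, not CRYENGINE code; all names here (`trace`, `nearest_hit`, `reflect`) are made up for the example.

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _add(a, b, s=1.0):
    """Component-wise a + s*b for 3-tuples."""
    return tuple(x + s * y for x, y in zip(a, b))

def reflect(d, n):
    """Mirror direction d about the unit surface normal n."""
    return _add(d, n, -2.0 * _dot(d, n))

def nearest_hit(origin, direction, planes):
    """Closest intersection with (point, normal, reflectivity) planes."""
    best, best_t = None, float("inf")
    for p0, n, refl in planes:
        denom = _dot(direction, n)
        if abs(denom) < 1e-9:
            continue  # ray parallel to plane
        t = _dot(_add(p0, origin, -1.0), n) / denom
        if 1e-6 < t < best_t:
            best_t, best = t, (_add(origin, direction, t), n, refl)
    return best

def trace(origin, direction, planes, energy=1.0, depth=0, max_depth=4):
    """Total energy absorbed by surfaces along one ray path."""
    if depth >= max_depth or energy < 1e-3:
        return 0.0
    hit = nearest_hit(origin, direction, planes)
    if hit is None:
        return 0.0  # ray escaped the scene
    point, normal, reflectivity = hit
    absorbed = energy * (1.0 - reflectivity)  # energy deposited here
    return absorbed + trace(point, reflect(direction, normal), planes,
                            energy * reflectivity, depth + 1, max_depth)

# Two facing half-reflective mirrors: each bounce deposits half the
# remaining energy, so four bounces absorb 0.5 + 0.25 + 0.125 + 0.0625.
mirrors = [((0, 0, 0), (0, 0, 1), 0.5), ((0, 0, 10), (0, 0, -1), 0.5)]
print(trace((0.0, 0.0, 5.0), (0.0, 0.0, 1.0), mirrors))  # prints 0.9375
```

Real engines trace against full triangle meshes with acceleration structures and shade at each hit; the recursion and per-bounce energy attenuation are the part this sketch is meant to show.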



 

The experimental ray tracing feature simplifies and automates the rendering and content-creation process to ensure that animated objects and changes in lighting are correctly reflected, with a high level of detail, in real time. This eliminates the known limitation of pre-baked cube maps and local screen-space reflections when creating smooth surfaces like mirrors, and allows developers to create more realistic, consistent scenes. To showcase the benefits of real-time ray tracing, screen-space reflections were not used in this demo.
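The pre-baked limitation mentioned above can be illustrated with a deliberately tiny sketch (illustrative names only, nothing engine-specific): a baked reflection is a snapshot of the scene taken once, so an object that moves afterwards keeps showing its stale position, while a per-frame ray query always reads the live scene state.

```python
# Live scene state: object name -> position (illustrative data).
scene = {"drone": (0, 0, 5), "neon_sign": (4, 2, 0)}

# "Baking" copies the scene once; later changes are invisible to it.
baked_mirror = dict(scene)

def traced_mirror(live_scene):
    """A traced reflection re-queries object positions every frame."""
    return dict(live_scene)

scene["drone"] = (3, 1, 5)  # the drone flies to a new position

print(baked_mirror["drone"])          # prints (0, 0, 5) -- stale reflection
print(traced_mirror(scene)["drone"])  # prints (3, 1, 5) -- tracks the motion
```

This is why baked cube maps work fine for distant static surroundings but break down for mirrors reflecting animated objects, which is exactly the case the demo targets.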

*View at TechPowerUp Main Site*


----------



## Ferrum Master (Mar 15, 2019)

RT and made on Vega...

Prepare popcorn everyone.


----------



## kastriot (Mar 15, 2019)

Beautiful.


----------



## mtcn77 (Mar 15, 2019)

Ferrum Master said:


> RT and made on Vega...
> 
> Prepare popcorn everyone.


Once again, we will be able to ask the obvious question...


----------



## Tomgang (Mar 15, 2019)

That is all great for developers and such. But come on Crytek, give us the game that started it all and showed us what the CryEngine could really do. I am of course talking about Crysis. I want a Crysis 4 that shows off the CryEngine's newest advances, with ray tracing, DX12 and of course the glorious graphics the games are so known for, and makes the GPUs sweat once again.

Please Crytek. I'm tired of demo after demo. I want Crysis.


----------



## Deleted member 67555 (Mar 15, 2019)

Really really like the no RTX needed...
And as soon as I saw Crytek I thought "Yes, Crysis 4"... But it's just a demo...
It's cool and all tho.


----------



## Warsaw (Mar 15, 2019)

Tomgang said:


> That is all great for developers and such. But come on Crytek, give us the game that started it all and showed us what the CryEngine could really do. I am of course talking about Crysis. I want a Crysis 4 that shows off the CryEngine's newest advances, with ray tracing, DX12 and of course the glorious graphics the games are so known for, and makes the GPUs sweat once again.
> 
> Please Crytek. I'm tired of demo after demo. I want Crysis.



++++++++1 I agree 99999%! I want graphics to push hardware again and be in awe of it! I even wrote a thank-you message to Crytek when I noticed they responded to one of the comments on the YouTube video. I want a new Crysis and SWEET SWEET graphics to make our machines crawl. Never before or since has a game made me just want to look at and appreciate the visuals the way Crysis did. It just blew me away.


----------



## JB_Gamer (Mar 15, 2019)

Ok..., then what about RTX, will it be redundant???


----------



## Nxodus (Mar 15, 2019)

JB_Gamer said:


> Ok..., then what about RTX, will it be redundant???



Don't expect Crytek raytracing to be easy on hardware. 
The extra RT cores on Nvidia cards will give a serious edge over non-RT cards


----------



## Deleted member 158293 (Mar 15, 2019)

(hybrid)Ray Tracing & Vulkan

All good then!

nvidia will probably use their dedicated cores for it, and AMD can leverage their superior compute for it. Interesting to see how this progresses.

The only hardware left out seems to be non-RTX nvidia cards, which also lack the necessary compute performance.


----------



## dinmaster (Mar 15, 2019)

Does no one remember this company almost folding multiple times because they couldn't pay their employees? They are selling a game engine, and the demos show the improvements to it. The Hunt game is small and worth it for them. If they come out with a battle royale game that makes them some good money, they will be able to make those games again. I don't see them making another Crysis for some time, unfortunately...


----------



## Mistral (Mar 15, 2019)

JB_Gamer said:


> Ok..., then what about RTX, will it be redundant???


Well, there's still a chance nVidia muscles its way in with Crytek like they did with the Crysis 2 tessellation back in the day...


----------



## natr0n (Mar 15, 2019)

Would be cool if they released the demo for us to check out.


----------



## FreedomEclipse (Mar 15, 2019)

Ubisoft: *DOWNGRADING INTENSIFIES*


----------



## XiGMAKiD (Mar 15, 2019)

Nice to see that they're still afloat, and the demo is pretty nice too


----------



## Tomorrow (Mar 15, 2019)

Tomgang said:


> That is all great for developers and such. But come on Crytek, give us the game that started it all and showed us what the cryengine cut really do. I am off cause talking about Crysis. I want a Crysis 4 that shows the Cryengine newest advantage with ray-tracing, DX12 and of cause the glories graphics the games are so known for and make the GPU´s sweet once again.
> 
> Please Crytek. Im tired of demos after Demos.  I want crysis


Rather Vulkan. That way Crysis 4, if it ever comes, would not be limited to Win10 and could even run on Linux, potentially. We can dream tho...


----------



## lexluthermiester (Mar 15, 2019)

Nxodus said:


> The extra RT cores on Nvidia cards will give a serious edge over non-RT cards


Very likely. There's no reason not to use them.


----------



## Deleted member 67555 (Mar 15, 2019)

yakk said:


> (hybrid)Ray Tracing & Vulkan
> 
> All good then!
> 
> ...


https://www.anandtech.com/show/11987/the-nvidia-geforce-gtx-1070-ti-founders-edition-review/13
Yes AMD is better but it really just sucks for 1060 owners.


----------



## Deleted member 158293 (Mar 15, 2019)

jmcslob said:


> https://www.anandtech.com/show/11987/the-nvidia-geforce-gtx-1070-ti-founders-edition-review/13
> Yes AMD is better but it really just sucks for 1060 owners.



Yeah, unfortunately it looks like 1060 cards may not age very well.


----------



## mtcn77 (Mar 15, 2019)

jmcslob said:


> https://www.anandtech.com/show/11987/the-nvidia-geforce-gtx-1070-ti-founders-edition-review/13
> Yes AMD is better but it really just sucks for 1060 owners.


And there is only so much AMD users can do: the memory timing strap trick in Wattman, bandwidth optimisation via poclmembench. I can safely say the memory system is maxed out. There's even a memory error meter. Not everyone can overclock, but these things are so basic.


----------



## Imsochobo (Mar 15, 2019)

Nxodus said:


> Don't expect Crytek raytracing to be easy on hardware.
> The extra RT cores on Nvidia cards will give a serious edge over non-RT cards



No and yes.
They're just more efficient cores; they have absolutely no performance benefit per se, leaving energy use aside. I.e., an RTX 2060 may be slower than a VII in RT games despite the VII having no RT cores.
It's a big misconception, pushed by nVidia's marketing team, that there is some special sauce in every nook of their arch, kind of like Apple does.

They're just lower-precision cores that aren't as fat, and thus efficient in one way, but it makes the chips very inefficient in performance per die area. I doubt they can do this on smaller nodes at all, so MCM is likely where we'll see RT actually take off, as I feel the RTX 2xxx is just too early!

Not to take anything away from nvidia's highly efficient arch, superior performance etc., but new magical stuff it is not... it's the same old design choices, and for the first time since Kepler I think they've made the wrong choice. Titan and 2080 Ti as RTX, with the rest non-RT, would actually not have been so dumb.
The arch and technology are all capable; it is just too early, and that is absolutely not something nvidia does often.

Edit: rephrased quite a bit


----------



## cdawall (Mar 15, 2019)

Ferrum Master said:


> RT and made on Vega...
> 
> Prepare popcorn everyone.



Yeah, AMD has talked about ray tracing since Vega first released; they had gobs of PR about it for the Vega FE.


----------



## Zubasa (Mar 16, 2019)

JB_Gamer said:


> Ok..., then what about RTX, will it be redundant???


That's the thing: in BF5 the RTRT option is actually labeled DXR.
RTX is just nVidia's branding of DXR + DLSS etc. in this case.


----------



## cucker tarlson (Mar 16, 2019)

Imsochobo said:


> No and yes.
> they're just more efficient cores, they have absolutely no performance benefit per se looking away from energy use





Imsochobo said:


> a RTX2060 may be slower than VII in RT games



Fanboy fantasies.
No performance benefit, yeah, right.








The 2080 Ti is 1.55x faster than the Titan V in BF5 RTX, so a 2060 would be around Titan V performance in RTRT.
If you think the RVII can beat a 15 TFLOP nvidia card with 640 dedicated tensor cores (more than the 2080 Ti), then yeah, good luck with that.











I already heard fanboys say that the Fury X would be faster in RTRT than Pascal; now the RVII would be faster than *RTX* cards at doing *RTX*.


----------



## Vayra86 (Mar 16, 2019)

JB_Gamer said:


> Ok..., then what about RTX, will it be redundant???



Yes. Been saying so since day one. A hardware implementation that takes such a massive amount of die space is so grossly inefficient that simple economics will destroy it. If not with Turing, then later down the line. It's just not viable. Sales numbers currently only underline that sentiment. I'm not the only one frowning at this; already with the first gen and a meagre implementation we're looking at a major price bump because the die is simply bigger. The market ain't paying it, and devs will not spend time on it as a result. Another aspect: I'm not looking to sell my soul to Nvidia's overpriced proprietary bullshit, and I'm not paying for inefficiency. Efficiency has been the reason I've bought Nvidia the past few generations... they were _more efficient._ Their wizardry with VRAM, for example, and balancing out (most) GPUs in the stack so well, is quite something. Turing is like a 180-degree turn.

This, however... yes. Simply yes. Attacking the performance problem from the angle of a software-based implementation that can scale across the _entire GPU_ instead of just a part of it, while the _entire GPU is also available_ should you want the performance elsewhere. Even if this runs at 5 FPS in realtime today on a Vega 56, it's already more promising than dedicated hardware. This is the only way to avoid a PhysX situation. RT needs widespread adoption to get the content to go along with it. If I can see a poorly running glimpse of my RT future on a low-end GPU, this will catch on, and it will be an immense incentive for people to upgrade, and keep upgrading. That is _viable on a marketplace._

Another striking difference, I feel, is the quality of this demo compared to what Nvidia has put out with RTX. This feels like a next step in graphics in every way; the fidelity, the atmosphere simply feels right. With every RTX demo thus far, even in Metro Exodus, I don't get that same feeling. It truly feels like some weird overlay that doesn't come out quite right. Which, in reality, it also is. The cinematically badly lit scenes of Metro only emphasize that when you put them side by side with non-RT scenes. The latter may not always be 'correct', but it sure is a whole lot more playable.



cucker tarlson said:


> fanboy fantasies.
> no performance benefit,yeah,right.
> 
> View attachment 118780
> ...



*DXR*. In the end Nvidia is using a customized setup that works for them; it remains to be seen how well AMD can plug into DXR with their solution, or how Crytek does it now, and/or whether they even want or need to. The DX12 requirement sure doesn't help, and DXR will be bogged down by rasterization as well, as it sits within the same API. There is a chance the overall trend will move away from DXR altogether, leaving RTX in the dust or out to find a new point of entry.


----------



## PanicLake (Mar 16, 2019)

lexluthermiester said:


> Very likely. There's no reason not to use them.


Considering there are only two games with RT support, I think that with a "run everywhere" engine those RT cores will end up sitting there watching... unless the engine has an RT implementation from the get-go, of course.

I think it will be interesting to have this demo as a benchmark, make it so Crytek!


----------



## cucker tarlson (Mar 16, 2019)

Vayra86 said:


> Yes. Been saying since day one. A hardware implementation that takes such a massive amount of die space is so grossly inefficient, simple economics will destroy it. If not with Turing then later down the line. Its just not viable. Sales numbers currently only underline that sentiment. I'm not the only one frowning at this; already with the first gen and a meagre implementation we're looking at a major price bump because the die is simply bigger. The market ain't paying it and devs will not spend time on it as a result. Another aspect: I'm not looking to sell my soul to Nvidia's overpriced proprietary bullshit, I'm not paying for inefficiency. Its been the reason I've bought Nvidia the past few generations... they were _more efficient. _Their wizardry for example with VRAM, and balancing out (most) GPUs in the stack so well is quite something. Turing is like a 180 degree turn.
> 
> This, however... yes. Simply yes. Attacking the performance problem from the angle of a software-based implementation that can scale across the _entire GPU_ instead of just a part of it, while the _entire GPU is also available_ should you want the performance elsewhere. Even if this runs at 5 FPS today in realtime on a Vega 56, its already more promising than dedicated hardware. This is the only way to avoid a PhysX situation. RT needs widespread adoption to get the content to go along. If I can see a poorly running glimpse of my RT future on a low-end GPU, this will catch on, and it will be an immense incentive for people to upgrade, and keep upgrading. Thát is _viable on a marketplace._
> 
> ...


Well, we're stuck with it, so you might as well get a start on supporting AMD with your wallet now.
You think a software developer will ditch DXR to hurt nvidia so that AMD can have their cards running at 15 fps?


----------



## Vayra86 (Mar 16, 2019)

cucker tarlson said:


> well,we're stuck with it so you might get a start on supporting amd with your wallet now.



Nope. This is still squarely early adopters' territory, and in a war of competing standards it's the worst moment to vote for anyone. There is no room for a niche that does it differently.

Patience, young gwasshoppa



cucker tarlson said:


> you think a software developer will ditch dxr to hurt nvidia so that amd can have their cards running at 15 fps ?



Software devs will implement the feature that hits the largest target audience if they have a say in it. Performance comes afterwards, and low performance isn't directly a problem for adoption; look at history. It's the same as VR adoption: the cheapest HMD still leads in sales (PSVR). Low barrier to entry is king.


----------



## cucker tarlson (Mar 16, 2019)

Vayra86 said:


> Nope. This is still squarely into the early adopters' territory, and in a way of competing standards, its the worst moment to vote for anyone. There is no room for a niche that does it differently.
> 
> Patience, young gwasshoppa


Currently the RVII is a more niche product than the 2080. It costs them more to manufacture than a lower-tier nvidia card.
Not saying what you're saying isn't true, but you're not considering the circumstances. It's all about the current situation, which is why nvidia decided to do that, imo.
It's clear their approach is "what are you going to do, buy AMD? lol". They might as well replace "the way it's meant to be played" with this one.


----------



## Vayra86 (Mar 16, 2019)

cucker tarlson said:


> currently a RVII is a more niche product than 2080.Costs them more to manufacture than a second down the line nvidia card.
> not saying what you're saying isn't true,but you're not considering the circumstances.it's all about the current situation why nvidia decided to do that imo.



The current situation is that Nvidia is looking at a product stack with no incentive for upgrading, simply because the vast majority runs 1080p and even the el cheapo 2060 can totally kill that for every game. I don't even fully utilize my 1080 with a 120 FPS target...

So they needed a new incentive, and RT was the way. I'm not buying into shareholders' dreams though; I look at the right time for tech and games, and right now it's premature. Thus far shareholders seem to agree on that.

Also, Nvidia's primary goal isn't 'beat AMD'. Their primary goal is to maintain growth and market share, and they already had the most interesting range of products. In that sense they had zero need for RTX.


----------



## cucker tarlson (Mar 16, 2019)

Vayra86 said:


> The current situation is that Nvidia is looking at a product stack with no incentive for upgrading. Simply because the vast majority runs 1080p and even el cheapo 2060 can totally kill that for every game. I don't even fully utilize my 1080 with a 120 FPS target...


Yeah right, good night.
Looking at TPU's review, the 1080 gets an avg of 120 in 7 out of 21 games.
I had a 1080 too, and it ran 1440p really well. But you're taking best-case scenarios where it runs 120 with no problems and applying them across the board. There were plenty of games I played on my GTX 1080 at 1440p that could barely manage 60-70 fps.


----------



## moproblems99 (Mar 16, 2019)

Vayra86 said:


> The current situation is that Nvidia is looking at a product stack with no incentive for upgrading. Simply because the vast majority runs 1080p and even el cheapo 2060 can totally kill that for every game. I don't even fully utilize my 1080 with a 120 FPS target...



I find myself in a predicament as well.  I sit at 144 Hz 21:9 right now and my V56 is now... insufficient.  I contemplated a VII with a water block, but by the time I expand my loop to accommodate it, I am not that far from a 2080 Ti... which is gross.

I'm with you in that I see the RTX 2XXX series being rendered grossly obsolete by a similarly architected but much more powerful RTX 3XXX, or by a completely different method. Lastly, it could simply be tabled if neither of the former happens and buy-in stays low.


----------



## Vayra86 (Mar 16, 2019)

moproblems99 said:


> I find myself in a predicament as well.  I sit at 144hz 21:9 by right now and my V56 is now...insufficient.  I contemplated a VII with a water block but by the time I expand my loop to accommodate, I am not that far from a 2080ti...which is gross.
> 
> I'm with you that I see the relevance of RTX2XXX series being rendered grossly obsolete by a similarly architected but much more powerfull RTX3XXX or a completely different method.  Last, it could strictly be tabled if neither of the former happen and buy in stays low.



Even coming from a V56, I'd say there isn't enough to gain, and the price/perf is so far off I'm personally not even considering it. People who think this is somehow 'the new normal' are completely deluded and, above all, very impatient. That's all this is. We're just arriving at 7nm. Good things are ahead of us. WAIT.


----------



## cucker tarlson (Mar 16, 2019)

moproblems99 said:


> I find myself in a predicament as well.  I sit at 144hz 21:9 by right now and my V56 is now...insufficient.  I contemplated a VII with a water block but by the time I expand my loop to accommodate, I am not that far from a 2080ti...which is gross.
> 
> I'm with you that I see the relevance of RTX2XXX series being rendered grossly obsolete by a similarly architected but much more powerfull RTX3XXX or a completely different method.  Last, it could strictly be tabled if neither of the former happen and buy in stays low.


Look for a deal on a used 1080 Ti, imo.



Vayra86 said:


> Even from V56 I'd say there isn't enough to gain and the price/perf is so off I'm personally not even considering it. People who think this is somehow 'the new normal' are completely deluded and above all, very impatient. That's all this is. We're just arriving at 7nm. Good things are ahead of us. WAIT.


Like buying a 1080 wasn't already.
Please, all these people saying they're not getting sucked into this: you're already in it.


----------



## Vayra86 (Mar 16, 2019)

cucker tarlson said:


> like buying 1080 wasn't already.



Wasn't already what? Bad price/perf? I paid 450 EUR for mine, can't say I'm complaining. But yes, overall, prices for Pascal have been inflated almost all the time.



cucker tarlson said:


> please,all these people saying they're not getting sucked into this,you're already in it.



We aren't. Even today people have the common sense to identify that, for example, the RTX 2060 is the only truly interesting product on offer, because it pushes perf/dollar in the right direction. The GTX 970 was a sales cannon, Pascal was a sales cannon, simply because they offered major performance jumps at the _same or similar price points._ Don't mistake enthusiast hype for reality. They couldn't be further apart.


----------



## cucker tarlson (Mar 16, 2019)

Vayra86 said:


> Wasn't already what? Bad price/perf? I paid 450 eur for mine, can't say I'm complaining  But yes, overall, prices for Pascal have been inflated almost all the time.


New normal.
At least back then the new die-shrunk card beat the old high end by 20% and won hands down in efficiency.
Now a 300 mm² R7 can't even beat the 1080 Ti, and it's using more power at the same time.
Please, don't tell me there's somehow a way for us to avoid the new normal. You're delusional if you think you can. You've got no choice; it's going all along the line-up. Get a console, you'll avoid the new normal then.


----------



## moproblems99 (Mar 16, 2019)

Vayra86 said:


> Even from V56 I'd say there isn't enough to gain and the price/perf is so off I'm personally not even considering it. People who think this is somehow 'the new normal' are completely deluded and above all, very impatient. That's all this is. We're just arriving at 7nm. Good things are ahead of us. WAIT.



Yeah, if it wasn't for FreeSync 2, I would have bailed and bought a new card already. I'm torn on whether Navi will deliver anything other than a price cut. They are being pretty quiet about it, but that means it is one of two things: disappointing or a long way off.



cucker tarlson said:


> look for a deal on a used 1080Ti imo.



I would, but it just isn't a big enough jump at this point. Pretty sure compatibility with FS2 won't be a problem, so that is a plus.


----------



## Vayra86 (Mar 16, 2019)

cucker tarlson said:


> new normal.
> at least back then the new die-shrunk card beat the old high end by 20% and hands down in efficiency.
> now a 300mm2 R7 can't even beat the 1080Ti and it's using more power at the same time.
> please,don't tell me there's somehow a way for us to aviod the new normal.you're delusional if you think you can.you got no choice,it's going all along the line-up.get a console,you'll avoid the new normal then.



Again: the RTX 2060 proves you wrong, and the RVII is the worst possible example; it's a GPU for a different segment with a gaming sticker on it and 16GB of HBM.



> 2060 is nice,but look how much you're paying for a +50% performance increase over 1060 that launched in 2016.Same one that 1060 delivered over 980 at $250.



We're paying $350 versus $299 for a 50% perf increase. That is not bad at all. Yes, there is more time in between, but apart from that, this is a solid jump like we've come to expect from a new gen.

And the price of a 980... is almost TWICE that of a 1060.


----------



## cucker tarlson (Mar 16, 2019)

Vayra86 said:


> Again: RTX 2060 proves you wrong and RVII is the worst possible example, its a GPU for a different segment with a gaming sticker on it and 16GB HBM.


The 2060 is nice, but look how much you're paying for a +50% performance increase over the 1060 that launched in 2016. The same one that the 1060 delivered over the 960 at $250.


----------



## moproblems99 (Mar 16, 2019)

Vayra86 said:


> Again: RTX 2060 proves you wrong and RVII is the worst possible example, its a GPU for a different segment with a gaming sticker on it and 16GB HBM.



I'll be honest, I am intrigued by the idea of water-cooling a VII. I see reports of 2100 to 2200 clocks, which slots it ahead of the 2080ti. Correction: 2080. Typo there.


----------



## cucker tarlson (Mar 16, 2019)

moproblems99 said:


> I'll be honest, I am intrigued of WC a VII.  I see reports of 2100 to 2200 clock which slots it ahead of 2080ti.  Correction: 2080.  typo there.


You'd need a 1.4x OC for that.
Getting an R7 and a WC setup capable of cooling 500 W to beat a 2080 seems like a stupid idea money-wise.


----------



## moproblems99 (Mar 16, 2019)

cucker tarlson said:


> you'd need 1.4x oc for that
> getting a r7 and wc setup capable of cooling 500w to beat 2080 seems like a stupid idea money-wise.



Not disagreeing. That is why I still have my V56.

EDIT:

Also, it doesn't take a 1.4x OC to get past the 2080. It is the 2080 Ti that has the large goal post. However, in certain loads, the WC VII can match or pass the 2080 Ti, I believe. Of course, those workloads don't do me any good.


----------



## cucker tarlson (Mar 16, 2019)

Vayra86 said:


> Again: RTX 2060 proves you wrong and RVII is the worst possible example, its a GPU for a different segment with a gaming sticker on it and 16GB HBM.
> 
> 
> 
> ...


I meant 960, you knew that. I meant the 1060 matched the 980 at $250; the RTX 2060 matches the 1080 at $350. Hard to really not see that. But *FIIIIINNEEEEEE*, keep defending your point.
And you're not paying just $50 more; you're using FE prices for the 1060 to skew the facts, which is where I think I'll just pass on further comments.



moproblems99 said:


> Not disagreeing.  That is why I still have my V56.


Like I said, a used 1080 Ti is a nice option for you, the best one hands down at this point.
It'll be a looooong time till you can buy something better than a 2080/R7 at more attractive prices.



moproblems99 said:


> Not disagreeing.  That is why I still have my V56.
> 
> EDIT:
> 
> Also, it doesn't take 1.4OC to get pass the 2080.  It is the 2080ti that has the large goal post.  However, in certain loads, the WC VII can match or pass the 2080ti I believe.  Of course, those workloads don't do me any good.




Okay, you're correcting my statement now because you made your own wrong and edited it, which is pretty annoying for me to have to explain.


----------



## moproblems99 (Mar 16, 2019)

Yeah, I can really hold out though. AC Odyssey is really the only thing I am playing right now that is sub-60 FPS, and FreeSync is keeping it enjoyable. Unless something comes along that changes my mind, I'll just stick it out. I am moving on from my 4770K this year too, pending the Ryzen 3000 series, so we'll see what happens then.


----------



## cucker tarlson (Mar 16, 2019)

moproblems99 said:


> Yeah, I can really hold out though.  AC Odyssey is really the only thing I am playing right now that is sub 60FPS and FreeSync is keeping it enjoyable.  Unless something comes along that changes my mind, I'll just stick it out.  I am moving on from my 4770K this year too pending Ryzen 3000 series so we'll see what happens then.


turn down volumetric clouds.


----------



## Vayra86 (Mar 16, 2019)

cucker tarlson said:


> I meant 960,you knew that.I meant 1060 matched 980 at $250,rtx 2060 matches 1080 at $350.



No, I didn't. But yes, in that sense you are absolutely correct, price creeps up.



cucker tarlson said:


> and you're saying they'll climb back down cause rtx will implode.
> good one.



No. That's what you make of it... I'm saying there is a limit to what people will accept in terms of price increases, and RTX is going over it. Also don't forget time itself plays a role as well, i.e. inflation and whatnot. The overall trend is STILL, even with the 2060, that the *same performance costs less* a generation later. And it's the only card in the RTX line that does this.


----------



## cucker tarlson (Mar 16, 2019)

Vayra86 said:


> No I didn't. But yes, in that sense you are absolutely correct price creeps up.


And you're saying they'll climb back down cause RTX will implode.
Good one. Jensen would laugh hard. Not how ngreedia operates.



Vayra86 said:


> No I didn't. But yes, in that sense you are absolutely correct price creeps up.
> 
> 
> 
> No. That's what you make of it... I'm saying there is a limit to what people will accept in terms of price increases and RTX is going over it. Also don't forget time itself plays a role as well, ie inflation and whatnot. The overall trend is STILL, even with the 2060, that the *same performance costs less* a generation later. And its the only card in the RTX line that does this.


Given how a rogue attempt at RTX is still crushing the competition in sales, their 7nm launch can only get better (or let's call it less messy).


----------



## Vayra86 (Mar 16, 2019)

You seem awfully emotional over this. I'm just stating what I see, you're at liberty to have a different view on it...


----------



## SoNic67 (Mar 16, 2019)

cucker tarlson said:


> cause rtx will implode


Yes, it will implode, because nobody will code games _specifically_ for it.
Games are coded principally for consoles and for the upper-mainstream video cards. RTX right now works _well_ on only two cards: 2080 and 2080 Ti. Too niche of a market.


----------



## cucker tarlson (Mar 16, 2019)

Vayra86 said:


> You seem awfully emotional over this. I'm just stating what I see, you're at liberty to have a different view on it...


Boy, it is hard to read a person you don't know from sitting in front of your computer.
I'm just bored of hearing the things you say.



SoNic67 said:


> Yes it will implode because nobody will code games _specifically_ for it.
> Games are coded principally for consoles and for the upper main stream video cards. RTX right now works _well_ only on two cards: 2080 and 2080Ti. Too niche of a market.


Why is it that I keep hearing of all the major engines adding RTX support?
I must be getting fake info.


----------



## moproblems99 (Mar 16, 2019)

cucker tarlson said:


> okay,you're correcting my statement now cause you made your own one wrong and edited it,which is pretty annoying for me to have to explain.



LOL, my edit was in your quote!


----------



## cucker tarlson (Mar 16, 2019)

moproblems99 said:


> LOL, my edit was in your quote!


didn't see it and it doesn't change things.
you knew you said 2080Ti.


----------



## Vayra86 (Mar 16, 2019)

cucker tarlson said:


> boy,it is hard to read a person you don't know sitting in front of your computer.
> I'm just bored of hearing the things you say.
> 
> 
> ...



Seems I did read you quite right as being annoyed with my comments, then, did I not? You're getting blunt; take a breather/step back, because I do agree this beaten horse is dead. We can see things differently and coexist.


----------



## moproblems99 (Mar 16, 2019)

cucker tarlson said:


> didn't see it and it doesn't change things.
> you knew you said 2080Ti.



I changed it fast enough that you quoted it when you replied.  I'm sorry you didn't read it.  There were also fringe cases where the 2080ti was beaten by the WC OC VII so it technically wasn't a wrong statement either.  I just don't use AMD marketing tactics.

We had a good thing going.  Let's save the rest for the second date.


----------



## cucker tarlson (Mar 16, 2019)

Vayra86 said:


> Seems I did read you quite right as being annoyed with my comments, then, did I not? You're getting blunt, take a breather/step back, because I do agree beaten horse is dead. We can see things differently and coexist.


we can.
I'm not annoyed.
but I told you I'm just tired of hearing that we as educated consumers can divert the market.we're helpless.



moproblems99 said:


> There were also fringe cases where the 2080ti was beaten by the WC OC VII so it technically wasn't a wrong statement either.


can you link those extreme fringe cases ?







I'm sure you'll find a case of 2060 beating RVII,I personally saw tests where 980Ti beat Vega 64.So technically they're both true,right ?


----------



## Nxodus (Mar 16, 2019)

Vayra86 said:


> Yes. Been saying since day one. A hardware implementation that takes such a massive amount of die space is so grossly inefficient, simple economics will destroy it. If not with Turing then later down the line. Its just not viable. Sales numbers currently only underline that sentiment. I'm not the only one frowning at this; already with the first gen and a meagre implementation we're looking at a major price bump because the die is simply bigger. The market ain't paying it and devs will not spend time on it as a result. Another aspect: I'm not looking to sell my soul to Nvidia's overpriced proprietary bullshit, I'm not paying for inefficiency. Its been the reason I've bought Nvidia the past few generations... they were _more efficient. _Their wizardry for example with VRAM, and balancing out (most) GPUs in the stack so well is quite something. Turing is like a 180 degree turn.
> 
> This, however... yes. Simply yes. Attacking the performance problem from the angle of a software-based implementation that can scale across the _entire GPU_ instead of just a part of it, while the _entire GPU is also available_ should you want the performance elsewhere. Even if this runs at 5 FPS today in realtime on a Vega 56, its already more promising than dedicated hardware. This is the only way to avoid a PhysX situation. RT needs widespread adoption to get the content to go along. If I can see a poorly running glimpse of my RT future on a low-end GPU, this will catch on, and it will be an immense incentive for people to upgrade, and keep upgrading. Thát is _viable on a marketplace._
> 
> ...



I'm one of those idiots who bought an RTX card for RT. I don't care about the future, I wanted to enjoy RT now! And Metro Exodus, man, I gotta say, RTX blew my mind away. I can't even find words for the exceptional beauty of it.. I mean, all those screenshots did no justice to RT at all. Turning RT off in Metro Exodus made it look like a 2009 game. The realism, the ambience of RTX justified every cent I paid for my 2060. I only play 6 or so games a year, I wanted the best visual experience and it was well worth it.

I understand your points, but It's important to note, there is a tiny minority of people like me, who fell in love with the RTX line of cards and my motto is: Once you RTX you never go back


----------



## Vayra86 (Mar 16, 2019)

Nxodus said:


> I'm one of those idiots who bought an RTX card for RT. I don't care about the future, I wanted to enjoy RT now! And Metro Exodus, man, I gotta say, RTX blew my mind away. I can't even find words for the exceptional beauty of it.. I mean, all those screenshots did no justice to RT at all. Turning RT off in Metro Exodus made it look like a 2009 game. The realism, the ambience of RTX justified every cent I paid for my 2060. I only play 6 or so games a year, I wanted the best visual experience and it was well worth it.
> 
> I understand your points, but It's important to note, there is a tiny minority of people like me, who fell in love with the RTX line of cards and my motto is: Once you RTX you never go back



Oh I get you completely and I agree it is progress and it does improve immersion. I just disagree with the cost (and timing) of it.


----------



## cucker tarlson (Mar 16, 2019)

frankly rtx is the only reason i'd buy exodus.
it' meh graphically otherwise.


----------



## moproblems99 (Mar 16, 2019)

cucker tarlson said:


> but I told you I'm just tired of hearing that we as educated consumers can divert the market.we're helpless.



But we can.  If they don't sell, something has to change.  It isn't a magic bullet (or a reality).


----------



## cucker tarlson (Mar 16, 2019)

moproblems99 said:


> But we can.  If they don't sell, something has to change.  It isn't a magic bullet (or a reality).


in reality all we might do is have them launch 7nm faster instead of dragging their feet again like they did with keeping pascal for 2.5 years.
I'll give you a like for being optimistic tho.


----------



## INSTG8R (Mar 16, 2019)

Neat all my wasted compute power might get used.


----------



## Nxodus (Mar 16, 2019)

Vayra86 said:


> Oh I get you completely and I agree it is progress and it does improve immersion. I just disagree with the cost (and timing) of it.



10 series cards were and are still too powerful for people to upgrade again after 2-3 years. They had to come up with something to keep sales numbers and shareholders happy. A bit pricey, but expecting cheaper and(!) better tech every new generation is an idealist utopian dream. Just look at AMD's R7. AMD is considered the master of cheap cards, and they are charging nGreedia prices. People seriously expected a 2080 killer for chump change.



cucker tarlson said:


> frankly rtx is the only reason i'd buy exodus.
> it' meh graphically otherwise.



There's lots of reasons to buy the game, lore, a beautifully crafted single-player storyline (a dying breed..)
Graphically, well I'm no expert, but I was blown away by reflections, textures, meshes, fur, flora, weather, especially rain, and various effects on your gasmask. Fog was gorgeous, especially in underground  levels.
The desert level though... I've never seen such a realistic desert ever in a game. I've been sweating, I could feel the sun and sand in my eyes.

Shadows look very dated without RT though


----------



## INSTG8R (Mar 16, 2019)

Nxodus said:


> 10 series cards were and are still too powerful for people to upgrade again after 2-3 years. They had to come up with something to keep sales numbers and share holders happy. A bit pricey, but expecting cheaper and(!) better tech every new generation is an idealist utopian dream. Just look at AMD's R7. AMD is considered the master of cheap cards, and they are charging nGreedia prices. People seriously expected a 2080 killer for chump change.


As for pricing here in Norway a VII is 7500kr a custom 2080 is 10k. I paid 7500kr for my Vega 6-8 months ago. So I’m a little sad about that but they are much cheaper here vs 2080s.


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> As for pricing here in Norway a VII is 7500kr a custom 2080 is 10k. I paid 7500kr for my Vega 6-8 months ago. So I’m a little sad about that but they are much cheaper here vs 2080s.


3500 for VII and 3100 for 2080 here.
been that way since fury x,vega was 1080Ti price.

and it seems 2080 is 7K in Norway,not 10k as you said.That makes it cheaper than VII too.
*https://prisguiden.no/produkt/nvidia-geforce-rtx-2080-339551*


----------



## bug (Mar 16, 2019)

Nice. So now that it has been done on AMD hardware, we have three pages of comments and nobody calls RTRT a "gimmick" or a "fad" anymore. Who could have seen that one coming?


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> 3500 for VII and 3100 for 2080 here.
> been that way since fury x,vega was 1080Ti price.
> 
> and it seems 2080 is 7K in Norway,not 10k as you said.That makes it cheaper than VII too.
> *https://prisguiden.no/produkt/nvidia-geforce-rtx-2080-339551*


Maybe you missed custom...I only see 2 custom that are the same price or cheaper and we’re talking low tier Gainward and low tier ASUS Armor


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> Maybe you missed custom...


no,I didn't

https://www.netonnet.no/art/datakomponenter/skjermkort/nvidia/palit-geforce-rtx2080-dual-8g/1006033.11111/?utm_source=prisguide&utm_medium=cpc&utm_term=1006033+- Palit Geforce RTX2080 Dual 8G&utm_campaign=prisguide_prisjamforelse&dclid=CjgKEAjwvbLkBRC3-aO30pu54mwSJAC2qLNMLHfjc_A30aI702GaRWusAsbPT3esrTlFj9xaO__pwPD_BwE

here's a triple fan one for 6900

https://www.komplett.no/product/111...ermkort/gainward-geforce-rtx-2080-triple-fan#


10K,eh ?


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> no,I didn't
> 
> https://www.netonnet.no/art/datakomponenter/skjermkort/nvidia/palit-geforce-rtx2080-dual-8g/1006033.11111/?utm_source=prisguide&utm_medium=cpc&utm_term=1006033+- Palit Geforce RTX2080 Dual 8G&utm_campaign=prisguide_prisjamforelse&dclid=CjgKEAjwvbLkBRC3-aO30pu54mwSJAC2qLNMLHfjc_A30aI702GaRWusAsbPT3esrTlFj9xaO__pwPD_BwE


Good, I saw those as well, but that's $35; I'd still take the VII with the extra VRAM that pairs with my Freesync 2 HDR monitor. My bottom line is I'm fine with VII's pricing here. Look at the quality ones you'd ACTUALLY buy, not the budget ones we can both cherry pick.


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> Good I saw those as well but that’s $35 I’d still take the VII with the extra VRAM that pairs with my Freesync 2 HDR monitor.


lol,you inflated the numbers by 40% for the nvidia card,then got caught lying,then said you saw the ones I posted but would buy VII anyway for the vram.
that's funny to me cause it wouldn't matter if you mentioned 2080's real price at the very beginning,not at all.
if you had to inflate it to 10k to make us see r7 in better light,I don't think you're completely chill with RVII price to be honest.


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> lol,you inflated the numbers by 40% for the nvidia card,then got caught lying,then said you saw the ones I posted but would buy VII anyway for the vram.
> that's funny to me.


Let’s look at a more complete list and see how much I really “inflated” it....
https://www.komplett.no/category/10412/datautstyr/pc-komponenter/skjermkort?nlevel=10000§28003§10412&cnet=Grafikkprosessorfabrikant_A03616 §NVIDIA&cnet=Grafikkprosessor_A00247 §NVIDIA GeForce RTX 2080
VII is 7500kr Period! 2080 not so much...


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> Let’s look at a more complete list and see how much I really “inflated” it....
> https://www.komplett.no/category/10412/datautstyr/pc-komponenter/skjermkort?nlevel=10000§28003§10412&cnet=Grafikkprosessorfabrikant_A03616 §NVIDIA&cnet=Grafikkprosessor_A00247 §NVIDIA GeForce RTX 2080


I think there's something you missed,and that's fair standards for comparison.
if you can get a triple fan aib 2080 for the same or lower price than Radeon VII,then they're not 40% apart.
you took the most grossly overpriced strix/clc versions and rounded up the 9k price to 10k while* 6900kr buys you the same thing *basically.


----------



## Vya Domus (Mar 16, 2019)

Vayra86 said:


> Yes. Been saying since day one. A hardware implementation that takes such a massive amount of die space is so grossly inefficient, simple economics will destroy it.



RTX is Nvidia's business; what they do with it is up to them. They may dedicate entire chips to it or do it exclusively in software; there is no requirement for any particular hardware implementation.

Regardless, with dedicated hardware or without, RTRT isn't feasible as a particularly useful effect right now; there is no way to get around that.


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> I think there's something you missed,and that's fair standards for comparison.
> if you can get a tiple fan aib 2080 for the same or lower price than Radeon VII,then they're not 40% apart.
> you took the most grossly overpriced strix/clc versions and rounded up the 9k price to 10k while* 6900k buys you the same thing *basically.


Funny most in the list are 8-9k not rounding up...


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> Funny most in the list are 8-9k not rounding up...


there's 9 aib models at 7.5k or lower.another 16 models under 9k.
those 7-7.5k palit/gainward cards are friggin amazing,had a 1080 sjs myself,awesome card,cool,quiet,oc'd and uv'd like a devil
once again,I don't know why you're doing this for any other reason than bias or insecurity.


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> there's 9 aib models at 7.5k or lower.another 16 models under 9k.
> once again,I don't know why you're doing this for any other reason than bias or insecurity.


You’re the one that wants to make this about bias. They are equal to or more expensive than the VII for the MAJORITY of available cards, full stop. 2-3 budget-line examples don’t change the fact that the VII is reference only, so one size/price fits all.


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> You’re the one that wants to make this about bias. They are equal too or more expensive than VII for the MAJORITY of available cards full stop. 2-3 Budget line examples  doesn’t change the fact that VII is reference only so one size/price fits all.


I wasn't the one that made up numbers,I just checked them cause 40% more for a card with the same msrp seemed way off.
so now a card that doesn't fit your 9K-10K class is too budget,eh?
maybe do some research on those 7K models,they're cooler and quieter than VII.
I'm having fun.


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> so now a card that doesn't fit your 9K-10K class is too budget,eh?
> maybe do some research on those 7K models,they're cooler and quieter than VII.


Budget card is budget, not a hard concept to grasp? You get what you pay for. Still, it doesn’t change the fact the majority of cards are more expensive than VII no matter how you’re trying to split hairs now. 
Let’s get back to Ray Tracing shall we? I don’t feel like arguing with the resident Nvidia cheerleader anymore.


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> Budget card is budget not a hard concept to grasp? You get what you pay for.  Stil, doesn’t change the fact the majority of cards are more expensive than VII no matter how you’re trying to split hairs now.
> Let’s get back to Ray Tracing shall we? I don’t feel like arguing with the resident Nvidia cheerleader anymore.


sorry,but you're the one that brought up pricing in the first place and are now resorting to name calling.


----------



## moproblems99 (Mar 16, 2019)

bug said:


> Nice. So now that it has been done on AMD hardware, we have three pages of comments and nobody calls RTRT a "gimmick" or a "fad" anymore. Who could have seen that one coming?



Well, there are at least two people who still don't believe in it.  @me and @Vayra86.  Can't speak for the others but TPU has a lot of bandwagoners.


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> sorry,but you're the one that brought up pricing in the first place.


And you being the resident NV cheerleader couldn’t help but latch on to it to support your team. Bottom line is still most 2080 are more expensive than VII let’s just stop there.


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> And you being the resident NV cheerleader couldn’t help but latch on to it to support your team. Bottom line is still most 2080 are more expensive than VII let’s just stop there.


if you count correcting your statements as "supporting my team"......


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> if you count correcting your statements as "supporting my team"......


Call it what you want. My point still stands.Just let me sort the list highest to lowest for clarification. 
https://www.komplett.no/category/10...eForce RTX 2080&hits=48&sort=Price:DESCENDING


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> Call it what you want. My point still stands.Just let me sort the list highest to lowest for clarification.
> https://www.komplett.no/category/10412/datautstyr/pc-komponenter/skjermkort?nlevel=10000§28003§10412&cnet=Grafikkprosessorfabrikant_A03616 §NVIDIA&cnet=Grafikkprosessor_A00247 §NVIDIA GeForce RTX 2080&hits=48&sort=Price:DESCENDING


okay big guy,there's no need for that,I told you it wouldn't matter if you said you prefer rvii to 2080 anyway.

ps for other users,you can use your mouse to scroll the list down


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> okay big guy,there's no need for that,I told you it wouldn't matter if you said you prefer rvii to 2080 anyway.
> 
> ps for other users,you can use your mouse to scroll the list down


And you prefer 2080 to VII but the numbers are still In my favour. Let’s just stop the back and forth.


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> And you prefer 2080 to VII but* the numbers are still In my favour*. Let’s just stop the back and forth.


I think you're either blind or obnoxiously lying.
and why "in your favor" ? are you,by any chance, an AMD cheerleader ?


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> I think you're either blind or obnoxiously lying.


Sorry but the price list is lying? Give it up, now you’re just being intentionally argumentative...


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> Sorry but the price list is lying? Give it up now you’re just trying being intentionally argumentative...


it isn't lying,and that's the whole thing.


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> it isn't lying,and that's the whole thing.


33 2080s 27 more expensive than VII. Where’s the lie?


----------



## cucker tarlson (Mar 16, 2019)

INSTG8R said:


> 33 2080s 27 more expensive than VII. Where’s the lie?


what am I gonna repeat myself five times for you to understand ?


----------



## INSTG8R (Mar 16, 2019)

cucker tarlson said:


> what am I gonna repeat myself five times for you to understand ?


No, but you’ll try....


----------



## rtwjunkie (Mar 16, 2019)

Can we please move back on topic?  I find this software ray tracing done by Crytek to be fantastic!  It really does make me think there may be more ways to crack this nut.  

If they can gain a little traction and it finds interest with a developer or two, I also think we could have a vhs/beta type competition that will take a few years to decide (by consumers) which will be the way forward for ray tracing in games.  I for one am happy I can sit this out and watch for now.


----------



## bug (Mar 16, 2019)

INSTG8R said:


> Sorry but the price list is lying? Give it up now you’re just trying being intentionally argumentative...


The price list isn't lying, but you comparing a video card that's only available in reference design to another card that in its reference design costs the same and then bringing custom models into the discussion was either you trying to convince yourself Radeon VII was the better pick (it could be, I don't know your needs) or just obvious flame bait.



rtwjunkie said:


> Can we please move back on topic?  I find this software ray tracing done by Crytek to be fantastic!  It really does make me think there may be more ways to crack this nut.
> 
> If they can gain a little traction and it finds interest with a developer or two, I also think we could have a vhs/beta type competition that will take a few years to decide (by consumers) which will be the way forward for ray tracing in games.  I for one am happy I can sit this out and watch for now.


Oh, sure, there are many ways to go about it. And it obviously won't take off without more widespread support. I'm going to make a rather out-of-place comparison here: Turing is like Gagarin's flight. It didn't mean that's how we were going to be flying from then on, but it was the moment when the idea was no longer something that would happen at some point; all of a sudden, it was there.


----------



## Recus (Mar 16, 2019)

No DXR or fallback layer. So this "ray tracing" method will be locked to CryEngine, while the rest will use the industry-standard DXR. 

Crytek convulsions...


----------



## M2B (Mar 16, 2019)

INSTG8R said:


> 33 2080s 27 more expensive than VII. Where’s the lie?



If you can comfortably find a 2080 at the same price as a Radeon VII, why would you even consider the higher prices?
Heck, even the shittiest 2080s are usually better than Radeon VII in terms of noise and cooling.


----------



## INSTG8R (Mar 16, 2019)

M2B said:


> If you can comfortably find a 2080 at the same price as a Radeon VII, why would you even consider the higher prices?
> Heck, even the shittiest 2080s are usually better than Radeon VII in terms of noise and cooling.


Horses for courses. Let’s just get back to CryEngine bringing Ray Tracing to the masses.


----------



## Vayra86 (Mar 16, 2019)

bug said:


> Oh, sure there are many ways to go about it. And it obviously won't take off without more widespread support. I'm going to make a rather out of place comparison here, the Turing is like Gagarin's flight. It didn't mean that's how we're going to be flying from then on, but it was the moment when the idea was no longer something that would happen at some point, but it was there and then all of a sudden.



I can totally agree with that view on RTX, for sure!



Recus said:


> No DXR or fallback layer. So this "ray tracing" method will be locked on Cryengine. While rest will use industry standard DXR.
> 
> Crytek convulsions...



Why would you need a software fallback for a software solution? Also, "industry standard" is a bit of a stretch when it is tied to an API with low adoption and is implemented in half a handful of games...


----------



## lexluthermiester (Mar 16, 2019)

GinoLatino said:


> unless the engine has RT implementation from the get go of course.


Which it likely will. They'd be stupid not to use a resource that's readily available.


----------



## AmioriK (Mar 17, 2019)

I just thought I'd copy something I posted in another thread, as it's relevant to this discussion regarding the CryEngine RT approach.



> While that is nice, honestly I don't expect it to perform anywhere near what Turing 20-series can do with the ASIC RT cores. It runs well on Vega because those GPUs have a lot of shader processors that aren't doing a whole lot when gaming (Vega SP's are underutilised due to waiting for other parts of the pipeline to finish, i.e geometry). Filling them up with RT ops while they wait will result in Vega doing quite well with GPGPU RT on the shaders. Vega also can potentially use Rapid Packed Math FP16X2 to accelerate that process. I don't think it will run amazingly well on Pascal or the baby Turings (TU116) because these GPUs are already close to peak shader utilisation, I think.
> 
> Well optimised RT code running on GPGPU shaders is great of course, as its vendor and API agnostic, but dedicated HW is going to perform better at the same IQ or have superior IQ at the same frame rate, I think. NVIDIA will be working pretty hard to optimise the driver and code for the RT cores, too. Also: NVIDIA bet a lot on dedicated fixed function units to do the BVH part of ray tracing. I think, if the same or better performance could be achieved with throwing more GPGPU CUDA cores at the problem and running that code on those instead, they would have done it. Just my thoughts.



These are just my 2 cents on the ray tracing. Maybe I'm wrong and big Turing implementation is innately inferior to a GPU approach... Time will tell.

I didn't, and won't, invest in 20 series GPU because I feel it isn't worth it yet.  But I do think Nvidia will improve dedicated rtx cores.
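To make the "RT as generic shader work" idea above concrete, here is a toy sketch (my own illustration, nothing from Crytek's or NVIDIA's actual code): the per-ray arithmetic a compute shader would run with no fixed-function RT hardware at all, shown as a single ray-sphere intersection.

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None.

    Solves |o + t*d - c|^2 = r^2, a quadratic in t -- the same
    arithmetic a compute shader evaluates per ray, using only
    generic ALUs (no dedicated RT cores required).
    """
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None          # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# A ray from the origin along +z toward a unit sphere at z = 5:
t = ray_sphere_t((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)  # t == 4.0
```

This is exactly the kind of independent, data-parallel work that could fill otherwise idle shader processors on a wide GPU like Vega, which is the utilisation argument made above.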


----------



## cucker tarlson (Mar 17, 2019)

we'll see the results and we'll judge it then.
so far all we can do with old cards in real time is tray racing


----------



## Xuper (Mar 17, 2019)

OMG!!! What the hell are you doing ?  This Crytek demo is not big GUN that you want to defend RTX! *GET OVER IT! *Two Pages wasted for nonsense posts ! MOD Please Remove All non-related Posts , Thanks


----------



## Nxodus (Mar 17, 2019)

Xuper said:


> OMG!!! What the hell are you doing ?  This Crytek demo is not big GUN that you want to defend RTX! *GET OVER IT! *Two Pages wasted for nonsense posts ! MOD Please Remove All non-related Posts , Thanks



Mods are not your personal army.


----------



## Stefem (Mar 17, 2019)

Nxodus said:


> I'm one of those idiots who bought an RTX card for RT. I don't care about the future, I wanted to enjoy RT now! And Metro Exodus, man, I gotta say, RTX blew my mind away. I can't even find words for the exceptional beauty of it.. I mean, all those screenshots did no justice to RT at all. Turning RT off in Metro Exodus made it look like a 2009 game. The realism, the ambience of RTX justified every cent I paid for my 2060. I only play 6 or so games a year, I wanted the best visual experience and it was well worth it.
> 
> I understand your points, but It's important to note, there is a tiny minority of people like me, who fell in love with the RTX line of cards and my motto is: Once you RTX you never go back


I don't agree with him on everything to be honest. I'm not yet an RTX owner, but I wouldn't worry: CryTek itself pointed out that once this is integrated in their engine they will leverage the advantages of the newer hardware:

"However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12."

We know almost nothing about the CryTek implementation, they may be using an SDF volumetric representation in place of standard geometry for example, but it's been known for almost a full year that some developers were working on using ray tracing on old hardware (even consoles); that's what Sebastian Aaltonen said last summer, for example.

https://twitter.com/i/web/status/1032283494670577664
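The SDF guess above matters because a signed distance field lets you ray trace by sphere tracing (ray marching) instead of triangle intersection, which needs no DXR-style acceleration structure at all. A minimal sketch, purely illustrative since we don't know Crytek's actual method:

```python
import math

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, t_max=100.0):
    """March a ray through a signed distance field.

    At each step the SDF gives the distance to the nearest surface,
    which is always a safe amount to advance by -- no triangle BVH
    needed, one way an engine could ray trace without dedicated
    hardware or a DXR acceleration structure.
    """
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(p)
        if dist < eps:
            return t             # close enough: surface hit
        t += dist
        if t > t_max:
            break
    return None                  # no hit within the step/distance budget

# Hypothetical test scene: the SDF of a unit sphere at z = 5.
sdf = lambda p: math.sqrt(p[0]**2 + p[1]**2 + (p[2] - 5.0)**2) - 1.0
t = sphere_trace((0, 0, 0), (0, 0, 1), sdf)  # t == 4.0 (hit after 2 steps)
```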


AmioriK said:


> I just thought I'd copy something I posted in another thread as it's relevant to this discussion Regarding the cryengine RT approach.
> 
> 
> 
> ...


FP16 can be used to compute BVH traversal (something even Vega, or the smaller Turings without dedicated hardware, can benefit from), but RT cores are still much faster and they also offload the shader cores, which can then work on other stuff; even CryTek aims to take advantage of the newer hardware once they start actually implementing the tech in their engine.
We don't know their exact implementation, as they are refraining from giving any details and sidestepping any technical questions, but they are planning to slowly release details. I guess that is to draw attention and generate hype; sadly CryTek has been struggling for years and the advertising may help them. I loved how they pushed things forward with Crysis.
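For anyone wondering what "BVH traversal" work actually consists of: the inner loop is essentially repeated ray-vs-box slab tests, which Turing's RT cores run in fixed function and a shader-based approach must run on generic ALUs. A rough sketch of one such test (my own illustration, not any vendor's code):

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray hit the axis-aligned box?

    This is the operation a BVH traversal loop repeats at every
    node -- cheap on its own, but executed millions of times per
    frame, which is why dedicated traversal hardware pays off.
    inv_dir holds per-axis reciprocals of the ray direction
    (use a large finite value to approximate 1/0 for axis-aligned rays).
    """
    t_near, t_far = 0.0, float("inf")
    for o, inv_d, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv_d
        t1 = (hi - o) * inv_d
        if t0 > t1:
            t0, t1 = t1, t0      # order the slab entry/exit distances
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
    return t_near <= t_far

# Ray along +x, box spanning x in [2, 3]: a hit.
hit = ray_hits_aabb((0, 0, 0), (1.0, 1e9, 1e9), (2, -1, -1), (3, 1, 1))
```

The multiply/compare pattern here is also why FP16 packed math can help on hardware without RT cores: two half-precision lanes per operation on this kind of arithmetic.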


----------



## rtwjunkie (Mar 17, 2019)

Xuper said:


> OMG!!! What the hell are you doing ?  This Crytek demo is not big GUN that you want to defend RTX! *GET OVER IT! *Two Pages wasted for nonsense posts ! MOD Please Remove All non-related Posts , Thanks


“RTX” is an Nvidia term for their implementation of ray tracing.  So this thread is exactly about ray tracing being done by someone other than Nvidia.  Keep in mind, this is not Nvidia exclusive technology.


----------



## Xuper (Mar 17, 2019)

rtwjunkie said:


> “RTX” is an Nvidia term for their implementation of ray tracing.  So this thread is exactly about ray tracing being done by someone other than Nvidia.  Keep in mind, this is not Nvidia exclusive technology.


Whatever but Topic about Radeon VII vs 2080 needs to be Off.


----------



## medi01 (Mar 17, 2019)

Soo, if generic CUs can do that, why waste silicon on dedicated?



Recus said:


> While rest will use *industry standard* DXR.


----------



## Vayra86 (Mar 17, 2019)

Xuper said:


> Whatever but Topic about Radeon VII vs 2080 needs to be Off.



I think we're past that, so stop digging it up if you want it like that. There is a Report button, use it. Posting about it is just as offtopic as the things you complain about.


----------



## Stefem (Mar 17, 2019)

medi01 said:


> Soo, if generic CUs can do that, why waste silicon on dedicated?


Ahem... being much faster could be a reason, perhaps? That's why graphics cards were born, although everything was possible in software. Why mine on ASICs if it can be done on a CPU, why use dedicated encoding and decoding for video, why a tessellator, why texture mapping units, why ROPs... there are plenty of examples and they all have the same answer.


----------



## cucker tarlson (Mar 17, 2019)

medi01 said:


> Soo, if generic CUs can do that, why waste silicon on dedicated?


you mean why didn't they just make a 6000 cuda card on that 750mm2 ?
maybe extra cuda cores draw more power than tensor/rt cores,or they'd have more production issues with such a card.they'd need a complete die redesign too.with 2080ti it's the same 88 rop/11gb configuration,with tensor and rt cores added to them.
that is a good question I'd like to know the answer for too.
In the end I think they decided that economically they're better off with rt-specific hardware and software rather than brute force and more cuda.
maybe they just wanted to use a proprietary solution (dxr) cause they're nvidia.


----------



## medi01 (Mar 17, 2019)

Stefem said:


> Hem... being much faster could be a reason, perhaps?


Well, perhaps, but do you see that "much faster" is missing in the OP demo?




cucker tarlson said:


> maybe they just wanted to use a proprietary solution (dxr) cause they're nvidia.


Knowing Huang's habits, hell yeah; on the other hand, DXR made it into DX12. 
Makes sense only if he hoped competitors would not bother implementing it like that/it would take them long to catch up (years of research are behind it).
Notably, this very demo doesn't use it, does it?


----------



## cucker tarlson (Mar 17, 2019)

medi01 said:


> Knowing Huang's habbits, hell yeah, on the other hand DXR made it into DX12.
> Makes sense only if he hoped competitors would not bother implementing it like that/it would take them long to catch up (years of research are behind it).
> Notably, this very demo doesn't use it, does it?


seems to me it doesn't, at least for now.










look at this video: titan v only delivers 27 fps where 2080ti delivers 42. seems like the more-cuda-instead-of-dedicated-rt/tensor-cores approach would still be way less efficient


----------



## Stefem (Mar 17, 2019)

medi01 said:


> Well, perhaps, but do you see that "much faster" missing in the OP demo?
> 
> 
> 
> ...


I don't see how the demo disproves what I said; they didn't compare performance, and (as I've already posted above) Crytek said the integration of this tech in their engine will benefit from the enhancements delivered by newer hardware using DX12 and Vulkan.
We know almost nothing about what they've done; they carefully avoided giving out any detail and they are dodging technical questions, but they said they will gradually release some info, so the only thing to do is wait for actual details.
As I've said in a previous post, there are other developers working to use RT on old hardware and consoles, and there are even games out now. Look at Claybook, for example: it uses raytracing for primary + AO + shadow rays and no one talks about it. I think CryTek is just way better at generating flame debates.


----------



## Vya Domus (Mar 17, 2019)

Ray traced elements have been used for years for approximating all sorts of effects such as global illumination, there was never the case of needing a particular hardware/software framework for these things to work.

GPUs these days are no longer GPUs, they are compute accelerators with some dedicated graphics hardware strapped on. Learning GPGPU/OpenCL/CUDA made me realize how many hardware capabilities and features have been stuffed inside these things that have none or very little relevance to graphics workloads. There is a reason why Microsoft made no particular hardware requirements to DXR, it may very well be the case that future GPUs will go down the path of doing RTRT under the form of generic compute workloads rather than strapping yet another dedicated ASIC on these already clogged architectures.


----------



## bigfurrymonster (Mar 18, 2019)

Vayra86 said:


> Yes. Been saying since day one. .



I think the killer solution would be a multi-chip GPU design like what they use for Ryzen 3000 (IO controller + Zen core chiplets).

You could have a small "RT" coprocessor and a GPU on separate dies connected by Infinity Fabric.
The small RT coprocessor's cost would be negligible compared to Nvidia's monolithic approach.


----------



## medi01 (Mar 18, 2019)

cucker tarlson said:


> look at this video: titan v only delivers 27 fps where the 2080 ti delivers 42. seems like the more-cuda-instead-of-dedicated-rt/tensor-cores approach would still be a way less efficient one


In that particular way of doing RT rendering, yes.
Doesn't necessarily mean anything about what Crytek did. 



Stefem said:


> I don't see how the demo disproves what I said; they didn't compare performance, and (as I've already posted above) Crytek said the integration of this tech in their engine will benefit from the enhancements delivered by newer hardware using DX12 and Vulkan.


They didn't have to compare performance, they are selling the engine, not non-RTX GPUs. 

Vulkan doesn't have anything like DXR (a very specific set of instructions to do certain things with rays).



Stefem said:


> We know almost nothing about what they've done


Well, why? They said that they used SVOGI, or sparse voxel octree global illumination.


----------



## PanicLake (Mar 18, 2019)

I just realized: RTX is the new "G-Sync"...


----------



## bug (Mar 18, 2019)

medi01 said:


> Soo, if generic CUs can do that, why waste silicon on dedicated?


Gee, I don't know. Why do we waste silicon on 3D in general if we can do 3D in software?
Just look at what the tensor cores do for some compute workloads; that's why specialized silicon is needed.


----------



## INSTG8R (Mar 18, 2019)

GinoLatino said:


> I just realized: RTX is the new "G-Sync"...


PhysX 2.0


----------



## londiste (Mar 18, 2019)

Recus said:


> No DXR or fallback layer. So this "ray tracing" method will be locked on Cryengine. While rest will use industry standard DXR.


https://www.cryengine.com/news/cryt...-time-ray-tracing-demonstration-for-cryengine
Total Illumination is their voxel GI solution; in principle it is halfway towards raytracing, and they have obviously expanded the feature by quite a bit.
They said they will use hardware acceleration if they can. Vulkan and DX12 imply using the VK raytracing extensions and DXR, which today means it does include RTX support. Or, technically, the other way around: RTX supports the APIs that CryTek uses.





			
https://www.cryengine.com/news/crytek-releases-neon-noir-a-real-time-ray-tracing-demonstration-for-cryengine said:
			
		

> However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12.





medi01 said:


> Soo, if generic CUs can do that, why waste silicon on dedicated?


Efficiency. Dedicated hardware can do BVH traversal faster. Less resources, less power.


medi01 said:


> In that particular way of doing RT rendering, yes.
> Doesn't necessarily mean anything about what Crytek did.
> ...
> Vulkan doesn't have anything like DXR (a very specific set of instructions to do certain things with rays).


Vulkan has VK_NVX_raytracing extensions. Both DXR and these extensions provide access to the RT cores that do BVH traversal, which is a fairly central operation in most raytracing implementations.
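For anyone wondering what that operation actually is: the heart of BVH traversal is the ray/AABB "slab test", run at every visited node. Here's a rough Python sketch of it (my own illustration with made-up names, not code from any driver or engine):

```python
def ray_hits_box(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray origin + t*dir (t >= 0) hit this AABB?
    inv_dir holds 1/dir per axis, precomputed once per ray."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# During traversal, only children whose boxes pass this test get
# descended into - millions of times per frame, which is why a
# fixed-function unit for it pays off.
```

A toy version, obviously; the point is just that it's a tiny, branchy, memory-bound loop that dedicated hardware can pipeline far better than shader cores can.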


----------



## Vayra86 (Mar 18, 2019)

GinoLatino said:


> I just realized: RTX is the new "G-Sync"...



That may turn out to be very accurate indeed. It will do it 'a little better' at a tremendous cost.


----------



## medi01 (Mar 18, 2019)

bug said:


> Why do we waste silicon on 3D in general if we can do 3D in software?


Because it is vastly faster. Something that is apparently not the case here with ray tracing, is it?
NV went with "look, you need (my) specialized hardware for RT reflections/shadows!".
Crytek called BS.
Let's twist it somehow, shall we?



londiste said:


> Vulkan has VK_NVX_raytracing extensions.


Good for whatever NVX stands for. Oh wait, isn't it the thing that killed OpenGL? Hmm...



Vayra86 said:


> It will do it 'a little better'


Good that you put it into quotes. I hope you also meant it.


----------



## bug (Mar 18, 2019)

Vayra86 said:


> That may turn out to be very accurate indeed. It will do it 'a little better' at a tremendous cost.


The thing is, the DXR part of RTX is in DX now. So there are a few parties at least behind that. Then again, who knows what DXR 2.0 or DXR 3.0 will look like?



medi01 said:


> Because it is vastly faster. _Something that is apparently not the case here with ray tracing,_ is it?


How do you figure? I have seen no numbers or technical details about what Crytek did, yet you're assuming performance is about the same?


----------



## medi01 (Mar 18, 2019)

bug said:


> How do you figure? I have seen no numbers or technical details


You see, I didn't need third-party tests to figure out that hardware rendering *wiped the floor* with software rendering back when GPUs became a thing.


----------



## bug (Mar 18, 2019)

medi01 said:


> You see, I didn't need third-party tests to figure out that hardware rendering *wiped the floor* with software rendering back when GPUs became a thing.


Maybe so, but now you're assuming a solution without dedicated hardware performs about the same as a solution that has said hardware. Which is quite the opposite situation.


----------



## londiste (Mar 18, 2019)

Vayra86 said:


> Yes. Been saying since day one. A hardware implementation that takes such a massive amount of die space is so grossly inefficient, simple economics will destroy it. If not with Turing then later down the line. Its just not viable. Sales numbers currently only underline that sentiment. I'm not the only one frowning at this; already with the first gen and a meagre implementation we're looking at a major price bump because the die is simply bigger. The market ain't paying it and devs will not spend time on it as a result. Another aspect: I'm not looking to sell my soul to Nvidia's overpriced proprietary bullshit, I'm not paying for inefficiency. Its been the reason I've bought Nvidia the past few generations... they were _more efficient. _Their wizardry for example with VRAM, and balancing out (most) GPUs in the stack so well is quite something. Turing is like a 180 degree turn.
> 
> This, however... yes. Simply yes. Attacking the performance problem from the angle of a software-based implementation that can scale across the _entire GPU_ instead of just a part of it, while the _entire GPU is also available_ should you want the performance elsewhere. Even if this runs at 5 FPS today in realtime on a Vega 56, its already more promising than dedicated hardware. This is the only way to avoid a PhysX situation. RT needs widespread adoption to get the content to go along. If I can see a poorly running glimpse of my RT future on a low-end GPU, this will catch on, and it will be an immense incentive for people to upgrade, and keep upgrading. Thát is _viable on a marketplace._
> 
> ...


Sorry for digging up an old post but I think you are off base with this.
- Die space cost for RT cores is 10-15%, probably less. I am not sure if that is exactly massive.
- RTX is proprietary, DXR is not, Vulkan extensions may or may not turn out to be proprietary depending on what route the other IHVs take.
- Software-based implementation - or in this case, implementation running on general-purpose hardware - is simply not as efficient as dedicated hardware. So far everything points at this being the case here, whether you take Nvidia's inflated marketing numbers or actual tests by users. This shows even with production applications and Turing vs Titan V. RT cores simply do make a big difference in performance.
- Quality of the demo is a different topic; CryTek is selling the engine, so it needs to look beautiful. This one is probably best compared to the Star Wars demo. Metro is an artistic problem rather than a technical one.

CryTek said this is on the release roadmap in 2019 so all the performance aspects should be testable eventually. I would expect them to talk more about it during GDC as well.


----------



## medi01 (Mar 18, 2019)

bug said:


> ...now you're assuming a solution without dedicated hardware performs about the same...



I'm not assuming, I'm seeing it.



londiste said:


> This one is probably best compared to the Star Wars demo.


Why would the mentioned demo, focusing on RTX, not be made to look good?


----------



## Stefem (Mar 18, 2019)

medi01 said:


> They didn't have to compare performance, they are selling the engine, not non-RTX GPUs.


I've probably misunderstood your question; didn't you ask if dedicated hardware being faster was missing in the CryTek demo?



medi01 said:


> Vulkan doesn't have anything like DXR (a very specific set of instructions to do certain things with rays).


Yes it does. NVIDIA proposed several extensions for raytracing that have since been integrated into the API, with some contribution from both Intel and AMD.
https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VK_NV_ray_tracing



medi01 said:


> Well, why? They said that they used SVOGI, or sparse voxel octree global illumination.


That's not raytracing, and it's nothing new for them or for others, as there have been games out for years that make use of voxels for GI. And, BTW, it's not the solution used to render reflections (the only thing they claim is raytraced).


----------



## bug (Mar 18, 2019)

medi01 said:


> I'm not assuming, I'm seeing it.



You're seeing what? A running demo that has no counterpart running on said dedicated hardware?


----------



## medi01 (Mar 18, 2019)

Stefem said:


> I've probably misunderstood your question; didn't you ask if dedicated hardware being faster was missing in the CryTek demo?


No. Puzzled by your interpretation.



Stefem said:


> NVIDIA proposed several extensions for raytracing


Vendors are free to create extensions.



Stefem said:


> that have since been integrated into the API


Would you mind linking it? And "integrated" should not be something with a vendor keyword in it.




Stefem said:


> That's not raytracing


It is ray tracing.
Voxels are much easier to check for intersections than triangles, obviously that's the trick.
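The claim, roughly, in a toy Python sketch (my own illustration, nothing to do with CRYENGINE's actual code): against a voxel grid you can just march the ray and do cheap cell lookups, instead of intersecting individual triangles:

```python
def first_hit_voxel(origin, direction, filled, step=0.05, max_t=50.0):
    """March along the ray in small fixed steps; return the first
    filled voxel cell (integer coords), or None. 'filled' is a set
    of cells. Positive coordinates assumed; a real engine would use
    DDA traversal and a sparse octree, not fixed steps."""
    t = 0.0
    while t < max_t:
        cell = tuple(int(origin[i] + direction[i] * t) for i in range(3))
        if cell in filled:
            return cell
        t += step
    return None
```

Each step is a constant-time lookup, which is the "trick": the cost scales with ray length and grid resolution, not with scene polygon count.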



Stefem said:


> games out for years that make use of voxels


Not for RT purposes.



bug said:


> A running demo that has no counterpart running on said dedicated hardware?


It is using a different algorithm to do RT.
You cannot have a "counterpart" as in a direct copy of the exact same demo utilizing voxels.

Hardware-accelerated 3D was hands down faster and easy to spot; one didn't need the "exact same engine" to see it.
That is not the case here.


----------



## Stefem (Mar 18, 2019)

medi01 said:


> I'm not assuming, I'm seeing it.


I'm sorry, but you can't, as you don't even know what the engine is doing or how; you don't know how deep the reflections go, you don't know what kind of primitives they are using...
For clarity (even if it's been well known from day one), RT cores can't accelerate traversal with custom primitives (they work with triangles), but to be honest they don't need to: raytracing on GPUs has problems mostly with triangles, as the horrible resulting access pattern skyrockets cache misses. Here you can read what an engineer (a quite famous one) has done: a scene with a complex fractal-defined structure made of 48 million spheres (perfect spheres, there's no triangle), with shadows and reflections that go 5 levels deep (a reflection of a reflection...), running at 173 FPS on an RTX 2080 Ti. The demo is open source and can be downloaded by everyone: https://devblogs.nvidia.com/my-first-ray-tracing-demo/



medi01 said:


> Why would the mentioned demo, focusing on RTX, not be made to look good?


That's exactly what he said: compare it to the Star Wars demo (demo vs demo) and not to a game.



medi01 said:


> Vendors are free to create extensions.
> 
> Would you mind linking it? And "integrated" should not be something with a vendor keyword in it.


Are you trolling or have problems reading? The link was there, but you deleted it from the quoted text... and there you can also read the names of the respective AMD and Intel engineers.



medi01 said:


> It is ray tracing.
> Voxels are much easier to check for intersections than triangles, obviously that's the trick.


Technically SVOGI traces cones, not rays, and has its own limitations; that's why it's considered a different thing from raytracing. But perhaps we should first define what we mean when we use that word, as the process of casting rays is used for a plethora of purposes, from physics simulation to bullet trajectory and hit detection, and even for screen-space effects.


----------



## medi01 (Mar 18, 2019)

Stefem said:


> I'm sorry, but you can't, as you don't even know what the engine is doing or how


I need to know how it is working inside, to see reflections? Good one.



Stefem said:


> the demo is open source


That's interesting, although, not relevant.



Stefem said:


> That's exactly what he said: compare it to the Star Wars demo (demo vs demo) and not to a game.


That's what I compared it to. I do not play or even own that Star Wars game.



Stefem said:


> Are you trolling or have problems reading?


No, ARE YOU TROLLING OR HAVE PROBLEMS F*CKING READING?
Maybe it's easier if it is in red? Mkay, let me try:

*And "integrated" should not be something with vendor keyword in it. *



Stefem said:


> traces cones, not rays





Stefem said:


> perhaps we should first define what we mean when we use that word


It is pretty clear what *you* mean, and it has a name: *No True Scotsman.*


----------



## bug (Mar 18, 2019)

medi01 said:


> It is using a different algorithm to do RT.
> You cannot have a "counterpart" as in a direct copy of the exact same demo utilizing voxels.



And once again: Crytek managed to do RTRT using just general-purpose hardware. We don't know how they did it, whether they hacked something or not; we don't know details like poly count or the number of rays simulated. And somehow from all that you drew the conclusion that specialized hardware is not needed.
And I'm not saying what Nvidia did was the Holy Grail of RTRT and anything different is doomed to fail. I'm just amazed at how your mind works.


----------



## medi01 (Mar 18, 2019)

@bug


bug said:


> details like poly count


Poly count of what? Just how ridiculous can green BH-n get, dear God...

Right here, right now, Crytek has rolled out a demo with all the RT goodness that nVidia claimed needed specialized cores.
This alone is enough for RTX, which had already halved NVDA's stock price, to shift into "it's dead, Jim" territory.

"But muh NVDA's stuff could be fasta!" - it sure could be, but if one literally can't *see the difference with naked eyes*, it's game over.


----------



## Vayra86 (Mar 18, 2019)

londiste said:


> Sorry for digging up an old post but I think you are off base with this.
> - Die space cost for RT cores is 10-15%, probably less. I am not sure if that is exactly massive.
> - RTX is proprietary, DXR is not, Vulkan extensions may or may not turn out to be proprietary depending on what route the other IHVs take.
> - Software-based implementation - or in this case, implementation running on general-purpose hardware - is simply not as efficient as dedicated hardware. So far everything points at this being the case here, whether you take Nvidia's inflated marketing numbers or actual tests by users. This shows even with production applications and Turing vs Titan V. RT cores simply do make a big difference in performance.
> ...



Understood, and I know just as little about what RT will look like in the future. What I do know is that there are multiple ways to do it architecturally. Right now, Nvidia is _already_ using those RT cores as an add-on to the shader itself. I'd like to see it go one step further: integration in a way that the hardware is completely programmable for many things that might be RT or might be something else. In the end that is what has happened before, and given the way RT needs to be integral to almost everything that happens graphically, I think that is a sensible approach. Perhaps these cores can double up for physics calculations, for example? There are quite a few other predictable workloads to put there, like post-processing steps.

The only reason I don't believe in Turing's approach is the economy of it. This is a lot of die space to reserve and it sure is more than 15%. Because to accommodate the new featureset, L2 cache has also been expanded, for example. You can see in the relative performance to Pascal that this is also a sacrifice and a bonus depending entirely on the game in question. Turing cards hop around their Pascal equivalents everywhere by quite a margin in quite a few games. And realistically what you need to do is look at relative performance to Pascal related to die sizes. Thát is truly what you need in terms of space. Don't forget this was also a little shrink.

Let's take the 1080 vs 2060; Realistically (performance wise) we should be using a 1070ti; but let's cross that off versus the new node to get the ballpark idea.
314 vs 445mm²
29.43%; let's make it 25%.

Now count that versus what they are actually doing with this space: a few limited effects that _still_ harm FPS.
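Spelling the arithmetic out (numbers from above; the drop to 25% is just a hand-wavy allowance for the node difference):

```python
gp104_mm2 = 314  # GTX 1080 die size
tu106_mm2 = 445  # RTX 2060 die size
# Fraction of the Turing die that is "extra" area versus GP104
extra_fraction = (tu106_mm2 - gp104_mm2) / tu106_mm2
print(f"extra area: {extra_fraction:.2%}")  # ~29.4% of the Turing die
```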


----------



## Fluffmeister (Mar 18, 2019)

bug said:


> And once again: Crytek managed to do RTRT using just general-purpose hardware. We don't know how they did it, whether they hacked something or not; we don't know details like poly count or the number of rays simulated. And somehow from all that you drew the conclusion that specialized hardware is not needed.
> And I'm not saying what Nvidia did was the Holy Grail of RTRT and anything different is doomed to fail. I'm just amazed at how your mind works.



Yeah it's not perfect for sure, the LOD takes a nose dive on the reflections:





So they are cheating too, hey ho, looks pretty enough anyway.


----------



## bug (Mar 18, 2019)

Fluffmeister said:


> Yeah it's not perfect for sure, the LOD takes a nose dive on the reflections:
> 
> 
> 
> ...


I believe Nvidia just extended DXR support to Pascal. According to them (to be taken with a grain of salt), without RT cores you get 3x less performance. It's probably what Vega has to deal with as well.

Though to be fair, I will reiterate what I said in defense of DLSS: do you really have the time to look at those details up close when playing a game? Sure, we want games to look as lifelike as possible, but is it really worth using twice the computation power in exchange for 5 or 10% more realism?


----------



## Fluffmeister (Mar 18, 2019)

Agreed. Fact is, with realistic lighting they might actually look worse. We are pissing in the rain either way; CryEngine doesn't get a lot of love from devs as it stands anyway.


----------



## londiste (Mar 19, 2019)

Vayra86 said:


> And realistically what you need to do is look at relative performance to Pascal related to die sizes. Thát is truly what you need in terms of space. Don't forget this was also a little shrink.
> 
> Let's take the 1080 vs 2060; Realistically (performance wise) we should be using a 1070ti; but let's cross that off versus the new node to get the ballpark idea.
> 314 vs 445mm²
> ...


It is more than 25-30%; GP104 has 4 more SMs (2 Pascal SMs being equal to 4 Turing SMs). However, most of it is taken up by stuff added in Volta, not RT cores and a couple of other minor additions in Turing. I tried theorycrafting on this: reduced to the same SP count, a Volta SM is about 45% bigger than Pascal's, and Turing's is 10% bigger than Volta's and 15% bigger than Pascal's. A lot of the extra compute stuff that came in with Volta takes up a lot of space. Unsure how much of this is Tensor cores - that may be the real space hog here. I would still say RT cores and supporting logic are 10-15% over Pascal, probably at the lower end of that scale.


----------



## Super XP (Mar 19, 2019)

This is great. 
Another Ray Tracing solution.


----------



## AmioriK (Mar 19, 2019)

londiste said:


> Sorry for digging up an old post but I think you are off base with this.
> - Die space cost for RT cores is 10-15%, probably less. I am not sure if that is exactly massive.


the RT cores are possibly quite small, as they can be optimised EXACTLY for BVH.

Look here: https://misdake.github.io/ChipAnnotationViewer/view.html?map=TU106&commentId=470099401

Take it with a grain of salt - it is not confirmed - but if that is the RT core, they are not huge at all. It is actually PHYSICALLY smaller than a single Turing SM containing 64 CUDA cores, but will most certainly deliver significantly more ray tracing performance for the area used. Keep in mind that block seems to contain other functions too.

edit: screencap for convenience for mobile users; the chip is TU106


----------



## Super XP (Mar 26, 2019)

Fluffmeister said:


> Yeah it's not perfect for sure, the LOD takes a nose dive on the reflections:
> 
> 
> 
> ...


For slightly moving water, disturbed by the drone's fans, that is actually not bad.


----------



## goodeedidid (Mar 29, 2019)

Warsaw said:


> ++++++++1 I agree 99999%! I want graphics to push hardware again and be in awe with it! I even wrote a thank you message to Crytek when I noticed they responded to one of the comments in the YouTube video. I want a new Crysis and SWEET SWEET graphics to make our machines crawl. Never before and since then has a game made me just want to look at and appreciate the visuals the same that Crysis did. Just blew me away


You know, people have to work to make those games - salaries, holidays, teams, managers. Games don't just appear out of nowhere...


----------



## KrachB00Mente (Mar 29, 2019)

Tomgang said:


> That is all great for developers and such. But come on Crytek, give us the game that started it all and showed us what the cryengine cut really do. I am off cause talking about Crysis. I want a Crysis 4 that shows the Cryengine newest advantage with ray-tracing, DX12 and of cause the glories graphics the games are so known for and make the GPU´s sweet once again.
> 
> Please Crytek. Im tired of demos after Demos.  I want crysis


Crysis cut a lot of corners, like faking volumetric lighting etc., to look like it did. That's why the performance scales really badly on modern machines.


----------



## Warsaw (Apr 5, 2019)

goodeedidid said:


> You know that people have to work to make those games, get salaries and holidays, teams, managers. Games don't just appear out of nowhere...


Huh? Kind of an odd thing to comment. I never said anything about demanding a game, or discussing why they aren't making one if they are not financially situated. I'm just so excited for them to make another game, when/if they do. That's all.


----------



## Deleted member 24505 (Apr 14, 2019)

Tomgang said:


> That is all great for developers and such. But come on Crytek, give us the game that started it all and showed us what the cryengine cut really do. I am off cause talking about Crysis. I want a Crysis 4 that shows the Cryengine newest advantage with ray-tracing, DX12 and of cause the glories graphics the games are so known for and make the GPU´s sweet once again.
> 
> Please Crytek. Im tired of demos after Demos.  I want crysis



Actually, the game that started it all was I, Robot, which was the first game to use polygon graphics, which all games use now.


----------



## londiste (May 8, 2019)

Cryengine has published an interview about this demo:








CRYENGINE | How we made Neon Noir - Ray Traced Reflections in CRYENGINE and more!

> Ahead of GDC 2019, we revealed Neon Noir, a research and development project showcasing real-time mesh ray traced reflections and refractions created with an advanced new version of CRYENGINE's Total Illumination real-time lighting solution. Needless to say, we did receive a lot of questions...

www.cryengine.com


----------



## Vayra86 (May 8, 2019)

londiste said:


> Cryengine has published an interview about this demo:
> 
> 
> 
> ...



Very interesting read. He also touches on the bullet casings.


----------



## londiste (May 8, 2019)

Short version: lower LOD, as well as a cutoff from RT to voxel reflections based on material roughness.


----------



## Vayra86 (May 8, 2019)

What's more interesting about that is the amount of customization they can apply to a scene to scale performance. This is a hundred times better than those rigid RTX quality levels (which need a tweak for each game as well). Especially because it's integral to the rest of the scene build; there is really no hard distinction between 'RT or not'.


----------



## londiste (May 8, 2019)

Vayra86 said:


> What's more interesting about that is the amount of customization they can apply to a scene to scale performance. This is a hundred times better than those rigid RTX quality levels (which need a tweak for each game as well). Especially because it's integral to the rest of the scene build; there is really no hard distinction between 'RT or not'.


RTX does not have rigid quality levels. Indeed, Battlefield V interviews show pretty similar customization options and performance scaling choices. The guys working on Unreal Engine also went through similar process.

I have a feeling you have a bit incorrect understanding of what RTX is or does. The only part of RTX relevant here is RT Cores. What RT Cores deal with is ray tracing itself, casting rays and calculating intersections. That is it. Optimizations like preparing the structures, setting up materials/surfaces and even placing rays are generic and have little to do with RT Cores.

What CryEngine guys are showing and talking about shows the progress of effects and solutions used. CryEngine does lighting and reflections with voxels (SVOGI). Voxels have been researched and used in other methods as well like VXAO or VXGI (in this case, both from Nvidia). The way how the voxel data structures are built and handled puts them halfway towards ray-tracing in principle. Voxels are simpler to handle and less accurate but are also much faster to work with - performance hit is considerable compared to more common methods (like SSAO or HBAO in case of ambient occlusion) but compared to raytracing it is very fast.

In the case of this demo, this ends up being very nice for CryEngine: their cutoff from RT is not a clearly visible cutoff but a fallback to their existing voxel-based solution. It is not as accurate, but when used creatively - it looks like this was done, for example, for reflections on rougher concrete - it is good enough. Both DICE and Epic have said that semi-reflective surfaces are more complex to handle with RT, so not doing RT on these is a performance benefit. I wonder if this is something they can automate to a certain degree in the engine.
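In spirit, a roughness-based cutoff like that could look something like this toy Python sketch (made-up names and threshold; the real engine logic is obviously far more involved):

```python
def reflection_path(roughness, rt_cutoff=0.35):
    """Toy dispatcher: near-mirror surfaces get per-pixel ray traced
    reflections, while rougher ones fall back to the cheaper,
    blurrier voxel-based (SVOGI-style) lookup, where the blur is
    hidden by the material anyway."""
    return "raytraced" if roughness <= rt_cutoff else "voxel_fallback"

# e.g. a shard of broken mirror vs rough concrete
paths = [reflection_path(r) for r in (0.05, 0.8)]
```

The nice property is exactly what the posts above describe: the fallback isn't "no reflection", so the transition doesn't read as a hard visual cutoff.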


----------



## bug (May 8, 2019)

Vayra86 said:


> What's more interesting about that is the amount of customization they can apply to a scene to scale performance. This is a hundred times better than those rigid RTX quality levels (which need a tweak for each game as well). Especially because it's integral to the rest of the scene build; there is really no hard distinction between 'RT or not'.


Yes, what Nvidia did was show that RTRT is borderline possible. It doesn't mean the way Nvidia did it is the best way or the only way to do RTRT. That's why I keep saying: the sooner devs stop talking trash about RTRT and put their minds to figuring out ways to do it, the sooner we'll have nicer graphics overall.


----------



## Vayra86 (May 8, 2019)

londiste said:


> RTX does not have rigid quality levels. Indeed, Battlefield V interviews show pretty similar customization options and performance scaling choices. The guys working on Unreal Engine also went through similar process.
> 
> I have a feeling you have a bit incorrect understanding of what RTX is or does. The only part of RTX relevant here is RT Cores. What RT Cores deal with is ray tracing itself, casting rays and calculating intersections. That is it. Optimizations like preparing the structures, setting up materials/surfaces and even placing rays are generic and have little to do with RT Cores.
> 
> ...



You guessed right, and thank you for connecting those dots.


----------

