
AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6

Joined
May 15, 2014
Messages
235 (0.06/day)
Somehow Computerbase managed to pick a more balanced, smaller set of games that matches the 35-ish-game test results.

Did you read the qualifier:
Top dozen or so tend to favor AMD, bottom dozen favor Nvidia. Pick the games to get the result you want. Test setup/procedure/settings/areas tested can make a difference.

Do you really want sites to pick "balanced" games only for testing? Think carefully.

Latency decreases since you can push twice as much bandwidth in each direction. AMD themselves said it: reduced latency, higher bandwidth, lower power. Literally all of those things would benefit CrossFire.

With the advent of modern game code/shading/post processing techniques, classic SLI/Xfire has to be built into the engines from the ground up. It's just a coding/profiling nightmare. DX12 mGPU is theoretically doable but tends to have performance regression & very little scales well.
 
Joined
Feb 3, 2017
Messages
3,759 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
I remember that; my point still stands. (Remind me why it is a proprietary vendor extension in Vulkan.)
NVDA was cooking something for years, found a time when competition was absent at the highest end, and spilled the beans.
Intel/AMD would need to agree that the DXR approach is viable at all, or the best one, from their POV.

Crytek has shown one doesn't even need dedicated HW (20-24% of the Turing die) to do the RT gimmick:
Because Vulkan is ruled by a committee, and the way features have historically been introduced is through vendor-specific extensions: first at an experimental stage, then as a plain extension, and then the committee sees what ends up in the core spec. By the way, Wolfenstein Youngblood was announced to come with real-time ray-tracing effects, probably the first new game using these NV_RT extensions.

According to their own roadmap, we will see Crytek's implementation live in version 5.7 of the engine in early 2020. They have said that DXR etc. are being considered and likely to be implemented for performance reasons.

Neon Noir does run on a Vega 56 in real time, but at 1080p and 30 fps. This is, incidentally, the same frame rate a GTX 1080 can run Battlefield V at with DXR enabled. The RT effects in the two are pretty comparable - ray-traced reflections are used in both.
 
Joined
Sep 17, 2014
Messages
22,475 (6.03/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
I remember that; my point still stands.
NVDA was cooking something for years, found a time when competition was absent at the highest end, and spilled the beans.
Intel/AMD would need to agree that the DXR approach is viable at all, or the best one, from their POV.

Crytek has shown one doesn't even need dedicated HW (20-24% of the Turing die) to do the RT gimmick:


But we always knew that; the question was performance versus visual gain. Crytek has also explained how they do it, and it is not specific to anything AMD either, so using this as an example of anything is simply off topic. What you're linking is their updated CryEngine and what it can do, and it has nothing to do with RTX or DXR. But DXR will still potentially expand the possibilities of the tech they use in CryEngine, and it will do that, again, regardless of GPU; the question is how the GPU will make use of what DXR has to offer.
 
Joined
May 15, 2014
Messages
235 (0.06/day)
What it really means, and what you're actually saying, is: AMD should be optimizing for a far wider range of games instead of focusing on the handful that they get to run well. That is why AMD lost the DX11 race as well - too much looking at the horizon and how new APIs would save their ass, while Nvidia fine-tuned around DX11.

My DX12_11.1 GCN anecdote would've fit better here. MS did (some of) the work for them. By the way, how many gfx/compute/DMA queues should AMD be optimizing games for? ;)
 

Deleted member 158293

Guest
What I found most interesting on the GPU front is to see how much AMD completely controls the gaming development ecosystem.
 
Joined
Sep 17, 2014
Messages
22,475 (6.03/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
My DX12_11.1 GCN anecdote would've fit better here. MS did (some of) the work for them. By the way, how many gfx/compute/DMA queues should AMD be optimizing games for? ;)

At least half of them, so they don't get their ass kicked in every random comparison. :)
 
Joined
May 15, 2014
Messages
235 (0.06/day)
Because Vulkan is ruled by a committee, and the way features have historically been introduced is through vendor-specific extensions
Better than cap bits.

Neon Noir does run on a Vega 56 in real time, but at 1080p and 30 fps. This is, incidentally, the same frame rate a GTX 1080 can run Battlefield V at with DXR enabled. The RT effects in the two are pretty comparable - ray-traced reflections are used in both.
Isn't that like saying my car has four wheels, so it must be a Ferrari?

At least half of them, so they don't get their ass kicked in every random comparison. :)
Trick Q. :) No $B = no on-site engineers, or at least no dev evangelists for anyone other than a few AAA studios. Totally their fault, of course. They've even had both consoles stitched up.
 
Joined
Oct 2, 2015
Messages
3,146 (0.94/day)
Location
Argentina
System Name Ciel / Akane
Processor AMD Ryzen R5 5600X / Intel Core i3 12100F
Motherboard Asus Tuf Gaming B550 Plus / Biostar H610MHP
Cooling ID-Cooling 224-XT Basic / Stock
Memory 2x 16GB Kingston Fury 3600MHz / 2x 8GB Patriot 3200MHz
Video Card(s) Gainward Ghost RTX 3060 Ti / Dell GTX 1660 SUPER
Storage NVMe Kingston KC3000 2TB + NVMe Toshiba KBG40ZNT256G + HDD WD 4TB / NVMe WD Blue SN550 512GB
Display(s) AOC Q27G3XMN / Samsung S22F350
Case Cougar MX410 Mesh-G / Generic
Audio Device(s) Kingston HyperX Cloud Stinger Core 7.1 Wireless PC
Power Supply Aerocool KCAS-500W / Gigabyte P450B
Mouse EVGA X15 / Logitech G203
Keyboard VSG Alnilam / Dell
Software Windows 11
So we finally leave GCN behind? Man, I was wrong then. I hope this RDNA (horrible name) brings lower CPU overhead at the driver level.
Expect dropped driver support for GCN in two years, and heavy FineWine memes while the drivers mature.
 
Joined
Jun 28, 2018
Messages
299 (0.13/day)
What I found most interesting on the GPU front is to see how much AMD completely controls the gaming development ecosystem.

Control what? Most games still run better on Nvidia hardware, and Nvidia features are still far more widely adopted than AMD's (see the Primitive Shaders and Rapid Packed Math failures, for example).

Since 2013, when we learned that AMD was going to equip the consoles with their chips, people have prophesied the death of Nvidia and that AMD would only gain ground from then on until it dominated in performance.

Well, 6 years later here we are and AMD is still struggling to keep up!
 
Joined
Feb 3, 2017
Messages
3,759 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
AMD is definitely more in the picture with game development these days. While I am not sure how much help either IHV actually provides to developers, AMD is much, much more visible right now, with the situation largely reversed from the TWIMTBP days.
 

bug

Joined
May 22, 2015
Messages
13,786 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
So we finally leave GCN behind? Man, I was wrong then. I hope this RDNA (horrible name) brings lower CPU overhead at the driver level.
Expect dropped driver support for GCN in two years, and heavy FineWine memes while the drivers mature.
About that overhead: when you go async-heavy, overhead goes up.
That's why async doesn't stand on its own: it needs to speed up the processing enough to offset that overhead.
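Rough sketch of that trade-off, with made-up numbers just to illustrate the break-even point (nothing here is measured from any driver):

Code:
# Toy model: async compute only pays off if the overlap saving
# exceeds the extra scheduling/synchronization overhead.
def frame_time_serial(gfx_ms, compute_ms):
    return gfx_ms + compute_ms

def frame_time_async(gfx_ms, compute_ms, overhead_ms):
    # Ideal case: the compute work hides entirely behind graphics work,
    # but the driver/hardware adds scheduling overhead on top.
    return max(gfx_ms, compute_ms) + overhead_ms

gfx, compute = 12.0, 3.0  # illustrative per-frame costs in milliseconds
for overhead in (0.5, 2.0, 4.0):
    serial = frame_time_serial(gfx, compute)
    overlapped = frame_time_async(gfx, compute, overhead)
    verdict = "win" if overlapped < serial else "loss"
    print(f"overhead {overhead} ms: serial {serial} ms vs async {overlapped} ms -> {verdict}")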
 

Deleted member 158293

Guest
Control what? Most games still run better on Nvidia hardware, and Nvidia features are still far more widely adopted than AMD's (see the Primitive Shaders and Rapid Packed Math failures, for example).

Since 2013, when we learned that AMD was going to equip the consoles with their chips, people have prophesied the death of Nvidia and that AMD would only gain ground from then on until it dominated in performance.

Well, 6 years later here we are and AMD is still struggling to keep up!

Not about performance...

It's about guiding the development of the entire gaming industry, across all gaming studios, together with Microsoft, Sony, Apple's upcoming gaming service, and now Google too. AMD is tailoring everything to itself, acting as the hub that calls the shots.

Business-wise, that is very impressive.
 
Joined
Apr 30, 2011
Messages
2,704 (0.54/day)
Location
Greece
Processor AMD Ryzen 5 5600@80W
Motherboard MSI B550 Tomahawk
Cooling ZALMAN CNPS9X OPTIMA
Memory 2*8GB PATRIOT PVS416G400C9K@3733MT_C16
Video Card(s) Sapphire Radeon RX 6750 XT Pulse 12GB
Storage Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB
Display(s) AOC 27G2U/BK IPS 144Hz
Case SHARKOON M25-W 7.1 BLACK
Audio Device(s) Realtek 7.1 onboard
Power Supply Seasonic Core GC 500W
Mouse Sharkoon SHARK Force Black
Keyboard Trust GXT280
Software Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux
I hope most here have already understood what the slides show. +25% IPC means that the 5700 vs Vega 64 comparison, with no core clocks mentioned, gives Navi a +25% performance advantage while being 50% more efficient at the same time. To make things simple, if we put those numbers on the charts from the latest @W1zzard GPU test, the 5700 sits exactly between the 2070 and the Radeon VII and consumes about 200 W. If the price is good, that will be a great product. As for real-time ray tracing, no GPU yet has the power to run that feature maxed out constantly above 60 FPS at high resolution. So, for 2020, big Navi might be the one for that.
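For anyone who wants to check the arithmetic, here's how the two claimed ratios combine; the Vega 64 baseline wattages are assumptions, since the slides don't give one:

Code:
# Combine the claimed ratios from the slides.
perf_ratio = 1.25   # Navi 5700 vs Vega 64 performance (+25%)
ppw_ratio = 1.50    # claimed performance-per-watt improvement (+50%)

# Power scales as performance / (performance per watt).
power_ratio = perf_ratio / ppw_ratio
print(f"power vs Vega 64: {power_ratio:.2f}x")  # ~0.83x

# Absolute wattage depends on which Vega 64 figure you start from
# (board-power spec vs measured gaming draw), both assumed here:
for vega64_watts in (230, 295):
    print(f"from a {vega64_watts} W baseline -> ~{vega64_watts * power_ratio:.0f} W")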
 
Joined
Aug 27, 2015
Messages
41 (0.01/day)
Processor Core i5-4440
Motherboard Gigabyte G1.Sniper Z87
Memory 8 GB DDR3-2400 CL11
Video Card(s) GTX 760 2GB
Thanks for linking a chart showing a perf difference TWO TIMES SMALLER than TPU's.
Somehow Computerbase managed to pick a more balanced, smaller set of games that matches the 35-ish-game test results.
Can you read the charts or do I have to read them for you? The performance difference is nearly the same: 9% for Techspot vs 10% at TPU. Where did you get that "TWO TIMES SMALLER than TPU"?
[attached chart: relative-performance_3840-2160.png]
 
Joined
Oct 2, 2015
Messages
3,146 (0.94/day)
Location
Argentina
System Name Ciel / Akane
Processor AMD Ryzen R5 5600X / Intel Core i3 12100F
Motherboard Asus Tuf Gaming B550 Plus / Biostar H610MHP
Cooling ID-Cooling 224-XT Basic / Stock
Memory 2x 16GB Kingston Fury 3600MHz / 2x 8GB Patriot 3200MHz
Video Card(s) Gainward Ghost RTX 3060 Ti / Dell GTX 1660 SUPER
Storage NVMe Kingston KC3000 2TB + NVMe Toshiba KBG40ZNT256G + HDD WD 4TB / NVMe WD Blue SN550 512GB
Display(s) AOC Q27G3XMN / Samsung S22F350
Case Cougar MX410 Mesh-G / Generic
Audio Device(s) Kingston HyperX Cloud Stinger Core 7.1 Wireless PC
Power Supply Aerocool KCAS-500W / Gigabyte P450B
Mouse EVGA X15 / Logitech G203
Keyboard VSG Alnilam / Dell
Software Windows 11
About that overhead: when you go async-heavy, overhead goes up.
That's why async doesn't stand on its own: it needs to speed up the processing enough to offset that overhead.
Yeah, let's see how that turns out on release drivers.
I also hope that we can finally get a proper OpenGL driver.
 
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Do you really want sites to pick "balanced" games only for testing? Think carefully.
I've stated it 2 times, yet you literally miss the point.

A - picks a handful of games, does test, arrives at X%
B - picks a handful of games, does test, arrives at 2*X%
C - picks A LOT of games, does test, arrives at X%
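Toy example of why A and C land on X% while B lands near 2*X% (made-up per-game numbers, nothing measured):

Code:
# Cherry-picking a handful of games can roughly double the apparent
# average gap compared to a large, mixed test suite.
per_game_delta = [14, 12, 10, 9, 6, 5, 4, 3, 1, 0, -2, -4]  # % lead of card A, made up

full_suite_avg = sum(per_game_delta) / len(per_game_delta)
favourable_picks = sorted(per_game_delta, reverse=True)[:4]
subset_avg = sum(favourable_picks) / len(favourable_picks)

print(f"wide-suite average: {full_suite_avg:.1f}%")       # ~5%
print(f"hand-picked 4-game average: {subset_avg:.1f}%")   # ~11%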

Because Vulkan is ruled by a committee, and the way features have historically been introduced is through vendor-specific extensions: first at an experimental stage, then as a plain extension, and then the committee sees what ends up in the core spec.
Well, if there is no difference between the wider set and the subset, then the subset is fine and I stand corrected. (I did a criss-cross resolution comparison; the values are different.)

But we always knew that; the question was performance versus visual gain. Crytek has also explained how they do it, and it is not specific to anything AMD either, so using this as an example of anything is simply off topic.
Where does that almost surreal "it's gotta be dedicated HW" weirdo thought come from?
Crytek demoed that we can have the RT gimmick right now, with current tech.


Because Vulkan is ruled by a committee, and the way features have historically been introduced is through vendor-specific extensions: first at an experimental stage, then as a plain extension, and then the committee sees what ends up in the core spec.
In other words, "see if it gets adopted first", which kinda makes sense, doesn't it?
 
Joined
Feb 18, 2017
Messages
688 (0.24/day)
Looking forward to the pricing. It would be nice to see them undercutting NV prices (the opposite of the initial Vega pricing).

If these prices were true, it would be too expensive.
AMD tested with Strange Brigade, which is an AMD-favoring DX12 game. For example, in this game the RX 570 is faster than the GTX 1660, and the RX 580 matches the GTX 1660 Ti. This is certainly AMD's strategy. I think the RTX 2060 is faster than the RX 5700 in Nvidia-favoring games such as The Witcher 3 (also AC Odyssey). I was disappointed by AMD's Computex. In addition, I don't like Ryzen 7 staying at 8 cores / 16 threads; I hope AMD will release an R7 with 12 cores / 24 threads.
I don't like this gen (Ryzen 2 and RX Navi); maybe I will buy Ryzen 4000.
High-end AMD GPUs:
RX 5700 = RTX 2060 +5-10% for 400 dollars
RX 5800 = RTX 2070 for 500 dollars
Mid-low tier GPUs:
RX 3060 = GTX 1650
RX 3070 = GTX 1660
RX 3080 = GTX 1660 - GTX 1660 Ti
(most games)
What?

Vega 56 trades blows with the GTX 1070 Ti, and in general it's nearly at GTX 1070 Ti level performance-wise. In The Witcher 3, the RX 580 is about 5% faster than the GTX 1060. No idea why you said that, but that's your problem.
 
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Mid-low tier GPUs:
RX 3060 = GTX 1650

You realize even the two-year-old 570 wipes the floor with the 1650, don't you?
If you don't, don't worry; neither do millions of 1050/1050 Ti users.
 
Joined
Mar 10, 2014
Messages
1,793 (0.46/day)
I hope most here have already understood what the slides show. +25% IPC means that the 5700 vs Vega 64 comparison, with no core clocks mentioned, gives Navi a +25% performance advantage while being 50% more efficient at the same time. To make things simple, if we put those numbers on the charts from the latest @W1zzard GPU test, the 5700 sits exactly between the 2070 and the Radeon VII and consumes about 200 W. If the price is good, that will be a great product. As for real-time ray tracing, no GPU yet has the power to run that feature maxed out constantly above 60 FPS at high resolution. So, for 2020, big Navi might be the one for that.

It's one of the RX 5700-series SKUs; note the plural. So, in translation, there will likely be a couple of SKUs in that series, e.g. RX 5770 and RX 5750, or RX 5700 XT and RX 5700 Pro.
 
Joined
Oct 10, 2018
Messages
147 (0.07/day)
I didn't say they lied, but this is a selling strategy. They only used 3 games and said the Radeon VII is the same as the RTX 2080. Well, what about the other games?

@medi01 In most benchmarks the RTX 2080 is faster than the Radeon VII. Yes, it depends on the games.
You realize even the two-year-old 570 wipes the floor with the 1650, don't you?
If you don't, don't worry; neither do millions of 1050/1050 Ti users.
The GTX 1650 is a fast card for entry level. The RX 570's normal sale price is 169 dollars. Is the GTX 1650 overpriced? Yes, it should be 119 dollars because it is Nvidia's entry-level card.
Vega 56 trades blows with the GTX 1070 Ti, and in general it's nearly at GTX 1070 Ti level performance-wise. In The Witcher 3, the RX 580 is about 5% faster than the GTX 1060. No idea why you said that, but that's your problem.
Oh well.
[attached chart: the-witcher-3_1920_1080.png]


Vega 56 doesn't match the GTX 1070 Ti. Its performance sits between the GTX 1070 and the GTX 1070 Ti. It depends on which games you are playing.

All in all, I'm not an Nvidia fanboy or an AMD fanboy. I expect more performance for the price from AMD, but people keep spreading false claims about AMD (the R7 3000 series having 12 cores, or RTX 2070 performance for 250 dollars). I'm confused by the rumours.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I'm not sure how you read that graph, but this is how I do it:
1. Half of Nvidia's cards are in the 90-100% relative efficiency range.
2. AMD cards are generally at 50% or less relative efficiency. Vega 56 does better, at 60%. Radeon VII does even better at 68%, but that's already on 7nm.

If I take the best-case scenario, Vega 56, and add 50% to that, it still puts AMD at 90% of the most efficient Nvidia card. And Nvidia is still on 12nm.
You chose a card that performs a few percent better per watt than most Turing cards, which also shifts AMD's averages down; that's kind of odd when you say you're not interested in looking at specific cards. Even with that, the Vega 56 was at 62%, and 62 × 1.5 = 93, which is pretty darn close. Of course, the V64 was slower at 54%, for which a 50% increase would give 81%. That's a lot worse for a very small difference in the baseline. If we look at one of the more average (and similar in performance) Turing cards, like the 2070, the result for the V56 is 99%. This is why talking about multiples of percentages is a minefield: unless you are very explicit about your baseline, test conditions, and what you're comparing, you're going to confuse people more than clarify anything.
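To make the baseline sensitivity explicit, here's the same arithmetic in one place; the 2070's relative-efficiency figure is my assumption, used only to reproduce the numbers above:

Code:
# Relative-efficiency values as discussed above (100 = most efficient card in the chart).
most_efficient = 100.0
vega56, vega64 = 62.0, 54.0
rtx2070 = 94.0  # assumed figure for the 2070, for illustration only

for name, base in (("Vega 56", vega56), ("Vega 64", vega64)):
    vs_top = base * 1.5 / most_efficient
    vs_2070 = base * 1.5 / rtx2070
    print(f"{name} +50% perf/W: {vs_top:.0%} of the top Turing card, {vs_2070:.0%} of a 2070")
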
Latency decreases since you can push twice as much bandwidth in each direction. AMD themselves said it: reduced latency, higher bandwidth, lower power. Literally all of those things would benefit CrossFire, and a cut-down version might even improve if they can improve overall efficiency in the process while salvaging imperfect dies by disabling parts of them. I don't know why CrossFire wouldn't be improved a bit, but how much of an improvement is tough to say definitively. I would think the micro-stutter would be lessened quite a bit for a two-card setup, and even a three-card setup though less dramatically in the latter case, while a quad-card setup would "in theory" be identical to a two-card one, for PCIe 4.0 at least.
That is only true if bandwidth is already maxed out, leading to a bottleneck. Other than that, increasing bandwidth does not necessarily relate to latency whatsoever. The cars on your highway don't go faster if you add more lanes but keep the speed limit the same. Now, I haven't read the PCIe 4.0 spec, so I don't know if they're also reducing latency, but of course they might. It still doesn't relate to bandwidth, though.
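A crude transfer-time model shows the difference; the numbers are illustrative, not from the spec:

Code:
# Total transfer time = fixed link latency + payload / bandwidth.
# Doubling the bandwidth only shrinks the second term.
def transfer_time_us(payload_bytes, bandwidth_gb_s, latency_us):
    return latency_us + payload_bytes / (bandwidth_gb_s * 1e9) * 1e6

LATENCY_US = 1.0  # assumed fixed per-transfer latency
for payload in (4 * 1024, 4 * 1024 * 1024):           # small vs large transfer
    gen3 = transfer_time_us(payload, 16, LATENCY_US)  # ~16 GB/s for x16 Gen3
    gen4 = transfer_time_us(payload, 32, LATENCY_US)  # ~32 GB/s for x16 Gen4
    print(f"{payload // 1024} KiB: Gen3 {gen3:.2f} us -> Gen4 {gen4:.2f} us")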
 
Joined
Sep 17, 2014
Messages
22,475 (6.03/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Where does that almost surreal "it's gotta be dedicated HW" weirdo thought come from?
Crytek demoed that we can have the RT gimmick right now, with current tech.

Hey look, you and I agree on this; I'm no fan either of large GPU die percentages dedicated to just RT performance. But with the facts available to us now, we also have a few things to deal with...

- RT / Tensor core implementation in Turing has a much higher perf/watt potential and absolute performance potential than any other implementation today.
- Turing shows us it can be done in tandem with a full fat GPU, within a limited power budget.
- RTX / DXR will and can be used to speed up the things you see in the Crytek demo.
Now, that last point is an important one. It means Nvidia, with a hardware solution, is likely to be faster at using the tech you saw in that Crytek demo. After all, the dedicated hardware is more efficient at doing that piece of the workload, which leaves TDP budget for the rest to run as usual. With a software implementation that runs on the 'entire' GPU, a hypothetical AMD GPU might offer a similar performance peak for non-RT gaming (the normal die at work), but it can never be faster at doing both in tandem.
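To put the "both in tandem" point in toy-model terms (every number here is invented; it's just the shape of the argument):

Code:
# If RT runs on the general shader array, it competes with rasterization
# for the same units; dedicated RT hardware lets the two overlap.
raster_ms = 12.0               # assumed raster cost per frame
rt_on_shaders_ms = 8.0         # same RT effects done on the shader array (assumed)
rt_on_dedicated_ms = 3.0       # same effects on dedicated units (assumed faster)

shader_only = raster_ms + rt_on_shaders_ms          # serialized on one set of units
with_rt_units = max(raster_ms, rt_on_dedicated_ms)  # ideal overlap case

print(f"shader-only RT: {shader_only:.1f} ms/frame")
print(f"with dedicated RT units (ideal overlap): {with_rt_units:.1f} ms/frame")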

End result, Nvidia with that weirdo thought wins again.

The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. And that is an answer nobody has, but Turing so far isn't selling like hotcakes, which is a sign. In that sense, if we can see in-game, live footage of that Crytek implementation adding to visual quality at minimal performance cost, that is the real game changer. A tech demo is just that: a showcase of potential. But you can't sell potential.

I think the more interesting development with hardware solutions for RT is how well it can be utilized for other tasks. That will make RT adoption easier. Nvidia tried something with DLSS, but that takes too much effort.
 
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
- RT / Tensor core implementation in Turing has a much higher perf/watt potential and absolute performance potential than any other implementation today.
That's a generic "specialized hardware does things faster" statement, and, well, yes.
E.g. AES decryption.

- RTX / DXR will and can be used to speed up the things you see in the Crytek demo.
No, and that's the point.
DXR works with different structures: Crytek's approach is voxel-based, DXR is not.
So there goes the "could be used" aspect of it, because, wait for it, "specialized hardware" is not known for being flexible.

The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. And that is an answer nobody has, but Turing so far isn't selling like hotcakes, which is a sign. In that sense,
We can have all those visuals today with a hell of a lot of shader work; the main point of the RT gimmick (and it's nothing beyond that, for F's sake, most of RT-ing is denoising at this point) is to achieve reflections/illumination/shadows with less effort.

For game developers to do it, one simply needs a large enough "RT user base". And this is why Crytek's take on the problem is so much better than NVDA's.
 

M2B

Joined
Jun 2, 2017
Messages
284 (0.10/day)
Location
Iran
Processor Intel Core i5-8600K @4.9GHz
Motherboard MSI Z370 Gaming Pro Carbon
Cooling Cooler Master MasterLiquid ML240L RGB
Memory XPG 8GBx2 - 3200MHz CL16
Video Card(s) Asus Strix GTX 1080 OC Edition 8G 11Gbps
Storage 2x Samsung 850 EVO 1TB
Display(s) BenQ PD3200U
Case Thermaltake View 71 Tempered Glass RGB Edition
Power Supply EVGA 650 P2
Ray tracing goes beyond simplifying game development in the way you'd like to believe.
It takes ages and ages to achieve a similar level of accuracy with traditional rendering techniques, especially in open-world and more complex games; so in reality, you're never going to see RT-level realism and accuracy in actual games without RT in use.

Crytek has also stated that they're going to use the RT cores on Turing cards for better performance in the future.

One day, ~70% of PC users will have an RTX card; GTX is going to die sooner or later. That's when developers will think twice about whether or not to bother with an RT implementation at all, and of course you'd have to be stupid not to use the relatively free performance that the RT cores offer.
 
Joined
Sep 17, 2014
Messages
22,475 (6.03/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
That's a generic "specialized hardware does things faster" statement, and, well, yes.
E.g. AES decryption.


No, and that's the point.
DXR works with different structures: Crytek's approach is voxel-based, DXR is not.
So there goes the "could be used" aspect of it, because, wait for it, "specialized hardware" is not known for being flexible.


We can have all those visuals today with a hell of a lot of shader work; the main point of the RT gimmick (and it's nothing beyond that, for F's sake, most of RT-ing is denoising at this point) is to achieve reflections/illumination/shadows with less effort.

For game developers to do it, one simply needs a large enough "RT user base". And this is why Crytek's take on the problem is so much better than NVDA's.

Okay buddy, whatever you want to disagree on, I'll agree to :D I suppose you know better than what sources have shown thus far.

Also, why always so mad?
 