Monday, May 27th 2019
AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6
AMD at its 2019 Computex keynote today unveiled the Radeon RX 5000 family of graphics cards that leverage its new Navi graphics architecture and 7 nm silicon fabrication process. Navi isn't just an incremental upgrade over Vega with a handful of new technologies, but the biggest overhaul to AMD's GPU SIMD design since Graphics Core Next, circa 2011. Called RDNA, or Radeon DNA, AMD's new compute unit is a clean-slate SIMD design with a 1.25x IPC uplift over Vega, an overhauled on-chip cache hierarchy, and a more streamlined graphics pipeline.
In addition, the architecture is designed to increase performance per Watt by 50 percent over Vega. The first part to leverage Navi is the Radeon RX 5700. AMD ran a side-by-side demo of the RX 5700 versus the GeForce RTX 2070 running Strange Brigade, in which NVIDIA's $500 card was beaten. "Strange Brigade" is one game where AMD generally fares well, as it is heavily optimized for asynchronous compute. Navi also ticks two big technology check-boxes: PCI-Express gen 4.0 and GDDR6 memory. AMD has planned July availability for the RX 5700 and did not disclose pricing.
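For a rough sense of what those two headline multipliers imply when combined, here is a back-of-envelope sketch; the linear IPC-times-clock scaling model and the sample clock ratios are illustrative assumptions, not AMD figures.

```python
# Back-of-envelope only: combine AMD's two quoted multipliers
# (1.25x IPC over Vega, 1.5x performance per Watt over Vega) and see
# what they imply under a naive "performance = IPC x clock" model.

IPC_UPLIFT = 1.25           # RDNA vs. GCN (Vega), per AMD's claim
PERF_PER_WATT_UPLIFT = 1.5  # RDNA vs. Vega, per AMD's claim

def relative_performance(clock_ratio: float) -> float:
    """Performance vs. an equal-CU Vega part, assuming perf scales with IPC x clock."""
    return IPC_UPLIFT * clock_ratio

def relative_power(clock_ratio: float) -> float:
    """Power vs. that Vega part, as implied by the perf-per-Watt claim."""
    return relative_performance(clock_ratio) / PERF_PER_WATT_UPLIFT

if __name__ == "__main__":
    for clock_ratio in (1.0, 1.1, 1.2):  # illustrative clock scenarios
        perf = relative_performance(clock_ratio)
        power = relative_power(clock_ratio)
        print(f"clock x{clock_ratio:.1f}: ~{perf:.2f}x performance at ~{power:.2f}x power")
```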
202 Comments on AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6
According to their own roadmap, we will see Crytek's implementation live in version 5.7 of the engine in early 2020. They have said DXR and similar APIs are being considered and are likely to be implemented for performance reasons.
Neon Noir does run on Vega 56 in real time, but at 1080p and 30 FPS. That is, incidentally, the same frame rate the GTX 1080 manages in Battlefield V with DXR enabled. The RT effects in these two are pretty comparable - ray-traced reflections are used in both.
Expect dropped driver support for GCN in two years, and heavy "fine wine" memes while the drivers mature.
Since 2013, when we learned that AMD was going to supply the consoles with its chips, people prophesied the death of Nvidia, and that AMD would from then on only gain ground until it dominated in performance.
Well, 6 years later here we are and AMD is still struggling to keep up!
That's why async doesn't stand on its own: it needs to speed up the processing enough to offset that overhead.
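A toy break-even model of that point; the per-frame costs, the overlap fractions, and the 0.5 ms overhead are made-up numbers, purely to illustrate the trade-off.

```python
# Toy model of async compute: overlapping a compute pass with graphics work
# saves time, but extra queue submission/synchronization adds a fixed overhead.
# Async only pays off when the time saved exceeds that overhead.

def frame_time_serial(gfx_ms: float, compute_ms: float) -> float:
    """Frame time when graphics and compute run back to back."""
    return gfx_ms + compute_ms

def frame_time_async(gfx_ms: float, compute_ms: float,
                     overlap_fraction: float, overhead_ms: float) -> float:
    """Frame time when a fraction of the compute pass hides under graphics.

    overlap_fraction: how much of the compute pass fits into idle GPU bubbles
    overhead_ms:      cost of the extra scheduling / synchronization
    """
    hidden = compute_ms * overlap_fraction
    return gfx_ms + (compute_ms - hidden) + overhead_ms

if __name__ == "__main__":
    gfx, comp = 12.0, 4.0  # hypothetical per-frame costs in milliseconds
    for overlap in (0.1, 0.5, 0.9):
        serial = frame_time_serial(gfx, comp)
        parallel = frame_time_async(gfx, comp, overlap, overhead_ms=0.5)
        print(f"overlap {overlap:.0%}: {serial:.1f} ms -> {parallel:.1f} ms "
              f"({'win' if parallel < serial else 'loss'})")
```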
It's about guiding the development of the entire gaming industry across all gaming studios with Microsoft, Sony, Apple's upcoming gaming service, and now Google too. AMD is positioning itself as the hub calling the shots.
Business-wise, that is very impressive.
I also hope that we can finally get a proper OpenGL driver.
A - picks a handful of games, does test, arrives at X%
B - picks a handful of games, does test, arrives at 2*X%
C - picks A LOT of games, does test, arrives at X%
Well, if there is no difference between the wider set and the subset, the subset is good; I stand corrected. (I did a criss-cross resolution comparison; the values are different.)
Where does that almost surreal "it's gotta be dedicated HW" weirdo thought come from?
Crytek demoed that we can have the RT gimmick right there, with current tech. In other words, "see if it gets adopted first", which kinda makes sense, doesn't it?
www.guru3d.com/index.php?ct=articles&action=file&id=43327
Vega 56 trades blows with the GTX 1070 Ti and is, in general, close to GTX 1070 Ti performance. In The Witcher 3, the RX 580 is about 5% faster than the GTX 1060. No idea why you said that, but that's your problem.
If you don't, do not worry; neither do millions of 1050/1050 Ti users.
@medi01 In most benchmarks, the RTX 2080 is faster than the Radeon VII. Yes, it depends on the games. The GTX 1650 is a fast card for entry level. The RX 570's normal sale price is 169 dollars. Is the GTX 1650 overpriced? Yes. It should be 119 dollars because it is Nvidia's entry-level card. Oh well.
Vega 56 doesn't match the GTX 1070 Ti. Its performance sits between the GTX 1070 and the GTX 1070 Ti; it depends on which games you are playing.
All in all, I'm neither an Nvidia fanboy nor an AMD fanboy. I am expecting more performance for the price from AMD, but people are lying about AMD (that the Ryzen 7 3000 series has 12 cores, or RTX 2070 performance for 250 dollars). I'm confused by the rumours.
- RT / Tensor core implementation in Turing has a much higher perf/watt potential and absolute performance potential than any other implementation today.
- Turing shows us it can be done in tandem with a full fat GPU, within a limited power budget.
- RTX / DXR can and will be used to speed up the things you see in the Crytek demo.
.... now that last point is an important one. It means Nvidia, with a hardware solution, is likely to be faster at the kind of tech you saw in that Crytek demo. After all, the dedicated hardware handles part of the workload more efficiently, which leaves TDP budget for the rest of the GPU to run as usual. With a software implementation that runs on the 'entire' GPU, a hypothetical AMD GPU might offer a similar performance peak for non-RT gaming (the normal die at work), but it can never be faster at doing both in tandem.
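A toy model of that power-budget argument, purely illustrative: the 225 W budget, the 3x efficiency factor, and the clean power split are assumptions, not measured figures for any real GPU.

```python
# Toy model: a fixed board power budget shared by ray-tracing work and the
# rest of the rendering. If dedicated RT hardware does the RT portion more
# efficiently (the premise of the argument), more of the budget is left
# for the ordinary shader/rasterization work.

BOARD_POWER_W = 225.0    # hypothetical total power budget
RT_HW_EFFICIENCY = 3.0   # hypothetical: RT cores 3x as power-efficient
                         # at the RT workload as general-purpose shaders

def power_left_for_raster(rt_cost_on_shaders_w: float, has_rt_hw: bool) -> float:
    """Watts left for non-RT work, given what the RT workload would cost
    if it ran on the general-purpose shaders instead."""
    rt_cost = rt_cost_on_shaders_w / RT_HW_EFFICIENCY if has_rt_hw else rt_cost_on_shaders_w
    return BOARD_POWER_W - rt_cost

if __name__ == "__main__":
    for rt_cost_w in (0.0, 45.0, 90.0):  # hypothetical RT workload sizes
        sw = power_left_for_raster(rt_cost_w, has_rt_hw=False)
        hw = power_left_for_raster(rt_cost_w, has_rt_hw=True)
        print(f"RT workload worth {rt_cost_w:>4.0f} W of shader time: "
              f"software RT leaves {sw:.0f} W, hardware RT leaves {hw:.0f} W")
```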
End result, Nvidia with that weirdo thought wins again.
The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. And that is an answer nobody has, but Turing so far isn't selling like hotcakes, which is a sign. In that sense, if we can see in-game, live footage of that Crytek implementation adding to visual quality at minimal performance cost, that is the real game changer. A tech demo is just that: a showcase of potential. But you can't sell potential.
I think the more interesting development with hardware solutions for RT is how well it can be utilized for other tasks. That will make RT adoption easier. Nvidia tried something with DLSS, but that takes too much effort.
E.g. AES decryption. No, and that's the point.
DXR works with different structures; Crytek's approach is voxel-based, DXR is not.
So there goes the "could be used" aspect of it, because, wait for it, "specialized hardware" is not known for being flexible. We can have all those visuals today with a hell of a lot of shader work; the main point of the RT gimmick (and it's nothing beyond that, for F's sake, most of RT-ing is denoising at this point) is to achieve reflections/illumination/shadows with less effort.
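To illustrate the structural difference being argued over: a minimal sketch of what "voxel-based" tracing means, a DDA march through a regular grid (in the spirit of, but far simpler than, Crytek's sparse-voxel approach), whereas DXR's fixed-function units on Turing traverse triangle BVHs. The scene and parameters here are hypothetical.

```python
# Minimal voxel ray march (Amanatides & Woo style DDA) through a set of
# occupied voxels. DXR acceleration structures are triangle BVHs traversed
# by dedicated hardware, which is why the two aren't interchangeable as-is.

import math

def trace_voxel_grid(occupied, origin, direction, max_steps=256):
    """Step a ray through occupied voxels; return the first hit, or None."""
    pos = [int(math.floor(c)) for c in origin]
    step = [1 if d > 0 else -1 for d in direction]
    t_max, t_delta = [], []  # ray distance to the next voxel boundary per axis
    for o, d in zip(origin, direction):
        if d == 0:
            t_max.append(math.inf)
            t_delta.append(math.inf)
        else:
            next_boundary = math.floor(o) + (1 if d > 0 else 0)
            t_max.append((next_boundary - o) / d)
            t_delta.append(abs(1.0 / d))

    for _ in range(max_steps):
        if tuple(pos) in occupied:       # occupied voxel -> hit
            return tuple(pos)
        axis = t_max.index(min(t_max))   # cross the nearest voxel boundary
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None

if __name__ == "__main__":
    scene = {(5, 2, 0)}                  # toy "scene": a single solid voxel
    hit = trace_voxel_grid(scene, origin=(0.5, 0.5, 0.5),
                           direction=(1.0, 0.3, 0.0))
    print("hit voxel:", hit)             # expected: (5, 2, 0)
```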
For game developers to do it, there simply needs to be a large enough "RT user base". And this is why Crytek's take on the problem is so much better than NVDA's.
It takes ages and ages to achieve a similar level of accuracy with traditional rendering techniques, especially in open-world and more complex games; so in reality, you're never going to see RT-level realism and accuracy in actual games without RT in use.
Crytek has also stated that they're going to use the RT cores on Turing cards for better performance in the future.
One day in the future, ~70% of PC users will have an RTX card; GTX is going to die sooner or later. That's when developers will think twice about whether or not to consider RT implementation in general; and of course you'd have to be stupid not to use the relatively free performance that RT cores offer.
Also, why always so mad?