Monday, May 27th 2019

AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6

AMD at its 2019 Computex keynote today unveiled the Radeon RX 5000 family of graphics cards, which leverage its new Navi graphics architecture and 7 nm silicon fabrication process. Navi isn't just an incremental upgrade over Vega with a handful of new technologies, but the biggest overhaul of AMD's GPU SIMD design since Graphics Core Next, circa 2011. Called RDNA, or Radeon DNA, AMD's new compute unit is a clean-slate SIMD design with a 1.25x IPC uplift over Vega, an overhauled on-chip cache hierarchy, and a more streamlined graphics pipeline.

In addition, the architecture is designed to increase performance-per-watt by 50 percent over Vega. The first part to leverage Navi is the Radeon RX 5700. AMD ran a side-by-side demo of the RX 5700 versus the GeForce RTX 2070 in Strange Brigade, in which NVIDIA's $500 card was beaten. "Strange Brigade" is a game in which AMD generally fares well, as it is heavily optimized for asynchronous compute. Navi also ticks two big technology check-boxes: PCI-Express Gen 4.0 and GDDR6 memory. AMD has planned July availability for the RX 5700 and did not disclose pricing.

202 Comments on AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6

#76
londiste
medi01I remember that, my point still stands. (Remind me: why is it a proprietary vendor extension in Vulkan?)
NVDA was cooking something for years, found time when competition was absent in the highest end, spilled the beans.
Intel/AMD would need to agree that DXR approach is at all viable or the best from their POV.

Crytek has shown one doesn't even need dedicated (20-24% of Turing die) HW to do the RT gimmick:
Because Vulkan is ruled by a committee, features have historically been introduced through vendor-specific extensions: first at an experimental stage, then as a plain extension, and then the working group sees what they end up doing with it. By the way, Wolfenstein Youngblood was announced to come with real-time ray-tracing effects, probably the first new game using these NV_RT extensions.

According to their own roadmap, we will see Crytek's implementation live in version 5.7 of the engine in early 2020. They have said DXR and similar APIs are being considered and are likely to be implemented for performance reasons.

Neon Noir does run on a Vega 56 in real time, but at 1080p and 30 FPS. This is, incidentally, the same frame rate the GTX 1080 manages in Battlefield V with DXR enabled. The RT effects in the two are fairly comparable: ray-traced reflections are used in both.
#77
Vayra86
medi01I remember that, my point still stands.
NVDA was cooking something for years, found time when competition was absent in the highest end, spilled the beans.
Intel/AMD would need to agree that DXR approach is at all viable or the best from their POV.

Crytek has shown one doesn't even need dedicated (20-24% of Turing die) HW to do the RT gimmick:

But we always knew that, the question was performance versus visual gain. Crytek has also explained how they do it, and it is not specific to anything AMD either, so using this as an example for anything is simply offtopic. What you're linking is their updated CryEngine and what it can do, and it has nothing to do with RTX, or DXR. But DXR will still potentially expand the possibilities of the tech they use in CryEngine, and it will do that, again, regardless of GPU; the question is how the GPU will make use of what DXR has to offer.
#78
steen
Vayra86What it really means and what you're actually saying is: AMD should be optimizing a far wider range of games instead of focusing on the handful that they get to run well. That is why AMD lost the DX11 race as well - too much looking at the horizon and how new APIs would save their ass, while Nvidia fine tuned around DX11.
My DX12_11.1 GCN anecdote would've fit better here. MS did (some of) the work for them. By the way, how many gfx/compute/DMA queues should AMD be optimizing games for? ;)
#79
Unregistered
What I found most interesting on the GPU front is seeing how much AMD controls the gaming development ecosystem.
#80
Vayra86
steenMy DX12_11.1 GCN anecdote would've fit better here. MS did (some of) the work for them. By the way, how many gfx/compute/DMA queues should AMD be optimizing games for? ;)
At least half of them, so they don't get their ass kicked in every random comparison. :)
#81
steen
londisteBecause Vulkan is ruled by a committee, features have historically been introduced through vendor-specific extensions
Better than cap bits.
Neon Noir does run on a Vega 56 in real time, but at 1080p and 30 FPS. This is, incidentally, the same frame rate the GTX 1080 manages in Battlefield V with DXR enabled. The RT effects in the two are fairly comparable: ray-traced reflections are used in both.
Isn't that like saying my car has four wheels, so it must be a Ferrari?
Vayra86At least half of them, so they don't get their ass kicked in every random comparison. :)
Trick Q. :) No $b = no on-site engineers, or at least no dev evangelists for anyone other than a few AAA studios. Totally their fault, of course. They've even had both consoles stitched up.
#82
GoldenX
So we leave GCN finally behind? Man I was wrong then. I hope this RDNA (horrible name) brings lower CPU overhead at the driver level.
Expect dropped driver support for GCN in two years, and heavy FineWine memes while drivers mature.
#83
kings
yakkWhat I found most interesting on the GPU front is seeing how much AMD controls the gaming development ecosystem.
Control what? Most games still run better on NVIDIA hardware, and NVIDIA features are still much more widely adopted than AMD's (see the Primitive Shader and Rapid Packed Math failures, for example).

Since 2013, when we learned that AMD was going to equip the consoles with its chips, people have prophesied the death of NVIDIA, claiming AMD would only gain ground from then on until it dominated in performance.

Well, 6 years later here we are and AMD is still struggling to keep up!
#84
londiste
AMD is definitely more in the picture with game development these days. While I am not sure how much help either IHV actually provides to developers, AMD is much, much more visible right now, with the situation largely reversed from the TWIMTBP days.
#85
bug
GoldenXSo we leave GCN finally behind? Man I was wrong then. I hope this RDNA (horrible name) brings lower CPU overhead at the driver level.
Expect dropped driver support for GCN in two years, and heavy FineWine memes while drivers mature.
About that overhead: when you go async-heavy, overhead goes up.
That's why async doesn't stand on its own: it needs to speed up the processing enough to offset that overhead.
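The break-even trade-off described above can be sketched with a toy model (all numbers are made up for illustration, not measured figures):

```python
# Toy model of async compute: a fraction of the frame's work can be
# overlapped/accelerated, but scheduling it costs a fixed overhead.
def frame_time_ms(baseline_ms, overlap_fraction, speedup, overhead_ms):
    """Frame time when part of the work runs async with a given speedup."""
    async_part = baseline_ms * overlap_fraction / speedup
    serial_part = baseline_ms * (1.0 - overlap_fraction)
    return serial_part + async_part + overhead_ms

BASELINE = 16.7  # ms per frame, ~60 FPS

# Plenty of overlappable work: async wins despite the overhead.
print(frame_time_ms(BASELINE, 0.40, 2.0, 0.5))  # ~13.9 ms

# Barely any overlappable work: the overhead eats the gain.
print(frame_time_ms(BASELINE, 0.05, 2.0, 0.5))  # ~16.8 ms, slower than baseline
```

Whether async pays off depends entirely on whether the saved time exceeds the fixed scheduling cost.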
#86
Unregistered
kingsControl what? Most games still run better on NVIDIA hardware, and NVIDIA features are still much more widely adopted than AMD's (see the Primitive Shader and Rapid Packed Math failures, for example).

Since 2013, when we learned that AMD was going to equip the consoles with its chips, people have prophesied the death of NVIDIA, claiming AMD would only gain ground from then on until it dominated in performance.

Well, 6 years later here we are and AMD is still struggling to keep up!
Not about performance...

It's about guiding the development of the entire gaming industry across all gaming studios, together with Microsoft, Sony, Apple's upcoming gaming service, and now Google too. AMD is positioning itself as the hub, calling the shots and tailoring everything to itself.

Business wise that is very impressive.
#87
HD64G
I hope most here have already understood what the slides show. +25% IPC means that the 5700-vs-Vega 64 comparison (no core clocks mentioned) gives Navi a +25% performance advantage while being 50% more efficient at the same time. To make things simple, if we put those numbers on the charts from the latest @W1zzard GPU review, the 5700 sits exactly between the 2070 and the Radeon VII and consumes about 200 W. If the price is good, that will be a great product. As for real-time ray tracing, no GPU yet has the power to run that feature maxed out at a constant 60+ FPS at high resolutions. So the big Navi in 2020 might be the one for that.
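For what it's worth, the two slide numbers combine into an implied power figure with a couple of lines of arithmetic (assuming the +25% is performance at equal workload and the +50% is perf-per-watt, with Vega 64 normalized to 1.0):

```python
# Relative numbers only, Vega 64 normalized to 1.0.
perf_gain = 1.25   # claimed performance advantage over Vega 64
eff_gain = 1.50    # claimed perf-per-watt improvement

# perf/W = perf / power, so relative power = relative perf / relative perf-per-watt
power_ratio = perf_gain / eff_gain
print(f"Navi power vs Vega 64: {power_ratio:.2f}x")  # 0.83x of whatever Vega 64 draws
```

The exact wattage then depends on which Vega 64 power figure you start from.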
#88
Nima
medi01Thanks for linking a chart showing perf difference TWO TIMES SMALLER than TPU.
Somehow computerbase managed to pick a more balanced, smaller set of games that matches the 35-ish-game test results.
Can you read the charts or do I have to read them for you? The performance difference is nearly the same: 9% for TechSpot vs. 10% for TPU. Where did you get "TWO TIMES SMALLER than TPU"?
#89
GoldenX
bugAbout that overhead: when you go async-heavy, overhead goes up.
That's why async doesn't stand on its own: it needs to speed up the processing enough to offset that overhead.
Yeah, let's see how that turns out on release drivers.
I also hope that we can finally get a proper OpenGL driver.
#90
medi01
steenDo you really want sites to pick "balanced" games only for testing? Think carefully.
I've stated it twice, yet you still miss the point.

A - picks a handful of games, does test, arrives at X%
B - picks a handful of games, does test, arrives at 2*X%
C - picks A LOT of games, does test, arrives at X%
londisteBecause Vulkan is ruled by a committee, features have historically been introduced through vendor-specific extensions: first at an experimental stage, then as a plain extension, and then the working group sees what they end up doing with it.
Well, if there is no difference between the wider set and the subset, the subset is good; I stand corrected. (I did a criss-cross resolution comparison; the values are different.)
Vayra86But we always knew that, the question was performance versus visual gain. Crytek has also explained how they do it, and it is not specific to anything AMD either, so using this as an example for anything is simply offtopic.
Where does that almost surreal "it's gotta be dedicated HW" weirdo thought come from?
Crytek demoed that we can have the RT gimmick right there, with current tech.
londisteBecause Vulkan is ruled by a committee, features have historically been introduced through vendor-specific extensions: first at an experimental stage, then as a plain extension, and then the working group sees what they end up doing with it.
In other words "see, if it gets adopted first", which kinda makes sense, doesn't it?
#91
B-Real
Looking forward to the pricing. It would be nice to see AMD undercutting NVIDIA's prices (the opposite of initial Vega pricing).
Ibotibo01If these prices were true, it would be too expensive.
AMD tested Strange Brigade, which is an AMD-friendly DX12 game. For example, in this game the RX 570 is faster than the GTX 1660, and the RX 580 matches the GTX 1660 Ti. This is certainly AMD's strategy. I think the RTX 2060 will be faster than the RX 5700 in NVIDIA-friendly games such as The Witcher 3 (also AC Odyssey). I was disappointed by AMD's Computex. In addition, I don't like Ryzen 7 at 8 cores/16 threads; I hope AMD will release an R7 with 12 cores/24 threads.
I don't like this gen (Ryzen 2 and RX Navi); maybe I will buy Ryzen 4000.
High-end AMD GPUs:
RX 5700 = RTX 2060 +5-10% for $400
RX 5800 = RTX 2070 for $500
Mid/low-tier GPUs:
RX 3060 = GTX 1650
RX 3070 = GTX 1660
RX 3080 = GTX 1660 to GTX 1660 Ti
(Most games)
What?
www.guru3d.com/index.php?ct=articles&action=file&id=43327

Vega 56 trades blows with the GTX 1070 Ti, and in general, it's nearly GTX 1070Ti performance-wise. In Witcher 3, RX 580 is about 5% faster than the GTX 1060. No idea why you said that, but it's your problem.
#92
medi01
Ibotibo01Mid/low-tier GPUs:
RX 3060 = GTX 1650
You realize even the two-year-old 570 wipes the floor with the 1650, don't you?
If you don't, don't worry; neither do millions of 1050/1050 Ti users.
#93
jabbadap
HD64GI hope most here have already understood what the slides show. +25% IPC means that the 5700-vs-Vega 64 comparison (no core clocks mentioned) gives Navi a +25% performance advantage while being 50% more efficient at the same time. To make things simple, if we put those numbers on the charts from the latest @W1zzard GPU review, the 5700 sits exactly between the 2070 and the Radeon VII and consumes about 200 W. If the price is good, that will be a great product. As for real-time ray tracing, no GPU yet has the power to run that feature maxed out at a constant 60+ FPS at high resolutions. So the big Navi in 2020 might be the one for that.
It's one SKU of the RX 5700 series; note the plural. In translation, there will likely be a couple of SKUs in that series, i.e. RX 5770 and RX 5750, or RX 5700 XT and RX 5700 Pro.
#94
Ibotibo01
ChomiqLet me help you with that wine:
tpucdn.com/reviews/AMD/Radeon_VII/images/battlefield-5_3840-2160.png
tpucdn.com/reviews/AMD/Radeon_VII/images/far-cry-5-3840-2160.png
tpucdn.com/reviews/AMD/Radeon_VII/images/strange-brigade-3840-2160.png

Did they lie?
I didn't say they lied, but this is a selling strategy. They only used 3 games and said the Radeon VII is on par with the RTX 2080. Well, what about other games?

@medi01 In most benchmarks, the RTX 2080 is faster than the Radeon VII. Yes, it depends on the games.
medi01You realize even 2 years old 570 wipes the floor with 1650, don't you?
If you don't, do not worry, neither do millions of 1050/1050Ti's users.
The GTX 1650 is a fast card for entry level. The RX 570's normal sale price is $169. Is the GTX 1650 overpriced? Yes. It should be $119, because it is NVIDIA's entry-level card.
B-RealVega 56 trades blows with the GTX 1070 Ti, and in general, it's nearly GTX 1070Ti performance-wise. In Witcher 3, RX 580 is about 5% faster than the GTX 1060. No idea why you said that, but it's your problem.
Oh well.

Vega 56 doesn't match the GTX 1070 Ti; its performance sits between the GTX 1070 and the GTX 1070 Ti, depending on which games you play.

All in all, I'm not an NVIDIA fanboy or an AMD fanboy. I expect more performance for the price from AMD, but people keep making things up about AMD (R7 3000 series with 12 cores, or RTX 2070 performance for $250). I'm confused by the rumours.
#95
Valantar
bugI'm not sure how you read that graph, but this is how I do it:
1. Half of Nvidia's cards are in the 90-100% relative efficiency range.
2. AMD cards are generally at 50% or less relative efficiency. Vega 56 does better, at 60%. Radeon VII does even better at 68%, but that's already on 7nm.

If I take the best case scenario, Vega 56 and add 50% to that, it still puts AMD at 90% of the most efficient Nvidia card. And Nvidia is still on 12nm.
You chose a card that performs a few % better per watt than most Turing cards, which also shifts AMD's averages down; that's kind of odd when you say you're not interested in looking at specific cards. Even with that, the Vega 56 was at 62%, and 62 × 1.5 = 93. That's pretty darn close. Of course, the V64 was slower at 54%, for which a 50% increase would give 81%. That's a lot worse for a very small difference in the baseline. If we look at one of the more average (and similar in performance) Turing cards, like the 2070, the result for the V56 is 99%. This is why talking about multiples of percentages is a minefield: unless you are very explicit about your baseline, test conditions, and what you're comparing, you're going to confuse people more than clarify anything.
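The baseline sensitivity is easy to demonstrate with the numbers from this thread (the 2070's ~94% position is an assumed chart reading, chosen to be consistent with the 99% figure quoted above; treat all values as rough, not measured):

```python
# Relative efficiency (%) read off a perf/W chart, best Turing card = 100.
v56, v64, rtx2070 = 62.0, 54.0, 94.0
CLAIM = 1.5  # AMD's stated +50% perf/W for Navi

# The same +50% claim against three different baselines:
print(v56 * CLAIM)                  # 93.0 -> Vega 56 vs the single best Turing card
print(v64 * CLAIM)                  # 81.0 -> Vega 64 tells a very different story
print(v56 / rtx2070 * 100 * CLAIM)  # ~98.9 -> Vega 56 vs a mid-pack card like the 2070
```

Same claim, three conclusions: the baseline does all the work.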
InVasManiLatency decreases since you can push twice as much bandwidth in each direction, to and from. AMD themselves said it brings reduced latency, higher bandwidth, and lower power. Literally all of those things would benefit CrossFire, and a cut-down version might even improve things if they can improve overall efficiency in the process while salvaging imperfect dies by disabling parts of them. I don't know why CrossFire wouldn't be improved a bit, but how much of an improvement is tough to say definitively. I would think micro-stutter would be lessened quite a bit for a two-card setup, and even a three-card setup (though less dramatically), while a quad-card setup would "in theory" be identical to a two-card one on PCIe 4.0, at least.
That is only true if bandwidth is already maxed out, leading to a bottleneck. Other than that, increasing bandwidth does not necessarily relate to latency whatsoever. The cars on your highway don't go faster if you add more lanes but keep the speed limit the same. Now, I haven't read the PCIe 4.0 spec, so I don't know if they're also reducing latency, but of course they might. It still doesn't relate to bandwidth, though.
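The lanes-vs-speed-limit point can be made concrete with a simple transfer-time model, time = fixed latency + size / bandwidth (the 1 µs latency figure is illustrative, not a PCIe spec value):

```python
# Transfer time over a link: a fixed latency term plus a bandwidth term.
def transfer_us(size_bytes, latency_us, gb_per_s):
    # size_bytes / (GB/s), expressed in microseconds
    return latency_us + size_bytes / (gb_per_s * 1e3)

LAT = 1.0  # µs of fixed link latency (assumed for illustration)

for gbps in (16, 32):  # roughly PCIe 3.0 x16 vs PCIe 4.0 x16
    big = transfer_us(1_000_000, LAT, gbps)   # a 1 MB DMA copy
    tiny = transfer_us(64, LAT, gbps)         # a 64-byte flag/doorbell write
    print(f"{gbps} GB/s: 1 MB -> {big:.1f} us, 64 B -> {tiny:.3f} us")
```

Doubling the bandwidth nearly halves the big copy but leaves the small, latency-bound transfer essentially unchanged: more lanes, same speed limit.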
#96
Vayra86
medi01Where does that almost surreal "it's gotta be dedicated HW" weirdo thought come from?
Crytek demoed that we can have the RT gimmick right there, with current tech.
Hey look, you and I agree on this. I'm no fan either of large GPU die percentages dedicated just to RT performance; but with the facts available to us now, we also have a few things to deal with...

- RT / Tensor core implementation in Turing has a much higher perf/watt potential and absolute performance potential than any other implementation today.
- Turing shows us it can be done in tandem with a full fat GPU, within a limited power budget.
- RTX / DXR will and can be used to speed up the things you see in the Crytek demo.
.... now that last point is an important one. It means Nvidia, with a hardware solution, is likely to be faster in the usage of tech you saw in that Crytek demo. After all, part of the dedicated hardware has increased efficiency at doing a piece of the workload, which leaves TDP budget for the rest to run as usual. With a software implementation that runs on the 'entire' GPU, a hypothetical AMD GPU might offer a similar performance peak for non-RT gaming (the normal die at work) but it can never be faster at doing both in tandem.

End result, Nvidia with that weirdo thought wins again.

The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. That is an answer nobody has, but Turing so far isn't selling like hotcakes, which is a sign. In that sense, if we can see in-game, live footage of that Crytek implementation adding to visual quality at minimal performance cost, that is the real game changer. A tech demo is just that: a showcase of potential. But you can't sell potential.

I think the more interesting development with hardware solutions for RT is how well it can be utilized for other tasks. That will make RT adoption easier. Nvidia tried something with DLSS, but that takes too much effort.
#97
medi01
Vayra86- RT / Tensor core implementation in Turing has a much higher perf/watt potential and absolute performance potential than any other implementation today.
That's a generic "specialized hardware does things faster" statement, and, well, yes.
e.g. AES decryption.
Vayra86- RTX / DXR will and can be used to speed up the things you see in the Crytek demo.
No, and that's the point.
DXR works with different structures: Crytek's approach is voxel-based, DXR's is not.
So there goes the "could be used" aspect of it, because, wait for it, "specialized hardware" is not known for being flexible.
Vayra86The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. And that is an answer nobody has, but Turing so far isn't going like hotcakes, which is a sign. In that sense,
We can have all those visuals today with a helluva lot of shader work; the main point of the RT gimmick (and it's nothing beyond that, for F's sake, most of RT-ing is denoising at this point) is to achieve reflections/illumination/shadows with less effort.

For game developers to do it, one simply needs a large enough "RT user base". And this is why Crytek's take on the problem is so much better than NVDA's.
#98
M2B
Ray tracing is about more than simplifying game development in the way you like to believe.
It takes ages and ages to achieve a similar level of accuracy with traditional rendering techniques, especially in open-world and more complex games; so in reality, you're never going to see RT-level realism and accuracy in actual games without RT in use.

Also, Crytek stated that they're going to use the RT cores on Turing cards for better performance in the future.

One day, ~70% of PC users will have an RTX card; GTX is going to die sooner or later. That's when developers will think twice about implementing RT in general, and of course you'd have to be stupid not to use the relatively free performance that RT cores offer.
#99
Vayra86
medi01That's a generic "specialized hardware does things faster" statement, and, well, yes.
e.g. AES decryption.


No, and that's the point.
DXR works with different structures: Crytek's approach is voxel-based, DXR's is not.
So there goes the "could be used" aspect of it, because, wait for it, "specialized hardware" is not known for being flexible.


We can have all those visuals today with a helluva lot of shader work; the main point of the RT gimmick (and it's nothing beyond that, for F's sake, most of RT-ing is denoising at this point) is to achieve reflections/illumination/shadows with less effort.

For game developers to do it, one simply needs a large enough "RT user base". And this is why Crytek's take on the problem is so much better than NVDA's.
Okay buddy, whatever you want to disagree on, I'll agree to :D I suppose you know better than what sources have shown thus far.

Also, why always so mad?
#100
Minus Infinity
FordGT90ConceptIt was announced in January, too late to put into Navi. Arcturus might have it.


I want to know how many transistors it has.
Then why do LG's 2019 C9 TVs have HDMI 2.1? How did they manage to get that done for an already-released product? They even announced 2.1 support 6 months ago.