Monday, May 27th 2019
AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6
AMD at its 2019 Computex keynote today unveiled the Radeon RX 5000 family of graphics cards, which leverage its new Navi graphics architecture and a 7 nm silicon fabrication process. Navi isn't just an incremental upgrade over Vega with a handful of new technologies, but the biggest overhaul of AMD's GPU SIMD design since Graphics Core Next, circa 2011. Called RDNA, or Radeon DNA, AMD's new compute unit is a clean-slate SIMD design with a 1.25X IPC uplift over Vega, an overhauled on-chip cache hierarchy, and a more streamlined graphics pipeline.
In addition, the architecture is designed to increase performance per Watt by 50 percent over Vega. The first part to leverage Navi is the Radeon RX 5700. AMD ran a side-by-side demo of the RX 5700 versus the GeForce RTX 2070 in Strange Brigade, where NVIDIA's $500 card was beaten. "Strange Brigade" is one game where AMD generally fares well, as it is heavily optimized for asynchronous compute. Navi also ticks two big technology check-boxes: PCI-Express 4.0 and GDDR6 memory. AMD has planned July availability for the RX 5700 and did not disclose pricing.
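To put the headline figure in perspective, here is a minimal arithmetic sketch of what a claimed 1.5X performance-per-Watt uplift would mean at the two extremes, assuming the claim holds as stated (illustrative only, not AMD's methodology):

```cpp
// Illustrative arithmetic: consequences of a claimed 1.5x perf/W uplift over Vega.
#include <cstdio>

int main() {
    const double perf_per_watt_gain = 1.50;  // AMD's stated +50% over Vega
    // At equal performance, power would drop to roughly 1/1.5 of Vega's.
    std::printf("iso-performance power: ~%.0f%% of Vega\n",
                100.0 / perf_per_watt_gain);
    // At equal power, performance would rise by the full factor.
    std::printf("iso-power performance: ~%.0f%% of Vega\n",
                100.0 * perf_per_watt_gain);
    return 0;
}
```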
202 Comments on AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6
The perf/watt claim is based on The Division 2 at 1440p Ultra settings.
www.amd.com/en/press-releases/2019-05-26-amd-announces-next-generation-leadership-products-computex-2019-keynote
There is a gap, but it's smaller than one would think (especially when checking it on sites favoring green games, like TPU does).
When you are late to the party, you'd better bring more stuff. If the RX 5700 matches the RTX line in performance, it had better be priced well; otherwise the lacking feature set will hurt them in the eyes of the general public.
You're comparing graphics cards to graphics cards, not one GPU to another.
Computerbase got one of the good ones, it would seem. There have been far worse examples in both review sites and retail.
1. Half of Nvidia's cards are in the 90-100% relative efficiency range.
2. AMD cards are generally at 50% or less relative efficiency. Vega 56 does better, at 60%. Radeon VII does even better at 68%, but that's already on 7nm.
If I take the best-case scenario, Vega 56, and add 50% to that, it still puts AMD at 90% of the most efficient Nvidia card. And Nvidia is still on 12 nm.
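A quick sanity check of that arithmetic, using the relative-efficiency figures quoted above (the numbers are the poster's readings, not independent measurements):

```cpp
// Back-of-envelope projection from the relative-efficiency figures quoted above.
#include <cstdio>

int main() {
    const double vega56_rel_eff = 0.60;  // Vega 56 vs. the most efficient NVIDIA card
    const double claimed_uplift = 1.50;  // AMD's +50% perf/W claim for Navi
    // 0.60 * 1.5 = 0.90 -> still only ~90% of the best Turing card
    std::printf("projected Navi relative efficiency: ~%.0f%%\n",
                vega56_rel_eff * claimed_uplift * 100.0);
    return 0;
}
```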
Why would CrossFire suddenly be better than it has been so far? Bandwidth is not the main problem, and even then the increase from PCI-e 3.0 to 4.0 would not alleviate the need for communication that much. On the other hand, a bidirectional 100 GB/s link did not really make that noticeable a difference either.
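For reference, a rough look at the per-direction x16 link rates involved, computed from the raw transfer rates and 128b/130b encoding (link-rate math only; real-world throughput is lower):

```cpp
// Rough per-direction PCIe x16 bandwidth from raw link rates (128b/130b encoding).
#include <cstdio>

int main() {
    const double lanes = 16.0;
    const double encoding = 128.0 / 130.0;
    const double pcie3 = 8.0  * lanes * encoding / 8.0;  // 8 GT/s per lane  -> ~15.8 GB/s
    const double pcie4 = 16.0 * lanes * encoding / 8.0;  // 16 GT/s per lane -> ~31.5 GB/s
    std::printf("PCIe 3.0 x16: ~%.1f GB/s per direction\n", pcie3);
    std::printf("PCIe 4.0 x16: ~%.1f GB/s per direction\n", pcie4);
    // Even doubled, this stays well short of the ~100 GB/s bidirectional
    // GPU-to-GPU link mentioned above.
    return 0;
}
```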
You're being generous. :) Your definition is fine, of course (or multiple queues). Not really directed at you anyway. I kept seeing it in other threads where concurrent int/fp execution was being equated with async compute. Exactly correct; nor is it defined by the ability to pack int/fp into the graphics pipeline.
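For anyone following along, a minimal D3D12-style sketch of what "multiple queues" means at the API level; this is my own illustration of the distinction, with error handling omitted:

```cpp
// Async compute as exposed by the API: a separate compute queue alongside the
// graphics queue. Whether the GPU actually overlaps work from the two queues is
// up to the hardware/scheduler; concurrent int/fp issue inside a shader core is
// a different capability altogether.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void create_queues(ID3D12Device* device,
                   ComPtr<ID3D12CommandQueue>& gfxQueue,
                   ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics (+ compute/copy)
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only queue
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```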
There's another interesting "fine wine" effect for Vega. With Win10 (1803, IIRC) MS started promoting DX 11.0 games on GCN to DX12 feature level 11.1, which enabled the HW schedulers, so they should perform better than they did at release under Win7/8.
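The promotion described above happens in the OS/driver rather than in game code, but for illustration, here's roughly how an application sees which feature level it actually received (a sketch only; error handling omitted, link against d3d11.lib):

```cpp
// Feature-level negotiation as seen from the application side.
#include <d3d11.h>

int main() {
    const D3D_FEATURE_LEVEL wanted[] = { D3D_FEATURE_LEVEL_11_1,
                                         D3D_FEATURE_LEVEL_11_0 };
    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    got     = D3D_FEATURE_LEVEL_11_0;

    // On OS builds that don't know about 11_1 this call fails with
    // E_INVALIDARG, so real code retries with a shorter list.
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      wanted, 2, D3D11_SDK_VERSION, &device, &got, &context);
    // 'got' reports 11_1 where the OS/driver exposes it, 11_0 otherwise.
    if (context) context->Release();
    if (device)  device->Release();
    return 0;
}
```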
Somehow Computerbase managed to pick a smaller, more balanced set of games that matches 35-ish-game test results.
What it really means, and what you're actually saying, is: AMD should be optimizing a far wider range of games instead of focusing on the handful that they get to run well. That is why AMD lost the DX11 race as well - too much looking at the horizon and how new APIs would save their ass, while Nvidia fine-tuned around DX11.
Oh, hold on...
Nobody cares about yet another NVDA "only me" solution; it needs major support across the board to amount to anything more than gimmicks implemented in a handful of games just because NVDA paid for it.
At this point it is obvious whose chips are going to rock the next gen of major consoles (historically, "it's not about graphics" Nintendo opting for NVDA's dead mobile platform chip is almost an insult in this context, with even multiplatform games mostly avoiding ports to it).
NVDA had been cooking something for years, found a window when competition was absent at the highest end, and spilled the beans.
Intel/AMD would need to agree that the DXR approach is viable at all, or the best one from their POV.
Crytek has shown one doesn't even need dedicated hardware (20-24% of the Turing die) to do the RT gimmick.