Monday, May 27th 2019
AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6
AMD at its 2019 Computex keynote today unveiled the Radeon RX 5000 family of graphics cards, which leverage its new Navi graphics architecture and 7 nm silicon fabrication process. Navi isn't just an incremental upgrade over Vega with a handful of new technologies, but the biggest overhaul of AMD's GPU SIMD design since Graphics Core Next, circa 2011. Called RDNA, or Radeon DNA, AMD's new compute unit is a clean-slate SIMD design with a 1.25x IPC uplift over Vega, an overhauled on-chip cache hierarchy, and a more streamlined graphics pipeline.
In addition, the architecture is designed to increase performance per watt by 50 percent over Vega. The first part to leverage Navi is the Radeon RX 5700. AMD ran a side-by-side demo of the RX 5700 versus the GeForce RTX 2070 in Strange Brigade, where NVIDIA's $500 card was beaten. "Strange Brigade" is one game in which AMD generally fares well, as it is heavily optimized for asynchronous compute. Navi also ticks two big technology checkboxes: PCI-Express gen 4.0 and GDDR6 memory. AMD plans July availability for the RX 5700 and did not disclose pricing.
202 Comments on AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6
One game is too small a sample size. How do we know whether this RDNA arch favors the same games as GCN? For all we know, it could be the opposite, and this game could be one of its worst performers on the new arch. Unlikely, but not impossible.
There's simply WAY too little information to go by at this point in time.
That's what a generational increase is: old vs. new.
Nvidia is so architecturally ahead that even a significantly improved, new architecture on a much better node doesn't seem impressive to people.
Real Benchmarks
When is there enough information? When the Youtubers come out of the woodwork with wild performance claims and exotic tweaked results?
Come on buddy, 1+1=2. More like AMD dropped the ball for so many years that they can never catch up again, even with Nvidia slowing down. People said the same thing in 2015-16, but none of it was true and AMD had a revolution coming.
Like I said, it depends on the game suite :) AMD showed benchmarks for 3 games. If you want to compare, maybe you should consider only these 3 games from TPU to be more accurate, not relative performance across the entire game suite? It always depends on the games picked. Of course AMD picked games at which its products are better. NV does the same thing, and any company would do it that way. That's just obvious. Well, that really is a good point. RDNA is nothing like GCN, so we don't know how it will act in the games.
( yes yes , I know what happens if NVidia takes 7nm !!)
I'm more excited for this than I thought I would be.
Edit: Had to look back; Vega NCUs were promised to deliver 2x performance per clock and 4x performance per watt (the devil is in the detail). So take it as you wish; I'm waiting for more concrete evidence.
Also, the other twist here is the shader itself. Sure, it may get a lot faster, but if you get a lower count of them, all you really have is some reshuffling that leads to no performance gain. Turing is a good example of that: perf per shader is up, but you get fewer shaders, and the end result is that, for example, a TU106 with 2304 shaders ends up alongside a GP104 that rocks 2560 shaders. It gets better: if you then defend your perf/watt figure by saying 'perf/watt per shader', it's not all too hard after all.
If it was across the board / averaged over many games we would have seen those many games. Wishful thinking vs realism... take your pick ;)
These slides are meaningless. Read between the lines.
tpucdn.com/reviews/AMD/Radeon_VII/images/battlefield-5_3840-2160.png
tpucdn.com/reviews/AMD/Radeon_VII/images/far-cry-5-3840-2160.png
tpucdn.com/reviews/AMD/Radeon_VII/images/strange-brigade-3840-2160.png
Did they lie?
TU's concurrent int & fp is more flexible than just 32-bit data types; half floats and lower-precision int ops can also be packed. Conceptually it works well with VRS.
May I ask something about the choice of games by TPU?
So I check "average gaming" diff between VII and 2080 on TPU and computerbase.
TPU states nearly 20% diff, computerbase states it's half of that.
Oh well, I think, different games, different results.
But then somebody does a 35-game comparison:
and the results match computerbase's, but not TPU's.
35 is quite a list. Is it time, perhaps, to rethink the choice of games used for testing, and pick a different set?
Nice try.
What a nice name. :)
AMD is almost half as efficient as Nvidia today. +50% will not close that gap.
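The arithmetic behind that claim can be sanity-checked with a quick sketch. The 55% baseline below is a hypothetical number for illustration, not a measured figure from the article:

```python
# Sanity check of the efficiency-gap claim above.
# Assume (hypothetically) AMD currently delivers ~55% of Nvidia's
# performance per watt. A +50% uplift multiplies that baseline,
# it does not add 50 percentage points.
amd_relative_perf_per_watt = 0.55   # hypothetical baseline (Nvidia = 1.0)
claimed_uplift = 1.50               # AMD's claimed +50% perf/W over Vega

new_relative = amd_relative_perf_per_watt * claimed_uplift
print(f"After +50%: {new_relative:.1%} of Nvidia's perf/W")
```

Under that assumed baseline, a 50% uplift lands AMD at roughly four-fifths of Nvidia's efficiency rather than at parity, which is the commenter's point.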