
AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6

If these prices are true, it would be too expensive.
AMD tested with Strange Brigade, which is an AMD-favoring DX12 game. For example, in this game the RX 570 is faster than the GTX 1660, and the RX 580 matches the GTX 1660 Ti. This is clearly AMD's strategy. I think the RTX 2060 will be faster than the RX 5700 in Nvidia-favoring games such as The Witcher 3 (and AC Odyssey). I was disappointed by AMD's Computex. In addition, I don't like that Ryzen 7 stays at 8 cores / 16 threads; I hope AMD releases an R7 with 12 cores / 24 threads.
I don't like this gen (Ryzen 2 and RX Navi); maybe I'll wait for Ryzen 4000.
High-end AMD GPUs
RX 5700 = RTX 2060 +5-10%, for 400 Dollars
RX 5800 = RTX 2070, for 500 Dollars
Mid-low tier GPUs
RX 3060 = GTX 1650
RX 3070 = GTX 1660
RX 3080 = GTX 1660 - GTX 1660 Ti
(Most games)
Yeah, they used just one small, unpopular game that heavily favors AMD to show Navi's strength, and even then it can barely beat the competition. It's clear Navi will be a disappointment. It doesn't even have RT cores, and it will be in next-gen consoles. That's a disaster for us PC gamers, since we'll be stuck with games ported from these consoles with dated technology for at least the next six years.
 
I think those "RDNA" perf/watt gains are largely from the 14nm-to-7nm shrink. Note how they compared against Vega, not the Radeon VII.


more like DARN :laugh:
It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.

One game is too small of a sample size. How do we know if this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of its worst performers with the new arch. Unlikely, but not impossible.

There's simply WAY too little information to go by at this point in time.
 
I think you're giving too much credit to this new name. Sure, RDNA sounds cool, but it can't be a 180° turn from the previous-gen GCN uarch. I'd be slightly surprised if it were a major departure from GCN. Also, wasn't Raja the architect of Navi?
 
It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.

One game is too small of a sample size. How do we know if this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of its worst performers with the new arch. Unlikely, but not impossible.

There's simply WAY too little information to go by at this point in time.
:confused:
that's what a generational increase is: old vs new
 
A 25% IPC increase over Vega is more than what Nvidia achieved with Turing over Pascal. But keep in mind Nvidia claimed Turing shaders are 50% faster, which turned out to be bullshit, at least for today's software; the same thing can happen with AMD's claims.
Nvidia is so architecturally ahead that even a significantly improved, new architecture on a much better node doesn't seem impressive to people.
 
AMD's benchmarks
[slide: radeon5822.jpg]

Real Benchmarks
[TPU relative-performance charts: 3840x2160 / 1920x1080 / 2560x1440]
 
It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.

One game is too small of a sample size. How do we know if this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of its worst performers with the new arch. Unlikely, but not impossible.

There's simply WAY too little information to go by at this point in time.

Just because you give something a fancy new bunch of letters doesn't magically make it a different piece of kit, and the use of Strange Brigade only confirms we're looking at another GCN / Polaris.

When will there be enough information? When the YouTubers come out of the woodwork with wild performance claims and exotic tweaked results?

Come on buddy, 1+1=2.

A 25% IPC increase over Vega is more than what Nvidia achieved with Turing over Pascal. But keep in mind Nvidia claimed Turing shaders are 50% faster, which turned out to be bullshit, at least for today's software
Nvidia is so architecturally ahead that even a significantly improved, new architecture on a much better node doesn't seem impressive to people.

More like AMD dropped the ball for so many years they can never catch up again, even with Nvidia slowing down. People said this in 2015-16 already, but none of it was true and AMD had a revolution coming.
 
I don't think so. The RX 5700 is 10% faster than the RTX 2070 in Strange Brigade, but the Radeon VII is 20% faster than the RTX 2080 there. So it won't match the RTX 2070 overall; I think it will match the RTX 2060, and the RX 5800 will match the RTX 2070.
View attachment 123831
and I do think so
[TPU relative-performance charts: 1920x1080 / 2560x1440 / 3840x2160]


Like I said it depends on the game suite :)

AMD showed benchmarks for three games. If you want to compare, maybe you should consider only those three games in TPU's results to be more accurate, not relative performance across the entire game suite. It always depends on the games picked. Of course AMD picked games in which their products do better; NV does the same thing, and any company would. That's just obvious.

It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.

One game is too small of a sample size. How do we know if this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of its worst performers with the new arch. Unlikely, but not impossible.

There's simply WAY too little information to go by at this point in time.
Well, that really is a good point. If RDNA is nothing like GCN, we don't know how it will behave in games.
 
AMD claims up to 50% better perf per watt. Vega 64 consumes 292W, the RTX 2060 165W, the RTX 2070 195W. At the same performance, a +50% perf/W gain puts a 7nm, redesigned-arch Vega 64 at around 195W (292W ÷ 1.5), in reality probably between 160W and 190W, and you also have to take the claimed 25% perf per clock into account. So in terms of perf per watt, the RX 5700 will land around the Turing arch: AMD's 7nm card will roughly match Nvidia's 12nm cards.

( yes yes , I know what happens if NVidia takes 7nm !!)
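For what it's worth, the iso-performance arithmetic above takes a few lines to check (a rough sketch: the 292W Vega 64 board power and the "up to 50%" figure are the numbers quoted above, and the function name is just for illustration):

```python
def power_at_iso_perf(base_power_w: float, perf_per_watt_gain: float) -> float:
    """Power needed for the same performance after a perf/W improvement.

    perf/W scales by (1 + gain), so at equal performance
    power scales by 1 / (1 + gain).
    """
    return base_power_w / (1.0 + perf_per_watt_gain)

# Vega 64 board power (~292 W) with AMD's claimed "up to 50%" perf/W gain:
print(round(power_at_iso_perf(292, 0.50)))  # ~195 W, right at RTX 2070 territory
```

Reading "+50% perf/W" as "half the power" instead would give ~146W, which is where the more optimistic ~145W estimates come from.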
 
That can't be right, or at least for AMD's sake it ought to be, i.e. if they want to get anywhere in the mid-range segment. The IPC is the same for all Vegas, I guess; it mostly boils down to efficiency, and if Navi is barely as efficient as the VII, then AMD might as well shut down the RTG division for the time being!
Vega=GFX9, Navi=GFX10. They've ditched some Vega IP & re-used previous blocks. Until we see tech details, I'm not entirely convinced how different it really is. Work distribution is likely changed but I just can't see a ground up RTL rewrite for this gen. They might bifurcate their product line to graphics as a compute service (Vega) & a more fixed function (Navi), but it doesn't make sense given the gains made by recent console titles that are finally coding to the compute paradigm. I presume they're comparing IPC to Vega20, else node change alone mostly explains the gains. As mentioned above, transistor count/die size will be interesting.
 
Comparing the card in the best-case scenario, Strange Brigade... so in general it probably means it will fall between the RTX 2060 and RTX 2070...
 
What a waste of a card; better not to have made it at all.
 
Hm. This doesn't align all that well with previous rumors. AMD is saying this will be the basis for gaming for the coming decade. In other words, Arcturus (if that's even a thing) can clearly not be a major architectural overhaul. Then again, if they deliver a 25% IPC increase, it won't be needed anyhow.

I'm more excited for this than I thought I would be.
 
Vega=GFX9, Navi=GFX10. They've ditched some Vega IP & re-used previous blocks. Until we see tech details, I'm not entirely convinced how different it really is. Work distribution is likely changed but I just can't see a ground up RTL rewrite for this gen. They might bifurcate their product line to graphics as a compute service (Vega) & a more fixed function (Navi), but it doesn't make sense given the gains made by recent console titles that are finally coding to the compute paradigm. I presume they're comparing IPC to Vega20, else node change alone mostly explains the gains. As mentioned above, transistor count/die size will be interesting.
Navi probably uses the GCN ISA but with a different arch. Lisa said "RDNA is a ground-up redesign from GCN".

Comparing the card in the best-case scenario, Strange Brigade... so in general it probably means it will fall between the RTX 2060 and RTX 2070...
Pretty much, yes. But I heard a rumor about Nvidia releasing a new card like an RTX 2070 Ti; I think I read it somewhere.
 
Navi probably uses the GCN ISA but with a different arch. Lisa said "RDNA is a ground-up redesign from GCN".


Pretty much, yes. But I heard a rumor about Nvidia releasing a new card like an RTX 2070 Ti; I think I read it somewhere.
If so they'd need to use a cut-down 2080 die. That won't be cheap, and Nvidia's margins will still hurt.
 
Sadly, +50% perf/W doesn't close the gap to Nvidia :(
 
Navi probably uses the GCN ISA but with a different arch. Lisa said "RDNA is a ground-up redesign from GCN".

Pretty much, yes. But I heard a rumor about Nvidia releasing a new card like an RTX 2070 Ti; I think I read it somewhere.

Well yeah, Vega was called NCU and was still GCN. But other than that, a 1.25× clock-for-clock performance increase sounds very promising. Pascal-to-Turing clock-for-clock gains were mostly inherited from concurrent INT32/FP32 math. All in all, that figure makes Navi a tad more interesting.

Edit: Had to look back; Vega's NCU was promised to be 2x performance per clock and 4x performance per watt (the devil is in the detail). So take it as you wish; I'm waiting for more concrete evidence.
 
Well yeah, Vega was called NCU and was still GCN. But other than that, a 1.25× clock-for-clock performance increase sounds very promising. Pascal-to-Turing clock-for-clock gains were mostly inherited from concurrent INT32/FP32 math. All in all, that figure makes Navi a tad more interesting.

25% IPC and 50% perf/watt are probably best-case Strange Brigade numbers versus worst-case Vega numbers.

Also, the other twist here is the shader itself. Sure, it may get a lot faster, but if you get a lower count of them, all you really have is some reshuffling that leads to no net performance gain. Turing is a good example: perf per shader is up, but you get fewer shaders, and the end result is that a TU106 with 2304 shaders ends up alongside a GP104 with 2560 shaders. It gets better: if you then defend your perf/watt figure by saying 'perf/watt per shader', it's not all too hard after all.
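The Turing example works out numerically (a sketch using the shader counts quoted above, assuming roughly equal overall performance between the two chips):

```python
# If TU106 (2304 shaders) roughly matches GP104 (2560 shaders) in total
# performance, the per-shader gain is just the shader-count ratio:
gp104_shaders = 2560
tu106_shaders = 2304

per_shader_gain = gp104_shaders / tu106_shaders - 1.0
print(f"{per_shader_gain:.1%}")  # ~11.1% faster per shader, ~0% faster overall
```

So "faster shaders" plus "fewer shaders" can net out to nothing, which is exactly the reshuffling being described.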

If it were across the board / averaged over many games, we would have seen those many games. Wishful thinking vs. realism... take your pick ;)

These slides are meaningless. Read between the lines.
 
Navi probably uses the GCN ISA but with a different arch. Lisa said "RDNA is a ground-up redesign from GCN".

Yeah, except the uarch. Commits for gaming Navi are just as valid for improving the perf of compute Navi. I don't see AMD changing the structure of their CUs from 64 ALUs; it would break scheduling/wavefronts. They mention improved CUs & a new cache hierarchy, but apart from L0 tied to CUs & more L2, I don't know what's different from Vega.

Sadly, +50% perf/W doesn't close the gap to Nvidia :(
By definition it does. Of course, we don't know what this means in practice: is it the chip, whole-card TDP, clock-for-clock, etc.? The 7nm node isn't all beer & skittles given the increased density/smaller die; that's why Nvidia pulled the trigger on the optimized 12nm process & large dies. 7N+ will help, but density, electromigration, etc. are still there.

Well yeah, Vega was called NCU and was still GCN. But other than that, a 1.25× clock-for-clock performance increase sounds very promising. Pascal-to-Turing clock-for-clock gains were mostly inherited from concurrent INT32/FP32 math. All in all, that figure makes Navi a tad more interesting.

Didn't you know? That's now called "async compute". ;)

Turing's concurrent INT & FP is more flexible than just 32-bit data types; half floats & lower-precision int ops can also be packed. Conceptually it works well with VRS.
 
Wait and see how well this actually performs in a full, hands on review though.
I don't think so. The RX 5700 is 10% faster than the RTX 2070

@btarunr
May I ask something about the choice of games by TPU?
So I checked the "average gaming" diff between the VII and the 2080 on TPU and ComputerBase.
TPU states nearly a 20% diff; ComputerBase states half of that.
Oh well, I thought: different games, different results.

But then somebody did a 35-game comparison:


and the results match ComputerBase's, not TPU's.

35 is quite a list. Is it time, perhaps, to re-think the choice of games to test?


Real Benchmarks
Of a different set of games.
Nice try.
 