# AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6



## btarunr (May 27, 2019)

AMD at its 2019 Computex keynote today unveiled the Radeon RX 5000 family of graphics cards, which leverage its new Navi graphics architecture and a 7 nm silicon fabrication process. Navi isn't just an incremental upgrade over Vega with a handful of new technologies, but the biggest overhaul to AMD's GPU SIMD design since Graphics Core Next, circa 2011. Called RDNA, or Radeon DNA, AMD's new compute unit is a clean-slate SIMD design with a 1.25x IPC uplift over Vega, an overhauled on-chip cache hierarchy, and a more streamlined graphics pipeline.

In addition, the architecture is designed to increase performance per watt by 50 percent over Vega. The first part to leverage Navi is the Radeon RX 5700. AMD ran a side-by-side demo of the RX 5700 against the GeForce RTX 2070 in Strange Brigade, where NVIDIA's $500 card was beaten. Strange Brigade is one game where AMD generally fares well, as it is heavily optimized for asynchronous compute. Navi also ticks two big technology checkboxes: PCI-Express gen 4.0 and GDDR6 memory. AMD has planned July availability for the RX 5700 and did not disclose pricing.




----------



## xkm1948 (May 27, 2019)

no real time ray tracing tho


----------



## Metroid (May 27, 2019)

xkm1948 said:


> no real time ray tracing tho



As if it really mattered at this time. Although I have to agree with you here, as it will power the PlayStation 5 and probably other consoles, so it was a disappointment that nothing was said about ray tracing.


----------



## Divide Overflow (May 27, 2019)

AMD has a chance to shake things up with aggressive pricing.
Wait and see how well this actually performs in a full, hands-on review though.


----------



## EarthDog (May 27, 2019)

xkm1948 said:


> no real time ray tracing tho


I think more people will care about the typo (to = two) in the OP than about RT hardware not being on board Navi. Maybe next gen, if they think it's worth it. Right now, NVIDIA is swimming in a lonely pond.

Anyway, it all comes down to price and performance overall, not just this title. It looks like power use will go down too, so overall it seems solid (price, though?). It would be great if it matched the 2070 or better for a cheaper price. Perhaps NVIDIA will drop their prices.


----------



## Metroid (May 27, 2019)

I'm happy they are moving from GCN to RDNA, with the claimed 1.25x and 1.5x gains. It was about time.


----------



## ZoneDymo (May 27, 2019)

Once again, I say it's all about the price. They cherry-picked an obscure title that happens to favor AMD, so realistically this RX 5700 will probably trade blows with the RTX 2070 in general, which is fine as long as the price is nice and low.


----------



## tfdsaf (May 27, 2019)

AMD's Navi will support ray tracing, but won't have dedicated hardware for it in these first-gen desktop graphics cards. This is actually a positive, as barely any games feature ray tracing, and even those that do are very limited in scope and offer terrible performance.

So for AMD it's smarter to offer a smaller die at competitive prices and better power consumption, rather than adding dedicated ray-tracing hardware and increasing die size for no real benefit in actual games.

I expect their RX 3600 to debut at $350, their RX 3700 at $450, their RX 3800 (later down the line) at $650, and their RX 3500 at $250.


----------



## Windyson (May 27, 2019)

Strange Brigade: a pro-AMD game.

RX 5700: 2560 SP, ~2 GHz, ~195 W (extrapolated from Vega 64):
4096 SP × 1.55 GHz ÷ 2560 SP ÷ 1.25 ≈ 2 GHz
295 W ÷ 1.5 ≈ 195 W
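The extrapolation above can be reproduced in a few lines. Everything here is an assumption taken from the post (Vega 64's 4096 SP / 1.55 GHz / 295 W, the rumored 2560-SP RX 5700, and AMD's claimed 1.25x per-clock and 1.5x per-watt uplifts), not an official spec:

```python
# Back-of-envelope extrapolation from Vega 64 to a hypothetical RX 5700,
# using AMD's claimed 1.25x perf/clock and 1.5x perf/watt uplifts.
# All inputs are assumptions from the post above, not confirmed specs.

vega64_sp = 4096          # stream processors
vega64_clock_ghz = 1.55
vega64_power_w = 295

navi_sp = 2560            # rumored RX 5700 shader count
ipc_uplift = 1.25         # claimed perf per clock vs. GCN
perf_per_watt_uplift = 1.5

# Clock needed for Navi to match Vega 64 throughput:
# vega_sp * vega_clock == navi_sp * navi_clock * ipc_uplift
required_clock_ghz = vega64_sp * vega64_clock_ghz / (navi_sp * ipc_uplift)

# Power at equal performance, given a 1.5x perf/watt gain:
projected_power_w = vega64_power_w / perf_per_watt_uplift

print(f"required clock ≈ {required_clock_ghz:.2f} GHz")  # ≈ 1.98 GHz
print(f"projected power ≈ {projected_power_w:.0f} W")    # ≈ 197 W
```

The same arithmetic as the two lines in the post, just spelled out; both numbers land where Windyson's estimate does.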


----------



## Metroid (May 27, 2019)

tfdsaf said:


> I expect their RX 3600 to debut at $350, their RX 3700 at $450, their RX 3800(later down the line) for $650, their RX 3500 for $250



The nomenclature changed, as per the title: RX 5700. Lisa said it was for AMD's 50th anniversary.


----------



## Countryside (May 27, 2019)

xkm1948 said:


> no real time ray tracing tho



It just works


----------



## ratirt (May 27, 2019)

Yeah, it is always about the price. RTX 2070 performance. I wonder what power consumption the RX 5700 has. Also, the name RX 5700: is it meant as an equivalent of the RX 570? If so, AMD is kicking things up a notch. I wonder what other RX 5000 series cards we can expect. An RX 5800 at the RTX 2080 performance mark, maybe? That would be great.


----------



## LocutusH (May 27, 2019)

I don't see HDMI 2.1. Why?


----------



## Ibotibo01 (May 27, 2019)

tfdsaf said:


> I expect their RX 3600 to debut at $350, their RX 3700 at $450, their RX 3800(later down the line) for $650, their RX 3500 for $250


If these prices were true, it would be too expensive.
AMD tested with Strange Brigade, which is an AMD-leaning DX12 game. For example, in this game the RX 570 is faster than the GTX 1660, and the RX 580 matches the GTX 1660 Ti. This is certainly AMD's strategy. I think the RTX 2060 is faster than the RX 5700 in NVIDIA-leaning games such as The Witcher 3 (also AC Odyssey). I was disappointed by AMD's Computex. In addition, I don't like Ryzen 7 staying at 8 cores / 16 threads; I hope AMD will release an R7 with 12 cores / 24 threads.
I don't like this gen (Ryzen 2 and RX Navi); maybe I will buy Ryzen 4000.
High-end AMD GPUs:
RX 5700 = RTX 2060 +5-10% for 400 dollars
RX 5800 = RTX 2070 for 500 dollars
Mid/low-tier GPUs:
RX 3060 = GTX 1650
RX 3070 = GTX 1660
RX 3080 = GTX 1660 - GTX 1660 Ti
(most games)


----------



## ratirt (May 27, 2019)

Ibotibo01 said:


> If this prices was true, It would be too expensive.
> AMD tested to Strange Brigade which is AMD's DX12 game. For example, in this game RX 570 is faster than GTX 1660. Also RX 580 is same with GTX 1660 Ti. This is certainly AMD's strategy. I think RTX 2060 is faster than RX 5700 in Nvidia's games such as Witcher 3 (also AC Odyssey). I disappointed for AMD's Computex. In addition, I don't like Ryzen 7's 8 cores 16 threads. I hope that AMD will release R7 12 cores 24 threads.
> I don't like this gen(Ryzen 2 and RX Navi) maybe i will buy Ryzen 4000.
> High end AMD GPU's
> ...


I don't think your statement is entirely correct. It is said that the Navi RX 5700 is faster and more energy efficient than Vega. Of course we don't know which Vega, but presumably Vega 64. Vega 56 beats the GTX 1660 Ti in the performance summary. Vega 64 beats the RTX 2060 in the performance summary at 1440p and 4K, and is only 2% behind at 1080p (it all depends on the game suite; I looked at TPU's). So if the RX 5700 is faster than Vega 64, it is obvious we are talking about a card faster than the RTX 2060, and not just in Strange Brigade.


----------



## cucker tarlson (May 27, 2019)

Strange Brigade only? It must be really, really bad.


----------



## R0H1T (May 27, 2019)

ratirt said:


> I don't think your statement is entirely correct. It is said that the Navi RX5700 is faster and *more energy efficient than *Vega. Of course we don't know which Vega but presumably* Vega 64*.


That can't be right, or at least for AMD's sake it ought to be the Radeon VII, i.e. if they want to get anywhere in the mid-range segment. The IPC is the same for all Vegas, I guess; it mostly boils down to efficiency, and if Navi is barely efficient to the level of the VII then AMD might as well shut the RTG division for the time being!


----------



## FordGT90Concept (May 27, 2019)

LocutusH said:


> I dont see HDMI 2.1? Why?


It was announced in January, too late to put into Navi.  Arcturus might have it.


I want to know how many transistors it has.


----------



## londiste (May 27, 2019)

Strange Brigade for comparison is meaningless. The game is known to lean anywhere from 10-20% towards AMD GPUs. We will have to wait for July to see how they really stack up.
Both the 1.25x "IPC" and the 1.5x power efficiency sound really good; that should bring Navi up to par with Turing, hopefully a little ahead considering it is on 7 nm.


----------



## HTC (May 27, 2019)

londiste said:


> Strange Brigade for comparison is meaningless. *The game is known to lean anywhere from 10-20% towards AMD GPUs.* We will have to wait for July to see how they really stack up.
> Both 1.25x "IPC" as well as 1.5x power efficiency sound really good, that should bring Navi up to par with Turing, hopefully a little ahead considering it is on 7nm.



Not necessarily: we know it favors AMD's GCN family, like you said, but it's not yet known if the same is true with this new arch.

We'll have to wait and see.


----------



## FordGT90Concept (May 27, 2019)

Strange Brigade is probably the only game QA'd to run on RDNA (terrible name).


----------



## Vayra86 (May 27, 2019)

cucker tarlson said:


> strange brigade only? it must be really,really bad.



It is. This is a PR release. RDNA... to get rid of the complaints about 'old GCN'. 50% perf/watt... on 7 nm and versus Vega (lol!); remember the Radeon VII? Big naming-scheme numbers to hide another shrink with some tweaks. By the way, I can already see the new slogan that will make you cringe: 'AMD, gamer's DNA, with Radeon'... pass the bucket, pls.

And the price point is retarded.

As for the no RT... that to me is the most interesting bit of it all, mostly when thinking about the next consoles. There's a good chance we won't see the same or a similar Navi in there; surely they will have to come up with _something_.


----------



## cucker tarlson (May 27, 2019)

Vayra86 said:


> It is. This is a PR- release. RDNA... to get rid of the complaints about 'old GCN'. 50% perf/watt... on 7nm and versus Vega (lol!), remember Radeon VII? Big naming scheme numbers to hide another shrink with some tweaks.
> 
> And the price point is retarded.


I think those "RDNA" perf/watt gains are largely from 14 nm to 7 nm. Note how they compared against Vega, not the Radeon VII.



FordGT90Concept said:


> Strange Brigade is probably the only game QA'd to run on RDNA (terrible name).


more like DARN


----------



## Ibotibo01 (May 27, 2019)

ratirt said:


> So if RX5700 is faster than Vega 64 it is obvious we are talking about a card faster than 2060 and not just with "strange brigade".


I don't think so. The RX 5700 is 10% faster than the RTX 2070 in Strange Brigade, but the Radeon VII is 20% faster than the RTX 2080 there. So it won't match the RTX 2070 overall; I think it will match the RTX 2060, and the RX 5800 will match the RTX 2070.


----------



## R0H1T (May 27, 2019)

I don't see the VII in there?


----------



## Nima (May 27, 2019)

Ibotibo01 said:


> If this prices was true, It would be too expensive.
> AMD tested to Strange Brigade which is AMD's DX12 game. For example, in this game RX 570 is faster than GTX 1660. Also RX 580 is same with GTX 1660 Ti. This is certainly AMD's strategy. I think RTX 2060 is faster than RX 5700 in Nvidia's games such as Witcher 3 (also AC Odyssey). I disappointed for AMD's Computex. In addition, I don't like Ryzen 7's 8 cores 16 threads. I hope that AMD will release R7 12 cores 24 threads.
> I don't like this gen(Ryzen 2 and RX Navi) maybe i will buy Ryzen 4000.
> High end AMD GPU's
> ...


Yeah, they used just one small, unpopular game which heavily favors AMD to show Navi's strength, and even then it can barely beat the competition. It's clear Navi will be a disappointment. It doesn't even have RT cores, and it will be in next-gen consoles. That's a disaster for us PC gamers, since we'll be stuck with games ported from those consoles, with dated technology, for at least the next 6 years.


----------



## HTC (May 27, 2019)

cucker tarlson said:


> *I think those "RDNA" perf/wat gains are 14nm-7nm largely.*Note how they did not compare R7 but Vega.
> 
> 
> more like DARN


It's impossible to compare because there's no RDNA 14 nm card. If there were, then your statement would most likely be correct.

One game is too small a sample size. How do we know this RDNA arch favors the same games as GCN? For all we know, it could be the opposite, and this game could be one of its worst performers on the new arch. Unlikely, but not impossible.

There's simply WAY too little information to go by at this point in time.


----------



## R0H1T (May 27, 2019)

I think you're giving too much credit to this new name. Sure, RDNA sounds cool, but it can't be a 180° turn from the previous-gen GCN uarch. I'd be slightly surprised if it was a major departure from GCN. Also, wasn't Raja the architect of Navi?


----------



## cucker tarlson (May 27, 2019)

HTC said:


> It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.
> 
> One game is too small of a sample size. How do we know if this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of it's worst performers with the new arch. Unlikely, but not impossible.
> 
> There's simply WAY too little information to go by, @ this point in time.



That's what a generational increase is: old vs. new.


----------



## M2B (May 27, 2019)

A 25% IPC increase over Vega is more than what Nvidia achieved with Turing over Pascal. But keep in mind Nvidia did claim Turing shaders are 50% faster, which turned out to be bullshit, at least for today's software; the same thing can happen with AMD's claims.
Nvidia is so far ahead architecturally that even a significantly improved new architecture on a much better node doesn't seem impressive to people.


----------



## Ibotibo01 (May 27, 2019)

AMD's benchmarks (attached screenshots):

Real benchmarks (attached screenshots):


----------



## Vayra86 (May 27, 2019)

HTC said:


> It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.
> 
> One game is too small of a sample size. How do we know if this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of it's worst performers with the new arch. Unlikely, but not impossible.
> 
> There's simply WAY too little information to go by, @ this point in time.



Just because you give something a fancy new bunch of letters doesn't magically make it a different piece of kit, and the use of Strange Brigade only confirms we're looking at another GCN / Polaris.

When is there enough information? When the Youtubers come out of the woodwork with wild performance claims and exotic tweaked results?

Come on buddy, 1+1=2.



M2B said:


> 25% IPC increase over Vega is more than what Nvidia achieved with Turing over Pascal But keep in mind Nvidia did claim turing shaders are 50% faster which turned out to be bullshit, at least for today's software
> Nvidia is so architecturally ahead that even a significantly improved, new architecture on a much better node doesn't seem impressive to people.



More like AMD dropped the ball for so many years they can never catch up again, even with Nvidia slowing down. People said this in 2015-16 already, but none of that was true and AMD had a revolution coming.


----------



## Ibotibo01 (May 27, 2019)

Vayra86 said:


> AMD had a revolution coming.


I think it is a CPU revolution, but Intel will join the mid-tier GPU market in 2020, so AMD will be forced to compete. Nvidia's 7 nm Ampere will come in 2020-2021. It will be exciting.


----------



## ratirt (May 27, 2019)

Ibotibo01 said:


> I don't think so. RX 5700 is %10 faster than RTX 2070 in Strange Brigade but Radeon 7 is %20 faster than RTX 2080. So it won't be same with RTX 2070. I think it will be same with RTX 2060. RX 5800 will be same with RTX 2070.
> View attachment 123831


And I do think so.

Like I said, it depends on the game suite.



Ibotibo01 said:


> AMD's benchmarks
> View attachment 123832
> Real Benchmarks
> View attachment 123833View attachment 123834View attachment 123835


AMD showed benchmarks for 3 games. If you want to compare, maybe you should consider only those 3 games from TPU to be more accurate, not relative performance across the entire game suite. It always depends on the games picked. Of course AMD picked games their products are better at; NV does the same thing, and any company would. That's just obvious.



HTC said:


> It's impossible to compare because there's no RDNA 14nm card. If there were, then your statement would most likely be correct.
> 
> One game is too small of a sample size. How do we know if this RDNA arch favors the same games as GCN? For all we know, it could be the opposite and this game could be one of it's worst performers with the new arch. Unlikely, but not impossible.
> 
> There's simply WAY too little information to go by, @ this point in time.


Well, that really is a good point. RDNA is nothing like GCN, therefore we don't know how it will act in games.


----------



## Xuper (May 27, 2019)

AMD claims up to 50% better perf per watt. Vega 64 consumes 292 W (RTX 2060 = 165 W, RTX 2070 = 195 W). Vega 64 on 7 nm with the redesigned arch would be around 145 W in theory, but in reality probably between 160 W and 190 W; also take the 25% perf per clock into account. So in terms of perf per watt the RX 5700 will be around the Turing arch: AMD's 7 nm card will probably match Nvidia's 12 nm cards.

(Yes, yes, I know what happens if Nvidia moves to 7 nm!)


----------



## steen (May 27, 2019)

R0H1T said:


> That can't be right or at least for AMD's sake it ought to be Vega VII i.e. if they want to get anywhere in the mid range segment. The IPC is the same for all Vegas I guess, it mostly boils down to efficiency & if Navi is barely efficient to the level of VII then AMD might as well shut the RTG division for the time being!


Vega=GFX9, Navi=GFX10. They've ditched some Vega IP & re-used previous blocks. Until we see tech details, I'm not entirely convinced how different it really is. Work distribution is likely changed but I just can't see a ground up RTL rewrite for this gen. They might bifurcate their product line to graphics as a compute service (Vega) & a more fixed function (Navi), but it doesn't make sense given the gains made by recent console titles that are finally coding to the compute paradigm. I presume they're comparing IPC to Vega20, else node change alone mostly explains the gains. As mentioned above, transistor count/die size will be interesting.


----------



## kings (May 27, 2019)

Comparing the card in the best-case scenario, Strange Brigade... so in general it probably means it will fall between the RTX 2060 and RTX 2070...


----------



## Manoa (May 27, 2019)

What a waste of a card; better to not have made it at all.


----------



## Valantar (May 27, 2019)

Hm. This doesn't align all that well with previous rumors. AMD is saying this will be the basis for gaming for the coming decade. In other words, Arcturus (if that's even a thing) can clearly not be a major architectural overhaul. Then again, if they deliver a 25% IPC increase, it won't be needed anyhow.

I'm more excited for this than I thought I would be.


----------



## Xuper (May 27, 2019)

steen said:


> Vega=GFX9, Navi=GFX10. They've ditched some Vega IP & re-used previous blocks. Until we see tech details, I'm not entirely convinced how different it really is. Work distribution is likely changed but I just can't see a ground up RTL rewrite for this gen. They might bifurcate their product line to graphics as a compute service (Vega) & a more fixed function (Navi), but it doesn't make sense given the gains made by recent console titles that are finally coding to the compute paradigm. I presume they're comparing IPC to Vega20, else node change alone mostly explains the gains. As mentioned above, transistor count/die size will be interesting.


Navi probably uses the GCN ISA but with a different arch. Lisa said "RDNA is a ground-up redesign from GCN".



kings said:


> Comparing the card on the best case scenario Strange Brigade... so, in general it probably means it will fall between RTX 2060 and RTX 2070...


Pretty much, yes. But I heard a rumor about Nvidia releasing a new card, an RTX 2070 Ti or something; I think I read it somewhere.


----------



## Valantar (May 27, 2019)

Xuper said:


> Navi Probably uses the GCN ISA but with different arch.Lisa said "RDNA is a ground up redesign from GCN"
> 
> 
> pretty much yes.but i hear rumor about Nvidia releases new card like RTX2070 Ti , I think i read somewhere.


If so they'd need to use a cut-down 2080 die. That won't be cheap, and Nvidia's margins will still hurt.


----------



## bug (May 27, 2019)

Sadly, +50% perf/W doesn't close the gap to Nvidia.


----------



## Valantar (May 27, 2019)

bug said:


> Sadly, +50% perf/Wdoesn't close the gap to Nvidia


Depends on your benchmark. If Vega 56 is the starting point, it would bring AMD to 99% of the efficiency of the RTX 2070: 66 × 1.5 = 99. On the other hand, if Vega 64 is the benchmark, that's just 84%.
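A minimal sketch of the arithmetic above, assuming TPU-style relative perf/watt baselines (66% for Vega 56 and 56% for Vega 64, relative to the RTX 2070, as implied by the post) and AMD's claimed 1.5x uplift:

```python
# Relative-efficiency arithmetic: if a card sits at X% of the RTX 2070's
# perf/watt, a 1.5x uplift lands it at 1.5 * X%.
# The 66% / 56% baselines are the post's assumptions, not measured data.

uplift = 1.5
baselines = {"Vega 56": 66, "Vega 64": 56}  # % of RTX 2070 perf/watt

for card, pct in baselines.items():
    print(f"{card}: {pct}% -> {pct * uplift:.0f}%")
# Vega 56: 66% -> 99%
# Vega 64: 56% -> 84%
```

Which is why the claim looks very different depending on which Vega you take as the starting point.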


----------



## jabbadap (May 27, 2019)

Xuper said:


> Navi Probably uses the GCN ISA but with different arch.Lisa said "RDNA is a ground up redesign from GCN"
> 
> pretty much yes.but i hear rumor about Nvidia releases new card like RTX2070 Ti , I think i read somewhere.



Well yeah, Vega's CU was called the NCU and was still GCN. But other than that, a 1.25x clock-for-clock performance increase sounds very promising. The Pascal-to-Turing clock-for-clock performance increase was mostly inherited from concurrent INT32/FP32 math. All in all, that figure makes Navi as an arch a tad more interesting.

Edit: Had to look back; Vega's NCU was promised to be 2x performance per clock and 4x performance per watt (the devil is in the detail). So take it as you wish; I'm waiting for more concrete evidence.


Spoiler: Vega NCU promises.


----------



## Vayra86 (May 27, 2019)

jabbadap said:


> Well yeah, Vega was called NCU and was still GCN. But other than that 1.25 clock to clock performance increase sounds very promising. Pascal to Turing clock to clock performance increase were mostly inherit from concurrent int32 fp32 math. All in all that figure made Navi as a mach tad more interesting.



The 25% IPC and 50% perf/watt are probably a best-case Strange Brigade scenario versus a worst-case Vega scenario.

Also, the other twist here is the shader itself. Sure, it may get a lot faster, but if you get fewer of them, all you really have is some reshuffling that leads to no performance gain. Turing is a good example of that: perf per shader is up, but you get fewer shaders, and the end result is that, for example, a TU106 with 2304 shaders ends up alongside a GP104 that rocks 2560 shaders. It gets better: if you then defend your perf/watt figure by citing 'perf/watt per shader', it's not all too hard after all.

If it was across the board / averaged over many games, we would have seen those many games. Wishful thinking vs. realism... take your pick.

These slides are meaningless. Read between the lines.


----------



## Chomiq (May 27, 2019)

Ibotibo01 said:


> AMD's benchmarks
> View attachment 123832
> Real Benchmarks
> View attachment 123833View attachment 123834View attachment 123835


Let me help you with that whine:

https://tpucdn.com/reviews/AMD/Radeon_VII/images/battlefield-5_3840-2160.png
https://tpucdn.com/reviews/AMD/Radeon_VII/images/far-cry-5-3840-2160.png
https://tpucdn.com/reviews/AMD/Radeon_VII/images/strange-brigade-3840-2160.png

Did they lie?


----------



## steen (May 27, 2019)

Xuper said:


> Navi Probably uses the GCN ISA but with different arch.Lisa said "RDNA is a ground up redesign from GCN"



Yeah, except the uarch. Commits for gaming Navi are just as valid for improving the perf of a compute Navi. I don't see AMD changing the structure of their CUs from 64 ALUs; it would break scheduling/wavefronts. They mention improved CUs and a new cache hierarchy, but apart from an L0 tied to the CUs and more L2, I don't know what's different from Vega.



bug said:


> Sadly, +50% perf/Wdoesn't close the gap to Nvidia


By definition it does. Of course, we don't know what this means in practice: is it the chip, whole-card TDP, clock-for-clock, etc.? The 7 nm node isn't all beer and skittles given the increased density/smaller die. That's why Nv pulled the trigger on the optimized 12 nm and large dies. 7N+ will help, but density, electromigration, etc. are still there.



jabbadap said:


> Well yeah, Vega was called NCU and was still GCN. But other than that 1.25 clock to clock performance increase sounds very promising. Pascal to Turing clock to clock performance increase were mostly inherit from *concurrent int32 fp32 math*. All in all that figure made Navi as a mach tad more interesting.



Didn't you know? That's now called "async compute". 

TU concurrent int & fp is more flexible than just 32bit data types. Half floats & lower precision int ops can also be packed. Conceptually works well with VRS.


----------



## medi01 (May 27, 2019)

Divide Overflow said:


> Wait and see how well this actually performs in a full, hands on review though.





Ibotibo01 said:


> I don't think so. RX 5700 is %10 faster than RTX 2070



@btarunr
May I ask something about the *choice of games by TPU?*
I checked the "average gaming" difference between the VII and the 2080 on TPU and on computerbase.
TPU states nearly a 20% difference; computerbase states it's half of that.
Oh well, I think, different games, different results.

But then somebody did a 35-game comparison, *and the results match computerbase's results, but not TPU's.*

35 is quite a list. Is it time, perhaps, to re-think the choice of games to test?




Ibotibo01 said:


> Real Benchmarks


Of a different set of games.
Nice try.


----------



## EarthDog (May 27, 2019)

medi01 said:


> Is it time, perhaps, to re-think the choice of games to test?


That time has gone... there was even a thread on it a week or so back from W1zzard.


----------



## Mats (May 27, 2019)

Radeon DeoxyriboNucleic Acid? 

What a nice name.


----------



## bug (May 27, 2019)

Valantar said:


> Depends on your benchmark. If the Vega 56 is the starting point, it would bring AMD to 99% of the efficiency of the RTX 2070. 66*1,5=99. On the other hand, if the V64 was the benchmark, that's just 84%.



MSI GeForce GTX 1660 Ventus XS 6 GB Review (www.techpowerup.com):

> The MSI GTX 1660 Ventus XS is MSI's answer for people looking to maximize cost efficiency. Priced at the NVIDIA MSRP of $220, the card offers much better price/performance than AMD's RX 590 and even RX 580. Also included is an overclock out of the box and a backplate.

AMD is almost half as efficient as Nvidia today. +50% will not close that gap.


----------



## Valantar (May 27, 2019)

bug said:


> MSI GeForce GTX 1660 Ventus XS 6 GB Review
> 
> 
> The MSI GTX 1660 Ventus XS is MSI's answer for people looking to maximize cost efficiency. Priced at the NVIDIA MSRP of $220, the card offers much better price/performance than AMD's RX 590 and even RX 580. Also included is an overclock out of the box and a backplate.
> ...


My numbers were from the 2070 review. The 1660 is an odd comparison for a card that's meant to compete with much more powerful cards.


----------



## Aldain (May 27, 2019)

xkm1948 said:


> no real time ray tracing tho


Truly..who cares??


----------



## bug (May 27, 2019)

Valantar said:


> My numbers were from the 2070 review. The 1660 is an odd comparison for a card that's meant to compete with much more powerful cards.


I wasn't looking at a specific card, just at numbers put out by Nvidia vs AMD.


----------



## Valantar (May 27, 2019)

Vayra86 said:


> 25% IPC and 50% perf/watt is probably in the best-case Strange Brigade scenario versus the worst-case Vega scenario.


That sentence makes no sense unless you're implying that they're comparing numbers from different benchmarks, which ... well, would be bonkers. Vega (up until now) is no worst-case scenario for efficiency for AMD - it's entirely on par with Polaris if not a tad better.



Vayra86 said:


> Also, the other twist here is the shader itself. Sure, it may get a lot faster, but if you get a lower count of them, all you really have is some reshuffling that leads to no performance gain. Turing is a good example of that. Perf per shader is up, but you get less shaders and the end result is that for example a TU106 with 2304 shaders ends up alongside a GP104 that rocks 2560 shaders. It gets better, if you then defend your perf/watt figure by saying 'perf/watt per shader', its not all too hard after all.


But you're ignoring market segmentation and product pricing here. Fewer shaders with more performance/W/shader means cheaper dies and cheaper cards at lower power and equivalent performance, or higher performance at equivalent power. Overall, Turing gives you a significant increase in shaders per product segment; they just cranked up the pricing to 11 to match, sadly.


----------



## jabbadap (May 27, 2019)

steen said:


> Didn't you know? That's now called "async compute".
> 
> TU concurrent int & fp is more flexible than just 32bit data types. Half floats & lower precision int ops can also be packed. Conceptually works well with VRS.



Well, kind of true. Async compute is the capability of using the graphics queue and the compute queue at the same time; it really does not matter what precision we are speaking of.


----------



## londiste (May 27, 2019)

Vayra86 said:


> 25% IPC and 50% perf/watt is probably in the best-case Strange Brigade scenario versus the worst-case Vega scenario.


Perf/clock is 30 games at 4K Ultra settings with 4xAA (geomean?).
Perf/watt is Division 2 at 1440p Ultra settings.


			https://www.amd.com/en/press-releases/2019-05-26-amd-announces-next-generation-leadership-products-computex-2019-keynote
		



> AMD unveiled RDNA, the next foundational gaming architecture that was designed to drive the future of PC gaming, console, and cloud for years to come. With a new compute unit [10] design, RDNA is expected to deliver incredible performance, power and memory efficiency in a smaller package compared to the previous generation Graphics Core Next (GCN) architecture. It is projected to provide up to 1.25X higher performance-per-clock [11] and up to 1.5X higher performance-per-watt over GCN[12], enabling better gaming performance at lower power and reduced latency.
> ...
> 10. AMD APUs and GPUs based on the Graphics Core Next and RDNA architectures contain GPU Cores comprised of compute units, which are defined as 64 shaders (or stream processors) working together. GD-142
> 11. Testing done by AMD performance labs 5/23/19, showing a geomean of 1.25x per/clock across 30 different games @ 4K Ultra, 4xAA settings. Performance may vary based on use of latest drivers. RX-327
> 12. Testing done by AMD performance labs 5/23/19, using the Division 2 @ 25x14 Ultra settings. Performance may vary based on use of latest drivers. RX-325
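Footnote 11's "geomean of 1.25x per/clock across 30 games" is a geometric mean of per-game speedup ratios. A minimal sketch with invented placeholder ratios (not AMD's measured data):

```python
import math

# Geometric mean of per-game perf-per-clock ratios, as in AMD's footnote 11.
# The ratios below are made-up placeholders for illustration only.
ratios = [1.10, 1.32, 1.25, 1.18, 1.40]

geomean = math.prod(ratios) ** (1 / len(ratios))
print(f"geomean uplift ≈ {geomean:.3f}x")  # ≈ 1.246x
```

Unlike an arithmetic mean, the geomean is the standard way to average speedup ratios, since it treats a 1.25x gain and a 0.8x loss as cancelling out.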


----------



## Valantar (May 27, 2019)

bug said:


> I wasn't looking at a specific card, just at numbers put out by Nvidia vs AMD.


If so, then my numbers are just as valid as yours. That's the danger of dealing with relative percentages - you can get big changes when the underlying numbers change just a little. I have no doubt AMD wants to present themselves in as positive a light as possible, but you seem to be going the diametrically opposite route.


----------



## jabbadap (May 27, 2019)

londiste said:


> Perf/clock is 30 games at 4K Ultra settings with 4xAA (geomean?).
> Perf/watt is Division 2 at 1440p Ultra settings.
> 
> 
> https://www.amd.com/en/press-releases/2019-05-26-amd-announces-next-generation-leadership-products-computex-2019-keynote



4K Ultra with 4xAA on a 14 nm Vega-class(?) GPU... wonder what kind of FPS numbers they are getting...


----------



## medi01 (May 27, 2019)

bug said:


> +50% will not close that gap.


Simply undervolting the VII without losing any perf beats a bunch of NVDA cards, including the 2080.

There is a gap, but it's smaller than one thinks (especially when checking for it on sites favoring green games, like TPU does).



EarthDog said:


> That time has gone...there was even a thread on it too a week or so back from wizard.


----------



## xkm1948 (May 27, 2019)

Aldain said:


> Truly..who cares??



A lot, well, except AMD fanboiz.

When you are late to the party, you'd better bring more stuff. If the RX 5700 matches the RTX line on performance, it had better be priced well; otherwise the lacking feature set will hurt them in the eyes of the general public.


----------



## londiste (May 27, 2019)

jabbadap said:


> 4K Ultra with 4xAA 14nm vega class? gpu, wonder what kind of FPS numbers are they getting...


"Previous generation GCN" might not even be Vega, considering this will be the successor to Polaris.


----------



## M2B (May 27, 2019)

The Radeon VII is using HBM2, which is much more efficient than the GDDR6 memory on Nvidia cards. (It uses around 30-35 W less power, if I'm not mistaken.)
You're comparing graphics cards to graphics cards, not one GPU with another.


----------



## londiste (May 27, 2019)

medi01 said:


> Simply undervolting VII without losing any perf beats a bunch of NVDA cards, including 2080:


*YMMV
Computerbase got one of the good ones, it would seem. There have been far worse examples in both review sites and retail.


----------



## bug (May 27, 2019)

Valantar said:


> If so, then my numbers are just as valid as yours. That's the danger of dealing with relative percentages - you can get big changes when the underlying numbers change just a little. I have no doubt AMD wants to present themselves in as positive a light as possible, but you seem to be going the diametrically opposite route.


I'm not sure how you read that graph, but this is how I do it:
1. Half of Nvidia's cards are in the 90-100% relative efficiency range.
2. AMD cards are generally at 50% or less relative efficiency. Vega 56 does better, at 60%. Radeon VII does even better at 68%, but that's already on 7nm.

If I take the best case scenario, Vega 56 and add 50% to that, it still puts AMD at 90% of the most efficient Nvidia card. And Nvidia is still on 12nm.
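Spelled out as arithmetic (the 60% baseline is the chart's relative figure; the 1.5x factor is AMD's claim, so treat this as a sketch, not a measurement):

```python
# Relative efficiency, normalized to the most efficient NVIDIA card = 100%.
# The 0.60 baseline is the chart figure cited above; 1.5x is AMD's claimed
# perf-per-watt uplift for Navi over GCN.
vega56_rel_eff = 0.60
projected = vega56_rel_eff * 1.5  # apply the claimed +50% uplift

print(f"Projected Navi efficiency: {projected:.0%} of the best Turing card")
```

With that baseline the projection lands at 90%, i.e. still short of the most efficient Turing card, which is the point being made.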


----------



## Darmok N Jalad (May 27, 2019)

I wonder how much PCIe 4.0 is at play here, and is the RX 5700 the best they have, or is it the most efficient? It seems like there could be a 5800, but then why wouldn't they lead off with that?


----------



## InVasMani (May 27, 2019)

londiste said:


> Strange Brigade for comparison is meaningless. The game is known to lean anywhere from 10-20% towards AMD GPUs. We will have to wait for July to see how they really stack up.
> Both 1.25x "IPC" as well as 1.5x power efficiency sound really good, that should bring Navi up to par with Turing, hopefully a little ahead considering it is on 7nm.


Well, if it's comparable to the RTX 2070 at a somewhat lower price point, that's not bad. The real question is how Navi/RDNA pairs with Zen 2/X570 and CrossFire. If a more cut-down, cheaper version of the RX 5700 in CrossFire turned out a lot more cost-effective than an RTX 2080, for example, that would shake things up. I'd like to hope that most of the negative aspects of CrossFire are largely eliminated with PCIe 4.0 for a two- or even three-card setup, but who knows. I'd certainly hope so. Time will tell how these things pan out.



Darmok N Jalad said:


> I wonder how much PCIe 4.0 is at play here, and is RX 5700 the best they have, or is the most efficient? It seems like there could be a 5800, but then why wouldn’t they lead off with that?


Perhaps it needs more binning; 7 nm is still relatively new, so give it some time. As for why they wouldn't lead with it: perhaps the TDP gets a bit steep once you push the frequency higher than where they've already set it.


----------



## londiste (May 27, 2019)

We do not know the price point. Leaks/rumors put it at $499.

Why would CrossFire suddenly be better than it has been so far? Bandwidth is not the main problem, and even then the increase from PCIe 3.0 to 4.0 would not alleviate the need for communication that much. On the other side, bidirectional 100 GB/s did not really make that noticeable a difference either.


----------



## steen (May 27, 2019)

medi01 said:


> @btarunr
> May I ask something about the *choice of games by TPU?*
> So I check "average gaming" diff between VII and 2080 on TPU and computerbase.
> TPU states nearly 20% diff, computerbase states it's half of that.
> ...



It's a simple hierarchy. Top dozen or so tend to favor AMD, bottom dozen favor Nvidia. Pick the games to get the result you want. Test setup/procedure/settings/areas tested can make a difference. Of course, TU104 tends to be more effective than Vega20 in the chart below.









jabbadap said:


> *Well kind of true*, async compute is capability of using graphics queue and compute queue at the same time.



You're being generous.  Your definition is fine ofc. (Or multiple queues). Not really directed at you anyway. I kept seeing it in other threads where concurrent int/fp=async compute.



> It really does not matter what precision are we speaking.



Exactly correct, nor is it defined by the ability to pack int/fp in the graphics pipeline.

There's another interesting "fine wine" effect for Vega. With Win10 (1803 IIRC) MS started promoting DX 11.0 games on GCN to DX12 feature level 11.1 that enabled the HW schedulers so should result in better perf than release under Win7/8.


----------



## medi01 (May 27, 2019)

steen said:


> It's a simple hierarchy. Top dozen or so tend to favor AMD, bottom dozen favor Nvidia. Pick the games to get the result you want.


Thanks for linking a chart showing a perf difference *TWO TIMES SMALLER* than TPU's.
Somehow computerbase managed to pick a more balanced, smaller set of games that matches 35-ish-game test results.


----------



## Vayra86 (May 27, 2019)

Valantar said:


> But you're ignoring market segmentation and product pricing here. Less shaders with more performance/w/shader means cheaper dies and cheaper cards at lower power and equivalent performance or higher performance at equivalent power. Overall Turing gives you a significant increase in shaders per product segment - they just cranked up the pricing to 11 to match, sadly.



Yes... and AMD is going to follow suit, so the net gain is zero for a consumer.



londiste said:


> Perf/clock is 30 games at 4K Ultra settings with 4xAA (geomean?).
> Perf/watt is Division 2 at 1440p Ultra settings.
> 
> 
> https://www.amd.com/en/press-releases/2019-05-26-amd-announces-next-generation-leadership-products-computex-2019-keynote



That's nice but this is still AMD's little black box we're looking at, and based on history I'm using truckloads of salt with that. Especially when it comes to their GPUs. Still... there is hope, then, I guess 



medi01 said:


> Thanks for linking a chart showing a perf difference *TWO TIMES SMALLER* than TPU's.
> Somehow computerbase managed to pick a more balanced, smaller set of games that matches 35-ish-game test results.



The relative number of games optimized towards Nvidia cards is way higher, so any 'representative' benchmark suite; as in, representative wrt the engines and games _on the marketplace_, is always going to favor Nvidia. But that still provides the most informative review/result, because gamers don't buy games based on the brand of their GPU.

What it really means and what you're actually saying is: AMD should be optimizing a far wider range of games instead of focusing on the handful that they get to run well. That is why AMD lost the DX11 race as well - too much looking at the horizon and how new APIs would save their ass, while Nvidia fine tuned around DX11.


----------



## InVasMani (May 27, 2019)

Latency decreases since you can push twice as much bandwidth in each direction. AMD themselves cited reduced latency, higher bandwidth, and lower power, and literally all of those things would benefit CrossFire; a cut-down version might even improve further if they can raise overall efficiency while salvaging imperfect dies by disabling parts of them. I don't know why CrossFire wouldn't be improved a bit, but how much of an improvement is tough to say definitively. I would think the micro-stutter would be lessened quite a bit for a two-card setup, and even a three-card setup, though less dramatically in the latter case, while a quad-card setup would "in theory" be identical to a two-card one, for PCIe 4.0 at least.


----------



## medi01 (May 27, 2019)

xkm1948 said:


> ...lacking of feature set will hurt them...


Such as G-Sync
Oh, hold on...

Nobody cares about yet another NVDA "only me" solution; it needs major support across the board to amount to anything more than gimmicks developed in a handful of games just because NVDA paid for them.

At this point it is obvious whose chips are going to power the next gen of major consoles (historically, "it's not about graphics" Nintendo opting for NVDA's dead mobile platform chip is almost an insult in this context, with even multiplat games mostly avoiding porting to it).


----------



## londiste (May 27, 2019)

@InVasMani latency and bandwidth are not necessarily tied together.



medi01 said:


> Nobody cares about yet another NVDA "only me" solution, it needs to get major support across the board to get to anything, but gimmicks developed in a handful of games just because NVDA paid them for it.


You mean something standard like, say... DXR?


----------



## medi01 (May 27, 2019)

londiste said:


> You mean something standard like, say... DXR?


I remember that; my point still stands. (Remind me, why is it a proprietary vendor extension in Vulkan?)
NVDA was cooking something for years, found time when competition was absent in the highest end, spilled the beans.
Intel/AMD would need to agree that DXR approach is at all viable or the best from their POV.

Crytek has shown one doesn't even need dedicated (20-24% of Turing die) HW to do the RT gimmick:


----------



## steen (May 27, 2019)

medi01 said:


> Somehow computerbase managed to pick a more balanced, smaller set of games that matches 35-ish-game test results.



Did you read the qualifier: 





steen said:


> Top dozen or so tend to favor AMD, bottom dozen favor Nvidia. *Pick the games to get the result you want. Test setup/procedure/settings/areas tested can make a difference.*



Do you really want sites to pick "balanced" games only for testing? Think carefully.



InVasMani said:


> Latency decreases since you can push twice as much bandwidth in each direction to and from. AMD themselves said it themselves reduced latency, higher bandwidth, lower power. Literally all of those things would benefit crossfire



With the advent of modern game code/shading/post processing techniques, classic SLI/Xfire has to be built into the engines from the ground up. It's just a coding/profiling nightmare. DX12 mGPU is theoretically doable but tends to have performance regression & very little scales well.


----------



## londiste (May 27, 2019)

medi01 said:


> I remember that, my point still stands. (remind me, why it is a proprietary vendor extension in Vulkan)
> NVDA was cooking something for years, found time when competition was absent in the highest end, spilled the beans.
> Intel/AMD would need to agree that DXR approach is at all viable or the best from their POV.
> 
> Crytek has shown one doesn't even need dedicated (20-24% of Turing die) HW to do the RT gimmick:


Because Vulkan is ruled by a committee and the way features are introduced has historically been through vendor-specific extensions, first at experimental stage, then simply an extension and then they will see what they end up doing with it. By the way, Wolfenstein Youngblood was announced to come with real-time raytracing effects, probably the first new game using these NV_RT extensions.

According to their own roadmap, we will see Crytek's implementation live in version 5.7 of the engine in early 2020. They have said DXR etc. is being considered and likely to be implemented for performance reasons.

Neon Noir does run on Vega56 in real time but 1080p at 30fps. This is, incidentally, the same frame rate GTX1080 can run Battlefield V with DXR enabled. RT effects in these two are pretty comparable - ray-traced reflections are used in both.


----------



## Vayra86 (May 27, 2019)

medi01 said:


> I remember that, my point still stands.
> NVDA was cooking something for years, found time when competition was absent in the highest end, spilled the beans.
> Intel/AMD would need to agree that DXR approach is at all viable or the best from their POV.
> 
> Crytek has shown one doesn't even need dedicated (20-24% of Turing die) HW to do the RT gimmick:



But we always knew that, the question was performance versus visual gain. Crytek has also explained how they do it, and it is not specific to anything AMD either, so using this as an example for anything is simply offtopic. What you're linking is their updated CryEngine and what it can do, and it has _nothing to do with RTX,_ or DXR. But DXR will still potentially expand the possibilities of the tech they use in CryEngine, and it will do that, again, regardless of GPU; the question is how the GPU will make use of what DXR has to offer.


----------



## steen (May 27, 2019)

Vayra86 said:


> What it really means and what you're actually saying is: AMD should be optimizing a far wider range of games instead of focusing on the handful that they get to run well. That is why AMD lost the DX11 race as well - too much looking at the horizon and how new APIs would save their ass, while Nvidia fine tuned around DX11.



My DX12_11.1 GCN anecdote would've fit better here. MS did (some of) the work for them. By the way, how many gfx/compute/DMA queues should AMD be optimizing games for?


----------



## Deleted member 158293 (May 27, 2019)

What I found most interesting on the GPU front is to see how much AMD completely controls the gaming development ecosystem.


----------



## Vayra86 (May 27, 2019)

steen said:


> My DX12_11.1 GCN anecdote would've fit better here. MS did (some of) the work for them. By the way, how many gfx/compute/DMA queues should AMD be optimizing games for?



At least half of them, so they don't get their ass kicked in every random comparison.


----------



## steen (May 27, 2019)

londiste said:


> Because Vulkan is ruled by a committee and the way features are introduced has historically been through vendor-specific extensions


Better than cap bits.



> Neon Noir does run on Vega56 in real time but 1080p at 30fps. This is, incidentally, the same frame rate GTX1080 can run Battlefield V with DXR enabled. RT effects in these two are pretty comparable - ray-traced reflections are used in both.


Isn't that like saying my car has four wheels, so it must be a Ferrari?



Vayra86 said:


> At least half of them, so they don't get their ass kicked in every random comparison.


Trick Q.  No $b = no on-site engineers, or at least no dev evangelists to other than a few AAA studios. Their fault totally ofc. They've even had both consoles stitched up.


----------



## GoldenX (May 27, 2019)

So we finally leave GCN behind? Man, I was wrong then. I hope this RDNA (horrible name) brings lower CPU overhead at the driver level.
Expect dropped driver support for GCN in two years, and heavy fine-wine memes while drivers mature.


----------



## kings (May 27, 2019)

yakk said:


> What I found most interesting on the GPU front is to see how much AMD completely controls the gaming development ecosystem.



Control what? Most games still run better on Nvidia hardware, and Nvidia features are still much more widely adopted than AMD's (see the Primitive Shaders and Rapid Packed Math failures, for example).

Since 2013, when we learned that AMD was going to equip the consoles with their chips, people have prophesied the death of Nvidia, with AMD from then on only gaining ground until it dominated in performance.

Well, 6 years later, here we are and AMD is still struggling to keep up!


----------



## londiste (May 27, 2019)

AMD is definitely more in the picture with game development these days. While I am not sure how much help either IHV actually provides to developers, AMD is much, much more visible right now, with the situation largely reversed from the TWIMTBP days.


----------



## bug (May 27, 2019)

GoldenX said:


> So we leave GCN finally behind? Man I was wrong then. I hope this RDNA (horrible name) brings lower CPU overhead at the driver level.
> Expect droped driver support for GCN in two years, and heavy fine wine memes while drivers mature.


About that overhead: when you go async-heavy, overhead goes up.
That's why async doesn't stand on its own: it needs to speed up the processing enough to offset that overhead.


----------



## Deleted member 158293 (May 27, 2019)

kings said:


> Control what? Most of the games still runs better on Nvidia hardware, Nvidia features are still much more adopted than AMD´s (see the Primitive Shader and Rapid Path Math failure, for example).
> 
> Since 2013, when we learned that AMD was going to equip the consoles with their chips, people prophesied the death of Nvidia and AMD from then on would only gain ground until they will dominate in performance.
> 
> Well, 6 years later here we are and AMD is still struggling to keep up!



Not about performance...

It's about guiding the development of the entire gaming industry across all gaming studios with Microsoft, Sony, Apple's upcoming gaming service, and now Google too.  AMD is tailoring everything to themselves as a hub calling the shots.

Business wise that is very impressive.


----------



## HD64G (May 27, 2019)

I hope most here have already understood what the slides show. +25% IPC means that the comparison between the 5700 and Vega 64, with no core clocks mentioned, gives Navi a +25% advantage in performance while being 50% more efficient at the same time. To make things simple, if we put those numbers on the diagrams from the latest @W1zzard GPU test, the 5700 sits exactly between the RTX 2070 and the Radeon VII and consumes about 200 W. If the price is good, that will be a great product. As for real-time ray tracing, no GPU yet has the power to run that feature maxed out at a constant 60+ FPS at high resolution. So, for 2020, big Navi might be the one for that.
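As a back-of-the-envelope sketch of how the two claimed multipliers combine (the 295 W Vega 64 board power is my assumption, not from the slides):

```python
# Combine AMD's two claimed multipliers: +25% performance per clock and
# +50% performance per watt. The Vega 64 board power is an assumed baseline.
vega64_power_w = 295.0
perf_uplift = 1.25   # claimed performance-per-clock uplift
ppw_uplift = 1.50    # claimed performance-per-watt uplift

# Power scales with performance delivered divided by the efficiency gain.
navi_power_w = vega64_power_w * perf_uplift / ppw_uplift
print(f"Projected power for +25% performance: {navi_power_w:.0f} W")
```

With these assumed inputs the arithmetic lands nearer 250 W; the actual figure depends on clocks, binning, and which GCN baseline AMD used.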


----------



## Nima (May 27, 2019)

medi01 said:


> Thanks for linking a chart showing a perf difference *TWO TIMES SMALLER* than TPU's.
> Somehow computerbase managed to pick a more balanced, smaller set of games that matches 35-ish-game test results.


Can you read the charts, or do I have to read them for you? The performance difference is nearly the same: a 9% difference for Techspot vs. 10% for TPU. Where did you get that "*TWO TIMES SMALLER than TPU*"?


----------



## GoldenX (May 27, 2019)

bug said:


> About that overhead: when you go async-heavy, overhead goes up.
> That's why async doesn't stand on its own: it needs to speed up the processing enough to offset that overhead.


Yeah, let's see how that turns out on release drivers. 
I also hope that we can finally get a proper OpenGL driver.


----------



## medi01 (May 27, 2019)

steen said:


> Do you really want sites to pick "balanced" games only for testing? Think carefully.


I've stated it 2 times, yet you literally miss the point.

A - picks a handful of games, does test, arrives at X%
B - picks a handful of games, does test, arrives at 2*X%
C - picks A LOT of games, does test, arrives at X%
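To make the A/B/C point concrete, here's a toy geomean with made-up per-game performance ratios (purely illustrative numbers, not real benchmark data):

```python
from statistics import geometric_mean

# Hypothetical per-game performance ratios (card X / card Y), invented purely
# to show how the chosen subset moves the headline average.
ratios = {
    "game_a": 1.25, "game_b": 1.20, "game_c": 1.15,  # favor card X
    "game_d": 1.02, "game_e": 0.98,                  # roughly even
    "game_f": 0.90, "game_g": 0.85,                  # favor card Y
}

full_suite = geometric_mean(ratios.values())
cherry_picked = geometric_mean([ratios[g] for g in ("game_a", "game_b", "game_c")])

print(f"Full suite geomean:    {full_suite:.3f}")   # close to parity
print(f"Cherry-picked geomean: {cherry_picked:.3f}")  # ~20% lead
```

Same cards, same numbers; only the game list changed, and the "result" roughly doubled. That is why the wider suite (C) is the sanity check.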



londiste said:


> Because Vulkan is ruled by a committee and the way features are introduced has historically been through vendor-specific extensions, first at experimental stage, then simply an extension and then they will see what they end up doing with it.


Well, if there is no diff between wider set / subset, subset is good, I stand corrected. (did criss cross resolution comparison, values are different)



Vayra86 said:


> But we always knew that, the question was performance versus visual gain. Crytek has also explained how they do it, and it is not specific to anything AMD either, so using this as an example for anything is simply offtopic.


Where does that almost surreal "it's gotta be dedicated HW" weirdo thought come from?
Crytek demoed that we can have the RT gimmick right there, with current tech.




londiste said:


> Because Vulkan is ruled by a committee and the way features are introduced has historically been through vendor-specific extensions, first at experimental stage, then simply an extension and then they will see what they end up doing with it.


In other words "see, if it gets adopted first", which kinda makes sense, doesn't it?


----------



## B-Real (May 27, 2019)

Looking forward to the pricing. It would be nice to see them undercut NV prices (the opposite of initial Vega pricing).



Ibotibo01 said:


> If this prices was true, It would be too expensive.
> AMD tested to Strange Brigade which is AMD's DX12 game. For example, in this game RX 570 is faster than GTX 1660. Also RX 580 is same with GTX 1660 Ti. This is certainly AMD's strategy. I think RTX 2060 is faster than RX 5700 in Nvidia's games such as Witcher 3 (also AC Odyssey). I disappointed for AMD's Computex. In addition, I don't like Ryzen 7's 8 cores 16 threads. I hope that AMD will release R7 12 cores 24 threads.
> I don't like this gen(Ryzen 2 and RX Navi) maybe i will buy Ryzen 4000.
> High end AMD GPU's
> ...


What? 


			https://www.guru3d.com/index.php?ct=articles&action=file&id=43327
		


Vega 56 trades blows with the GTX 1070 Ti, and in general, it's nearly GTX 1070Ti performance-wise. In Witcher 3, RX 580 is about 5% faster than the GTX 1060. No idea why you said that, but it's your problem.


----------



## medi01 (May 27, 2019)

Ibotibo01 said:


> Med-Low tier GPU's
> RX3060=GTX 1650



You realize even the two-year-old 570 wipes the floor with the 1650, don't you?
If you don't, do not worry, neither do millions of 1050/1050Ti's users.


----------



## jabbadap (May 27, 2019)

HD64G said:


> I hope most here have already undrestood what the slides show. +25% IPC means that the comparison between 5700 vs Vega64 without core clocks mentioned gives a +25% advantage in performance to Navi, being 50% more efficient at the same time. To make things simple, if we put those numbers on the diagrams from the latest @W1zzard GPU test 5700 sits exactly between the 2070 and the Radeon7 and consumes about 200W. If price is good, that will be a great product. As for Real time tracing, not any GPU has the power yet to allow that feature maxed out to run constantly over 60FPS in big resolution. So, for 2020 the big Navi might be the one for that.



It's one SKU of the RX 5700 series; note the plural. So in translation there will likely be a couple of SKUs in that series, e.g. an RX 5770 and RX 5750, or an RX 5700 XT and RX 5700 Pro.


----------



## Ibotibo01 (May 27, 2019)

Chomiq said:


> Let me help you with that wine:
> 
> 
> https://tpucdn.com/reviews/AMD/Radeon_VII/images/battlefield-5_3840-2160.png
> ...


I didn't say they lied, but this is a sales strategy. They only used 3 games, and they said the Radeon VII is on par with the RTX 2080. Well, what about other games?

@medi01 In most of the benchmarks, the RTX 2080 is faster than the Radeon VII. *Yes, it depends on the games*


medi01 said:


> You realize even 2 years old 570 wipes the floor with 1650, don't you?
> If you don't, do not worry, neither do millions of 1050/1050Ti's users.


The GTX 1650 is a fast card for entry level. The RX 570's normal sale price is 169 dollars. Is the GTX 1650 overpriced? Yes. It should be 119 dollars, because it is Nvidia's entry-level card.


B-Real said:


> Vega 56 trades blows with the GTX 1070 Ti, and in general, it's nearly GTX 1070Ti performance-wise. In Witcher 3, RX 580 is about 5% faster than the GTX 1060. No idea why you said that, but it's your problem.


Oh well.




Vega 56 doesn't match the GTX 1070 Ti. Its performance sits between the GTX 1070 and GTX 1070 Ti. It depends on which games you are playing.

All in all, I'm neither an Nvidia fanboy nor an AMD fanboy. I'm expecting more performance per price from AMD, but people are lying about AMD (that the R7 3000 series has 12 cores, or RTX 2070 performance for 250 dollars). I'm confused by the rumours.


----------



## Valantar (May 27, 2019)

bug said:


> I'm not sure how you read that graph, but this is how I do it:
> 1. Half of Nvidia's cards are in the 90-100% relative efficiency range.
> 2. AMD cards are generally at 50% or less relative efficiency. Vega 56 does better, at 60%. Radeon VII does even better at 68%, but that's already on 7nm.
> 
> If I take the best case scenario, Vega 56 and add 50% to that, it still puts AMD at 90% of the most efficient Nvidia card. And Nvidia is still on 12nm.


You chose a card that performs a few % better per watt than most Turing cards, which also shifts AMD's averages down; that's kind of odd when you say you're not interested in looking at specific cards. Even with that, the Vega 56 was at 62%, and 62 × 1.5 = 93. That's pretty darn close. Of course, the V64 was slower at 54%, for which a 50% increase would give 81%. That's a lot worse for a very small difference in the baseline. If we look at one of the more average (and similar in performance) Turing cards, like the 2070, the result for the V56 is 99%. Which is why talking about multiples of percentages is a minefield. Unless you are _very_ explicit about your baseline, test conditions, and what you're comparing, you're going to confuse people more than clarify anything.
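To make the baseline sensitivity concrete (the percentages are the relative-efficiency figures discussed above; the 1.5x multiplier is AMD's claim):

```python
# Same +50% perf/W claim applied to three slightly different baselines.
# Baselines are relative efficiency vs. a chosen Turing reference card.
cases = (
    ("Vega 56 vs best Turing", 0.62),
    ("Vega 64 vs best Turing", 0.54),
    ("Vega 56 vs RTX 2070",    0.66),
)

for name, baseline in cases:
    # A small shift in the baseline swings the projected result widely.
    print(f"{name}: {baseline:.0%} -> {baseline * 1.5:.0%}")
```

An 8-point spread in the baseline (54% vs. 62%) turns into a 12-point spread in the projection (81% vs. 93%), which is the minefield in action.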


InVasMani said:


> Latency decreases since you can push twice as much bandwidth in each direction to and from. AMD themselves said it themselves reduced latency, higher bandwidth, lower power. Literally all of those things would benefit crossfire and on a cut down version might even improve if they can improve overall efficiency in the process while salvaging imperfect die's by disabling parts of them. I don't know why Crossfire wouldn't be improved a bit, but how much of a improvement is tough to say definitively. I would think the micro stutter would be lessened quite a bit for a two card setup and even a three card setup though less dramatically in the latter case while a quad card setup would "in theory" be identical to a two card one for PCIE 4.0 at least.


That is only true if bandwidth is already maxed out, leading to a bottleneck. Other than that, increasing bandwidth does not necessarily relate to latency whatsoever. The cars on your highway don't go faster if you add more lanes but keep the speed limit the same. Now, I haven't read the PCIe 4.0 spec, so I don't know if they're also reducing latency, but of course they might. It still doesn't relate to bandwidth, though.


----------



## Vayra86 (May 27, 2019)

medi01 said:


> Where does that almost surreal "it's gotta be dedicated HW" weirdo thought come from?
> Crytek demoed we can have RT gimmick right there, with current tech.



Hey look, you and me agree on this, I'm no fan either of large GPU die percentages dedicated to just RT performance; but with the facts available to us now, we also have a few things to deal with...

- RT / Tensor core implementation in Turing has a much higher perf/watt potential and absolute performance potential than any other implementation today.
- Turing shows us it can be done in tandem with a full fat GPU, within a limited power budget.
- RTX / DXR will and can be used to _speed up the things you see in the Crytek demo._
.... now that last point is an important one. It means Nvidia, with a hardware solution, is likely to be faster in the usage of tech you saw in that Crytek demo. After all, part of the dedicated hardware has increased efficiency at doing a piece of the workload, which leaves TDP budget for the rest to run as usual. With a software implementation that runs on the 'entire' GPU, a hypothetical AMD GPU might offer a similar performance peak for non-RT gaming (the normal die at work) but it can never be faster at doing both in tandem.

End result, Nvidia with that weirdo thought wins again.

The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. And that is an answer nobody has, but Turing so far isn't going like hotcakes, which is a sign. In that sense, if we can see in-game, live footage of that Crytek implementation adding to visual quality at minimal performance cost, that is the real game changer. A tech demo is just that: a showcase of _potential._ But you can't sell potential.

I think the more interesting development with hardware solutions for RT is how well it can be utilized for other tasks. That will make RT adoption easier. Nvidia tried something with DLSS, but that takes too much effort.


----------



## medi01 (May 27, 2019)

Vayra86 said:


> - RT / Tensor core implementation in Turing has a much higher perf/watt potential and absolute performance potential than any other implementation today.


That's a generic "specialized hardware does things faster" statement, and, well, yes.
E.g. AES decryption.



Vayra86 said:


> - RTX / DXR will and can be used to _speed up the things you see in the Crytek demo._


No, and that's the point.
DXR works with different structures, Crytek is voxel based, DXR is not.
So there goes the "could be used" aspect of it, because, wait for it, "specialized hardware" is not known for being flexible.



Vayra86 said:


> The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. And that is an answer nobody has, but Turing so far isn't going like hotcakes, which is a sign. In that sense,


We can have all those visuals today, with a helluva lot of shader work; the main point of the RT gimmick (and it's nothing beyond that, for F sake, most of RT-ing is denoising at this point) is to achieve reflections/illumination/shadows with less effort.

For game developers to do it, one simply needs a large enough "RT user base". And this is why Crytek's take on the problem is so much better than NVDA's.


----------



## M2B (May 27, 2019)

Ray tracing goes beyond simplifying game development in the way you'd like to believe.
It takes ages and ages to achieve a similar level of accuracy with traditional rendering techniques, especially in open-world and more complex games; thus, in reality, you're never going to see RT-level realism and accuracy in actual games without RT in use.

Also, Crytek stated that they're going to use the RT cores on Turing cards for better performance in the future.

One day, 70%~ of PC users will have an RTX card; GTX is going to die sooner or later. That's when developers will think twice about whether to bother with an RT implementation in general; and of course you'd have to be stupid not to use the relatively free performance that RT cores offer.


----------



## Vayra86 (May 27, 2019)

medi01 said:


> That's a generic "specialized hardware does things faster" statement, and, well, yes.
> E.g .AES decryption.
> 
> 
> ...



Okay buddy, whatever you want to disagree on, I'll agree to. I suppose you know better than what sources have shown thus far.

Also, why always so mad?


----------



## Minus Infinity (May 28, 2019)

FordGT90Concept said:


> It was announced in January, too late to put into Navi.  Arcturus might have it.
> 
> 
> I want to know how many transistors it has.



Then why do the LG 2019 C9 TVs have HDMI 2.1? How did they manage to get that done for an already-released product? They even announced 2.1 support 6 months ago.


----------



## GoldenX (May 28, 2019)

It might come in a future update.


----------



## FordGT90Concept (May 28, 2019)

Minus Infinity said:


> Then why do the LG 2019 C9 TV's have HDMI 2.1. How did they manage to get that done for an already released product? They even announced 2.1 support 6 months ago.


New GPU architectures take about 5 years to go from design to commercial product.  HDMI 2.1 has a lot of significant changes in it that were too much to accommodate too late in development.  The most important features (like VRR) AMD already does over HDMI 2 to displays that support it.

The decoder chips in TVs are vastly simpler than GPUs.


Edit: If this is the TV you're talking about it looks like the only HDMI 2.1 feature they added was eARC...which is simple.


----------



## StudMuffin (May 28, 2019)

LocutusH said:


> I dont see HDMI 2.1? Why?


I don't think it really matters as long as the HDTV is a half-way decent set with HDR, etc. Most mid-range-and-up 2018/2019 4K HDTVs do HDR and true 120 Hz. What people want is 4K resolution at 120 Hz that maintains 4:4:4 chroma subsampling and stays smooth. If that's the case, you'll get the important gaming benefits of HDMI 2.1 using the current HDMI 2.0, again, as long as the 4K HDTV is decent. The only reason a gamer should care about having a GPU with HDMI 2.1 is if they want to run their games at 8K at 60/120 fps, and that's a pipe dream right now lol.

So if a gamer wants their PC connected to a 4K HDTV running games at 4K at up to 120 Hz while keeping 4:4:4 subsampling, HDR, etc., and staying silky smooth, that can be done with HDMI 2.0, as long as the set is decent and has a true 120 Hz panel. Some 4K sets claim 120 Hz but honestly it's really 60 Hz; still, most mid-range 4K sets from the last few years, and certainly this year, can do this.


----------



## TechLord (May 28, 2019)

Navi 5700 will be 10%-15% faster than the RTX 2070.

Navi 5800 will be 10%-15% faster than the RTX 2080.

Navi 5900 will be 10%-15% faster than the RTX 2080 Ti, and the 5900 series will have 7nm+ Arcturus hardware-accelerated ray tracing; that specific GPU will launch mid-2020.


Navi 5700 will be their entry-level, mid-range Navi GPU, beating a 2070 when it launches this July; then in 2020 we'll get the 5800 & 5900 GPUs, which will beat the 2080 & 2080 Ti respectively.


The 5700 & 5800 PC parts will be standard off-the-shelf GPUs without any hardware-accelerated ray-tracing features baked in; however, the next-gen consoles will get fully custom Navi-based GPUs with the 5900's hardware-accelerated 7nm+ Arcturus ray-tracing features built into the silicon, as these next-gen consoles will be the real deal, Holyfield, people.


Xbox Anaconda will use the full custom 5900 GPU with advanced Arcturus ray-tracing features....

PS5 will use a custom 5800 GPU with advanced Arcturus ray-tracing features....

Xbox Lockhart will use a custom 5700 GPU with advanced Arcturus ray-tracing features....


These consoles will perform far above the +10% figures of the PC parts above, thanks to deep console-level optimization and closed-loop, ultra-low-level, to-the-metal console APIs.


These console versions of Navi will be custom parts which are far more advanced than the Pc parts........


Next-gen consoles Xbox Anaconda & PS5 will absolutely murder the performance of an RTX 2070-level PC: Anaconda will be faster than an RTX 2080 Ti, PS5 will be faster than an RTX 2080, and Xbox Lockhart will be faster than an RTX 2070.


----------



## londiste (May 28, 2019)

Vayra86 said:


> I'm no fan either of large GPU die percentages dedicated to just RT performance


That percentage is surprisingly small.


Vayra86 said:


> The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. And that is an answer nobody has, but Turing so far isn't going like hotcakes, which is a sign. In that sense, if we can see in-game, live footage of that Crytek implementation adding to visual quality at minimal performance cost, that is the real game changer.


As I noted above, what Vega56 does in Neon Noir, GTX1080 can do in Battlefield V. This is the level of performance expected today from RT running on shaders.


Vayra86 said:


> I think the more interesting development with hardware solutions for RT is how well it can be utilized for other tasks. That will make RT adoption easier. Nvidia tried something with DLSS, but that takes too much effort.


DLSS is not RT at all. Nvidia tried denoising on Tensor cores, which is similar to but distinct from DLSS, and they have been suspiciously quiet about whether any actual games use it thus far.


medi01 said:


> DXR works with different structures, Crytek is voxel based, DXR is not.
> So there goes the "could be used" aspect of it, because, wait for it, "specialized hardware" is not known for being flexible.


BVH traversal can be run on RT cores for voxels. Intersection would probably need meshing of the voxels.


medi01 said:


> We can have all those visuals today, with helluva shader work, the main point of RT gimmick (and it's nothing beyond it, for F sake, most of RT-ing is denoising at this point) is to achieve reflections/illumination/shade with smaller effort.


I think you underestimate how much smaller this effort is.


medi01 said:


> For game developers to do it, one simply needs to have large enough "RT user base". And this is why Crytek's take on the problem is so much better than NVDA's.


You do realize that "Nvidia's approach" will give a considerable speed boost to "Crytek's take"? The truth of the matter is that there are no different approaches as such. This is all ray tracing.


StudMuffin said:


> I don't think it really matters as long as the HDTV is a half-way decent HDTV with HDR,etc..most of these 2018/2019 4K HDTV's that are mid range and up do HDR and true 120hz, what people want and like is 4K resolution at 120Hz, and maintains your 4:4:4 subsampling and is still plenty smooth.... If that is the case..You'll be able to get all those important benefits of HDMI 2.1 using the current HDMI 2.0 . You can get these important "gaming features" through current HDMI 2.0, again, as long as the 4K HDTV is decent. The only reason a gamer should care about having a GPU with HDMI 2.1 is if they are wanting to run their games at 8K at 60fpz/120fps and thats a pipe dream right now lol.


HDMI 2.0 cannot do 4k@120Hz, much less at 4:4:4 and VRR.
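The arithmetic behind that is easy to sanity-check. A rough sketch (hypothetical helper; it ignores blanking intervals and assumes 8-bit colour with TMDS 8b/10b encoding overhead, so real requirements are a bit higher still):

```python
def required_gbps(width, height, refresh_hz, bits_per_px=24, encoding_overhead=1.25):
    """Approximate on-the-wire bandwidth for uncompressed video.

    Ignores blanking intervals; the 1.25x factor models TMDS 8b/10b encoding.
    """
    pixel_rate = width * height * refresh_hz            # pixels per second
    return pixel_rate * bits_per_px * encoding_overhead / 1e9

HDMI_2_0_MAX_GBPS = 18.0  # total TMDS rate of HDMI 2.0, incl. encoding overhead

need = required_gbps(3840, 2160, 120)  # 4K @ 120 Hz, 4:4:4, 8 bpc
print(f"4K120 4:4:4 needs ~{need:.1f} Gbit/s vs HDMI 2.0's {HDMI_2_0_MAX_GBPS}")
```

Even before blanking overhead, 4K120 at 4:4:4 lands near 30 Gbit/s, well past HDMI 2.0's 18 Gbit/s link; HDMI 2.1's 48 Gbit/s FRL link is what makes that mode feasible.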


----------



## Prima.Vera (May 28, 2019)

Still cannot beat the 1080Ti?? /facepalm


----------



## FordGT90Concept (May 28, 2019)

It's a mid-range chip.  It was never intended to be top dog.  Radeon VII remains AMD's top product until Arcturus debuts.

I suspect Navi only has 8-10 billion transistors, compared to 18.5 billion in the RTX 2080 Ti and 13.5 billion in Radeon VII. The fact that it is knocking on Radeon VII's door with GDDR6 and far fewer transistors is a testament to RDNA's design.


----------



## Valantar (May 28, 2019)

TechLord said:


> Navi 5700 will be 10%-15% faster than Rtx 2070
> 
> Navi 5800 will be 10%-15% faster than Rtx 2080
> 
> ...


Gotta love it when brand-new accounts come in and spew heaps of incredibly optimistic and entirely unsourced speculation. The most trustworthy information there is.

I'm hopeful for Navi, but this is just silly.


----------



## Vayra86 (May 28, 2019)

TechLord said:


> Navi 5700 will be 10%-15% faster than Rtx 2070
> 
> Navi 5800 will be 10%-15% faster than Rtx 2080
> 
> ...



The Lord has spoken!


----------



## Ibotibo01 (May 28, 2019)

TechLord said:


> Navi 5700 will be 10%-15% faster than Rtx 2070
> 
> Navi 5800 will be 10%-15% faster than Rtx 2080
> 
> ...


No, that won't happen. Sapphire leaked Navi's pricing (5700 and 5800), and AMD's top GPU is still the Radeon VII. The Navi 5700 is 10% faster than the RTX 2070 in Strange Brigade. Radeon VII is already 20-25% faster than the RTX 2080 in Strange Brigade, but the RTX 2080 is faster than Radeon VII in other games like The Witcher 3, AC Odyssey and Rainbow Six Siege.
My expectations:
RX 5700 = RTX 2060 + 7%
RX 5800 = RTX 2070
RX 5900 = RTX 2070 + 10%
RX 5900 XT = RTX 2080


----------



## Valantar (May 28, 2019)

Vayra86 said:


> The Lord has spoken!


The Lord _hath_ spoken.

Can't have inaccuracies on the forums, jeez.


----------



## medi01 (May 28, 2019)

londiste said:


> You do realize that "Nvidia's approach" will give a considerable speed boost to "Crytek's take"?



I need to see the receipts.



Vayra86 said:


> I suppose you know better than what sources have shown thus far.


I suppose you have sources that others don't, don't hesitate to show them.

PS
And why always so outraged?


----------



## Frick (May 28, 2019)

Valantar said:


> The Lord _hath_ spoken.
> 
> Can't have inaccuracies on the forums, jeez.



This really depends on if you're quoting the Bible or Händel.

Anyway, is this brand new or not? I see many conflicting arguments. Me, I'm cautiously optimistic, which in this context I define as "might perform almost as well as they say in a best-case scenario".


----------



## Vayra86 (May 28, 2019)

medi01 said:


> I suppose you have sources that others don't, don't hesitate to show them.



Sure thing.

CRYENGINE | How we made Neon Noir - Ray Traced Reflections in CRYENGINE and more!
Ahead of GDC 2019, we revealed Neon Noir, a research and development project showcasing real-time mesh ray traced reflections and refractions created with an advanced new version of CRYENGINE's Total Illumination real-time lighting solution. Needless to say, we did receive a lot of questions...
www.cryengine.com

_"However, RTX will allow the effects to run at a higher resolution. At the moment on GTX 1080, we usually compute reflections and refractions at half-screen resolution. RTX will probably allow full-screen 4k resolution. It will also help us to have more dynamic elements in the scene, whereas currently, we have some limitations. Broadly speaking, RTX will not allow new features in CRYENGINE, but it will enable better performance and more details. "_

As for "being outraged"... I think that says more about your misguided interpretation of the world around you than anything else. The fact that you're not aware of the above source speaks volumes. There is no outrage here.
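The half-resolution detail in that quote is the key cost lever: halving the resolution on each axis quarters the primary ray count. A trivial sketch (hypothetical helper, assuming one reflection ray per pixel):

```python
def rays_per_frame(width, height, scale=1.0, rays_per_pixel=1):
    # Tracing at a fraction of screen resolution cuts ray count quadratically.
    return int(width * scale) * int(height * scale) * rays_per_pixel

full = rays_per_frame(3840, 2160)             # full-resolution 4K reflections
half = rays_per_frame(3840, 2160, scale=0.5)  # half-resolution, as on the GTX 1080
print(full // half)  # → 4
```

That roughly 4x ray saving is the gap the CRYENGINE team expects RT hardware to close when they talk about full-resolution reflections on RTX.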


----------



## Manoa (May 28, 2019)

when Crysis is doing reflections and refractions at half resolution, you know the game is over


----------



## medi01 (May 28, 2019)

Vayra86 said:


> "However, RTX will allow the effects to run at a higher resolution. At the moment on GTX 1080, we usually compute reflections and refractions at half-screen resolution. RTX will *probably *allow full-screen 4k resolution. It will also help us to have more dynamic elements in the scene, whereas currently, we have some limitations. Broadly speaking,* RTX will not allow new features in CRYENGINE*, *but it will enable better performance and more details*. "



Nice (emphasis mine).


As for 'being mad'... I think that's more up to your misguided interpretation of the world around you than anything else. There is no "being mad" here.



Manoa said:


> when crysis doing refleractions in half resolution, you know the game is over


Because the RTX gimmick is... cough.






RTX shadows are actually far worse than what is shown above; that's why heavy denoising is an inherent part of it.
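To illustrate why denoising is inherent at low sample counts: a one-sample-per-pixel ray-traced shadow mask is binary noise, and a spatial filter trades that noise for blur. A deliberately crude box-filter sketch (hypothetical; real-time denoisers are far smarter spatio-temporal filters):

```python
def box_denoise(img, radius=1):
    """Average each pixel with its neighbours -- the crudest spatial denoiser.

    Real-time ray tracers use far smarter spatio-temporal filters, but the
    principle is the same: trade a little blur for much less sample noise.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

# A "1 sample per pixel" shadow mask: each pixel is fully lit or fully dark.
noisy = [[1.0, 0.0, 1.0],
         [0.0, 1.0, 0.0],
         [1.0, 0.0, 1.0]]
smooth = box_denoise(noisy)
print(smooth[1][1])  # centre pixel now averages its 3x3 neighbourhood
```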


----------



## Vayra86 (May 28, 2019)

medi01 said:


> Nice (emphasis mine).



Indeed






> There is no "being mad" here.



Indeed





If you're not mad, stop swearing. Beyond that, it's clear you're just looking for things to disagree on; have fun with that.


----------



## medi01 (May 28, 2019)

Vayra86 said:


> If you're not mad, stop swearing


The post you replied to with that snide "mad" comment doesn't contain a single uncensored word; besides, one can use such words without "being mad".
Why did you react like that? BH? Sigh.

And for the other part, perhaps RTX cores are not as "specialized" as someone wants us to believe.


----------



## Vayra86 (May 28, 2019)

medi01 said:


> The post you replied to with that snide "mad" comment doesn't contain a single uncensored word; besides, one can use such words without "being mad".
> Why did you react like that? BH? Sigh.
> 
> And for the other part, perhaps RTX cores are not as "specialized" as someone wants us to believe.



Scroll through your general tone of voice on this forum and maybe the answer will reveal itself to you. Nice way to deflect from the core of your incorrect statements over the last few pages, though. I'm done with you.


----------



## medi01 (May 28, 2019)

Vayra86 said:


> Scroll through your general tone of voice on this forum and maybe the answer will reveal itself to you


You complained about a "mad" comment and "swearing" in the context of a post that had neither. That is your misguided interpretation of the world around you, multiplied by some tribalism, more than anything.

I don't have any feelings (in either direction) about anyone posting on this forum.



Vayra86 said:


> I'm done with you.


I don't think I care enough to remember you tomorrow. So, uh, well, just don't reply.


----------



## londiste (May 28, 2019)

medi01 said:


> And for the other part, perhaps RTX cores are not as "specialized" as someone wants us to believe.


RT Cores do BVH traversal and intersection testing. They are specialized in what they do, but what they do are operations useful to practically all ray-tracing implementations.
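For readers wondering what "BVH traversal and intersection testing" actually means, here is a toy, hypothetical pure-Python sketch of both operations; the fixed-function RT cores do the equivalent work in hardware, this just illustrates the idea:

```python
import math

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: does a ray starting at `origin` intersect the box?"""
    tmin, tmax = 0.0, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not (lo <= o <= hi):  # parallel to this slab and outside it
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

class Node:
    def __init__(self, box_min, box_max, children=(), primitive=None):
        self.box_min, self.box_max = box_min, box_max
        self.children, self.primitive = children, primitive

def traverse(node, origin, direction):
    """Depth-first BVH walk; returns primitives whose boxes the ray hits."""
    stack, hits = [node], []
    while stack:
        n = stack.pop()
        if not ray_aabb_hit(origin, direction, n.box_min, n.box_max):
            continue  # prune the whole subtree: the point of a BVH
        if n.primitive is not None:
            hits.append(n.primitive)
        stack.extend(n.children)
    return hits

# Tiny two-leaf tree: one box in the ray's path, one far off to the side.
leaf_a = Node((1, -1, -1), (2, 1, 1), primitive="a")
leaf_b = Node((1, 5, 5), (2, 6, 6), primitive="b")
root = Node((1, -1, -1), (2, 6, 6), children=(leaf_a, leaf_b))

print(traverse(root, (0, 0, 0), (1, 0, 0)))  # → ['a']
```

A real implementation would test against triangles inside the leaf boxes and track the nearest hit distance, but the box test plus tree walk above is the core loop that RT cores accelerate.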


----------



## ShurikN (May 28, 2019)

I'm really liking the look of this Taichi prototype (minus the RGB)


----------



## bug (May 28, 2019)

londiste said:


> RT Cores do BVH Traversal and intersection testing. Specialized in what they do but what they do are operations useful to practically all raytracing implementations.


Let him be. He's grasping at straws, in denial that AMD is missing out on yet another graphics advancement.


----------



## jabbadap (May 28, 2019)

ShurikN said:


> I'm really liking the look of this Taichi prototype (minus the RGB)
> 
> View attachment 123927



A spinning ring of fire, or are my eyes deceiving me? A tad long card, though; that much cooling could mean a Radeon VII-class TDP product.

Are they finally doing their own design, though? Their RX 500-series and Vega cards were mostly subpar products.


----------



## Vindicator (May 28, 2019)

FordGT90Concept said:


> It was announced in January, too late to put into Navi.  Arcturus might have it.
> 
> I want to know how many transistors it has.











HDMI 2.1 Announced: Supports 8Kp60, Dynamic HDR, New Color Spaces, New 48G Cable
www.anandtech.com
				



It was announced publicly in January 2017, nearly 2½ years ago, and that's just the public announcement; who knows how long it was developed and discussed behind the scenes before that. I'm extremely bummed that HDMI 2.1 apparently isn't supported by these cards. I expected the PCIe 4.0 announcement to be the perfect time for AMD to really jump ahead with their GPU feature support. Such a missed opportunity, imo.


----------



## Casecutter (May 28, 2019)

So let me get this straight: this RX 5700 could be the spiritual successor to AMD's "70" series (aka the 570), normally built by gelding the full mainstream silicon?

Sure, Strange Brigade is a "ringer" and an AMD architecture poster child, but it's also a good all-around projection of Vulkan/DX12 "API overhead" capabilities, which I'm sure is why AMD leads with it. They're promoting the obvious: there are game engines out there that can unleash their particular architecture design. Nothing wrong with that...

It is interesting that what has been AMD's second-tier mainstream offering is working over (sure, in one title) a part Nvidia has promoted as more-or-less entry enthusiast, while AMD's "70-series" parts have been tasked as entry mainstream, more akin today to the GTX 1660. If the RX 5700 actually ends up 20% behind a 2070, that still puts it somewhere between the Vega 56 and 64, and I'd wager a full-die RX 5800 is still out there.

I figured Computex was just the top-level design and architecture keynote, and honestly I'm *not* taking the "marketing jargon" or any of this at face value. That said, I think AMD has a good blueprint and executed this Navi reveal with a fairly clear-cut strategy while holding to schedule (we'll wait to see how well they can fill the channel). They'll have more information to drop at E3 (June 11-13), but I don't think we'll learn a lot in the three weeks before the actual NDA lift on July 7th.


----------



## GoldenX (May 28, 2019)

Looks like it's a "GCN compatible" arch; so, it's GCN with new stuff.


----------



## medi01 (May 28, 2019)

GoldenX said:


> Looks like it's a "GCN compatible" arch, so, it's GCN, with new stuff.


One should be crazy to dump GCN as ISA.


----------



## Vayra86 (May 28, 2019)

Casecutter said:


> So let me get this straight this RX 5700 could be the spiritual predecessor to the AMD "70" Series (Aka 570) that normally built from the gelding of the full-mainstream silicon?
> 
> Sure Strange Brigade is a "ringer" and AMD architecture poster child, but also a good all around projection of the capabilities of Vulkcan/DX12 "API Overhead" and I'm sure why AMD leads with it.  They're promoting the obvious in that there's gaming engines out there that can unleash their particular architecture design.  Nothing wrong about that...
> 
> ...



No, there is no AMD "70" series in the sense you see with Nvidia. AMD released _Polaris_ and then made incremental updates to that design, mostly trading more power for more perf. Polaris was designed as a midrange chip from the get-go. _Vega_ fed the upper half of their stack; development-wise it's disconnected from what happens with Polaris. Of course features exist across both product lines, but the design is not the same: Vega has HBM, Polaris does not; Vega got other improvements, Polaris did not.

This also means your interpretation of AMD's naming scheme isn't correct. Since the release of the RX 480 there have been new names, but most of that has been rebrands or very minor improvements. Navi's new naming has no real relation to performance or place in the stack; it's just a look at Nvidia's lineup and a slot-in at the matching number. There will be a bigger Navi, but what it will do is a mystery, and AMD no longer has a structure you can rely on in their product stack. Gone are the HD-xx50 / xx70 days.

The key point being, we have no idea how the bigger chip will perform.

With Nvidia, prior to Turing (and even now, really), while they use multiple SKUs, these are almost all straight scaled versions of each other. Sometimes trickery is applied (asymmetrical VRAM setups, usually found in the midrange, and not just on the GTX 970: Fermi and Kepler had them too), but you won't see a split halfway up the stack using radically different tech. Turing is the exception with its RT components.


----------



## HenrySomeone (May 28, 2019)

cucker tarlson said:


> strange brigade only? it must be really,really bad.


Yup - it'll be way behind the 2070 in real life, probably behind the 2060 in many games as well, and almost always when both are OCed, while costing more and having a lot higher power draw too, on 7 nm no less, lmao! AMD RTG living up to its name once again: Another Massive Disappointment in Real Time Graphics.


----------



## GoldenX (May 28, 2019)

medi01 said:


> One should be crazy to dump GCN as ISA.





			https://gpuopen.com/compute-product/amd-gcn3-isa-architecture-manual/
		

It would throw a lot of work out of the window.


----------



## HenrySomeone (May 28, 2019)

Well, all their GPU work over the last couple of years is realistically only fit to throw out the window anyway, so no big loss there.


----------



## Casecutter (May 28, 2019)

Vayra86 said:


> No, there is no AMD '70' series as you see it with Nvidia.


I was asking that as a question...

I agree that Nvidia's 70 series has always been in a completely different product stack. Sure, we can't say the RX 5700 is akin to what has been the mainstream gelded chip/offering, aka the RX 570, R7 270, 7850, but what if that's what it is?

I believe we have no idea where the "RX 5700" aligns in AMD's product stack, or whether it's supposed to be a part that actually contests Nvidia's "entry enthusiast" offering. The number means nothing; it's just a placeholder for some version of Navi that scrimmages with a 2070 in Strange Brigade... It means nothing until we know.


----------



## Vayra86 (May 28, 2019)

Casecutter said:


> I was asking that as a question...
> 
> I agree the Nvidia 70 Series has always been in a completely different Product stack.  While sure we can't say that the RX 5700 is more "a-kin" to what has been the a mainstream gelding size chip/offering; aka RX 570, R7 270, 7850, but what if that what is?
> 
> I believe we have no idea where the "RX 5700" aligns in AMD's product stack, or if it's suppose to be a part that actually contests Nvidia's "entry enthusiast" offering.   The number means nothing it just a place holder that some version of a Navi that scrimmages a 2070 in Strange Bridge... It means nothing until we know.



Yes, I think the same, and sorry for misinterpreting your question as a conclusion 

There is really no telling. I'm quite sure they can pull a bigger Navi 20 out of this node that performs a good margin above this one, but how big a margin? 30%? 50%? Even an optimistic scenario would put them only slightly under or over 2080 Ti performance. On the other hand, we haven't seen any AMD GPU surpass GTX 1080 Ti performance, and that card has been out for quite some time now. So far even Navi stalls at roughly the same perf level as Vega 56. In reality, all we've really seen thus far is rebadged Vega performance; even Radeon VII is just a Vega shrink. Navi's biggest achievement is the move to GDDR6.

This is the pessimistic version of AMD's roadmap, though, and it's based on what we've seen over the past few years. Given Zen's success, who knows, things may get better.


----------



## Valantar (May 28, 2019)

Frick said:


> This really depends on if you're quoting the Bible or Händel.
> 
> Anyway. Is this brand new or not? I see many conflicting arguments. Me I'm cautiously optimistic, in this context defined as "might perform almost as good as they say in a best case scenario".


Whichever one says I'm right, obviously 

For the second part, I guess we'll know in a couple of weeks? I've got my fingers crossed.


----------



## bug (May 28, 2019)

Vindicator said:


> HDMI 2.1 Announced: Supports 8Kp60, Dynamic HDR, New Color Spaces, New 48G Cable
> 
> 
> 
> ...


It was announced in January, but it wasn't set in stone until November that year. Still, it could have been implemented.
The thing is, HDMI 2.1 comes with VRR, and since that's probably incompatible with whatever magic AMD used to implement its own VRR over HDMI, it could be a problem to support.


----------



## Casecutter (May 28, 2019)

Casecutter said:


> It is interesting that what's been AMD second tier mainstream offering,


I think this is where I went off course; I should have said, "It _would_ be interesting to see *what could be* AMD's second-tier mainstream offering."



Vayra86 said:


> And even an optimistic scenario would give them only slightly under or over 2080ti performance. But on the other hand, we haven't seen any AMD GPU surpass GTX 1080ti performance and that card is out there for quite some time now.


I'm not expecting any "big" Navi meant to contest Nvidia's top-shelf pro-enthusiast offerings. I think they'll work to gain market share with, at some point, two Navis that make the best use of their 7 nm wafer starts and yields; the bigger the chip, the worse those get. AMD has sidelined that "upper echelon" pursuit and will bench it for, say, another year, using Arcturus (or whatever comes next) to get back into that segment.


----------



## FordGT90Concept (May 28, 2019)

bug said:


> It was announced in January, but it wasn't set in stone until November that year. Still, it could have been implemented.
> The thing is, HDMI 2.1 comes with VRR. And since that's probably incompatible with whatever magic AMD worked to implement their own VRR over HDMI, it could be a problem to implement.


There's a lot of sticking points on the GPU side:
-"Ultra High Speed"--GPU has to be able to produce 48 Gbps signal (Navi doesn't target that market).
-Dynamic HDR--don't think DisplayPort supports this.  It will take a lot of R&D to implement.
-Enhanced Audio Return Channel--not sure how difficult this is to implement.  Dolby isn't exactly popular on computers: there's a huge preference for uncompressed PCM, which is lossless.  It might also require paying Dolby, which could mean NVIDIA/AMD/Intel will never be compliant here.

I think VRR, at least for AMD, is an easy one.  They can probably make it HDMI 2.1 compliant with a driver patch because GCN apparently has a lot of granularity control over its HDMI protocol.  Everything else is theoretically pretty easy (low latency) or already done (DSC).


Remember, GPUs in general are usually quite a ways behind TVs in implementing HDMI standards.  HDMI was always designed to put the burden of design on the source, not the destination.


----------



## Vindicator (May 28, 2019)

FordGT90Concept said:


> There's a lot of sticking points on the GPU side:
> -"Ultra High Speed"--GPU has to be able to produce 48 Gbps signal (Navi doesn't target that market).
> -Dynamic HDR--don't think DisplayPort supports this.  It will take a lot of R&D to implement.
> -Enhanced Audio Return Channel--not sure how difficult this is to implement.  Dolby isn't exactly popular on computers: huge preference towards uncompressed PCM which is lossless.  It might require paying Dolby too which could mean NVIDIA/AMD/Intel will never be compliant here.
> ...


I'm looking at this a very different way.  I fully believe they can do it but are holding out to justify selling cards down the road that have little to no performance increase.

Only one of the 4 HDMI 2.1 connectors on the 2019 OLEDs has eARC compatibility, which should mean this doesn't have to be on the list for GPU manufacturers to support 2.1.
Worried about 48 Gbit/s? Well, that fancy PCIe 4.0 connector can do 256 Gbit/s, so I highly doubt bandwidth is the problem holding this back.
HDR is already supported, and Windows doesn't yet differentiate between HDR flavours, so given that it's software-level, it makes sense this could arrive as a software update; the connector itself wouldn't be held back in the meantime.

GPUs, in the past, have been far, far ahead of what the vast majority of hardware on the market was capable of. I still remember my TNT2 Ultra supporting 240 Hz, and that was in the late 90s. It could also do 1920x1200. In the 90s. The first 1080p TVs (that I remember) came out just before the PS3 in 2006, which means GPUs were at least 7-8 years ahead of TVs back then.


----------



## FordGT90Concept (May 29, 2019)

Vindicator said:


> I'm looking at this a very different way.  I fully believe they can do it but are holding out to justify selling cards down the road that have little to no performance increase.


Look how long it took AMD to implement HDMI 2.0. People were disappointed that Fiji shipped with HDMI 1.4; Polaris (June 29, 2016) was the first to support HDMI 2.0, 2.75 years after the specification was released (September 4, 2013).



Vindicator said:


> Worried about 48gbits per second?  Well that fancy PCIE 4.0 connector can do 256gbits/second so I hightly doubt the bandwidth is the problem holding this back.


They are completely unrelated technologies.



Vindicator said:


> GPUs, in the past, have been far far far ahead of what the vast majority of hardware on the market is capable of.


On the DisplayPort side, yes, because DisplayPort puts GPU design first.  HDMI puts display and media manufacturers first which makes GPU support convoluted.



Vindicator said:


> I still remember my TNT2 Ultra that supported 240hz and that was in the late 90s.  It could also do 1920x1200.  In the 90s.


VGA (analog) didn't have hard limits like digital signals do today.  TVs were all built to NTSC or PAL standards which were basically 4:3 with 480 or 625 interlaced scan lines (respectively)...  It was the drive to digital ATSC/DVB that created HDMI.

Again, Arcturus will most likely support HDMI 2.1.  Navi will not.


----------



## GoldenX (May 29, 2019)

And that's why VGA is the best output.


----------



## EarthDog (May 29, 2019)

GoldenX said:


> And that's why VGA is the best output.


lol, for a potato resolution.

I don't think it can do much over 2K (2048x1080)... pretty sure it can't reach 2560x1440 at 60 Hz?


----------



## FordGT90Concept (May 29, 2019)

GoldenX said:


> And that's why VGA is the best output.


For CRTs, yes; for LCDs, no. The VGA input on an LCD has to approximate everything. A 4K analog image on an LCD would lose its sharpness; the signal simply can't convey that much data with the necessary clarity, so everything gets muddy.

CRTs were never digital in the first place so the signal had to be converted to analog at some point (either in the GPU or in the display).

There's no reason an 8K CRT couldn't be made today that accepts a DisplayPort or HDMI connector and converts it to analog with an internal RAMDAC. It would look better than sending analog over a VGA cable anyway, because of less noise.


----------



## GoldenX (May 29, 2019)

It was sarcasm...
Anyway, I don't expect it, but those Zen+ APUs could have Navi inside.


----------



## eldakka (May 29, 2019)

FordGT90Concept said:


> It was announced in January, too late to put into Navi.  Arcturus might have it.
> 
> 
> I want to know how many transistors it has.



What is the "it" you refer to?

If you mean HDMI 2.1, it was announced in January *2017* and released in November the same year.

HDMI 2.1 TVs are on the market right now - LG "9" series.


----------



## Valantar (May 29, 2019)

Vindicator said:


> I'm looking at this a very different way.  I fully believe they can do it but are holding out to justify selling cards down the road that have little to no performance increase.


Has anyone in the world ever bought a new GPU with a 0% performance increase just because it has a new display output? That seems like a particularly silly idea.


bug said:


> It was announced in January, but it wasn't set in stone until November that year. Still, it could have been implemented.
> The thing is, HDMI 2.1 comes with VRR. And since that's probably incompatible with whatever magic AMD worked to implement their own VRR over HDMI, it could be a problem to implement.


VRR in HDMI 2.1 is AFAIK an adaptation of the VESA DisplayPort Adaptive-Sync standard. "FS over HDMI" should already be compliant with this; at worst it'll need a driver update.

And as Ford said above, the time from a new HDMI standard is launched until it reaches PC hardware has always been very long. That seems to be how the HDMI consortium works.


GoldenX said:


> It was sarcasm...
> Anyway, I don't expect it, but those Zen+ APUs could have Navi inside.


No, they don't. They're already out in laptops. The die is known, the GPU spec is known, and it's Vega 10 with a clock bump. If they were Navi, this would show in drivers (in particular: in needing entirely bespoke drivers). Of course, the fact that they haven't launched the desktop APUs yet makes me slightly hopeful that they'll just hold off until the next generation of MCM APUs are ready some time in (very) late 2019 or early 2020 - once there's a known good Navi die that will fit the package available in sufficient quantities that it won't gimp GPU sales. Frankly, I'd prefer that over a clock-bumped 3200G/3400G. Maybe they could even bring the model names and CPU architectures in line by doing this?


----------



## FordGT90Concept (May 29, 2019)

3200G/3400G are on 12nm, yeah?  Makes sense that a Zen 2 would get a Navi GPU in either 3300G/3500G or bumping it up to 4200G/4400G on 7 nm.  PS5's CPU practically already is this without SMT (8-core with Navi) and probably with a different memory architecture (probably 16 GiB GDDR6).


----------



## Valantar (May 29, 2019)

FordGT90Concept said:


> 3200G/3400G are on 12nm, yeah?  Makes sense that a Zen 2 would get a Navi GPU in either 3300G/3500G or bumping it up to 4200G/4400G on 7 nm.  PS5's CPU practically already is this without SMT (8-core with Navi) and probably with a different memory architecture (probably 16 GiB GDDR6).


Well, technically 3200G/3400G don't exist (yet?), but the mobile 3000-series APUs are all 12nm Zen+. It would sure be interesting if they launched them as low-end options, and then surprised us with, say, an R5 3500G (6c12t + Navi 16-20?) and R7 3700G (8c16t + Navi 20-24?) later in the year. I doubt we'd see these before B550 motherboards, though, as most X570 boards seem to lack display outputs.


----------



## P4-630 (May 31, 2019)

*AMD Radeon RX 5000 is hybrid with elements of GCN - "pure" RDNA only in 2020 - Sweclockers *

https://www.reddit.com/r/Amd/comments/but81o

AMD Radeon RX 5000 is a hybrid with elements of GCN – "pure" RDNA only in 2020
Despite pledges that the Radeon RX 5000 series is based on a completely new architecture, it will be a hybrid between Radeon DNA and the aging Graphics Core Next...
www.sweclockers.com

Navi die:

Computex: "AMD's RX 5000 series will be a hybrid of GCN and RDNA, 'pure' RDNA comes with Navi 20" - update
The Swedish website SweClockers reportedly spoke with undisclosed sources about the chips of the upcoming mid-range Navi video cards...
nl.hardware.info


----------



## Valantar (May 31, 2019)

P4-630 said:


> *AMD Radeon RX 5000 is hybrid with elements of GCN - "pure" RDNA only in 2020 - Sweclockers *
> 
> __
> https://www.reddit.com/r/Amd/comments/but81o
> ...


Interesting!

Worth clarifying for the non-Swedophones(?) out there: according to this, Navi 20 ("big Navi") is supposed to be "pure" RDNA, and launch in early 2020. In other words, this is not a "half now, half next generation" situation as the title might make it seem. Still odd to make a hybrid like this, but I guess the architectures are modular enough to plug-and-play the relevant blocks on a driver level as well. This also clarifies the kinda-weird mismatch between RDNA being the architecture for "gaming in the next decade" while there being a "next-gen" arch on the roadmaps for 2020.

I wonder what implications this might have for performance and driver support. One might assume that these first cards will lose driver support earlier, but then again considering how prevalent GCN is I can't see that being for another 5 years or so anyway, by which time they'll be entirely obsolete. Performance enhancements and driver tuning might taper off more quickly, though, unless the relevant parts are RDNA and not GCN.


----------



## Casecutter (May 31, 2019)

This is not surprising. I think "big Navi" was always getting the "Next-Gen" architecture; it's just that they've now made RDNA the term for the gaming-centric arrangement of the individual nucleotides (building blocks) that make up that "Next-Gen" architecture. Next-Gen is an all-encompassing idea of such building blocks, not one architecture that fits all. I see AMD/RTG splitting the professional products onto a separate strand that makes use of whichever nucleotides offer the best HPC, AI, or whatever the task requires. These nucleotides (bits and pieces) can also go down to the console market and APUs.

Remember, AMD/RTG is setting itself up against a huge onslaught, and not just Nvidia or the enthusiast gaming market, which is just a pittance for profits. Compare that to what they might miss out on if Intel makes strides into all these various emerging markets, while later even unlocking consoles and delivering their own true APUs. My hope is that Raja and the other folks brought in by Intel were compartmentalized and didn't have a full understanding of the vision Lisa Su (upper management) has developed since mid-2016, but I think AMD has had that strategic overview compromised.









Intel's ex-AMD and Nvidia hires show where its GPU concerns lie
Here are all the most important Intel hires made in the last 18 months
www.pcgamesn.com


----------



## GoldenX (May 31, 2019)

So, RDNA is compatible with the GCN ISA. I lost all hope of a proper OpenGL driver.


----------



## John Naylor (May 31, 2019)

Yawwwwn .... based upon recent years, I have learned that until I see test results here on TPU and elsewhere, it's not real.   These announcements and single game testing never live up to the hype.


----------



## Valantar (May 31, 2019)

John Naylor said:


> Yawwwwn .... based upon recent years, I have learned that until I see test results here on TPU and elsewhere, it's not real.   These announcements and single game testing never live up to the hype.


We obviously need reviews, but there's little reason to suspect that AMD's engineering team can't come up with a more powerful and efficient architecture when aiming for a blank-slate design - it just takes time. This has been on the roadmaps for a few years already, so the arch is probably 4-5 years in the making, with efforts intensifying in the past couple. I just hope they've taken the necessary time to make it stick on the first try. Perhaps that was the reason for Navi being late? If so, I sure don't mind.


----------



## bug (May 31, 2019)

P4-630 said:


> *AMD Radeon RX 5000 is hybrid with elements of GCN - "pure" RDNA only in 2020 - Sweclockers *
> 
> __
> https://www.reddit.com/r/Amd/comments/but81o
> ...


If true, this takes Rebrandeon to the next level: different architectures not only within the same product line, but within the same chip line.


----------



## Casecutter (May 31, 2019)

Why is it that we get one data point out of, say, 90 (30 games × 3 resolutions), and treat that as all we need to call it hyperbole?


----------



## bug (May 31, 2019)

Casecutter said:


> Why is it we get one data point of say 90 (30 games x 3 resolutions) and we have all we need to see it as allegement of hyperbole.


Leaked info this close to a launch usually depicts a best-case scenario, that's why. It's not foolproof, but it's an educated guess.


----------



## GoldenX (May 31, 2019)

bug said:


> If true, this takes Rebrandeon to the next level: different architectures not only within the same product line, but within the same chip line.


Then Pascal is the same thing as Maxwell, and Turing is Pascal with RTX on top.


----------



## bug (May 31, 2019)

GoldenX said:


> Then Pascal is the same thing as Maxwell, and Turing is Pascal with RTX on top.


What?


----------



## GoldenX (May 31, 2019)

bug said:


> What?


It's not a rebrand if you keep things from the old specs to maintain compatibility. Also, G92 wants its rebrand meme back.

So, no numbers on the 5700, it sounds like it will be another boring launch.


----------



## bug (May 31, 2019)

GoldenX said:


> It's not a rebrand if you keep things from the old specs to maintain compatibility. Also, G92 wants its rebrand meme back.


Oh crap, selective memory strikes again.


----------



## GoldenX (May 31, 2019)

bug said:


> Oh crap, selective memory strikes again.


All 3 companies are rebrand masters.
AMD has the disaster of old GCN cards getting into new series, RX500 vs RX400, etc.
Nvidia has the G92 fiasco (9 cards?), the 100, 300 and 800 series, and 90% of their mobile chips.
Intel has 4 series of CPUs with exactly the same IGP; they only added a U to the start of the name.

VIA is the only good guy here.


----------



## bug (May 31, 2019)

GoldenX said:


> All 3 companies are rebrand masters.
> AMD has the disaster of old GCN cards getting into new series, RX500 vs RX400, etc.
> Nvidia has the G92 fiasco (9 cards?), the 100, 300 and 800 series, and 90% of their mobile chips.
> Intel has 4 series of CPUs with exactly the same IGP, they only added an U to the start of the name.
> ...


This was about desktop GPUs, so that's what I was talking about.
AMD routinely does stuff like: https://en.wikipedia.org/wiki/AMD_Radeon_Rx_200_series#Chipset_table (i.e. everything from TeraScale to GCN 1.3, all under the 200 moniker).

The new ground they could be breaking is Navi chips actually being from different families _if the above rumor is true_: little Navi built with GCN blocks, big Navi without.


----------



## GoldenX (May 31, 2019)

bug said:


> This was about desktop GPUs, so that's what i was talking about.
> AMD routinely does stuff like: https://en.wikipedia.org/wiki/AMD_Radeon_Rx_200_series#Chipset_table (i.e. everything from Terrascale to GCN 1.3, all under the 200 moniker).
> 
> The new ground they could be breaking is Navi chips actually being from different families _if the above rumor is true_: little Navi built with GCN blocks, big Navi without.


The 100 and 300 series are desktop cards... "He that is without sin among you..."


----------



## Valantar (May 31, 2019)

bug said:


> This was about desktop GPUs, so that's what i was talking about.
> AMD routinely does stuff like: https://en.wikipedia.org/wiki/AMD_Radeon_Rx_200_series#Chipset_table (i.e. everything from Terrascale to GCN 1.3, all under the 200 moniker).
> 
> The new ground they could be breaking is Navi chips actually being from different families _if the above rumor is true_: little Navi built with GCN blocks, big Navi without.


_Some_ GCN blocks. That distinction can matter quite a lot depending on what blocks they are. Calling this a rebrand, though? That's idiocy. Even if it carries over some parts of the design, it's a brand-new die design with brand new core components. If that's a rebrand, there has really never been a new chip designed, ever.


----------



## bug (May 31, 2019)

GoldenX said:


> The 100 and 300 series are desktop cards... "He that is without sin among you..."


They may be rebrands (man, did you dig up series I never knew existed on the desktop), but they still don't mix different architectures under the same moniker.



Valantar said:


> _Some_ GCN blocks. That distinction can matter quite a lot depending on what blocks they are. Calling this a rebrand, though? That's idiocy. Even if it carries over some parts of the design, it's a brand-new die design with brand new core components. If that's a rebrand, there has really never been a new chip designed, ever.


Come on people. Is my English that bad? This is about putting the same label on unrelated products, not about rebranding.


----------



## GoldenX (May 31, 2019)

bug said:


> They may be rebrands (man, did you dig up series I never knew existed on the desktop), but they still don't mix different architectures under the same moniker.


Low-end 700-series cards are Fermi, low-end 400s are Tesla, the 750 Ti is Maxwell v1 in Kepler's lineup, and G92 cards appear in four different series (8000, 9000, 100, 200).
Nvidia has been on good behavior lately; that doesn't mean they are innocent.


----------



## bug (Jun 1, 2019)

GoldenX said:


> Low end 700 series are Fermi, low end 400 are Tesla, the 750Ti is Maxwell v1 in Kepler's lineup, G92 cards are in 4 different series (8000, 9000, 100, 200).
> Nvidia has been having a good conduct lately, doesn't mean they are innocent.


So, bottom line, you're ok if little Navi turns out a Frankenstein monster and big Navi is the completely new architecture. I'm not.


----------



## GoldenX (Jun 1, 2019)

bug said:


> So, bottom line, you're ok if little Navi turns out a Frankenstein monster and big Navi is the completely new architecture. I'm not.


The 750 Ti was a little Frankenstein-monster Maxwell and no one complained.
What matters is whether the product is good; internally it could be an Intel IGP for all I care.


----------



## bug (Jun 1, 2019)

GoldenX said:


> The 750ti was little Frankenstein monster Maxwell and no one complained.


Do you even understand what that article was talking about? It said the little Navi could be built with Navi and GCN blocks and only the big Navi will be entirely new. The 750Ti was nothing like that.


GoldenX said:


> What maters is if the product is good, internally it could be an Intel IGP for all I care.


In general, yes. But when you mix architectures like AMD does, you end up with missing features depending on the model. Whether some codec isn't hardware-accelerated on older parts or a new HDMI revision isn't supported, there are a lot of ways to end up drawing the short straw.


----------



## GoldenX (Jun 1, 2019)

Getting AMD cards means being a beta tester for them; look at the Vega 56 beating the 64 when undervolted AND overclocked.


----------



## medi01 (Jun 1, 2019)

*Exactly what needs to happen for idiotic GCN news to stop?*

GCN is an instruction set that is not getting dropped any time soon, definitely not sooner than NVIDIA drops its 11-year-old CUDA.
As for "microarchitectures", AMD's own Vega is quite different from Polaris.



GoldenX said:


> So, RDNA is compatible with the GCN ISA. I lost all hope of a proper OpenGL driver.


And the 2080 is compatible with 11-year-old CUDA.
And, wait for it, Zen 2 is compatible with 39-year-old x86!!!

AMD just cannot innovate!



GoldenX said:


> Getting AMD cards is being a beta tester for them, look at the Vega 56, beating the 64 when undervolted AND overclocked


Beating 2070 when overclocked + overvolted:










What trickery is this? How dare they sell us slower cards for cheap?


----------



## bug (Jun 1, 2019)

medi01 said:


> *Exactly what needs to happen for idiotic GCN news to stop?*
> 
> GCN is an instruction set that is not getting dropped any time soon, definitely not sooner than nVidia drops its 11 years old CUDA.
> As for "microarchitectures" AMDs' own Vega is quite different to Polaris.
> ...


Perhaps try informing yourself: https://en.wikipedia.org/wiki/Graphics_Core_Next



> Graphics Core Next (GCN) is the codename for both a series of microarchitectures as well as for an instruction set



But I'm not holding my breath.


----------



## Aquinus (Jun 1, 2019)

btarunr said:


> Navi also ticks *to* big technology check-boxes


Typo. Should be *two* not *to*.


----------



## GoldenX (Jun 1, 2019)

medi01 said:


> And 2080 is compatible with 11 years old CUDA.
> And, wait for it, Zen2 is compatible with 39 years old x86!!!


And still AMD drivers are the worst at OpenGL since... ATI. They never solved it.

CUDA has a lot of driver optimization work behind it; the ISA can differ between generations, and Nvidia does the work to unify them under CUDA. You know, it's just another compute language.


----------



## Aquinus (Jun 1, 2019)

GoldenX said:


> And still AMD drivers are the worst at OpenGL since... ATI. They never solved it.


It's gotten a lot better in the Linux ecosystem since they went the open-source route with the majority of their driver code. It has done AMD a lot of good on the Linux front. I'm astonished at how much my Vega 64 just works, and how well it works, to be honest.


----------



## GoldenX (Jun 1, 2019)

Aquinus said:


> It's gotten a lot better in the Linux ecosystem since they went the open source route with the majority of their driver code. It has done AMD a lot of good on the Linux front. I'm astonished at how much my Vega 64 just works and how it works fairly well to be honest.


I would LOVE to have Mesa's OpenGL driver on Windows.


----------



## medi01 (Jun 2, 2019)

GoldenX said:


> And still AMD drivers are the worst at OpenGL since... ATI. They never solved it.


That ties back to the ATI days; plus, the minuscule part of the market affected by it and the scarcity of resources are at play, I think.



bug said:


> But I'm not holding my breath.


Yes, merely your eyes shut.
Not that I would expect less from *someone who claimed cartels are OK, to justify NVIDIA.*

Let me repeat it in big, easy-to-read letters for you: *the ISA is there to stay; like 11-year-old CUDA, 7-year-old GCN isn't going anywhere. As for microarchitecture, it is very apparently different even between Polaris and Vega.*


----------



## FordGT90Concept (Jun 2, 2019)

CUDA isn't an ISA, PTX is: https://docs.nvidia.com/cuda/parallel-thread-execution/

Equivalent GCN ISA docs: https://rocm-documentation.readthedocs.io/en/latest/GCN_ISA_Manuals/GCN-ISA-Manuals.html

There's not a whole lot different between Polaris and Vega other than the 16-bit instructions.  It is unknown what Navi adds that Vega doesn't have.  It might be less about ISA and more about optimization of the graphics pipeline.


----------



## medi01 (Jun 2, 2019)

FordGT90Concept said:


> There's not a whole lot different between Polaris and Vega other than the 16-bit instructions


Polaris is denser. Vega is an attempt to go with a sparser, higher-frequency design.
There are IPC differences even between Vega 64 and VII.



FordGT90Concept said:


> CUDA isn't an ISA, PTX


Arguing about semantics. The very link you posted is titled "CUDA Toolkit".



FordGT90Concept said:


> It is unknown what Navi adds that Vega doesn't have


In terms of instruction sets, what does Zen 2 add that Zen doesn't have?

One could stick with the same ISA and yet have vastly different architectures at the silicon level, so what is there really to argue about?


----------



## londiste (Jun 2, 2019)

As I said in another thread in response to the same claim from you: CUDA is not an ISA, it is an API.

PTX is not really an ISA either - middleware and a virtual machine are probably the best descriptions for it. Nvidia does not have a static ISA as such across generations; they use PTX to expose the microarchitecture in a somewhat static way. AMD's GCN has been a fairly static thing regardless of the microarchitecture underneath.
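The PTX-as-VM arrangement can be pictured with a toy sketch in Python. This is only a loose analogy, not real PTX: every instruction name and opcode below is invented for illustration. The point is that a stable virtual instruction set can be "finalized" into a different hardware encoding per GPU generation without breaking already-shipped programs.

```python
# Toy analogy (NOT real PTX): a stable "virtual ISA" is translated at
# install time into per-generation hardware opcodes, so the hardware
# encoding can change between generations without breaking programs
# compiled against the virtual ISA.

# The virtual program stays the same across generations...
VIRTUAL_PROGRAM = ["LOAD", "FMA", "STORE"]

# ...while each hardware generation uses its own (made-up) opcodes.
HW_TABLES = {
    "gen1": {"LOAD": 0x10, "FMA": 0x2A, "STORE": 0x30},
    "gen2": {"LOAD": 0x01, "FMA": 0x99, "STORE": 0x02},  # new encoding, same ISA
}

def finalize(program, generation):
    """Translate virtual instructions into one generation's opcodes."""
    table = HW_TABLES[generation]
    return [table[insn] for insn in program]

if __name__ == "__main__":
    # Same virtual program, different machine code per generation.
    print(finalize(VIRTUAL_PROGRAM, "gen1"))  # [16, 42, 48]
    print(finalize(VIRTUAL_PROGRAM, "gen2"))  # [1, 153, 2]
```

The driver-side compiler in the real system does far more than a table lookup, of course - this only illustrates why the hardware ISA underneath can stay undisclosed.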


----------



## FordGT90Concept (Jun 2, 2019)

medi01 said:


> Polaris is denser. Vega is an attempt to go with sparser, higher frequency design.
> There are IPC differences even between Vega 64 and VII.


Vega has more stages than Polaris, hence, higher frequencies.



medi01 said:


> Arguing about semantics. The very link you've posted is titled "CuDA toolkit"


Everything GPGPU programming at NVIDIA is under "CUDA" branding.  CUDA is not an ISA though.



medi01 said:


> In terms of instruction sense, what does Zen 2 add, what Zen doesn't have?


Don't know yet because AMD hasn't given details about changes in Zen 2.



londiste said:


> PTX is not really an ISA either - it is middleware and a virtual machine is probably the best description for it. Nvidia does not have a static ISA as such over generations, they use PTX to expose the microarchitecure in a somewhat static way. AMD's GCN has been a fairly static thing regardless of the microarchitecture underneath.


All ISAs have a measure of abstraction because of the necessity to preserve backwards compatibility of high level calls.


----------



## bug (Jun 2, 2019)

medi01 said:


> Ties with AMD, plus, minuscule part of the market affected by it and scarcity of resources are at play, I think.
> 
> 
> Yes, just merely your eyes shut.
> ...


Can you write that using bigger fonts? 'Cause that will make you even more right.


----------



## GoldenX (Jun 2, 2019)

The amount of people calling marketing names ISAs is too big.


----------



## medi01 (Jun 3, 2019)

FordGT90Concept said:


> Everything GPGPU programming at NVIDIA is under "CUDA" branding. CUDA is not an ISA though.


How old is <insert "more correct" name than CUDA>?


----------



## FordGT90Concept (Jun 3, 2019)

medi01 said:


> How old is <insert "more correct" name than CUDA>?





https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#release-notes
PTX 1.0 = CUDA 1.0 = sm_{10,11}








Nvidia Graphics IP
www.techpowerup.com



G80 was the first GPU to support DirectX 10 and CUDA 1.0 (sm_10) = GeForce 8800 series = first launched *November 8, 2006* with the 8800 GTX and GTS

PTX ISA is 12 years, 6 months, 27 days old.


----------



## medi01 (Jun 3, 2019)

FordGT90Concept said:


> PTX ISA is 12 years, 6 months, 27 days old.



Thanks.
And, cough.


----------



## FordGT90Concept (Jun 3, 2019)

As pointed out, I think NVIDIA and AMD treat their ISAs differently.  AMD names ISAs by literally the instructions it supports where NVIDIA never discloses the machine ISA and instead runs everything through a virtual machine that accepts PTX.  AMD uses their drivers to smooth over compatibility problems between ISAs not unlike NVIDIA does with PTX.  I think one of the reasons why the open source community has problems with NVIDIA is because they never disclose the actual machine code the GPUs support; they only provide documentation on PTX which requires a good driver (which open source developers can't create) in order to function as designed.  Open source is at NVIDIA's mercy and they don't really care outside of AI/compute products.

AMD makes most of their ISAs available here:


			https://developer.amd.com/resources/developer-guides-manuals/


----------



## bug (Jun 3, 2019)

FordGT90Concept said:


> As pointed out, I think NVIDIA and AMD treat their ISAs differently.  AMD names ISAs by literally the instructions it supports where NVIDIA never discloses the machine ISA and instead runs everything through a virtual machine that accepts PTX.


And we have successfully deviated from AMD mixing GCN+Navi and pure Navi under the same GPU family to "what is an ISA". GJ medi.


----------



## FordGT90Concept (Jun 3, 2019)

All we know from the driver is that it's a new compute unit (GFX10).  Until AMD gives more information, we don't know how significant the changes are.  GCN has a fairly rigid architectural layout, and Vega, despite being called "Next-Generation Compute Unit," still stuck to that layout (page 9).  RDNA may be divorced from GCN, but it also may not be.


----------



## bug (Jun 3, 2019)

FordGT90Concept said:


> All we know from driver is that it's a new compute unit (GFX10).  Until AMD gives more information, we don't know how significant the changes are.  GCN has a fairly rigid architectural layout which Vega, despite being called "Next-Generation Compute Unit," it still stuck to that layout.  RDNA may be divorced from GCN but it also may not be.


You're right of course. The mixing of architectures is nothing but a rumor at this point.


----------



## londiste (Jun 3, 2019)

FordGT90Concept said:


> All ISAs have a measure of abstraction because of the necessity to preserve backwards compatibility of high level calls.


The lines between what is an ISA and what is an API are more and more difficult to draw; there is a lot of grey area here. The ISA is the architecture - the big picture, what the building blocks of a chip (a GPU in this case) are designed for, instructions and whatnot. Underneath it is the microarchitecture that implements the ISA/architecture; while the problems it solves are defined by the ISA, the implementation may differ completely. On the other side of things, the ISA is used to implement APIs.

Again, there are grey areas all around it but at a high level:
- From what we know AMD's GCN is a fairly by-the-book ISA on GCN cards. Not completely so but generally this is the case.
- Nvidia has been deliberately unclear about what their actual hardware ISA looks like for every generation. It is exposed almost exclusively via PTX, which is effectively the ISA for Nvidia cards but not what the hardware itself executes, as PTX is a VM layer above the hardware. I am sure there are drawbacks to this approach, more complex software/driver development being the obvious one.


----------



## bug (Jun 3, 2019)

londiste said:


> The lines between what is an ISA and API are more and more difficult to draw, there is a lot of grey area here.


Quite the opposite, actually: https://en.wikipedia.org/wiki/Instruction_set_architecture



> An instruction set architecture (ISA) is an abstract model of a computer. It is also referred to as architecture or computer architecture. A realization of an ISA is called an implementation.



In the GPU world, you can't radically change the silicon (implementation) while keeping the same instruction set - you can't use the hardware judiciously if you do that.
That holds mostly true for CPUs as well. x86 has been done to death and beyond, but ever since the inclusion of the FPU onto the CPU, CPUs have advanced not by revolutionizing the x86 implementation*, but by implementing complementary instruction sets: x87, MMX, SSE, AVX in their various incarnations.

*doesn't mean the x86 implementation hasn't been refined in the meantime, just that it wasn't the only advancement vector anymore
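The point about x86 advancing through complementary instruction sets can be illustrated with a small sketch. This is a hedged illustration, not real feature detection (real code would use CPUID, or on Linux read `/proc/cpuinfo`); the flags line below is an invented, abridged example.

```python
# Illustrative sketch: x86 stays backward compatible while capability is
# added via complementary instruction-set extensions. Software discovers
# them at runtime; here we just parse a /proc/cpuinfo-style "flags" line.

EXTENSION_TIMELINE = ["fpu", "mmx", "sse", "sse2", "avx", "avx2"]  # oldest first

def supported_extensions(flags_line):
    """Return the known extensions present in a cpuinfo-style flags line."""
    flags = set(flags_line.split())
    return [ext for ext in EXTENSION_TIMELINE if ext in flags]

# Made-up (abridged) flags line for a hypothetical modern CPU:
sample = "fpu vme mmx fxsr sse sse2 ht avx avx2"
print(supported_extensions(sample))  # ['fpu', 'mmx', 'sse', 'sse2', 'avx', 'avx2']
```

An older CPU simply reports a shorter list, and software falls back to the baseline instruction set - which is exactly why the base ISA can stay frozen for decades.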


----------



## medi01 (Jun 3, 2019)

FordGT90Concept said:


> As pointed out, I think NVIDIA and AMD treat their ISAs differently. AMD names ISAs by literally the instructions it supports where NVIDIA *never discloses the machine ISA* and instead runs everything through a virtual machine that accepts PTX. AMD uses their drivers to smooth over compatibility problems between ISAs not unlike NVIDIA does with PTX. I think one of the reasons why the open source community has problems with NVIDIA is because they never disclose the actual machine code the GPUs support; they only provide documentation on PTX which requires a good driver (which open source developers can't create) in order to function as designed. Open source is at NVIDIA's mercy and they don't really care outside of AI/compute products.



Don't even CPUs have, cough, "microcode"?


----------



## FordGT90Concept (Jun 3, 2019)

Yup, instructions are decoded into ALU/FPU/SIMD operation codes, which are then executed.  The vast majority of them are not directly accessible, nor would you want them to be, because using them directly would be like throwing wrenches into the processor.


----------



## londiste (Jun 4, 2019)

That is the definition of architecture vs microarchitecture. Architecture (and ISA) is x86 while microarchitecture (implementation) varies.


----------



## medi01 (Jun 4, 2019)

What are the reasons to believe AMD executes GCN instructions directly, instead of decoding them into micro-arch specific "opcodes", like nVidia, Intel and, hold on, AMD itself with AMD CPUs?

And if they don't, how on earth does one know what micro-arch is used by AMD?


----------



## londiste (Jun 4, 2019)

medi01 said:


> What are the reasons to believe AMD executes GCN instructions directly, instead of decoding them into micro-arch specific "opcodes", like nVidia, Intel and, hold on, AMD itself with AMD CPUs?
> And if they don't, how on earth does one know what micro-arch is used by AMD?


AMD is being rather stingy with architectural details, even more so than Nvidia. The consensus, though, is that the ISA used on AMD GPUs is GCN (or a variation of it), executed more or less directly by the hardware.

Both architecture/microarchitecture and ISA/implementation are hardware things. Above that are varying layers of APIs, usually in software, sometimes in firmware. The reason this whole thing was brought up was your claim that CUDA is Nvidia's ISA, which is patently incorrect. CUDA is an API (and a specialized one at that); PTX is still an API, but a lower-level one, and while Nvidia is not being too clear about it, PTX seems to have been brought to life to hide ISA changes between GPU generations. Nvidia has never been forthcoming about what the ISA for their GPUs really is.


----------



## medi01 (Jun 4, 2019)

londiste said:


> AMD is being rather stingy on architectural details, even more so than Nvidia.





londiste said:


> The consensus is though that ISA used on AMD GPUs is GCN



The jump from micro-arch to ISA in a comment to a post *literally citing an AMD rep clarifying that those are two different things* is mind-boggling.



londiste said:


> PTX is still an API


----------



## londiste (Jun 4, 2019)

medi01 said:


> The jump from micro-arch to ISA in a comment to a post* literally citing AMD rep clarifying that those are 2 different things* is mind boggling.


What are you talking about?

GCN is architecture, ISA. Microarchitecture is its implementation.

Well, you can call PTX an ISA if you want; Nvidia halfway does. It is worth noting, though, that PTX is a virtual machine with a defined ISA (read: software). How this is mapped onto actual GPU hardware is not well known (and Nvidia does not say). There is enough evidence to say the ISA underneath PTX differs between GPU generations. Nvidia's drivers contain a compiler that compiles PTX code into binary code.


----------



## medi01 (Jun 4, 2019)

londiste said:


> PTX is a virtual machine


Just stop please.


----------



## londiste (Jun 4, 2019)

https://docs.nvidia.com/cuda/parallel-thread-execution/index.html said:

> PTX defines a virtual machine and ISA for general purpose parallel thread execution. PTX programs are translated at install time to the target hardware instruction set.


----------

