# AMD RDNA2 Graphics Architecture Detailed, Offers +50% Perf-per-Watt over RDNA



## btarunr (Mar 6, 2020)

With its 7 nm RDNA architecture that debuted in July 2019, AMD achieved a nearly 50% gain in performance/Watt over the previous "Vega" architecture. At its 2020 Financial Analyst Day event, AMD made a big disclosure: that its upcoming RDNA2 architecture will offer a similar 50% performance/Watt jump over RDNA. The new RDNA2 graphics architecture is expected to leverage 7 nm+ (7 nm EUV), which offers up to 18% transistor-density increase over 7 nm DUV, among other process-level improvements. AMD could tap into this to increase price-performance by serving up more compute units at existing price-points, running at higher clock speeds.

AMD has two key design goals with RDNA2 that help it close the feature-set gap with NVIDIA: real-time ray-tracing, and variable-rate shading, both of which have been standardized by Microsoft under the DirectX 12 DXR and VRS APIs. AMD announced that RDNA2 will feature dedicated ray-tracing hardware on die. On the software side, the hardware will leverage the industry-standard DXR 1.1 API. The company is supplying RDNA2 to next-generation game console manufacturers such as Sony and Microsoft, so it's highly likely that AMD's approach to standardized ray-tracing will have more takers than NVIDIA's RTX ecosystem, which tops up DXR feature-sets with its own RTX feature-set.







Variable-rate shading is another key feature that has been missing on AMD GPUs. The feature allows a graphics application to apply different rates of shading detail to different areas of the 3D scene being rendered, to conserve system resources. NVIDIA and Intel already implement VRS tier-1 standardized by Microsoft, and NVIDIA "Turing" goes a step further in supporting even VRS tier-2. AMD didn't detail its VRS tier support.

AMD hopes to deploy RDNA2 on everything from desktop discrete client graphics, to professional graphics for creators, to mobile (notebook/tablet) graphics, and lastly cloud graphics (for cloud-based gaming platforms such as Stadia). Its biggest takers, however, will be the next-generation Xbox and PlayStation game consoles, which will also shepherd game developers toward standardized ray-tracing and VRS implementations.

AMD also briefly touched upon the next-generation RDNA3 graphics architecture without revealing any features. All we know about RDNA3 for now is that it will leverage a process node more advanced than 7 nm (likely 6 nm or 5 nm; AMD won't say), and that it will come out sometime between 2021 and 2022. RDNA2 will extensively power AMD client graphics products over the next 5-6 calendar quarters, at least.

*View at TechPowerUp Main Site*


----------



## medi01 (Mar 6, 2020)

@btarunr
1.5 not 2, the second slide.




btarunr said:


> and NVIDIA "Turing" goes a step further in supporting even VRS tier-2. AMD didn't detail its VRS tier support.


AMD didn't detail it, yet we "know" NV "did step further".

How does this crap get into articles please?
Are people paid for sneaking BS like that in, or is it something happening unconsciously?


----------



## R0H1T (Mar 6, 2020)

I'm still struggling to see where it says 2x perf/W over RDNA in the slides or indeed any time in their conference?


----------



## oxrufiioxo (Mar 6, 2020)

medi01 said:


> @btarunr
> 1.5 not 2, the second slide.




I think people are confused because they called it Navi 2x even though the slide clearly shows 1.5.


----------



## ratirt (Mar 6, 2020)

oxrufiioxo said:


> I think people are confused because they called it Navi 2x even though the slide clearly shows 1.5.


Maybe they meant 2x over GCN?


----------



## oxrufiioxo (Mar 6, 2020)

ratirt said:


> Maybe they meant 2x over GCN?




Here's the slide I'm talking about


----------



## ratirt (Mar 6, 2020)

oxrufiioxo said:


> Here's the slide I'm talking about
> 
> View attachment 147376


You are right, huh. This does not make much sense. Maybe the 1x, 2x and 3x don't represent performance uplifts but are monikers instead: for RDNA2 or 3 it is 2x or 3x, as in the generation of the chip?


----------



## oxrufiioxo (Mar 6, 2020)

ratirt said:


> You are right, huh. This does not make much sense. Maybe the 1x, 2x and 3x don't represent performance uplifts but are monikers instead: for RDNA2 or 3 it is 2x or 3x, as in the generation of the chip?




Yeah, I figured this slide would confuse people even though I don't think that was the intention... They clearly stated 50% more performance per watt in the live stream.


----------



## btarunr (Mar 6, 2020)

Fixed title, sorry for the confusion.


medi01 said:


> AMD didn't detail it, yet we "know" NV "did step further".


NVIDIA went a step further than Intel in supporting not just tier-1, but also tier-2. The complete sentence was:

"NVIDIA and Intel already implement VRS tier-1 standardized by Microsoft, and NVIDIA "Turing" goes a step further in supporting even VRS tier-2."


----------



## medi01 (Mar 6, 2020)

Thread title is corrected now.

AMD didn't try to mislead anyone, as perf/W improvements are called out explicitly:


----------



## ratirt (Mar 6, 2020)

oxrufiioxo said:


> Yeah, I figured this slide would confuse people even though I don't think that was the intention..... They clearly stated 50% more performance per watt in the live stream.


Yeah. The x2 performance uplift for the RDNA2 is in comparison to GCN. This can be confusing for some people.

EDIT: Performance/Watt to be exact.


----------



## medi01 (Mar 6, 2020)

ratirt said:


> Yeah. The x2 performance uplift for the RDNA2 is in comparison to GCN. This can be confusing for some people.
> 
> EDIT: Performance/Watt to be exact.


Performance of what?
They have called perf/Watt improvements 1.5x (that's 2.25x over GCN).
I have read they also hinted at an 18 TF $999 RDNA2 chip this year, which would be about twice the 5700 XT's perf.


----------



## ratirt (Mar 6, 2020)

medi01 said:


> Performance of what?
> They have called perf/Watt improvements 1.5x (that's 2.25x over GCN).
> I have read they also hinted at an 18 TF $999 RDNA2 chip this year, which would be about twice the 5700 XT's perf.


I mentioned performance/Watt. Of what? These are graphics chips, so graphics.
I haven't read that article; I'm just looking at the slides.


----------



## Space Lynx (Mar 6, 2020)

Going to be an interesting match up this year. It will be a combination of price, driver stability, and availability that wins my buy. I could give a crap less about RTX or Physx. So if Nvidia can only match the raw performance, but charge a $200-300 premium just for RTX, then I will roll AMD, assuming AMD has better drivers this time.

Time will tell.


----------



## oxrufiioxo (Mar 6, 2020)

medi01 said:


> They have called perf/watt improvements to be 1.5 (it's 2.25 over CGN).
> I have read they also hinted at 18TF $999 RDNA2 chip this year, which would be about twice of 5700XT  perf.




Considering the Xbox Series X is 12 TF of RDNA2, I would hope so... It would be sorta odd for them not to have a discrete GPU that's about 50% better spec-wise.


----------



## medi01 (Mar 6, 2020)

ratirt said:


> I haven't read that article I'm just looking at the slides.


I can't find confirmation of "$999 18TF chip" anywhere...


----------



## R0H1T (Mar 6, 2020)

ratirt said:


> Yeah. The x2 performance uplift for the RDNA2 is in comparison to GCN. This can be confusing for some people.


It's still not 2x over GCN, 50% (perf/W) over GCN & then the same over RDNA. Looks like 125% efficiency over GCN but that's not saying much because I'm pretty sure that'd be the best case scenario.


----------



## ratirt (Mar 6, 2020)

medi01 said:


> I can't find confirmation of "$999 18TF chip" anywhere...


Honestly, if that is a true 18 TF chip, then good for everyone.



R0H1T said:


> It's still not 2x over GCN, 50% over GCN & then the same over RDNA. Looks like 125% over GCN but that's not saying much because I'm pretty sure that's the best case scenario.


I think it is a moniker for the 2nd and 3rd gen, aka 2x and 3x, and it has nothing to do with performance.
On the other hand, RDNA is 50% more efficient than GCN, and RDNA2 is 50% over RDNA, so RDNA2 is 2.25 times more efficient than GCN in my eyes.
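A quick sanity check of that compounding, assuming the two +50% figures are multiplicative (which the slides imply but never state outright):

```python
# How the generational perf/W claims stack up, if they multiply.
gcn_to_rdna = 1.5       # RDNA: +50% perf/W over GCN
rdna_to_rdna2 = 1.5     # RDNA2: +50% perf/W over RDNA
rdna2_vs_gcn = gcn_to_rdna * rdna_to_rdna2
print(rdna2_vs_gcn)     # 2.25 -> +125% perf/W over GCN, not 2x
```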


----------



## oxrufiioxo (Mar 6, 2020)

R0H1T said:


> It's still not 2x over GCN, 50% over GCN & then the same over RDNA. Looks like 125% over GCN but that's not saying much because I'm pretty sure that's the best case scenario.




Either way, if they can get close to 50% per watt over a 5700 XT while keeping prices sane, it will be a nice card... If it's $800 it will be another fail. Well, I guess that also depends on what Nvidia does; they could raise prices again, who knows.


----------



## Chomiq (Mar 6, 2020)

If they actually deliver, the only thing that would prevent me from buying this is the driver issues that we saw with almost every launch from team red's GPU camp.


----------



## ratirt (Mar 6, 2020)

Chomiq said:


> If they actually deliver, the only thing that would prevent me from buying this is the driver issues that we saw with almost every launch from team red's GPU camp.


I think the drivers will be OK. They had time with RDNA and the 5000 series. Let's hope they can get these up to speed and stable.



oxrufiioxo said:


> Either way, if they can get close to 50% per watt over a 5700 XT while keeping prices sane, it will be a nice card... If it's $800 it will be another fail. Well, I guess that also depends on what Nvidia does; they could raise prices again, who knows.


Yes, but you have to keep in mind that the CU count would have to stay the same. They can still use more CUs, and then maybe it would be even faster.


----------



## delshay (Mar 6, 2020)

I just want to see a Nano card this time around & HDMI 2.1.


----------



## bogami (Mar 6, 2020)

It is desirable to get better efficiency, but why do all AMD processors look like mushrooms were growing on them?


----------



## IceShroom (Mar 6, 2020)

oxrufiioxo said:


> Here's the slide I'm talking about
> 
> View attachment 147376


This slide says that RDNA-based GPUs have names that start with Navi1X, where X=0,1,2...
RDNA2-based GPUs will have names like Navi2X, where X=0,1,2...
And RDNA3-based GPUs will have names like Navi3X, where X=0,1,2...


----------



## W1zzard (Mar 6, 2020)

oxrufiioxo said:


> Here's the slide I'm talking about
> 
> View attachment 147376



Considering that currently shipping Navi is "Navi 10" and "Navi 14", which can be summarized as "Navi 1x", I would assume that the next GPUs are "Navi 20" and "Navi 30", so the x stands for "any number", not "x-times improvement"


----------



## R0H1T (Mar 6, 2020)

Oh, in that case AMD should fire the wise guy who made that slide; I mean, it makes little sense even now!


----------



## GeorgeMan (Mar 6, 2020)

W1zzard said:


> Considering that currently shipping Navi is "Navi 10" and "Navi 14", which can be summarized as "Navi 1x", I would assume that the next GPUs are "Navi 20" and "Navi 30", so the x stands for "any number", not "x-times improvement"



Exactly that. I don't understand why it confuses people. The slides are crystal clear.


----------



## R0H1T (Mar 6, 2020)

No they're not; the naming scheme or (internal) chip jargon shouldn't be referenced in such a way!


----------



## Valantar (Mar 6, 2020)

Sorry, but how on earth does anyone see "Navi 2X" _in friggin' quotes_ without any further data and think "Oh, that must mean 2x the performance"? That is a rather extreme leap of the imagination. Also, x as a multiplier is generally lower case; this is upper case, which is generally X as an unknown variable. 2X = 20, 21, 22, etc. is a _much_ more reasonable assumption than 2X = 2x performance.

2X is the generational code name for all consumer-oriented non-semi custom RDNA 2 silicon, with each piece of silicon then having a distinct second digit. End discussion.


----------



## ShurikN (Mar 6, 2020)

Hopefully the cards launch in Q3 rather than Q4. Having new gaming products in time for CP2077 would be huge.


----------



## Vya Domus (Mar 6, 2020)

R0H1T said:


> Oh in that case AMD should fire the wise guy who made that slide, I mean it makes little sense even now!



They shouldn't fire anyone, everyone's comprehension is appalling.


----------



## Bruno Vieira (Mar 6, 2020)

btarunr said:


> Fixed title, sorry for the confusion.
> 
> NVIDIA went a step further than Intel in supporting not just tier-1, but also tier-2. The complete sentence was:
> 
> "NVIDIA and Intel already implement VRS tier-1 standardized by Microsoft, and NVIDIA "Turing" goes a step further in supporting even VRS tier-2."



AMD also stated that the VRS and ray-tracing implementations were made in conjunction with Microsoft, so it should have the highest tier available.


----------



## droopyRO (Mar 6, 2020)

I hope that the "plague" does not slow the production of these chips too much.


----------



## HD64G (Mar 6, 2020)

droopyRO said:


> I hope that the "plague" does not slow the production of these chips too much.


Maybe the delay in next-gen Navi GPUs is due to that. Will know for sure once they launch them.


----------



## Deleted member 67555 (Mar 6, 2020)

I feel as though the confusion here was mostly caused because English isn't everyone's first language.

...and by "everyone's" I mean everyone in this thread but me.


----------



## kapone32 (Mar 6, 2020)

50% more performance per watt. So if a 200 W GPU gives you 80 FPS, would the next gen give you 120 FPS at the same power? Or, more realistically, 100 FPS.
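A rough sketch of both ways to cash in a +50% perf/W gain, using the hypothetical 200 W / 80 FPS card above as the baseline (those figures are this post's example, not AMD numbers):

```python
# Two ways a +50% perf/W gain can be spent.
PERF_PER_WATT_GAIN = 1.5

def fps_at_same_power(fps: float) -> float:
    """Frame rate if the whole gain goes into performance."""
    return fps * PERF_PER_WATT_GAIN

def power_at_same_fps(watts: float) -> float:
    """Power draw if the whole gain goes into efficiency."""
    return watts / PERF_PER_WATT_GAIN

print(fps_at_same_power(80.0))   # 120.0 FPS at the same 200 W
print(power_at_same_fps(200.0))  # ~133.3 W for the same 80 FPS
```

In practice vendors land somewhere between the two extremes, which is where the "more realistically, 100 FPS" guess comes from.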


----------



## kings (Mar 6, 2020)

Talk is cheap; I'll believe it when I see it.

Vega was also supposed to bring 4x the performance per watt of the previous generation.

But I hope this time it's what AMD says.


----------



## Valantar (Mar 6, 2020)

kings said:


> Talk is cheap, I believe when I see it.
> 
> Vega's architecture was also supposed to bring 4X the performance per watt than previous generation.
> 
> But I hope this time it’s what AMD says.


Talk isn't particularly cheap when you're talking to investors and financial analysts. Fail to meet your promises and at best your stock tanks, at worst you get sued by shareholders for lying to them. Still, 50% sounds like a lot. Fingers crossed that it turns out that way - then we'll have a real fight on our hands in the next GPU generation, and prices ought to reflect that.


----------



## EarthDog (Mar 6, 2020)

> Considering that currently shipping Navi is "Navi 10" and "Navi 14", which can be summarized as "Navi 1x", I would assume that the next GPUs are "Navi 20" and "Navi 30", so the x stands for "any number", not "x-times improvement"


There we go... logic and intelligence prevail!!

Anyway, I can't wait to see these on the market and AMD catch up in perf/W to the 12 nm Turing parts. They put some special sauce in the 5600 XT which put it on par with Nvidia, so this should be interesting, as will an apples-to-apples comparison with Ampere and its increase in efficiency per watt along with the shrink to 7 nm... I bet NV still holds that lead.


----------



## Slizzo (Mar 6, 2020)

EarthDog said:


> There we go... logic and intelligence prevail!!
> 
> Anyway, I can't wait to see these on the market and AMD catch up in perf/W to the 12 nm Turing parts. They put some special sauce in the 5600 XT which put it on par with Nvidia, so this should be interesting, as will an apples-to-apples comparison with Ampere and its increase in efficiency per watt along with the shrink to 7 nm... I bet NV still holds that lead.


"Special sauce"? If by special sauce you mean they freaked out and pushed out a firmware right at launch that blew past the initial thermal and power targets, then, yeah, "special".


----------



## oxidized (Mar 6, 2020)

If only performance were your main problem, AMD...


----------



## EarthDog (Mar 6, 2020)

Slizzo said:


> "special sauce"? If by special sauce you mean they freaked out and pushed out a firmware right at launch that blew initial thermal and power targets, then, yeah, "Special".


Ehh, it was still comparable to the RTX 2060 it competes with... that is different from what we saw with the 5500 XT and 5700/5700 XT.


----------



## efikkan (Mar 6, 2020)

I feel it's disappointing that there is no major new architecture in sight; just more iterations of Navi.



Chomiq said:


> If they actually deliver, the only thing that would prevent me from buying this is the driver issues that we saw with almost every launch from team red's GPU camp.


It has been a recurring subject with every release, since the underlying driver problems remain unfixed.



kings said:


> Talk is cheap, I believe when I see it.
> 
> Vega's architecture was also supposed to bring 4X the performance per watt than previous generation.


And Polaris promised 2.5x performance per watt, while it turned out that they meant if it ran at 850 MHz vs. an older GCN at a higher clock…
AMD's GPU department have a long standing tradition of over-promising and under-delivering, unfortunately.


----------



## kapone32 (Mar 6, 2020)

efikkan said:


> I feel it's disappointing that there is no major new architecture in sight; just more iterations of Navi.
> 
> 
> It has been a recurring subject with every release, since the underlying driver problems remain unfixed.
> ...



I am not sure about that. Polaris is faster than Vega? And the clocks for Vega are 1630 MHz.



kapone32 said:


> I am not sure about that. Polaris is faster than Vega? And the clocks for Vega are 1630 MHz.


Forgive me, I should have said Tahiti at 1100 MHz, but Tahiti was no joke.


----------



## Vya Domus (Mar 6, 2020)

kings said:


> Vega's architecture was also supposed to bring 4X the performance per watt than previous generation.



Are you on hallucinogenics? Vega was never supposed to bring 4x the performance per watt. I swear you're all gonna scour the depths of the internet just to find that one fake leak to make your point.






*This was from a fake April Fools' leak.* Come on, just how low will you fanboys go?









*There's Vega - Teaser Slides Leak Ahead of NDA* (wccftech.com)

The age-old question "Where's Vega" has finally been answered, in the form of a brand-new (leaked) marketing deck.


----------



## efikkan (Mar 6, 2020)

kapone32 said:


> efikkan said:
> 
> 
> > And Polaris promised 2.5x performance per watt, while it turned out that they meant if it ran at 850 MHz vs. *an older GCN* at a higher clock…
> ...


That's not what I said. Try again.


----------



## Cheeseball (Mar 6, 2020)

EarthDog said:


> There we go... logic and intelligence prevail!!
> 
> Anyway, I can't wait to see these on the market and AMD catch up in perf/W to the 12 nm Turing parts. They put some special sauce in the 5600 XT which put it on par with Nvidia, so this should be interesting, as will an apples-to-apples comparison with Ampere and its increase in efficiency per watt along with the shrink to 7 nm... I bet NV still holds that lead.



It was more that they wanted to easily defeat the 1660 Super and 1660 Ti, but because it was priced so close to the RTX 2060 non-Super, they decided to increase the clocks to make it competitive with that instead. They just changed targets at that price range, and it was a good idea.



kapone32 said:


> I am not sure about that Polaris is faster than Vega and the clocks for Vega are 1630 MHz.



You're getting mixed up there dude. Polaris and Vega are GCN (4th and 5th gen) architectures.


----------



## kapone32 (Mar 6, 2020)

Cheeseball said:


> It was more like that they wanted to easily defeat the 1660 Super and 1660 Ti, but because it was priced so close to the RTX 2060 non-Super, they decided to increase the clocks to make it competitive with it. They just changed targets at that price range, and it was a good idea.
> 
> 
> 
> You're getting mixed up there dude. Polaris and Vega are GCN (5th gen) architectures.



Yeah, I was too quick on the trigger. I should have said Vega vs. Navi, which really impressed me with the fact that the 5700 XT is faster than the Vega 64 with half the ROPs.


----------



## IceShroom (Mar 6, 2020)

Vya Domus said:


> Are you on hallucinogenics ? Vega was never supposed to bring 4X the performance per watt, I swear you're all gonna scour the depths of the internet just to find that one fake leak to make your point.
> 
> View attachment 147387
> 
> ...


Looks like the Nvidia guys can't tell apart which slides are fake and which are official AMD. And the fake slide was spread by WCCFTECH.

Don't worry, someone will take this video as an official AMD video too. (Don't click)


----------



## Cheeseball (Mar 6, 2020)

kapone32 said:


> Yeah I was too quick with the trigger I should have said Vega vs Navi which really impressed me with the fact that the 5700XT is faster than the Vega 64 with 1/2 the ROPs.



While it is technically impressive, please note that RDNA is quite different from GCN at the SIMD level: RDNA works with SIMD32 (native Wave32!) and single-cycle instruction issue.

GCN (5th gen) used SIMD16, which means a wavefront issues over 4 cycles, whereas RDNA issues every cycle. This inherently makes a 40 CU cluster (RX 5700 XT) faster than the previous 64 CU cards (Vega 64/Radeon VII).

Depending on what you're trying to achieve (raw core performance vs. optimized IPC), GCN5 can still compete well against its younger sibling. However, RDNA can do everything GCN5 can do, except beat it in raw compute loads.


----------



## kings (Mar 6, 2020)

Vya Domus said:


> Are you on hallucinogenics ? Vega was never supposed to bring 4X the performance per watt, I swear you're all gonna scour the depths of the internet just to find that one fake leak to make your point.
> 
> *This was from a fake Aprils fools leak.* Come on, just how low will you fanboys go.



My mistake then, I apologize. I didn't know about those April Fools' slides.

As for the fanboy part, you're wrong, but you're entitled to your opinion.


----------



## HD64G (Mar 6, 2020)

This improvement in efficiency means that for double the RX 5700 XT's performance (80 CU), it would consume close to 300 W. And that assumes the worst Navi case in efficiency. Let's see if that is what AMD will bring to the table.
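The 300 W figure checks out arithmetically; a sketch, assuming a ~225 W board power for the RX 5700 XT as the reference (my figure, not from this thread):

```python
# Power needed for 2x RX 5700 XT performance at +50% perf/W.
REF_POWER_W = 225.0        # assumed RX 5700 XT board power
PERF_TARGET = 2.0          # 2x performance (e.g. 80 CU vs 40 CU)
PERF_PER_WATT_GAIN = 1.5   # RDNA2's claimed efficiency gain

power_needed = REF_POWER_W * PERF_TARGET / PERF_PER_WATT_GAIN
print(power_needed)        # 300.0
```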


----------



## gamefoo21 (Mar 6, 2020)

Hmm... 50% more perf per watt than previous-gen Vega.

That was a comparison against Vega 10. Now the slides show 50% more against RDNA 1. Without any process improvements, eh?

I predict we'll see a 384-bit memory Navi. Navi is bandwidth-starved at the moment.


----------



## R0H1T (Mar 6, 2020)

The 50% perf/W improvement includes IPC as well as process improvements. They'd be well ahead of Nvidia if they could pull off two gens of such improvements without process efficiency!


----------



## _larry (Mar 6, 2020)

I'm just glad AMD is finally getting their $hit together GPU-wise again. They have already done VERY well with their CPUs; now if they can get closer to what Nvidia delivers, it's gonna be another game changer. (Pun intended)

When the R9s came out I was stoked. I still have my R9 290 from 2013 and it can still handle most games at 1440p with some settings turned down. I was very disappointed with the Polaris architecture; all they did was make it more power efficient with the same performance as my 290. Hell, my 290 still beats the RX 580 in some benchmarks... I am looking forward to getting a 5700 XT when the new cards drop though.


----------



## gamefoo21 (Mar 6, 2020)

_larry said:


> I'm just glad AMD is getting their $hit together GPU wise again finally. They have already done VERY well with their CPUs, now if they can get closer to what Nvidia delivers, it's gonna be another game changer. (Pun intended)
> 
> When the R9's came out I was stoked. I still have my R9 290 from 2013 and it still can handle most games at 1440p with some settings turned down. I was very disappointed with the Polaris architecture. All they did was make them more power efficient with the same performance as my 290. Hell, my 290 still beats the RX580 in some benchmarks... I am looking forward to getting a 5700XT when the new cards drop though



The Fury X was so limited by its vMem, but it was a big GPU that fought with the 980. Then AMD just rode on Polaris and we haven't had a true high-end GPU for a while. Vega 56/64 were pro GPUs forced into gaming. The V2 was the same; it is a beast of a workstation card that plays games while arguing with the 2080.

The 5700 XT was... well, a 2070 killer and a 2070 Super fighter.

It'll be nice if AMD can finally field another Radeon that can actually challenge for the performance crown again.

How long has it been since the Fury X came out? :-(



R0H1T said:


> The 50% perf/W improvement includes IPC as well as process improvements. They'd  be well ahead of Nvidia if they could pull 2 gens of such improvements without process efficiency!



I really don't see how AMD can get a 50% boost over RDNA 1 without a new and wider memory controller.

The 5700 XT is desperately starved for bandwidth.

It's like my modified Fury X. Tightened up the HBM timings, and at stock speed I can get over 300 GB/s in OCLMembench. Stock as a rock, the Fury X gets between 180-220 GB/s of memory bandwidth. At 500 MHz, or 1000 effective with DDR, its theoretical peak is 512 GB/s.

It's hard for me to compare apples to apples because the mods also undervolted and underclocked the core. Though it's similar with the 5700 XT: you can get nearly the same performance with less power by undervolting and mild underclocking.

Either way, a Fury X at 1000/1000 blows the doors off one at 1050/1000 on the stock BIOS, and even pushing the volts it takes 1150/1200 to match.

That burns a lot more power. Tuned up makes a much happier Fury X that gets a significant bump in perf vs. watts.

So if AMD could just not have to push their damn architecture for every last clock, it's possible to get most of the way there.

Which is why I think...

A refined 5700 XT with 384-bit memory that drops even 100-200 MHz core from where it is now, with a matching drop in vcore, isn't adding any other extra transistors to the die. Bump it to 44 CUs from 40, drop the core clocks 200-400 MHz... all the way there.

Look at the 2080 Ti vs. the 2080 Super: bigger silicon, significantly lower clocks, but it still performs.


----------



## moproblems99 (Mar 6, 2020)

oxrufiioxo said:


> Either way if they can get close to 50% per watt over a 5700XT while keeping prices sane it will be a nice card........... If its $800 it will be another fail. Well I guess that also depends on what Nvidia does they could raise prices again who knows.



Better not be their top.  That is not good enough.


----------



## oxrufiioxo (Mar 6, 2020)

moproblems99 said:


> Better not be their top.  That is not good enough.



Well, with AMD at this point it would just make me happy if they could compete with Nvidia's 2nd-best card. You figure whatever Ampere brings, the 3080 will most likely be 10-30% faster than a 2080 Ti, so competing with that would be a step in the right direction. Oh, and also not being 6-12 months late would be nice.


----------



## moproblems99 (Mar 6, 2020)

oxrufiioxo said:


> Well, with AMD at this point it would just make me happy if they could compete with Nvidia's 2nd-best card. You figure whatever Ampere brings, the 3080 will most likely be 10-30% faster than a 2080 Ti, so competing with that would be a step in the right direction. Oh, and also not being 6-12 months late would be nice.



Agreed, but I am not even sure 2 x 5700 would do that.


----------



## MrMilli (Mar 6, 2020)

gamefoo21 said:


> It's like my modified Fury X. Tightened up the HBM timings and at stock speed I can get over 300GB/s in OCLMembench. Stock as a rock the Fury X gets between 180-220GB/s for memory bandwidth. At 500mhz or well DDR for 1000 effective, it's theoretical is at 512GB/s.



No surprises there. Historically, ATI has been terrible at making memory controllers.
Even if you go back more than a decade to the times of northbridges, ATI was the worst (while nVidia was the best at maximizing bandwidth). Nothing has changed.
Reviewers often cite that nVidia designs are more memory-bandwidth efficient, and while that might be true, my guess is that nVidia simply gets more effective bandwidth out of the memory.


----------



## Vya Domus (Mar 6, 2020)

Cheeseball said:


> While it is technically impressive, please note that RDNA is quite different to GCN at SIMD-level, where RDNA works with SIMD32 (native Wave32!!) and single-cycle instructions.
> 
> GCN (5th gen) used SIMD16, which means it issues instructions every 4(??) cycles, where as RDNA issues it every cycle. This inherently makes a 40 CU (RX 5700 XT) cluster faster than the previous 64 CU cards (Vega 64/Radeon VII).
> 
> Depending on what you're trying to achieve (raw core performance vs. optimized IPC), GCN5 can still compete well against its younger sibling. However RDNA can do everything GCN5 can do, except beating it in raw compute loads.



There isn't really anything inherently faster about that if the workload is nontrivial; it's just a different way to schedule work. Over the span of 4 clock cycles, both a GCN CU and an RDNA CU would go through the same number of threads. To be fair, there is nothing SIMD-like anymore about either of these; TeraScale was the last architecture that used a real SIMD configuration, and everything is now executed by scalar units in a SIMT fashion.

Instruction throughput is not indicative of performance, because that's not how GPUs extract performance. Let's say you want to perform one FMA over 256 threads: with GCN5 you'd need 4 wavefronts that would take 4 clock cycles within one CU; with RDNA you'd need 8 wavefronts, which would also take the same 4 clock cycles within one CU. The same work got done within the same time; it wasn't faster in either case.

Thing is, it takes more silicon and power to schedule 8 wavefronts instead of 4, so that actually makes GCN more efficient space- and power-wise. If you've ever wondered why AMD would always be able to fit more shaders within the same space and TDP than Nvidia, that's how they did it. And that's also probably why Navi 10 wasn't as impressive power-wise as some expected, and why it had such a high transistor count despite not having any RT or tensor hardware (Navi 10 and TU106 have practically the same transistor count).

But as always there's a trade-off: a larger wavefront means more idle threads when a hazard occurs, such as branching. Very few workloads are hazard-free, especially a complex graphics shader, so in practice _GCN ends up being a lot more inefficient per clock cycle on average._
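A toy model of the FMA example above. The per-CU SIMD layouts (4x SIMD16 for GCN, 2x SIMD32 for RDNA) are the commonly cited figures, assumed here for illustration:

```python
import math

def cycles_for_one_op(threads: int, wave_size: int,
                      simd_width: int, simds_per_cu: int) -> int:
    """Issue cycles for one instruction over `threads` threads on one CU."""
    waves = math.ceil(threads / wave_size)        # wavefronts needed
    cycles_per_wave = wave_size // simd_width     # issue cycles per wavefront
    batches = math.ceil(waves / simds_per_cu)     # wavefronts run in parallel
    return batches * cycles_per_wave

# GCN5: wave64 executed on 4x SIMD16 -> 4 issue cycles per wavefront
gcn5 = cycles_for_one_op(256, wave_size=64, simd_width=16, simds_per_cu=4)
# RDNA: wave32 executed on 2x SIMD32 -> 1 issue cycle per wavefront
rdna = cycles_for_one_op(256, wave_size=32, simd_width=32, simds_per_cu=2)

print(gcn5, rdna)  # 4 4 -> same cycles for the same 256-thread workload
```

Same answer for both, which is the whole point: the scheduling differs, the throughput on a clean workload doesn't.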


----------



## Cheeseball (Mar 6, 2020)

Vya Domus said:


> Instruction throughput is not indicative of performance, because that's not how GPUs extract performance. Let's say you want to perform one FMA over 256 threads: with GCN5 you'd need 4 wavefronts that would take 4 clock cycles within one CU; with RDNA you'd need 8 wavefronts, which would also take the same 4 clock cycles within one CU. The same work got done within the same time; it wasn't faster in either case.



You're correct about this, though; any wavefront branching would require cycling through again until it is correctly executed, which can be inefficient.


----------



## Prime2515102 (Mar 7, 2020)

MrMilli said:


> No surprises there. Historically ATI has been terrible at making memory controllers.
> Even if you go back more than a decade to the times of northbridges, ATI was the worst (while nVidia was the best at maximizing bandwidth). Nothing has changed.
> Often reviewers cite that nVidia designs are more memory-bandwidth efficient, and while this might be true, my guess is that nVidia just gets more effective bandwidth out of the memory.



ATi never made northbridges.


----------



## MrMilli (Mar 7, 2020)

Prime2515102 said:


> ATi never made northbridges.








List of ATI chipsets - Wikipedia (en.wikipedia.org)


----------



## eidairaman1 (Mar 7, 2020)

gamefoo21 said:


> The Fury X was so limited by its vMem but it was a big GPU that fought with the 980. Then AMD just rode on Polaris and we haven't had a true high end GPU for a while. Vega 56/64 were pro GPUs forced into gaming. The V2 was the same, it is a beast of a workstation card, that plays games while arguing with the 2080.
> 
> The 5700XT was... Well a 2070 killer and a 2070 Super fighter.
> 
> ...



Same can be said of a 290X vs a 290.


----------



## Valantar (Mar 7, 2020)

efikkan said:


> I feel it's disappointing to see that there are no major new architecture in sight; just more iterations of Navi.


Uh... You know that Navi is the new major architecture, right? As in RDNA (1) and not GCN? Of which there has been just one generation of chips? Expecting another within even a few years is silly. First come optimizations and revisions. They are probably working on the next arch on a conceptual level already, but it'll be quite a while before we see it.


moproblems99 said:


> Better not be their top.  That is not good enough.


Why would it be? The main reason for perf/w improvements is to be able to cool a bigger/higher performing die in a PCIe form factor. Also, AMD has explicitly stated (both now and previously) that they will be competing at flagship level with this upcoming generation.


----------



## Super XP (Mar 7, 2020)

ratirt said:


> Yeah. The x2 performance uplift for the RDNA2 is in comparison to GCN. This can be confusing for some people.
> 
> EDIT: Performance/Watt to be exact.


Umm nope, 50% performance-per-Watt over RDNA.



moproblems99 said:


> Agreed, but I am not even sure 2 x 5700 would do that.


RDNA2 is targeting Nvidia's next-generation GPU, called Ampere, or the rumoured RTX 3080 series.
Again, RDNA2 is NOT competing with Nvidia's current-generation graphics. Which is why there were some patents out about a possible RX 5800XT & 5900XT based on a revamped RDNA1 as a placeholder until RDNA2 is released by the beginning of Q4 2020. Or these revamps could be RDNA2, despite that gen being called the RX 6000 series.


----------



## ARF (Mar 7, 2020)

I hope these new Navi 2*-based cards will receive all the new features, like full hardware acceleration of anything 8K-video related.
They also need full support for the latest HDMI and DisplayPort interfaces: HDMI 2.1 and DP 2.0.

50% performance/watt improvement is good - it means a card that renders a game at 100 FPS within a 150 W budget could now render it at 150 FPS in the same 150 W.
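That arithmetic can be written out as a minimal sketch (idealized: it assumes the +50% perf/W claim holds at this exact operating point, which real products may not):

```python
# Idealized perf-per-watt arithmetic: +50% perf/W means either 1.5x the
# frame rate at the same power, or the same frame rate at 1/1.5 the power.

def fps_at_same_power(base_fps: float, perf_per_watt_gain: float = 1.5) -> float:
    return base_fps * perf_per_watt_gain

def watts_at_same_fps(base_watts: float, perf_per_watt_gain: float = 1.5) -> float:
    return base_watts / perf_per_watt_gain

print(fps_at_same_power(100))  # 150.0 FPS, still at 150 W
print(watts_at_same_fps(150))  # 100.0 W for the original 100 FPS
```

In practice a vendor usually spends the gain on some mix of both, so neither endpoint ships as-is.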


----------



## efikkan (Mar 7, 2020)

Valantar said:


> Uh... You know that Navi is the new major architecture, right? As in RDNA (1) and not GCN?


That's just marketing, even though many don't want to hear this. Internally in the driver Navi is still referred to as GCN, and the ISA is virtually unchanged. While there are some good improvements in Navi, these are still small compared to the pace Nvidia is innovating at.



Valantar said:


> Of which there has been just one generation of chips? Expecting another within even a few years is silly. First come optimizations and revisions.


Only minor architecture updates for 8 years with the GCN/RDNA family, compared to Nvidia, which seems to alternate minor and major ones. I'm worried the efficiency gap with Nvidia will widen if they don't keep up.



Valantar said:


> They are probably working on the next arch on a conceptual level already, but it'll be quite a while before we see it.


They better be; Nvidia usually has three generations in various stages of development at any time, and designing a new architecture usually takes 3-6 years to market.


----------



## ARF (Mar 7, 2020)

RDNA 2.0 will be the 100% new micro-architecture.
RDNA 1.0 is just a hybrid, it keeps GCN characteristics.


----------



## Vya Domus (Mar 7, 2020)

ARF said:


> RDNA 1.0 is just a hybrid, it keeps GCN characteristics.



RDNA is already worlds apart from GCN; the only real thing in common is that RDNA supports both wavefronts of 32 and 64, that's it. Well, that comes with the caveat that GPU architectures in general aren't very different from one another. GPUs have shallow pipelines, no out-of-order execution, no real branch prediction; they're mostly simple vector processors, so there just isn't a whole lot you can tweak and change.

In fact, if you look throughout the history of GPUs you'll see that most of the performance typically comes from more shaders and higher clock speeds; that's by far the number-one driving factor for progress.
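That "shaders times clocks" heuristic can be put into a toy formula: peak FP32 throughput = shader count x clock x 2 (an FMA counts as two floating-point ops). The 5700 XT figures below are its published specs; this is only the theoretical ceiling, not delivered game performance:

```python
# Toy model: peak FP32 TFLOPS = shader_count * clock_GHz * 2 ops (FMA) / 1000.
# A theoretical ceiling only -- real game performance depends on much more.

def peak_fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * clock_ghz * 2 / 1000

# RX 5700 XT: 2560 shaders at a 1.905 GHz boost clock.
print(round(peak_fp32_tflops(2560, 1.905), 2))  # 9.75 TFLOPS
```

It illustrates the point: double the shaders or the clock and the ceiling doubles, which is exactly why those two knobs dominate generational gains.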


----------



## ARF (Mar 7, 2020)

RDNA 1.0 is just a heavily modified, rearranged GCN.

RDNA 2.0 will have ray-tracing hardware and variable rate shading capability which on their own should rearrange the architecture even further.

VLIW5 - VLIW4 - GCN: [diagram]

Radeon HD 7870 Pitcairn GCN 1.0 original: [diagram]

Radeon RX Vega GCN 1.4 vs Radeon RX 5700 XT RDNA 1.0 original: [diagrams]


----------



## Prime2515102 (Mar 7, 2020)

MrMilli said:


> List of ATI chipsets - Wikipedia
> 
> 
> 
> ...


I stand corrected. That is really bizarre that I have no memory of that. 

I even searched it and "Ati Chipsets" was right there and it didn't even register. lol


----------



## Super XP (Mar 7, 2020)

ARF said:


> RDNA 2.0 will be the 100% new micro-architecture.
> RDNA 1.0 is just a hybrid, it keeps GCN characteristics.


Based on all the data available today RDNA2 will be a new uArch. One major difference I heard was that RDNA2 will have a completely new redesigned cache system. I think this has to do with next generation gaming consoles because Micro$oft has been closely working with AMD on its RDNA2. This is key to the PC Gaming Market. We are talking about a significant performance uplift over GCN and RDNA1 with great efficiency.


----------



## medi01 (Mar 7, 2020)

moproblems99 said:


> Agreed, but I am not even sure 2 x 5700 would do that.


2080Ti is about 46%/55% faster than 5700XT (ref vs ref) at 1440p/4k respectively in TPU benchmarks.


----------



## sergionography (Mar 7, 2020)

Valantar said:


> Sorry, but how on earth does anyone see "Navi 2X" _in friggin' quotes _without any further data and think "Oh, that must mean 2x the performance"? Sorry, but that is a rather extreme leap of the imagination. Also, x as a multiplier is generally lower case, this is upper case, which is generally X as an unknown variable. 2X = 20, 21, 22, etc. is _much_ more reasonable of an assumption than 2X = 2x performance.
> 
> 2X is the generational code name for all consumer-oriented non-semi custom RDNA 2 silicon, with each piece of silicon then having a distinct second digit. End discussion.



Yes that's true, but 2x performance is very likely nonetheless. Keep in mind that Navi 10/5700 XT is a small 250 mm² chip. 50% better performance per watt means AMD can scale up more shaders before running into a performance/power wall. If anything, this lends credence to big Navi being twice the size of Navi 10: a 500+ mm² chip with double the shaders. A 5120-core Radeon chip below the 300 W PCIe limit all of a sudden becomes a possibility.


----------



## Super XP (Mar 7, 2020)

medi01 said:


> 2080Ti is about 46%/55% faster than 5700XT (ref vs ref) at 1440p/4k respectively in TPU benchmarks.


The rumored Big Navi based on RDNA2 should be about 30% to 40% faster than the current 2080 Ti (that's based on a Navi prototype). It's in direct competition with the upcoming RTX 3080 series, according to various sources and Geekbench. RDNA2 is so efficient that AMD can increase core clocks while still maintaining a 250 W power envelope. I'm sure they could achieve even more performance at 300 W if required.


----------



## efikkan (Mar 7, 2020)

It would be wise to keep in mind that even if some of the lower-clocked models come close to AMD's efficiency targets, that doesn't mean the entire lineup will achieve the same level of efficiency. These are best-case scenarios meant to please investors, and they deserve huge asterisks.


----------



## Super XP (Mar 7, 2020)

Fair enough.


----------



## Valantar (Mar 7, 2020)

sergionography said:


> Yes that's true, but 2x performance is very likely nonetheless. Keep in mind that Navi 10/5700 XT is a small 250 mm² chip. 50% better performance per watt means AMD can scale up more shaders before running into a performance/power wall. If anything, this lends credence to big Navi being twice the size of Navi 10: a 500+ mm² chip with double the shaders. A 5120-core Radeon chip below the 300 W PCIe limit all of a sudden becomes a possibility.


Congratulations, you just made a textbook straw man argument. I never said there wouldn't be an RDNA 2 GPU 2x as fast as the 5700XT, I said that expecting 2x perf/w based on a slide naming a series of chips "Navi 2X" is stupid.
I also said this a few posts later:


Valantar said:


> The main reason for perf/w improvements is to be able to cool a bigger/higher performing die in a PCIe form factor.


So, I don't know who it is you are arguing against, but it certainly isn't me. What you are saying bears no relation to the post you quoted when it's read in its proper context. It was commenting on something that related to an architecture and a series of chips (RDNA 1 vs 2 and Navi 1X vs 2X), not a specific chip, so talking absolute performance numbers (such as 2x 5700 XT) is meaningless in that context. AMD has said that they will be competing in the flagship space this generation, so at least close to 2x 5700XT is quite likely. But even then using such a card to say "RDNA 2 is 2x as fast as RDNA 1" would be stupid as you'd be comparing cards in different price ranges and power envelopes.


----------



## sergionography (Mar 8, 2020)

Valantar said:


> Congratulations, you just made a textbook straw man argument. I never said there wouldn't be an RDNA 2 GPU 2x as fast as the 5700XT, I said that expecting 2x perf/w based on a slide naming a series of chips "Navi 2X" is stupid.
> I also said this a few posts later:
> 
> So, I don't know who it is you are arguing against, but it certainly isn't me. What you are saying bears no relation to the post you quoted when it's read in its proper context. It was commenting on something that related to an architecture and a series of chips (RDNA 1 vs 2 and Navi 1X vs 2X), not a specific chip, so talking absolute performance numbers (such as 2x 5700 XT) is meaningless in that context. AMD has said that they will be competing in the flagship space this generation, so at least close to 2x 5700XT is quite likely. But even then using such a card to say "RDNA 2 is 2x as fast as RDNA 1" would be stupid as you'd be comparing cards in different price ranges and power envelopes.


My apologies, I did not intend for my post to be "against" anybody, especially yourself. I was rather agreeing with you and adding perspective. I also agree that 2X might simply mean second gen; however, it is a curious naming scheme that I don't remember AMD using before, so it doesn't hurt to speculate. The last time such monikers were used, it was for dual-chip cards. If we speculate based on this assumption, then Navi 2X offers twice the performance of the first Navi/RX 5700 XT, and Navi 3X triple. The interesting thing I noticed just now when I went back to the slides was that the architectures are RDNA 1, RDNA 2, and RDNA 3, but when specifically talking about the details of RDNA 2 they mention Navi 2X. And we know Navi is the codename for the chips rather than the architecture. So when they describe RDNA 2 as Navi 2X, along with the rumors we keep hearing about "big Navi", it all tends to mislead in all sorts of ways toward indicating twice the top-end performance.


----------



## Super XP (Mar 8, 2020)

sergionography said:


> My apologies I did not intend for my post to be "against" anybody, especially yourself. I was rather agreeing with you and adding perspective. I also agree that 2x might simply mean second gen, however it is a curious naming scheme that I don't remember from AMD before so it doesn't hurt to speculate. Last time such monikers were used they did so for dual chip cards. If we speculate based on this assumption then Navi 2X offers twice the performance of the first Navi/rx 5700xt, and NAVI 3x is triple the power. The interesting thing I noticed just now when I went back to the slides was that the architecture is RDNA 1, RDNA 2, and RDNA 3. But when specifically talking about the details of RDNA 2 they mention NAVI 2X. And we know Navi as the codename for the chips rather than the architecture. So when they describe RDNA 2 as NAVI 2X, along with the rumors we keep hearing about "big Navi", it all tends to be misleading in all sorts of way to indicate twice the top end performance.


I have a feeling AMD called it NAVI 2x on purpose to spark speculative debate. Which means that they have a product that they are very confident about and could be a potential market disruptor.

I mean they could have easily named it NAVI 2 and NAVI 3, but they chose the "X" for a reason IMO.

For me anyway I never thought that 2X or 3X meant performance increase over 1X, though I can understand why some might read it that way. My speculation, RDNA2 is going to be more than 2x the performance of RDNA1.


----------



## Valantar (Mar 8, 2020)

sergionography said:


> My apologies I did not intend for my post to be "against" anybody, especially yourself. I was rather agreeing with you and adding perspective. I also agree that 2x might simply mean second gen, however it is a curious naming scheme that I don't remember from AMD before so it doesn't hurt to speculate. Last time such monikers were used they did so for dual chip cards. If we speculate based on this assumption then Navi 2X offers twice the performance of the first Navi/rx 5700xt, and NAVI 3x is triple the power. The interesting thing I noticed just now when I went back to the slides was that the architecture is RDNA 1, RDNA 2, and RDNA 3. But when specifically talking about the details of RDNA 2 they mention NAVI 2X. And we know Navi as the codename for the chips rather than the architecture. So when they describe RDNA 2 as NAVI 2X, along with the rumors we keep hearing about "big Navi", it all tends to be misleading in all sorts of way to indicate twice the top end performance.


No problem, thanks for clearing that up. You might have a point, though I'm still leaning towards the simplest solution: X = the second digit of the chip's code name, i.e. Navi 1X = Navi 10, Navi 14, etc., and Navi 2X = Navi 20, 21, 22 and so on. The reason we haven't seen this before is that these code names are typically not used publicly, at least not in this manner (they may be part of the specifications of a card, but I've never seen a range of code names used to denominate a generation of chips publicly like this).


Super XP said:


> I have a feeling AMD called it NAVI 2x on purpose to spark speculative debate. Which means that they have a product that they are very confident about and could be a potential market disruptor.
> 
> I mean they could have easily named it NAVI 2 and NAVI 3. But they choose the "X" for a reason IMO.
> 
> For me anyway I never thought that 2X or 3X meant performance increase over 1X, though I can understand why some might read it that way. My speculation, RDNA2 is going to be more than 2x the performance of RDNA1.


Interesting theory. Might be true, or it might be that whoever made the slides didn't fully think this through. Either way I'm looking forward to... hopefully Computex?


----------



## ARF (Mar 8, 2020)

Radeon RX 5700 XT (Navi 10) = 219 W (average gaming consumption) = 100% performance (3840x2160)
GeForce RTX 2080 Ti = 273 W (average gaming consumption) = 156% performance (3840x2160)

50% better performance per watt in Navi 2* will mean 150% performance in the same 219 W as Navi 10.

If we assume that Navi 10 is memory-bandwidth starved (only around 448 GB/s) and is overvolted at stock, then we could add an additional 10-20% performance at considerably lower stock power consumption, for instance 160-170% performance at 180 W (average gaming consumption).

If Navi 21's average gaming consumption is 280 W and its performance scales linearly, then it should show around 55% higher consumption and 215-225% of the performance of Navi 10.

So, around 45% higher performance than RTX 2080 Ti at the same consumption.

If, however, AMD decides to push the TDP further to 350 W, then the relative performance would be 250-260%.
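The estimate chain above can be written out as a sketch (all inputs are the post's own assumptions: linear scaling, the +50% efficiency claim, and the hypothetical 280 W budget; none are confirmed specs):

```python
# Performance scaling sketch: performance = perf_per_watt * power,
# normalized to Navi 10 (219 W = 100%). Linear scaling and the power
# budgets are this post's assumptions, not confirmed specifications.

NAVI10_WATTS, NAVI10_PERF = 219.0, 100.0

base_ppw = NAVI10_PERF / NAVI10_WATTS  # %-points per watt for Navi 10
rdna2_ppw = base_ppw * 1.5             # AMD's claimed +50% perf/W

def estimated_perf(power_watts: float, ppw: float = rdna2_ppw) -> float:
    return ppw * power_watts

print(round(estimated_perf(219)))  # 150 -> the +50% at Navi 10's power
print(round(estimated_perf(280)))  # 192 -> before the extra 10-20% assumed above
```

Layering the assumed 10-20% bandwidth/voltage bonus on top of that ~192% is what lands in the 215-225% range quoted above.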


----------



## sergionography (Mar 8, 2020)

Valantar said:


> No problem, thanks for clearing that up. You might have a point, though I'm still leaning towards the simplest solution: X = the second digit of the chip's code name, i.e. Navi 1X = Navi 10, Navi 14, etc., and Navi 2X = Navi 20, 21, 22 and so on. The reason we haven't seen this before is that these code names are typically not used publicly, at least not in this manner (they may be part of the specifications of a card, but I've never seen a range of code names used to denominate a generation of chips publicly like this).
> 
> Interesting theory. Might be true, or it might be that whoever made the slides didn't fully think this through. Either way I'm looking forward to... hopefully Computex?


Perhaps I missed your later posts, but I think you're right now that I think about it. That answers the X referring to the Navi chip name and not RDNA. Another argument to support this is that it makes no sense for AMD to publish vague performance expectations this far in advance, nor would it be smart. And if they had intended to mean performance, then it would actually be X2 rather than 2X.


----------



## ARF (Mar 8, 2020)

I don't know what the reason for this launch delay is, though.
They have already got working cards and as per reports are testing them right now.

Maybe a new revision to try to improve it even further?


----------



## efikkan (Mar 8, 2020)

sergionography said:


> I also agree that 2x might simply mean second gen, however it is a curious naming scheme that I don't remember from AMD before so it doesn't hurt to speculate. Last time such monikers were used they did so for dual chip cards. If we speculate based on this assumption then Navi 2X offers twice the performance of the first Navi/rx 5700xt, and NAVI 3x is triple the power. The interesting thing I noticed just now when I went back to the slides was that the architecture is RDNA 1, RDNA 2, and RDNA 3. But when specifically talking about the details of RDNA 2 they mention NAVI 2X. And we know Navi as the codename for the chips rather than the architecture. So when they describe RDNA 2 as NAVI 2X, along with the rumors we keep hearing about "big Navi", it all tends to be misleading in all sorts of way to indicate twice the top end performance.


While it might be understandable that not everyone in this thread understood the Navi terminology, those who have been deeply engaged in the discussions for a while should have gotten that Navi 1x is Navi 10/12/14 and Navi 2x is Navi 21/22/23*; we have known this for about a year or so. Even more astounding, I noticed several of those so-called "experts" on YouTube that some of you like to cite for analysis and leaks, who can ramble on about Navi for hours, still managed not to know this basic information about Navi. It just goes to show how little these nobodies on YouTube actually know.

*) I only know about Navi 21/22/23 so far.



ARF said:


> I can't know what the reason for this launch delay is, though.
> They have already got working cards and as per reports are testing them right now.
> 
> Maybe a new revision to try to improve it even further?


Which delay in particular are you thinking of?


----------



## Valantar (Mar 8, 2020)

ARF said:


> I can't know what the reason for this launch delay is, though.
> They have already got working cards and as per reports are testing them right now.
> 
> Maybe a new revision to try to improve it even further?


Delay? Sorry, but what delay? Has there been a launch date announced that I somehow missed? Most vendors have "working cards" half a year or more before launch for testing purposes, based off early silicon and on unfinished/massively overspecced engineering boards. This doesn't tell us anything about mass production or designs being finished. Silicon mass production can start as early as about half a year before retail launch too, as a wafer made on an advanced node can take a month or more to process, and it then needs to be packaged, tested, binned, mounted to a board, tested again, packaged for sales/shipping, shipped (3 weeks or more on a boat typically) - that's easily half a year in that process. For an AIB/non-reference design, add another month for additional design, binning and validation. So if mass production of these GPUs started today, they'd likely be up for a late Q3 launch. Tl;dr: there being working engineering sample cards in circulation has relatively little bearing on retail availability.


----------



## ARF (Mar 8, 2020)

efikkan said:


> Which delay in particular are you thinking of?





Valantar said:


> Delay? Sorry, but what delay? Has there been a launch date announced that I somehow missed?



2017, 2018, 2019 and now one quarter of 2020 are gone, and still the best AMD has got is a mid-range RX 5700 XT.

Where is the performance, high-end, enthusiast card?


Isn't this a delay of years?


----------



## efikkan (Mar 8, 2020)

In that sense, sure. Navi 1x was supposed to launch in early 2018.
But I don't know if there ever was a "big Navi" planned for Navi 1x; if so, it was scrapped long before tape-out.


----------



## TKnockers (Mar 8, 2020)

All I know is that 5700xt was the last amd gpu I bought... they can promise and create anything they like, won't get my money again.


----------



## Valantar (Mar 8, 2020)

ARF said:


> 2017, 2018, 2019 and now one quarter of 2020 is gone and still the best that AMD has got is a mid-range RX 5700 XT.
> 
> Where is the performance, high-end, enthusiast card?
> 
> ...


I agree that it's about damn time AMD gets back into the high end GPU game, but "delay" is the entirely wrong word. A delay implies that something has been promised at a certain time and then has not appeared at that time, which isn't the case here - they simply haven't competed at all in that segment. Absence, failure to compete, sure, but not a delay.


TKnockers said:


> All I know is that 5700xt was the last amd gpu I bought... they can promise and create anything they like, won't get my money again.


Care to expand on that?


----------



## ARF (Mar 8, 2020)

Navi 10's VCN 2.0 doesn't even support 8K video playback. People complain:



> RX 5700 XT Nitro+ Special Edition here
> Chrome: laggy - GPU load at 100%
> Firefox: super extremely laggy - GPU load spiking like crazy from 0 to 90%
> Edge: smooth - GPU load at 100%
> Tested with this video (8k@60fps):












https://www.reddit.com/r/Amd/comments/cy9h72



Valantar said:


> I agree that it's about damn time AMD gets back into the high end GPU game, but "delay" is the entirely wrong word. A delay implies that something has been promised at a certain time and then has not appeared at that time, which isn't the case here - they simply haven't competed at all in that segment. Absence, failure to compete, sure, but not a delay.



There is definitely a delay in Navi's launch: it was originally supposed to launch in H1 2018 and was pushed back to H2 2019.
It would be particularly interesting to hear Navi's story: was it originally intended for the N14 node and then moved forward to N7, or was N7 simply too late?








TKnockers said:


> All I know is that 5700xt was the last amd gpu I bought... they can promise and create anything they like, won't get my money again.



The problem is that there is no alternative to AMD.
They offer some features in their graphics that no one else can or will, because the competitors' IP competence is lower.

But AMD is way too late to implement 4K gaming for the masses, way too late to introduce ray-tracing, way too late to even compete in some segments of the market.

To be honest, I would be happier if Nvidia went only for the low-end and mid-range markets, while AMD competed only with itself in the top high-end tier.


----------



## efikkan (Mar 8, 2020)

I don't believe AMD "promised" a "high-end" Navi until fairly recently.
I believe it was early last year that Lisa Su said something along the lines of "Vega level performance" for Navi (1x). So I believe the hype and expectations are to blame here.


----------



## Super XP (Mar 8, 2020)

TKnockers said:


> All I know is that 5700xt was the last amd gpu I bought... they can promise and create anything they like, won't get my money again.


Your statement makes no sense. Navi was released in Q3 2019. What do you expect AMD to do, keep launching more quarter after quarter? Not even Nvidia does that. Navi is in a rotation, so expect a 5700 XT upgrade by Q3 2020, exactly as the RDNA 2 roadmaps have stated since before 2019.



ARF said:


> Navi 10's VCN 2.0 doesn't even support 8K video playback. People complain:
> 
> 
> 
> ...


You are driving this into a totally off-topic situation. What people seem not to understand is that in 2011 AMD took a chance on Bulldozer and it cost them. Almost all resources went into developing ZEN. Six years later, in 2017, ZEN launched and took the CPU world by storm. Only then did AMD put more resources into the RTG. While they were releasing what Radeons they could, they were quietly developing RDNA2, which would also extend to next-generation gaming consoles. Microsoft had a hand in RDNA2. This spells good news for PC Gamers.
Not sure if there was any Sony involvement though.

Too late for Ray Tracing? Currently Ray Tracing is useless. 4K gaming? Radeon 7 takes care of that for those that didn't want to buy Nvidia. The meat and potatoes for the RTG is RDNA2, which is rumored to be a market disruptor. And yes, we ALL can't wait for stiffer competition.


----------



## Valantar (Mar 8, 2020)

ARF said:


> Navi 10's VCN 2.0 doesn't even support 8K video playback. People complain:


How on earth is this relevant to this topic whatsoever? At this point you're just listing off things you don't like about RDNA (and seemingly by extension AMD). That is not what this thread is about. If you want a thread about that, go create one.



ARF said:


> There is definitely a delay in Navi's launch. If it was originally supposed to launch in H1 2018, it was pushed back to H2 2019.
> It's particularly interesting to hear Navi's story, was it originally intended for N14 node and then moved forward to N7, was simply N7 too late........


You have yourself gone to pains to underscore that RDNA 1 and RDNA 2 (Navi 1X and Navi 2X) are not the same thing. In this very thread, no less. So quit moving the goal posts please. When you first mentioned a delay it was very clear by your wording ("this delay" etc.) that you were saying that RDNA 2 was delayed - which is what you were asked to clarify. You haven't so far. Nobody here is disputing that Navi 1X was delayed. That doesn't mean that you can say that Navi 2X is delayed, as RDNA 2 builds heavily on RDNA 1 and could thus not have been designed before the major design elements of RDNA 1 were done. Is this a delay? No. The sins of the father and so on; you can't say that just because product 1 was delayed, product 2 that never had a launch date or timeframe or anything similar indicated was also delayed - that's faulty logic. The only time frame for RDNA 2 at the point RDNA 1 was delayed was "after RDNA 1". No delay there.



ARF said:


> But AMD is way too late to implement 4K gaming for the masses, way too late to introduce ray-tracing, way too late to even compete in some segments of the market.


You need to work on your phrasing. "4k gaming for the masses" is not a feature that can be implemented, it is a performance goal that must be reached. AMD hasn't reached it yet (at least not 4k Ultra - you can play 4k medium-high just fine on a 5700 XT), mostly due to challenges with efficiency, as that sets a hard performance ceiling on what can be cooled in a PCIe form factor. "Way too late to introduce ray tracing" is also an absurd statement. Nvidia introduced this _this generation_. A one-generation feature gap is nothing at all, especially for a feature that barely exists in games. As long as the upcoming RTRT support on AMD cards performs well they will have delivered it in time. If not, we obviously have reason to complain. And "way too late to even compete in some segments of the market" - again, a statement that falls to pieces in terms of its internal logic. AMD/ATI has historically competed across the entirety of the GPU market. They have had a period of absence from the high-end/flagship space, yes, but how does that make it "too late" for them to return there? There's nothing stopping them from doing so as long as their architecture and technology allows them to do so.



Super XP said:


> Your statement makes no sense. Navi was released Q3 2019. What do you expect AMD to do? Keep launching more quarter after quarter? Not even Nvidia does such a thing. Navi is in a rotation. So expect a 5700XT upgrade by Q3 2020 exactly what the RDNA 2 roadmaps have stated since before 2019.


I think the implication here was that they are/were very unhappy with their 5700XT. Which really needs explanation to be relevant to this thread whatsoever.


----------



## ARF (Mar 8, 2020)

Valantar said:


> Nobody here is disputing that Navi 1X was delayed



Navi 2* depends on the Navi 1* launch. If one is delayed, the other is delayed too.

But I do expect the big Navi much sooner. According to me, it must already have been launched.

It is not and I do explain it in front of myself with bizarre political decisions.

In recent interviews, Mr. Papermaster from AMD says that they try to implement only right IPC improvements. What does "right" mean and if he is the person who decides, then these are subjective and wrong decisions.

See how many times the word "right" has been said by him:









An Interview with AMD’s CTO Mark Papermaster: ‘There’s More Room At The Top’ (www.anandtech.com)







Valantar said:


> You need to work on your phrasing. "4k gaming for the masses" is not a feature that can be implemented, it is a performance goal that must be reached.



How do consoles, with hardware that is weak compared to top PC hardware, run 4K then, and why?
Why are 4K TVs mainstream now?


----------



## r.h.p (Mar 8, 2020)

ARF said:


> Navi 10's VCN 2.0 doesn't even support 8K video playback. People complain:
> 
> 
> 
> ...



um, I had no problem running the YouTube video, although some of it was a bit laggy; then again I'm using a 1440p monitor?

also did the 8k benchmark







ran the 8K Peru video no problem, but I don't use any of those other browsers except Brave, for personal stuff, not watching videos


----------



## Valantar (Mar 8, 2020)

ARF said:


> Navi 2* depends on the Navi 1* launch. If one is delayed, the other is delayed too.


That's not how the concept of a delay works. A delay depends on something having some sort of timeframe attached to it. AMD published roadmaps showing Navi pre-2019, looking like early 2018. Navi arrived in mid-to-late 2019. Until the launch of Navi they had never published a roadmap showing RDNA 2/Navi 2(X). As such they had made _zero_ promises about when Navi 2 was to arrive. "After Navi 1" might mean 6 months after or five years after; it's too vague to actually indicate anything at all. So while it is indeed somewhat reasonable to think that developmental delays for RDNA (1) delayed the development of RDNA 2, you cannot extend that into saying the RDNA 2 launch is delayed, simply because no time frame was given.



ARF said:


> But I do expect the big Navi much sooner. According to me, it must already have been launched.


I don't know if you're trying for irony or sarcasm and a language barrier is mucking it up for you, or if this is just pure nonsense, but it comes off as the latter.



ARF said:


> It is not and I do explain it in front of myself with bizarre political decisions.


This, on the other hand, is nothing other than pure, unadulterated nonsense. You'll need to try to use coherent sentences if you want what you are saying to be understood.



ARF said:


> In recent interviews, Mr. Papermaster from AMD says that they try to implement only the "right" IPC improvements. What does "right" mean? And if he is the person who decides, then these are subjective and wrong decisions.
> 
> See how many times the word "right" has been said by him:


Uhh... The CTO of a company is supposed to be the one in charge of technological decision-making, no? It kind of makes sense that he is the one responsible for those decisions, even if the reality of the matter _obviously_ is that the decisions are made based on the work and input of the engineering teams working under him. Also, "subjective"? What else are they supposed to be? Given that objectivity is a utopian ideal that humans are entirely incapable of reaching, every decision ever made is subjective. But even beyond that, what on earth makes you think the chief engineer of a company is making unfounded judgement calls rather than making decisions based on what are the best moves in terms of developing their technology? Of course it's possible for these choices to turn out to be completely wrong (Hello, Bulldozer architecture!), but that is largely down to the fact that nobody can predict the future, and that every decision is made within the constraints of what is possible in the circumstances in which the decision is made. I would therefore assume the "right" IPC improvements mean some balance of a) achievable with the available resources and within the required time frame, b) high-yield compared to the engineering effort required, c) relevant to real-world workloads, and d) suited to underpin future development. Unless you have access to information that contradicts this, your arguments here don't make sense.


----------



## medi01 (Mar 8, 2020)

Looking at TSMC process chart, I simply do not see where the perf/watt jump should come from.
7N => 7NP/7N+ could give 10%/15% power savings, but the rest...
So, 35-40% improvement would come from arch updates alone?
And that, following the major perf/watt jump from Vega to Navi?
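For what it's worth, the split can be sketched with some back-of-envelope math. This is a rough sketch using the figures from this thread (the 10-15% node power saving and the +50% target), not official numbers: if the node step saves power at iso-performance, perf/watt scales by 1/(1 - saving), and whatever is left of the target has to come from architecture.

```python
# Rough sketch: how much of a +50% perf/watt target must come from
# architecture if the node step alone saves 10-15% power (thread's numbers)?
def arch_share(target_gain: float, power_saving: float) -> float:
    """Return the perf/watt multiplier the architecture must supply."""
    process_gain = 1.0 / (1.0 - power_saving)  # e.g. 15% power saving -> ~1.18x
    return target_gain / process_gain

for saving in (0.10, 0.15):
    pct = (arch_share(1.50, saving) - 1) * 100
    print(f"{saving:.0%} node power saving -> architecture must supply ~{pct:.0f}%")
```

With a 10% node power saving the architecture still has to deliver roughly 35% on its own, and even at 15% it is about 28%, which is why the claim looks so aggressive for a same-family node.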



Vya Domus said:


> RDNA is already worlds apart from GCN; the only real thing in common is that RDNA supports wavefronts of both 32 and 64, that's it. Well, that comes with the caveat that GPU architectures in general aren't very different from one another. GPUs have shallow pipelines, no out-of-order execution, no real branch prediction; they're mostly simple vector processors, so there is just not a whole lot you can tweak and change.
> 
> In fact if you look throughout the history of GPUs you'll see that most of the performance typically comes from more shaders and higher clockspeeds, that's pretty much the number one driving factor for progress by far.



Welp, what about Vega vs Navi? Same process, a 330 mm2 chip with faster memory barely beating a 250 mm2 chip from the next generation.



TKnockers said:


> 5700xt


Ahaha, hi there, is it you, burnt fuse?


----------



## ARF (Mar 8, 2020)

Look, Valantar, I am talking about a simple thing: competitiveness. You are talking about utopia and how the CTO is always right.
The same people introduced R600, Bulldozer, Jaguar and Vega, and now have two competing chips, Polaris 30 and Navi 14, covering absolutely the same market segment.

Please, let's just agree to disagree with each other and stop the argument here and now.

Thanks.


----------



## Vya Domus (Mar 8, 2020)

medi01 said:


> Welp, what about Vega vs Navi? Same process, a 330 mm2 chip with faster memory barely beating a 250 mm2 chip from the next generation.



It should go without saying that the Navi part runs at higher clocks and does so more consistently. It's not magic; when you look into this more, you realize performance is quite predictable and determined mostly by a few metrics.


----------



## Valantar (Mar 8, 2020)

ARF said:


> Look, Valantar, I am talking about a simple thing: competitiveness. You are talking about utopia and how the CTO is always right.
> The same people introduced R600, Bulldozer, Jaguar and Vega, and now have two competing chips, Polaris 30 and Navi 14, covering absolutely the same market segment.
> 
> Please, let's just agree to disagree with each other and stop the argument here and now.
> ...


Sorry, but no, I'll not agree to disagree when you aren't actually managing to formulate a coherent argument or even correctly read what I'm writing. Let's see. Did I say "the CTO is always right"? No, among other things I said


Valantar said:


> Of course it's possible for these choices to turn out to be completely wrong (Hello, Bulldozer architecture!)


Which is a rather explicit acknowledgement that mistakes can and have and will be made, no? You, on the other hand, are saying "Mark Papermaster said they made 'the right' improvements, therefore this must be subjective and wrong!" with _zero_ basis for saying so (at least that you are able to present here). Having made bad calls previously does not mean that all future calls will be poor. Besides, Papermaster wasn't the person responsible for a lot of what you're pointing out, so I don't quite understand why you're singling that specific executive out as fundamentally incapable of making sound technical decisions. Not to mention that no executive makes any sort of decision except based on the work of their team. If you want your opinion to be respected, at least show us others the respect of presenting it in a coherent and rational manner instead of just throwing out accusations and wild claims with no basis.

(And again, please don't read this as me somehow saying that "Mark Papermaster is a genius that can only make brilliant decisions" - I am not arguing _for_ something, I am arguing _against_ your brash and unfounded assertions that these decisions are necessarily wrong. They might be wrong, but given AMD's recent history they might also be right. And unless you can present some actual basis for your claims, this is just wild speculation and entirely useless anyhow.)

Polaris production is winding down, current "production" is likely just existing chip inventories being sold out (including that new China-only downclocked "RX 590" whatsitsname). They are only competing directly as far as previous-gen products are still in the channel, which is a situation that takes a while to resolve itself every generation. Remember, the RX 5500  launched less than three months ago. A couple more months and supply of new Polaris cards will be all but gone.

But beyond that, you aren't talking about competitiveness, in fact I would say you aren't presenting a coherent argument for anything specific at all. What does an imagined delay from an imagined previous (2019?) launch date of Navi 2X have to do with competitiveness as long as it launches reasonably close to Nvidia's next generation and performs competitively? What does the lack of RTRT in Navi 1X have to do with competitiveness when there are currently just a handful of RTRT titles? If you want to make an overarching point about something, please make sure what you're talking about actually relates to that point.

Also, I forgot this one:


ARF said:


> How do consoles, with hardware poor compared to top PC hardware, run 4K then, and why?
> Why are 4K TVs mainstream now?


4K TVs are mainstream because TV manufacturers need to sell new products and have spent a fortune on marketing a barely perceptible (at TV sizes and viewing distances) increase in resolution as a revolutionary upgrade. TVs are also not even close to mainly used or sold for gaming, they are TVs. 4k TVs being mainstream has nothing to do with gaming whatsoever.

Consoles can run 4k games because they turn down the image quality settings dramatically, and (especially in the case of the PS4 Pro) use rendering tricks like checkerboard rendering. They also generally target 30fps, at least at 4k. Console games generally run quality settings comparable to medium-low settings in their own PC ports. Digital Foundry (part of Eurogamer) has done a lot of great analyses on this, comparing various parts of image quality across platforms for a bunch of games. Worth the read/watch! But the point is, if you set your games to equivalent quality settings and lower your FPS expectations you can match any console with a similarly specced PC GPU. Again, DF has tested this too, with comparison images and frame time plots to document everything.


medi01 said:


> Looking at TSMC process chart, I simply do not see where the perf/watt jump should come from.
> 7N => 7NP/7N+ could give 10%/15% power savings, but the rest...
> So, 35-40% improvement would come from arch updates alone?
> And that, following the major perf/watt jump from Vega to Navi?


That was what they said in the Financial Analyst Day presentation, yeah. This does make it seem like RDNA (1) was a bit of a "we need to get this new arch off the ground" effort with lots of low-hanging fruit left in terms of IPC improvements. I'm mildly skeptical - it seems too good to be true - but saying stuff you aren't sure of at a presentation targeting the financial sector is generally not what risk-averse corporations tend to do. PR is BS, but what you say to your (future) shareholders you might actually be held accountable for.



medi01 said:


> Welp, what about Vega vs Navi? Same process, a 330 mm2 chip with faster memory barely beating a 250 mm2 chip from the next generation.


Not to mention at ~70W more power draw.


----------



## medi01 (Mar 8, 2020)

Vya Domus said:


> It should go without saying that the Navi part runs at higher clocks and does so more consistently. It's not magic; when you look into this more, you realize performance is quite predictable and determined mostly by a few metrics.


Hm, but the VII has 35% more TFLOPS, and the claimed "game" clock is the same as for the 5700 XT.

Also, if it is so straightforward, why does Intel struggle to roll out a competitive GPU?


----------



## Vya Domus (Mar 8, 2020)

medi01 said:


> Hm, but the VII has 35% more TFLOPS, and the claimed "game" clock is the same as for the 5700 XT.



And the VII is faster most of the time, nothing out of the ordinary. I also pointed out above how GCN is less efficient per clock cycle than RDNA. Shader count and clockspeed are still the primary indicators of performance.



medi01 said:


> Also, if it is so straightforward, why does Intel struggle to roll out a competitive GPU?



Because the one GPU Intel showed was a minuscule low-TDP chip on a not-so-great node. It's not like I'm implying it's easy and everyone can do it. It's not easy to make a large GPU with a lot of shaders and high clockspeeds without a colossal TDP and transistor count.


----------



## sergionography (Mar 9, 2020)

efikkan said:


> While it might be understandable that not everyone in this thread understood the Navi terminology, those who have been deeply engaged in the discussions for a while should have gotten that Navi 1x is Navi 10/12/14 and Navi 2x is Navi 21/22/23*; we have known this for about a year or so. Even more astounding, I noticed several of those so-called "experts" on YouTube that some of you like to cite for analysis and leaks, who can ramble on about Navi for hours, still managed not to know this basic information about Navi. It just goes to show how little these nobodies on YouTube actually know.
> 
> *) I only know about Navi 21/22/23 so far.
> 
> ...


Oh, I already knew about Navi 20 etc., yet somehow I totally missed the naming reference. I think we got too optimistic; the doubling of performance was perhaps more wishful thinking.


----------



## moproblems99 (Mar 9, 2020)

medi01 said:


> 2080Ti is about 46%/55% faster than 5700XT (ref vs ref) at 1440p/4k respectively in TPU benchmarks.



Yeah, but I believe this post spawned off the idea of two 5700s glued together. You would have to assume everything scaled perfectly in order to come out on top by any reasonable margin. I don't feel that will be the case. Or if it is the case, consider power draw and heat. Again, not likely.


----------



## Super XP (Mar 9, 2020)

Vya Domus said:


> It should go without saying that the Navi part runs at higher clocks and does so more consistently. It's not magic; when you look into this more, you realize performance is quite predictable and determined mostly by a few metrics.


GCN vs. RDNA1? It's a lot more than just higher clocks, if that is what you are saying.
The main difference between GCN and RDNA1 is that GCN issues one instruction every 4 cycles, while RDNA1 issues one instruction every cycle. The wavefront size also differs: with GCN the wavefront is 64 threads (Wave64); with RDNA1 it's both 32 threads (Wave32) and 64 threads (Wave64). Even the multilevel cache has been greatly improved in RDNA1 over GCN.

*UPDATE: I just read a few more of your posts. You already know what I posted. Ignore this.*
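A toy model of that issue-rate difference (heavily simplified: one SIMD, one instruction, ignoring dual compute units, latency hiding and everything else; the 16- and 32-lane SIMD widths are the commonly cited figures for GCN and RDNA):

```python
def cycles_per_wavefront(wave_size: int, simd_width: int) -> int:
    """Cycles for one SIMD to issue one instruction across a whole wavefront."""
    return -(-wave_size // simd_width)  # ceiling division

# GCN: a 16-lane SIMD works through a 64-thread wavefront over 4 cycles.
print(cycles_per_wavefront(64, 16))  # -> 4
# RDNA: a 32-lane SIMD retires a Wave32 every cycle...
print(cycles_per_wavefront(32, 32))  # -> 1
# ...and a backwards-compatible Wave64 in 2 cycles.
print(cycles_per_wavefront(64, 32))  # -> 2
```

Peak per-CU throughput ends up similar either way; what changes is per-instruction latency, which is part of why RDNA extracts more real-world performance per clock.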








medi01 said:


> Looking at TSMC process chart, I simply do not see where the perf/watt jump should come from.


It comes from a refined 7 nm process node over the one the 5700 XT uses.
It also comes from RDNA2 being a brand new architecture. Look at RDNA1 as a placeholder to test the GPU waters, which it did quite successfully.
RDNA2 is going to be a game changer, IMO.


----------



## Vayra86 (Mar 9, 2020)

AMD slides. Nuff said.

Product pls. The hype train crashed long ago.


----------



## medi01 (Mar 9, 2020)

Vayra86 said:


> AMD slides. Nuff said.


What is this supposed to mean?


----------



## r.h.p (Mar 9, 2020)

ARF said:


> Look, Valantar, I am talking about a simple thing: competitiveness. You are talking about utopia and how the CTO is always right.
> The same people introduced R600, Bulldozer, Jaguar and Vega, and now have two competing chips, Polaris 30 and Navi 14, covering absolutely the same market segment.
> 
> Please, let's just agree to disagree with each other and stop the argument here and now.
> ...



Yes, I must agree with https://www.techpowerup.com/forums/members/valantar.171585/ . I've had an R9 290X, a reference Vega 64, and now a 5700 XT Strix, and to be honest I'm not that impressed with any of them as high or mid-high-end GPUs.
They all get way too hot and only give 1440p performance. The Vega 64 was supposed to be a game changer, but no... Bulldozer was junk and was the first time I changed to Intel in 10 years, for one series of CPU (Haswell). The new Ryzen seems to be going OK, lucky for them.


----------



## Valantar (Mar 9, 2020)

r.h.p said:


> Yes, I must agree with https://www.techpowerup.com/forums/members/valantar.171585/ . I've had an R9 290X, a reference Vega 64, and now a 5700 XT Strix, and to be honest I'm not that impressed with any of them as high or mid-high-end GPUs.
> They all get way too hot and only give 1440p performance. The Vega 64 was supposed to be a game changer, but no... Bulldozer was junk and was the first time I changed to Intel in 10 years, for one series of CPU (Haswell). The new Ryzen seems to be going OK, lucky for them.


Yet another post that doesn't really relate to the topic of this thread. I don't see how you are agreeing with me either; none of what you say here aligns with what I've been saying. Also, a lot of what you're saying here is ... if not wrong, then at least very odd. At the time the 290X launched there was no such thing as 4k gaming, so saying it "only gives 1440p performance" is meaningless. There were barely any 4k monitors available at all at that time. You're absolutely right that the Vega 64 was overhyped and poorly marketed, and it ended up being far too much of a compute-focused architecture, with advantages that translated poorly into gaming performance, causing it to underperform while consuming a lot of power compared to the competition. As for the 5700 Strix running hot: that's a design flaw that Asus has admitted and offers an RMA program for, with current revisions being fixed. Also, complaining that a $400 GPU only plays 1440p Ultra is ... weird. Do you expect 4k Ultra performance from a card a third the price of the competing flagship? 4k60 Ultra in AAA titles is still something that flagship GPUs struggle with (depending on the game). And sure, Bulldozer was _terrible_. AMD gambled hard on CPU workloads branching off in a direction which they ultimately didn't, leaving them with an underperforming architecture and no money to make a new one for quite a few years. But Zen has been out for ... three years now, and has performed quite well the whole time. As such I don't see how complaining about Bulldozer currently makes much sense. Should we then also be complaining about Netburst P4s? No, it's time to move on.
AMD is fully back in the CPU game - arguably the technological leader now, if not actually the market leader - and is _finally_ getting around to competing in the flagship GPU space again, which it hasn't really touched since 2015, even if its marketing has made a series of overblown and stupid statements about its upper midrange/high-end cards in previous generations. AMD's marketing department _really_ deserves some flak for how they've handled things like Vega, and for the unrealistic claims they have made, but even with all that taken into account AMD has competed decently on value if not absolute performance. We'll see how the new cards perform (fingers crossed we'll see some actual competition bringing prices back down!), but at least they're now promising outright to return to the performance leadership fight, which is largely due to the technologies finally being in place for them to do so. Which is what this thread is actually supposed to be about.


----------



## r.h.p (Mar 9, 2020)

Valantar said:


> Yet another post that doesn't really relate to the topic of this thread. I don't see how you are agreeing with me either; none of what you say here aligns with what I've been saying.
> 
> ...



OK, you have some points... Bulldozer was inferior to Intel at the time, yet I dove in and bought one: "oh my, AMD's new multi-core CPU"... fail, slow. Money talks, pal. Sold it for 40 bucks.
I'm not sure about your gaming, yet I could play BF4 at 1440p no problem. Also Civ V with my XFX R9 290X. Frickin' Civ V has FreeSync turned off for anti-flickering with all Vega 64 and 5700 XT cards on my system, lol; no drivers have helped and I've tried them all...

The new Ryzen seems to be going OK, lucky for them, *like I said*, being an AMD diehard since the AXIA 1000 MHz days, pal, when AMD were the first to reach 1000 MHz. Also, in AUS my Vega 64 was $900
and my 5700 XT Strix was $860 AUD; these are not cheap GPUs, pal, and on top of it the Vega was running at 90°C under full game load. AMD better pull their finger out for their next release or I'm out of their GPU segment.


----------



## Valantar (Mar 9, 2020)

r.h.p said:


> OK, you have some points... Bulldozer was inferior to Intel at the time, yet I dove in and bought one: "oh my, AMD's new multi-core CPU"... fail, slow. Money talks, pal. Sold it for 40 bucks.
> I'm not sure about your gaming, yet I could play BF4 at 1440p no problem. Also Civ V with my XFX R9 290X. Frickin' Civ V has FreeSync turned off for anti-flickering with all Vega 64 and 5700 XT cards on my system, lol; no drivers have helped and I've tried them all...


It seems like you're in the bad luck camp there - some people seem to have consistent issues with Navi, while others have none at all. I hope AMD figures this out soon.



r.h.p said:


> The new Ryzen seems to be going OK, lucky for them, *like I said*, being an AMD diehard since the AXIA 1000 MHz days, pal, when AMD were the first to reach 1000 MHz. Also, in AUS my Vega 64 was $900
> and my 5700 XT Strix was $860 AUD; these are not cheap GPUs, pal, and on top of it the Vega was running at 90°C under full game load. AMD better pull their finger out for their next release or I'm out of their GPU segment.


That Strix price is pretty harsh, yeah - even accounting for the 10% Australian GST and AUD-to-USD conversion that's definitely on the high side. 860 AUD is 576 USD according to DuckDuckGo, so ~USD 524 without GST, while PCPartPicker lists it at USD 460-470 (though it's USD 590 on Amazon for some reason). That's at least 10% more than US prices, which is rather sucky. 900 AUD for the Vega 64 is actually below the original USD 699 MSRP with current exchange rates, though of course I don't know when you bought the card or what exchange rates were at that time.
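Redoing that currency math as a quick check (the ~0.67 USD/AUD rate is inferred from the post's own "860 AUD is 576 USD" figure, and GST is Australia's 10% sales tax):

```python
AUD_TO_USD = 0.67  # approximate rate implied by "860 AUD is 576 USD"
GST = 0.10         # Australian goods-and-services tax

aud_price = 860
usd_gross = aud_price * AUD_TO_USD  # price in USD, GST included
usd_net = usd_gross / (1 + GST)     # back out the 10% GST

print(round(usd_gross), round(usd_net))  # -> 576 524
```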

Still, I do hope the 50% perf/W number actually holds up, if so we should see both some seriously powerful big GPUs from AMD next go around, and likely some very attractive midrange options too.


----------



## Super XP (Mar 9, 2020)

I truly believe RDNA2 is the real deal and will set AMD's GPU department up for years.
I see RDNA2 as the Zen 2 or Zen 3 of GPUs.


----------



## Valantar (Mar 9, 2020)

Super XP said:


> I truly believe RDNA2 is the real deal and will set AMD's GPU department up for years.
> I see RDNA2 as the Zen 2 or Zen 3 of GPUs.


Fingers crossed! Though calling it the Zen 3 of GPUs is a bit odd considering we know absolutely nothing for sure about Zen 3.


----------



## Fluffmeister (Mar 9, 2020)

At best, it will be good to see what Turing brought to the table back in 2018 make it into the two big consoles; then there can be no more excuses.


----------



## Super XP (Mar 10, 2020)

Valantar said:


> Fingers crossed! Though calling it the Zen 3 of GPUs is a bit odd considering we know absolutely nothing for sure about Zen 3.


That is why I called it the Zen 2 of GPUs. I added Zen 3 because Zen 3 is supposed to clobber Zen 2 in performance by a significant percentage clock for clock, something that mostly happens only with new microarchitectures. So who knows, RDNA2 might have that Zen 3 effect on the market.


----------



## rvalencia (Mar 10, 2020)

Vya Domus said:


> There isn't really anything inherently faster about that if the workload is nontrivial; it's just a different way to schedule work. Over the span of 4 clock cycles both the GCN CU and the RDNA CU would go through the same number of threads. To be fair there is nothing SIMD-like anymore about either of these; TeraScale was the last architecture that used a real SIMD configuration, everything is now executed by scalar units in a SIMT fashion.
> 
> Instruction throughput is not indicative of performance because that's not how GPUs extract performance. Let's say you want to perform one FMA over 256 threads: with GCN5 you'd need 4 wavefronts that would take 4 clock cycles within one CU; with RDNA you'd need 8 wavefronts, which would also take the same 4 clock cycles within one CU. The same work got done within the same time; it wasn't faster in either case.
> 
> ...


Some real clock-cycle numbers, from https://www.reddit.com/r/Amd/comments/ctfbem :

Figure 3 (bottom of page 5) shows 4 lines of shader instructions being executed in GCN, vs RDNA in Wave32 or "backwards compatible" Wave64.
Vega takes 12 cycles to complete the instructions on a GCN SIMD. Navi in Wave32 (optimized code) completes them in 7 cycles.
In backward-compatible (optimized for GCN Wave64) mode, Navi completes them in 8 cycles.
So even on code optimized for GCN, Navi is faster, but more performance can be extracted by optimizing for Navi.
Lower latency, and no wasted clock cycles.


For GCN Wave64 mode, RDNA has about 33 percent higher efficiency compared to Vega GCN, hence the 5700 XT's 9.66 average TFLOPS yield around 12.85 Vega-equivalent TFLOPS (the real SKU has 14 TFLOPS). In terms of gaming performance, the RX 5700 XT is very close to the RX Vega II.

According to TechPowerUp,
the RX 5700 XT has 219 watts average gaming power draw while the RX Vega II has 268 watts average gaming;
the RX 5700 XT has 227 watts peak gaming while the RX Vega II has 313 watts peak gaming.

The perf/watt improvement between the RX 5700 XT and the RX Vega II is about 27 percent. AMD's claimed 50 percent perf/watt improvement from GCN to RDNA v1 is BS.
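That 27% figure can be sanity-checked. The wattages are from the post; the relative gaming performance is an assumption here, since the post only says the two cards are "very close":

```python
def perf_per_watt_gain(rel_perf: float, watts_new: float, watts_old: float) -> float:
    """Fractional perf/watt gain of the new card over the old."""
    return rel_perf * watts_old / watts_new - 1.0

# Equal performance assumed: average-gaming power alone gives ~22%.
print(round(perf_per_watt_gain(1.00, 219, 268) * 100))  # -> 22
# ~27% falls out if the 5700 XT is taken as ~4% faster on average.
print(round(perf_per_watt_gain(1.04, 219, 268) * 100))  # -> 27
```

Either way, the card-vs-card number lands well short of 50%, though (as noted later in the thread) the marketing claim may only hold for a cherry-picked comparison.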

References

AMD Radeon RX 5700 XT Review (www.techpowerup.com)
AMD Radeon VII 16 GB Review (www.techpowerup.com)

----------



## ratirt (Mar 10, 2020)

rvalencia said:


> RX Vega II has


Which one is Vega II? Is that the Radeon VII?
You need to keep in mind that the RX 5700 XT is way smaller than the Radeon VII, so I'm not sure what you are measuring. If you go only by performance, then OK, but if you weigh power consumption against performance, the 5700 XT's consumption will be lower, but so will its performance, due to the shader counts: 2560 for the 5700 XT vs. 3840 for the VII. That is quite a lot in my book, so it is not BS as you said.

EDIT: Not to mention you are comparing card vs card, not chip vs chip. HBM2 and GDDR6 also have different power usage, which you haven't included in your calculations.


----------



## efikkan (Mar 10, 2020)

sergionography said:


> Oh, I already knew about Navi 20 etc., yet somehow I totally missed the naming reference. I think we got too optimistic; the doubling of performance was perhaps more wishful thinking.


And expecting AMD to double and then triple the performance in two years wasn't a clue either? 



rvalencia said:


> According to TechPowerUp,
> the RX 5700 XT has 219 watts average gaming power draw while the RX Vega II has 268 watts average gaming;
> the RX 5700 XT has 227 watts peak gaming while the RX Vega II has 313 watts peak gaming.
> 
> The perf/watt improvement between the RX 5700 XT and the RX Vega II is about 27 percent. AMD's claimed 50 percent perf/watt improvement from GCN to RDNA v1 is BS.


As I mentioned earlier, claims like these are at best cherry-picked to please investors.
It probably refers to whichever Navi model has the largest gains over the previous model of similar performance or segment, whatever makes the most impressive metric. AMD, Intel, Nvidia, Apple, etc. are all guilty of this marketing crap.

But it doesn't mean the whole lineup is 50% more efficient. People need to keep this in mind when they estimate Navi 2x, which is supposed to bring yet another "50%" efficiency gain, or rather *up to* 50% more efficiency.


----------



## Valantar (Mar 10, 2020)

efikkan said:


> And expecting AMD to double and then triple the performance in two years wasn't a clue either?
> 
> 
> As I mentioned earlier, claims like these are at best cherry-picked to please investors.
> ...


All they need for it to be true (at least in a "not getting sued by the shareholders" way) is a single product, so yeah, "up to" is very likely the most correct reading. Still, up to 50% is damn impressive without a node change (remember what changed from 14 nm to the tweaked "12 nm"? Yeah, near nothing). Here's hoping the minimum increase (for common workloads) is well above 30%; 40% would still make for a very good ~275W card (especially if they use HBM), though obviously we all want as fast as possible.


----------



## rvalencia (Mar 10, 2020)

ratirt said:


> Which one is Vega II? Is that the Radeon VII?
> You need to keep in mind that the RX5700Xt is way smaller than RVII so not sure what you are measuring? If you go only for performance then ok but if you put power consumption vs performance then for the 5700 XT it will be lower but the performance as well due to CUs used in 5700XT compared to RVII. 2560 for 5700 Xt vs 3860 for VII. That is quite a lot in my book so it is not a BS as you said.
> 
> EDIT: Not to mention you are comparing card vs card not chip vs chip. HBM2 vs GDDR6 have also different power usage which you haven't included in your calculations.


1. I was referring to the Radeon VII.
2. I was referring to perf/watt.
3. The power-consumption difference between GDDR6 (~2.5 W per 16 Gbps chip x 8 chips) and HBM2 (e.g. ~20 watts for Vega Frontier's 16 GB) is minor compared to the GPUs involved.

16 GB of HBM2 draws less power than a 16-chip, 16 GB GDDR6 clamshell configuration, but that is irrelevant for the RX 5700 XT's 8 chips of GDDR6-14000.
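Quick tally of those memory-power figures (the per-chip GDDR6 draw and the HBM2 number are the post's own estimates, not datasheet values):

```python
GDDR6_W_PER_CHIP = 2.5  # post's estimate for a 16 Gbps GDDR6 chip
HBM2_16GB_W = 20.0      # post's ~20 W figure for Vega Frontier's 16 GB

rx5700xt_mem = GDDR6_W_PER_CHIP * 8     # 8 chips on a 256-bit board -> 20 W
clamshell_16gb = GDDR6_W_PER_CHIP * 16  # 16-chip clamshell, 16 GB -> 40 W

print(rx5700xt_mem, HBM2_16GB_W, clamshell_16gb)  # -> 20.0 20.0 40.0
```

So an 8-chip GDDR6 board and 16 GB of HBM2 land in the same ~20 W ballpark; only the 16-chip clamshell case would tilt the comparison meaningfully.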


----------



## Super XP (Mar 10, 2020)

+50% efficiency is very impressive. I can see why Nvidia may be worried.


----------



## EarthDog (Mar 10, 2020)

Super XP said:


> +50% efficiency is very impressive. I can see why Nvidia may be worried.


You really think so?








NVIDIA's Next-Generation Ampere GPUs to be 50% Faster than Turing at Half the Power (www.techpowerup.com)


----------



## Fluffmeister (Mar 10, 2020)

It's certainly interesting reading the two threads: one is "haha, never gonna happen, leather jacket man," and the other is... "awesome, take that, leather jacket man."

Nice features though, welcome to 2018.


----------



## Super XP (Mar 10, 2020)

EarthDog said:


> You really think so?
> 
> 
> 
> ...


Going by YouTube analysis from various techies, yes, I think so.


----------



## EarthDog (Mar 10, 2020)

Super XP said:


> Going by YouTube analysis by various techies yes I think so.


Will you elaborate on what these YTs said to make you feel this way?

....especially in light of the link I just provided?

If we know Navi/RDNA on 7nm is less efficient than Nvidia now... and assuming both of those articles are true... why would Nvidia be worried about maintaining their efficiency lead over AMD GPUs?

Which is more realistic to you for the 50% increase? A new arch with a die shrink, or an updated arch on the same process? I think both will get there, however Nvidia isn't worried about this.


----------



## wolf (Mar 11, 2020)

Fluffmeister said:


> It's certainly interesting reading the two threads, one is haha never gonna happen leather jacket man, the other is... awesome take that leather jacket man.
> 
> Nice features though, welcome to 2018.



Of course, it's been like that for a while here at TPU: Nvidia is the company people love to hate, while AMD as the underdog gets off light. There are a fair few examples floating around where similar things happen or are claimed; Nvidia gets sh*t on and AMD gets excitement and praise.

I realllllly want to see AMD pull the rabbit out of the hat on this one, I want the competition to be richer and I am craving a meaningful upgrade to my GTX1080 that has RTRT and VRS. I will buy the most compelling offering from either camp, it just has to be _compelling_. Really not in the mood for another hot, loud card with coil whine and driver issues. If I can buy a 2080Ti-perf or higher card for ~$750 USD or less that ticks those boxes, happy days.

Truly AMD, I am rooting for you, do what you did with Zen!


----------



## ratirt (Mar 11, 2020)

rvalencia said:


> 1. I was referring to Radeon VII
> 2. I was referring to perf/watt.
> 3. GDDR6 (for 16 GBps 2.5w each x 8 chips) and HBM v2 (e.g `~20 watts Vega Frontier 16 GB) power consumption difference is minor when compared to GPUs involved.
> 
> 16 GB HBM v2 power consumption is lower when compared to GDDR6 16 chip 16GB  Clamshell Mode which is irrelevant for RX-5700 XT's 8 chips GDDR6-14000.


Not so sure about that. HBM2 uses about half the power of GDDR6 at the same capacity. If in your eyes that is minor then fine, but it is still a difference you haven't considered. I'm saying your comparison is not accurate. Also, you are not comparing chip vs chip but card vs card, and that is an entirely different thing.


----------



## moproblems99 (Mar 11, 2020)

EarthDog said:


> Will you elaborate on what these YTs said to make you feel this way?



I think the words 'great' and '50%' were used in the same video.


----------



## efikkan (Mar 11, 2020)

The only thing that would worry Nvidia is if their next generation somehow gets delayed, but there are no indicators of that yet.



Valantar said:


> Still, up to 50% is damn impressive without a node change (remember what changed from 14nm to the tweaked "12nm"? Yeah, near nothing). Here's hoping the minimum increase (for common workloads) is well above 30%. 40% would still make for a very good ~275W card (especially if they use HBM), though obviously we all want as fast as possible


As I pointed out, it depends on how you compare. If you selectively compare against a previous chip with higher clocks, you can get numbers like this easily.
Achieving a 50% efficiency gain on average between Navi 1x and Navi 2x would be a huge achievement, and is fairly unlikely. It's hard to predict the gains from a refined node; we have seen in the past that refinements can deliver good improvements, like Intel's 14nm+/14nm++, but still nowhere near 50%.

And as always, any node advancements will be available to Nvidia as well.


----------



## Valantar (Mar 11, 2020)

efikkan said:


> As I pointed out, it depends how you compare. If you selectively compare with a previous chip with higher clocks, then you can get numbers like this easily.


... which is why I said I hoped for relatively high _minimum_ perf/W gains also, and not just peak.


efikkan said:


> To achieve a 50% efficiency gain in average between Navi 1x and Navi 2x would be a huge achievement, and is fairly unlikely. It's hard to predict the gains from a refined node, but we have seen in the past that refinements can do good improvements, like Intel's 14nm+/14nm++, but still far away from reaching 50%.


Preaching to the choir here, man. Though there haven't been any real efficiency gains on Intel 14nm since Skylake, just clock-scaling improvements (and later node revisions actually sacrifice efficiency to achieve them). Still an achievement hitting those clocks, but the sacrifices involved have been many and large.


----------



## Super XP (Mar 12, 2020)

EarthDog said:


> Will you elaborate on what these YTs said to make you feel this way?
> 
> ....especially in light of the link I just provided?
> 
> ...



I'm not going to dig into all his videos to find the various quotes he mentions, but this is one YouTuber who claims this based on sources. Probably an exaggeration, but RDNA2 *IS* going to challenge Nvidia, which will affect its overall sales. So in that respect, I am sure they are curious about this Big Navi.

Moore's Law Is Dead (www.youtube.com) - "I create videos containing in-depth commentary and analysis of what's going on in the Technology and Computer Hardware landscape. My opinions are often not j..."


----------



## EarthDog (Mar 12, 2020)

Super XP said:


> I'm not going to dig into all his videos to find the various quotes he mentions,


Kind of a shame. You made some claims but put the effort on others to find them? I'll pass. 



Super XP said:


> I am sure they are curious about this Big Navi.


Curious...sure. Always. You have to keep an eye on the competition.  But that is quite a bit different than "worried".


----------



## sergionography (Mar 12, 2020)

efikkan said:


> And expecting AMD to double and then triple the performance in two years wasn't a clue either?



Well, it wasn't a clue because I thought it was doable. Navi 1x is a 250mm² chip, which is small considering you could probably go up to 750-800mm² (unlikely, though). But then 5nm EUV should be around by that time.


----------



## efikkan (Mar 12, 2020)

Super XP said:


> I'm not going to dig into all his videos to find the various quotes he mentions, but this is one YouTuber that claims this based on sources. Probably an over exaggeration but RDNA2 IS going to challenge Nvidia, which will affect its overall sales. So in that respect, I am sure they are curious about this Big Navi.
> Moore's Law Is Dead
> 
> 
> ...


I hope you're not basing your expectations of RDNA2 on this random nobody. This guy claimed last year that AMD were holding big Navi back because they didn't need to release it (facepalm), claimed that AMD were renaming chip codenames to excuse his mispredictions (which they would never do), and claimed that Navi 12 was coming in 2019 to crush the RTX 2080 Super - and that was just from a single one of his BS videos.

Don't get me wrong though, I hope RDNA2 is as good as possible. But please don't spread the nonsense these losers on YouTube are pulling out of their behinds. 



sergionography said:


> Well it wasn't a clue because I thought it's doable. NAVI 1x is a 250mm2 chip which is small considering you could probably go up to 750-800mm2 (unlikely tho). But then 5nm EUV should be around by that time.


It's also a 250mm² chip that draws ~225W.

Building big chips is not the problem; building big chips with high clocks is, and that would require a much more efficient architecture.


----------



## Super XP (Mar 12, 2020)

EarthDog said:


> Kind of a shame. You made some claims but, put the effort on others to find them? I'll pass.
> 
> Curious...sure. Always. You have to keep an eye on the competition.  But that is quite a bit different than "worried".


When I comment with such information, you should take it as fact. I have no reason to BS. And I was watching YouTube on my big screen TV after work one day and heard the individual say what I stated. I'm not going to take a notepad and start writing down what I hear. Lol 
Would you?



efikkan said:


> I hope you're not basing your expectations of RDNA2 on this random nobody. This guy claimed last year that AMD were holding big Navi back because they didn't need to release it (facepalm), claiming that AMD were renaming chips codenames to excuse his mispredictons (which they would never do), and that Navi 12 was coming in 2019 to crush RTX 2080 Super, and that was just from a single of his BS videos.
> 
> Don't get me wrong though, I hope RDNA2 is as good as possible. But please don't spread the nonsense these losers on YouTube are pulling out of their behinds.
> 
> ...


I've also heard the RedTagGaming and Gamer Meld YouTube channels seem quite excited about RDNA2 based on what their sources have hinted. I'm keeping my expectations conservative, though I have a strong gut feeling RDNA2 is the real deal and not just another Vega-like GPU.


----------



## EarthDog (Mar 12, 2020)

Super XP said:


> When I comment with such information, you should take it as fact.




I don't need to write anything down. Thanks for the info and breadcrumb trail.


----------



## efikkan (Mar 12, 2020)

Super XP said:


> I've also heard RedTagGaming and Gamer Meld YouTube channels that seem quite exited about RDNA2 based on what there sources have hinted. I'm keeping my expectations conservative. Though, I have a strong gut feeling RDNA2 is the real deal and not just another Vega like GPU.


Those are yet more channels that fall into the bucket of less "competent" "tech" YouTube channels. I would advise avoiding such channels unless you do it for amusement or are looking for sources of false rumors. These channels serve one of two purposes: serving people the "news" they want to hear (in the echo chambers), or shaping public opinion. If you listen to more than a few episodes you'll see they are all over the place, are inconsistent with themselves, and fail to master any deeper technical knowledge. Some provide their own "leaks", while others just recite pretty much everything they can scrape off the web.

Speculation is of course fine, and many of us enjoy discussing potential hardware, myself included, but speculation should be labeled as such, not labeled as "leaks" when it's not. Whenever we see leaks we should always check whether they pass some basic "smell tests":

1. Who is the source, and does it have a good track record? Always see where the leak originates; if it's from WCCFTech, VideoCardz, FudZilla or somewhere random, then it's fairly certainly fake. Random Twitter/forum posts are often fake but can occasionally be true, etc. "Leaks" from official drivers, compilers, official papers etc. are pretty solid. Some sources are also known to have a certain bias, even though they can have elements of truth to their claims.
2. Is the nature of the "leak" something which _can_ be known, or is likely to be known outside a few core engineers? Example: clock speeds are never set in stone until the final stepping shortly ahead of a release, so when someone posts a table of clock speeds for CPUs/GPUs 6-12 months ahead, you know it's BS.
3. Is the specificity of the leak something that is sensitive? If the details are only known to a few people under NDA, then those leaking them risk losing their jobs and facing potential lawsuits; how many are willing to do that to serve a random YouTube channel or webpage? What is their motivation?
4. Is the scope of the leak(s) likely at all? Some of these channels claim to have dozens of sources inside Intel/AMD/Nvidia; seriously, a random guy in his basement has such good sources? Some even claim to have single sources who provide sensitive NDA'ed information from both Intel and AMD about products 1+ years away; there is virtually no chance this is true, and it's an immediate red flag to me.

Unfortunately, most "leaks" are either qualified guesses or pure BS, sometimes an accumulation of both (intentionally or not). Perhaps sometime you should look back after a product release and evaluate the accuracy and timeline of the leaks. The general trend is that early leaks are only true about "big" features; early "specific" leaks (clocks, TDP, shader counts for GPUs) are usually fake. Then there is usually a spike in leaks around the time the first engineering samples arrive, with various leaked benchmarks, etc., but clocks are still all over the place. Another spike comes when board partners get their hands on it; accuracy increases a lot then, but there is still some variance. Finally, a few weeks ahead of release, we get pretty much precise details.

Edit:
Rumors about Polaris, Vega, Vega 2x and Navi 1x have pretty much started out the same way; very unrealistic initially, and then pessimistic close to the actual release. Let's hope Navi 2x delivers, but please don't drive the hype too high.


----------



## Valantar (Mar 13, 2020)

efikkan said:


> It's also a 250mm² chip that draws ~225W
> 
> Building big chips is not the problem, but doing big chips with high clocks though, that would require a much more efficient architecture.


Not _that_ difficult - there's not much reason to push a big chip that far up the efficiency curve, and seeing just how much power can be saved on Navi by downclocking just a little, it's not too big a stretch of the imagination to see a 500mm² chip at, say, 200-300MHz less stay below 300W, especially if it uses HBM2. Of course AMD did say that they would be _increasing_ clocks with RDNA2 while still improving efficiency, which really makes me wonder what kind of obvious fixes they left for themselves when they designed RDNA (1). Even with a tweaked process node, that is a big ask.


----------



## Xmpere (Mar 13, 2020)

This Super XP guy is just an AMD fanboy. Anyone who is a fanboy/biased towards a company has their statements rendered invalid.


----------



## Super XP (Mar 15, 2020)

Xmpere said:


> This Super XP guy is just an AMD fanboy. Anyone who is a fanboy/biased towards a company has their statements rendered invalid.


You claiming I am a fanboy renders your statement invalid. Not to mention, I've been here since 2005. YOU?



efikkan said:


> Which are yet more channels which fall into the bucket of less "competent" "tech" YouTube channels. I would advice to avoid such channels unless you do it for amusement or looking for sources of false rumors. These channels serve one of two purposes; serve people the "news" they want to hear (in the echo chambers), or to shape public opinion. If you listen to more than a few episodes you'll see all of these are all over the place, are inconsistent with themselves, and fail to master any deeper technical knowledge. Some of these provide their own "leaks", while others just recite pretty much everything they can scrape of the web.
> 
> Speculation is of course fine, and many of us enjoy discussing potential hardware, myself included, but speculation should be labeled as such, not be labeled as "leaks" when it's not. Whenever we see leaks we should always check if it passes some basic "smell tests";
> 
> ...


Thanks for the information. Most of the so-called rumors from WCCFTech are regurgitations of VideoCardz, and most VideoCardz rumors come from Twitter.
As for Fudzilla, I would take them a lot more seriously than the two mentioned. Fudzilla used to be part of Mike Magee's group, which wrote for The Inquirer.net (no longer around). Charlie Demerjian of SemiAccurate was also part of Mike Magee's group. My point is that Mike had real industry sources and was well respected in the computer tech industry; I believe he's been retired for years now. So while Fudzilla & SemiAccurate may not get it right all the time, they get pretty close to the actual truth, because no rumor is ever 100% accurate. Companies always make last-minute changes to products.



Valantar said:


> Not _that_ difficult - there's not much reason to push a big chip that far up the efficiency curve, and seeing just how much power can be saved on Navi by downclocking just a little, it's not too big a stretch of the imagination to see a 500mm² chip at, say, 200-300MHz less stay below 300W, especially if it uses HBM2. Of course AMD did say that they would be _increasing_ clocks with RDNA2 while still improving efficiency, which really makes me wonder what kind of obvious fixes they left for themselves when they designed RDNA (1). Even with a tweaked process node, that is a big ask.


RDNA1 was just to get a competitive new 7nm hybrid graphics chip out the door and test the waters of the RDNA design. One example: on GCN, 1 instruction is issued every 4 cycles; with this RDNA hybrid, 1 instruction is issued every cycle, making it much more efficient.  
RDNA2 is the real deal according to AMD. I believe they will release a 280W max version that still achieves at least a 25%-40% performance improvement over the RTX 2080 Ti. RDNA2 is an Ampere competitor.
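A toy sketch of that issue-rate difference (heavily simplified: GCN runs a 64-wide wavefront over a 16-lane SIMD, RDNA a 32-wide wavefront over a 32-lane SIMD; real GCN pipelines four SIMDs per CU, so this is about latency and utilization rather than raw peak throughput):

```python
# Toy model: cycles to execute one instruction for a full wavefront.
def cycles_per_wave(wave_size: int, simd_lanes: int) -> int:
    """Lanes are processed simd_lanes at a time; ceiling division."""
    return -(-wave_size // simd_lanes)

gcn_cycles = cycles_per_wave(64, 16)    # GCN: wave64 on a 16-lane SIMD -> 4
rdna_cycles = cycles_per_wave(32, 32)   # RDNA: wave32 on a 32-lane SIMD -> 1
print(f"GCN: {gcn_cycles} cycles per instruction, RDNA: {rdna_cycles}")
```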


----------



## EarthDog (Mar 15, 2020)

Super XP said:


> You claiming I am a fanboy renders your statement invalid. Not to mention, I've been here since 2005. YOU?
> 
> 
> Thanks for the information. Most of the so called Rumors from Wccftech is regurgitation off VideoCardz and most VideoCardz rumors comes from Twitter.
> ...


Sorry... what does when you signed up to this site have to do with anything? Seems similar to equating knowledge with post count.... 

Anyway, just to get to 2080 Ti FE speeds from their current 5700 XT flagship takes a 46% increase. To go another 25-40% faster on top of that would be a 71-86% increase. Have we ever seen that in the history of GPUs? A 71% increase from previous-gen flagship to current-gen flagship? 

You've sure got a lot of faith in this architecture, with about the only thing going for it being AMD marketing... 

If Ampere comes in like Turing did over Kepler (25%), that's the bottom end of your goal, with their new GPU performing 71% faster than its current flagship. That's a ton, period, not to mention on the same node.
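For anyone who wants to check the math (the 46% gap is from TPU's relative-performance numbers at the time; note that strictly speaking the percentages compound rather than add, which makes the target even steeper than the rough figures above):

```python
# Required gen-over-gen uplift for Big Navi to match Ampere, given:
#   - RTX 2080 Ti FE ~46% faster than RX 5700 XT (TPU relative performance)
#   - rumored Ampere uplift over Turing of 25-40%
gap_to_2080ti = 0.46

for ampere_gain in (0.25, 0.40):
    additive = gap_to_2080ti + ampere_gain            # rough figure quoted above
    compounded = (1 + gap_to_2080ti) * (1 + ampere_gain) - 1  # exact
    print(f"Ampere +{ampere_gain:.0%}: additive {additive:.0%}, "
          f"compounded {compounded:.0%}")
```

The additive numbers reproduce the 71-86% range; compounding pushes the real target to roughly 83-104%.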


----------



## Valantar (Mar 15, 2020)

EarthDog said:


> Sorry... what does when you signed up to this site have to do with anything? Seems similar to equating knowledge with post count....
> 
> Anyway, just to get to 2080ti FE speeds from their current 5700 xt flagship is 46%. To go another 25-40% faster that would be a 71-86% increase. Have we ever seen that in the history of gpus? A 71% increase from previous gen flagship to current gen flagship?
> 
> ...


The 5700 XT is a "flagship" GPU only in terms of being the fastest SKU made this generation. Otherwise it really isn't (and isn't meant to be) - not in die size, not in performance, not in power draw, and certainly not in price. The 5700 XT was designed to be an upper mid-range GPU, which is what it is. That they managed that with just 40 CUs and power headroom to spare tells us that they definitely have room to grow upwards unlike the previous generations (especially as RDNA is no longer architecturally limited to 64 CUs). So there's no reason to extrapolate AMD being unable to compete in higher tiers from the positioning of the 5700 XT - quite the opposite. They likely just wanted to make the first RDNA chips high volume sellers rather than expensive and low-volume flagship level SKUs (on a limited and expensive 7nm node). Now that the arch is further matured, Apple has moved on from 7nm freeing up capacity for AMD, and they have even more money to spend, there's definitely a proper flagship coming.


----------



## Super XP (Mar 16, 2020)

EarthDog said:


> *Sorry... what does when you signed up to this site have to do with anything? Seems similar to equating knowledge with post count....*
> 
> Anyway, just to get to 2080ti FE speeds from their current 5700 xt flagship is 46%. To go another 25-40% faster that would be a 71-86% increase. Have we ever seen that in the history of gpus? A 71% increase from previous gen flagship to current gen flagship?
> 
> ...


He called me a fanboy, which has absolutely no relevance to the topic at hand. Perhaps he never knew I have a high-end Intel & Nvidia gaming laptop, because AMD graphics didn't cut it at the time I purchased it in 2018.  

With regards to 3080 Ti and Big Navi performance numbers, it's all up-in-the-air speculation. Some think RDNA2 (Big Navi) is going to compete with the 2080 Ti, and others believe AMD is targeting the 3080 Ti. In order to target Nvidia's speculative 3080 Ti, AMD is probably comparing Nvidia's performance improvements per generation to get an idea of how fast RDNA2 needs to be. I don't think AMD will push it to the limits; I think they focused on power efficiency and performance efficiency when they designed RDNA2. I know this is marketing, but: Micro-Architecture Innovation = Improved Performance-per-Clock (IPC), Logic Enhancement = Reduced Complexity and Switching Power, and Physical Optimizations = Increased Clock Speed. 

What do all these enhancements have in common? Gaming consoles.



Valantar said:


> The 5700 XT is a "flagship" GPU only in terms of being the fastest SKU made this generation. Otherwise it really isn't (and isn't meant to be) - not in die size, not in performance, not in power draw, and certainly not in price. The 5700 XT was designed to be an upper mid-range GPU, which is what it is. That they managed that with just 40 CUs and power headroom to spare tells us that they definitely have room to grow upwards unlike the previous generations (especially as RDNA is no longer architecturally limited to 64 CUs). So there's no reason to extrapolate AMD being unable to compete in higher tiers from the positioning of the 5700 XT - quite the opposite. They likely just wanted to make the first RDNA chips high volume sellers rather than expensive and low-volume flagship level SKUs (on a limited and expensive 7nm node). Now that the arch is further matured, Apple has moved on from 7nm freeing up capacity for AMD, and they have even more money to spend, there's definitely a proper flagship coming.


Agreed. 
I have a suspicion that RDNA2 will have an effect on the market similar to what Zen 2 did. And it's a much-needed effect, as we need better competition to help drive reasonable GPU pricing once again.


----------



## Flanker (Mar 16, 2020)

Super XP said:


> I have a suspicion, what ZEN2 did to the market, RDNA2 will also have a similar effect. And it's a much needed effect, as we need better competition to help drive resonable GPU pricing once again.


If it does what the HD4870/50 did, that will be incredible


----------



## ratirt (Mar 16, 2020)

EarthDog said:


> Sorry... what does when you signed up to this site have to do with anything? Seems similar to equating knowledge with post count....
> 
> Anyway, just to get to 2080ti FE speeds from their current 5700 xt flagship is 46%. To go another 25-40% faster that would be a 71-86% increase. Have we ever seen that in the history of gpus? A 71% increase from previous gen flagship to current gen flagship?
> 
> ...


Pack the 5700 XT chip into one 500mm² die and you should be OK. I know it may not work like that, but who knows? Besides, RDNA2 will offer a bit more horsepower due to some improvements, so it is possible. A 500mm² chip is not as big as NV's 754mm² 2080 Ti, though. I get what you are saying: the 5700 XT is AMD's flagship, the best released so far, but at 251mm² it is fairly small, wouldn't you say? The flagship released and the capabilities of the architecture are two different things.


----------



## EarthDog (Mar 16, 2020)

Valantar said:


> The 5700 XT is a "flagship" GPU only in terms of being the fastest SKU made this generation. Otherwise it really isn't (and isn't meant to be) - not in die size, not in performance, not in power draw, and certainly not in price. The 5700 XT was designed to be an upper mid-range GPU, which is what it is. That they managed that with just 40 CUs and power headroom to spare tells us that they definitely have room to grow upwards unlike the previous generations (especially as RDNA is no longer architecturally limited to 64 CUs). So there's no reason to extrapolate AMD being unable to compete in higher tiers from the positioning of the 5700 XT - quite the opposite. They likely just wanted to make the first RDNA chips high volume sellers rather than expensive and low-volume flagship level SKUs (on a limited and expensive 7nm node). Now that the arch is further matured, Apple has moved on from 7nm freeing up capacity for AMD, and they have even more money to spend, there's definitely a proper flagship coming.





Super XP said:


> He called me a fanboy that has absolutely no relevance to the topic at hand? Or perhaps he never knew I have a high end Intel & Nvidia gaming laptop because AMD graphics didn't cut at the time I purchased it in 2018.
> 
> With regards to the 3080-Ti and Big Navi performance numbers, it's all up in the air speculation. Some think RDNA2 (Big Navi) is going to compete with the 2080-TI and others believe AMD is targeting the 3080-Ti. In order for AMD to target Nvidia's speculative 3080-Ti, they are probably going to compare Nvidia's performance improvements per generation to have an idea on how fast RDNA2 needs to be. I don't think AMD will push it to the limits, I think they are working more on power efficiency and performance efficiency when they designed RDNA2. I know this is marketing, but Micro-Architecture Innovation = Improved Per-per-Clock (IPC), Logic Enhancement = Reduce Complexity and Switching Power & Physical Optimizations = Increase Clock Speed.
> 
> ...





ratirt said:


> pack 5700xt chip in one die  500mm2 and you should be ok. I know it may not work like that but who knows? Besides the RDNA2 will offer a bit more horse power due to some improvements so it is possible. 500mm2 chip is not as big as NV's  2080Ti 754mm2 though. I get what you are saying the 5700xt is AMD's flagship the best released so far but with the 251mm2 size it is fairly small wouldn't you say? Flagship released and capabilities of the architecture are two different things.


Semantics of a flagship aside, what I see is a 225W 'flagship' 7nm part that is 2% (1440p) faster than a 175W 12nm part (RTX 2070).

The improvement they need to make to match Ampere, both in raw performance and perf/W (note that is matching Ampere using last generation's paltry 25% gain - remember they added ray tracing and tensor core hardware), is 71%. That's a ton. Only time will tell, and I hope your glass-half-full attitude pans out to reality, but I'm not holding my breath. I think they will close the gap but fall well short of Ampere's consumer flagship. At best I see it splitting the difference between the 2080 Ti and Ampere; I think it will end up a lot closer to the 2080 Ti. They have a lot of work to do.

Remember, both AMD and Nvidia touted 50% perf/W gains... if both are true, how can AMD catch up?


----------



## Valantar (Mar 16, 2020)

ratirt said:


> pack 5700xt chip in one die  500mm2 and you should be ok. I know it may not work like that but who knows? Besides the RDNA2 will offer a bit more horse power due to some improvements so it is possible. 500mm2 chip is not as big as NV's  2080Ti 754mm2 though. I get what you are saying the 5700xt is AMD's flagship the best released so far but with the 251mm2 size it is fairly small wouldn't you say? Flagship released and capabilities of the architecture are two different things.


For that you'd also need a 512-bit memory bus, which ... well, is both expensive, huge, and power hungry. Not a good idea (as the 290(X)/390(X) showed us).


EarthDog said:


> Semantics of a flagship aside,  I see is a 225W 'flagship' 7nm part that is 2% (1440p) faster than a 175W 12nm part.
> 
> The improvements they need to make to match ampre, both in raw performance and ppw (note hat is matching ampre using last generation's paltry 25% gain - remember they added ray tracing and tensor core hardware), is 71%. That's a ton. Only time will tell, and I hope your glass half full attitude pans out to reality, but I'm not holding my breath. I think they will close the gap, but will fall well short of ampre's consumer flagship. At best I see it splitting the difference between 2080ti and ampre. I think it will end up a lot closer to 2080ti than ampre. They have a lot of work to do.


What GPU are you comparing to? If we go by TPU's review, the average gaming power draw of the 5700 XT is 219W, with the 2070 at 195W and the 2060S at 184W. I'm assuming you're pointing to the 2070 as it's 2% slower in the same review. Nice job slightly bumping up AMD's power draw and lowering Nvidia's by a full 10%, though. That's how you make a close race (219W-194W=24W) look much worse (225W-175W=50W).

Edit: ah, I see you edited in the 2070 as the comparison. Your power draw number is still a full 20W too low though.


----------



## ratirt (Mar 16, 2020)

Valantar said:


> For that you'd also need a 512-bit memory bus, which ... well, is both expensive, huge, and power hungry. Not a good idea (as the 290(X)/390(X) showed us).


It would have been a big chip, so yes, you would need it, but in any case this 500mm² chip would do the trick, tapping beyond the 2080 Ti's performance. If you pack in a lot of cores you need to feed them, so either way you need to do something with the memory interface. Power hungry, yes, but not all the way. Remember, it all depends on the frequency used; if you balance it, it would be OK. There are possibilities to make it happen.


----------



## EarthDog (Mar 16, 2020)

Valantar said:


> Edit: ah, I see you edited in the 2070 as the comparison. Your power draw number is still a full 20W too low though.


I didn't inflate anything intentionally. I compared apples to apples... their MFG ratings. My point remains.

I edited like 35 minutes before your post, lol... hit refresh before you post if it's sitting that long, lol.

EDIT: We have no idea how either RDNA2 or Ampere will respond versus its TBP. So to that end, I used a static value, the MFG ratings (sourced from TPU's spec pages on the cards). Actual use will vary, but how will depend... so again, I took the only static numbers out there that would not vary by card. I see the actual numbers are lower; AMD is still at least 10% behind in that metric, and still facing an uphill battle considering Nvidia has a node shrink in front of them along with a change in architecture.


----------



## Valantar (Mar 16, 2020)

EarthDog said:


> i didnt inflate anything intentionally. I compared apples to apples... their mfg ratings. My point remains.
> 
> I edited like 35 minutes before your post, lol...hit refresh before you post if its sitting that long, lol.
> 
> EDIT: We have no idea how either RDNA2 nor Ampre will respond versus its TBP. So to that, I used a static value, the MFG ratings (sourced from TPUs specs pages on the cards). Actual use will vary but how will depend... so again, I took the only static numbers out there that would not vary by card...I see the actual numbers are lower. They are at least 10% behind in that metric. Still facing an uphill battle considering Nvidia has a node shrink in front of them along with a change in architecture.


Yeah, I quoted you to remind myself to respond to that later, then went and did something else. Sorry about that. Anyhow, by not going by real-world power draw numbers you're effectively giving Nvidia an advantage due to them lowballing specs. That's ... nice of you, I guess? My general rule of thumb is to never - ever! - trust manufacturer power draw numbers, but to rely on real-world measurements from reviews. The former are okay for ballpark stuff or if no reviews exist, but should always be taken with a (huge) grain of salt.



ratirt said:


> It would have been a big chip so yes you would need it but in any case this 500mm2 chip, would do the trick tapping beyond 2080 Ti's performance. You pack a lot of cores you need to feed them so either way you need to do something with the memory interface. Power hungry, yes but not all the way. You need to remember, it all depends on the frequency used if you balance it it would be ok. There are possibilities to make it happen.


No, you would need that not due to the size of the chip, but due to the 5700 XT having a 256-bit memory interface, and doubling the compute power necessitates doubling memory bandwidth too unless you want to intentionally bottleneck the chip. How many cores you have doesn't matter if they can't get data to process quickly enough. And there's no power tuning to be done in this case - 8GB of GDDR6 on a 256-bit bus consumes somewhere around 30-35W; twice that will consume twice the power unless you downclock the memory and sacrifice performance. I'm not talking about chip power consumption but the power consumption of the memory and its interface.
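The power argument above can be sketched with quick arithmetic. This uses the post's ~30-35 W figure for 8 GB of GDDR6 on a 256-bit bus; the 32 W midpoint and the linear scaling with bus width are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope GDDR6 memory-subsystem power vs. bus width.
# Assumes ~32 W per 256 bits (midpoint of the 30-35 W figure above) and
# linear scaling with bus width at a fixed data rate - both simplifications.

def memory_power_w(bus_width_bits: int, watts_per_256_bits: float = 32.0) -> float:
    """Estimate memory + interface power for a GDDR6 setup of the given width."""
    return watts_per_256_bits * bus_width_bits / 256

narrow = memory_power_w(256)   # 5700 XT-class card: ~32 W
wide = memory_power_w(512)     # hypothetical doubled interface: ~64 W
print(f"256-bit: ~{narrow:.0f} W, 512-bit: ~{wide:.0f} W")

# Within a fixed board power budget, the extra ~32 W has to be taken
# from the GPU core's allowance (or the memory must be downclocked).
```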


----------



## EarthDog (Mar 16, 2020)

Valantar said:


> The former is okay for ballpark stuff or if no reviews exist, but should always be taken with a (huge) grain of salt.


There is nothing there for RDNA2 or Ampere, so I used what I will have for all comparison cards... what the manufacturer says. Once we see Ampere's flagship and Big Navi, we will deal with actual numbers.

Regardless of 50 W (~20%) or 24 W (~10%), the high-level point is unchanged... the RDNA arch on a smaller node is less efficient than Turing on a larger node. They have a lot of work to do to reclaim the performance crown and some work to regain performance/Watt. Where AMD only has an arch change, Nvidia is coming with both barrels loaded (arch and node shrink).

EDIT:


Super XP said:


> I can see why Nvidia may be worried.


My reply all started with this comment, mind you.......

I don't think they have much to worry about except for the usual price-to-performance ratio, considering all that we know right now, including the 50% rumors from both camps... but I've said that like 3 times now to 3 different people, it feels like.

EDIT2: Isn't RDNA2 also supposed to add RT capabilities? Won't that eat into their 'normal' power envelope? It lowered Nvidia's typical gen-over-gen performance improvements... will it do the same to AMD?

All of these factors make me confident Nvidia isn't "worried" about 'big navi'. They have A LOT of work to do in order to catch up.


----------



## ratirt (Mar 16, 2020)

Valantar said:


> No, you would need that not due to the size of the chip, but due to the 5700 XT having a 256-bit memory interface, and doubling the compute power necessitates doubling memory bandwidth too unless you want to intentionally bottleneck the chip. How many cores you have doesn't matter if they can't get data to process quickly enough. And there's no power tuning to be done in this case - 8GB of GDDR6 on a 256-bit bus consumes somewhere around 30-35W; twice that will consume twice the power unless you downclock the memory and sacrifice performance. I'm not talking about chip power consumption but the power consumption of the memory and its interface.


I'm surprised you are still going with this. It is obvious it would be necessary to get more bandwidth, but that wasn't the problem here. Making a 500 mm² chip is nothing out of the ordinary or extreme, and it can be done. Bandwidth is obvious and can be done as well. Power consumption is another story. You can tweak everything and balance it okay.
GDDR6 consumes 20 W for 16 GB; the same capacity in HBM2 is 10 W.


----------



## Valantar (Mar 16, 2020)

ratirt said:


> I'm surprised you are still going with this. It is obvious it would be necessary to get more bandwidth, but that wasn't the problem here. Making a 500 mm² chip is nothing out of the ordinary or extreme, and it can be done. Bandwidth is obvious and can be done as well. Power consumption is another story. You can tweak everything and balance it okay.
> GDDR6 consumes 20 W for 16 GB; the same capacity in HBM2 is 10 W.


I never said it couldn't be done, I said it would require a huge and expensive PCB and need a lot of power (which would necessitate lowering the power budget of the GPU, sacrificing performance). All of which is still true.


----------



## ratirt (Mar 16, 2020)

Valantar said:


> I never said it couldn't be done, I said it would require a huge and expensive PCB and need a lot of power (which would necessitate lowering the power budget of the GPU, sacrificing performance). All of which is still true.


And I never said it wouldn't require an expensive PCB and a lot more power. That was not the point; anyway, thanks for bringing this up.
It is possible, and we can only speculate about the outcome.


----------



## Valantar (Mar 16, 2020)

EarthDog said:


> There is nothing there for RDNA2 or Ampere, so I used what I will have for all comparison cards... what the manufacturer says. Once we see Ampere's flagship and Big Navi, we will deal with actual numbers.
> 
> Regardless of 50 W (~20%) or 24 W (~10%), the high-level point is unchanged... the RDNA arch on a smaller node is less efficient than Turing on a larger node. They have a lot of work to do to reclaim the performance crown and some work to regain performance/Watt. Where AMD only has an arch change, Nvidia is coming with both barrels loaded (arch and node shrink).


I didn't say there were numbers available for either of the two, but given how notoriously unreliable manufacturer specifications for power draw are, I would argue that the only reasonable thing to base our speculation on is _actual real-world numbers_, not wildly inaccurate specifications.

You're right that RDNA is still slightly less efficient in an absolute sense, though that depends on the implementation: the RX 5700 XT is slightly less efficient than the 2070S, but the 5600 XT (even with the new, boosted BIOS) beats its Nvidia competition by a few percent. Nvidia still (obviously!) has the more efficient architecture given their node disadvantage. But consider that AMD has historically struggled on perf/W, yet just launched a new arch with major perf/W improvements - not just from 7 nm, remember that the 5700 XT roughly matches the Radeon VII in performance at significantly less power draw on the same node, and with less efficient memory to boot. One might therefore assume there weren't further major efficiency improvements to be had right off the bat. Apparently AMD says there are. Which is surprising to me, at least.

Now, I'm not saying "Nvidia should be worried", as that's a silly statement implying that AMD is somehow going to surpass them out of the blue, but unless Nvidia manages to pull off their fifth consecutive round of significant efficiency improvements (beyond just the node change, that is) we might see AMD come close to parity if these rumors pan out. Of course we also might not, the rumors might be entirely wrong, or Nvidia might indeed have a major improvement coming - we have no idea.

It's also worth pointing out that your initial statement is rather self-contradictory - on the one hand you're saying we don't have data, so we should use manufacturer specs (for entirely different cards..?), while you also say "we will deal with actual numbers" (which I read as real-world test data) once they arrive. Why not then also use real-world numbers for currently available cards, rather than their specs (which are very often misleading, if not flat-out wrong)? Your latter statement implies that real-world data is better, so why not also use it for existing cards?



ratirt said:


> And i never said it wouldn't require expensive PCB and a lot more power. That was not the point, anyway thanks for bringing this up
> It is possible and we can only assume of the outcome.


Possible, yes. But AMD brought in HBM specifically as a way of increasing memory bandwidth without the massive PCBs and expensive, complex trace layouts required by 512-bit memory buses. Now, GDDR6 is much faster than GDDR5, but also more expensive, which somewhat alleviates the main pain point of HBM - cost. Add that GDDR6 needs even more complex traces than GDDR5, and it becomes highly unlikely that we'll ever see a GPU with a 512-bit GDDR6 bus - HBM2(E) is far more likely at that kind of performance (and thus price) level. You're welcome to disagree, but AMD's recent history doesn't agree with you.


----------



## EarthDog (Mar 16, 2020)

Valantar said:


> Now, I'm not saying "Nvidia should be worried", as that's a silly statement implying that AMD is somehow going to surpass them out of the blue, but unless Nvidia manages to pull off their fifth consecutive round of significant efficiency improvements (beyond just the node change, that is) we might see AMD come close to parity if these rumors pan out. Of course we also might not, the rumors might be entirely wrong, or Nvidia might indeed have a major improvement coming - we have no idea.





Valantar said:


> It's also worth pointing out that your initial statement is rather self-contradictory - on the one hand you're saying we don't have data so we should use manufacturer specs (for entirely different cards..?), while you also say "we will deal with actual numbers" (which I'm reading as real-world test data) once they arrive.


It was clear as Windexed glass. I am saying that instead of mixing and matching actual numbers, I simplified and went with manufacturer-listed specs. You are getting lost in details that aren't terribly relevant to the point. Take the deets away and see the forest for the trees, please.

Again, I wasn't really talking to you out of the gate, but to the Super XP guy who thinks Nvidia is going to be "worried". AMD has a long way to go, bud, no matter which way you slice the numbers. Nvidia has a die shrink and an arch change, while AMD has an arch change while adding RT hardware for the first time. I'm a betting man, and my money is on Nvidia being able to reach these rumored goals.

But yes, we have no idea... I knew that going into my first reply to Super XP... may have even said it there too... this merry-go-round is making me dizzy. I don't give 2 shits to split hairs and semantics which don't matter to the overall point.

AMD is currently behind in perf/W. Outside of the 5600 XT, which had to be tweaked the week before reviews, Navi is less efficient than Turing. At best, with the 5600 XT, it is on par/negligible differences. However, the budget 5500 XT and the (current) flagship 5700 XT are not as efficient. So there is that hurdle to overcome. Next, performance: a 46% increase to reach 2080 Ti speeds from a 5700 XT. If we use Turing's paltry gen-over-gen increase (25%), that means AMD needs to come close to a 71% performance increase to match Ampere. I'll call AMD's flagship 'close' to Nvidia's when it is within 10%. So let's say it needs a 61% improvement over the 5700 XT... I ask again, to all, have we ever seen a 61% performance increase from gen to gen? Maybe the 8800 GTS over a decade ago??? I don't recall....
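One nuance to the arithmetic above: generational gaps compound multiplicatively, not additively, so the real target sits a bit higher than 46% + 25%. A quick check, treating the post's 46% and 25% figures as given assumptions:

```python
# Compounding the generational gaps from the post: the 2080 Ti is taken to be
# ~46% faster than the 5700 XT, and Ampere's flagship ~25% above the 2080 Ti.

def required_gain(gap_to_2080ti: float = 0.46, ampere_over_2080ti: float = 0.25) -> float:
    """Total speedup a 5700 XT successor needs to match the Ampere flagship."""
    return (1 + gap_to_2080ti) * (1 + ampere_over_2080ti) - 1

to_match = required_gain()
within_10_pct = (1 + to_match) * 0.9 - 1  # landing 10% short of Ampere

print(f"to match Ampere: {to_match:.1%}")        # ~82.5%, vs. 71% if just added
print(f"to get within 10%: {within_10_pct:.1%}")  # ~64.3%
```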

So, for the last time....... 

Nvidia is sure as hell not worried about AMD. AMD has a lot of work to match/come close to what Ampere can bring in performance, and a bit less work - but work nonetheless - to take the overall perf/W crown. Can anyone refute those points?


----------



## Valantar (Mar 16, 2020)

EarthDog said:


> It was clear as Windexed glass. I am saying that instead of mixing and matching actual numbers, I simplified and went with manufacturer-listed specs. You are getting lost in details that aren't terribly relevant to the point. Take the deets away and see the forest for the trees, please.
> 
> Again, I wasn't really talking to you out of the gate, but to the Super XP guy who thinks Nvidia is going to be "worried". AMD has a long way to go, bud, no matter which way you slice the numbers. Nvidia has a die shrink and an arch change, while AMD has an arch change while adding RT hardware for the first time. I'm a betting man, and my money is on Nvidia being able to reach these rumored goals.
> 
> ...


I know I wasn't the one you were responding to, the reason I keep splitting hairs with you is that you keep making mismatched comparisons or false equivalencies or otherwise presenting stuff in a clearly unequal way. The statement I pointed out rather conspicuously says "we'll see what real-world numbers for future products tell us when they arrive, but for now, let's skip real-world numbers for existing products and go with specs instead!" Which is ... odd, to say the least. Why not use today's real-world numbers when they are readily available and clearly demonstrate specs to be inaccurate? Only one reason that I can see: that real-world numbers make Nvidia's advantage look smaller than on-paper specs.

Also, saying Navi is overall less efficient than Turing... well, that depends _massively_ on the implementation. First off, mentioning that the 5600 XT was tweaked just before launch rather works against your argument in this context, as it was tweaked to be far _less_ efficient by boosting clocks, with the pre-update BIOS being _by far the most efficient GPU TPU has ever tested_ at 1440p and 4k (not that it's a 4k-capable GPU, but it is definitely an entry-level 1440p card). In other words, depending on the implementation Navi can be both more and less efficient than Turing. Does that mean it's a more efficient _architecture_? Obviously not - the node advantage AMD has at this point means that Nvidia still has the obvious architecture advantage. But Navi has been demonstrated to be very efficient when it's not being pushed as far as it can possibly go.

That it scales well downwards is very promising in terms of a larger die being efficient at lower clocks. People keep talking about "AMD just needs X times the 5700 XT to beat the 2080 Ti", yet that would be a ~440 W GPU barring major efficiency improvements. 2x 5600 XT, on the other hand, would still beat the 2080 Ti handily (the latter is 60, 74 and 85% faster at 1080p, 1440p and 4k respectively), but at just ~330 W. Or you could use clocks closer to the original 5600 XT BIOS and still beat or nearly match it (2x 91 vs. 160%, 2x 91 vs. 174%, and 2x 90 vs. 185%, assuming perfect scaling, which is of course a bit optimistic) but at _just 250 W_! So yeah, don't discount the value of scaling down clocks to reach a performance target with a larger die. Just because the 5700 XT was pushed as far as it can go to compete as well as possible with the 2070 doesn't mean that AMD's next _large_ GPU will be pushed as far. They have a history of doing so, but that was with GCN, which had a hard limit of 64 CUs, meaning the _only_ way to improve performance was higher clocks. That no longer applies for RDNA.
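The doubling argument can be sketched numerically. The relative-performance figures are the ones quoted in the post (2080 Ti leading a single 5600 XT by 60/74/85%), and perfect 2x scaling is assumed, which is optimistic, so treat this as illustrative:

```python
# How a hypothetical doubled 5600 XT would compare to the 2080 Ti, using the
# post's figures for the 2080 Ti's lead over a single 5600 XT per resolution.

ti_lead_over_5600xt = {"1080p": 1.60, "1440p": 1.74, "4K": 1.85}

for res, lead in ti_lead_over_5600xt.items():
    doubled_vs_ti = 2.0 / lead  # two 5600 XTs' worth of shaders vs. one 2080 Ti
    print(f"{res}: 2x 5600 XT = {doubled_vs_ti:.2f}x the 2080 Ti")

# Even at 4K (the 2080 Ti's best case here), the doubled config comes out
# ~8% ahead - at roughly 330 W, versus ~440 W for a doubled 5700 XT.
```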

As I said above, I completely agree that saying "Nvidia should be worried" is silly, but you on the other hand seem to be consistently skewing things in favor of Nvidia, whether consciously or not.


----------



## EarthDog (Mar 16, 2020)

Valantar said:


> seem to be consistently skewing things in favor of Nvidia, whether consciously or not.


That surely isn't my intent, and I already explained the sourcing for my numbers (and what your 'actual' values add to the conversation - not much)... I'm not going to go over it a third time. You can split hairs and throw stones, but that doesn't change my point or endgame.

AMD is going to have a tough time beating Ampere on either front... one has arch + node, the other just arch.

Cheers.


----------



## Super XP (Mar 16, 2020)

EarthDog said:


> I didn't inflate anything intentionally. I compared apples to apples... their manufacturer ratings. My point remains.
> 
> I edited like 35 minutes before your post, lol... hit refresh before you post if it's sitting that long, lol.
> 
> EDIT: We have no idea how either RDNA2 or Ampere will respond versus its TBP. So I used a static value, the manufacturer ratings (sourced from TPU's spec pages on the cards). Actual use will vary, and how much will depend on the card... so again, I took the only static numbers out there that would not vary by card. I see the actual numbers are lower; they are at least 10% behind in that metric. *Still facing an uphill battle considering Nvidia has a node shrink in front of them along with a change in architecture.*


A change in architecture? Well, so does AMD. Last I heard, RDNA2 is brand new and will have little to do with RDNA1.



EarthDog said:


> That surely isn't my intent, and I already explained the sourcing for my numbers (and what your 'actual' values add to the conversation - not much)... I'm not going to go over it a third time. You can split hairs and throw stones, but that doesn't change my point or endgame.
> 
> AMD is going to have a tough time beating Ampere on either front... one has arch + node, the other just arch.
> 
> Cheers.


Not necessarily. AMD has the node advantage here; they have 7 nm experience. Nvidia does not.


----------



## Valantar (Mar 16, 2020)

Super XP said:


> A change in architecture? Well, so does AMD. Last I heard, RDNA2 is brand new and will have little to do with RDNA1.


Well, that's just plain wrong. RDNA 2 is still RDNA - fully implemented RDNA (likely including various tweaks, optimizations and improvements) - while RDNA (1) is RDNA with some features omitted and some minor parts of GCN kept to ensure it could launch in a reasonable time. That of course doesn't mean RDNA 2 can't or won't be a major update - at this point I think it will be, given how AMD talks about it and the performance of the new Xbox shown off today - but it is still very much related to RDNA (1).



Super XP said:


> Not necessarily, AMD has the Node advantage here, they have 7nm experience. Nvidia does not.


Experience with a node doesn't matter much unless it's a bleeding-edge node. 7nm isn't that any more, it's quite mature. TSMC can guide Nvidia through any issues they might have, in fact they have engineering teams specifically for this.


EarthDog said:


> That surely isn't my intent, and I already explained the sourcing for my numbers (and what your 'actual' values add to the conversation - not much)... I'm not going to go over it a third time. You can split hairs and throw stones, but that doesn't change my point or endgame.
> 
> AMD is going to have a tough time beating Ampere on either front... one has arch + node, the other just arch.
> 
> Cheers.


Definitely don't mean to throw any stones, just pointing out what looked like a consistent slant in what you were saying. I entirely agree that AMD will have a hard time beating Ampere, but I do think there's reason to expect them to get pretty close this time around, and I don't think launching a true flagship-level GPU will be an issue for them this go-around, even if it would then be >=60% faster than the upper-midrange "flagship" of the previous generation. We might see parity, or AMD a bit behind and cheaper; the chance of them being outright ahead is by far the slimmest of the three, though it looks more possible now than at any time since 2015 (which, on the other hand, isn't saying much). It'll nonetheless be a very exciting release cycle (especially with new consoles bringing a lot of goodness to cross-platform games).


----------



## EarthDog (Mar 16, 2020)

Super XP said:


> A change in architecture? Well, so does AMD. Last I heard, RDNA2 is brand new and will have little to do with RDNA1.
> 
> 
> Not necessarily. AMD has the node advantage here; they have 7 nm experience. Nvidia does not.


In each and every post I've mentioned both have architectural improvements to be had...

And the node advantage doesn't mean much here. Even if you potato your way onto a lower node, there are still inherent efficiency gains to be had. If there's a sponge with more left to squeeze, it seems to be Nvidia's, considering the node shrink on top of a new arch. AMD is also adding ray tracing cores; if their addition is anything like Nvidia's, it will be lucky to reach 2080 Ti speeds.

As I said, I'll bet it lands between a 2080 Ti and the Ampere flagship. I believe it will fall at least 10% short of Ampere on performance alone (no clue on RT performance; likely the same idea... faster than the 2080 Ti, slower than Ampere) and slightly worse performance-per-watt overall. Pricing on these parts, from both parties, will be paramount in choosing the right card... and AMD will surely be a worthy competitor and offer viable options.


----------



## Super XP (Mar 16, 2020)

EarthDog said:


> In each and every post I've mentioned both have architectural improvements to be had...
> 
> And the node advantage doesn't mean much here. Even if you potato your way onto a lower node, there are still inherent efficiency gains to be had. If there's a sponge with more left to squeeze, it seems to be Nvidia's, considering the node shrink on top of a new arch. AMD is also adding ray tracing cores; if their addition is anything like Nvidia's, it will be lucky to reach 2080 Ti speeds.
> 
> As I said, I'll bet it lands between a 2080 Ti and the Ampere flagship. I believe it will fall at least 10% short of Ampere on performance alone (no clue on RT performance; likely the same idea... faster than the 2080 Ti, slower than Ampere) and slightly worse performance-per-watt overall. Pricing on these parts, from both parties, will be paramount in choosing the right card... and AMD will surely be a worthy competitor and offer viable options.


I agree, there isn't really a node advantage per se, but I only said there was because of this post: "AMD is going to have a tough time beating Ampere on either front... one has arch + node, the other just arch."
I assume you meant that Nvidia would have an arch + node advantage over the other (AMD), which has just arch? Because AMD is already on 7 nm, whereas Nvidia currently is not. If that is what you mean, then you are saying that Nvidia has a node advantage over AMD. Which is why I said AMD has more 7 nm experience, which would render Nvidia's so-called node advantage obsolete.

Correct me if I am wrong of course.



Valantar said:


> Well that's just plain wrong. RDNA 2 is still RDNA, just fully implemented RDNA (and likely including various tweaks, optimizations and improvements), while RDNA (1) is RDNA with some features omitted and some minor parts of GCN kept to ensure it could launch in a reasonable time. That of course doesn't mean RDNA 2 can't or won't be a major update - at this point I think it will be, given how AMD talks about it and the performance of the new Xbox shown off today - but it is still very much related to RDNA (1).
> 
> 
> Experience with a node doesn't matter much unless it's a bleeding-edge node. 7nm isn't that any more, it's quite mature. TSMC can guide Nvidia through any issues they might have, in fact they have engineering teams specifically for this.
> ...


Fully agree.
We will definitely get more concrete details about both RDNA2 & Ampere. It's going to be a very interesting 2020. Hopefully COVID-19 doesn't slow down the AMD and Nvidia GPU launches, because many are itching for new GPUs.


----------



## Valantar (Mar 16, 2020)

Super XP said:


> I agree, there isn't really a node advantage per se, but I only said there was because of this post: "AMD is going to have a tough time beating Ampere on either front... one has arch + node, the other just arch."
> I assume you meant that Nvidia would have an arch + node advantage over the other (AMD), which has just arch? Because AMD is already on 7 nm, whereas Nvidia currently is not. If that is what you mean, then you are saying that Nvidia has a node advantage over AMD. Which is why I said AMD has more 7 nm experience, which would render Nvidia's so-called node advantage obsolete.
> 
> Correct me if I am wrong of course.
> ...


Not an advantage over AMD, but an efficiency gain over their own previous GPU.


----------



## kings (Mar 16, 2020)

Super XP said:


> I agree, there isn't really a node advantage per se, but I only said there was because of this post: "AMD is going to have a tough time beating Ampere on either front... one has arch + node, the other just arch."
> I assume you meant that Nvidia would have an arch + node advantage over the other (AMD), which has just arch? Because AMD is already on 7 nm, whereas Nvidia currently is not. If that is what you mean, then you are saying that Nvidia has a node advantage over AMD. Which is why I said AMD has more 7 nm experience, which would render Nvidia's so-called node advantage obsolete.
> 
> Correct me if I am wrong of course.



He is saying that AMD has already played the 7 nm card; from here they will have to rely mainly on their architecture, while Nvidia, in addition to the inherent gains of a new architecture, will still gain something more from the 12 nm -> 7 nm migration.


----------



## EarthDog (Mar 16, 2020)

Super XP said:


> ...which would render Nvidia's so called node advantage obsolete.


It doesn't, though. I've said it directly... used a sponge analogy, lol... we'll just have to agree to disagree.



kings said:


> He is saying that AMD has already played the 7 nm card; from here they will have to rely mainly on their architecture, while Nvidia, in addition to the inherent gains of a new architecture, will still gain something more from the 12 nm -> 7 nm migration.


This! Maybe after seeing it five times that point will land.


----------



## Super XP (Mar 17, 2020)

EarthDog said:


> It doesn't, though. I've said it directly... used a sponge analogy, lol... we'll just have to agree to disagree.
> 
> This! Maybe after seeing it five times that point will land.






Nvidia waiting for AMD's Big Navi, lol..


----------



## efikkan (Mar 17, 2020)

Nvidia has nothing to worry about unless their next-gen somehow gets delayed.
Nvidia might be holding off on finalizing the timing, pricing and segmentation until they know more, but if so, this is to position themselves, not out of concern. When rumors point in every direction, it's usually a sign that they're all speculation, and Nvidia probably doesn't know quite what to expect.

But I don't think Nvidia's next-gen is imminent. Everything seems to point to it being months away.


----------



## Super XP (Mar 17, 2020)

efikkan said:


> Nvidia has nothing to worry about unless their next-gen somehow gets delayed.
> *Nvidia might be holding off on finalizing the timing, pricing and segmentation until they know more, but if so, this is to position themselves, not out of concern.* When rumors point in every direction, it's usually a sign that they're all speculation, and Nvidia probably doesn't know quite what to expect.
> 
> But I don't think Nvidia's next-gen is imminent. Everything seems to point to it being months away.


I agree, which is why I posted that picture. Nvidia is waiting for AMD's Big Navi, because they know it's going to be very fast. What they do not know is how fast, and nobody knows that but AMD at the moment, regardless of rumors and speculation. I think AMD will release its RDNA2 GPUs first and set the price tone. If they overprice, as they've done in the past, they will probably get burned by Nvidia's Ampere pricing, so it's important for AMD not to overprice. The same goes for Nvidia; they should not overprice given what the competition has pending.
2020 will be a great year for new GPUs. Can't wait.


----------



## EarthDog (Mar 17, 2020)

Super XP said:


> I agree


Glad you jumped off the 'because they are worried' boat!

The waiting to finalize clocks/specs is quite normal. But it's not like they are sitting there, ready to go, waiting on AMD to release. They, naturally, are not ready.


----------



## ARF (Mar 17, 2020)

EarthDog said:


> Anyway, just to get to 2080 Ti FE speeds from their current 5700 XT flagship is a 46% increase. To go another 25-40% faster would be a 71-86% increase. Have we ever seen that in the history of GPUs? A 71% increase from previous-gen flagship to current-gen flagship?



Check Cypress (334 mm²) and Juniper (166 mm²). Juniper is exactly 50% of the performance of Cypress on the 40 nm node.








ATI Radeon HD 5750 Specs (www.techpowerup.com): ATI Juniper, 700 MHz, 720 cores, 36 TMUs, 16 ROPs, 1024 MB GDDR5, 1150 MHz, 128-bit

ATI Radeon HD 5870 Specs (www.techpowerup.com): ATI Cypress, 850 MHz, 1600 cores, 80 TMUs, 32 ROPs, 1024 MB GDDR5, 1200 MHz, 256-bit
				




These are the same generation, the same micro-architecture, just scaled up and down.

RX 5700 XT is heavily overvolted out of the box, pushed well beyond its sweet spot. It's not an upper-mid-range but a lower-mid-range card.
Its real power consumption should be no more than 180-190 W, and even then it's too much.

Navi 21 at 505 mm² should have 100% more shaders at 50% higher power consumption, and 50% higher performance-per-watt, too.

Anything less than 80-100% higher performance than Navi 10 would be a major fail.

And where are your sources saying Nvidia is on track to deliver next-gen cards?
Because we hear exactly nothing and see no signs of anything in physical existence from them.


----------



## EarthDog (Mar 17, 2020)

ARF said:


> And where are your sources saying Nvidia is on track to deliver next-gen cards?
> Because we hear exactly nothing and see no signs of anything in physical existence from them.


I don't believe I've ever said that...?

Regarding the rest of your post... read on after the post of mine you quoted. People have said that, and I've already responded to it.



ARF said:


> Anything less than 80-100% higher performance than Navi 10 would be a major fail.


wow... 80%+ or bust ehh? That's the most optimistic take I've heard.


----------



## ARF (Mar 17, 2020)

EarthDog said:


> wow... 80%+ or bust ehh? That's the most optimistic take I've heard.



I would be happy for 110-120% higher performance than the vanilla RX 5700 XT.


----------



## Super XP (Mar 18, 2020)

EarthDog said:


> Glad you jumped off the 'because they are worried' boat!
> 
> The waiting to finalize clocks/specs is quite normal. But it's not like they are sitting there, ready to go, waiting on AMD to release. They, naturally, are not ready.


Here is the exact source for why I originally said Nvidia may be worried or something. Which they are not.

*Nvidia is supposedly getting a little nervous* - but it then states this is not in terms of being worried, but because Nvidia may have to alter its next-gen GPU specifications to ensure they have enough to combat the Big Navi GPU.
At time 3:17, or listen from 3:00 to 4:00, about a minute.


----------



## Valantar (Mar 18, 2020)

I have to say that (without having done the numbers very thoroughly) the XSX APU makes it seem like AMD has managed some significant density gains with RDNA 2 on the tweaked 7 nm node. Navi 10 is 251 mm² with 40 CUs. The XSX APU is 360 mm² with 56 CUs (52 in use, to reduce the die discard rate). Discounting everything else, that sounds like similar density (6.3 vs. 6.4 mm² per CU), but the XSX APU also has a full 8-core Zen 2 CPU in there, which eats a significant portion of that die area. Sure, it likely cuts down on a lot of PC-centric stuff (less I/O etc.), but not by much, and not enough to really matter. It also has RT hardware in there.
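The rough density math above, spelled out (die sizes and CU counts as quoted; the XSX figure overstates the GPU's share of area since the same die also carries the CPU, I/O and RT hardware):

```python
# Area per CU from the post's numbers. Note the XSX divisor uses all 56
# physical CUs, and the APU die also houses an 8-core Zen 2 CPU and I/O,
# so the effective GPU-only area per CU is lower than this.

navi10_mm2_per_cu = 251 / 40   # Navi 10: 251 mm², 40 CUs
xsx_mm2_per_cu = 360 / 56      # XSX APU: 360 mm², 56 physical CUs

print(f"Navi 10: {navi10_mm2_per_cu:.2f} mm2/CU")
print(f"XSX APU: {xsx_mm2_per_cu:.2f} mm2/CU (including CPU + I/O area)")
```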

Makes me rather curious to see the sizes of RDNA2 GPUs for PC.


----------



## ARF (Mar 18, 2020)

Valantar said:


> I have to say that (without having done the numbers very thoroughly) the XSX APU makes it seem like AMD has managed some significant density gains with RDNA 2 on the tweaked 7 nm node. Navi 10 is 251 mm² with 40 CUs. The XSX APU is 360 mm² with 56 CUs (52 in use, to reduce the die discard rate). Discounting everything else, that sounds like similar density (6.3 vs. 6.4 mm² per CU), but the XSX APU also has a full 8-core Zen 2 CPU in there, which eats a significant portion of that die area. Sure, it likely cuts down on a lot of PC-centric stuff (less I/O etc.), but not by much, and not enough to really matter. It also has RT hardware in there.
> 
> Makes me rather curious to see the sizes of RDNA2 GPUs for PC.



505 sq.mm. https://www.ptt.cc/bbs/PC_Shopping/M.1577766534.A.08E.html


----------



## Flanker (Mar 18, 2020)

ARF said:


> 505 sq.mm. https://www.ptt.cc/bbs/PC_Shopping/M.1577766534.A.08E.html


English source








AMD's High-End 'Radeon RX' Navi 21 GPU Rumors: Twice As Fast As Navi 10, 505mm2 Die Size, Faster GDDR6 Memory (wccftech.com): AMD's Navi 21 GPU based high-end Radeon RX graphics cards are rumored to be twice as fast as Navi 10 'RX 5700 XT' and feature GDDR6 memory.


----------



## ARF (Mar 18, 2020)

Flanker said:


> English source



That's not a source; you can just right-click and translate the original link into any language you want.


----------



## Flanker (Mar 18, 2020)

ARF said:


> That's not a source, you can just right-click your mouse button and translate the original link to any language you want.


But that link you posted cited the link I posted.


----------



## Valantar (Mar 18, 2020)

ARF said:


> 505 sq.mm. https://www.ptt.cc/bbs/PC_Shopping/M.1577766534.A.08E.html





Flanker said:


> English source
> 
> 
> 
> ...


Beyond this being a random post on a random BBS with zero reason for us to believe it ("According to people familiar with the matter at the Taiwan PTT Forum", lol), it contains some _very_ questionable assertions ("It was also pointed out that given the huge Die size of the GPU itself, the card will eventually not use HBM, but instead rely on GDDR6") - yet this die is reportedly significantly _smaller_ than Fiji, which used HBM, and there's no reason two stacks of HBM2(E) wouldn't fit just fine next to a 505mm² die. Also, there's nothing new in that rumor; it's been rehashed over and over again on these forums and elsewhere. Still, let's be generous and assume it's somewhat accurate. The question then becomes: 505mm² _of what_?

The density gains of the XSX would indicate more than 1:1 scaling from Navi 10, i.e. a 505mm2 chip would either have >80 CUs or some other stuff added on that we don't yet know about. Let's look closer at this.

Navi 10 has 40 CUs, a 256-bit G6 bus and a single IF/PCIe 4.0 x16 link on a 251mm² die. The XSX die is 360mm² with 56 CUs, 8 Zen 2 cores, and I/O including a 320-bit G6 bus. A Zen 2 CCD is 74 mm²: two 31.3 mm² CCXes (each including 16MB of L3), plus IF links and anything else that lives on that die. Let's be conservative and discount L3 completely: the XSX then uses at least 2 x 31.3 mm² = 62.6mm² of die area for its CPU cores (likely a bit more, as it won't have zero L3 cache, though it will also likely gain density from the node improvement; some space will also be used for the IF links between the CPU, GPU and memory controllers). This leaves us with at most 360mm² - 63mm² = 297mm² for 56 CUs, all encode/decode blocks (which, given the importance of streaming, are likely to be fully featured and not cut down), a 320-bit GDDR6 PHY + controllers (compared to the 256-bit PHY and controllers of Navi 10, so 25% more die area for that), and at least two PCIe links for SSDs (unknown whether these are PCIe 3.0 x4, PCIe 4.0 x2 or PCIe 4.0 x4 at this point), plus the chipset uplink etc. While the XSX does gain something from having slightly less I/O than a PC GPU, those gains are minor at best. Ignoring that, we have a 25% increase in VRAM PHY die area plus a 40% increase in CUs with just an ~18% increase in die size (with the CPU subtracted, that is). And that _includes_ RT hardware.

While this is some real napkin math (we have no idea if anything beyond the die sizes here is actually accurate, but IMO they shouldn't be too far off), it tells us that a 505mm² RDNA 2 GPU on the same improved 7nm node as the XSX either _must_ have more than 80 CUs - if the scaling roughly follows my calculations, a 100% area increase would be more like a 120% increase in CUs, or ~95 CUs - or use _a lot_ of die area for something else. Might we see significantly more RT power relative to shader performance in the PC GPUs? Also, if it uses HBM2 rather than a stupidly large 512-bit G6 bus (which IMO sounds likely, despite what that BBS post says), the CU count could grow further (100?), as HBM controllers and PHYs are much more space efficient than G6.
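For what it's worth, the napkin math above is easy to reproduce (a Python sketch; the die sizes and the 31.3 mm² CCX figure are the rumored/die-shot numbers quoted in this thread, so the outputs inherit all of that uncertainty):

```python
# Reproducing the napkin math above. Every input is a rumored or
# die-shot-derived figure quoted in this thread, so treat all
# outputs as speculative.
navi10_area, navi10_cus = 251.0, 40    # mm², CUs (Navi 10, 256-bit bus)
xsx_area, xsx_cus = 360.0, 56          # mm², physical CUs (52 enabled)
zen2_ccx_area = 31.3                   # mm² per Zen 2 CCX (die-shot figure)

xsx_cpu_area = 2 * zen2_ccx_area                # ≈ 62.6 mm² for the 8 cores
xsx_gpu_area = xsx_area - xsx_cpu_area          # ≈ 297 mm² left for GPU + I/O

cu_increase = xsx_cus / navi10_cus - 1          # +40% CUs vs Navi 10
area_increase = xsx_gpu_area / navi10_area - 1  # only ~+18% die area
bus_increase = 320 / 256 - 1                    # +25% G6 PHY/controller width

print(f"+{cu_increase:.0%} CUs and +{bus_increase:.0%} bus width "
      f"in +{area_increase:.0%} die area (CPU subtracted)")
```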

Still, with all of this within the realm of (IMO) reasonable speculation - and it is _very much_ speculation at this point - we have no idea about power, clocks, or anything else. Performance would vary _wildly_ based on all of this. Pricing is also crucial, and a 505mm² die on TSMC 7nm is not going to be cheap. So, as I've said both here and elsewhere, I don't see a reason to doubt that AMD can bring out a true flagship this generation, but both the absolute performance and the pricing are entirely up in the air at this point, as is its competitiveness with Nvidia's so far entirely unknown Ampere arch. There's absolutely no indication in any of this that it will _beat_ Ampere, simply because Ampere is entirely unknown. But will it be powerful? Absolutely.


*Edit: *I borked my numbers from about halfway through by calculating from the 52 active CUs in the XSX die rather than the 56 physically present ones. Fixed that; also added a note about possibly using "free" die space for more RT power compared to consoles.


----------



## ARF (Mar 18, 2020)

The RT hardware is very important. RTX 2080 Ti can do 10 Giga Rays of ray tracing performance and 78 trillion RTX-OPS.

I am quite sure that IF AMD wants to be competitive, they will design the Navi 2 GPU to be in line, performance-wise, with whatever comes next after Turing.
I mean, it should be easy for them to take all the data they can gather on previous generations and calculate an appropriate performance window where the Turing successor will likely fall.

They did it with Zen. And they said that Zen targets the performance level where they expected Skylake-next-gen to be.


----------



## Valantar (Mar 18, 2020)

ARF said:


> The RT hardware is very important. RTX 2080 Ti can do 10 Giga Rays of ray tracing performance and 78 trillion RTX-OPS.
> 
> I am quite sure that IF AMD wants to be competitive, they will design the Navi 2 GPU to be in line performance-wise with what comes next after Turing.
> I mean it should be easy for them to take all the data they can gather on the previous generations and calculate an appropriate performance window range where the Turing successor will likely fall.
> ...


That, my friend, is what is called an _estimate_. Which for all intents and purposes is a qualified guess. A single event in the real world will likely fall within a certain margin of a statistical estimate, but it might well not, as statistics are a post hoc phenomenon; they only chart what has happened and can be used to estimate (i.e. guess) what will happen in the future. An estimate can be 100% correct or wildly inaccurate, there's no way of knowing until the thing being estimated becomes a reality. Generational GPU performance increases have been anywhere from near nothing to revolutionary, and there really isn't any reliable way of knowing which one is coming next.

I mean, sure, AMD has _obviously_ been working on their next flagship GPU based on an estimate of where Nvidia's competing architecture will be in terms of performance. But so what? They're still going to make the best products they can within the constraints of die size/cost/power/thermals for the high end, with everything else being spaced downwards to be competitive while producing sufficient margins and selling well. Only pricing (and thus margins) and the specifics of cut-down SKUs is really dependent on the competition.


----------



## ARF (Mar 19, 2020)

ARF said:


> The RT hardware is very important. RTX 2080 Ti can do 10 Giga Rays of ray tracing performance and *78 trillion RTX-OPS*.
> 
> I am quite sure that IF AMD wants to be competitive, they will design the Navi 2 GPU to be in line performance-wise with what comes next after Turing.
> I mean it should be easy for them to take all the data they can gather on the previous generations and calculate an appropriate performance window range where the Turing successor will likely fall.
> ...



Nah, that has to be a typo. 78 billion, not trillion.
The next-gen XBox will do 380 billion:
"the hardware acceleration for ray tracing maps traversal and _intersection_ of light at a rate of up to _380 billion intersections_ per second"








Inside Xbox Series X: the full specs (www.eurogamer.net): This is it. After months of teaser trailers, blog posts and even the occasional leak, we can finally reveal firm, hard …
				




Specs (all clocks are fixed, silicon is custom):

12.155 TFLOPs
AMD Zen 2 8c/16t @ 3.6-3.8 GHz - SMT can be disabled for a 3.8 GHz clock or enabled for a 3.6 GHz clock
16 GB GDDR6 ECC (!!!)
52 CU / 3328-shader GPU @ 1,825 MHz
Memory bandwidth: 10GB at 560GB/s, 6GB at 336GB/s
7nm - _NOT EUV_
1TB NVMe SSD storage
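The two bandwidth figures line up with 14 Gbps GDDR6 on the quoted bus: the 10 GB pool spans the full 320-bit interface, while the 6 GB pool reportedly sits on only 6 of the 10 channels (192 bits). A quick sanity check (the helper function name is mine):

```python
def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(gddr6_bandwidth_gbs(320, 14))  # 560.0 -> the 10 GB "fast" pool
print(gddr6_bandwidth_gbs(192, 14))  # 336.0 -> the 6 GB "slow" pool
```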


		https://www.reddit.com/r/Amd/comments/fjkkev


----------



## Super XP (Mar 19, 2020)

Don't need 7nm EUV, the enhanced 7nm version is more than enough and efficient.


----------



## EarthDog (Mar 19, 2020)

Super XP said:


> Don't need 7nm EUV, the enhanced 7nm version is more than enough and efficient.


Here's to hoping that's true... depends on where they tweak these, eh? I mean, the 5500 XT and 5700 XT aren't winning power/performance, but the 5600 XT is matching it... 

... and Nvidia is still on 12nm.


----------



## Super XP (Mar 19, 2020)

EarthDog said:


> Here is to hoping that is true... depends on where they tweak these, eh? I mean 5500 XT and 5700 XT aren't winning power /performance, but the 5600 XT is matching it...
> 
> ... and Nvidia is still on 12nm.


I'm only going by AMD's explanation of why they changed the roadmaps from 7nm+ to just 7nm. They said it's an enhanced, refined 7nm in comparison to the 5700 XT's 7nm process node.
The 5700 XT is based on RDNA1. The upcoming RDNA2 is far more efficient. The Xbox Series X shows how efficient and fast it is, and that's a limited sample size APU. Lol

Nvidia should see a massive efficiency lift from 12nm all the way down to 7nm - and some claim even 10nm or 8nm.









Rumor: NVIDIA GeForce Ampere to be fabbed at 10nm, all cards RTX? (www.guru3d.com): We'll probably go back and forth a bit when it comes to the topic of Ampere until NVIDIA lifts all mystery, expected was that NVIDIA's upcoming GPUs would be fabbed at 7nm. However, that fabrication...


----------



## EarthDog (Mar 19, 2020)

Super XP said:


> I'm only going by AMDs explanation on why they changed the roadmaps from 7nm+ to just 7nm. They said it's an enhanced refined 7nm in comparison to the 5700XT 7nm process node.
> The 5700XT is based on RDNA1. The upcoming RDNA2 is far more efficient. The XboxSX shows how efficient and fast it is, and that's a limited sample size APU. Lol
> 
> Nvidia should see a massive efficiency lift from 12nm all the way down to 7nm and some claim even 10nm and 8nm.
> ...


Your unfettered optimism over AMD never ceases.


----------



## Super XP (Mar 19, 2020)

EarthDog said:


> Your unfettered optimism over AMD never ceases.


Intel and Nvidia can afford to fall behind. AMD cannot. But I'm optimistic because of the XSX specs.


----------



## EarthDog (Mar 19, 2020)

Super XP said:


> Intel and Nvidia can afford to fall behind. AMD cannot. But I'm optimistic because of the XSX specs.


You were optimistic from silly rumors already!


----------



## Super XP (Mar 19, 2020)

EarthDog said:


> You were optimistic from silly rumors already!


Everything starts from a rumor in this industry. Plus I already knew that RDNA2 was going to be a "Major" difference and have "Major" efficiency gains in comparison to GCN. Most industry sources even dating back to 2018 already knew this, despite being a rumor or speculation. 

I'm sure you already knew this too. It's all about Next Generation Gaming Consoles.


----------



## EarthDog (Mar 19, 2020)

Super XP said:


> Plus I already knew that RDNA2 was going to be a "Major" difference and have "Major" efficiency gains in comparison to GCN.


It fookn better... that was 2 generations ago........lolololol...useless talking point, man. You seem to still expect a node shrink + arch update results from a simple arch update...

I bet it REALLY has more efficiency than the generation before that too!!! lol


----------



## Slizzo (Mar 19, 2020)

ARF said:


> Nah, that has to be a typo. 78 billion, not trillion.
> The next-gen XBox will do 380 billion:
> "the hardware acceleration for ray tracing maps traversal and _intersection_ of light at a rate of up to _380 billion intersections_ per second"
> 
> ...



It's 78 RTX Ops. No billion, no trillion, not even thousand.









Nvidia's Turing Architecture Explored: Inside the GeForce RTX 2080 (www.tomshardware.com): Nvidia's Turing architecture is loaded with new technology, plus features that improve performance in existing games. We step through the design's capabilities and introduce three Turing-based GPUs powering GeForce RTX graphics cards.


----------



## ARF (Mar 19, 2020)

Slizzo said:


> It's 78 RTX Ops. No billion, no trillion, not even thousand.
> 
> 
> 
> ...



How does this 10 GRays/s and 78 RTX Ops compare to Xbox's 380 billion I/s?


----------



## Slizzo (Mar 19, 2020)

ARF said:


> How does this 10 GRays/s and 78 RTX Ops compare to Xbox's 380 billion I/s?



It's very difficult to compare those numbers: even when they're quoted on the same scale, they're calculated differently per vendor and per generation. Two different generations of GPU can be quoted at the same basic TFLOP output, yet the newer one can be 30% faster in everything.


----------



## Super XP (Mar 19, 2020)

EarthDog said:


> It fookn better... that was 2 generations ago........lolololol...useless talking point, man. You seem to still expect a node shrink + arch update results from a simple arch update...
> 
> I bet it REALLY has more efficiency than the generation before that too!!! lol


Are you assuming that RDNA2 is a very minor RDNA1 update? If you are, then that's a LMAO to you.

According to sources and AMD themselves, RDNA2 is an architecture overhaul. What does Nvidia's GPU architecture have to do with RDNA2? Absolutely nothing lol, but you seem to be a little confused or too Nvidia biased. To each their own, I suppose. I'll follow the evidence; you can continue to follow the fantasies.


----------



## EarthDog (Mar 19, 2020)

Super XP said:


> Are you assuming that RDNA2 is a very minor RDNA1 update? If you are, then that's a LMAO to you.
> 
> According to sources and AMD themselves, RDNA2 is an architecture overhaul. What does Nvidia's GPU architecture have to do with RDNA2? Absolutely nothing lol, but you seem to be a little confused or too Nvidia biased. To each their own, I suppose. I'll follow the evidence; you can continue to follow the fantasies.


lol, no... we've gone over this already... what are you not comprehending? Are you losing something in translation now?

Lol fantasies...lolololwtfbbqfanboysos


----------



## Super XP (Mar 19, 2020)

EarthDog said:


> lol, no... we've gone over this already... what are you not comprehending? Are you losing something in translation now?
> 
> Lol fantasies...lolololwtfbbqfanboysos


You are an Nvidia fanboy. It's OK to like a company over another. Congratulations.


----------



## EarthDog (Mar 19, 2020)

Super XP said:


> You are an Nvidia fanboy. It's OK to like a company over another. Congratulations.


I'm not at all. Nothing I've said even implies such a thing. Look at my previous posts! I used the same information you did, but others here are talking your points down off a ledge, not mine. I play both sides against the middle... my posts support that. Your highly optimistic opinion is the one rooted in rumor and assumption. I have no expectations of this product beyond it being competitive.

It's like groundhog day with you, lol.


----------



## Master Tom (Mar 19, 2020)

R0H1T said:


> Oh in that case AMD should fire the wise guy who made that slide, I mean it makes little sense even now!



What is the problem?


----------



## Super XP (Mar 19, 2020)

EarthDog said:


> I'm not at all. Nothing I've said even implies such a thing. Look at my previous posts! I used the same information you did, but others here are talking your points down off a ledge, not mine. I play both sides against the middle... my posts support that. Your highly optimistic opinion is the one rooted in rumor and assumption. I have no expectations of this product beyond it being competitive.
> 
> It's like groundhog day with you, lol.


My highly optimistic opinions? Not my opinions, but those of actual tech sites that are quite optimistic about RDNA2. Here's a few: TechPowerUp, Wccftech, PCWorld, ExtremeTech, Guru3D, VideoCardz, AnandTech, TechRadar, etc.

What we currently know today, verified by Microsoft, is the Xbox Series X console specification. Of course, by the time this console is released things could change architecturally to a certain extent, but right now the performance figures via RDNA2 are quite impressive coming from an APU, not even a discrete GPU, ya know?


----------



## EarthDog (Mar 19, 2020)

Super XP said:


> Here's a few, TechPowerUp, Wccftech, PCWorld, Extreme Tech, Guru3D, videocardz, Anandtech, techradar etc.,


Anand - https://www.anandtech.com/show/1559...-comes-this-year-with-50-improved-perfperwatt
guru3d - https://www.guru3d.com/news-story/a...admaps-and-a-possible-teaser-of-big-navi.html
TPU - https://www.techpowerup.com/264538/...re-detailed-offers-50-perf-per-watt-over-rdna
techradar - https://www.techradar.com/news/amd-big-navi-isnt-coming-until-the-end-of-2020
videocardz - https://videocardz.com/newz/amd-speaks-rdna2-rdna3-zen3-and-zen4-announces-new-roadmaps

Weird how one man's sites that are "quite" optimistic are another man's "they just regurgitated the rumor we've been talking about and nothing more".   

If I missed an article showing this high level of optimism you say some had, please link it. I gave up after those five. 

EDIT: The best piece of news you have is the Xbox being able to run 4K 60 fps... which is easier on a console than it is on a PC... but also puts it on 2080 Super/2080 Ti level.


----------



## Super XP (Mar 20, 2020)

EarthDog said:


> Anand - https://www.anandtech.com/show/1559...-comes-this-year-with-50-improved-perfperwatt
> guru3d - https://www.guru3d.com/news-story/a...admaps-and-a-possible-teaser-of-big-navi.html
> TPU - https://www.techpowerup.com/264538/...re-detailed-offers-50-perf-per-watt-over-rdna
> techradar - https://www.techradar.com/news/amd-big-navi-isnt-coming-until-the-end-of-2020
> ...


Well of course they are reporting from AMD's own Financial Analyst Day with the information that was available. Then each and every site takes that information and applies their opinions and tech expertise on the matter. Microsoft claims 4K 60 FPS, and no, it is not easier to achieve that on a console; it's easier to achieve on a custom-built gaming PC. No console that exists today can do a guaranteed 4K 60 FPS. NONE, unless you are aware of some mysterious console which nobody has heard of.


----------



## EarthDog (Mar 20, 2020)

Super XP said:


> Well of course they are reporting from AMD's own Financial Analyst Day with the information that was available. Then each and every site takes that information and applies their opinions and tech expertise on the matter. Microsoft claims 4K 60 FPS, and no, it is not easier to achieve that on a console; it's easier to achieve on a custom-built gaming PC. No console that exists today can do a guaranteed 4K 60 FPS. NONE, unless you are aware of some mysterious console which nobody has heard of.


I'm waiting for the links you speak of where the sites say they are optimistic. I went and looked at 5 of the sites you mentioned but didn't see it....

Under a similar premise as iPhones: when you have one configuration to code for and work with, you can get more out of the hardware you have. So in that respect, it is in fact easier to get 4K60 out of a console. I wasn't talking about hardware specs, but about being able to do more with the hardware due to having a single configuration like a console. "Whoooosh" as @Vayra86 says! 

Again, it's 4K60 capable and on par with a 2080 Super/2080 Ti. That's a start! I'd love to see what the discrete cards will do. Is it 80-100% like someone else said? Is it between a 2080 Ti and Ampere's flagship, leaning towards the 2080 Ti? Is it as fast as Ampere's flagship? I guess we'll find out in several months.


----------



## Super XP (Mar 20, 2020)

EarthDog said:


> I'm waiting for the links you speak of where the sites say they are optimistic. I went and looked at 5 of the sites you mentioned but didn't see it....
> 
> Under a similar premise as iPhones: when you have one configuration to code for and work with, you can get more out of the hardware you have. So in that respect, it is in fact easier to get 4K60 out of a console. I wasn't talking about hardware specs, but about being able to do more with the hardware due to having a single configuration like a console. "Whoooosh" as @Vayra86 says!
> 
> Again, it's 4K60 capable and on par with a 2080 Super/2080 Ti. That's a start! I'd love to see what the discrete cards will do. Is it 80-100% like someone else said? Is it between a 2080 Ti and Ampere's flagship, leaning towards the 2080 Ti? Is it as fast as Ampere's flagship? I guess we'll find out in several months.


All we have right now is what AMD has said about RDNA2, plus various rumors and speculation. Their writing style seems optimistic, not that they actually said "we are optimistic". lol,
I do agree on some of your points. I think RDNA2 will be faster than the 2080 Ti; the question is by how much? SemiAccurate sources say AMD is aiming to compete with Ampere, not necessarily with the 2000 series. We are probably about 9 to 10 months away from both RDNA2 & Ampere. Some paid subscriber at SemiAccurate took a picture of this. If there is one site that I would trust, it's Charlie from SemiAccurate.

Oh, and this was back in November 2019 I think, so it may no longer apply. Not really sure, but I thought I'd post it.


----------



## ARF (Mar 20, 2020)

Super XP said:


> All we have right now is what AMD has said about RDNA2, plus various rumors and speculation. Their writing style seems optimistic, not that they actually said "we are optimistic". lol,
> I do agree on some of your points. I think RDNA2 will be faster than the 2080 Ti; the question is by how much? SemiAccurate sources say AMD is aiming to compete with Ampere, not necessarily with the 2000 series. We are probably about 9 to 10 months away from both RDNA2 & Ampere. Some paid subscriber at SemiAccurate took a picture of this. If there is one site that I would trust, it's Charlie from SemiAccurate.
> View attachment 148619
> Oh, and this was back in November 2019 I think, so it may no longer apply. Not really sure, but I thought I'd post it.



Didn't AMD say "later this year", not "late this year" ?



> At Financial Analyst Day, AMD talked about future products and about the so-called RDNA 2 architecture. Information was modest, but we learned that *video cards with a big Navi chip would be released later this year.*








AMD has revealed new product launch plans for Big Navi in late 2020 (engnews24h.com): At Financial Analyst Day, AMD talked about future products and about the so-called RDNA 2 architecture. Information was modest, but we learned that video

News - AMD Says its Upcoming RDNA 2 and Navi 2x Will Boost Performance per Watt by 50% (forums.tomshardware.com): AMD plans to launch its RDNA 2 architecture and Navi 2x GPUs by the end of 2020, bringing ray tracing support and up to 50% more performance per Watt to its consumer graphics products.


----------



## Super XP (Mar 21, 2020)

ARF said:


> *Didn't AMD say "later this year", not "late this year" ?*


I must have gotten those terms confused. "Later this year" sounds earlier than "late this year", which suggests end of year, Nov/Dec 2020.


----------



## Midland Dog (Mar 21, 2020)

oxrufiioxo said:


> Yeah, I figured this slide would confuse people even though I don't think that was the intention..... They clearly stated 50% more performance per watt in the live stream.


its die codenames guys, navi 10 navi 14 navi 12, navi 20 21 etc


----------



## ARF (Mar 21, 2020)

Midland Dog said:


> its die codenames guys, navi 10 navi 14 navi 12, navi 20 21 etc



Navi 12 probably doesn't even exist, or maybe it did exist but was later renamed to Navi 10 while the real Navi 10 got cancelled.

With Navi 2* there should be more chips, so that the whole range from bottom to top gets covered with the recent DX12.2 feature level.

For example:

Navi 21 - Radeon RXI 6900 XT and Radeon RXI 6900
Navi 23 - Radeon RXI 6800 XT and Radeon RXI 6800
Navi 24 - Radeon RXI 6500 XT and Radeon RXI 6500

so that
Navi 10 gets rebranded and relegated to the entry - low-end market like Radeon RXI 6300 XT
Navi 14 gets rebranded and relegated to the entry - low-end market like Radeon RXI 6100 XT.

Just wishful thinking but let's hope AMD has finally got some sense in naming its products.

And of course, no more Polaris and Vega, we are already tired of them!


----------



## Valantar (Mar 21, 2020)

ARF said:


> Navi 12 probably doesn't even exist, or maybe it did exist but was later renamed to Navi 10 while the real Navi 10 got cancelled.
> 
> With Navi 2* there should be more chips, so that the whole range from bottom to top gets covered with recent DX12.2 feature level.
> 
> ...


RX*I*?

Also there's no way on earth a 250mm2 215W GPU (or even a cut down version) hits x300 naming in its second outing. My money would be on Navi 14 possibly surviving in the low end x300 range, but otherwise I think we'll see a wholesale move to RDNA 2 to ensure feature parity with consoles and to make use of the efficiency gains of the updated arch - though it might be 6+ months from the launch of the high end and upper midrange cards to the rest of the series being filled out.


----------



## EarthDog (Mar 21, 2020)

Super XP said:


> We are probably about 9 to 10 months away from both RDNA2 & Ampere.


New AMD should be out in late Q3 (4-5 months), with Nvidia to follow. Initial launches will happen well before the end of the calendar year; 9+ months puts that into 2021.



Midland Dog said:


> its die codenames guys, navi 10 navi 14 navi 12, navi 20 21 etc


That was corrected 200 posts ago.


----------



## R0H1T (Mar 21, 2020)

This thread doesn't have 200 pages


----------



## EarthDog (Mar 21, 2020)

R0H1T said:


> This thread doesn't have 200 pages


I said POSTS...   



EarthDog said:


> That was corrected 200 posts ago.


----------



## Super XP (Mar 21, 2020)

EarthDog said:


> New AMD should be out in late Q3 (4-5 months), with Nvidia to follow. Initial launches will happen well before the end of the calendar year; 9+ months puts that into 2021.
> 
> That was corrected 200 posts ago.


I'm factoring in the COVID-19 disaster. Hopefully all works out well, the world goes back to normal, and both companies release their new GPUs before Christmas 2020.


----------



## ARF (Mar 21, 2020)

Valantar said:


> RX*I*?
> 
> Also there's no way on earth a 250mm2 215W GPU (or even a cut down version) hits x300 naming in its second outing. My money would be on Navi 14 possibly surviving in the low end x300 range, but otherwise I think we'll see a wholesale move to RDNA 2 to ensure feature parity with consoles and to make use of the efficiency gains of the updated arch - though it might be 6+ months from the launch of the high end and upper midrange cards to the rest of the series being filled out.



XI is 11. X is 10; before that was 9 - you know: R9, R10, R11.

RDNA 1 doesn't support Variable Rate Shading, ray tracing, DX12 FL 12.2, etc., so yes, normally it should be lower in the product stack.


----------



## EarthDog (Mar 21, 2020)

Super XP said:


> I'm factoring in the COVID-19 disaster. Hopefully all works out well, the world goes back to normal and both companies release it's new GPUs before Christmas 2020.


Oh... good to know. Consider saying your complete thoughts in a post. You'll have less follow-up that way, lol.


----------



## ARF (Mar 21, 2020)

Actually, it goes like this:

Radeon X700,
Radeon X800,
Radeon X1800,
Radeon X1900,
Radeon HD 2000,
Radeon HD 3000,
Radeon HD 4000,
Radeon HD 5000,
Radeon HD 6000,
Radeon HD 7000,
(Radeon HD 8000 ?)
Radeon R9 200, (from 7/8000 to 200 ?)
Radeon R9 300,
Radeon RX 400,
Radeon RX 500,
Radeon RX 5000,

and now what? RX 6000 or? RXI 6000?

Or RX 1280?
1 meaning 11, 2 being a random number denoting the generation, and 80 the performance tier? 


X0.
X1.
HD2.
HD3.
HD4.
HD5.
HD6.
HD7.
HD8.
R9.2.
R9.3.
RX.4.
RX.5.
RX.50.


----------



## Valantar (Mar 21, 2020)

ARF said:


> XI is 11. X is 10, previous was 9, you know R9, R10, R11.
> 
> RDNA 1 doesn't support Variable Rate Shading, Ray-tracing, DX 12 FL 12.2, etc, so yes, normally it should be lower in the product stack.





ARF said:


> Actually, it goes like this:
> 
> Radeon X700,
> Radeon X800,
> ...


No. The previous series was Rx with x ranging from 3 to 9 to indicate the tier of a GPU within its generation. I.e. R3 was entry level, R5 was lower midrange, R7 was upper midrange, and R9 was high-end. RX does not mean R-ten, but is an entirely new naming scheme that dropped the numbered tiers, seeing how this is all indicated in the model number anyhow (lower number = lower tier). If one GPU is called x300 and one is called x500, it doesn't require a numbered prefix to show that the latter is higher up the performance ladder, after all.

As you bring up a long history of naming it's also worth pointing out that - as your list clearly shows! - there have been several such shifts in naming. The Xxxx series followed after ATI ran out of numbers from their 9xxx series. In that case, X actually did mean 10, but also represented a break in naming that was then abandoned after just a single refresh, with the HD series then taking over. Roman numerals have not been seen here since (probably due to mixing two numbering systems in the same name being a terrible idea). The Rx series then took over as the Radeon HD naming was again nearing running out of numbers, and following the move from VLIW5 to GCN - making a significant change in naming make sense.

Thirdly, when they've used RX (for the entire lineup) for _three whole generations_ _across two architectures_ why would they now suddenly say "oh, this meant ten the whole time, and now we're going to start climbing the ladder of roman numerals"? Sorry, but that isn't happening. Either the new series is RX 6xxx, or they move to something new entirely to indicate that RDNA 2 is something new again.


As for RDNA (1) belonging lower in the product stack due to missing features, this is true, but it makes no sense whatsoever for a relatively large 251mm2 die. That is not a cheap die to produce, and selling an x300 tier GPU above $100 is near impossible. As such, Navi 10 is highly likely to be discontinued (partly to free up 7nm capacity I would guess) and replaced by new chips. Even accounting for the several million dollar cost of taping out a new die it makes very little sense to keep Navi 10 around when RDNA 2 is supposed to be more efficient, clock higher, and comparably dense in terms of CUs while also adding a lot of new features. There's very little chance there won't be a ~250mm2 RDNA 2 die coming (as that's where most sales tend to happen), and keeping two such dice in production at the same time doesn't make sense economically. Navi 14 is small enough that it might be kept around for a while longer as it is thus much cheaper to make and can be sold at sufficiently low prices to make sense in a product tier like that.


----------



## ARF (Mar 21, 2020)

Valantar said:


> No. The previous series was Rx with x ranging from 3 to 9 to indicate the tier of a GPU within its generation. I.e. R3 was entry level, R5 was lower midrange, R7 was upper midrange, and R9 was high-end. RX does not mean R-ten, but is an entirely new naming scheme that dropped the numbered tiers, seeing how this is all indicated in the model number anyhow (lower number = lower tier). If one GPU is called x300 and one is called x500, it doesn't require a numbered prefix to show that the latter is higher up the performance ladder, after all.
> 
> As you bring up a long history of naming, it's also worth pointing out that - as your list clearly shows! - there have been several such shifts. The Xxxx series followed after ATI ran out of numbers in their 9xxx series. In that case, X actually did mean 10, but it also represented a break in naming that was abandoned after just a single refresh, with the HD series then taking over. Roman numerals have not been seen here since (probably because mixing two numbering systems in the same name is a terrible idea). The Rx series then took over as the Radeon HD naming was again nearing the end of its numbers, and it followed the move from VLIW5 to GCN - which made a significant naming change sensible.
> 
> ...



R3, R5, R7 were mobile or integrated graphics, no?
I can't remember any desktop except R7...

Let's just stick with desktop models.

Either RX 6000, RXI 6000 or something like RX 1290/1280.

X0.
X1.
HD2.
HD3.
HD4.
HD5.
HD6.
HD7.
HD8.
R9.2.
R9.3.
RX.4.
RX.5.
RX.50.


----------



## Super XP (Mar 21, 2020)

ARF said:


> R3, R5, R7 were mobile or integrated graphics, no?
> I can't remember any desktop except R7...
> 
> Let's just stick with desktop models.
> ...


What the heck is a RXI? Where did the 'I' come from? Lol


----------



## ARF (Mar 21, 2020)

Super XP said:


> What the heck is a RXI? Where did the 'I' come from? Lol



Why do we hate the "I" ?  I find it quite sexy!


----------



## Valantar (Mar 21, 2020)

ARF said:


> R3, R5, R7 were mobile or integrated graphics, no?
> I can't remember any desktop except R7...
> 
> Let's just stick with desktop models.
> ...


No, this is entirely wrong.

Radeon R7 360.
Radeon R7 350.
Radeon R5 230.
Etc.


R3 was AFAIK only used for integrated graphics both on desktop and mobile. Beyond that the lower tiers weren't seen all that much in retail/media coverage simply due to AMD being at a significant economic disadvantage at the time and that most of these GPUs were rebrands/refreshes of previous cards with new names for OEMs to use. Just go to the TPU GPU database and search for "Radeon R[3/5/7/9]" and have a look for yourself. You are only looking at high-tier models, which doesn't show the whole picture of the naming scheme by any means.


----------



## ARF (Mar 21, 2020)

Valantar said:


> No, this is entirely wrong.
> 
> Radeon R7 360.
> Radeon R7 350.
> ...



Ok, now I see that R5 230 was a direct rebrand of HD 8450.

Then AMD went from HD 8000 to Rx 200.


----------



## Valantar (Mar 21, 2020)

ARF said:


> Why do we hate the "I" ?  I find it quite sexy!


It looks absolutely terrible, and changes the prefix from a series prefix (i.e. a name) to somehow being a count of something. Either it's counting the generation of the GPU (which the model number already does, i.e. it's redundant and only confusing - why is 11 (XI) the same as 6(xxx)?) or it's counting something else entirely, in which case the question becomes _what_?

RX is an equivalent of GTX for Nvidia, which is currently simply the prefix to what all their gaming-focused GPUs are named (there is also the entry-level, mobile-only MX series). RTX is then of course an extension of this - a gaming card with *R*ay tracing support. Or do you think Nvidia's X also stands for 10?


----------



## ARF (Mar 21, 2020)

Valantar said:


> It looks absolutely terrible, and changes the prefix from a series prefix (i.e. a name) to somehow being a count of something. Either it's counting the generation of the GPU (which the model number already does, i.e. it's redundant and only confusing - why is 11 (XI) the same as 6(xxx)?) or it's counting something else entirely, in which case the question becomes _what_?
> 
> RX is an equivalent of GTX for Nvidia, which is currently simply the prefix to what all their gaming-focused GPUs are named (there is also the entry-level, mobile-only MX series). RTX is then of course an extension of this - a gaming card with *R*ay tracing support. Or do you think Nvidia's X also stands for 10?



I think the X comes from eXtreme.

Previously, they had GT and GTS which were lower than GTX.

AMD also needs something in the name to indicate ray-tracing support.

The "I" in RXI can come from Intersection.






----------



## Valantar (Mar 21, 2020)

ARF said:


> Ok, now I see that R5 230 was a direct rebrand of HD 8450.
> 
> Then AMD went from HD 8000 to Rx 200.


The HD 8000 series was pretty much OEM only. OEMs sadly tend to demand new product names each year regardless of whether there are new products available, which has led to a lot of silly midrange-to-low-end rebrands for both Nvidia and AMD across the years. Regardless, there have always been lower-tier R7 and R5 cards to the higher end, consumer-facing R9 cards.


ARF said:


> I think the X comes from eXtreme.
> 
> Previously, they had GT and GTS which were lower than GTX.
> 
> ...


_Intersection_? Seriously? How is the average uninformed GPU buyer supposed to understand anything at all from that? At least _R_ (for Nvidia) is the first letter in the actual feature it seeks to describe. "RXI" is a _terrible_ idea. Period.

And you're right about X - in most product naming! - coming from extreme. That's likely why AMD fell back to it as well, as the letter X has become a sort of shorthand (much ridiculed, but still) for something cool/good/performant.


----------



## ARF (Mar 21, 2020)

Valantar said:


> The HD 8000 series was pretty much OEM only. OEMs sadly tend to demand new product names each year regardless of whether there are new products available, which has led to a lot of silly midrange-to-low-end rebrands for both Nvidia and AMD across the years. Regardless, there have always been lower-tier R7 and R5 cards to the higher end, consumer-facing R9 cards.
> 
> _Intersection_? Seriously? How is the average uninformed GPU buyer supposed to understand anything at all from that? At least _R_ is the first letter in the actual feature it seeks to describe. "RXI" is a _terrible_ idea. Period.
> 
> And you're right about X - in most product naming! - coming from extreme. That's likely why AMD fell back to it as well, as the letter X has become a sort of shorthand (much ridiculed, but still) for something cool/good/performant.



The "I" can also invoke memories of Intel's "i" series CPUs and the customers may say "oh cool, this is I like iPhone and i7" ....." cool, man  "


----------



## Valantar (Mar 21, 2020)

ARF said:


> The "I" can also invoke memories of Intel's "i" series CPUs and the customers may say "oh cool, this is I like iPhone and i7" ....." cool, man  "


Except that's a lowercase, single-letter prefix directly attached to a number. Does "RXI 6700" look like "i7-6700" to you? It sure doesn't to me. And besides, why would AMD try to sell products based on the naming of Intel CPUs when their own CPUs are kicking Intel's butt these days to such a degree that even people who don't care about PC hardware are picking up on it? Oh, and Apple got so much flack for their "it's pronounced iPhone _ten_, but we write it X" nonsense that anyone ought to understand that suddenly mixing in roman numerals into a series of non-roman numerals is a really bad and confusing idea. AFAIK most people still call it the iPhone "ex" (and even "ex eye" and especially "ex arr").


It's also worth mentioning that it's likely that part of why AMD abandoned the Rx naming was that they had long since seen that explicitly naming tiers like that has a detrimental effect on sales and marketing (you're very clearly telling your buyers that "this product is worse than something else", which isn't a good way of making people happy with their purchase), as Nvidia also saw, and thus moved to all-over GTX branding. That's on top of it being redundant, of course. When your naming scheme consists of R[adeon][numbered tier] [space] [generation][numbered tier _again?_][0/5 if there's a new card/refresh] [X or no X, depending on whether there's room for a higher number], it doesn't take much brain power to tell that this scheme needs simplification. RX (named prefix, like "HD") [generation][three digits indicating performance level] [XT for higher-end SKUs] is quite a lot simpler.
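To make the comparison concrete, here's a quick parsing sketch of the two schemes. The regex patterns and example card names are purely illustrative, nothing official:

```python
import re

# Old scheme: R<tier> <generation><tier again><0/5 refresh digit><optional X>
# e.g. "R9 290X" -> tier 9, generation 2, tier repeated as 9, refresh 0, X
OLD = re.compile(r"R(?P<tier>[3579]) (?P<gen>\d)(?P<tier2>\d)(?P<refresh>[05])(?P<x>X?)$")

# New scheme: RX <generation><3-digit performance level><optional XT suffix>
# e.g. "RX 5700 XT" -> generation 5, level 700, XT
NEW = re.compile(r"RX (?P<gen>\d)(?P<level>\d{3})(?: (?P<suffix>XT))?$")

old = OLD.match("R9 290X")
print(old.group("tier"), old.group("tier2"))  # the tier shows up twice: 9 9

new = NEW.match("RX 5700 XT")
print(new.group("gen"), new.group("level"), new.group("suffix"))  # 5 700 XT
```

Note how the old pattern needs a field for the tier in two places, which is exactly the redundancy described above.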


----------



## ARF (Mar 21, 2020)

Valantar said:


> Except that's a lowercase, single-letter prefix directly attached to a number. Does "RXI 6700" look like "i7-6700" to you? It sure doesn't to me. And besides, why would AMD try to sell products based on the naming of Intel CPUs when their own CPUs are kicking Intel's butt these days to such a degree that even people who don't care about PC hardware are picking up on it? Oh, and Apple got so much flack for their "it's pronounced iPhone _ten_, but we write it X" nonsense that anyone ought to understand that suddenly mixing in roman numerals into a series of non-roman numerals is a really bad and confusing idea. AFAIK most people still call it the iPhone "ex" (and even "ex eye" and especially "ex arr").
> 
> 
> It's also worth mentioning that it's likely that part of why AMD abandoned the Rx naming was that they had long since seen that explicitly naming tiers like that has a detrimental effect on sales and marketing (you're very clearly telling your buyers that "this product is worse than something else", which isn't a good way of making people happy with their purchase), as Nvidia also saw, and thus moved to all-over GTX branding. That's on top of it being redundant, of course. When your naming scheme consists of R[adeon][numbered tier] [space] [generation][numbered tier _again?_][0/5 if there's a new card/refresh] [X or no X, depending on whether there's room for a higher number], it doesn't take much brain power to tell that this scheme needs simplification. RX (named prefix, like "HD") [generation][three digits indicating performance level] [XT for higher-end SKUs] is quite a lot simpler.



I agree that it is simpler.
But AMD's graphics division is not doing well, and a new image might help.

For example, Navi 21 could be called Radeon iRT 900.
Navi 23 could be called Radeon iRT 700.
The "i" is just a cool letter, while RT comes from ray-tracing.


----------



## Valantar (Mar 21, 2020)

ARF said:


> I agree that it is simpler.
> But AMD's graphics division is not doing well, and a new image might help.
> 
> For example, Navi 21 could be called Radeon iRT 900.
> ...


But "i" still has _zero_ relation to AMD's brand, and is already strongly related to both one of AMD's main competitors (Intel) and possibly the biggest brand on earth regardless of business (Apple). AMD adopting that would then just make them look like they're copying others to look cool, which will inevitably backfire. That's the type of marketing clueless rebranding shops do, not serious businesses, and the _only_ reaction from the press if they did so would be to ask "_...but why?_". Of course RX is already very close to Nvidia's RTX, but at least in that case AMD had used the name for several years before Nvidia started using theirs, and Nvidia had a reasonable reason to switch.

As for replacing "RX" with "RT" ... why? That would suddenly make it AMD that's going after Nvidia's naming scheme rather than the other way around (which is a bad look, especially for an underdog), and they wouldn't gain much. While Nvidia's reason to switch from GTX to RTX was that they were adding RTRT and keeping both series alive, that is looking to be a single-generation thing - I don't think the GTX 16 series is getting a follow-up. For AMD to change their naming due to adding RTRT would then necessitate a wholesale name change for all RDNA 2-based cards, in which case "RT" or "iRT" would both be very poor choices (again, due to the resemblance to competitors' and other large brands' naming). Continuing with the established and relatively respected Radeon RX branding makes much more sense.


----------



## ARF (Mar 21, 2020)

Valantar said:


> But "i" still has _zero_ relation to AMD's brand, and is already strongly related to both one of AMD's main competitors (Intel) and possibly the biggest brand on earth regardless of business (Apple). AMD adopting that would then just make them look like they're copying others to look cool, which will inevitably backfire. That's the type of marketing clueless rebranding shops do, not serious businesses, and the _only_ reaction from the press if they did so would be to ask "_...but why?_". Of course RX is already very close to Nvidia's RTX, but at least in that case AMD had used the name for several years before Nvidia started using theirs, and Nvidia had a reasonable reason to switch.
> 
> As for replacing "RX" with "RT" ... why? That would suddenly make it AMD that's going after Nvidia's naming scheme rather than the other way around (which is a bad look, especially for an underdog), and they wouldn't gain much. While Nvidia's reason to switch from GTX to RTX was that they were adding RTRT and keeping both series alive, that is looking to be a single-generation thing - I don't think the GTX 16 series is getting a follow-up. For AMD to change their naming due to adding RTRT would then necessitate a wholesale name change for all RDNA 2-based cards, in which case "RT" or "iRT" would both be very poor choices (again, due to the resemblance to competitors' and other large brands' naming). Continuing with the established and relatively respected Radeon RX branding makes much more sense.



Or maybe it's the best to allow the users to vote about how they want their cards to be christened?
Nvidia christened its graphics cards GeForce because the people said so.

As for AMD being backfired, they are backfired from the very start to begin with, have always been very bad in everything.
Look at one example - why is it reporting 6-bit colour when Radeon Settings is installed and 8-bit when it's uninstalled?


----------



## Valantar (Mar 21, 2020)

ARF said:


> Or maybe it's the best to allow the users to vote about how they want their cards to be christened?
> Nvidia christened its graphics cards GeForce because the people said so.
> 
> As for AMD being backfired, they are backfired from the very start to begin with, have always been very bad in everything.
> ...


Why hold a naming contest? Radeon is a well established and respected brand name. Nobody ever suggested "GT/GTS/GTX (and now RTX)" in a naming contest. Besides, that was in 1999.

Also, you would do well to look up the meaning of the word "backfire" and how it's used, as you can't say that someone/something "is backfired". That something backfires means that it has the opposite (or at least a very different) effect than what was intended. As for AMD's drivers being buggy, apparently YMMV there, as I have yet to have any serious issues across quite a few generations of AMD GPUs. I might be lucky, but serious bugs seem limited to an understandably vocal minority - but still a minority.


----------



## Dyatlov A (Mar 28, 2020)

Will it support Windows 7?


----------



## ARF (Mar 29, 2020)

Prayers for July-September 2020 release.

*AMD Big Navi Will Be 50% Faster Than RTX 2080 Ti According To Latest Leaks*


















"Long-awaited and hotly discussed AMD Big Navi GPU is still not out yet to beat the competition this year. Last strong leaks about Big Navi Graphics chip was..." (ownsnap.com)


----------



## EarthDog (Mar 29, 2020)

ARF said:


> *AMD Big Navi Will Be 50% Faster Than RTX 2080 Ti According To Latest Leaks*


If that is true... huuuuuuuuuuge. But not holding my breath.

Clicked play... heard it was Adored... stopped immediately.


----------



## Master Tom (Mar 29, 2020)

Dyatlov A said:


> Will it support Windows 7?



What about Windows 3.1?


----------



## WeeRab (Apr 1, 2020)

_larry said:


> I'm just glad AMD is getting their $hit together GPU wise again finally. They have already done VERY well with their CPUs, now if they can get closer to what Nvidia delivers, it's gonna be another game changer. (Pun intended)
> 
> When the R9's came out I was stoked. I still have my R9 290 from 2013 and it still can handle most games at 1440p with some settings turned down. I was very disappointed with the Polaris architecture. All they did was make them more power efficient with the same performance as my 290. Hell, my 290 still beats the RX580 in some benchmarks... I am looking forward to getting a 5700XT when the new cards drop though


 Oh I don't know. Their current mid-tier cards stack up well value-wise against Nvidia. I recently bought an RX 5700 for less than £300. Best bang-for-buck, period.


----------



## Master Tom (Apr 1, 2020)

I am interested in Big Navi.


----------



## ECC_is_best (Apr 5, 2020)

Using one of these can we assign a GPU per VM?


----------



## Valantar (Apr 6, 2020)

ECC_is_best said:


> Using one of these can we assign a GPU per VM?


I guess we'll know when the cards arrive in... 6-8 months?


----------



## ECC_is_best (Apr 6, 2020)

Valantar said:


> I guess we'll know when the cards arrive in... 6-8 months?



Well I figure their specs would say what it supports... But I'm not sure what the current AMD-branded technology is for this... DirectGMA?


----------



## ARF (Apr 9, 2020)

"Added preliminary support for upcoming Navi 21 and Navi 22" - in HWiNFO


----------



## ARF (Apr 10, 2020)

New data and educated expectations:

*AMD Big Navi and RDNA 2 GPUs: Release Date, Specs, Everything We Know*








"The AMD Big Navi / RDNA 2 architecture powers the latest consoles and high-end graphics cards." (www.tomshardware.com)








According to a Chinese source, one card will be 80% faster than the RX 5700 XT.








Big Navi would bring a huge increase in revenue for AMD: "We have news from China: the improvement in performance that will be offered by 'Big Navi' will be quite big. Inside we explain the details." (optocrypto.com)
				












7nm Navi acceptance is higher than Vega; high-end "Big Navi" will be the trump card, with an 80% performance uplift: "Counting from its debut alongside the Ryzen 3000-series CPUs on July 7 last year, AMD's 7nm Navi cards have been out for nine months, currently covering the RX 5700/5600/5500 series in the 1,000-3,000 yuan market range. Thanks to the 7nm process and the all-new RDNA architecture, the RX 5..." (news.mydrivers.com)


----------



## ARF (Apr 28, 2020)

Exclusive:












AMD's Big Navi 7nm GPU Flagship Allegedly Features 505mm² Large Die And RDNA2 - Suggests 2x The Performance Of The RX 5700XT: "Details about AMD's upcoming 7nm 'Big Navi' GPUs have leaked out. The Navi 21 flagship will feature a GPU with 505mm² die size." (wccftech.com)


----------



## EarthDog (Apr 28, 2020)

Dew et, AMD... Dew et!


----------



## ARF (Apr 29, 2020)

More news today:

High-end Navi 21 will support ray-tracing, while lower-grade Navi 23 won't and will target GTX 1600 series.









AMD Raytracing Allegedly Exclusive To High-End RDNA 2 Navi 2X GPUs, Low-End RDNA 2 GPUs Focus on Power Efficiency And Compete Against Turing GeForce 16 Series: "A rumor has stated that AMD might keep its hw-level raytracing feature exclusive to high-end RDNA 2 GPU based Radeon RX Navi 2X cards." (wccftech.com)


----------



## Master Tom (Apr 29, 2020)

ARF said:


> More news today:
> High-end Navi 21 will support ray-tracing, while lower-grade Navi 23 won't and will target GTX 1600 series.


That sounds like a good plan.


----------



## EarthDog (Apr 29, 2020)

ARF said:


> More news today:
> 
> High-end Navi 21 will support ray-tracing, while lower-grade Navi 23 won't and will target GTX 1600 series.
> 
> ...


Seems familiar and logical...expected, even.


----------



## Valantar (Apr 29, 2020)

ARF said:


> More news today:
> 
> High-end Navi 21 will support ray-tracing, while lower-grade Navi 23 won't and will target GTX 1600 series.
> 
> ...


I read that, and while I agree with @Master Tom above that it sounds like a good plan, I think WCCFTech's interpretation of it - that only Navi 21 is likely to get RTRT - sounds unlikely. After all, we know that the XSX supports RTRT on its ~350mm2 die (which includes an 8-core CPU cluster), as does the PS5 on its even smaller die. Why, then, would a 240mm2 Navi 23 not include RTRT hardware? WCCFTech seems to assume "enthusiast and flagship" would mean "only Navi 21 gets RTRT", while I would say it sounds more likely that only Navi 21-23 get RTRT (given that the reported die sizes are in the right ballpark) with anything smaller not getting it. After all, the 251mm2 Navi 10 competes with the RTX 2060 Super or 2070 (and is close to the 2070 Super in some cases), so their framing (non-RTRT RDNA 2 cards competing with the GTX 1600 series, yet only Navi 21 gets RTRT?) doesn't make sense when comparing with current performance levels, let alone what we can expect from RDNA 2.


----------



## ARF (May 5, 2020)

Yeah, wccftech seems to take the exact "translation" from https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html



> AMD will also follow Nvidia's Turing-era strategy, supporting hardware ray tracing only in higher-end products; lower-end products will drop ray tracing but keep the high-efficiency RDNA 2 architecture design, with performance and pricing expected to compete with Nvidia's previously released GTX 16 series





Navi 23 with its 52 CUs at https://www.techpowerup.com/gpu-specs/amd-navi-23.g926 exactly matches the specs of the iGPU of the new Xbox.

A 50% performance-per-watt improvement would position it somewhere between the RTX 2080 and RTX 2080 Ti.

Only Navi 10 and Navi 14 will be left to compete with non-RTX Nvidia cards - Navi 10 would beat GTX 1660 series, while Navi 14 will continue to compete with GTX 1650 series and below.
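For what it's worth, the back-of-the-envelope arithmetic behind that kind of positioning can be sketched like this. It assumes performance scales linearly with power at a fixed perf/watt level, which is a big simplification, and the baseline figures are rough illustrative numbers, not measurements:

```python
# Sketch of perf-per-watt scaling arithmetic (illustrative numbers only).
# Baseline: RX 5700 XT normalised to 1.0 performance at an assumed ~225 W.

def relative_perf(power_w, ppw_gain, base_power_w=225.0):
    """Performance relative to the baseline card, assuming performance
    scales linearly with power at a fixed perf/watt level."""
    return (power_w / base_power_w) * ppw_gain

# Same 225 W budget with RDNA 2's claimed +50% perf/watt:
print(relative_perf(225, 1.5))   # 1.5x the RX 5700 XT
# A hypothetical 150 W part with the same uplift:
print(relative_perf(150, 1.5))   # ~1.0x the RX 5700 XT
```

On this simplified model, a 150 W part with a +50% perf/watt uplift only matches the 225 W baseline card, so 2080 Ti-class performance at 150 W would require more than the perf/watt gain alone.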


----------



## ARF (Jun 11, 2020)

Rumour/leak has it:


----------



## R0H1T (Jun 11, 2020)

*400W* TDP, yeah *BS*


----------



## ARF (Jun 11, 2020)

R0H1T said:


> *400W* TDP



A true halo, enthusiast part. Even the Threadripper 3990X, a single CPU, alone draws 280 watts.

This will have GDDR6, not HBM, so a high power budget is welcome news for performance.


----------



## Space Lynx (Jun 11, 2020)

im ok with 400w as long as it beats a 2080 ti by 15-20% and costs around $799


----------



## Super XP (Jun 11, 2020)

lynx29 said:


> im ok with 400w as long as it beats a 2080 ti by 15-20% and costs around $799


400W? No thanks, regardless on who releases such a thing.


----------



## ARF (Jun 11, 2020)

Super XP said:


> 400W? No thanks, regardless on who releases such a thing.



For you there will be Navi 23 at 150 watts, still as fast as an RTX 2080 Ti


----------



## Deleted member 67555 (Jun 11, 2020)

I have a gfx that reaches 300w and I have a hard time keeping it below 80c.
I would not want a 400w card. I just wouldn't.


----------



## Super XP (Jun 11, 2020)

ARF said:


> For you there will be Navi 23 at 150 watts, still as fast as an RTX 2080 Ti






jmcslob said:


> I have a gfx that reaches 300w and I have a hard time keeping it below 80c.
> I would not want a 400w card. I just wouldn't.


Me neither.


----------



## Space Lynx (Jun 11, 2020)

jmcslob said:


> I have a gfx that reaches 300w and I have a hard time keeping it below 80c.
> I would not want a 400w card. I just wouldn't.



I mean the only way I would want it is if they invested heavily in a stock cooler that keeps it at 70 Celsius in most situations, but yeah, I agree with you otherwise

this 2 fan vaporchamber leaked design of the rtx 3080 looks very interesting for example, i bet it could cool 400 watts. all those fins


----------



## Super XP (Jun 11, 2020)

lynx29 said:


> I mean the only way I would want it is if they invested heavily in a stock cooler that keeps it at 70 Celsius in most situations, but yeah, I agree with you otherwise
> 
> this 2 fan vaporchamber leaked design of the rtx 3080 looks very interesting for example, i bet it could cool 400 watts. all those fins


For new modern technology, that's too much wattage. Which tells me that, if true, Nvidia has nothing new and is taking a slightly revamped Turing, pumping up the juice, and renaming it 3080.


----------



## Valantar (Jun 11, 2020)

400W can't be cooled effectively in a PCIe form factor in >99% of cases on the market. It simply isn't feasible. Water cooling with a 240mm radiator as stock _might_ work, but that still limits compatibility to a) cases that can fit one, and b) users who don't already have that space taken by their CPU cooling. If a 360mm rad is needed, it is DOA.

So no, this is BS. 400W isn't happening.


----------



## ARF (Jun 11, 2020)

It has been done previously.









No technical problem or difficulty. Price and performance will be the decisive factors.












AMD Radeon R9 295X2 8 GB Review: "Today, AMD is launching their Radeon R9 295X2, a dual-GPU card based on two fully unlocked, fully clocked Hawaii graphics processors. As a result, the card delivers impressive numbers in 4K and EyeFinity. But with a price of $1500, it is certainly not cheap, no matter how you look at it." (www.techpowerup.com)


----------



## R0H1T (Jun 11, 2020)

Those were dual GPUs, you should know better than to spread such egregious rumors. Slow rumor day for you?


----------



## ARF (Jun 11, 2020)

R0H1T said:


> Those were dual GPUs, you should know better than to spread such egregious rumors. Slow rumor day for you?




And so what? The power-delivery circuitry obviously doesn't care how many chips you've got on the PCB


----------



## moproblems99 (Jun 11, 2020)

lynx29 said:


> im ok with 400w as long as it beats a 2080 ti by 15-20% and costs around $799



It better double the performance for nearly double the watts and 2 years later.


----------



## R0H1T (Jun 11, 2020)

Double as compared to what exactly?


----------



## moproblems99 (Jun 11, 2020)

R0H1T said:


> Those were dual GPUs, you should know better than to spread such egregious rumors. Slow rumor day for you?



Frankly, likely bullshit GPU rumors are still better than real news and reality this year.



R0H1T said:


> Double as compared to what exactly?



See quoted post.


----------



## ARF (Jun 11, 2020)

moproblems99 said:


> See quoted post.



Really? It would be great if it doubled the performance of the RX 5700 XT at 1080p.


----------



## moproblems99 (Jun 11, 2020)

ARF said:


> Really ? It will be great if double the performance of the RX 5700 XT at 1080p.



How would that be impressive? It would basically be two 5700 XTs, complete with double the power draw. That is not impressive. 400 W is a disappointment by any metric unless it has double the performance of the 2080 Ti.


----------



## ARF (Jun 11, 2020)

moproblems99 said:


> How would that be impressive?



Are you serious?












AMD Radeon RX 5700 XT Review: "The AMD Radeon RX 5700 XT is based on AMD's all-new Navi 10 GPU featuring the RDNA architecture. We thoroughly test the card's gaming performance and look at power, heat, noise, overclocking, and clock frequency stability, too, sometimes with surprising results." (www.techpowerup.com)


----------



## Master Tom (Jun 11, 2020)

I have the Radeon Vega 64 Liquid Cooling. It does not even reach 60°C.



Super XP said:


> 400W? No thanks, regardless on who releases such a thing.


How much does 1 kWh cost in Greece?


----------



## moproblems99 (Jun 11, 2020)

ARF said:


> Are you serious?
> 
> View attachment 158639
> 
> ...



Yes.

If you give me double the wattage then at a minimum I am expecting double the performance.  To be impressed, you have to give me double the performance at significantly less than double the wattage or double the wattage and significantly more than double the performance.  Besides, if they (AMD) achieve double performance of a 5700XT, then they will be at roughly the expected performance of Ampere...but.....at significantly more power usage.  I, for one, don't want to dump 400watts of heat into my office that is already 28C because it is summer.  I don't care about the power draw itself, I care about working/gaming in a sauna.


----------



## ARF (Jun 11, 2020)

moproblems99 said:


> Yes.
> 
> If you give me double the wattage then at a minimum I am expecting double the performance.  To be impressed, you have to give me double the performance at significantly less than double the wattage or double the wattage and significantly more than double the performance.  Besides, if they (AMD) achieve double performance of a 5700XT, then they will be at roughly the expected performance of Ampere...but.....at significantly more power usage.  I, for one, don't want to dump 400watts of heat into my office that is already 28C because it is summer.  I don't care about the power draw itself, I care about working/gaming in a sauna.




The RTX 2080 Ti is a 300-watt part. So at 400 watts you would expect 33% higher performance, not double.


----------



## Master Tom (Jun 11, 2020)

moproblems99 said:


> Besides, if they (AMD) achieve double performance of a 5700XT, then they will be at roughly the expected performance of Ampere...but.....at significantly more power usage.  I, for one, don't want to dump 400watts of heat into my office that is already 28C because it is summer.  I don't care about the power draw itself, I care about working/gaming in a sauna.


In Germany it is so hot in summer that you need air conditioning anyway. In winter the i9-9900K and the Radeon Vega 64 Liquid Cooling help heat the room  
In which country do you live?


----------



## Caring1 (Jun 11, 2020)

lynx29 said:


> im ok with 400w as long as it beats a 2080 ti by 15-20% and costs around $799


I'd be ok with 400W if it beat the 2080Ti by 100% and idle power consumption was less than 10W.


----------



## moproblems99 (Jun 12, 2020)

Master Tom said:


> In Germany it is so hot in summer that you need air conditioning anyway. In winter the i9-9900K and the Radeon Vega 64 Liquid Cooling help heat the room
> In which country do you live?



That is with Air Conditioning.



ARF said:


> The RTX 2080 Ti is a 300-watt part. So at 400 watts you would expect 33% higher performance, not double.



That would have been relevant when the 2080 Ti launched, but not years later. Linear progression is not impressive. To me, anyway.


----------



## ratirt (Jun 12, 2020)

I would rather compare all the aspects of a graphics card: power draw, performance, price, maybe even value. 400 W is a lot for a GPU, although if it turns out to be a monster that performs extraordinarily, then why not get it. I'm sure the performance/price ratio will not be great anyway.


----------



## EarthDog (Jun 12, 2020)

ARF said:


> RTX 2080 Ti is a 300-watt part.


It's a 260W part in FE (and many card partner) form(s).


----------



## medi01 (Jun 12, 2020)

lynx29 said:


> im ok with 400w as long as it beats a 2080 ti by 15-20% and costs around $799



If 505mm2 RDNA2 chip doesn't beat 2080Ti by 15%, I'll apologize to Raja Koduri.



moproblems99 said:


> expected performance of Ampere...but.....at significantly more power usage.


There are no depths of mind-blowing fanboi insanity you guys cannot dive into, are there?
You are in a thread about 350W cards.


----------



## moproblems99 (Jun 12, 2020)

medi01 said:


> There are no levels of mind-blowing fanboi insanity you guys cannot dive into, are there?
> You are in a thread about 350W cards.



Please see my all AMD build and then call me nv fanboi.


----------



## medi01 (Jun 12, 2020)

moproblems99 said:


> Please see my all AMD build and then call me nv fanboi.


I judge your posts by their content; whatever you own, be it a Ryzen or the Empire State Building, doesn't change that.

You are posting in a thread discussing a 350W card, bring up a 400W card that is allegedly faster, then claim 400W is "substantially higher power consumption" than, oh wait, 350W.


----------



## ARF (Jun 27, 2020)

Navi 21 hasn't even been presented yet, and there is already talk about the future third-generation Navi 3X.

AMD's Next Generation Flagship 'Navi 31 GPU' Gets First Confirmation – Even Bigger Navi?

AMD's Navi 21 flagship GPU isn't out yet, and the existence of an upcoming Navi 31 GPU has already been confirmed thanks to code in the macOS 11 Big Sur beta (Hardware Leaks via VideoCardz). AMD's Navi 21, aka 'Big Navi', has been a GPU that has been anxiously awaited for a very long time now. It […]

wccftech.com

----------



## Caring1 (Jun 28, 2020)

ARF said:


> Navi 21 hasn't been presented, now we have already talks about future third generation Navi 3X.
> View attachment 160427


Last word in text - Apple.


----------



## Master Tom (Jun 28, 2020)

medi01 said:


> If a 505 mm² RDNA2 chip doesn't beat the 2080 Ti by 15%, I'll apologize to Raja Koduri.


I hope it will be even a little bit faster.


----------



## ARF (Jul 4, 2020)

Master Tom said:


> I hope it will be even a little bit faster.




It has to be more than "a little bit" above that 15%. 
This is the first GPU from AMD that supports DX12U, so it should have more significant performance optimisations.


----------



## ARF (Jul 9, 2020)

Has anyone got information on what's going on with Big Navi 2? Performance and launch window estimates?


----------



## ARF (Jul 26, 2020)

AMD Navy Flounder - Navi 22
AMD Sienna Cichlid - Navi 21

*AMD "Navy Flounder" NAVI 22 GPU added to Linux patches*

AMD is adding more patches to the Linux display drivers. We have so far heard about Sienna Cichlid, also known as Navi 21 or "Big Navi". Navy Flounder is a new codename for yet another Navi 2X processor. The graphics processor appears to be from the GFX103X family. Update: it...

videocardz.com




*AMD NAVI 23 graphics processor gets GFX1032 ID*

AMD Navi 23 is confirmed. Decoding AMD graphics chips is not an easy task. The more products AMD releases, the more likely they are to overlap with different architectures. Not all products are always up to date, so despite being the latest they may actually feature last-gen architecture (such as...

videocardz.com




Thoughts?


----------



## ARF (Sep 9, 2020)

AMD Could Potentially Unveil Big Navi 'RDNA 2' GPU Powered Radeon RX 6000 Series Tomorrow

AMD's Big Navi Radeon RX 6000 series graphics cards could potentially be unveiled tomorrow, as hinted in a tweet by Frank Azor.

wccftech.com


----------



## EarthDog (Sep 9, 2020)

That would be today considering he wrote the tweet yesterday/over 16 hours ago.


.... still waiting!!!!!!!!!!!!!!


----------



## Super XP (Sep 9, 2020)

EarthDog said:


> That would be today considering he wrote the tweet yesterday/over 16 hours ago.
> 
> 
> .... still waiting!!!!!!!!!!!!!!


I just hope it's not the Xbox SX announcement.  I mean, he is from AMD, right? If it were an Xbox announcement, it would come from Microsoft, not AMD.

The reason I mention this is a Reddit thread about it.


----------



## moproblems99 (Sep 9, 2020)

EarthDog said:


> That would be today considering he wrote the tweet yesterday/over 16 hours ago.
> 
> 
> .... still waiting!!!!!!!!!!!!!!



Remember, tomorrow is always a day away.  He said it himself.  Maybe it means we'll always be waiting.  After all, it has already been over a year...


----------



## dragontamer5788 (Sep 9, 2020)

EarthDog said:


> That would be today considering he wrote the tweet yesterday/over 16 hours ago.
> 
> 
> .... still waiting!!!!!!!!!!!!!!




__ https://twitter.com/i/web/status/1303725578160349185

__ https://twitter.com/i/web/status/1303726639013036033
Hurrah!  We got announcements for an... official announcement in a month.


----------



## EarthDog (Sep 9, 2020)

dragontamer5788 said:


> Hurrah! We got announcements for an... official announcement in a month.


Figures.... lol.

We got jebaited into thinking we'd hear more than just........... this.

Par for the course!


----------



## Super XP (Sep 9, 2020)

dragontamer5788 said:


> __ https://twitter.com/i/web/status/1303725578160349185
> 
> __ https://twitter.com/i/web/status/1303726639013036033
> Hurrah!  We got announcements for an... official announcement in a month.


I've never seen AMD so quiet about a CPU architecture and a GPU architecture in a very long time. The last time they were this hushed about something was back when the Athlon 64 was released, and for ATI, when the Radeon 9700 Pro launched. 
Whereas for many months before Fury's & Vega's launches, AMD was pretty vocal about them. We all know how that turned out. They were great, but not great enough, per se. 
This RDNA2 is interesting and might very well propel AMD into full-stack competition, evening out the GPU playing field against Nvidia. If this happens, PC gamers are going to benefit greatly this fall, Christmas & into 2021.  I hope I am right 



EarthDog said:


> Figures.... lol.
> 
> We got jebaited into thinking we'd hear more than just........... this.
> 
> Par for the course!


AMD's upcoming Zen 3 announcement?  *AMD Quad FX platform*


----------



## Master Tom (Sep 10, 2020)

Super XP said:


> I've never seen AMD so quiet about a CPU architecture and a GPU architecture in a very long time. The last time they were this hushed about something was back when the Athlon 64 was released, and for ATI, when the Radeon 9700 Pro launched.


Yes, I hope Zen 3 will surpass Intel, and Big Navi the RTX 3070.


----------



## Super XP (Sep 10, 2020)

Master Tom said:


> Yes, I hope Zen3 will surpass Intel and Big Navi the RTX 3070.


Surpass the RTX 3070? Umm, Big Navi is supposed to be AMD's highest-end GPU; that's quite belittling it, don't ya think? Lol


----------



## oxrufiioxo (Sep 10, 2020)

Super XP said:


> Surpass the RTX 3070? Umm, Big Navi is supposed to be AMD's highest-end GPU; that's quite belittling it, don't ya think? Lol



Given AMD's recent history, it's hard to gauge what their card will actually compete with. I think everyone knows they could make a 350W big-die card to compete with the 3080/3090 if they wanted to; my issue is believing they would do that over making 3-4 16-core Ryzen 4000 chips they can sell for $700+ each with the same amount of silicon.


----------



## R0H1T (Sep 10, 2020)

oxrufiioxo said:


> they would do that over making 3-4 16-core Ryzen 4000 chips they can sell for $700+ each with the *same amount of silicon*


Probably less, if not a lot less. As Intel found out about the Zen secret sauce ~

*Intel 7nm Delayed By 6 Months; Company to Take “Pragmatic” Approach in Using Third-Party Fabs*


----------



## Super XP (Sep 10, 2020)

oxrufiioxo said:


> Given AMD's recent history, it's hard to gauge what their card will actually compete with. I think everyone knows they could make a 350W big-die card to compete with the 3080/3090 if they wanted to; my issue is believing they would do that over making 3-4 16-core Ryzen 4000 chips they can sell for $700+ each with the same amount of silicon.


The difference here is that RDNA2 is a new design. AMD hasn't had a new GPU design in a long time. We will have to wait and see what comes out. 
The mainstream market is where the majority of profits & sales come from.


----------



## R0H1T (Sep 10, 2020)

And that's exactly why AMD hasn't bothered competing at the top end for the last few years. A lot of people expect AMD to always be the cheaper brand, even when they lead in performance, efficiency, or really just about anything else. Nvidia can still command a premium by selling equivalent-performing cards just because they're Nvidia. The same goes for Intel, though with locked chips they tend to segment the hell out of that market, covering pretty much every price point imaginable.

The point being, buyers tend to expect more, a lot more for a lot less ($), from AMD than from their bigger-brand counterparts. This is why selling a near-3080-performance chip at similar prices is gonna be hard for them.


----------



## oxrufiioxo (Sep 10, 2020)

R0H1T said:


> And that's exactly why AMD hasn't bothered competing at the top end for the last few years. A lot of people expect AMD to always be the cheaper brand, even when they lead in performance, efficiency, or really just about anything else. Nvidia can still command a premium by selling equivalent-performing cards just because they're Nvidia. The same goes for Intel, though with locked chips they tend to segment the hell out of that market, covering pretty much every price point imaginable.
> 
> The point being, buyers tend to expect more, a lot more for a lot less ($), from AMD than from their bigger-brand counterparts. This is why selling a near-3080-performance chip at similar prices is gonna be hard for them.



I definitely agree; even if they have 3080 ±10% performance at $649, Nvidia will outsell them by at least 2-to-1. I ran into this issue a lot doing systems for people: no matter how much I showed them the RX 570/580 was better than the 1050/Ti, they wouldn't touch AMD even at a similar price with much better performance. To a lesser degree, I ran into the same issue with the 5700 XT vs the 2060S/2070S.


----------



## Super XP (Sep 10, 2020)

Though I agree with the last 2 posts...
AMD competing in the high end and ultra-high end is the only way they can convince the gaming community they have what it takes, and then some. The majority of the money is made in the mainstream, but in order for their brand to get over this comparison hump AMD has been struggling with, they need to compete in the high end.


----------



## Valantar (Sep 10, 2020)

Super XP said:


> Though I agree with the last 2 posts...
> AMD competing in the high end and ultra high end is the only way they can convince the gaming community they have what it takes and some. The majority of the money is made in the mainstream, but i ln order for there brand to overcome this comparison hump AMDs been having issues with is competition in the high end.


That is partially true, but it also takes more than that - there have been periods when AMD has had the best performance available in both the CPU and GPU spaces, yet in all instances they've been outsold by the incumbents in each market. Mindshare gains take a lot of time - but as you say, halo products are also necessary to gain it to any significant degree. Some people take a lot of time to move from "there are Nvidia GPUs, and then there's also another option if those don't rock your boat" to "there are GPUs from both Nvidia and AMD" when they are considering a GPU. Moving that needle is going to take significant time and effort, so I'm truly hoping they have a compelling halo product to start doing that work. Without it there's a significant risk that AMD and Intel will be competing over the same 20% of the GPU market with Nvidia sitting pretty on the rest, and that isn't good for anyone.


----------



## Super XP (Sep 10, 2020)

Valantar said:


> That is partially true, but it also takes more than that - there have been periods when AMD has had the best performance available in both the CPU and GPU spaces, yet in all instances they've been outsold by the incumbents in each market. Mindshare gains take a lot of time - but as you say, halo products are also necessary to gain it to any significant degree. Some people take a lot of time to move from "there are Nvidia GPUs, and then there's also another option if those don't rock your boat" to "there are GPUs from both Nvidia and AMD" when they are considering a GPU. Moving that needle is going to take significant time and effort, so I'm truly hoping they have a compelling halo product to start doing that work. Without it there's a significant risk that AMD and Intel will be competing over the same 20% of the GPU market with Nvidia sitting pretty on the rest, and that isn't good for anyone.


Couldn't have said it better myself   
Let's hope RDNA2 has what it takes.


----------



## Master Tom (Sep 10, 2020)

O.K., let's say, hopefully, it can compete with the RTX 3080.


----------



## BoboOOZ (Oct 28, 2020)

oxrufiioxo said:


> I definitely agree; even if they have 3080 ±10% performance at $649, Nvidia will outsell them by at least 2-to-1. I ran into this issue a lot doing systems for people: no matter how much I showed them the RX 570/580 was better than the 1050/Ti, they wouldn't touch AMD even at a similar price with much better performance. To a lesser degree, I ran into the same issue with the 5700 XT vs the 2060S/2070S.


Quite true; the only thing that's different now is that 3080 availability seems appalling.


----------

