
AMD’s New RDNA GPU Architecture Has Been Exclusively Designed for Gamers

That was introduced with the X1950... wasn't new to R600.
Yeah, the XTX, which came out too little too late to impact anything. The real showcase was the shiny red HD2900XT.
Man that cooler looks nice even today. I wouldn't mind a translucent green RTX2080.
 
The issue with GDDR4 on both the X1950 and the HD2900XT is that it doesn't actually provide much of a performance benefit.
They barely perform better than their GDDR3 counterparts.
 
Made worse when Nvidia managed 2.5 GHz out of their GDDR3.
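For anyone who wants to sanity-check the bandwidth side of this, peak throughput is just the effective data rate times the bus width. A minimal sketch in Python; the 256-bit bus and the GDDR4 data rate are assumptions for illustration, not the exact specs of these cards:

```python
# Peak memory bandwidth = effective data rate (Gbps/pin) * bus width (bits) / 8.
# The 256-bit bus and the GDDR4 data rate below are illustrative assumptions,
# not the exact specs of the X1950 or HD2900XT.

def peak_bandwidth_gbs(effective_gbps: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return effective_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(2.5, 256))  # GDDR3 at 2.5 GHz effective -> 80.0 GB/s
print(peak_bandwidth_gbs(2.2, 256))  # GDDR4 at a hypothetical 2.2 GHz -> 70.4 GB/s
```

A modest clock edge on the same bus buys only a modest bandwidth edge, which is consistent with GDDR4 barely outperforming GDDR3 here.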
 
It really is the memory manufacturers' fault.
They failed to reach the speeds they promised for GDDR4, and the RAM ended up running hotter than GDDR3 anyway.
 
OMG, the same CRAP all over again. Nvidia has had the SAME architecture for over 10 years now, buddy boy. I mean, their latest Turing is essentially the same architecture from their GTX 200 days.

The whole old, tiresome, propagandist fake news that AMD is using the same architecture while Nvidia somehow magically has a new architecture every time is the most absurd propaganda I've ever seen on the internet. Fact of the matter is that Turing literally has its roots in, and deep down is, the same architecture as their GTX 200 design from over 10 years ago!

IF anything, AMD has changed architecture a lot more times, and even their GCN architecture has seen major changes with each revision. GCN 1 to GCN 4 is virtually incomparable; even Vega was a massive redesign of GCN 4, which was already massively different from GCN 3.

What's with this fake news propaganda that somehow everything AMD does in the GPU space is GCN and always the same, while Nvidia, the saviors of the universe, the angels of heaven, always magically have a new architecture every time, brought down from the clouds above on unicorns? Complete fake news propaganda! It's the worst kind of fake news, like the decades-old fake news of "AMD drivers are bad".

Fake news propaganda?

I know architecture development is iterative for both AMD and Nvidia. I never said Nvidia revamped their entire architecture every other year, or did I? If I did, point me to it ;)

All I do is look at results, and that is what I base my conclusions on. You know why? Because everything else is just marketing. AMD and renaming/rebranding is a close marriage, and Nvidia selling every revision as the latest and greatest is a similar marriage. It works well for each company in marketing. The end results (perf per shader, perf per watt, perf per mm² of die space) tell us the real story.

And the real story is this: whether AMD calls it RDNA, GCN 3000 or Ultra Revolution Twenty is irrelevant when all we get is the same performance level we've been looking at for the past 3 years, and when everything they've released thus far is the same crap rebranded, renamed or shrunk again. Poor Vega Navi... 2 years and all we got was GDDR6. Pathetic.

:slap:

Also, do you not see the irony with this?

OMG, the same CRAP all over again. Nvidia has had the SAME architecture for over 10 years now, buddy boy. I mean, their latest Turing is essentially the same architecture from their GTX 200 days.

That AMD, with all their revamps and revisions of GCN, cannot seem to surpass the perf/watt of Nvidia's GTX 200 days, and also stalls at a performance level nearly 40% below it?

Do you even logic, buddy boy?
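Since perf per shader, perf per watt and perf per mm² are the crux of this argument, here's a minimal sketch of those normalizations; the two sample cards are entirely made up for illustration:

```python
# Normalize a raw performance score by shader count, board power and die area.
# Both sample cards below are hypothetical; the point is the normalization, not the data.

def normalized_metrics(perf: float, shaders: int, watts: float, die_mm2: float) -> dict:
    return {
        "perf_per_shader": perf / shaders,
        "perf_per_watt": perf / watts,
        "perf_per_mm2": perf / die_mm2,
    }

# Equal raw performance, very different cost to get there:
card_a = normalized_metrics(perf=100, shaders=2000, watts=150, die_mm2=300)
card_b = normalized_metrics(perf=100, shaders=2600, watts=210, die_mm2=480)

for name, metrics in (("card_a", card_a), ("card_b", card_b)):
    print(name, {k: round(v, 4) for k, v in metrics.items()})
```

Two cards can land on the same FPS while one spends far more silicon and power to get there, which is exactly the distinction being argued.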
 
Stop saying AMD's using the same architecture then, when every time it's a significantly changed architecture. If anything, Nvidia has historically made smaller changes from generation to generation.

Nvidia is not ahead by any means. I mean, look at Vega 56 and Vega 64 competing with the 1070 and 1080. AMD just made the wrong decision to have one be-all-end-all card for compute, AI, gaming, etc...
Otherwise, comparing the RX 580, 570 and 560, they're very competitive against Nvidia's GTX 1060 6 GB, 1060 3 GB, 1050 Ti and 1050.

And ultimately it's the price that matters, not anything else, and AMD has been winning on that front for the past 5 years!
 

No, no and no. Crawl back under that rock pls, you missed the point and still do.

A product that uses more power, needs a bigger die and more CPU to do the same work as its competitor is simply not as good, and that is all there is to it. Again, like I said before, when I buy a gaming GPU all I care about is the end performance and how it progresses from one gen to the next. Die size and power are factors in that equation. If all you like to do is buy a midrange GPU when it's dropped in price, by all means, cheer for AMD.

By the way, how's that Vega price/perf doing shortly after launch - which was delayed already? And how's that Radeon VII price/perf metric working out? Or that battle for the midrange? There is no AMD price advantage - it always comes at a cost: higher power bill over the lifetime of the card, lower performance, or more noise, and usually all three of those... to save what, 20-40 bucks? Yeah, that really is a sign of a competitive product!

The last 5 years, the only real GPU win for AMD was GPU mining, and prior to that their HD7970. The rest was utter meh front to back, and it still is. AMD's GPU division has been stacking one bad decision on top of another ever since it was erected from the ashes of ATI.
 
A product that uses more power, needs a bigger die and more CPU to do the same work as its competitor is simply not as good, and that is all there is to it.

Here is how I know this isn't what ultimately dictates market success or if something is considered competitive: Fermi.
 
Well, the 290X was decent for its era.
 
Can you specify what performance increases those "significant" changes brought, so that I can show you what they actually are?

GCN 2 and GCN 3 are 10% over GCN 1 collectively. In four generations, they managed 18%, which is just 6% per gen on average.

What Vayra said above is all that matters. I don't really see how significant changes are even a topic here if they're lagging behind Pascal in efficiency and performance per shader while using 7 nm and HBM2.

[Chart: performance per watt, 2560×1440]
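As a side note on the averaging: generational gains compound, so the honest per-gen figure is a geometric mean rather than a straight division. A quick sketch using the 18% figure quoted above, counting GCN 1 to GCN 4 as three generational steps:

```python
# Generational gains compound, so the average step is a geometric mean.
# Uses the 18% cumulative figure quoted above; GCN 1 -> 4 counted as three steps.

total_gain = 1.18   # +18% cumulative
steps = 3           # GCN 1 -> GCN 2 -> GCN 3 -> GCN 4

avg_per_gen = total_gain ** (1 / steps) - 1
print(f"{avg_per_gen:.1%} per generation")  # ~5.7%, close to the ~6% eyeballed above
```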
 
We really have no idea what Navi will provide. To me it is the successor to Polaris, not Vega. Vega was supposed to have been cheaper than the 1080, but it was released at the height of the crypto boom, thus inflating the price. Most people like to compare it to the 1080 Ti because of that. In terms of raw performance we really have no idea what it will be. I will say that a card with half the power draw for the same performance is not good but great. If that is anything to go by, Navi should be a hit. Maybe Lisa Su really is that savvy, and we may see the same thing from AMD vs Nvidia as AMD vs Intel.
 
Which is the SAME as Nvidia. Their GTX 500 to GTX 900 were essentially the same. In fact, the GTX 760 was actually faster than the GTX 960. At the same price points they kept releasing slower products; what they did was move the midrange and high end upward.

But GCN 4 was much faster than GCN 3. In fact, at the same price point the RX 580 was close to double the performance of the R9 380 at lower power consumption: about 10% lower power consumption and a 50% performance increase for $200.

If you look at the RX 570 vs the R7 370, again a 50% performance increase, at $170 vs $150 and 120 W vs 110 W.
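Taking the figures in this post at face value, the perf-per-dollar and perf-per-watt deltas fall out of simple ratios. A sketch over the RX 570 vs R7 370 numbers as claimed above (not independently verified):

```python
# Simple ratio arithmetic over the figures claimed in the post above
# (RX 570 vs R7 370: +50% performance, $170 vs $150, 120 W vs 110 W).
# These inputs are the poster's claims, not verified benchmark data.

perf_ratio = 1.50        # +50% performance as claimed
price_ratio = 170 / 150  # ~1.13x the price
power_ratio = 120 / 110  # ~1.09x the power

print(f"perf per dollar: {perf_ratio / price_ratio:.2f}x")  # ~1.32x
print(f"perf per watt:   {perf_ratio / power_ratio:.2f}x")  # ~1.38x
```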
 
Pay attention to what Vayra said. Don't compare SKUs but perf per shader. Maxwell was a pretty big architectural improvement: they were able to match a 2880-CUDA 780 Ti with a 1664-CUDA 970 running 27% higher clocks. That's a 73% higher shader count matched by a 27% core frequency increase, with lower memory bandwidth at the same time, on the same 28 nm process.

And the 960 was 10% faster than the 760 while packing 12% fewer shaders.

[Chart: relative performance, 1920×1080]


No one here is really excited about talking about GCN again, especially when you're using your made-up numbers.
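To make the perf-per-shader point concrete: if the 970 matches the 780 Ti, the implied per-shader, per-clock gain falls straight out of the shader counts and clocks. A sketch using the figures from the post above (the 27% clock uplift is the post's approximation):

```python
# If two cards deliver roughly equal performance, the implied per-shader-per-clock
# gain is the ratio of their (shader count * clock) products.
# Figures are the ones quoted above; the 27% clock uplift is approximate.

shaders_780ti = 2880  # Kepler
shaders_970 = 1664    # Maxwell
clock_uplift_970 = 1.27

implied_gain = shaders_780ti / (shaders_970 * clock_uplift_970)
print(f"Maxwell does ~{implied_gain:.2f}x the work per shader-clock")  # ~1.36x
```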
 

I don't know about double the performance. The biggest advantage Polaris had over Tahiti-based cards was power draw. Having owned both series of cards, I can attest that in most titles you would be hard pressed to tell the difference between them, especially in CrossFire. Vega cards are the only AMD cards since Tahiti that I can say give a noticeable increase in all-out performance vs the previous gen. The 570 is in no way flat-out faster than the 370 by 50%. Again, I am making these statements based on experience. Even though I am no fan of Nvidia, one cannot deny that Nvidia has AMD firmly beat in the GPU space.
 
Why does anyone ever pay attention to press releases? Have we ever seen even the simplest thing produce what the marketing people come up with? Even a fan that does 35 CFM @ 0.6 SP is advertised as 70 CFM, 1.2 SP. Ever seen a monitor match the response time on its spec sheet? Hasn't happened yet. I don't care who the vendor is... marketing departments lose to politicians on the believability scale.
 
Here is how I know this isn't what ultimately dictates market success or if something is considered competitive: Fermi.

Tbf, AMD's market share DID rise when Fermi was running around.
 
I'm literally looking at the TechPowerUp database and writing their numbers. The RX 570 is pretty much 50% faster than the R7 370.

You can't really compare "core" counts, because these are just word games. "Cores" don't mean anything in GPU designs; at one point Nvidia barely had 500 cores while AMD was running 2000+ cores. You can't then just say Nvidia is 4x faster, because that would be stupid, just as comparing per-core performance is even stupider.

Core counts depend on the whole overall design of the graphics pipeline.
 
Yeah, it can be explained so simply; anyone can spend some time and weave a tale.
The details get dropped, like they did with the lossless colour compression that Nvidia improved on then too.
Or culling 64-bit compute to get that speed. I get it: if you don't use it, it's irrelevant.
Designs are always a compromise, even Nvidia's.
 
Here is how I know this isn't what ultimately dictates market success or if something is considered competitive: Fermi.

Well, Fermi was faster than the 5870 by 10%, but it was 25% more expensive and a nuclear reactor. So they could at least say it was faster. AMD can't really say that.

Actually, I guess they can. Nothing to see here.
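Running the numbers in that comparison: +10% performance at +25% price is a net loss in perf per dollar, using the percentages exactly as stated above:

```python
# Fermi vs the 5870, using the percentages as stated in the post above.
perf_ratio = 1.10   # 10% faster
price_ratio = 1.25  # 25% more expensive

print(f"perf per dollar: {perf_ratio / price_ratio:.2f}x")  # 0.88x, i.e. ~12% worse
```

Which only sharpens the earlier point: Fermi sold anyway, so raw efficiency metrics clearly aren't the whole story of market success.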
 
You know the thing about the jack of all trades, though...
 