
AMD RDNA2 Graphics Architecture Detailed, Offers +50% Perf-per-Watt over RDNA

Joined
May 31, 2016
Messages
4,437 (1.44/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800MHz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1V@2400MHz
Storage M.2 Samsung 970 Evo Plus 500GB / Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
For that you'd also need a 512-bit memory bus, which ... well, is expensive, huge, and power hungry all at once. Not a good idea (as the 290(X)/390(X) showed us).
It would have been a big chip, so yes, you would need it, but in any case this 500mm² chip would do the trick, tapping beyond the 2080 Ti's performance. If you pack in a lot of cores you need to feed them, so either way you need to do something with the memory interface. Power hungry, yes, but not all the way. You need to remember, it all depends on the frequency used; if you balance it, it would be OK. There are possibilities to make it happen.
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Edit: ah, I see you edited in the 2070 as the comparison. Your power draw number is still a full 20W too low though.
I didn't inflate anything intentionally. I compared apples to apples... their MFG ratings. My point remains.

I edited like 35 minutes before your post, lol... hit refresh before you post if it's sitting that long, lol.

EDIT: We have no idea how either RDNA2 or Ampere will respond relative to its TBP. So for that, I used a static value, the MFG ratings (sourced from TPU's specs pages for the cards). Actual use will vary, but how much will depend... so again, I took the only static numbers out there that would not vary by card... I see the actual numbers are lower. They are at least 10% behind in that metric. They're still facing an uphill battle considering Nvidia has a node shrink in front of them along with a change in architecture.
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I didn't inflate anything intentionally. I compared apples to apples... their MFG ratings. My point remains.

I edited like 35 minutes before your post, lol... hit refresh before you post if it's sitting that long, lol.

EDIT: We have no idea how either RDNA2 or Ampere will respond relative to its TBP. So for that, I used a static value, the MFG ratings (sourced from TPU's specs pages for the cards). Actual use will vary, but how much will depend... so again, I took the only static numbers out there that would not vary by card... I see the actual numbers are lower. They are at least 10% behind in that metric. They're still facing an uphill battle considering Nvidia has a node shrink in front of them along with a change in architecture.
Yeah, I quoted you to remind myself to respond to that later, then went and did something else :p Sorry about that. Anyhow, by not going by real-world power draw numbers you're effectively giving Nvidia an advantage due to them lowballing specs. That's ... nice of you, I guess? My general rule of thumb is to never - ever! - trust manufacturer power draw numbers, but rely on real-world measurements from reviews. The former is okay for ballpark stuff or if no reviews exist, but should always be taken with a (huge) grain of salt.

It would have been a big chip, so yes, you would need it, but in any case this 500mm² chip would do the trick, tapping beyond the 2080 Ti's performance. If you pack in a lot of cores you need to feed them, so either way you need to do something with the memory interface. Power hungry, yes, but not all the way. You need to remember, it all depends on the frequency used; if you balance it, it would be OK. There are possibilities to make it happen.
No, you would need that not due to the size of the chip, but due to the 5700 XT having a 256-bit memory interface, and doubling the compute power necessitates doubling memory bandwidth too unless you want to intentionally bottleneck the chip. How many cores you have doesn't matter if they can't get data to process quickly enough. And there's no power tuning to be done in this case - 8GB of GDDR6 on a 256-bit bus consumes somewhere around 30-35W; twice that will consume twice the power unless you downclock the memory and sacrifice performance. I'm not talking about chip power consumption but the power consumption of the memory and its interface.
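
To put rough numbers on that, here's a quick back-of-the-envelope sketch. The 14 Gbps pin speed is an assumption (typical for 5700 XT-class GDDR6), and the power figure is just the 30-35W range above scaled linearly with bus width:

```python
# Rough GDDR6 bandwidth/power scaling, using the figures from the post.
# Assumptions: 14 Gbps per pin (5700 XT-class GDDR6), and ~30-35 W for
# 8 GB on a 256-bit bus, scaled linearly with bus width.

def gddr6_bandwidth_gb_s(bus_width_bits, gbps_per_pin=14):
    """Peak bandwidth in GB/s: pin count * per-pin rate, bits -> bytes."""
    return bus_width_bits * gbps_per_pin / 8

for bus in (256, 512):
    bandwidth = gddr6_bandwidth_gb_s(bus)
    mem_power = 32.5 * bus / 256  # midpoint of the 30-35 W estimate
    print(f"{bus}-bit bus: ~{bandwidth:.0f} GB/s, ~{mem_power:.0f} W for DRAM + interface")

# 256-bit bus: ~448 GB/s, ~33 W
# 512-bit bus: ~896 GB/s, ~65 W  (double the bus, double the memory power)
```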
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
The former is okay for ballpark stuff or if no reviews exist, but should always be taken with a (huge) grain of salt.
There is nothing there for RDNA2 or Ampere, so I used what I will have for all comparison cards... what the MFG says. Once we see Ampere's flagship and Big Navi, we will deal with actual numbers.

Regardless of whether it's 50W (~20%) or 24W (~10%), the high-level point is unchanged... the RDNA arch on a smaller node is less efficient than Turing on a larger node. They have a lot of work to do to reclaim the performance crown and some work to regain performance/watt. Where AMD only has an arch change, Nvidia is coming with both barrels loaded (arch and node shrink).
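
For what it's worth, the size of that disagreement is easy to quantify. A minimal sketch of how the apparent perf/W gap moves depending on which power numbers you plug in (all four wattages here are placeholders chosen to illustrate the ~20% vs. ~10% spread being argued about, not sourced measurements):

```python
# How the apparent efficiency gap shifts between spec-sheet TBP and
# measured board power. Placeholder numbers, for illustration only.

def perf_per_watt(perf_index, watts):
    return perf_index / watts

PERF = 100  # assume equal performance for both cards, for simplicity

# Spec-sheet TBP (hypothetical 175 W vs. 225 W):
gap_on_paper = perf_per_watt(PERF, 175) / perf_per_watt(PERF, 225)
# Measured draw (hypothetical 195 W vs. 219 W):
gap_measured = perf_per_watt(PERF, 195) / perf_per_watt(PERF, 219)

print(f"efficiency lead on paper: {gap_on_paper - 1:.0%}")  # ~29%
print(f"efficiency lead measured: {gap_measured - 1:.0%}")  # ~12%
```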

EDIT:
I can see why Nvidia may be worried.
My reply all started with this comment, mind you.......

I don't think they have much to worry about except for the usual price-to-performance ratio, considering all that we know right now, including the 50% rumors from both camps... but I've said that like 3 times now to 3 different people, it feels like.

EDIT2: Isn't RDNA2 also supposed to add RT capabilities? Won't that eat into their 'normal' power envelope? For Nvidia, this lowered their typical GoG (generation over generation) performance improvements... will it do the same to AMD?

All of these factors make me confident Nvidia isn't "worried" about 'big navi'. AMD has A LOT of work to do in order to catch up.
 
Last edited:
Joined
May 31, 2016
Messages
4,437 (1.44/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800MHz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1V@2400MHz
Storage M.2 Samsung 970 Evo Plus 500GB / Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
No, you would need that not due to the size of the chip, but due to the 5700 XT having a 256-bit memory interface, and doubling the compute power necessitates doubling memory bandwidth too unless you want to intentionally bottleneck the chip. How many cores you have doesn't matter if they can't get data to process quickly enough. And there's no power tuning to be done in this case - 8GB of GDDR6 on a 256-bit bus consumes somewhere around 30-35W; twice that will consume twice the power unless you downclock the memory and sacrifice performance. I'm not talking about chip power consumption but the power consumption of the memory and its interface.
I'm surprised you are still going with this. It is obvious it would be necessary to get more bandwidth, but that wasn't the problem here. Making a 500mm² chip is nothing out of the ordinary or extreme, and it can be done. Bandwidth is obvious and it can be done as well. Power consumption is another story. You can tweak everything and keep it reasonably balanced.
GDDR6 consumes 20W for 16GB. The same capacity of HBM2 is 10W.
 
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I'm surprised you are still going with this. It is obvious it would be necessary to get more bandwidth, but that wasn't the problem here. Making a 500mm² chip is nothing out of the ordinary or extreme, and it can be done. Bandwidth is obvious and it can be done as well. Power consumption is another story. You can tweak everything and keep it reasonably balanced.
GDDR6 consumes 20W for 16GB. The same capacity of HBM2 is 10W.
I never said it couldn't be done; I said it would require a huge and expensive PCB and need a lot of power (which would necessitate lowering the power budget of the GPU, sacrificing performance). All of which is still true.
 
Joined
May 31, 2016
Messages
4,437 (1.44/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800MHz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1V@2400MHz
Storage M.2 Samsung 970 Evo Plus 500GB / Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
I never said it couldn't be done; I said it would require a huge and expensive PCB and need a lot of power (which would necessitate lowering the power budget of the GPU, sacrificing performance). All of which is still true.
And I never said it wouldn't require an expensive PCB and a lot more power. That was not the point; anyway, thanks for bringing this up :)
It is possible, and we can only guess at the outcome.
 
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
There is nothing there for RDNA2 or Ampere, so I used what I will have for all comparison cards... what the MFG says. Once we see Ampere's flagship and Big Navi, we will deal with actual numbers.

Regardless of whether it's 50W (~20%) or 24W (~10%), the high-level point is unchanged... the RDNA arch on a smaller node is less efficient than Turing on a larger node. They have a lot of work to do to reclaim the performance crown and some work to regain performance/watt. Where AMD only has an arch change, Nvidia is coming with both barrels loaded (arch and node shrink).
I didn't say there were numbers available for either of the two, but given how notoriously unreliable manufacturer specifications for power draw are, I would argue that the only reasonable thing to base our speculation on is actual real-world numbers, not wildly inaccurate specifications.

You're right that RDNA is still slightly less efficient in an absolute sense, though that depends on the implementation: the RX 5700 XT is slightly less efficient than the 2070S, but the 5600 XT (even with the new, boosted BIOS) is better than its Nvidia competition by a few percent. Nvidia still (obviously!) has the more efficient architecture given their node disadvantage. But considering that AMD has historically struggled on perf/W and just launched a new arch with major perf/W improvements (not just due to 7nm; remember that the 5700 XT roughly matches the Radeon VII in performance at significantly less power draw on the same node, and with less efficient memory to boot), one might assume there weren't major efficiency improvements left to be had in the new architecture right off the bat. Apparently AMD says there are. Which is surprising to me, at least.

Now, I'm not saying "Nvidia should be worried", as that's a silly statement implying that AMD is somehow going to surpass them out of the blue, but unless Nvidia manages to pull off their fifth consecutive round of significant efficiency improvements (beyond just the node change, that is) we might see AMD come close to parity if these rumors pan out. Of course we also might not, the rumors might be entirely wrong, or Nvidia might indeed have a major improvement coming - we have no idea.

It's also worth pointing out that your initial statement is rather self-contradictory - on the one hand you're saying we don't have data so we should use manufacturer specs (for entirely different cards..?), while you also say "we will deal with actual numbers" (which I'm reading as real-world test data) once they arrive. Why not then also base ourselves on real-world numbers for currently available cards, rather than their specs (which are very often misleading if not flat out wrong)? Your latter statement implies that real-world data is better, so why not also use that for existing cards?

And I never said it wouldn't require an expensive PCB and a lot more power. That was not the point; anyway, thanks for bringing this up :)
It is possible, and we can only guess at the outcome.
Possible, yes. But AMD brought in HBM specifically as a way of increasing memory bandwidth without the massive PCBs and expensive and complex trace layouts required by 512-bit memory buses. Now, GDDR6 is much faster than GDDR5, but also more expensive, which somewhat alleviates the main pain point of HBM - cost. Add to that that GDDR6 needs even more complex traces than GDDR5, and it becomes highly unlikely that we'll ever see a GPU with a 512-bit GDDR6 bus - HBM2(E) is far more likely at that kind of performance (and thus price) level. You're welcome to disagree, but AMD's recent history doesn't.
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Now, I'm not saying "Nvidia should be worried", as that's a silly statement implying that AMD is somehow going to surpass them out of the blue, but unless Nvidia manages to pull off their fifth consecutive round of significant efficiency improvements (beyond just the node change, that is) we might see AMD come close to parity if these rumors pan out. Of course we also might not, the rumors might be entirely wrong, or Nvidia might indeed have a major improvement coming - we have no idea.
It's also worth pointing out that your initial statement is rather self-contradictory - on the one hand you're saying we don't have data so we should use manufacturer specs (for entirely different cards..?), while you also say "we will deal with actual numbers" (which I'm reading as real-world test data) once they arrive.
It was clear as Windexed glass. I am saying instead of mixing and matching actual numbers, I simplified and went with MFG listed specs. You are getting lost in the details that aren't terribly relevant to the point. Take the deets away and see the forest through the trees, please. :)

Again, I wasn't really talking to you out of the gate, but to the Super XP guy who thinks Nvidia is going to be "worried". AMD has a long way to go, bud, no matter what way you slice the numbers. Nvidia has a die shrink and arch change, while AMD has an arch change while adding on RT hardware for the first time. I'm a betting man and my money is on Nvidia being able to reach these rumored goals.

But yes, we have no idea... I know/knew that going into my first reply to Super XP... may have even said it there too....this merry go round is making me dizzy. I don't give 2 shits to split hairs and semantics which don't matter to the overall point........ :).

AMD is currently behind in PPW. Outside of the 5600 XT, which had to be tweaked the week before reviews, Navi is less efficient than Turing. At best, with the 5600 XT, it is on par, with negligible differences. However, the budget 5500 XT and the (current) flagship 5700 XT are not as efficient. So there is that hurdle to overcome. Next, performance: a 46% increase is needed to reach 2080 Ti speeds from a 5700 XT. If we use Kepler to Turing and its paltry increase (25%), that means AMD needs to come close to a 71% performance increase to match Ampere. I'll call AMD's flagship 'close' to Nvidia's when it is within 10%. So let's say it needs a 61% improvement over the 5700 XT... I ask again, to all, have we ever seen a 61% performance increase from gen to gen? Maybe the 8800 GTS over a decade ago??? I don't recall...
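
Spelling that arithmetic out (a sketch that follows the additive percentages used above; compounding the two gains instead would make the required jump even bigger):

```python
# The performance-gap arithmetic above, made explicit. Percentages are
# added as in the post; compounding (1.46 * 1.25 ~= 1.83) would imply
# an even larger required jump.

uplift_to_2080ti = 46     # % a 5700 XT needs to match a 2080 Ti
assumed_ampere_gain = 25  # % gen-over-gen gain assumed for Nvidia
close_margin = 10         # "close" defined as within 10%

to_match = uplift_to_2080ti + assumed_ampere_gain  # 71%
to_be_close = to_match - close_margin              # 61%

print(f"needed to match the Ampere flagship: +{to_match}% over a 5700 XT")
print(f"needed to be 'close' (within {close_margin}%): +{to_be_close}%")
```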

So, for the last time....... :)

Nvidia is sure as hell not worried about AMD. AMD has a lot of work to match or come close to what Ampere can bring in performance, and a bit less work - but work nonetheless - to take the overall PPW crown. Can anyone refute those points?
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
It was clear as Windexed glass. I am saying instead of mixing and matching actual numbers, I simplified and went with MFG listed specs. You are getting lost in the details that aren't terribly relevant to the point. Take the deets away and see the forest through the trees, please. :)

Again, I wasn't really talking to you out of the gate, but to the Super XP guy who thinks Nvidia is going to be "worried". AMD has a long way to go, bud, no matter what way you slice the numbers. Nvidia has a die shrink and arch change, while AMD has an arch change while adding on RT hardware for the first time. I'm a betting man and my money is on Nvidia being able to reach these rumored goals.

But yes, we have no idea... I know/knew that going into my first reply to Super XP... may have even said it there too....this merry go round is making me dizzy. I don't give 2 shits to split hairs and semantics which don't matter to the overall point........ :).

AMD is currently behind in PPW. Outside of the 5600 XT, which had to be tweaked the week before reviews, Navi is less efficient than Turing. At best, with the 5600 XT, it is on par, with negligible differences. However, the budget 5500 XT and the (current) flagship 5700 XT are not as efficient. So there is that hurdle to overcome. Next, performance: a 46% increase is needed to reach 2080 Ti speeds from a 5700 XT. If we use Kepler to Turing and its paltry increase (25%), that means AMD needs to come close to a 71% performance increase to match Ampere. I'll call AMD's flagship 'close' to Nvidia's when it is within 10%. So let's say it needs a 61% improvement over the 5700 XT... I ask again, to all, have we ever seen a 61% performance increase from gen to gen? Maybe the 8800 GTS over a decade ago??? I don't recall...

So, for the last time....... :)

Nvidia is sure as hell not worried about AMD. AMD has a lot of work to match or come close to what Ampere can bring in performance, and a bit less work - but work nonetheless - to take the overall PPW crown. Can anyone refute those points?
I know I wasn't the one you were responding to; the reason I keep splitting hairs with you is that you keep making mismatched comparisons or false equivalencies, or otherwise presenting things in a clearly unequal way. The statement I pointed out rather conspicuously says "we'll see what real-world numbers for future products tell us when they arrive, but for now, let's skip real-world numbers for existing products and go with specs instead!" Which is ... odd, to say the least. Why not use today's real-world numbers when they are readily available and clearly demonstrate specs to be inaccurate? Only one reason that I can see: that real-world numbers make Nvidia's advantage look smaller than on-paper specs.

Also, saying Navi is overall less efficient than Turing ... well, that depends massively on the implementation. First off, mentioning that the 5600 XT was tweaked just before launch rather works against your argument in this context, as it was tweaked to be far less efficient by boosting clocks, with the pre-update BIOS being by far the most efficient GPU TPU has ever tested at 1440p and 4K (not that it's a 4K-capable GPU, but it is definitely an entry-level 1440p card). In other words, depending on the implementation, Navi can be both more and less efficient than Turing. Does that mean it's a more efficient architecture? Obviously not - the node advantage AMD has at this point means that Nvidia still has the obvious architecture advantage. But Navi has been demonstrated to be very efficient when it's not being pushed as far as it can possibly go.

That it scales well downwards is very promising in terms of a larger die being efficient at lower clocks. People keep talking about "AMD just needs X times the 5700 XT to beat the 2080 Ti", yet that would be a ~440W GPU barring major efficiency improvements. 2x the 5600 XT, on the other hand, would still beat the 2080 Ti handily (the latter is 60, 74 and 85% faster at 1080p, 1440p and 4K respectively), but at just ~330W. Or you could use clocks closer to the original 5600 XT BIOS and still beat or nearly match it (2x 91% vs. 160%, 2x 91% vs. 174%, and 2x 90% vs. 185%, assuming perfect scaling, which is of course a bit optimistic), but at just ~250W! So yeah, don't discount the value of scaling down clocks to reach a performance target with a larger die.

Just because the 5700 XT was pushed as far as it can go to compete as well as possible with the 2070 doesn't mean that AMD's next large GPU will be pushed as far. They have a history of doing so, but that was with GCN, which had a hard limit of 64 CUs, meaning the only way to improve performance was higher clocks. That no longer applies with RDNA.
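
To make the scaling argument concrete, here's a rough sketch. The relative-performance figures come from the percentages above (5600 XT with the new BIOS = 100; the 5700 XT's 122 is an assumption), the per-card power draws are assumptions consistent with the ~440W/~330W/~250W totals, and perfect 2x scaling is, as noted, optimistic:

```python
# Doubling an efficiently-clocked die vs. doubling one pushed to its limit.
# Relative performance: 5600 XT (new BIOS) = 100. Power figures are
# assumptions matching the ~440 W / ~330 W / ~250 W totals cited above.

configs = {
    "2x 5700 XT":            (2 * 122, 2 * 220),  # 122 rel. perf assumed
    "2x 5600 XT (new BIOS)": (2 * 100, 2 * 165),
    "2x 5600 XT (old BIOS)": (2 * 91,  2 * 125),
    "2080 Ti (1440p)":       (174,     270),      # 74% faster than a 5600 XT
}

for name, (perf, watts) in configs.items():
    print(f"{name:23s} perf {perf:3d}  ~{watts} W  perf/W {perf / watts:.2f}")
```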

As I said above, I completely agree that saying "Nvidia should be worried" is silly, but you on the other hand seem to be consistently skewing things in favor of Nvidia, whether consciously or not.
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
seem to be consistently skewing things in favor of Nvidia, whether consciously or not.
That surely isn't my intent, and I already explained the sourcing for my numbers (and what your 'actual' values add to the conversation - nothing much)... I'm not going to go over it a 3rd time. You can split hairs and throw stones, but that doesn't change my position or endgame.

AMD is going to have a tough time beating Ampere on either front... one has arch + node; the other, just arch.

Cheers.
 
Joined
Mar 23, 2005
Messages
4,082 (0.57/day)
Location
Ancient Greece, Acropolis (Time Lord)
System Name RiseZEN Gaming PC
Processor AMD Ryzen 7 5800X @ Auto
Motherboard Asus ROG Strix X570-E Gaming ATX Motherboard
Cooling Corsair H115i Elite Capellix AIO, 280mm Radiator, Dual RGB 140mm ML Series PWM Fans
Memory G.Skill TridentZ 64GB (4 x 16GB) DDR4 3200
Video Card(s) ASUS DUAL RX 6700 XT DUAL-RX6700XT-12G
Storage Corsair Force MP500 480GB M.2 & MP510 480GB M.2 - 2 x WD_BLACK 1TB SN850X NVMe 1TB
Display(s) ASUS ROG Strix 34” XG349C 180Hz 1440p + Asus ROG 27" MG278Q 144Hz WQHD 1440p
Case Corsair Obsidian Series 450D Gaming Case
Audio Device(s) SteelSeries 5Hv2 w/ Sound Blaster Z SE
Power Supply Corsair RM750x Power Supply
Mouse Razer Death-Adder + Viper 8K HZ Ambidextrous Gaming Mouse - Ergonomic Left Hand Edition
Keyboard Logitech G910 Orion Spectrum RGB Gaming Keyboard
Software Windows 11 Pro - 64-Bit Edition
Benchmark Scores I'm the Doctor, Doctor Who. The Definition of Gaming is PC Gaming...
I didn't inflate anything intentionally. I compared apples to apples... their MFG ratings. My point remains.

I edited like 35 minutes before your post, lol... hit refresh before you post if it's sitting that long, lol.

EDIT: We have no idea how either RDNA2 or Ampere will respond relative to its TBP. So for that, I used a static value, the MFG ratings (sourced from TPU's specs pages for the cards). Actual use will vary, but how much will depend... so again, I took the only static numbers out there that would not vary by card... I see the actual numbers are lower. They are at least 10% behind in that metric. They're still facing an uphill battle considering Nvidia has a node shrink in front of them along with a change in architecture.
A change in architecture? Well, so does AMD; last I heard, RDNA2 is brand new and will have little to do with RDNA1.

That surely isn't my intent, and I already explained the sourcing for my numbers (and what your 'actual' values add to the conversation - nothing much)... I'm not going to go over it a 3rd time. You can split hairs and throw stones, but that doesn't change my position or endgame.

AMD is going to have a tough time beating Ampere on either front... one has arch + node; the other, just arch.

Cheers.
Not necessarily; AMD has the node advantage here, as they have 7nm experience. Nvidia does not.
 
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
A change in architecture? Well, so does AMD; last I heard, RDNA2 is brand new and will have little to do with RDNA1.
Well that's just plain wrong. RDNA 2 is still RDNA, just fully implemented RDNA (and likely including various tweaks, optimizations and improvements), while RDNA (1) is RDNA with some features omitted and some minor parts of GCN kept to ensure it could launch in a reasonable time. That of course doesn't mean RDNA 2 can't or won't be a major update - at this point I think it will be, given how AMD talks about it and the performance of the new Xbox shown off today - but it is still very much related to RDNA (1).

Not necessarily; AMD has the node advantage here, as they have 7nm experience. Nvidia does not.
Experience with a node doesn't matter much unless it's a bleeding-edge node. 7nm isn't that any more; it's quite mature. TSMC can guide Nvidia through any issues they might have; in fact, they have engineering teams specifically for this.
That surely isn't my intent, and I already explained the sourcing for my numbers (and what your 'actual' values add to the conversation - nothing much)... I'm not going to go over it a 3rd time. You can split hairs and throw stones, but that doesn't change my position or endgame.

AMD is going to have a tough time beating Ampere on either front... one has arch + node; the other, just arch.

Cheers.
Definitely don't mean to throw any stones, just pointing out what looked like a consistent slant in what you were saying. I entirely agree that AMD will have a hard time beating Ampere, but I do think there's reason to expect them to get pretty close this time around, and I don't think launching a true flagship-level GPU will be an issue for them this go-around, even if it would then be >=60% faster than the upper-midrange "flagship" of the previous generation. We might see parity, or we might see them a bit behind and cheaper; and though I think the chance of them being outright ahead is by far the slimmest of the three, it is looking more plausible than at any point since 2015 (which on the other hand isn't saying much). It'll nonetheless be a very exciting release cycle (especially with new consoles bringing a lot of goodness to cross-platform games).
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
A change in architecture? Well, so does AMD; last I heard, RDNA2 is brand new and will have little to do with RDNA1.


Not necessarily; AMD has the node advantage here, as they have 7nm experience. Nvidia does not.
In each and every post I've mentioned both have architectural improvements to be had...

And the node advantage doesn't mean much here. Even if you potato your way onto a lower node, there are still inherent efficiency gains to be had. If there is a sponge with more left to squeeze out of it, it seems like that is Nvidia, considering the node shrink on top of a new arch. AMD is also adding ray tracing cores; if their addition is anything like Nvidia's, it will be lucky to reach 2080 Ti speeds.

As I said, I'll bet it lands between a 2080 Ti and the Ampere flagship. I believe it will fall at least 10% short of Ampere on performance alone (no clue on RTX performance, likely the same idea... faster than a 2080 Ti, slower than Ampere) and slightly worse in performance-per-watt overall. Pricing on these parts, from both parties, will be paramount in choosing the right card... and AMD will surely be a worthy competitor and offer viable options.
 
Joined
Mar 23, 2005
Messages
4,082 (0.57/day)
Location
Ancient Greece, Acropolis (Time Lord)
System Name RiseZEN Gaming PC
Processor AMD Ryzen 7 5800X @ Auto
Motherboard Asus ROG Strix X570-E Gaming ATX Motherboard
Cooling Corsair H115i Elite Capellix AIO, 280mm Radiator, Dual RGB 140mm ML Series PWM Fans
Memory G.Skill TridentZ 64GB (4 x 16GB) DDR4 3200
Video Card(s) ASUS DUAL RX 6700 XT DUAL-RX6700XT-12G
Storage Corsair Force MP500 480GB M.2 & MP510 480GB M.2 - 2 x WD_BLACK 1TB SN850X NVMe 1TB
Display(s) ASUS ROG Strix 34” XG349C 180Hz 1440p + Asus ROG 27" MG278Q 144Hz WQHD 1440p
Case Corsair Obsidian Series 450D Gaming Case
Audio Device(s) SteelSeries 5Hv2 w/ Sound Blaster Z SE
Power Supply Corsair RM750x Power Supply
Mouse Razer Death-Adder + Viper 8K HZ Ambidextrous Gaming Mouse - Ergonomic Left Hand Edition
Keyboard Logitech G910 Orion Spectrum RGB Gaming Keyboard
Software Windows 11 Pro - 64-Bit Edition
Benchmark Scores I'm the Doctor, Doctor Who. The Definition of Gaming is PC Gaming...
In each and every post I've mentioned both have architectural improvements to be had...

And the node advantage doesn't mean much here. Even if you potato your way onto a lower node, there are still inherent efficiency gains to be had. If there is a sponge with more left to squeeze out of it, it seems like that is Nvidia, considering the node shrink on top of a new arch. AMD is also adding ray tracing cores; if their addition is anything like Nvidia's, it will be lucky to reach 2080 Ti speeds.

As I said, I'll bet it lands between a 2080 Ti and the Ampere flagship. I believe it will fall at least 10% short of Ampere on performance alone (no clue on RTX performance, likely the same idea... faster than a 2080 Ti, slower than Ampere) and slightly worse in performance-per-watt overall. Pricing on these parts, from both parties, will be paramount in choosing the right card... and AMD will surely be a worthy competitor and offer viable options.
I agree, there isn't really a node advantage per se, but I only said there was because of this post: "AMD is going to have a tough time beating Ampere on either front... one has arch + node; the other, just arch."
I assume you meant that Nvidia would have an arch+node advantage, while the other (AMD) has just arch? Because AMD is already on 7nm, whereas Nvidia currently is not. If that is what you mean, then you are saying that Nvidia has a node advantage over AMD. Which is why I said AMD has more 7nm experience, which would render Nvidia's so-called node advantage obsolete.

Correct me if I am wrong, of course.

Well that's just plain wrong. RDNA 2 is still RDNA, just fully implemented RDNA (and likely including various tweaks, optimizations and improvements), while RDNA (1) is RDNA with some features omitted and some minor parts of GCN kept to ensure it could launch in a reasonable time. That of course doesn't mean RDNA 2 can't or won't be a major update - at this point I think it will be, given how AMD talks about it and the performance of the new Xbox shown off today - but it is still very much related to RDNA (1).


Experience with a node doesn't matter much unless it's a bleeding-edge node. 7nm isn't that any more; it's quite mature. TSMC can guide Nvidia through any issues they might have; in fact, they have engineering teams specifically for this.

Definitely don't mean to throw any stones, just pointing out what looked like a consistent slant in what you were saying. I entirely agree that AMD will have a hard time beating Ampere, but I do think there's reason to expect them to get pretty close this time around, and I don't think launching a true flagship-level GPU will be an issue for them this go-around, even if it would then be >=60% faster than the upper-midrange "flagship" of the previous generation. We might see parity, or we might see them a bit behind and cheaper; and though I think the chance of them being outright ahead is by far the slimmest of the three, it is looking more plausible than at any point since 2015 (which on the other hand isn't saying much). It'll nonetheless be a very exciting release cycle (especially with new consoles bringing a lot of goodness to cross-platform games).
Fully Agree.
We will definitely get more concrete details about both RDNA2 & Ampere. It's going to be a very interesting 2020. Hopefully COVID-19 doesn't slow down the AMD and Nvidia GPU launches, because many are itching for new GPUs. :D
 
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I agree, there isn't really a node advantage per se, but I only said there was because of this post: "AMD is going to have a tough time beating Ampere on either front... one has arch + node; the other, just arch."
I assume you meant that Nvidia would have an arch+node advantage, while the other (AMD) has just arch? Because AMD is already on 7nm, whereas Nvidia currently is not. If that is what you mean, then you are saying that Nvidia has a node advantage over AMD. Which is why I said AMD has more 7nm experience, which would render Nvidia's so-called node advantage obsolete.

Correct me if I am wrong, of course.


Fully Agree.
We will definitely get more concrete details about both RDNA2 & Ampere. It's going to be a very interesting 2020. Hopefully COVID-19 doesn't slow down the AMD and Nvidia GPU launches, because many are itching for new GPUs. :D
Not an advantage over AMD, but an efficiency gain over their own previous GPU.
 
Joined
Jun 28, 2018
Messages
299 (0.13/day)
I agree, there isn't really a node advantage per se, but I only said there was because of this post: "AMD is going to have a tough time beating Ampere on either front... one has arch + node; the other, just arch."
I assume you meant that Nvidia would have an arch+node advantage, while the other (AMD) has just arch? Because AMD is already on 7nm, whereas Nvidia currently is not. If that is what you mean, then you are saying that Nvidia has a node advantage over AMD. Which is why I said AMD has more 7nm experience, which would render Nvidia's so-called node advantage obsolete.

Correct me if I am wrong, of course.

He is saying that AMD has already played the 7nm card; from here, they will have to rely mainly on their architecture, while Nvidia, in addition to the inherent gains of a new architecture, will still gain something more from the 12nm -> 7nm migration.
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
...which would render Nvidia's so-called node advantage obsolete.
It doesn't, though. I've said it directly... used a sponge analogy, lol... we'll just have to agree to disagree.

He is saying that AMD has already played the 7nm card; from here, they will have to rely mainly on their architecture, while Nvidia, in addition to the inherent gains of a new architecture, will still gain something more from the 12nm -> 7nm migration.
This! Maybe after seeing it five times that point will land. :p
 
Last edited:
Joined
Mar 23, 2005
Messages
4,082 (0.57/day)
Location
Ancient Greece, Acropolis (Time Lord)
System Name RiseZEN Gaming PC
Processor AMD Ryzen 7 5800X @ Auto
Motherboard Asus ROG Strix X570-E Gaming ATX Motherboard
Cooling Corsair H115i Elite Capellix AIO, 280mm Radiator, Dual RGB 140mm ML Series PWM Fans
Memory G.Skill TridentZ 64GB (4 x 16GB) DDR4 3200
Video Card(s) ASUS DUAL RX 6700 XT DUAL-RX6700XT-12G
Storage Corsair Force MP500 480GB M.2 & MP510 480GB M.2 - 2 x WD_BLACK 1TB SN850X NVMe 1TB
Display(s) ASUS ROG Strix 34” XG349C 180Hz 1440p + Asus ROG 27" MG278Q 144Hz WQHD 1440p
Case Corsair Obsidian Series 450D Gaming Case
Audio Device(s) SteelSeries 5Hv2 w/ Sound Blaster Z SE
Power Supply Corsair RM750x Power Supply
Mouse Razer Death-Adder + Viper 8K HZ Ambidextrous Gaming Mouse - Ergonomic Left Hand Edition
Keyboard Logitech G910 Orion Spectrum RGB Gaming Keyboard
Software Windows 11 Pro - 64-Bit Edition
Benchmark Scores I'm the Doctor, Doctor Who. The Definition of Gaming is PC Gaming...
It doesn't, though. I've said it directly... used a sponge analogy, lol... we'll just have to agree to disagree.

This! Maybe after seeing it five times that point will land. :p

Nvidia waiting for AMD's Big Navi, lol...
 
Joined
Jun 10, 2014
Messages
2,978 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Nvidia have nothing to worry about unless their next-gen somehow gets delayed.
Nvidia might be holding off finalizing the timing, pricing and segmentation until they know more, but if so this is to position themselves, not due to concern. When rumors are pointing in every direction, it's usually a sign that the rumors are all speculation, and Nvidia probably don't know quite what to expect.

But I don't think Nvidia's next-gen is imminent. Everything seems to point to it being months away.
 
Joined
Mar 23, 2005
Messages
4,082 (0.57/day)
Location
Ancient Greece, Acropolis (Time Lord)
System Name RiseZEN Gaming PC
Processor AMD Ryzen 7 5800X @ Auto
Motherboard Asus ROG Strix X570-E Gaming ATX Motherboard
Cooling Corsair H115i Elite Capellix AIO, 280mm Radiator, Dual RGB 140mm ML Series PWM Fans
Memory G.Skill TridentZ 64GB (4 x 16GB) DDR4 3200
Video Card(s) ASUS DUAL RX 6700 XT DUAL-RX6700XT-12G
Storage Corsair Force MP500 480GB M.2 & MP510 480GB M.2 - 2 x WD_BLACK 1TB SN850X NVMe 1TB
Display(s) ASUS ROG Strix 34” XG349C 180Hz 1440p + Asus ROG 27" MG278Q 144Hz WQHD 1440p
Case Corsair Obsidian Series 450D Gaming Case
Audio Device(s) SteelSeries 5Hv2 w/ Sound Blaster Z SE
Power Supply Corsair RM750x Power Supply
Mouse Razer Death-Adder + Viper 8K HZ Ambidextrous Gaming Mouse - Ergonomic Left Hand Edition
Keyboard Logitech G910 Orion Spectrum RGB Gaming Keyboard
Software Windows 11 Pro - 64-Bit Edition
Benchmark Scores I'm the Doctor, Doctor Who. The Definition of Gaming is PC Gaming...
Nvidia have nothing to worry about unless their next-gen somehow gets delayed.
Nvidia might be holding off finalizing the timing, pricing and segmentation until they know more, but if so this is to position themselves, not due to concern. When rumors are pointing in every direction, it's usually a sign that the rumors are all speculation, and Nvidia probably don't know quite what to expect.

But I don't think Nvidia's next-gen is imminent. Everything seems to point to it being months away.
I agree, which is why I posted that picture. Nvidia is waiting for AMD's Big Navi, because they know it's going to be very fast. What they do not know is how fast, and nobody knows that but AMD at the moment, regardless of rumors and speculation. I think AMD will release its RDNA2 GPUs first and set the price tone. If they overprice as they've done in the past, they will probably get burned by Nvidia's Ampere pricing, which is why it's important for AMD not to overprice. The same goes for Nvidia; they should not overprice given what the competition has pending.
2020 will be a great year for new GPUs. Can't wait. :toast:
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Glad you jumped off the 'because they are worried' boat!

Waiting to finalize clocks/specs is quite normal. But it's not like they are sitting there ready to go, waiting on AMD to release. They, naturally, are not ready.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.68/day)
Location
Ex-usa | slava the trolls
Anyway, just to get to 2080 Ti FE speeds from their current 5700 XT flagship is a 46% jump. To go another 25-40% faster would be a 71-86% increase. Have we ever seen that in the history of GPUs? A 71% increase from previous-gen flagship to current-gen flagship?

Check Cypress (334 sq.mm) and Juniper (166 sq.mm). Juniper is exactly 50% the performance of Cypress on N40.

These are the same generation, the same micro-architecture, just scaled up and down.

The RX 5700 XT is heavily overvolted out of the box, pushed well beyond its sweet spot. It's not an upper-midrange but a lower-midrange card.
Its real power consumption should be no more than 180-190W, and even then that's too much.

Navi 21 at 505 sq.mm should have 100% more shaders and 50% higher power consumption, along with the 50% higher performance-per-watt, too.

Anything less than 80-100% higher performance than Navi 10 would be a major fail.
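
Taking those numbers at face value, the implied ceiling works out as follows (a sketch; the ~190W baseline is the estimate above, and real-world scaling losses would pull the 2.25x result down toward the 80-100% target):

```python
# What "+100% shaders, +50% power, +50% perf/W" implies at face value.
# Baseline: the ~190 W "real" 5700 XT draw estimated above.

base_power = 190       # W, the post's estimate for a sensibly-volted 5700 XT
power_scale = 1.5      # "50% higher power consumption"
ppw_scale = 1.5        # RDNA2's claimed +50% performance-per-watt

navi21_power = base_power * power_scale    # ~285 W
perf_multiplier = power_scale * ppw_scale  # 2.25x = +125%, a no-loss ceiling

print(f"Navi 21 board power: ~{navi21_power:.0f} W")
print(f"implied performance ceiling: {perf_multiplier:.2f}x a 5700 XT "
      f"(+{perf_multiplier - 1:.0%})")
```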

And where are your sources that say Nvidia is on track for delivering next-gen cards?
Because we hear exactly nothing and see no signs of anything in physical existence from them.
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
And where are your sources that say Nvidia is on track for delivering next-gen cards?
Because we hear exactly nothing and see no signs of anything in physical existence from them.
I don't believe I've ever said that...?

Regarding the rest of your post... read on after my post you quoted. People have said that and I've already responded to it. ;)

Anything less than 80-100% higher performance than Navi 10 would be a major fail.
Wow... 80%+ or bust, eh? That's the most optimistic take I've heard.
 