
NVIDIA GeForce RTX 3090 Founders Edition Potentially Pictured: 3-slot Behemoth!

[Attached image: the alleged RTX 3090 Founders Edition pictured alongside an RTX 2080]


Looks like this thing runs hot enough to bubble the fan hub cover.
 
I've been singing the same tune for several weeks... if this power rumor is true (a 3-slot monster suggests it is coming true), AMD doesn't stand a chance of competing with the non-Titan flagship. Unless this silicon is completely borked, how does a new architecture and a die shrink at 300W+ compare against a new arch with a tweaked process? Remember, the 5700 XT was 45% slower than a 2080 Ti. If Ampere is 50% faster, then AMD needs to be ~100% faster to compete. We haven't seen a card come close to that, from any camp, ever. That said, maybe its RTRT performance is where the big increase is... who knows.
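Spelling out that napkin math as a quick Python sketch (every figure here is just the rumor or benchmark ratio mentioned above, nothing measured):

```python
# Napkin math for the rumored gap; all inputs are rumors/estimates.
xt_5700 = 1.00               # 5700 XT as the baseline
ti_2080 = xt_5700 * 1.45     # 2080 Ti ~45% faster than the 5700 XT
ampere  = ti_2080 * 1.50     # rumored flagship Ampere: +50% over the 2080 Ti

needed = (ampere / xt_5700 - 1) * 100
print(f"AMD would need roughly +{needed:.0f}% over the 5700 XT")  # ~+118%
```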

So long as AMD's card lands between them and is notably cheaper, it will be a win for everyone. But I just don't think the RDNA2 flagship will be within 15%.
Hey, for once we disagree on something! Not necessarily in your conclusion (I also find it unlikely that AMD would be able to compete with a 350-400W Nvidia card, assuming +~10% IPC and a reasonable efficiency boost from the new process node) but mostly your reasoning. Firstly, I find it unlikely that AMD will compete at this level mainly because I find it unlikely that they'll make a GPU this (ridiculously) power hungry. (As you said, assuming the power draw rumors are true, obviously.)

Beyond that though, you're comparing a 215-225W GPU against a 275-300W GPU and extrapolating from that as if both were equal, which is obviously not true. The 5700 XT was ~45% slower but also used 28% less power. It's also worth noting that TPU measures power at 1080p, where the performance delta is just 35%, and that the 2080 Ti is one of the most efficient renditions of Turing while the 5700 XT is the least efficient rendition of RDNA by quite a bit.

AMD has also promoted "up to 50%" improved perf/W for RDNA 2, which one should obviously take with a heaping pile of salt (does that, for example, mean up to 50% from the 5700 XT, or from any RDNA 1 GPU?), but it must also be correct in some sense lest they be subjected to yet another shareholder lawsuit. So IMO it's reasonable to expect a notable perf/W jump from RDNA overall even if it's just an improved arch on a tweaked node. Will it be on par with Ampere? I don't quite think they'll be there, but I think it will be closer than we're used to seeing. Which would/could also explain Nvidia deciding to make a power move of a GPU like this to cement themselves as having the most powerful GPU despite much tougher competition, as AMD would be very unlikely to gamble on a >300W GPU given their history in GPUs for the past half decade or so.
 
The 5700 XT is a 225W card in reference form. A 2080 Ti is 260W in FE form (not reference, which is 250W)... a 17% difference. Sweet spot/efficiency versus over-extending is irrelevant... it is what it is for each. In fact, I would think that supports my thoughts more, no? If AMD is already reaching and over-extending to land 45% below the 2080 Ti (I get it, it was never intended to compete there; RDNA2 will be a true high end) and NV isn't... so what if they try the same thing with Big Navi, running it out of the sweet spot again to get closer? I don't think many will have an issue with a 250W Big Navi... but it had better be within 20% of flagship Ampere. I think few doubt it will be more efficient, but it isn't catching up to within 10-15%, if I had to guess.

AMD has a hell of a leap to catch up and be competitive. I think they'll do it... but they'll be on the outside looking in by at least 10-15%. It will be slower, cheaper, and use less power... AMD's motto on the GPU side.
 
It's a big card, but mostly when compared to reference-size cards. If you put it next to AIB 2080 Ti cards it probably won't look so huge. Also, just because the cooler is that big doesn't necessarily mean it is just the minimum reasonable cooling, as often found on Nvidia reference cards. It is possible that Nvidia wanted an AIB-competitive factory cooler this time around instead of the barely sufficient reference cooler on the 2080 Ti.
 
I am really curious to see what Big Navi has to offer, I am sure we will start to see leaks from the AMD camp after the 1st of September.
 
Well, as a temp setup while building hard tubing in my primary PC, I put my Asus ROG Strix OC 1080 Ti in a T3500. I had to tweak the closing mechanism for the cards a little, but it worked like a charm :)
That can be a problem too. How did you fix it?
 
That beast will not be going into my Dell T3500. It literally won't fit in physically.
Actually from the image I'm pretty sure it would. It looks to be ~40-50mm longer than the pictured 2080 and 10-15mm taller, so it should fit in a T3500 physically (just), but you will have to remove the left HDD (if fitted) and take out the blanking plate from the HDD bay (it's removable for fitting expansion cards).

Having said that, you probably wouldn't want to, as a 5700 XT already bottlenecks like mad in a T3500 (even with a 3.8GHz 6c/12t CPU), so this GPU would be choked to death.

The GTX 480 might have a successor with respect to power drawn and heat output.
Isn't that exactly what the GTX 580 was? It beat the GTX 480 in both... hell, some AIB 580s were sucking over 100W more than the 480, lol
 
Actually from the image I'm pretty sure it would.
It will not, unless I modify the case.
Having said that, you probably wouldn't want to, as a 5700 XT already bottlenecks like mad in a T3500 (even with a 3.8GHz 6c/12t CPU)
I currently have an RTX2080 that is only CPU bottlenecked in some games. It's not severe. However...
so this GPU would be choked to death.
...this is correct, which is why I will be building a new system. I only started using the T3500 as a daily driver on a challenge and then was impressed enough by its performance that I just kept it. It is starting to show its age these days and I'm jonesing for a Threadripper... with 32GB of DDR4-3800. I'm likely going to put an RTX 30xx in that system.
 
The 5700 XT is a 225W card in reference form. A 2080 Ti is 260W in FE form (not reference, which is 250W)... a 17% difference. Sweet spot/efficiency versus over-extending is irrelevant... it is what it is for each. In fact, I would think that supports my thoughts more, no? If AMD is already reaching and over-extending to land 45% below the 2080 Ti (I get it, it was never intended to compete there; RDNA2 will be a true high end) and NV isn't... so what if they try the same thing with Big Navi, running it out of the sweet spot again to get closer? I don't think many will have an issue with a 250W Big Navi... but it had better be within 20% of flagship Ampere. I think few doubt it will be more efficient, but it isn't catching up to within 10-15%, if I had to guess.

AMD has a hell of a leap to catch up and be competitive. I think they'll do it... but they'll be on the outside looking in by at least 10-15%. It will be slower, cheaper, and use less power... AMD's motto on the GPU side.
Average gaming power vs. average gaming power in TPU's benchmarks, they are 219W vs. 273W, which makes the 5700 XT consume 80% of the 2080 Ti's power, or the 2080Ti consume 125% the power of the 5700 XT. I guess I should have looked up the numbers more thoroughly (saying 215 vs. 275 did skew my percentages a bit), but overall, your 17% number is inaccurate. Comparing TDPs between manufacturers isn't a trustworthy metric due to the numbers being defined differently.

As for the 5700 being stretched in efficiency somehow proving they're further behind: obviously not, which you yourself mention. The 5700 XT is a comparatively small die, which AMD chose to push the clocks of to make it compete at a higher level than it was likely designed for originally. The 2080 Ti on the other hand is a classic wide-and-(relatively-)slow big die GPU, which gives it plenty of OC headroom if the cooling is there, but also makes it operate in a more efficient DVFS range. AMD could in other words compete better simply by building a wider chip and clocking it lower. Given just how much more efficient the 5700 non-XT is (166W average gaming power! With the 5700 XT just winning by ~14%!) we know even RDNA 1 can get a lot more efficient than the 5700 XT (not to mention the 5600 XT, of course, which beats any Nvidia GPU out there for perf/W). And the 2080 Ti still can't get 2x the performance of the 5700 non-XT (+54-76% depending on resolution). Which tells us that AMD could in theory build a slightly downclocked double 5700 non-XT and clean the 2080 Ti's clock at the same power, as long as the memory subsystem keeps up. Of course they never built such a GPU, and it's entirely possible there are architectural bottlenecks that would have prevented this scaling from working out, but the efficiency of the architecture and node is there. And RDNA 2 GPUs promise to improve that both architecturally and from the node. We also know that they can clock to >2.1GHz even in a console (which means limited power delivery and cooling), so there's definitely improvements to be found in RDNA 2.
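To put rough numbers on that doubling thought experiment, here's a sketch (it assumes ideal 2x scaling, which, as said, architectural bottlenecks might well prevent; the wattages and performance ratio are the TPU averages quoted above):

```python
# "Doubled 5700 non-XT" thought experiment, ideal-scaling assumption.
p_5700     = 166            # W, average gaming power, 5700 non-XT
p_2080ti   = 273            # W, average gaming power, 2080 Ti
ti_vs_5700 = 1.76           # 2080 Ti at best +76% over the 5700 non-XT

doubled_power = 2 * p_5700  # 332 W before any downclocking
doubled_perf  = 2.00        # two 5700 non-XTs, perfect scaling

print(f"{doubled_power} W, ~{doubled_perf / ti_vs_5700:.2f}x the 2080 Ti")
print(f"power cut needed to match the Ti: {1 - p_2080ti / doubled_power:.0%}")
# -> 332 W, ~1.14x the 2080 Ti; ~18% power cut needed, and on a DVFS curve
#    an 18% power cut costs far less than 18% performance.
```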

The point being: if AMD is finally going to compete in the high end again, they aren't likely to go "hey, let's clock the snot out of this relatively small GPU" once again, but rather design as wide a GPU as is reasonable within their cost/yield/balancing/marketability constraints. Then they might go higher on clocks if it looks like Nvidia are pulling out all the stops, but I would be downright shocked if the biggest big Navi die had less than 80 CUs (all might not be active for the highest consumer SKU of course). They might still end up releasing a >350W clocked-to-the-rafters DIY lava pool kit, but if so that would be a reactive move rather than one due to design constraints (read: a much smaller die/core count than the competition) as in previous generations (RX 590, Vega 64, VII, 5700 XT).

I don't think anyone will mind a 250W Big Navi being more than 20% behind Ampere if said Ampere is 350W or more. On the other hand, if it was more than 20% behind Ampere at the same power? That would be a mess indeed - but it's looking highly unlikely at this point. If Nvidia decided to go bonkers with power for their high-end card, that's on them.
 
Average gaming power vs. average gaming power in TPU's benchmarks, they are 219W vs. 273W, which makes the 5700 XT consume 80% of the 2080 Ti's power, or the 2080Ti consume 125% the power of the 5700 XT. I guess I should have looked up the numbers more thoroughly (saying 215 vs. 275 did skew my percentages a bit), but overall, your 17% number is inaccurate. Comparing TDPs between manufacturers isn't a trustworthy metric due to the numbers being defined differently.

As for the 5700 being stretched in efficiency somehow proving they're further behind: obviously not, which you yourself mention. The 5700 XT is a comparatively small die, which AMD chose to push the clocks of to make it compete at a higher level than it was likely designed for originally. The 2080 Ti on the other hand is a classic wide-and-(relatively-)slow big die GPU, which gives it plenty of OC headroom if the cooling is there, but also makes it operate in a more efficient DVFS range. AMD could in other words compete better simply by building a wider chip and clocking it lower. Given just how much more efficient the 5700 non-XT is (166W average gaming power! With the 5700 XT just winning by ~14%!) we know even RDNA 1 can get a lot more efficient than the 5700 XT (not to mention the 5600 XT, of course, which beats any Nvidia GPU out there for perf/W). And the 2080 Ti still can't get 2x the performance of the 5700 non-XT (+54-76% depending on resolution). Which tells us that AMD could in theory build a slightly downclocked double 5700 non-XT and clean the 2080 Ti's clock at the same power, as long as the memory subsystem keeps up. Of course they never built such a GPU, and it's entirely possible there are architectural bottlenecks that would have prevented this scaling from working out, but the efficiency of the architecture and node is there. And RDNA 2 GPUs promise to improve that both architecturally and from the node. We also know that they can clock to >2.1GHz even in a console (which means limited power delivery and cooling), so there's definitely improvements to be found in RDNA 2.

The point being: if AMD is finally going to compete in the high end again, they aren't likely to go "hey, let's clock the snot out of this relatively small GPU" once again, but rather design as wide a GPU as is reasonable within their cost/yield/balancing/marketability constraints. Then they might go higher on clocks if it looks like Nvidia are pulling out all the stops, but I would be downright shocked if the biggest big Navi die had less than 80 CUs (all might not be active for the highest consumer SKU of course). They might still end up releasing a >350W clocked-to-the-rafters DIY lava pool kit, but if so that would be a reactive move rather than one due to design constraints (read: a much smaller die/core count than the competition) as in previous generations (RX 590, Vega 64, VII, 5700 XT).

I don't think anyone will mind a 250W Big Navi being more than 20% behind Ampere if said Ampere is 350W or more. On the other hand, if it was more than 20% behind Ampere at the same power? That would be a mess indeed - but it's looking highly unlikely at this point. If Nvidia decided to go bonkers with power for their high-end card, that's on them.
As far as wattages, I simply used the nameplate values for ease of scope and context.

You're going a lot further down the wormhole than I ever want to go. Time will tell... but I don't see Big Navi within 15%. That said, we'll all take that as a win, I'm sure (depending on price).
 
RTX 2080 FE is:
10.5" long
4.6" high
1.4" wide

My 980Ti AMP! Omega (and the Extreme version) is:
12.9" long
5.25" high
??? wide - can't find specific width dimensions listed, but it takes up just shy of 3 slots (by "just shy" I mean about 1/4", if that)

My guess is the pictured (supposed) 3090 is similar in size to my 980 Ti AMP! Omega card.

you don't have to give up on PCs, guys, you just have to go without new video cards (for a while) :) just don't buy a 10900K for $600 and you are OK :)
I've been running a 780 Ti for 6+ years now and have no problems with any games :)

The only problem you run into if you keep a card for a very long period of time is that they will eventually drop support. I had a pair of GTX 280s in SLI for about 3.5 years. About a year after I stopped using them I gifted one to my younger brother, and he used it for about 3 years. That put the card at just over 7 years past its release date (originally released June 2008). Nvidia stopped driver support for that series of cards in 2014, if I remember correctly.

He used that card until the release of Dying Light (which was 2015), but the driver support was gone for his 280 and Dying Light literally wouldn't work because the driver was too old. The game would tell him his card/driver was not supported.

My point is, sure, you can use a card for a good amount of time, but unfortunately it will stop getting support and new games won't run.
 
I thought technical progress meant that cards should not grow in size, but at least stay the same. :kookoo:
This does look like it would need a support bracket inside the case, because I can already imagine cards breaking the PCIe slots/their own connectors...
 
Are you focking serious? The 1060 is THREE years older than the 16xx series. In the old days you could've gotten 100% more performance for the same price over a period like that.

In my opinion, it would have been possible for Nvidia to create a successor to the GTX 1060 that is close to 100% faster. If you look at the RTX cards, which are supposed to be premium Nvidia cards, Nvidia invested sideways, and heavily, into RT and DLSS. I believe significant die space where they could have crammed in more powerful hardware to spruce up performance was instead allocated to the RT and Tensor cores. Just comparing the transistor count between the GTX 1660 and the RTX 2060, there is a whopping 4.2 billion difference. The latter has more CUDA cores, but I still feel the extra CUDA cores do not account for the bulk of the difference.

With the premium series capped in performance, Nvidia will need to artificially gimp their GTX series to avoid cannibalizing sales of the RTX series. The same should be expected with the upcoming 3xxx series, as I am sure Nvidia will double down on the likes of RT and DLSS. In addition, I am not sure what sorts of bespoke tech Nvidia will introduce at the hardware level, since they tend to do this with every new generation.

I thought technical progress meant that cards should not grow in size, but at least stay the same. :kookoo:
This does look like it would need a support bracket inside the case, because I can already imagine cards breaking the PCIe slots/their own connectors...

I don't agree. While I am not a fan of giant graphics cards/coolers, the reality is that with every few passing generations we observe a jump in size. When I started on my first PC, the graphics card I used relied on passive cooling with a small heatsink. Then active cooling started creeping in after a few years. The active coolers grew in size over the years but remained single-slot. Then 2-slot coolers with 2x fans appeared. Fast forward to the last 3 to 4 years, and it is not uncommon to see coolers with 3x fans, taking up 3 slots, and also taller than the graphics card itself. As technology improves, the graphics card makers get more aggressive with adding hardware and features, pushing the boundaries and also power consumption.
 

Looks like this thing runs hot enough to bubble the fan hub cover.

I think that's some leftover from an old Asus AREZ sticker under there...

Once more, all I can say is... credibility... LOW


Is that Jerry? Lol. I can sort of hear his soothing voice :twitch:

As far as wattages, I simply used the nameplate values for ease of scope and context.

You're going a lot further down the wormhole than I ever want to go. Time will tell... but I don't see Big Navi within 15%. That said, we'll all take that as a win, I'm sure (depending on price).

Big Navi might gain 30, best case 40%, over RDNA1. Best case. Or AMD has gone similarly mental, this whole 3090 BS is true, and they both sport 400+W cards that I won't ever buy :p In that case I'm staying far away from any res higher than 1440p for the foreseeable future and will keep rolling with sensible pieces of kit... but then I might do that anyway.

The 5700 XT is a 225W card in reference form. A 2080 Ti is 260W in FE form (not reference, which is 250W)... a 17% difference. Sweet spot/efficiency versus over-extending is irrelevant... it is what it is for each. In fact, I would think that supports my thoughts more, no? If AMD is already reaching and over-extending to land 45% below the 2080 Ti (I get it, it was never intended to compete there; RDNA2 will be a true high end) and NV isn't... so what if they try the same thing with Big Navi, running it out of the sweet spot again to get closer? I don't think many will have an issue with a 250W Big Navi... but it had better be within 20% of flagship Ampere. I think few doubt it will be more efficient, but it isn't catching up to within 10-15%, if I had to guess.

AMD has a hell of a leap to catch up and be competitive. I think they'll do it... but they'll be on the outside looking in by at least 10-15%. It will be slower, cheaper, and use less power... AMD's motto on the GPU side.

Perhaps the far more interesting question is what AMD is going to offer across the stack below their top-end RDNA part. Because it was AMD itself that once told us that when they did RT, it would be from the midrange on up. Where is it? ... It's starting to smell a lot like late to the party again.
 
I am really curious to see what Big Navi has to offer, I am sure we will start to see leaks from the AMD camp after the 1st of September.

Well, what we can reasonably expect from AMD is:
1) 505mm² (a rumor, but from a source with a good track record) and 80 CUs (over the 5700 XT's 40 CUs), which all sounds reasonable
2) The PS5 being able to push its GPU to 2.1GHz (with some power consumption reservations)
3) RDNA2 should be a bit faster, not slower, than RDNA1

Optimistically, with a next-gen arch, an improved fab node, and faster RAM, twice the 5700 XT could be about 100% faster.
Taking the 2080 Ti as being 45% faster than the 5700 XT, we get:

RDNA2 505mm² part with 80 CUs = 2/1.45 = ~38% faster than the 2080 Ti, or somewhat lower (it would, of course, vary drastically between games, but note how optimizing for RDNA2 becomes unavoidable given AMD's dominance in the console market)
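Spelling that arithmetic out as a quick sketch (optimistic by construction, as noted; the 45% figure is the 2080 Ti vs. 5700 XT gap cited earlier in the thread):

```python
# Back-of-the-envelope for the 80 CU estimate above.
xt_5700  = 1.00            # 5700 XT as the baseline
ti_2080  = 1.45            # 2080 Ti taken as 45% faster than the 5700 XT
big_navi = 2.00 * xt_5700  # optimistic ideal: a doubled 5700 XT

print(f"~{(big_navi / ti_2080 - 1) * 100:.0f}% faster than the 2080 Ti")  # ~38%
```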
 
Why Asus? It's not the best when it comes to coolers. MSI has been hella fine for several years; Asus charges an insane brand premium for its decent cards.

It's not about ASUS as such, it's about ASUS choosing to put a quality cooler on the 1660 Super to impress Intel and become their business partner for mini PCs.
They delivered a good card in a package with a best-ever cooling system, and that does not happen every day.
 
Big Navi might gain 30, best case 40%, over RDNA1.
Oh... I fully believe Big Navi will beat the 2080 Ti... and that is AT LEAST 45%... I think it will land between the 2080 Ti and the 3090... I just hope it is closer to the latter, not the former.

I also believe that if Big Navi does that, it will be at least a 225W GPU... more likely 250W.
 
Big Navi might gain 30, best case 40%, over RDNA1. Best case. Or AMD has gone similarly mental, this whole 3090 BS is true, and they both sport 400+W cards that I won't ever buy :p In that case I'm staying far away from any res higher than 1440p for the foreseeable future and will keep rolling with sensible pieces of kit... but then I might do that anyway.
30-40% absolute performance, perf/W, or something else? 30-40% increased absolute performance could theoretically be done just by scaling up RDNA 1 to flagship power draw levels with a wider die, so that seems like a too low bar IMO. 30-40% increased perf/W could make for a potent Ampere competitor - if going from the (least efficient rendition of RDNA1, the) 5700 XT, that would mean +30% performance at ~225W (slightly lower at stock according to TPU's numbers, but let's go by what it says on the tin for now). For the sake of simplicity, let's assume perf/W is flat across the RDNA 2 range - it won't be, but it's not a crazy assumption either - which then puts a 275W RDNA 2 GPU at ~159% the performance of the 5700 XT, matching or beating the 2080 Ti even at 4k where it wins by the highest margin (35/46% for 1080p/1440p), which is admittedly not a high bar in 2020, or a 300W RDNA 2 GPU at 173% of the 5700 XT, soundly beating the 2080Ti overall. That is going by 30% increased overall/average perf/W though, which for me is the minimum reasonable expectation when AMD has said "up to 50%". I'm by no means expecting +50% perf/W overall based on that statement, obviously, but 30% overall seems reasonable based on that.
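For anyone who wants that scaling spelled out (same simplifying assumptions as above: flat perf/W across the range and +30% perf/W over the 5700 XT):

```python
# The scaling math from the paragraph above. Assumes flat perf/W across
# the RDNA 2 range (a simplification) and +30% perf/W over the 5700 XT.
base_power = 225      # W, 5700 XT nameplate rating
perfw_gain = 1.30     # assumed overall RDNA 2 perf/W uplift

for watts in (275, 300):
    rel_perf = watts / base_power * perfw_gain * 100
    print(f"{watts} W RDNA 2 card -> ~{rel_perf:.0f}% of 5700 XT performance")
# 275 W -> ~159%, 300 W -> ~173%
```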

Oh..I fully believe big navi will beat the 2080Ti... and that is AT LEAST 45%... I think it will land between the 2080Ti and 3090... I just hope it is closer the latter, not the former.

I also believe if big navi does that, it will be at least a 225W GPU... more likely 250W.
If AMD is going back to competing for the GPU crown, wouldn't the safe assumption be that Big Navi is in the 275-300W range? That's where the flagships tend to live, after all. It would be exceptionally weird for them to aim for flagship performance yet limit themselves to upper midrange power draw levels.
 
30-40% absolute performance, perf/W, or something else? 30-40% increased absolute performance could theoretically be done just by scaling up RDNA 1 to flagship power draw levels with a wider die, so that seems like a too low bar IMO. 30-40% increased perf/W could make for a potent Ampere competitor - if going from the (least efficient rendition of RDNA1, the) 5700 XT, that would mean +30% performance at ~225W (slightly lower at stock according to TPU's numbers, but let's go by what it says on the tin for now). For the sake of simplicity, let's assume perf/W is flat across the RDNA 2 range - it won't be, but it's not a crazy assumption either - which then puts a 275W RDNA 2 GPU at ~159% the performance of the 5700 XT, matching or beating the 2080 Ti even at 4k where it wins by the highest margin (35/46% for 1080p/1440p), which is admittedly not a high bar in 2020, or a 300W RDNA 2 GPU at 173% of the 5700 XT, soundly beating the 2080Ti overall. That is going by 30% increased overall/average perf/W though, which for me is the minimum reasonable expectation when AMD has said "up to 50%". I'm by no means expecting +50% perf/W overall based on that statement, obviously, but 30% overall seems reasonable based on that.


If AMD is going back to competing for the GPU crown, wouldn't the safe assumption be that Big Navi is in the 275-300W range? That's where the flagships tend to live, after all. It would be exceptionally weird for them to aim for flagship performance yet limit themselves to upper midrange power draw levels.

+30-40% perf compared to the 5700 XT, to clarify. Maybe on a good day I'd be optimistic enough to say +50%.

Any more would have me very surprised.
 
If AMD is going back to competing for the GPU crown, wouldn't the safe assumption be that Big Navi is in the 275-300W range? That's where the flagships tend to live, after all. It would be exceptionally weird for them to aim for flagship performance yet limit themselves to upper midrange power draw levels.
It depends on who you ask and what you expect out of them and a minor node tweak. I fully expect it to be over-extended to perform closer to NV's cards and run into a similar situation as the 5700 XT did. So yeah, going by nameplate values (again, I'm not playing the "this review said XXX W" game right now), I expect it to be 225-250W in reference form. I'm trying to give them some credit for the arch change and minor node tweak. If Big Navi gets any closer than 15%, I'll expect 250W+ out of it for sure.
 
It depends on who you ask and what you expect out of them and a minor node tweak. I fully expect it to be over-extended to perform closer to NV's cards and run into a similar situation as the 5700 XT did. So yeah, going by nameplate values (again, I'm not playing the "this review said XXX W" game right now), I expect it to be 225-250W in reference form. I'm trying to give them some credit for the arch change and minor node tweak. If Big Navi gets any closer than 15%, I'll expect 250W+ out of it for sure.
Again, I think that is a really weird expectation. The 225W rating of the 5700 XT is on the high side but nothing abnormal for an upper midrange card. For a flagship GPU in 2020 that kind of power draw (if it is at all competitive) would be revolutionary. The 7970 GHz edition was 300W. The 290X was 290W. The 390X was (admittedly a minor tweak of the 290X, and) 275W. The Fury X was 275W. The Vega 64 was 295W. The VII was 295W. You would need to go back to 2010 and the 6970 to find a single-GPU AMD flagship at 250W, and the 5870 in 2009 at 188W. And Nvidia's flagships have consistently been at or above 250W for more than a decade as well. The 5700 XT never made any claim to being or performing on the level of a flagship GPU. AMD's current fastest GPU is an upper midrange offering, is explicitly positioned as such, so expecting their well publicized upcoming flagship offering to be in the same power range seems to entirely disregard the realities of GPU power draw. Higher end = higher performance = more power draw.

+30-40% perf compared to the 5700 XT, to clarify. Maybe on a good day I'd be optimistic enough to say +50%.

Any more would have me very surprised.
That sounds overly pessimistic to me. The 5700 XT was never designed to be anything but upper midrange, and pushed a small die higher than was efficient. As I said above, on paper even RDNA (1) could hit that performance level if scaled up to flagship-level power draws with a matching wide die. AMD is promising significant perf/W gains for RDNA 2, so expecting increases beyond that seems sensible simply from the fact that this time around they'll be designing a die for the high end and not the midrange.
 
Again, I think that is a really weird expectation. The 225W rating of the 5700 XT is on the high side but nothing abnormal for an upper midrange card. For a flagship GPU in 2020 that kind of power draw (if it is at all competitive) would be revolutionary. The 7970 GHz edition was 300W. The 290X was 290W. The 390X was (admittedly a minor tweak of the 290X, and) 275W. The Fury X was 275W. The Vega 64 was 295W. The VII was 295W. You would need to go back to 2010 and the 6970 to find a single-GPU AMD flagship at 250W, and the 5870 in 2009 at 188W. And Nvidia's flagships have consistently been at or above 250W for more than a decade as well. The 5700 XT never made any claim to being or performing on the level of a flagship GPU. AMD's current fastest GPU is an upper midrange offering, is explicitly positioned as such, so expecting their well publicized upcoming flagship offering to be in the same power range seems to entirely disregard the realities of GPU power draw. Higher end = higher performance = more power draw.


That sounds overly pessimistic to me. The 5700 XT was never designed to be anything but upper midrange, and pushed a small die higher than was efficient. As I said above, on paper even RDNA (1) could hit that performance level if scaled up to flagship-level power draws with a matching wide die. AMD is promising significant perf/W gains for RDNA 2, so expecting increases beyond that seems sensible simply from the fact that this time around they'll be designing a die for the high end and not the midrange.

My pessimism has been on the right track more often than not though, when it comes to these predictions.

So far AMD has not shown us a major perf/W jump on anything GCN-based, ever, but now they call it RDNA# and suddenly they can? Please. Tonga was a failure and that is all they wrote. Then came Polaris: more of the same. Now we have RDNA2, and already they've been clocking the 5700 XT out of its comfort zone to get the needed performance. And to top it off, they felt the need to release vague 14Gbps BIOS updates that nobody really understood, during and after launch. You don't do that if you've got a nicely rounded, future-proof product.

I'm not seeing the upside here, and I don't think we can credit AMD with trustworthy communication surrounding their GPU department. It is 90% left to the masses and the remaining 10% is utterly vague until it hits shelves. 'Up to 50%'... that sounds like Intel's 'Up to' Gigahurtz boost and to me it reads 'you're full of shit'.

Do you see Nvidia marketing "up to"? Nope. Not a single time. They give you a base clock and say a boost is not guaranteed... and then we get a slew of GPUs every gen that ALL hit beyond their rated boost speeds. That instills faith. It's just that simple. So far, AMD has not released a single GPU that was free of trickery - either with timed scarcity (and shitty excuses to cover it up; I didn't forget their Vega marketing for a second, it was straight up dishonest in an attempt to feed hype), cherry-picked benches (and a horde of fans echoing benchmarks for games nobody plays), supposed OC potential (Fury X) that never materialized, or supposed huge benefits from HBM (Fury X again; it fell off faster than the GDDR5-driven 980 Ti, which is still relevant with 6GB). The list is virtually endless.

Even in the shitrange they managed to make an oopsie with the 560D. 'Oops'. Wasn't that their core target market? Way to treat your customer base. Of course we both know they don't care at all. Their revenue is in the consoles now. We get whatever falls off the dev train going on there.

Nah, sorry. AMD's GPU division lost the last sliver of faith a few generations back, over here. I don't see how or why they would suddenly provide us with a paradigm shift. So far they're still late with RDNA as they always have been - be it version 1, 2, or 3. They still haven't shown us a speck of RT capability, only tech slides. The GPUs they have out lack in feature set beyond just RT. Etc., etc., ad infinitum. They've relegated themselves to followers, not leaders. There is absolutely no reason to expect them to leap ahead. Even DX12 Ultimate apparently caught them by surprise... hello? Weren't you best friends with MS for doing their Xboxes? Dafuq happened?

On top of that, they still haven't managed to create a decent stock cooler to save their lives, and they still haven't got the AIBs in line like they should. What could possibly go wrong, eh?

//end of AMD roast ;) Sorry for the ninja edits.
 
Unbelievable how one could be regularly posting on a tech-savvy forum, yet be so ignorant.
 