# MSI GeForce RTX 3090 Suprim X



## W1zzard (Nov 26, 2020)

The MSI RTX 3090 Suprim X is the company's new flagship card. It is highly overclocked, to a 1860 MHz rated boost, and ships with a power limit of 420 W. In our review it was the quietest RTX 3090 we've ever tested, quieter than the EVGA FTW3 Ultra and almost whisper-quiet.

*Show full review*


----------



## spnidel (Nov 26, 2020)

474w peak in games
....oooookay then
fermi 2.0


----------



## Vya Domus (Nov 26, 2020)

spnidel said:


> 474w peak in games
> ....oooookay then
> fermi 2.0



It's much, much worse than Fermi actually.


----------



## Lomskij (Nov 26, 2020)

A card like this would definitely benefit from liquid cooling. Its heat output is similar to 2x 1080 Ti cards, and I remember how much airflow such a setup required to keep the case relatively cool.


----------



## spnidel (Nov 26, 2020)

Vya Domus said:


> It's much, much worse than Fermi actually.


that's why it's fermi 2.0


----------



## Xuper (Nov 26, 2020)

80°C? Pretty bad. ASUS RTX 3090 STRIX OC = 68°C


----------



## R0H1T (Nov 26, 2020)

Oh wow, how the mighty have fallen! Don't remember such a massive dip in perf/W with a new uarch+node since ~ well perhaps ever


----------



## Fluffmeister (Nov 26, 2020)

R0H1T said:


> Oh wow, how the mighty have fallen! Don't remember such a massive dip in perf/W with a new uarch+node since ~ well perhaps ever



Well, it's no RX 590 at least.


----------



## Caring1 (Nov 26, 2020)

From reading this review, my take is that the EVGA FTW3 is the better card; even if it's slightly louder, it runs a lot cooler and has a much higher power limit.


----------



## M2B (Nov 26, 2020)

R0H1T said:


> Oh wow, how the mighty have fallen! Don't remember such a massive dip in perf/W with a new uarch+node since ~ well perhaps ever



let me show you the chart that actually matters:



The 3070 is actually 20% more efficient than the most efficient Turing GPU and even the 3090 is 10% more efficient.


----------



## Vya Domus (Nov 26, 2020)

M2B said:


> let me show you the chart that actually matters:



Actually, that's the one that matters the least if you want the most accurate measure of efficiency.

The more frames a GPU renders, the less efficient it gets, because in between frames the energy draw of the chip goes down; the more peaks and troughs there are, the worse the power consumption gets. That happens because, as with any other system, you need to consume energy to bring it from a low power state to a higher power state. The most efficient scenario is one where you maintain a constant load for as long as possible, and that happens at 4K, where one frame takes the longest to render.

Do you not find it odd that practically all GPUs become more power efficient the higher the resolution gets, no matter how old/bad they are? It's because of what I just explained, so if you want to test efficiency you need to look at a workload which generates the largest number of frames per second.
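The poster's argument can be put into a quick numeric sketch (all wattage figures below are invented for illustration, not measurements from the review):

```python
# Toy model of the claim above: for the same number of frames rendered,
# a power trace full of peaks and troughs (many short frames, e.g. 1080p)
# sums to more energy than a steady trace (long 4K frames), because moving
# the chip between power states costs extra energy.
steady = [350.0] * 8                                 # watts, one sample per frame
bursty = [460.0, 260.0, 455.0, 265.0, 470.0, 250.0, 465.0, 255.0]

# With one sample per second, energy in joules is just the sum of watts.
energy_steady = sum(steady)
energy_bursty = sum(bursty)
print(f"steady: {energy_steady:.0f} J, bursty: {energy_bursty:.0f} J")
```

Same frame count, more energy for the bursty trace; that is the effect the poster says inflates measured power in high-frame-rate workloads.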


----------



## R0H1T (Nov 26, 2020)

M2B said:


> let me show you the chart that actually matters


So you're saying the most popular (?) gaming resolution, *1440p*, doesn't matter? Mkay, if you say so 


Fluffmeister said:


> Well, it's no RX 590 at least.


Or the GTX 480?


----------



## Vya Domus (Nov 27, 2020)

R0H1T said:


> So you're saying the most popular (?) gaming resolution or 1440p doesn't matter, mkay if you say so



Get some glasses. I said that the best way to measure efficiency is to make the GPU render as many frames as possible because that's the worst-case scenario, nothing more, nothing less.


----------



## R0H1T (Nov 27, 2020)

You sure you're quoting the right person? If not then I guess my reply would change accordingly


----------



## Vya Domus (Nov 27, 2020)

R0H1T said:


> You sure you're quoting the right person? If not then I guess my reply would change accordingly



Your comment was right after mine so I could only presume it was addressed to me.


----------



## M2B (Nov 27, 2020)

Vya Domus said:


> Do you not find it odd that practically all GPUs become more power efficient the higher the resolution gets no matter how old/bad they are ? It's because of what I just explained so if you want to test efficiency you need to look at a workload which generates the most amount of frames/second.



We're talking about relative efficiency here, not efficiency in absolute terms.
Measuring relative efficiency at 1080p with these GPUs is very dumb, not only because you're going to be CPU-limited in many games, but also because you have a shitton of bandwidth and memory modules that consume a lot of power at low resolutions but don't provide any value.
The memory chips alone consume about 70W on the 3090, if I'm not mistaken.
And even if we ignore the higher resolutions, the 3070 is still the most efficient Nvidia GPU.
I'm not saying Ampere is the biggest upgrade Nvidia has ever done, but it's not as bad as some people think.


----------



## Vya Domus (Nov 27, 2020)

M2B said:


> We're talking about the relative efficiency here, not efficiency in absolute terms.



Then don't draw conclusions such as "the only chart that matters". If you want to prove an architecture is not as bad then the only proper and objective way is to measure things in absolute terms.


----------



## M2B (Nov 27, 2020)

Vya Domus said:


> Then don't draw conclusions such as "the only chart that matters". If you want to prove an architecture is not as bad then the only proper and objective way is to measure things in absolute terms.



I said the chart that actually matters, not the only chart that matters.
Also, the most efficient RDNA2 chip is barely beating the 12nm 1660 Ti at 1080p (in terms of efficiency); based on your magical logic, 1080p in this case is the most accurate way to measure efficiency. How about that?


----------



## Vya Domus (Nov 27, 2020)

M2B said:


> based on your magical logic 1080p is the most accurate way to measure efficiency, how about that?



My logic never claimed 1080p is the most accurate way. I simply said that you need a workload that generates the most peaks and troughs in power; if that happens to be playing a game at 1080p, then so be it. And I don't think there is anything odd about a 1660 Ti being almost as power efficient as a chip that has almost 5 times the number of transistors and runs at a faster clock speed; the bigger and faster the chip, the more power leakage you get. If you look at similarly sized GPUs, AMD wins by a landslide.


----------



## cellar door (Nov 27, 2020)

@W1zzard 
We're at a point that you might as well start adding a banana for scale


----------



## R0H1T (Nov 27, 2020)

M2B said:


> I said the chart that actually matters, not the only chart that matters.
> And also the most efficient RDNA2 chip is barely beating the 12nm 1660Ti at 1080p (in terms of efficiency), based on your magical logic 1080p in this case is the most accurate way to measure efficiency, how about that?


Well, tbf the 3070 is unlikely to remain the most efficient Ampere GPU in Nvidia's lineup; if Nvidia doesn't decide to go TSMC 7nm, then it'll probably be the RTX 3060 Ti or RTX 3050 Ti or something. The point is that Ampere is not that great a uarch, especially compared to Turing. People often bring up Polaris as well; now, do you recall where it was made? Then Vega on 7nm TSMC, & finally RDNA on the same node, followed by RDNA2. Whether you agree with everything that's been said here or not, the fact remains this is more like a repeat of *Fermi vs Evergreen*.


----------



## Fluffmeister (Nov 27, 2020)

R0H1T said:


> Or the GTX 480?



Really? You lot still going there? I mean if I really wanted to heat my room I would have *bought a 295X2*.


----------



## M2B (Nov 27, 2020)

R0H1T said:


> Well tbf the 3070 is unlikely going to be the most efficient Ampere GPU in Nvidia's lineup, if Nvidia doesn't decide to go TSMC 7nm then it'll probably be the RTX 3060Ti or RTX 3050Ti or something. The point is that Ampere is not that great a uarch especially as compared to Turing, people often bring up Polaris as well now do you recall where it was made? Then Vega on 7nm TSMC & finally RDNA on the same node followed by RDNA2. Whether you agree with everything that's been said here or not, the fact remains this is more like a repeat of Fermi vs Evergreen.



You're making up nonsense and labelling it as fact.
Some of the AMD chips back then were literally 75% more efficient than Fermi, while RDNA2 is barely 10-15% more efficient than Ampere despite being on a much better node.
I would go as far as to say that AMD doesn't have any architectural advantage here if we factor in the process node difference.


----------



## R0H1T (Nov 27, 2020)

M2B said:


> You're making up nonsense and labelling it as a fact.
> Some of the AMD chips back then were literally 75% more efficient than Fermi while RDNA2 is barely 10-15% more efficient than ampere while being on a* much better node.*
> I would go as far as to say that AMD doesn't have any Architectural advantage here if we factor in the process node difference.


Right, so let's see *your facts that back up the claim that TSMC 7nm is much superior* to whatever Ampere's made on. When you get the numbers, assuming you have the exact same GPU on the two separate nodes, then wake me up! Till then, *keep your claims to yourself*. & FYI, the most efficient RDNA GPU was in a Mac & way more efficient than the likes of the 5700 XT, so if you think the 6800 is the efficiency king, just wait till you see these chips go into SFF builds or laptops with reasonable clocks 



Fluffmeister said:


> Really? You lot still going there? I mean if I really wanted to heat my room I would have *bought a 295X2*.


Not a bad idea actually, though you have to agree (or not) that this brings us virtually full circle in the computing realm ~ first zen3 & now RDNA2.


----------



## M2B (Nov 27, 2020)

R0H1T said:


> Right, so let's see *your facts that back up the claim that TSMC 7nm is much more superior* than whatever Ampere's made on? When you get the numbers, assuming you have the same exact GPU on the two separate nodes then wake me up! Till then *keep your claims to yourself* & FYI the most efficient RDNA GPU was in a Mac & way more efficient than the likes of 5700xt so if you think that the 6800 is the efficiency king, just wait till you see these chips go into a SFF or laptops with reasonable clocks
> 
> Not a bad idea actually, though you have to agree (or not) that this  brings us virtually a full circle in the computing realm ~ first zen3 & now RDNA2.



Anandtech:
"I had mentioned that the 7LPP process is quite a wildcard in the comparisons here. Luckily, I’ve been able to get my hands on a Snapdragon 765G, another SoC that’s manufactured on Samsung’s EUV process. It’s also quite a nice comparison as we’re able to compare that chip’s performance A76 cores at 2.4GHz to the middle A76 cores of the Exynos 990 which run at 2.5GHz. Performance and power between the two chips here pretty much match each other, and a clearly worse than other TSMC A76-based SoCs, especially the Kirin 990’s. The only conclusion here is that Samsung’s 7LPP node is quite behind TSMC’s N7/N7P/N7+ nodes when it comes to power efficiency – anywhere from 20 to 30%."

Now, this is not a straight comparison (they're comparing Samsung's 7nm to TSMC's 7nm), but it should give you a decent idea of just how much better TSMC's node is.


----------



## R0H1T (Nov 27, 2020)

M2B said:


> Now this is not a straight comparison. They're comparing Samsung's 7nm to tsmc's 7nm but it should give you a decent idea of just how much better Tsmc's node is.


You know that's not how node comparisons work, or do you? You have physical characteristics that you can directly measure, then electrical characteristics, & finally the most important part of the puzzle ~ the *uarch*. Just because you say TSMC 7nm is "much better" than Samsung's 8nm doesn't make it a fact that it'd also be much better for Ampere, unless you have the *same GPU/chip/uarch made on the two nodes*. The closest comparison *IIRC* was the VII & Vega 64.

Anyone claiming otherwise is just *guesstimating*! So I'm waiting for you to provide evidence about how superior (or inferior) one is wrt the other.

Take zen3 for instance: with a (major) tweak of zen2 you not only get higher IPC but also much higher clocks on the same node. Can you now claim that zen3 would clock just as high on 7nm+ or 7LPP, perhaps 6nm?


----------



## apamise (Nov 27, 2020)

A $1750 card that is only 16% faster than a $700 card getting a "highly recommended" badge just doesn't feel right.


----------



## M2B (Nov 27, 2020)

R0H1T said:


> You know that's not how node comparison's work, or do you? While you have physical characteristics that you can direct measure, then electrical characteristics & finally the most important part of the puzzle ~ the *uarch*. Just because you say TSMC 7nm is "much more better" than SS' 8nm doesn't make it a fact that it'd be also much better for Ampere, unless you have the* same GPU/chip/uarch made on the two nodes*. The closest comparison *IIRC* was the VII & Vega64.
> 
> Anyone claiming otherwise is just *guesstimating*! So waiting for you to provide evidence about how superior (or inferior) one is wrt to the other.



Unlike you, I don't label my thoughts as facts.
I'm not saying TSMC's N7P or whatever is definitely better than Samsung's 8N, but based on what we've seen before, it's a pretty safe bet that the TSMC node is better both in terms of performance and efficiency.
And yes, I'm aware that you can't compare different architectures across different nodes so easily, Mr. Electrical Engineer.

I'm basically wasting my time with someone who can easily lie to prove his made-up nonsense correct.


----------



## R0H1T (Nov 27, 2020)

Oh really? So I guess you must've mislabeled these thoughts as well?


M2B said:


> while RDNA2 is barely 10-15% more efficient than ampere while being on a much better node.


Or how about this one?


M2B said:


> And also the most efficient RDNA2 chip is barely beating the 12nm 1660Ti at 1080p (in terms of efficiency),


It'd be nice if you had stopped at the point where you said the 3070 is the most efficient GPU in their lineup - to which I even said that there could well be *more efficient Ampere GPUs released after it* - but then you just had to bring in some other assumptions to prove a point, huh?


----------



## M2B (Nov 27, 2020)

R0H1T said:


> Or how about this one?



What in the actual fuck is wrong with you?
That was a different discussion, with someone else, to prove my point that 1080p is not the best indicator of relative efficiency for these high-end cards.
Let's just end this mess here and move on. What a fucktard.


----------



## swirl09 (Nov 27, 2020)

Xuper said:


> 80°C? Pretty bad. ASUS RTX 3090 STRIX OC = 68°C


The Strix default profile is very aggressive. I had to play around a lot to get it to my liking, which wasn't as easy as it should be because there is currently an unknown fan offset in Afterburner - presumably it will be fixed in a future update of either the program and/or the card. I have mine targeting 2010 / v0.993 / 70C / 60% fan, which is reasonably quiet - just audible. There is some variance from game to game.

The out-of-the-box settings for the Suprim gaming BIOS are much better balanced.


----------



## owen10578 (Nov 27, 2020)

Aw, I was hoping for a 500W power limit after the 3080 Suprim X had the highest power limit of all the 3080s.

I really don't get people complaining about the power consumption. If you care so much about it, why buy a pre-overclocked card with a huge cooler and a strong VRM? Even if you wanted the cooler but not the power consumption, you could just lower the limit manually to whatever you want. Higher power limits give the user freedom to overclock higher, and that's never a bad thing.


----------



## okbuddy (Nov 27, 2020)

poor 6900xt


----------



## kruk (Nov 27, 2020)

Fluffmeister said:


> Really? You lot still going there? I mean if I really wanted to heat my room I would have *bought a 295X2*.



Yeah, but the 295X2 destroyed the competition in 4K:

This one only destroys your wallet


----------



## nguyen (Nov 27, 2020)

@W1zzard Can you utilize Nvidia PCAT (which does measure performance per watt) and note down the perf/watt of each GPU in a few games and average them? I think it would be more precise than using power consumption in only one game, and at one resolution at that.



kruk said:


> Yeah, but 295X2 destroyed the competition in 4K:
> This one only destroys your wallet



Didn't you read the 2011 memo? Average FPS for Xfire/SLI setups are useless numbers; all you get are incomplete frames or microstuttering. That's why Nvidia buried it, even though they were the first to invent it.


----------



## mak1skav (Nov 27, 2020)

Isn't this card already recalled? Why do people keep arguing about imaginary graphics cards?


----------



## W1zzard (Nov 27, 2020)

nguyen said:


> Can you utilize Nvidia PCAT (which does measure performance per watt) and note down the perf/watt of each GPU in a few games and average them down ? I think it would be more precise than using power consumption in only 1 game and 1 resolution at that.


I've told NVIDIA I want to check out PCAT; they said they'll look into it, but so far they don't have enough units to give me one



mak1skav said:


> Isn't this card already recalled?











*MSI Pauses Shipments of RTX 30-series SUPRIM X, No Recall in Progress* (www.techpowerup.com)

MSI had paused shipments of its RTX 3090 SUPRIM X and RTX 3080 SUPRIM X graphics cards on suspicion of a quality issue with their cards. Earlier today, a Reddit post by someone who was in queue with a retailer to purchase an RTX 3080 SUPRIM X graphics card was allegedly informed by the retailer...
				






apamise said:


> A $1750 card that is only 16% faster than a $700 card getting a "highly recommended" badge just doesn't feel right.


I have between $1500 and $2000 to spend on a graphics card, what would you recommend?


----------



## Sovsefanden (Nov 27, 2020)

spnidel said:


> 474w peak in games
> ....oooookay then
> fermi 2.0



No it's not; simply look at performance per watt, which looks fine, for reference cards that is, which is the only thing that matters.

AMD's 6800 series wattage also skyrockets when overclocked; the 6800 XT easily goes to 300+ watts in gaming when overclocked. The 3080 goes to 350 but performs better on average + has DLSS and better ray tracing.

The 3090 beats them all and hits 400+ at times; I'm sure the 6900 XT is going to be up there as well post-OC.

Couldn't care less about the 3090 or 6900 XT since they're $1000+ cards (on paper), the worst performance value you can get, and the 6900 XT is going to be missing in action for months.

Hell, even the 6800 series is MIA, and most retailers say people will likely wait deep into 2021 to get one. AMD fucked their launch up more than Nvidia.


----------



## Vayra86 (Nov 27, 2020)

R0H1T said:


> Right, so let's see *your facts that back up the claim that TSMC 7nm is much more superior* than whatever Ampere's made on? When you get the numbers, assuming you have the same exact GPU on the two separate nodes, then wake me up! Till then *keep your claims to yourself* & FYI the most efficient RDNA GPU was in a Mac & way more efficient than the likes of 5700xt so if you think that the 6800 is the efficiency king, just wait till you see these chips go into a SFF or laptops with reasonable clocks
> 
> Not a bad idea actually, though you have to agree (or not) that this  brings us virtually a full circle in the computing realm ~ first zen3 & now RDNA2.



Lol. In the eyes of many people, that was exactly the reason AMD was going to catch up prior to this release. And here you are saying the opposite AFTER the cards are out? I think it's crystal clear now that AMD gains the advantage through a combination of architecture (they finally !!! caught up - look at how the GPUs work with boost, there is feature-set parity, etc.) and having a better node. So yes, it's pretty logical to assume, even without a shred of direct evidence beyond benchmarks, perf/watt numbers, and die sizes/specs, that TSMC has that much better a node.

On the other side of the fence, we see Nvidia releasing GPUs that are not as optimal as they usually are. Power budgets have risen tremendously, after half a decade of strict adherence to 250-ish watt GPUs at the top. Now they exceed that by a royal margin: not 20W or so as they did with Turing, but in the case of the x80, nearly 100W depending on which gen you compare to. That's huge, and it's definitely not down to architecture, where Nvidia has been leading and kind of still does, as their time to market was and still is better, despite them 'rushing' Ampere.



W1zzard said:


> I have between $1500 and $2000 to spend on a graphics card, what would you recommend?



Save half, upgrade faster, duh. Or buy two x80s so you can SL...oh wait.


----------



## W1zzard (Nov 27, 2020)

Vayra86 said:


> Save half, upgrade faster, duh.


But I'm rich, I want the best, and it should be quiet


----------



## XL-R8R (Nov 27, 2020)

W1zzard said:


> I have between $1500 and $2000 to spend on a graphics card, what would you recommend?


Spending half the amount for 85%+ of the performance, and not buying silly overpriced GPUs that prop up the nonsense pricing schemes that have arrived over the last 6 years? That is what I'd personally recommend... not this.

Put the rest of the money you just saved into shares of your favorite tech company.


Edit:

"and it should be quiet" - an FE card with a waterblock - still cheaper and probably as fast (if not faster with tweaks)


----------



## spnidel (Nov 27, 2020)

Sovsefanden said:


> No its not, simply look at performance per watt, which looks fine, for reference cards that is, which is the only thing that matters


474W is 474W regardless of what the performance per watt is - it's not a card I would put in my machine



Sovsefanden said:


> AMDs 6800 series watt usage also skyrockets when overclocked


which is where an undervolt overclock comes into play...



Sovsefanden said:


> 3090 beats them all and hits 400+ at times, im sure 6900 xt is going to be upthere as well post oc


idk, given that the 6900 XT has the same TDP as the 6800 XT, and taking Zen 2 and Zen 3 TDP numbers into account, I wouldn't be surprised if the power draw difference between the 6800 XT and 6900 XT was minimal


----------



## medi01 (Nov 27, 2020)

@W1zzard

I realize what I'm asking is a lot of work, but I think OC performance is relevant to many reviewers out there.
It would be cool if the OCed GPU were tested across the board (all games/charts, not just one)



Sovsefanden said:


> 3090 beats them all and hits 400+ at times


400+ is not how I'd refer to a card knocking on the 500W door, and that's before being OCed:


----------



## spnidel (Nov 27, 2020)

medi01 said:


> @W1zzard
> 
> I realize what I'm asking is lots of work, but I think OC performance is relevant to many reviewers out there.
> It would be cool if OCed GPU would be tested across the board (all games/charts, not just one)
> ...


yeah, there's no way that thing stays under 500W when overclocked... power-consumption-wise, these cards are a fucking joke, jesus christ


----------



## R0H1T (Nov 27, 2020)

Vayra86 said:


> Lol.* It was exactly the reason AMD* was going to catch up prior to this release, in the eyes of many people.


You can keep LOLing all you want; unless you have a GA104 die on 8nm Samsung & 7nm TSMC, you cannot say for sure how the latter is superior (or inferior) to the former ~ that's a fact. Anything else is just conjecture, & that's what I was fighting ~ the assumption that one node is much superior to the other without actual hard numbers to back any claim to that effect. So unless you're now gonna claim that you have that data, how about you keep the same assumptions in check?

It wasn't the only reason; if zen2 -> zen3 & RDNA -> RDNA2 have taught us anything, it's that *uarch* is a major part of the equation which simply cannot be ignored. *IMO* Ampere is a dud relative to Turing. Now you can claim that TSMC 7nm would've made a big difference here, but until we get real data to cross-reference how performance can vary between nodes ~ you're just building castles in thin air.


----------



## W1zzard (Nov 27, 2020)

medi01 said:


> I realize what I'm asking is lots of work, but I think OC performance is relevant to many reviewers out there.
> It would be cool if OCed GPU would be tested across the board (all games/charts, not just one)


I agree it would be cool, but it's simply not feasible with all the tests and games that I have


----------



## Sovsefanden (Nov 27, 2020)

spnidel said:


> yeah there's no way that thing doesn't peak at under 500W when overclocked... power consumption wise these cards are a fucking joke, jesus christ



I doubt people who pay $1750 for a card will care about it using 100 watts more or less. This card is 31 dB under load, which is very quiet for this amount of performance, and flagship GPUs always use a lot of power - big dies.

Simply don't buy this card and don't overclock if you think wattage is important; both the AMD 6000 and Nvidia 3000 series use way more power when overclocked, and OC scaling is low on both (for 24/7 OC, not LN2 testing and other useless methods).

Custom cards with higher boost clocks yield ~2% extra performance while power draw increases by 20%; this is true on both the AMD 6000 and Nvidia 3000 series. OC some more and see another 2-3% performance, with another 20% bump in power. You are now up 40% in power usage but only 4-5% up in performance. Worth it?

Not to mention that you pay several hundred dollars extra for some of these custom cards, which deliver almost nothing over the reference solution (which is good on both AMD and Nvidia now - which is why custom cards barely perform better).

Hell, some 6800 XTs end up at $800 and even $900; that's insane for a $650 card.
And this is the MSRP price, not the scalper price.
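The trade-off described above can be checked with quick arithmetic (the percentages are the poster's rough figures, not measured data):

```python
# Two OC steps: ~2% then ~2.5% performance, each costing ~20% more power.
base_perf, base_power = 100.0, 100.0       # normalized reference card

oc_perf = base_perf * 1.02 * 1.025         # ~+4.5% performance total
oc_power = base_power * 1.20 * 1.20        # ~+44% power total

# Perf/W of the OCed card relative to stock.
rel_perf_per_watt = (oc_perf / oc_power) / (base_perf / base_power)
print(f"perf: +{oc_perf - 100:.1f}%  power: +{oc_power - 100:.0f}%  "
      f"perf/W: {rel_perf_per_watt:.2f}x stock")
```

On these numbers the OCed card keeps only about three quarters of its stock efficiency, which is the poster's point.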


----------



## the54thvoid (Nov 27, 2020)

W1zzard said:


> I agree it would be cool, but it's simply not feasible with all the tests and games that I have



For argument's sake, is the OC % gain applicable to most other scenarios? If so, there's absolutely no need to test all games; a generalisation of 5% better means 5% better. Unless other games don't scale well with clock increases? And then again, these days, across AMD and Nvidia cards, we see what, 5% uplifts? A few fps, or an extra 10 at 200 fps... it's 'almost' not worth it.


----------



## W1zzard (Nov 27, 2020)

the54thvoid said:


> For arguments sake, is the OC % gain applicable to most other scenarios? If so, there's absolutely no need to test on all games. A generalisation of 5% better means 5% better. Unless other games don't scale well with clock increases? And then again, these days, across the cards of AMD and Nvidia, we see, what, 5% uplifts? A few fps, or an extra 10 at 200fps... It's 'almost' not worth it.


It's close enough


----------



## Vayra86 (Nov 27, 2020)

R0H1T said:


> You can keep LOLing all you want, unless you have a GA104 die on 8nm Samsung & 7nm TSMC you cannot say for sure how the latter is superior (or inferior) to the former ~ that's a fact. Anything else is just conjecture, & that's what I was fighting ~ the assumption that one node is much superior to the other without actual hard numbers to back any claim to the effect. So unless now you're gonna claim that you have that data with us, how about you keep the same assumptions in check?
> 
> It wasn't the only reason, if zen2 -> zen3 & RDNA -> RDNA2 has taught us is that *uarch* is a major part of the equation which simply cannot be ignored. *IMO* Ampere is a dud relative to Turing, now you can claim that TSMC 7nm would've made a big difference here but until we get real data to cross reference how performance can vary between nodes ~ you're just making castles in thin air.



But there IS real data, and it says you are wrong every time. Sources were provided; did you even get into them? Do you read reviews? We know how efficient the arch and the node are; the products are out... We know Ampere has more efficient RT but less efficient raster. We know the end FPS, bandwidth/memory/power usage; we know die size and we know the architectural improvements. If you still haven't figured it out by now, you never will. Your opinion is irrelevant; the facts are there. It's like a graph with lots of dots: you just need to connect them with straight lines and you've got your comparison well laid out.



the54thvoid said:


> For arguments sake, is the OC % gain applicable to most other scenarios? If so, there's absolutely no need to test on all games. A generalisation of 5% better means 5% better. Unless other games don't scale well with clock increases? And then again, these days, across the cards of AMD and Nvidia, we see, what, 5% uplifts? A few fps, or an extra 10 at 200fps... It's 'almost' not worth it.



Exactly. It's as much of a non-issue as the additional OC headroom this specific card offers. Margin-of-error territory at best.


----------



## R0H1T (Nov 27, 2020)

Vayra86 said:


> *But there Is real data and it says you are wrong every time*. *Sources were provided*, did you even get into them or?* Do you read reviews*?* We know how efficient the arch and the node is, the products are out*... We know *Ampere has more efficient RT*, but less efficient raster. We know the end FPS, bandwidth/memory/power usage, we know die size and we know the architectural improvements. *If you still didn't figure it out by now, you never will.* Your opinion is irrelevant, the facts are there.* Its like a graph with lots of dots*, you just need to connect them with straight lines and you've got your comparison well laid out.


What real data? Do you have reviews of Ampere's consumer lineup on 7nm TSMC?

Sources like what? Do you read reviews, actual reviews?

No, we only know how efficient the node is for the products that are out; we have "0" data on how (much more or less) efficient the same chip would be on Samsung's 8nm.

Yeah, I'm not in the business of extrapolating such complex data to form a conclusion based on sometimes vast uarch differences; heck, the differences between chips on the same node can vary greatly!

Graphs with what, exactly? TSMC could be 5% more efficient for Ampere, or 35% ~ what you & many others are doing is complete BS! Let's check the last real-world data we had for the same chip on similar nodes ~


> *Conclusion*
> 
> Based on the results of our testing, it's clear that both versions of Apple's A9 SoC deliver the same level of performance, *but Samsung's 14nm FinFET process appears to offer slightly better power efficiency, extending battery life between 3.5-10.8 percent*. This is a little more than the 2-3 percent quoted by Apple, but not much, and it equates to only about 5-15 minutes of runtime under the most extreme conditions.











*iPhone 6s: Samsung And TSMC A9 SoCs Tested* (www.tomshardware.com)

The A9 SoC inside the iPhone 6s comes from two different vendors (Samsung and TSMC) using two different FinFET processes (14nm and 16nm, respectively). We test both versions to see if there's any power or performance differences.
				



Next time don't bring waffles to a biscuit party


----------



## Vayra86 (Nov 27, 2020)

R0H1T said:


> What real data? Do you have reviews of Ampere's consumer lineup on 7nm TSMC?
> 
> Sources like? Do you read reviews, actual reviews?
> 
> ...



You need to bring up ARM to prove something in an x86 GPU comparison? Grasping at straws.


----------



## R0H1T (Nov 27, 2020)

What does ARM (an ISA) have anything to do with it, and *x86 GPU*, really? You said we have "real world data", such as?
The data I provided is real-world data for the exact same chip on two different nodes. If you don't like facts, then please don't present your *assumptions* as "graphs with lots of dots" that show *exactly how efficient 7nm is*!


----------



## the54thvoid (Nov 27, 2020)

The passion is great in this discussion but seriously, stop derailing the review thread. Discuss the card--pros and cons--without going into petty squabbles. This isn't the playground.


----------



## kapone32 (Nov 27, 2020)

Xuper said:


> 80'c ? pretty bad. ASUS RTX 3090 STRIX OC  = 68'c


Yes, but it's not pulling 470+ watts.


----------



## lexluthermiester (Nov 28, 2020)

spnidel said:


> 474w peak in games
> ....oooookay then
> fermi 2.0


Except that this card is a HUGE jump in performance over the RTX 2000 series cards, something that didn't happen with the Fermi gen cards. So not really. Yes, this generation of cards eats a ton of power, but there is an excellent performance trade-off.

MSI pulled out all the stops with this card. It is a freaken beast of a card in physical size, performance and quality of build! Impressive!

As always, great review W1zzard!


----------



## Nima (Nov 30, 2020)

R0H1T said:


> Oh wow, how the mighty have fallen! Don't remember such a massive dip in perf/W with a new uarch+node since ~ well perhaps ever


Power consumption is only measured at 4K, and that number is also used to calculate performance per watt for the 1080p and 1440p resolutions. The RTX 3090 is not fully utilized at lower resolutions and consumes much less power than at 4K, so its actual efficiency is much higher than these charts suggest.

In my opinion, the 1080p and 1440p numbers should not be included on the performance-per-watt page. Showing these charts is just pointless and can only misinform people.
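The distortion being described here is easy to illustrate with a quick sketch. All FPS and wattage figures below are invented for illustration, not taken from the review:

```python
# Hypothetical numbers: a high-end card that is power-limited at 4K
# but partially idle at lower resolutions.
fps = {"1080p": 140.0, "1440p": 110.0, "4K": 60.0}
power_4k = 470.0                                  # watts, measured under a 4K load
actual_power = {"1080p": 320.0, "1440p": 390.0, "4K": 470.0}

for res, f in fps.items():
    charted = f / power_4k          # perf/W if the chart reuses the 4K power figure
    real = f / actual_power[res]    # perf/W with the power actually drawn
    print(f"{res}: charted {charted:.3f} FPS/W, actual {real:.3f} FPS/W")
```

With these made-up numbers, the chart-style 1080p figure understates the card's real efficiency by the ratio of 4K power to actual 1080p power, here roughly 1.47x.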


----------



## ratirt (Nov 30, 2020)

This card is not my thing though. Somehow people are justifying the power consumption with better performance. I think there are boundaries in that regard, and this card has definitely crossed them.


----------



## Sovsefanden (Nov 30, 2020)

lexluthermiester said:


> Except that this card is a HUGE jump in performance over the RTX 2000 series cards, something that didn't happen with the Fermi gen cards. So not really. Yes, this generation of cards eats a ton of power, but there is an excellent performance trade-off.
> 
> MSI pulled out all the stops with this card. It is a freaken beast of a card in physical size, performance and quality of build! Impressive!
> 
> As always, great review W1zzard!



Yes this is NOTHING like Fermi

Ampere HAS the performance


----------



## spnidel (Nov 30, 2020)

lexluthermiester said:


> Except that this card is a HUGE jump in performance over the RTX 2000 series cards, something that didn't happen with the Fermi gen cards. So not really. Yes, this generation of cards eats a ton of power, but there is an excellent performance trade-off.
> 
> MSI pulled out all the stops with this card. It is a freaken beast of a card in physical size, performance and quality of build! Impressive!
> 
> As always, great review W1zzard!


it's fermi 2.0


----------



## lexluthermiester (Nov 30, 2020)

spnidel said:


> it's fermi 2.0


And your comment makes as much sense.


----------



## spnidel (Nov 30, 2020)

lexluthermiester said:


> And your comment makes as much sense.


not here to argue, 500W is 500W


----------



## lexluthermiester (Nov 30, 2020)

spnidel said:


> not here to argue, 500W is 500W


Yet that is exactly what you are doing, poorly. And as the testing showed a MAXIMUM of 482 W under the HIGHEST load possible, your point isn't one. Additionally, this card is a literal top-tier card that grants maximum performance. It's expected to have a high power draw. That does NOT make it Fermi 2.0. Such a comparison has zero merit.


----------



## ratirt (Dec 1, 2020)

lexluthermiester said:


> Yet that is exactly what you are doing, poorly. And as the testing showed a MAXIMUM of 482 W under the HIGHEST load possible, your point isn't one. Additionally, this card is a literal top-tier card that grants maximum performance. It's expected to have a high power draw. That does NOT make it Fermi 2.0. Such a comparison has zero merit.


I'm curious. If NV releases a card like a 4090 next year and it gobbles 650 W, would you be OK with it if the "performance is there"? Is there a limit to power usage versus performance for you?


----------



## lexluthermiester (Dec 1, 2020)

ratirt said:


> I'm curious. If NV releases a card like a 4090 next year and it gobbles 650 W, would you be OK with it if the "performance is there"? Is there a limit to power usage versus performance for you?


I'll assume you're being serious and not a smart-alec. Given that SLI configurations of the past(Dual, Triple and Quad SLI) as well as CrossFire(also Dual, Triple and Quad) could easily top 750W, for people who want the best performance, 650W is a reasonable power level. If that is a problem for certain people, hard cheese, don't buy such a card. However, complaining about it is not going to stop people from buying it. The upcoming RX6900XT is rumored to be near 550W requirement. Is everyone going to whine about that too? Seriously, find something better to complain about.


----------



## ratirt (Dec 1, 2020)

lexluthermiester said:


> I'll assume you're being serious and not a smart-alec. Given that SLI configurations of the past(Dual, Triple and Quad SLI) could easily top 750W, for people who want the best performance, 650W is a reasonable power level. If that is a problem for certain people, hard cheese, don't buy such a card. However, complaining about it is not going to stop people from buying it. The upcoming RX6900XT is rumored to be near 550W requirement. Is everyone going to whine about that too? Seriously, find something better to complain about.


Oh, I'm smart, that's for sure, but not a smart-alec. It was an honest question about that power usage; I just want to see how much people's perception of graphics cards' power usage has changed over the years.
It's hard to compare 2 cards vs one in performance per watt. If you do, you realize that two cards eat up a lot of watts and the performance gain is not that spectacular. Also, I thought SLI was dead; at least a lot of people say so, and the state of SLI support backs that theory. So 650 W, for two cards or one, is a great deal of wattage to me. Is 650 W with a single card the boundary for you? For me that's way overboard, to be honest. I only hope this is not the trend for getting more performance. If it is, and we end up with single cards sucking down over 1 kW, that would be a horrible thing, and you'd have to say innovation in that regard is dead, just like SLI: still there, but nobody cares.
Rumors say strange things, and the 6900 XT is spec'ed as a 300 W card. We'll know the actual power consumption soon; the reviews are around the corner. 550 W is simply not true for a stock card, especially if you look at the 6800 XT's power draw.


----------



## lexluthermiester (Dec 1, 2020)

ratirt said:


> It's hard to compare 2 cards vs one in performance per watt.


Except that today's GPUs leave past high-wattage GPUs in the dust performance-wise. Perspective within context is important. Whether you have a single card giving top-tier performance or multiple cards doing the same is irrelevant. Again, if buyers don't want cards which use that much power, no one is forcing them to buy them.


----------



## ratirt (Dec 1, 2020)

lexluthermiester said:


> Except that today's GPU's leave past, high wattage, GPU's in the dust performance-wise. Perspective within context is important. Whether you have a single card giving top-tier performance or multiple cards doing the same, is irrelevant. Again, if buyers don't want cards which use that much wattage, no one is forcing them to buy one.


Yeah, the performance is there, but I disagree with the concept that the only way to get more performance is to go overboard with the power consumption. This is not an advancement but rather pushing to the limits of an inferior architecture.
Your argument of "don't want it, don't buy it" is irrelevant here. The advancement in computing is not about raising the power over and over. If that's the case, it means the company is just going inefficient in terms of advancement. It's like it doesn't have a plan for how to get it done, and it is slow and clumsy. "We don't have anything to offer now, so bump the price and release whatever."
AMD did it right with Ryzen, though. The power consumption is lower or stays at the same level, and the performance uplift is great.
I wonder what the difference would be if you locked the two generations of NV cards to 250 W and compared the performance. Maybe someone has done it already; how would that comparison look? You may disagree, but that's how I see it.


----------



## spnidel (Dec 1, 2020)

lexluthermiester said:


> Yet that is exactly what you are doing, poorly. And as the testing showed a MAXIMUM of 482 W under the HIGHEST load possible, your point isn't one. Additionally, this card is a literal top-tier card that grants maximum performance. It's expected to have a high power draw. That does NOT make it Fermi 2.0. Such a comparison has zero merit.


nah, I'm not arguing, 500w is 500w, and these cards are fermi 2.0


----------



## lexluthermiester (Dec 1, 2020)

ratirt said:


> This is not an advancement but rather pushing to the limits of an inferior architecture.


Are you seriously calling brand new, high-performance designs "inferior"? What a laughable notion!


ratirt said:


> Your argument about, don't want don't buy it is irrelevant here.


Your opinion, not supported by common sense.


ratirt said:


> The advancement in computing is not about raising the power over and over. If that's the case, it means the company is just going inefficient in terms of advancement. It's like it doesn't have a plan for how to get it done, and it is slow and clumsy.


Once again, context is important. You are talking about brand new designs. Pushing them to their limits is not slow, lazy, clumsy or stupid. It is perfectly logical and is exactly what literally everyone has been doing since computers were invented.


ratirt said:


> We don't have anything to offer now so bump the price and release whatever.


How is it you don't get the way things work? Seems like you need to brush up on history just a little bit.


----------



## ratirt (Dec 1, 2020)

lexluthermiester said:


> Are you seriously calling brand new, high-performance designs "inferior"? What a laughable notion!
> 
> Your opinion, not supported by common sense.
> 
> ...


Listen. I'm not gonna argue with you because you take whatever companies bring before your eyes. If you look closer, Turing vs Ampere are not that different from one another. So "brand new", just because NV calls it that and it has been released, doesn't mean it actually is brand new by design, only by release date.
NV can say new RT cores, gen 2 RT cores or gen-"whatever" tensor cores, but you won't be able to verify the difference anyway, and there's the new node to consider, which is not NV's achievement. The sole recipe for the performance is: bump the number of cores and the power limit, and get a price bump. If that is how "the way things work" for you, then you are a lucky man. I hope it works out for you the same way in every other aspect.


----------



## lexluthermiester (Dec 1, 2020)

ratirt said:


> Listen. I'm not gonna argue with you because you take whatever companies bring before your eyes.


And yet, you continue?


ratirt said:


> If you look closer, Turing vs Ampere are not that different from one another.


You keep thinking that. Ignorance is bliss after all.


ratirt said:


> So "brand new", just because NV calls it that and it has been released, doesn't mean it actually is brand new by design, only by release date.


So are you saying the same thing about AMD's RDNA2? Because such opinions would be silly and without merit of any kind.


ratirt said:


> NV can say new RT cores, gen 2 RT cores or gen-"whatever" tensor cores, but you won't be able to verify the difference anyway, and there's the new node to consider, which is not NV's achievement.


You're still arguing and failing to bring merit to your position.


ratirt said:


> If that is how "the way things work" for you then you are a lucky man.


Seemingly so.

Do you wish to "not" argue more?


----------



## HenrySomeone (Dec 1, 2020)

spnidel said:


> 474w peak in games
> ....oooookay then
> fermi 2.0


It's not great, no, but just like with Fermi, they are the undeniably faster cards and in this price bracket that matters (far) more than a couple dozen W of power consumption. If Ampere was on the same 7nm TSMC node though, there would once again be absolutely no competition...however the already bad availability would likely reach epic levels...


----------



## spnidel (Dec 2, 2020)

HenrySomeone said:


> It's not great, no, but just like with Fermi, *they are the undeniably faster cards and in this price bracket that matters (far) more than a couple dozen W of power consumption*. If Ampere was on the same 7nm TSMC node though, there would once again be absolutely no competition...however the already bad availability would likely reach epic levels...


oh, so if nvidia does it, then power consumption doesn't matter? got it, thanks!


----------



## ratirt (Dec 2, 2020)

HenrySomeone said:


> It's not great, no, but just like with Fermi, they are the undeniably faster cards and in this price bracket that matters (far) more than a couple dozen W of power consumption. If Ampere was on the same 7nm TSMC node though, there would once again be absolutely no competition...however the already bad availability would likely reach epic levels...


I wouldn't be so sure about that. The 8nm Samsung and 7nm TSMC are fairly similar in density. There are pros and cons on both sides, and NV went for better yields and a lower price per wafer instead of slightly better power efficiency. Either way, it would have been better for NV to go TSMC, but saying there would be absolutely no competition if they had gone 7nm is an exaggeration.


----------



## HenrySomeone (Dec 2, 2020)

spnidel said:


> oh, so if nvidia does it, then power consumption doesn't matter? got it, thanks!


Not really for $1.5-2k cards, no; the 295X2, for instance, approached 700 W under stress testing







ratirt said:


> I wouldn't be so sure about that. The 8nm Samsung and 7nm TSMC are fairly similar in density. There are pros and cons on both sides, and NV went for better yields and a lower price per wafer instead of slightly better power efficiency. Either way, it would have been better for NV to go TSMC, but saying there would be absolutely no competition if they had gone 7nm is an exaggeration.


I'd be pretty sure; there already isn't that much competition when looking at the whole picture. In particular there's the horrendously bad ray tracing performance of RDNA2, which will definitely become much more important before the life cycle of current cards is over (I'd wager that in two years or less, more than half of new titles will implement some sort of RT functionality). And last but definitely not least, there's DLSS, which will definitely help prolong the usability of Ampere cards, especially at 4K but also at lower resolutions further down the line. Now, if we imagine considerably better power efficiency coupled with notably higher clocks on top of this (from 7nm TSMC), I don't think what I said is exaggerated at all (although obviously, AMD would have to position their MSRPs much lower in that case).


----------



## medi01 (Dec 2, 2020)

ratirt said:


> The 8nm Samsung and 7nm TSMC are fairly similar in density.


I keep hearing that 8nm Samsung is a 10nm rebrand, so, shrug.



lexluthermiester said:


> Except that this card is a HUGE jump in performance over the RTX 2000 series cards, something that didn't happen with the Fermi gen cards


An interesting myth.





NVIDIA's GeForce GTX 480 and GTX 470: 6 Months Late, Was It Worth the Wait? (www.anandtech.com)
				




No, performance was not the issue with Fermi; power consumption and price were. Both are there with Ampere.


----------



## lexluthermiester (Dec 2, 2020)

spnidel said:


> oh, so if nvidia does it, then power consumption doesn't matter? got it, thanks!


That's not what was said, don't twist words out of context. AMD's current line-up is just as power hungry with very little variation.


----------



## ratirt (Dec 2, 2020)

HenrySomeone said:


> I'd be pretty sure; there already isn't that much competition when looking at the whole picture. In particular there's the horrendously bad ray tracing performance of RDNA2, which will definitely become much more important before the life cycle of current cards is over (I'd wager that in two years or less, more than half of new titles will implement some sort of RT functionality). And last but definitely not least, there's DLSS, which will definitely help prolong the usability of Ampere cards, especially at 4K but also at lower resolutions further down the line. Now, if we imagine considerably better power efficiency coupled with notably higher clocks on top of this (from 7nm TSMC), I don't think what I said is exaggerated at all (although obviously, AMD would have to position their MSRPs much lower in that case).



I don't think it's bad, but it does lag behind NV's cards. I also don't think ray tracing matters most here, given the state of ray tracing support in software at this point. But that's just how I see it at this point in time. That is your wager, but we can't be certain. Even though NV's RT performance is better than AMD's, it isn't great in certain games. So if your wager is correct and there will be plenty of new games with RT, my wager is they will require more RT performance than current cards, either NV's or AMD's, may have (and that includes DLSS-like features).
The thing is, with TSMC and an NV Ampere graphics card, you don't know how much of an improvement there would be, to be honest. It may have been the same performance as Samsung's but with less power draw.

BTW, the R9 295X2 was a dual-GPU card on a node from the stone age; I don't think it's relevant. Compared to a 3090 peaking at over 500 W, the R9 295X2 doesn't look bad for a dual-chip card built on 28nm.


medi01 said:


> I keep hearing that 8nm Samsung is a 10nm rebrand, so, shrug.


Well yes, that's what people say, but you already know that 7nm or 8nm is just a naming scheme, nothing more.


----------



## lexluthermiester (Dec 2, 2020)

medi01 said:


> An interesting myth.
> 
> 
> 
> ...


Oh really? Let's review.








Zotac GeForce GTX 260 Amp² Edition 216 Shaders Review (www.techpowerup.com): NVIDIA's updated GTX 260 with 216 shaders; Zotac's overclock puts it just 1% behind the regular GTX 280 while costing over $130 less.

Palit GeForce GTX 560 2 GB Review (www.techpowerup.com): the only GTX 560 non-Ti that features 2 GB of GDDR5 memory instead of the reference 1 GB.

The first shows Crysis with a GTX 260, the next a GTX 560. Pay attention to the 1920x1200 scores in particular.

Now let's look at the power usage numbers in those same two reviews, shall we?

Isn't that interesting?

Now let's look at the top-tier models for each of those generations:

Leadtek GeForce GTX 285 1024 MB Review (www.techpowerup.com): based on the 55nm GT200b, offering more performance, lower power consumption and less noise than the GTX 280, at about a $70 premium.

NVIDIA GeForce GTX 580 1536 MB Review (www.techpowerup.com): the Fermi-based GTX 580 is 20% faster than the GTX 480, yet requires less power.

Hmm..

Now look at the power consumption for each in those same reviews.

Well, isn't that interesting also?

You were saying what now?
Context is important. Try it sometime. You too @ratirt

Ampere is NOT Fermi 2.0. What a rubbish notion. To make such a claim is as ridiculous as it is completely lacking in merit.


----------



## medi01 (Dec 2, 2020)

lexluthermiester said:


> Hmm..


Hmm?

285 - 26.1 FPS
580 - 47 FPS

*So 80% more perf is not a huge perf jump, but 45% more perf is?*

By this metric, Ampere ("Fermi 2.0") is two times worse than the original Fermi, as it pushed perf up by only half as much as the original did.
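For what it's worth, the percentage quoted here checks out from the FPS figures above; a trivial calculation using only the numbers quoted in this thread:

```python
# FPS figures quoted above (GTX 285 vs. GTX 580 in the linked review)
gtx285, gtx580 = 26.1, 47.0

# generational gain, as a percentage
fermi_gain = (gtx580 / gtx285 - 1) * 100
print(f"GTX 285 -> GTX 580: +{fermi_gain:.0f}%")  # prints "+80%"
```

The 45% figure for Turing-to-Ampere is as quoted in the post above, not recomputed here.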


----------



## spnidel (Dec 2, 2020)

lexluthermiester said:


> That's not what was said, don't twist words out of context. AMD's current line-up is just as power hungry with very little variation.



oh, so AMD's current line-up uses less power than ampere, but it's using just as much power as ampere?! interesting!


----------



## HenrySomeone (Dec 2, 2020)

spnidel said:


> oh, so AMD's current line-up uses less power than ampere, but it's using just as much power as ampere?! interesting!


They are not far apart, and even if you push the 6800 XT to high heaven, where it consumes at least as much as a 3080, it still doesn't match the latter's performance.


----------



## spnidel (Dec 2, 2020)

HenrySomeone said:


> They are not far apart and even if you push 6800XT to high heaven where it consumes at least as much as 3080, it still doesn't match the latter's performance:


don't care LOL one card is 500W the other isn't


----------



## R0H1T (Dec 2, 2020)

HenrySomeone said:


> *If Ampere was on the same 7nm TSMC node* though, there would once again be absolutely no competition


I keep hearing this BS from multiple forum members, on various forums! Evidence, please: not conjecture or your best guesstimate.


----------



## R0H1T (Dec 2, 2020)

Fanboy or not, the same garbage is being recycled as *fact*. I just posted links to a fact-based review a while back; still waiting for the perpetrators to present their evidence!


----------



## spnidel (Dec 2, 2020)

I was actually considering getting a 3000 series nvidia GPU, but the power consumption turned me off entirely


----------



## medi01 (Dec 2, 2020)

R0H1T said:


> I keep hearing this BS from multiple forum members, on various forums! Evidence please, not conjecture or your best guesstimate



We went from "AMD is so many years behind" to "AMD will maybe compete with 2080Ti, but I doubt it" to full throttle meltdowns.

Have mercy.


----------



## lexluthermiester (Dec 2, 2020)

medi01 said:


> Hmm?
> 
> 
> 
> ...


Hmm, let's review;


medi01 said:


> No, performance was not the issue with Fermi; power consumption and price were.


Yup, that's what you said... Nice cherry pick though, in doing so you effectively proved the point I was making, on several levels. Well done!



spnidel said:


> oh, so AMD's current line-up uses less power than ampere, but it's using just as much power as ampere?! interesting!


Um, YOU are not paying attention!


HenrySomeone said:


> Not really for $1.5-2k cards, no; the 295X2, for instance, approached 700 W under stress testing


Henry was referring to AMD's previous line up for comparison.








AMD Radeon RX 6800 XT Review - NVIDIA is in Trouble (www.techpowerup.com): the RX 6800 XT offers similar performance to the RTX 3080 at better power efficiency and lower noise levels.
				



And that shows that when I said...


lexluthermiester said:


> AMD's current line-up is just as power hungry with very little variation.


...I was spot-on. 328W VS 324W qualifies as very little variation.

Do you two want to continue trying and failing to pick nits?


----------



## spnidel (Dec 2, 2020)

lexluthermiester said:


> Hmm, let's review;
> 
> Yup, that's what you said... Nice cherry pick though, in doing so you effectively proved the point I was making, on several levels. Well done!
> 
> ...



where did you find that 328W VS 324W figure LOL
in the review you linked, peak is 324W vs 350W, and average is 279W vs 339W; you can guess which is which
are you shilling on purpose, or are you just ignorant of what you post?


----------



## lexluthermiester (Dec 2, 2020)

spnidel said:


> where did you find that 328W VS 324W figure LOL


It's called reading. Try it sometime.


spnidel said:


> are you shilling on purpose, or are you just ignorant of what you post?


Ignorant? Yes, there is most definitely some ignorance going on.


----------



## spnidel (Dec 2, 2020)

lexluthermiester said:


> It's called reading. Try it sometime.


pay me, and then I'll have as many online arguments with you as your heart desires, sweetheart


----------



## medi01 (Dec 3, 2020)

lexluthermiester said:


> Nice cherry pick though.



What the heck, dude:






what is it, on average 56% faster? Yet "performance was not there", but it's there with 45%...


----------



## lexluthermiester (Dec 4, 2020)

medi01 said:


> What the heck, dude:
> 
> 
> 
> ...


You're still missing the context. READ;


medi01 said:


> *No, performance was not the issue* with Fermi, *power consumption and price were*. Both are there with Ampere.


Yup, those are your words. Would you like to continue contradicting yourself?


----------



## ityrant (Dec 6, 2020)

well done MSI


----------



## ityrant (Dec 6, 2020)

I'd prefer to see quieter, more efficient GPUs/CPUs in this generation of gaming rigs. Maybe I'll turn to buying a console instead?
(I guess Intel and Nvidia are responsible for this.)


----------



## Dan848 (Jan 15, 2021)

Xuper said:


> 80'c ? pretty bad. ASUS RTX 3090 STRIX OC  = 68'c



ASUS RTX 3090 STRIX OC (Quiet BIOS) was 75°C









ASUS GeForce RTX 3090 STRIX OC Review (www.techpowerup.com): the fastest RTX 3090 tested, by quite a big margin, with a power limit adjustment range that maxes out at 480 W; the review hence includes a whole test run at 480 W to see how much extra headroom the RTX 3090 has left.


----------



## terroralpha (Jan 17, 2021)

HenrySomeone said:


> It's not great, no, but just like with Fermi, they are the undeniably faster cards and in this price bracket that matters (far) more than a couple dozen W of power consumption. If Ampere was on the same 7nm TSMC node though, there would once again be absolutely no competition...however the already bad availability would likely reach epic levels...



if nvidia had gone with 7nm TSMC, the GPU supply shortage would be magnitudes worse. we'd be paying $1500 for a 3080/6800xt, assuming we'd even see any 6800xt at all. nvidia, in their quest to save some money, actually ended up doing us all a favor. i'm not going to pretend i'm an engineer (like some of you here), so i won't pretend to know how well 8nm samsung compares to 10nm samsung or TSMC. HOWEVER, it's clearly better when amd and nvidia don't have to compete over the same node, along with the gaming consoles and the other contracts TSMC has to fulfill.



spnidel said:


> oh, so if nvidia does it, then power consumption doesn't matter? got it, thanks!



are we judging all ampere cards based on the performance of one garbage brand that makes garbage products? asus and gigabyte cards pull 100W less than the "suprim" 3090.

i personally have a ryzen 5900x and a 3090 tuf OC, with a custom loop running a d5 pump, 8 fans, peripherals, plus the obligatory RGB puke, etc. the highest power draw registered on my UPS so far was 512W for the ENTIRE system.

the difference in power consumption between the reference 3080 and 6800xt, as well as the 3090 and 6900xt, is about 60W. but don't forget that ampere is still faster on average and it can do what navi can't. the extra hardware present on ampere allows for nvenc, faster RT, rtx voice, dlss, etc. big navi is just a faster navi; it doesn't do anything special aside from rendering frames faster.

now, if you think the extra features are not worth the extra power draw, then your decision on which card to buy is clear. if i had a choice between a 3080 and a 6800xt, at their proper pricing, i'd pick the 3080 100 times out of 100. but during the fermi days, i had amd cards: a 5970 followed by 2x 6950s unlocked with the 6970 BIOS.


----------



## LukeCuda (Jan 27, 2021)

The power usage debate makes no sense to me. As long as it's not heating up your room, and given that you are spending thousands on high-end parts, the power cost is too low, comparatively, to care about. Noise is comparable to other gens.
So unless you're overclocking, you plug the thing in and get FPS. You would have to be a special-use person to care about power.

The only reason power should matter is if it affects your life somehow. Otherwise it's a hypothetical debate.
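As a rough sanity check on the cost side of this argument, the arithmetic is simple. All inputs below are assumptions for illustration, not figures from this thread; plug in your own numbers:

```python
# All inputs are assumptions, for illustration only.
extra_watts = 150        # extra draw vs. a more frugal card
hours_per_day = 3        # daily gaming time
rate_per_kwh = 0.15      # electricity price in $/kWh

yearly_cost = extra_watts / 1000 * hours_per_day * 365 * rate_per_kwh
print(f"extra cost: ~${yearly_cost:.2f} per year")  # prints "~$24.64"
```

On these assumed numbers the extra power draw costs on the order of $25 a year, which is small next to the price of the card itself; the heat dumped into the room is a separate matter.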


----------



## terroralpha (Jan 27, 2021)

LukeCuda said:


> Power usage debate makes no sense to me. As long as it’s not heating up your room, you are spending thousands on high end parts so power cost is too low comparatively to care about. Noise is comparable to other gens.
> So unless you’re over clocking, you plug the thing in and get FPS. You would have to be a special-use person to care about power.
> 
> the only reason power should matter is if it affects your life somehow. Otherwise it’s hypothetical debates.


so, two things.

first, i upgraded from a 2080 ti to an msi 3090 trio and it was noticeably heating my room up. at no point before did i have to stop playing games because sitting next to my computer became uncomfortable. i live in the northeast US and this was in the month of December. my computer literally became a space heater. so yes, it did affect my life. the suprim draws even more power than the trio, so i can't imagine it would be any better.

and second, there is no reason for the same product (rtx 3090) from two different companies, performing about the same, to have a 100W difference in power draw between the two. the msi card is garbage. the 3090 strix is slightly faster (about 1% according to TPU) than the 3090 suprim but pulls 100W less power.
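to put that 100W gap in numbers, here's a rough back-of-envelope sketch in Python (the 3 hours/day of gaming and the $0.15/kWh rate are my own assumptions, not figures from the review):

```python
# Back-of-envelope: yearly cost of a 100 W power-draw difference between two cards.
# Assumed (hypothetical) usage: 3 hours of gaming per day at $0.15 per kWh.
extra_watts = 100
hours_per_day = 3
rate_per_kwh = 0.15

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * 365  # ~109.5 kWh
extra_cost_per_year = extra_kwh_per_year * rate_per_kwh        # ~$16 per year

print(f"~{extra_kwh_per_year:.0f} kWh/year, ~${extra_cost_per_year:.0f}/year")
```

so the electricity cost is small; the real penalty of the extra 100W is the heat dumped into the room and the cooling/noise needed to deal with it.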


----------



## lexluthermiester (Jan 27, 2021)

terroralpha said:


> first, i upgraded from a 2080 ti to an msi 3090 trio and it was noticeably heating my room up. at no point before did i have to stop playing games because sitting next to my computer became uncomfortable. i live in the northeast US and this was in the month of December. my computer literally became a space heater. so yes, it did affect my life. the suprim draws even more power than the trio, so i can't imagine it would be any better.


And you're surprised? Open a window. It's what I do.


terroralpha said:


> and second, there is no reason for the same product (rtx 3090) from two different companies, performing about the same, to have a 100W difference in power draw between the two.


Actually, there is a perfectly good reason: the silicon lottery. Not all dies perform the same. Some can run fast, some cannot. Some can run really fast at low power; some need a lot of power to run fast. It just depends on how those GPU dies are binned.


----------



## Alirezatha (Mar 26, 2021)

Please help me choose between these three cards.
Silence, build quality and premium feel, and cooling are the most important things for me.

My system:
i9-11900
Z590 ROG Maximus Hero
ROG Ryujin 360
ROG Thor 1200W
ROG Helios

Options:
3090 FTW3, with a very silent cooler and good temps

3090 Suprim, with 4 MLCC and a very nice-looking card with silent cooling

3090 ROG, with the highest factory OC and matching all of my other components, but noisy


----------



## Vayra86 (Mar 29, 2021)

That rig just screams for a ROG card of course.

You never went for best, you went for ROG and top of the price chart, so why change that?


----------



## Alirezatha (Mar 30, 2021)

You're right. I love ROG very, very much. I'll pay for it and enjoy looking at the ROG logo on the components more than when I'm playing.

But the Suprim card has two key advantages, silence and active backplate cooling, and both of them are very important to me. Still, ROG is my favorite brand.
So I'm struggling to make the right choice.


----------



## Deleted member 205776 (Mar 30, 2021)

I'd get the Suprim X for that VRAM cooling, the 3090 kinda needs it.

But if you want a full ROG system, nothing wrong with choosing the ROG card.


----------



## Vayra86 (Mar 30, 2021)

Alirezatha said:


> You're right. I love ROG very, very much. I'll pay for it and enjoy looking at the ROG logo on the components more than when I'm playing.
> 
> But the Suprim card has two key advantages, silence and active backplate cooling, and both of them are very important to me. Still, ROG is my favorite brand.
> So I'm struggling to make the right choice.



There isn't a right choice. I was part joking and part serious... but realistically, between the two... if you enjoy looking at ROG parts and want to keep that intact... I hardly think the differences will make or break either card.



Alexa said:


> I'd get the Suprim X for that VRAM cooling, the 3090 kinda needs it.



Perhaps this one though, as it also speaks for longevity, and this card ain't cheap.


----------



## Deleted member 205776 (Mar 30, 2021)

Vayra86 said:


> Perhaps this one though, as it also speaks for longevity, and this card ain't cheap.


Yeah, I've heard that the Suprim X has the best VRAM cooling out of all the AIB 3080/3090 cards. I would've gotten a 3070 Suprim X (it was in stock and at the same price as my 520 EUR Gaming X Trio), but it went out of stock. This card's still good though. My GDDR6 doesn't get nearly as hot.


----------



## Alirezatha (Mar 31, 2021)

1. Is active backplate cooling really effective?

2. How is the fan noise? Are they quiet?


----------



## Deleted member 205776 (Mar 31, 2021)

Alirezatha said:


> 1. Is active backplate cooling really effective?
> 
> 2. How is the fan noise? Are they quiet?


1. The VRAM gets its own heatpipe and heatsink, and there are thermal pads between the PCB and the backplate.

2. Can't speak for the Suprim, but my Gaming X Trio is extremely quiet, probably the quietest thing in my system right now. I think the Suprim has the exact same fans, so yeah, it'll be quiet.


----------



## Alirezatha (Mar 31, 2021)

Thanks for your help. I'll probably go for the Suprim X in my full ROG rig.


----------



## lexluthermiester (Mar 31, 2021)

Alirezatha said:


> 1. Is active backplate cooling really effective?


It can be, to a limited degree. Your system needs good airflow for it to work well.


----------



## Alirezatha (Apr 1, 2021)

I'll place a 120mm fan on the backplate for the most efficient cooling.


----------

