
MSI GeForce RTX 3090 Suprim X

oh, so if nvidia does it, then power consumption doesn't matter? got it, thanks!
Not really for $1.5-2k cards, no; the R9 295x2, for instance, approached 700W in the stress test:
[chart: power consumption, maximum]

I wouldn't be so sure about that. The 8nm Samsung and 7nm TSMC are fairly similar in density. There are pros and cons on both sides, and NV went for better yields and a lower price per wafer instead of slightly better power efficiency. Either way, it would have been better for NV to go with TSMC, but saying there would be absolutely no competition had they gone 7nm is an exaggeration.
I'd be pretty sure; there already isn't that much competition when you look at the whole picture. In particular, there's the horrendously bad ray tracing performance of RDNA2, which will become much more important before the life cycle of the current cards is over (I'd wager that in two years or less, more than half of new titles will implement some sort of RT functionality). And last but definitely not least, there's DLSS, which will help prolong the usability of Ampere cards, especially at 4K but also at lower resolutions further down the line. Now imagine considerably better power efficiency coupled with notably higher clocks on top of all this (from 7nm TSMC), and I don't think what I said was exaggerated at all (although obviously, AMD would have to position their MSRPs much lower in that case).
 
The 8nm Samsung and 7nm TSMC are fairly similar in density.
I keep hearing that 8nm Samsung is a 10nm rebrand, so, shrug.

Except that this card is a HUGE jump in performance over the RTX 2000 series cards, something that didn't happen with the Fermi-gen cards
An interesting myth.

No, performance was not the issue with Fermi; power consumption and price were. Both are there with Ampere.
 
I'd be pretty sure; there already isn't that much competition when you look at the whole picture. In particular, there's the horrendously bad ray tracing performance of RDNA2, which will become much more important before the life cycle of the current cards is over (I'd wager that in two years or less, more than half of new titles will implement some sort of RT functionality). And last but definitely not least, there's DLSS, which will help prolong the usability of Ampere cards, especially at 4K but also at lower resolutions further down the line. Now imagine considerably better power efficiency coupled with notably higher clocks on top of all this (from 7nm TSMC), and I don't think what I said was exaggerated at all (although obviously, AMD would have to position their MSRPs much lower in that case).

I don't think it's bad, but it does lag behind NV's cards. I also don't think ray tracing matters most here, given the current state of software support for it. But that's just how I see it at this point in time. That is your wager, but we can't be certain. Even though NV's RT performance is better than AMD's, it isn't great in certain games either. So if your wager is correct and there will be plenty of new games with RT, my wager is that they would require more RT performance than current cards, either NV or AMD, can offer (and that includes a DLSS-like feature).
The thing is, with a TSMC-made Ampere card, you don't know how much of an improvement there would have been, to be honest. It might have had the same performance as the Samsung version, just with less power draw.

BTW, the R9 295x2 was a dual-GPU card on a node from the stone age, so I don't think it's relevant. And if you compare it against a 3090 peaking over 500W, the R9 295x2 doesn't look bad at all for a dual-chip card made on 28nm.
I keep hearing that 8nm Samsung is 10nm rebrand, so, shrug.
Well, yes, that's what people say, but you already know that "7nm" or "8nm" is just a naming scheme, nothing more.
 
An interesting myth.
No, performance was not the issue with Fermi; power consumption and price were.
Oh really? Let's review.
The first shows Crysis with a GTX 260, the next with a GTX 560. Pay attention to the 1920x1200 scores in particular.

Now let's look at power usage, shall we?
Isn't that interesting?

Now let's look at the top-tier models for each of those generations;
Hmm..

Now look at the power consumption for each;
Well isn't that interesting also?

You were saying what now?
Context is important. Try it sometime. You too @ratirt

Ampere is NOT Fermi 2.0. A perfectly rubbish notion. Making such a claim is as ridiculous as it is completely lacking in merit.
 
Hmm?

[chart: Crysis FPS, 1680x1050]


285 - 26.1 FPS
580 - 47 FPS

80% more perf is not a huge perf jump, but 45% more perf is:

[chart: relative performance, 3840x2160]


:peace:

By this metric, Ampere as "Fermi 2.0" is twice as bad as the original Fermi, since it pushed performance forward by only half as much.
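For the record, here's the quick arithmetic behind those two uplift figures; a minimal sketch in Python using the FPS numbers quoted above (the 45% Ampere figure is read off TPU's relative-performance chart):

```python
# Back-of-the-envelope check of the generational uplift figures quoted above.
gtx_285_fps = 26.1   # Crysis, 1680x1050, from the chart above
gtx_580_fps = 47.0

fermi_uplift = (gtx_580_fps / gtx_285_fps - 1) * 100
print(f"GTX 285 -> GTX 580: {fermi_uplift:.0f}% faster")   # ~80%

ampere_uplift = 45   # 3090 over 2080 Ti at 4K, per the relative-performance chart
print(f"Fermi's uplift is ~{fermi_uplift / ampere_uplift:.1f}x Ampere's")   # ~1.8x
```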
 
That's not what was said; don't twist words out of context. AMD's current line-up is just as power hungry with very little variation.

oh, so AMD's current line-up uses less power than ampere, but it's using just as much power as ampere?! interesting!
 
oh, so AMD's current line-up uses less power than ampere, but it's using just as much power as ampere?! interesting!
They are not far apart, and even if you push the 6800 XT to high heaven, to where it consumes at least as much as a 3080, it still doesn't match the latter's performance:
[chart: power consumption, gaming peak]
[chart: relative performance, 2560x1440]
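If you want to argue efficiency rather than raw draw, the fairer metric is performance per watt. A minimal sketch, with placeholder inputs rather than actual review figures; plug in the averages from whichever review you trust:

```python
def perf_per_watt(avg_fps: float, avg_gaming_watts: float) -> float:
    """Frames delivered per watt of average gaming power draw."""
    return avg_fps / avg_gaming_watts

# Hypothetical numbers for illustration only -- not review data.
card_a = perf_per_watt(avg_fps=100.0, avg_gaming_watts=330.0)
card_b = perf_per_watt(avg_fps=96.0, avg_gaming_watts=290.0)
print(f"card A: {card_a:.3f} FPS/W, card B: {card_b:.3f} FPS/W")
```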
 
They are not far apart, and even if you push the 6800 XT to high heaven, to where it consumes at least as much as a 3080, it still doesn't match the latter's performance:
[chart: power consumption, gaming peak]
[chart: relative performance, 2560x1440]
don't care LOL, one card is 500W and the other isn't
 
Fanboy or not, the same garbage is being recycled as fact. I posted links to a "fact based review" a while back and I'm still waiting for the perpetrators to present their evidence!
 
I was actually considering getting a 3000 series nvidia GPU, but the power consumption turned me off entirely
 
I keep hearing this BS from multiple forum members, on various forums! Evidence please, not conjecture or your best guesstimate :rolleyes:

We went from "AMD is so many years behind" to "AMD will maybe compete with 2080Ti, but I doubt it" to full throttle meltdowns.

Have mercy.
 
Hmm?

[chart: Crysis FPS, 1680x1050]


285 - 26.1 FPS
580 - 47 FPS
Hmm, let's review;
No, performance was not the issue with Fermi; power consumption and price were.
Yup, that's what you said... Nice cherry-pick though; in doing so, you effectively proved the point I was making, on several levels. Well done!

oh, so AMD's current line-up uses less power than ampere, but it's using just as much power as ampere?! interesting!
Um, YOU are not paying attention!
Not really for $1.5-2k cards, no; the R9 295x2, for instance, approached 700W in the stress test:
[chart: power consumption, maximum]
Henry was referring to AMD's previous line up for comparison.
And that shows that when I said...
AMD's current line-up is just as power hungry with very little variation.
...I was spot-on. 328W vs 324W qualifies as very little variation.

Do you two want to continue trying and failing to pick nits?
 
Hmm, let's review;

Yup, that's what you said... Nice cherry-pick though; in doing so, you effectively proved the point I was making, on several levels. Well done!


Um, YOU are not paying attention!

Henry was referring to AMD's previous line up for comparison.
And that shows that when I said...

...I was spot-on. 328W vs 324W qualifies as very little variation.

Do you two want to continue trying and failing to pick nits?

where did you find that 328W vs 324W figure LOL
in the review you linked, peak is 324W vs 350W and average is 279W vs 339W; you can guess which is which ;)
are you shilling on purpose, or are you being deliberately ignorant about what you post?
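Putting percentages on the numbers being thrown around in this exchange (all figures as quoted above):

```python
# Relative gaps between the power figures quoted in this exchange.
def gap_pct(a: float, b: float) -> float:
    return abs(a - b) / min(a, b) * 100

print(f"328W vs 324W: {gap_pct(328, 324):.1f}% apart")           # ~1.2% -- 'very little variation'
print(f"peak, 324W vs 350W: {gap_pct(324, 350):.1f}% apart")     # ~8%
print(f"average, 279W vs 339W: {gap_pct(279, 339):.1f}% apart")  # ~21.5%
```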
 
What the heck, dude:

[chart: relative performance summary]


what is it, 56% faster on average? Yet "performance was not there" at 56%, but it's there with 45%... :D
You're still missing the context. READ;
No, performance was not the issue with Fermi, power consumption and price was. Both are there with Ampere.
Yup, those are your words. Would you like to continue contradicting yourself?
 
well done MSI :roll:
 
I'd prefer to see quieter, more efficient GPUs/CPUs in this generation of gaming rigs. Maybe I'll turn to buying a console instead?
(I guess Intel and Nvidia are responsible for this.)
 
80°C? Pretty bad. The ASUS RTX 3090 STRIX OC = 68°C

The ASUS RTX 3090 STRIX OC (Quiet BIOS) was 75°C

 
It's not great, no, but just like with Fermi, they are undeniably the faster cards, and in this price bracket that matters (far) more than a couple dozen watts of power consumption. If Ampere were on the same 7nm TSMC node, though, there would once again be absolutely no competition... however, the already bad availability would likely reach epic levels...

if nvidia had gone with 7nm TSMC, the GPU supply shortage would be magnitudes worse. we'd be paying $1500 for a 3080/6800xt, assuming we'd even see any 6800xt at all. nvidia, in their quest to save some money, actually ended up doing us all a favor. i'm not going to pretend i'm an engineer (like some of you here), so i won't pretend to know how well 8nm samsung compares to 10nm samsung or TSMC. HOWEVER, it's clearly better when amd and nvidia don't have to compete over the same node, along with the gaming consoles and other contracts TSMC has to fulfil.

oh, so if nvidia does it, then power consumption doesn't matter? got it, thanks!

are we judging all ampere cards based on the performance of one garbage brand that makes garbage products? asus and gigabyte cards pull 100W less than the "suprim" 3090

i personally have a ryzen 5900x and a 3090 tuf OC, with a custom loop running a d5 pump, 8 fans, peripherals, plus the obligatory RGB puke, etc. the highest power draw registered on my UPS so far was 512W for the ENTIRE system.

the difference in power consumption between the reference 3080 and 6800xt, as well as the 3090 and 6900xt, is about 60W. but don't forget that ampere is still faster on average and it can do what navi can't. the extra hardware present on ampere allows for nvenc, faster RT, rtx voice, dlss, etc. big navi is just a faster navi; it doesn't do anything special aside from rendering frames faster.

now, if you think the extra features are not worth the extra power draw, then your decision on which card to buy is clear. if i had a choice between a 3080 and a 6800xt, at their proper pricing, i'd pick the 3080 100 times out of 100. but during the fermi days, i had amd cards: a 5970 followed by 2x 6950s unlocked with the 6970 BIOS.
 
The power usage debate makes no sense to me. As long as it's not heating up your room, you're spending thousands on high-end parts, so the power cost is comparatively too low to care about. Noise is comparable to other gens.
So unless you're overclocking, you plug the thing in and get FPS. You would have to be a special-use person to care about power.

The only reason power should matter is if it affects your life somehow. Otherwise it's all hypothetical debates.
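For anyone who does want to put a number on it, here's a rough cost estimate; the usage hours and electricity rate are assumptions, so adjust for your own situation:

```python
# Rough annual cost of an extra 100W of GPU power draw (e.g. the gap
# between the two 3090 models discussed in this thread).
extra_watts = 100
hours_per_day = 4        # assumed gaming time
rate_per_kwh = 0.15      # assumed electricity price in $/kWh; varies by region

kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(f"{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * rate_per_kwh:.2f}/yr")
# ~146 kWh/yr -> ~$21.90/yr under these assumptions
```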
 
The power usage debate makes no sense to me. As long as it's not heating up your room, you're spending thousands on high-end parts, so the power cost is comparatively too low to care about. Noise is comparable to other gens.
So unless you're overclocking, you plug the thing in and get FPS. You would have to be a special-use person to care about power.

The only reason power should matter is if it affects your life somehow. Otherwise it's all hypothetical debates.
so, 2 things.

first, i upgraded from a 2080 ti to an msi 3090 trio, and it actually was heating my room up noticeably. at no point before did i have to stop playing games because sitting next to my computer became uncomfortable. i live in the north east US and this was in the month of december. my computer literally became a space heater (rough heat math below), so yes, it did affect my life. the suprim draws even more power than the trio, so i can't imagine it would be any better

and second, there is no reason for the same product (rtx 3090) from two different companies, performing about the same, to have a difference of 100W in power draw between the two. the msi card is garbage. the 3090 strix is slightly faster (about 1% according to TPU) than the 3090 suprim but pulls 100W less power.
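On the space heater point: essentially all of a PC's electrical draw ends up as heat in the room, so the conversion to heater terms is direct. A sketch using the ~500W whole-system ballpark mentioned earlier in the thread:

```python
# A PC dissipates virtually all of its electrical draw as heat into the room.
system_watts = 500                     # ballpark whole-system draw from earlier posts
btu_per_hour = system_watts * 3.412    # 1 W = 3.412 BTU/h
print(f"{system_watts} W ≈ {btu_per_hour:.0f} BTU/h")
# ~1706 BTU/h -- roughly a small space heater running on its low setting
```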
 