
MSI GeForce RTX 3090 Suprim X

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
28,640 (3.74/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
The MSI RTX 3090 Suprim X is the company's new flagship card. It is highly overclocked, to an 1860 MHz rated boost clock, and comes with a 420 W power limit. In our review, it was the quietest RTX 3090 we've ever tested, quieter than the EVGA FTW3 Ultra, almost whisper-quiet.

 
474w peak in games
....oooookay then
fermi 2.0
 
A card like this would definitely benefit from liquid cooling. Its heat output is similar to 2x 1080 Ti cards, and I remember how much airflow such a setup required to keep the case relatively cool.
 
80°C? Pretty bad. ASUS RTX 3090 STRIX OC = 68°C
 
Oh wow, how the mighty have fallen! Don't remember such a massive dip in perf/W with a new uarch+node since ~ well perhaps ever o_O
[Charts: Performance per Watt (FPS), 1920x1080 and 2560x1440]
 
Oh wow, how the mighty have fallen! Don't remember such a massive dip in perf/W with a new uarch+node since ~ well perhaps ever o_O

Well, it's no RX 590 at least.
 
From reading this review, my take is that the EVGA FTW3 is the better card; even if it's slightly louder, it runs a lot cooler and has a much higher power limit.
 
Oh wow, how the mighty have fallen! Don't remember such a massive dip in perf/W with a new uarch+node since ~ well perhaps ever o_O

let me show you the chart that actually matters:
[Chart: Performance per Watt, 3840x2160]


The 3070 is actually 20% more efficient than the most efficient Turing GPU and even the 3090 is 10% more efficient.
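If you want to sanity-check figures like that, the perf/W numbers in these charts are just average FPS divided by average board power. Here's a quick sketch ~ the FPS and wattage values below are made-up placeholders, not the review's measured data:

```python
# Toy perf/W comparison. The FPS and power figures are placeholders for
# illustration only, not TechPowerUp's measured data.
cards = {
    "Turing (baseline)": {"fps": 100.0, "watts": 270.0},
    "RTX 3070":          {"fps": 100.0, "watts": 225.0},
    "RTX 3090":          {"fps": 150.0, "watts": 370.0},
}

def perf_per_watt(card):
    return card["fps"] / card["watts"]

baseline = perf_per_watt(cards["Turing (baseline)"])
for name, card in cards.items():
    eff = perf_per_watt(card)
    print(f"{name}: {eff:.3f} FPS/W ({(eff / baseline - 1) * 100:+.0f}% vs Turing)")
```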
 
let me show you the chart that actually matters:

Actually that's the one that matters the least if you want to get the most accurate measure of efficiency.

The more frames a GPU renders, the less efficient it gets, because between frames the chip's power draw drops; the more peaks and troughs there are, the worse the power consumption gets. That happens because, as with any other system, you need to spend energy to bring it from a low power state to a higher power state. The most efficient scenario is one where you maintain a constant load for as long as possible, and that happens at 4K, where each frame takes the longest to render.

Do you not find it odd that practically all GPUs become more power efficient the higher the resolution gets, no matter how old/bad they are? It's because of what I just explained, so if you want to test efficiency you need to look at a workload that generates the largest number of frames per second.
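To see the effect being described, here's a toy model: two load traces do the same amount of work, but one bounces between idle and full load, and every low-to-high ramp is charged a fixed energy cost. The power and transition numbers are invented for illustration, not measured:

```python
# Toy model of the "peaks and troughs" argument. All numbers are assumptions
# picked for illustration, not measurements.
IDLE_W, LOAD_W = 30.0, 300.0   # assumed power draw in the idle and loaded states
RAMP_J = 5.0                   # assumed extra energy per low->high transition
TICK_S = 0.01                  # each tick is 10 ms

steady = [1] * 500             # constant full load: 500 ticks of work
bursty = [1, 0] * 500          # same 500 ticks of work, interleaved with idle gaps

def energy_joules(trace):
    joules, prev = 0.0, 0
    for state in trace:
        joules += (LOAD_W if state else IDLE_W) * TICK_S
        if state and not prev:         # ramping up from idle costs extra energy
            joules += RAMP_J
        prev = state
    return joules

for name, trace in (("steady", steady), ("bursty", bursty)):
    work = sum(trace)                  # both traces do 500 ticks of real work
    print(f"{name}: {work / energy_joules(trace):.3f} work ticks per joule")
```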
 
So you're saying the most popular (?) gaming resolution or 1440p doesn't matter, mkay if you say so :wtf:

Get some glasses. I said that the best way to measure efficiency is to make the GPU render as many frames as possible because that's the worst case scenario, nothing less nothing more.
 
You sure you're quoting the right person? If not then I guess my reply would change accordingly :confused:

Your comment was right after mine so I could only presume it was addressed to me.
 
Do you not find it odd that practically all GPUs become more power efficient the higher the resolution gets, no matter how old/bad they are? It's because of what I just explained, so if you want to test efficiency you need to look at a workload that generates the largest number of frames per second.

We're talking about the relative efficiency here, not efficiency in absolute terms.
Measuring the relative efficiency at 1080p with these GPUs is very dumb, not only because you're going to be CPU limited in many games, but also because you have a shitton of bandwidth and memory modules that consume so much power at low resolutions but don't provide any value.
The memory chips alone consume about 70W on the 3090 if I'm not mistaken.
And even if we ignore the higher resolutions the 3070 is still the most efficient Nvidia GPU.
I'm not saying Ampere is the biggest upgrade Nvidia has ever done, but it's not as bad as some people think.
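To illustrate the point about fixed memory/board power, here's a rough sketch with two hypothetical cards ~ a big one dragging ~70 W of constant overhead around and a smaller one with less. Every FPS and wattage value is invented, and real cards also drop core power when CPU-limited, but the fixed overhead alone is enough to skew relative perf/W at 1080p:

```python
# Rough sketch of how a large fixed memory/board overhead distorts relative
# perf/W at low resolutions. Every number here is a made-up assumption.
cards = {
    #                     constant overhead, core power under load, FPS per resolution
    "big card (24 GB)":  {"overhead_w": 70, "core_w": 280, "fps": {"1080p": 180, "2160p": 80}},
    "small card (8 GB)": {"overhead_w": 30, "core_w": 190, "fps": {"1080p": 170, "2160p": 45}},
}

for res in ("1080p", "2160p"):
    print(f"--- {res} ---")
    for name, c in cards.items():
        watts = c["overhead_w"] + c["core_w"]          # total board power
        print(f"  {name}: {c['fps'][res] / watts:.3f} FPS/W")
```

With these placeholder numbers the small card looks roughly 50% more efficient at 1080p, yet the big card comes out ahead at 2160p ~ same hardware, different resolution, opposite conclusion.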
 
We're talking about the relative efficiency here, not efficiency in absolute terms.

Then don't draw conclusions such as "the only chart that matters". If you want to prove an architecture is not as bad then the only proper and objective way is to measure things in absolute terms.
 
Then don't draw conclusions such as "the only chart that matters". If you want to prove an architecture is not as bad then the only proper and objective way is to measure things in absolute terms.

I said the chart that actually matters, not the only chart that matters.
And also the most efficient RDNA2 chip is barely beating the 12nm 1660Ti at 1080p (in terms of efficiency), based on your magical logic 1080p in this case is the most accurate way to measure efficiency, how about that?
 
based on your magical logic 1080p is the most accurate way to measure efficiency, how about that?

My logic never claimed 1080p is the most accurate way; I simply said that you need a workload that generates the most peaks and troughs in power. If that happens to be playing a game at 1080p, then so be it. And I don't think there is anything odd about a 1660 Ti being almost as power efficient as a chip that has almost 5 times the transistor count and runs at a faster clock speed; the bigger and faster the chip, the more power leakage you get. If you look at similarly sized GPUs, AMD wins by a landslide.
 
I said the chart that actually matters, not the only chart that matters.
And also the most efficient RDNA2 chip is barely beating the 12nm 1660Ti at 1080p (in terms of efficiency), based on your magical logic 1080p in this case is the most accurate way to measure efficiency, how about that?
Well, tbf the 3070 is unlikely to be the most efficient Ampere GPU in Nvidia's lineup; if Nvidia doesn't decide to go TSMC 7nm, it'll probably be the RTX 3060 Ti or RTX 3050 Ti or something. The point is that Ampere is not that great a uarch, especially compared to Turing. People often bring up Polaris as well ~ do you recall where it was made? Then Vega on TSMC 7nm, and finally RDNA on the same node, followed by RDNA2. Whether you agree with everything that's been said here or not, the fact remains this is more like a repeat of Fermi vs Evergreen.
 
Well, tbf the 3070 is unlikely to be the most efficient Ampere GPU in Nvidia's lineup; if Nvidia doesn't decide to go TSMC 7nm, it'll probably be the RTX 3060 Ti or RTX 3050 Ti or something. The point is that Ampere is not that great a uarch, especially compared to Turing. People often bring up Polaris as well ~ do you recall where it was made? Then Vega on TSMC 7nm, and finally RDNA on the same node, followed by RDNA2. Whether you agree with everything that's been said here or not, the fact remains this is more like a repeat of Fermi vs Evergreen.

You're making up nonsense and labelling it as a fact.
Some of the AMD chips back then were literally 75% more efficient than Fermi, whereas RDNA2 is barely 10-15% more efficient than Ampere despite being on a much better node.
I would go as far as to say that AMD doesn't have any architectural advantage here if we factor in the process node difference.
 
You're making up nonsense and labelling it as a fact.
Some of the AMD chips back then were literally 75% more efficient than Fermi, whereas RDNA2 is barely 10-15% more efficient than Ampere despite being on a much better node.
I would go as far as to say that AMD doesn't have any architectural advantage here if we factor in the process node difference.
Right, so let's see your facts that back up the claim that TSMC 7nm is so much better than whatever Ampere's made on. When you get the numbers, assuming you have the same exact GPU on the two separate nodes, then wake me up! Till then keep your claims to yourself & FYI the most efficient RDNA GPU was in a Mac & way more efficient than the likes of the 5700 XT, so if you think the 6800 is the efficiency king, just wait till you see these chips go into SFF builds or laptops with reasonable clocks :rolleyes:

Really? You lot still going there? I mean if I really wanted to heat my room I would have bought a 295X2.
Not a bad idea actually, though you have to agree (or not) that this brings us virtually a full circle in the computing realm ~ first zen3 & now RDNA2.
 
Right, so let's see your facts that back up the claim that TSMC 7nm is so much better than whatever Ampere's made on. When you get the numbers, assuming you have the same exact GPU on the two separate nodes, then wake me up! Till then keep your claims to yourself & FYI the most efficient RDNA GPU was in a Mac & way more efficient than the likes of the 5700 XT, so if you think the 6800 is the efficiency king, just wait till you see these chips go into SFF builds or laptops with reasonable clocks :rolleyes:

Not a bad idea actually, though you have to agree (or not) that this brings us virtually a full circle in the computing realm ~ first zen3 & now RDNA2.

Anandtech:
"I had mentioned that the 7LPP process is quite a wildcard in the comparisons here. Luckily, I’ve been able to get my hands on a Snapdragon 765G, another SoC that’s manufactured on Samsung’s EUV process. It’s also quite a nice comparison as we’re able to compare that chip’s performance A76 cores at 2.4GHz to the middle A76 cores of the Exynos 990 which run at 2.5GHz. Performance and power between the two chips here pretty much match each other, and a clearly worse than other TSMC A76-based SoCs, especially the Kirin 990’s. The only conclusion here is that Samsung’s 7LPP node is quite behind TSMC’s N7/N7P/N7+ nodes when it comes to power efficiency – anywhere from 20 to 30%."

Now, this is not a straight comparison ~ they're comparing Samsung's 7nm to TSMC's 7nm ~ but it should give you a decent idea of just how much better TSMC's node is.
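Taking the thread's own rough numbers at face value ~ a 10-15% RDNA2 perf/W lead over Ampere and a 20-30% efficiency gap between Samsung's and TSMC's 7nm-class nodes ~ here's a back-of-envelope check of whether an architectural advantage would survive a node swap. These inputs are estimates from this discussion, not measurements, and a phone-SoC node comparison doesn't transfer cleanly to a big GPU:

```python
# Back-of-envelope: would a hypothetical TSMC-built Ampere still trail RDNA2 in
# perf/W? Inputs are the rough figures quoted in this thread, not measurements.
rdna2_leads = (1.10, 1.15)    # RDNA2 perf/W relative to Ampere as shipped
node_gains  = (1.20, 1.30)    # assumed efficiency factor Ampere could gain on TSMC N7

for lead in rdna2_leads:
    for gain in node_gains:
        ratio = gain / lead   # hypothetical TSMC-Ampere perf/W relative to RDNA2
        print(f"RDNA2 lead {lead:.2f}x, node gain {gain:.2f}x -> "
              f"TSMC-built Ampere at {ratio:.2f}x RDNA2's perf/W")
```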
 