
MSI GeForce RTX 3090 Suprim X

Joined
Apr 12, 2013
Messages
7,480 (1.77/day)
Now this is not a straight comparison. They're comparing Samsung's 7nm to TSMC's 7nm, but it should give you a decent idea of just how much better TSMC's node is.
You know that's not how node comparisons work, or do you? There are physical characteristics you can directly measure, then electrical characteristics, & finally the most important part of the puzzle ~ the uarch. Just because you say TSMC 7nm is "much better" than Samsung's 8nm doesn't make it a fact that it'd also be much better for Ampere, unless you have the same GPU/chip/uarch made on the two nodes. The closest comparison IIRC was the VII & Vega 64.

Anyone claiming otherwise is just guesstimating! So I'm waiting for you to provide evidence of how superior (or inferior) one is wrt the other.

Take Zen 3, for instance: with a (major) tweak of Zen 2 you get not only higher IPC but also much higher clocks on the same node. Can you now claim that Zen 3 would clock just as high on 7nm+ or 7LPP, or perhaps 6nm?
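To put that argument in arithmetic terms - a toy model with made-up numbers, not a real measurement: if observed efficiency is roughly (node factor) x (uarch factor), then one product per node can never tell you which factor did the work.

```python
# Toy model of the node-vs-uarch confound (made-up numbers, purely
# illustrative): observed perf/watt = node factor x uarch factor,
# so a single product per node cannot separate the two factors.
observed_ampere = 1.00  # e.g. Ampere on Samsung 8nm, normalized to 1.0
observed_rdna2 = 1.15   # e.g. RDNA2 on TSMC 7nm, 15% more efficient

ratio = observed_rdna2 / observed_ampere

# Two opposite explanations that both reproduce the same observation:
candidates = [
    {"node": 1.15, "uarch": 1.00},  # the node explains everything
    {"node": 1.00, "uarch": 1.15},  # the uarch explains everything
]
for c in candidates:
    assert abs(c["node"] * c["uarch"] - ratio) < 1e-9

print("both decompositions fit the observed efficiency gap equally well")
```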
 

M2B

Joined
Jun 2, 2017
Messages
284 (0.10/day)
Location
Iran
Processor Intel Core i5-8600K @4.9GHz
Motherboard MSI Z370 Gaming Pro Carbon
Cooling Cooler Master MasterLiquid ML240L RGB
Memory XPG 8GBx2 - 3200MHz CL16
Video Card(s) Asus Strix GTX 1080 OC Edition 8G 11Gbps
Storage 2x Samsung 850 EVO 1TB
Display(s) BenQ PD3200U
Case Thermaltake View 71 Tempered Glass RGB Edition
Power Supply EVGA 650 P2
You know that's not how node comparisons work, or do you? There are physical characteristics you can directly measure, then electrical characteristics, & finally the most important part of the puzzle ~ the uarch. Just because you say TSMC 7nm is "much better" than Samsung's 8nm doesn't make it a fact that it'd also be much better for Ampere, unless you have the same GPU/chip/uarch made on the two nodes. The closest comparison IIRC was the VII & Vega 64.

Anyone claiming otherwise is just guesstimating! So I'm waiting for you to provide evidence of how superior (or inferior) one is wrt the other.

Unlike you, I don't label my thoughts as facts.
I'm not saying TSMC's N7P or whatever is definitely better than Samsung's 8N, but based on what we've seen before it's a pretty safe bet that the TSMC node is better both in terms of performance and efficiency.
And yes I'm aware that you can't compare different architectures across different nodes so easily, Mr. Electrical Engineer.

I'm basically wasting my time with someone who can easily lie to prove his made-up nonsense correct.
 
Joined
Apr 12, 2013
Messages
7,480 (1.77/day)
Oh really, so I guess you must've mislabeled these thoughts as well?
while RDNA2 is barely 10-15% more efficient than Ampere while being on a much better node.
Or how about this one?
And also the most efficient RDNA2 chip is barely beating the 12nm 1660Ti at 1080p (in terms of efficiency),
It'd be nice if you'd stopped at the point where you said the 3070 is the most efficient GPU in their lineup - to which I even said that there could well be more efficient Ampere GPUs released after it - but then you just had to bring in some other assumptions to prove a point, huh?
 

M2B

Joined
Jun 2, 2017
Messages
284 (0.10/day)
Location
Iran
Processor Intel Core i5-8600K @4.9GHz
Motherboard MSI Z370 Gaming Pro Carbon
Cooling Cooler Master MasterLiquid ML240L RGB
Memory XPG 8GBx2 - 3200MHz CL16
Video Card(s) Asus Strix GTX 1080 OC Edition 8G 11Gbps
Storage 2x Samsung 850 EVO 1TB
Display(s) BenQ PD3200U
Case Thermaltake View 71 Tempered Glass RGB Edition
Power Supply EVGA 650 P2
Or how about this one?

What in the actual fuck is wrong with you?
That was a different discussion with someone else to prove my point that 1080p is not the best indicator of the relative efficiency for these high-end cards.
Let's just end this mess here and move on. What a fucktard.
 
Joined
Dec 14, 2011
Messages
275 (0.06/day)
Processor 12900K @5.1all Pcore only, 1.23v
Motherboard MSI Edge
Cooling D15 Chromax Black
Memory 32GB 4000 C15
Video Card(s) 4090 Suprim X
Storage Various Samsung M.2s, 860 evo other
Display(s) Predator X27 / Deck (Nreal air) / LG C3 83
Case FD Torrent
Audio Device(s) Hifiman Ananda / AudioEngine A5+
Power Supply Seasonic Prime TX 1000W
Mouse Amazon finest (no brand)
Keyboard Amazon finest (no brand)
VR HMD Index
Benchmark Scores I got some numbers.
80°C? Pretty bad. ASUS RTX 3090 STRIX OC = 68°C
The Strix default profile is very aggressive. I had to play around a lot to get it to my liking, which wasn't as easy as it should be because there is currently an unknown fan offset in Afterburner - presumably it will be fixed in a future update of the program and/or the card. I have mine targeting 2010 MHz / 0.993 V / 70°C / 60% fan, which is reasonably quiet - just audible. There is some variance from game to game.

The out-of-the-box settings for the Suprim's gaming BIOS are much better balanced.
 
Joined
Jul 18, 2016
Messages
354 (0.12/day)
Location
Indonesia
System Name Nero Mini
Processor AMD Ryzen 7 5800X 4.7GHz-4.9GHz
Motherboard Gigabyte X570i Aorus Pro Wifi
Cooling Noctua NH-D15S+3x Noctua IPPC 3K
Memory Team Dark 3800MHz CL16 2x16GB 55ns
Video Card(s) Palit RTX 2060 Super JS Shunt Mod 2130MHz/1925MHz + 2x Noctua 120mm IPPC 3K
Storage Adata XPG Gammix S50 1TB
Display(s) LG 27UD68W
Case Lian-Li TU-150
Power Supply Corsair SF750 Platinum
Software Windows 10 Pro
Aw, I was hoping for a 500W power limit, after the 3080 Suprim X had the highest power limit of all the 3080s.

I really don't get people complaining about the power consumption. If you care so much about it, why buy a pre-overclocked card with a huge cooler and a strong VRM? Even if you wanted the cooler but not the power consumption, you could just lower the limit manually to whatever you want. Higher power limits give the user freedom to overclock higher, and that's never a bad thing.
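For what it's worth, you don't even need vendor software for that. A minimal pynvml sketch of the idea, assuming an NVIDIA card at index 0, the pynvml package installed, admin rights for the set call, and an arbitrary 300 W example target:

```python
# Minimal sketch: read and cap the GPU power limit via NVML.
# Assumptions: NVIDIA GPU at index 0, pynvml installed, admin rights
# for the set call; the 300 W target is just an example.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
print(f"current limit: {current_mw / 1000:.0f} W, "
      f"allowed range: {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W")

target_mw = 300_000  # cap at 300 W (clamped to the card's allowed minimum)
pynvml.nvmlDeviceSetPowerManagementLimit(handle, max(min_mw, target_mw))

pynvml.nvmlShutdown()
```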
 
Joined
Dec 6, 2016
Messages
748 (0.26/day)
Really? You lot still going there? I mean if I really wanted to heat my room I would have bought a 295X2.

Yeah, but the 295X2 destroyed the competition in 4K:



This one only destroys your wallet :roll:
 
Joined
Nov 11, 2016
Messages
3,393 (1.16/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razer DeathAdder V3
Keyboard Razer Huntsman V3 Pro TKL
Software win11
@W1zzard Can you utilize NVIDIA PCAT (which does measure performance per watt) and note down the perf/watt of each GPU in a few games and average them? I think it would be more precise than using power consumption from only one game, and at one resolution at that.
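Something like this is what I have in mind - a minimal sketch with made-up numbers, just to show the averaging (the geometric mean is the usual choice for ratios like perf/watt):

```python
# Sketch of averaging perf/watt across several games (made-up numbers,
# not measurements): perf/watt per game, then a geometric mean.
from math import prod

# (game, average FPS, average board power in watts) - illustrative only
samples = [
    ("Game A", 142.0, 438.0),
    ("Game B", 97.0, 451.0),
    ("Game C", 163.0, 429.0),
]

perf_per_watt = [fps / watts for _, fps, watts in samples]
geo_mean = prod(perf_per_watt) ** (1 / len(perf_per_watt))

for (game, fps, watts), ppw in zip(samples, perf_per_watt):
    print(f"{game}: {ppw:.3f} FPS/W")
print(f"geometric mean: {geo_mean:.3f} FPS/W")
```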

Yeah, but the 295X2 destroyed the competition in 4K:
This one only destroys your wallet :roll:

Didn't you read the 2011 memo? Average FPS for Xfire/SLI setups are useless numbers; all you get are incomplete frames and microstuttering. That's why Nvidia buried it even though they were the first to invent it.
 
Joined
Apr 29, 2011
Messages
135 (0.03/day)
Isn't this card already recalled? Why do people keep arguing about imaginary graphics cards?
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,742 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
Can you utilize NVIDIA PCAT (which does measure performance per watt) and note down the perf/watt of each GPU in a few games and average them? I think it would be more precise than using power consumption from only one game, and at one resolution at that.
I've told NVIDIA I want to check out PCAT; they said they'll look into it, but so far they don't have enough units to give me one.

Isn't this card already recalled?

A $1750 card that is only 16% faster than a $700 card getting a "highly recommended" badge just doesn't feel right.
I have between $1500 and $2000 to spend on a graphics card; what would you recommend?
 
Joined
Nov 26, 2020
Messages
106 (0.07/day)
Location
Germany
System Name Meeeh
Processor 8700K at 5.2 GHz
Memory 32 GB 3600/CL15
Video Card(s) Asus RTX 3080 TUF OC @ +175 MHz
Storage 1TB Samsung 970 Evo Plus
Display(s) 1440p, 165 Hz, IPS
474W peak in games
....oooookay then
Fermi 2.0

No it's not; simply look at performance per watt, which looks fine - for reference cards, that is, which is the only thing that matters.

AMD's 6800 series power usage also skyrockets when overclocked; the 6800 XT easily goes to 300+ watts in gaming when overclocked, while the 3080 goes to 350 but performs better on average + has DLSS and better ray tracing.

The 3090 beats them all and hits 400+ at times; I'm sure the 6900 XT is going to be up there as well post-OC.

Couldn't care less about the 3090 or 6900 XT since they're $1000+ cards (on paper), the worst perf/value you can get, and the 6900 XT is going to be missing in action for months.

Hell, even the 6800 series is MIA, and most retailers say people will likely wait deep into 2021 to get one; AMD fucked their launch up more than Nvidia did.
 
Joined
Sep 17, 2014
Messages
22,358 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Right, so let's see your facts backing up the claim that TSMC 7nm is so much superior to whatever Ampere's made on? When you get the numbers, assuming you have the same exact GPU on the two separate nodes, then wake me up! Till then keep your claims to yourself, & FYI the most efficient RDNA GPU was in a Mac & way more efficient than the likes of the 5700 XT, so if you think the 6800 is the efficiency king, just wait till you see these chips go into SFF builds or laptops with reasonable clocks :rolleyes:

Not a bad idea actually, though you have to agree (or not) that this brings us virtually full circle in the computing realm ~ first Zen 3 & now RDNA2.

Lol. In the eyes of many people, that was exactly the reason AMD was going to catch up prior to this release. And here you are saying the opposite AFTER the cards are out? I think it's crystal clear now that AMD gains the advantage through a combination of architecture (they finally !!! caught up - look at how the GPUs work with boost, there is feature-set parity, etc.) and having a better node. So yes, it's pretty logical to assume - even without a shred of direct evidence beyond benchmarks, perf/watt numbers, and die sizes/specs - that TSMC has that much of a better node.

On the other side of the fence, we see Nvidia releasing GPUs that are not as optimal as they usually are. Power budgets have risen tremendously, after half a decade of strict adherence to 250-ish-watt GPUs at the top. Now they exceed that by a royal margin - not by 20W or so as they did with Turing, but in the case of the x80, by nearly 100W depending on what gen you compare to. That's huge, and it's definitely not down to architecture, where Nvidia has been leading and kind of still does, as their time to market was and still is better, despite them 'rushing' Ampere.

I have between $1500 and $2000 to spend on a graphics card; what would you recommend?

Save half, upgrade faster, duh. Or buy two x80s so you can SL...oh wait.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,742 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
Joined
Nov 12, 2012
Messages
636 (0.15/day)
Location
Technical Tittery....
System Name "IBT 10x Maximum Stable"
Processor Intel i5 4690K @ 4.6GHz -> 100xx46 - 1.296v
Motherboard MSI MPower Z97
Cooling Corsair H100i + 2x Corsair "HP Edition" SP120's
Memory 4x4GB Corsair Vengeance Pro 2400MHz @ 2400MHz 10-11-12-31-1T - 1.66v
Video Card(s) MSI Gaming GTX970 4GB @ 1314 Core/1973 Mem/1515 Boost
Storage Kingston 3K 120GB SSD + Western Digital 'Green' 2TB + Samsung Spinpoint F3 1TB
Display(s) Iiyama Prolite X2377HDS 23" IPS
Case Corsair Carbide 300R
Audio Device(s) Rotel RA-04/Cambridge Audio Azur 540R + B&W DM302/Cerwin Vega AT12 / Sony MDR-XB700 & FiiO E5
Power Supply EVGA NEX650G + Silverstone Extensions
Mouse Always failing me....
Keyboard Endlessly broken.....
Software Windoze 7 Pro 64-bit/Windoze 10 Pro
Benchmark Scores I had some of these once upon a time? Old age has seen me misplace them....
I have between $1500 and $2000 to spend on a graphics card; what would you recommend?
Spending half the amount for 85%+ of the performance, and not buying silly overpriced GPUs to support the nonsense pricing schemes that have arrived over the last six years? That's what I'd personally recommend... not this.

Put the rest of the money you just saved into shares of your favorite tech company.


Edit:

"and it should be quiet" - FE edition card with waterblock - still cheaper and probably as fast (if not faster with tweaks)
 
Joined
Apr 1, 2017
Messages
420 (0.15/day)
System Name The Cum Blaster
Processor R9 5900x
Motherboard Gigabyte X470 Aorus Gaming 7 Wifi
Cooling Alphacool Eisbaer LT360
Memory 4x8GB Crucial Ballistix @ 3800C16
Video Card(s) 7900 XTX Nitro+
Storage Lots
Display(s) 4k60hz, 4k144hz
Case Obsidian 750D Airflow Edition
Power Supply EVGA SuperNOVA G3 750W
No it's not; simply look at performance per watt, which looks fine - for reference cards, that is, which is the only thing that matters.
474W is 474W regardless of what the performance per watt is - it's not a card I would put in my machine

AMD's 6800 series power usage also skyrockets when overclocked
which is where an undervolt overclock comes into play...

The 3090 beats them all and hits 400+ at times; I'm sure the 6900 XT is going to be up there as well post-OC.
idk, given that the 6900 XT has the same TDP as the 6800 XT, and taking Zen 2 and Zen 3 TDP numbers into account, I wouldn't be surprised if the power draw difference between the 6800 XT and the 6900 XT was minimal
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
@W1zzard

I realize what I'm asking is a lot of work, but I think OC performance is relevant to many reviewers out there.
It would be cool if the OCed GPU were tested across the board (all games/charts, not just one)

The 3090 beats them all and hits 400+ at times
400+ is not how I'd describe a card knocking on the 500W door, and that's before being OCed:

[attached: power consumption screenshot]
 
Joined
Apr 1, 2017
Messages
420 (0.15/day)
System Name The Cum Blaster
Processor R9 5900x
Motherboard Gigabyte X470 Aorus Gaming 7 Wifi
Cooling Alphacool Eisbaer LT360
Memory 4x8GB Crucial Ballistix @ 3800C16
Video Card(s) 7900 XTX Nitro+
Storage Lots
Display(s) 4k60hz, 4k144hz
Case Obsidian 750D Airflow Edition
Power Supply EVGA SuperNOVA G3 750W
@W1zzard

I realize what I'm asking is a lot of work, but I think OC performance is relevant to many reviewers out there.
It would be cool if the OCed GPU were tested across the board (all games/charts, not just one)


400+ is not how I'd describe a card knocking on the 500W door, and that's before being OCed:

[attached: power consumption screenshot]
yeah, there's no way that thing stays under 500W when overclocked... power consumption wise these cards are a fucking joke, jesus christ
 
Joined
Apr 12, 2013
Messages
7,480 (1.77/day)
Lol. In the eyes of many people, that was exactly the reason AMD was going to catch up prior to this release.
You can keep LOLing all you want; unless you have a GA104 die on 8nm Samsung & 7nm TSMC, you cannot say for sure how the latter is superior (or inferior) to the former ~ that's a fact. Anything else is just conjecture, & that's what I was fighting ~ the assumption that one node is much superior to the other without actual hard numbers to back any claim to that effect. So unless you're now going to claim that you have that data, how about you keep the same assumptions in check?

It wasn't the only reason; if zen2 -> zen3 & RDNA -> RDNA2 have taught us anything, it's that uarch is a major part of the equation which simply cannot be ignored. IMO Ampere is a dud relative to Turing; now you can claim that TSMC 7nm would've made a big difference here, but until we get real data to cross-reference how performance varies between nodes ~ you're just building castles in the air.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,742 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
I realize what I'm asking is a lot of work, but I think OC performance is relevant to many reviewers out there.
It would be cool if the OCed GPU were tested across the board (all games/charts, not just one)
I agree it would be cool, but it's simply not feasible with all the tests and games that I have
 
Joined
Nov 26, 2020
Messages
106 (0.07/day)
Location
Germany
System Name Meeeh
Processor 8700K at 5.2 GHz
Memory 32 GB 3600/CL15
Video Card(s) Asus RTX 3080 TUF OC @ +175 MHz
Storage 1TB Samsung 970 Evo Plus
Display(s) 1440p, 165 Hz, IPS
yeah, there's no way that thing stays under 500W when overclocked... power consumption wise these cards are a fucking joke, jesus christ

I doubt people who pay $1750 for a card will care about it using 100 watts more or less. This card is 31 dB under load, which is very quiet for this amount of performance, and flagship GPUs always use a lot of power - big dies.

Simply don't buy this card and don't overclock if you think power usage is important; both the AMD 6000 and Nvidia 3000 series use way more power when overclocked, and OC scaling is low on both (for 24/7 OC, not LN2 testing and other useless methods).

Custom cards with higher boost clocks yield ~2% extra performance while power draw increases by 20% - this is true on both the AMD 6000 and Nvidia 3000 series. OC some more and see another 2-3% performance, with another 20% bump on power. You are now up ~44% in power usage (the two 20% bumps compound: 1.2 x 1.2 = 1.44) but only 4-5% up in performance - worth it?
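A quick sanity check of that scaling argument, as a minimal sketch using the rough numbers above (illustrative, not measured):

```python
# Compound the two OC steps quoted above and compare efficiency to stock
# (rough illustrative numbers: +2% then +2.5% perf, +20% power each step).
def rel_perf_per_watt(perf: float, power: float) -> float:
    """Perf/watt relative to stock (1.0 = stock efficiency)."""
    return perf / power

step1 = rel_perf_per_watt(1.02, 1.20)                  # first OC step
step2 = rel_perf_per_watt(1.02 * 1.025, 1.20 * 1.20)   # both steps compounded

print(f"after step 1: {step1:.2f}x stock perf/watt")   # ~0.85x
print(f"after step 2: {step2:.2f}x stock perf/watt")   # ~0.73x
```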

Not to mention that you pay several hundred dollars extra for some of these custom cards, which deliver almost nothing over the reference solutions (which are good on both AMD and Nvidia now - which is why custom cards barely perform better).

Hell, some 6800 XTs end up at $800 and even $900; that's insane for a $650 card.
And that's the MSRP, not the scalper price.
 

the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,010 (2.39/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850W Gold (ATX 3.0)
Software W10
I agree it would be cool, but it's simply not feasible with all the tests and games that I have

For argument's sake, is the OC % gain applicable to most other scenarios? If so, there's absolutely no need to test all games. A generalisation of 5% better means 5% better. Unless other games don't scale well with clock increases? And then again, these days, across AMD's and Nvidia's cards, we see, what, 5% uplifts? A few fps, or an extra 10 at 200fps... It's 'almost' not worth it.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,742 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
For argument's sake, is the OC % gain applicable to most other scenarios? If so, there's absolutely no need to test all games. A generalisation of 5% better means 5% better. Unless other games don't scale well with clock increases? And then again, these days, across AMD's and Nvidia's cards, we see, what, 5% uplifts? A few fps, or an extra 10 at 200fps... It's 'almost' not worth it.
It's close enough
 
Joined
Sep 17, 2014
Messages
22,358 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
You can keep LOLing all you want; unless you have a GA104 die on 8nm Samsung & 7nm TSMC, you cannot say for sure how the latter is superior (or inferior) to the former ~ that's a fact. Anything else is just conjecture, & that's what I was fighting ~ the assumption that one node is much superior to the other without actual hard numbers to back any claim to that effect. So unless you're now going to claim that you have that data, how about you keep the same assumptions in check?

It wasn't the only reason; if zen2 -> zen3 & RDNA -> RDNA2 have taught us anything, it's that uarch is a major part of the equation which simply cannot be ignored. IMO Ampere is a dud relative to Turing; now you can claim that TSMC 7nm would've made a big difference here, but until we get real data to cross-reference how performance varies between nodes ~ you're just building castles in the air.

But there IS real data, and it says you are wrong every time. Sources were provided - did you even get into them? Do you read reviews? We know how efficient the arch and the node are; the products are out... We know Ampere has more efficient RT but less efficient raster. We know the end FPS, bandwidth/memory/power usage; we know die size and we know the architectural improvements. If you still didn't figure it out by now, you never will. Your opinion is irrelevant; the facts are there. It's like a graph with lots of dots - you just need to connect them with straight lines and you've got your comparison well laid out.

For argument's sake, is the OC % gain applicable to most other scenarios? If so, there's absolutely no need to test all games. A generalisation of 5% better means 5% better. Unless other games don't scale well with clock increases? And then again, these days, across AMD's and Nvidia's cards, we see, what, 5% uplifts? A few fps, or an extra 10 at 200fps... It's 'almost' not worth it.

Exactly. It's as much of a non-issue as the additional OC headroom this specific card offers. Margin-of-error territory at best.
 