
MSI GeForce GTX 1660 Ti Gaming X 6 GB

Wow, somehow I missed that Civ6 supports DX12. Was that added in a later patch? Will use DX12 starting with the next rebench.
Hey W1zzard, would it be feasible to have a separate performance graph for DX12 games only? Might I also suggest some kind of per-game indication (a small icon or something) to show if a given title is AMD or nVidia sponsored?
 
This card is incredible; we are now getting 980 Ti performance at 960 prices. I can't believe how quickly mid-range cards became capable at 1440p and even 4K. You could easily use this card to match an Xbox One X.

Hey W1zzard, would it be feasible to have a separate performance graph for DX12 games only? Might I also suggest some kind of per-game indication (a small icon or something) to show if a given title is AMD or nVidia sponsored?

What difference does it make if a game is Nvidia or AMD sponsored? It's not like people are going to pick what games to play based on whose logo shows up in the splash screen. And no amount of fudging the numbers will save the Vega 56 or the 590; this card is better in every way.

EDIT: Yes, I meant Xbox One X.
 
This card is incredible; we are now getting 980 Ti performance at 960 prices. I can't believe how quickly mid-range cards became capable at 1440p and even 4K. You could easily use this card to match an Xbox One S.

What difference does it make if a game is Nvidia or AMD sponsored? It's not like people are going to pick what games to play based on whose logo shows up in the splash screen. And no amount of fudging the numbers will save the Vega 56 or the 590; this card is better in every way.

Yeah, it's pretty brutal: 2.5 times more efficient than the RX 590, whilst being way faster. If you'd just landed from another planet, you'd think the 590 was released three years ago... not 3 months ago. Ouch.
 
People might call me a troll but lack of RTX and DLSS is hardly a negative at this current juncture of all junctures.

The real negative is when you turn RTX/DLSS on and it eats more than half your frames while looking almost identical to ultra quality settings at 1440p. All that extra money spent on an RTX card when all you needed was to run at 1440p or 4K resolutions.
You do understand RTX is not meant to make the picture look better, right? It makes the picture look more realistic.
It does seem like people don't get what RTRT is doing, even after all the reviews and articles we've gotten since the RTX launch.
 
I'm not sure why people are so excited.

This is a 1070 with 2 GB of VRAM removed, over 2 years past its release, at a minor price cut. Also, Turing is not showing itself to be very consistent compared to Pascal. Even the 2060 is jumping all over the place, and this 1660 Ti lands anywhere between a 1060 and a 1070 Ti... I'd grab a 1070 over this any day of the week...

The 1070 launched at about $350... not sure why we are all excited here. I see 1660 Tis at the very same price even today.

You do understand RTX is not meant to make the picture look better, right? It makes the picture look more realistic.
It does seem like people don't get what RTRT is doing, even after all the reviews and articles we've gotten since the RTX launch.

People don't get it because the implementation is different in every game. BFV did reflections; Metro mostly just does a global illumination pass. And that's pretty much all she wrote up until today.
 
People might call me a troll but lack of RTX and DLSS is hardly a negative at this current juncture of all junctures.

The real negative is when you turn RTX/DLSS on and it eats more than half your frames while looking almost identical to ultra quality settings at 1440p. All that extra money spent on an RTX card when all you needed was to run at 1440p or 4K resolutions.

I think we have to look at this objectively and without exaggerating the performance hit (30-50%, with the biggest hit at the highest resolutions).

At 4K, we are talking about 1.5% of the market, 95% of which is at 60 Hz. While I can understand the position "I'd rather not lose 30-40% of my performance to RT", half the games in TPU's test suite are doing 90 fps or better at 4K... so say two brothers have the same system except for the graphics card.

The nVidia 2080 Ti user will have 90% of his games capped at 60 fps due to monitor limitations, but for 10 of those games he can turn on RT with little or no effective penalty, still staying above his monitor's limit. The AMD Radeon VII user will have a bit fewer of his games capped at 60 fps due to monitor limitations, because the Ti is 40% faster. So the point is... we shouldn't base what card to buy on features some can take advantage of but many can't.

At 1440p, adding up all the fps for the games in TPU's test suite, the $700 RTX 2080 is 13.3% faster than the $700 Radeon VII. With both cards overclocked, that grows to 21.8%. So what's the downside of RT? Let's look at the options here: $700 Radeon VII OC'd versus $700 RTX 2080 OC'd.
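For anyone who wants to sanity-check that kind of number, the stated method is just summing fps across the suite and taking a ratio. A quick Python sketch of the method, with placeholder fps values rather than TPU's actual data:

```python
# Toy illustration of the "add up all the fps" comparison method.
# The fps values here are placeholders, not TPU's actual results.
suite = {
    "Game A": (90.0, 102.0),   # (Radeon VII OC fps, RTX 2080 OC fps)
    "Game B": (120.0, 135.0),
    "Game C": (75.0, 86.0),
}

vii_total = sum(vii for vii, _ in suite.values())
rtx_total = sum(rtx for _, rtx in suite.values())
print(f"RTX 2080 is {rtx_total / vii_total - 1:.1%} faster overall")
```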

Now, at this point in time there are not a lot of games that support it, so this is purely conjecture; we have to assume that at some point a percentage of the games (say 20-30%, for the sake of argument) will add RT support. From the Metro article's conclusions, Wiz puts the expected hit at 30-40% once it's been out a bit and tweaked; I'll use 35%. So let's assume, for example, that some developers release updated versions of their games, and I'll pick 25% of the games on the list... numbers are 1440p with both cards overclocked.

Divinity OS2 could be played at 119.0 fps on an OC'd VII, 153.1 on a 2080 or 99.5 on a 2080 w/ RT enabled.
F1 could be played at 133.3 fps on an OC'd VII, 157.9 on a 2080 or 102.7 on a 2080 w/ RT enabled.
GTAV could be played at 150.9 fps on an OC'd VII, 184.3 on a 2080 or 119.8 on a 2080 w/ RT enabled.
Witcher 3 could be played at 103.2 fps on an OC'd VII, 127.0 on a 2080 or 82.5 on a 2080 w/ RT enabled.
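For transparency, the RT-on numbers above are just the overclocked 2080 results with that flat 35% hit applied; none of them are measurements. A minimal Python sketch of the projection:

```python
# Projecting RT-on frame rates by applying a flat 35% hit to the
# overclocked RTX 2080 numbers quoted above. These are estimates,
# not measurements.
RT_HIT = 0.35  # Wizzard's expected hit once implementations mature

games = {
    # game: (Radeon VII OC fps, RTX 2080 OC fps) at 1440p
    "Divinity OS2": (119.0, 153.1),
    "F1":           (133.3, 157.9),
    "GTA V":        (150.9, 184.3),
    "Witcher 3":    (103.2, 127.0),
}

for name, (vii, rtx) in games.items():
    rtx_rt = rtx * (1 - RT_HIT)  # hypothetical 2080 fps with RT on
    print(f"{name}: VII {vii:.1f} | 2080 {rtx:.1f} | 2080 w/ RT ~{rtx_rt:.1f}")
```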

Now think of that from the perspective of folks playing on 65, 100, or 120 Hz monitors. It's an option, and it doesn't cost you a dime. Now, if ya built ya computer so you can brag about how many fps you get, fine. But if you are looking at it from the perspective of the gaming experience, frankly I don't think I'd care whether it was on or off for 3 of those games; I'd take the extra 20-30 fps and enjoy ULMB. However, on Witcher 3, if it came down to playing on a Radeon VII at 103.2 versus having the choice to play at 127.0 with ULMB or 82.5 w/ RT and ULMB, I'd like to experience the latter.

So again, let's look at the options here: $700 Radeon VII OC'd versus $700 RTX 2080 OC'd.

1. Of the 21 games in the test suite, only 1 game is under 80 fps with the 2080 (3 for the Radeon VII), which means turning any kind of -Sync off and using Motion Blur Reduction is an option ONLY on the 2080.
2. Of the 21 games in the test suite, with both cards overclocked, the 2080 is faster in 19 of them, the Radeon VII in 2 of them.
3. Overall, the 2080 is 22% faster with both cards OC'd.
4. So far... is RT even a factor in the decision here?
5. No one is mandating that you use it... what is the downside?

It's like going down to buy a new SUV, and the salesman says, "Hey, ya know what... I can sell ya the 2WD model you came here for... but for the same price I can give ya the RT model, and it comes with 4WD, a larger, more efficient engine that accelerates faster and uses less gas, comes standard with AC, and runs cooler"... and turning it down because the carpeting in the trunk is red instead of green.

It was not so long ago that I was saying, "The 780 OC'd is faster than AMD's offering OC'd, but below that, weigh your options." Then it was, "Well, from the 970 price point on up, nVidia has the edge, but below that, look at both cards in each price niche..." And then it was, "Well, from the xx60 price point on up..." Saying it isn't so doesn't change the numbers.

In short, at the $700 price point, like the $300 price point, AMD doesn't really have a horse in the race. Not having RT at this price point is not a deal killer, because no one else has it either. But it does have ULMB, which is certainly a significant incentive. And at the upper price tiers, providing the option is an incentive; not as much as ULMB, but any tech that gives the user different options to enhance the experience is a good thing.
 
You could easily use this card to match an Xbox One S.

You mean Xbox One X?
The Xbox One S is so damn weak, a crappy GTX 750 Ti can match it.
 
Ugh. Yep, of course, great value!

P.S. It's just another poorly priced card at over $300. I think people here forget about IRL pricing, just as always. The 2080 Ti should've been $999 as well...
The 1660 Ti is already $450 here in local stores.
The cards are on Newegg right now, and listed in stock. This isn't some paper launch. Most of them are at the $280 list price.
https://www.newegg.com/GraphicsCardsPromoStore/EventSaleStore/ID-506

Newegg is listing this Gaming X at $10 over its $300 MSRP, but the rest are pretty much in line.
 
Yeah, it's pretty brutal: 2.5 times more efficient than the RX 590, whilst being way faster. If you'd just landed from another planet, you'd think the 590 was released three years ago... not 3 months ago. Ouch.

From the Anandtech review:

Performance has gone up versus the GTX 1060 6GB, but card power consumption hasn’t. Thanks to this, the GTX 1660 Ti is not just 36% faster, it’s 36% more efficient as well.
The other Turing cards have seen their own efficiency gains as well, but with their TDPs all drifting up, this is the largest (and purest) efficiency gain we’ve seen to date, and probably the best metric thus far for evaluating Turing’s power efficiency against Pascal’s.

https://www.anandtech.com/show/13973/nvidia-gtx-1660-ti-review-feat-evga-xc-gaming/16

These are quite impressive figures, considering that we are talking about GPUs built on basically the same node.
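The arithmetic behind that quote is worth spelling out: efficiency is performance per watt, so if power draw is unchanged, a 36% performance gain is automatically a 36% efficiency gain. A tiny sketch (the wattage here is illustrative, not a figure from either review):

```python
# Efficiency = performance / power. With power unchanged,
# a 36% performance gain is a 36% perf-per-watt gain.
power_w = 120.0                 # illustrative board power, same for both
fps_1060 = 100.0                # normalized GTX 1060 6GB performance
fps_1660ti = fps_1060 * 1.36    # 36% faster, per Anandtech

gain = (fps_1660ti / power_w) / (fps_1060 / power_w) - 1
print(f"Efficiency gain: {gain:.0%}")  # -> 36%
```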
 
Except VRAM.

Anyone who buys this card to play at 2160p where more than 6GB is needed is making a mistake.

https://www.techpowerup.com/reviews/MSI/GTX_1060_Gaming_X_3_GB/26.html

The 1060 6 GB is faster than the 3 GB because it has 11% more shaders. If VRAM were in play here in any conceivable way, as you imply, we should see a massive hit to performance going from 1080p to 1440p. Instead, what do we see?

The 1060 6 GB is 6% faster than the 1060 3 GB at 1080p. Obviously, if your presumption were correct, this should be what... 10%, 12% at 1440p? Nope, just the same 6% as at 1080p. The extra 3 GB had no impact on performance.

What else do we see? The 3 GB 1060 is as fast as the 8 GB 480.
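To make the logic of that test explicit: a VRAM bottleneck should make the 6 GB card's lead grow as resolution rises, and it doesn't. A minimal sketch of the check, using the ~6% gaps cited above:

```python
# If VRAM were the bottleneck, the 6 GB card's lead over the 3 GB
# card should widen at higher resolutions. Gaps are the ~6% figures
# cited from TPU's GTX 1060 3 GB review.
gap_1080p = 0.06   # 6 GB card's lead at 1080p
gap_1440p = 0.06   # 6 GB card's lead at 1440p

if gap_1440p > gap_1080p + 0.01:   # allow ~1 point of noise
    print("Lead widens with resolution -> VRAM likely a factor")
else:
    print("Lead is flat -> it's the 11% extra shaders, not VRAM")
```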


I'm not sure why people are so excited. The 1070 launched at about $350... not sure why we are all excited here. I see 1660 Tis at the very same price even today.

The MSI Gaming X 1070 was $449 MSRP at launch here in the US... it took a while for prices to settle down to that level, though. Bought one for my son; the receipt says $429 (5 months after release). The 970 was about $350. The MSI 1660 Ti is $310 today, the Ventus is $279. So $449 to $309 is significant... and that $309 includes tariff and first-day pricing. The MSI Gaming X cards are usually the first ones out of stock...

The 2080 Gaming X cards have been running about $100 over the least expensive cards; the 2080 Ti Gaming X's are $250 or more higher than competitive offerings.
 
Looking comparatively.

Performance (1440p): Vega 56 has a 3% edge in the out-of-the-box performance test, but the 1660 Ti gains 9.6% when overclocked; Vega doesn't do well here. Edge => 1660 Ti... at 1080p, even more so.
Power Usage: 141 W peak gaming for the Ti; 237 W for Vega 56. Edge (59%) => 1660 Ti
Noise @ Idle: 0 dBA for the 1660 Ti, 25 dBA for the Vega 56. Edge (Infinity) => 1660 Ti
Noise @ Load: 32 dBA for the 1660 Ti, 42 dBA for the Vega 56, making it twice as loud. Edge (50%) => 1660 Ti
Temps @ Load: 68°C for the 1660 Ti, 75°C for the Vega 56. Edge (91%) => 1660 Ti
Price (Newegg, MSI Gaming X): $310 for the 1660 Ti, $400 for the Vega 56. Edge (77%) => 1660 Ti
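For what it's worth, those Edge percentages appear to be simple ratios of the two cards' numbers, with the noise one using the rule of thumb that +10 dBA reads as roughly twice as loud. A sketch reproducing them (my reading of the math, not an official TPU metric):

```python
# Reproducing the "Edge (%)" values as 1660 Ti / Vega 56 ratios.
# Noise uses the psychoacoustic rule of thumb: +10 dBA ~= 2x as loud.
power = (141, 237)       # W, peak gaming (1660 Ti, Vega 56)
noise_load = (32, 42)    # dBA at load
temps = (68, 75)         # deg C at load
price = (310, 400)       # USD, Newegg MSI Gaming X

print(f"Power: {power[0] / power[1]:.0%}")           # ~59%
loudness = 2 ** ((noise_load[0] - noise_load[1]) / 10)
print(f"Noise: {loudness:.0%} perceived loudness")   # 50%
print(f"Temps: {temps[0] / temps[1]:.0%}")           # ~91%
print(f"Price: {price[0] / price[1]:.0%}")           # ~77-78%
```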

The Vega 56 just ceased to be relevant in any way. To get sold, it needs to drop in price by 35%, to $259.
No. It didn't. For whatever reason, TPU seems to insist on using reference cards as the benchmark, which is asinine when doing a comparison like this. This isn't a reference card, so why would you compare it to other reference cards? Your conclusion, and TPU's conclusion, that Vega 56 runs louder and hotter is, quite frankly, incorrect when doing a more apples-to-apples comparison.

Noise at idle for my Red Dragon Vega 56 is also 0. Noise at load is extremely quiet (inaudible from where I sit). Temps at load are virtually identical to the 1660 Ti's. As for performance, you can see that in GN's review (starting with the F1 benchmarks, as Steve also uses a Red Dragon).

The only real advantage of the 1660Ti over Vega 56 is power usage. But honestly, and this has been said many times over, most of us aren't gaming 24 hours a day. As far as I'm concerned, for real-world usage, the difference is inconsequential. Of course, that's assuming Vega 56 does indeed drop in price across the board. For the same price, I'd opt for my card over any 1660Ti all day, every day. But if Vega 56 stays $400+ then there's no way to recommend it (I got mine for just over $300 about 6 months ago). I certainly wouldn't recommend a reference Vega 56. Ever.
 
Anyone who buys this card to play at 2160p where more than 6GB is needed is making a mistake.

https://www.techpowerup.com/reviews/MSI/GTX_1060_Gaming_X_3_GB/26.html

The 1060 6 GB is faster than the 3 GB because it has 11% more shaders. If VRAM were in play here in any conceivable way, as you imply, we should see a massive hit to performance going from 1080p to 1440p. Instead, what do we see?

The 1060 6 GB is 6% faster than the 1060 3 GB at 1080p. Obviously, if your presumption were correct, this should be what... 10%, 12% at 1440p? Nope, just the same 6% as at 1080p. The extra 3 GB had no impact on performance.

What else do we see? The 3 GB 1060 is as fast as the 8 GB 480.

High-res texture packs. My 570 has no trouble running them without stutter, something the 1660 Ti will struggle with even now. I'm so sick of people lapping up getting less memory. Whatever. I ain't buying a card with less than 8 GB of VRAM, because I'm using almost all of that VRAM right now.
 
High-res texture packs. My 570 has no trouble running them without stutter, something the 1660 Ti will struggle with even now. I'm so sick of people lapping up getting less memory. Whatever. I ain't buying a card with less than 8 GB of VRAM, because I'm using almost all of that VRAM right now.

Nvidia traditionally has better VRAM compression technology. I would say that the 2 GB difference is negligible if it is just HD textures at 1080p.
 
The only real advantage of the 1660Ti over Vega 56 is power usage. But honestly, and this has been said many times over, most of us aren't gaming 24 hours a day. As far as I'm concerned, for real-world usage, the difference is inconsequential. Of course, that's assuming Vega 56 does indeed drop in price across the board. For the same price, I'd opt for my card over any 1660Ti all day, every day. But if Vega 56 stays $400+ then there's no way to recommend it (I got mine for just over $300 about 6 months ago). I certainly wouldn't recommend a reference Vega 56. Ever.

To think, there was a time when AMD would actually waste time and money mocking Nvidia when it was their "only real advantage". Oh, how times have changed.

 
For whatever reason, TPU seems to insist on using reference cards as the benchmark, which is asinine when doing a comparison like this.
Let me school you, since your "asinine" comment about the testing here shows you have not been frequenting TPU reviews very long.

As with most reviewers, most cards are not W1zzard's to keep. Since he has one of the most extensive testing suites in the industry, including both games and multiple previously released cards, he keeps a baseline for future comparisons using reference cards.

It's up to you to use a little common sense, and your knowledge from reviews of where cards lie in the stack, to extrapolate where a particular card you are interested in would land.

Please be aware that W1zzard already retests every card he keeps in the stable, on each and every game in the testing suite, every time he tests a new card for review. He already gives literally months of his life in the testing lab each year.

For you to think he should keep a high count of additional cards in stock and test those as well is rude, self-centered, and, quite frankly, to use your term, asinine.
 
I'm going to get one to see if it can play Crysis....
 
Let me school you, since your "asinine" comment about the testing here shows you have not been frequenting TPU reviews very long.

As with most reviewers, most cards are not W1zzard's to keep. Since he has one of the most extensive testing suites in the industry, including both games and multiple previously released cards, he keeps a baseline for future comparisons using reference cards.

It's up to you to use a little common sense, and your knowledge from reviews of where cards lie in the stack, to extrapolate where a particular card you are interested in would land.

Please be aware that W1zzard already retests every card he keeps in the stable, on each and every game in the testing suite, every time he tests a new card for review. He already gives literally months of his life in the testing lab each year.

For you to think he should keep a high count of additional cards in stock and test those as well is rude, self-centered, and, quite frankly, to use your term, asinine.
Actually, I've been reading TPU for years; I simply started commenting recently.

I know how cards are tested. The problem is, if you're comparing a reference card to non-reference models, then you're going to come up with, at best, inaccurate or incomplete conclusions (such as "Vega 56 runs much hotter and noisier than the GTX 1660 Ti", which is both true and untrue depending on what you're comparing). I have nothing against Wizz including reference cards in the benchmark suite (I think it is quite helpful, actually), but there should also be a non-reference card included for comparison if you're going to include those observations in the conclusion. Without it, you're basing your conclusion on incomplete data, which does a disservice to the reader. I know this has been discussed here before as well, and that, most likely, nothing is going to change, so I am going to stand by my "asinine" assertion.
 
Get whatever makes you happy, be it Vega or Polaris or this.

Indeed; I mean, according to this very review the MSI GTX 1660 Ti is only 42% faster than an RX 580 8GB @ 1080p whilst being twice as efficient too. I guess I can see the appeal of an 8GB Polaris card over it...
 
The power efficiency of this card is tremendous. Just imagine what Turing on 7nm could be.

I would want them to crank the clocks up.

If you'd just landed from another planet you'd think the 590 was released three years ago.

In all honesty, it was probably released longer ago than that. When was the 480 released?
 
Indeed; I mean, according to this very review the MSI GTX 1660 Ti is only 42% faster than an RX 580 8GB @ 1080p whilst being twice as efficient too. I guess I can see the appeal of an 8GB Polaris card over it...

Dude, you know all too well that some people buy GPUs with their hearts and emotions, right? For some folks it is a statement of "no" to "evil business practices". Or out of true love for a specific brand. We gotta respect that, man!

I would want them to crank the clocks up.



In all honesty, it was probably released longer ago than that. When was the 480 released?

And they probably will crank the clocks even higher.
 