
MSI GeForce GTX 1660 Ti Gaming X 6 GB

Ridiculous that anyone should be excited for this release from Nvidia. Same performance as a custom 1070, two years later, at a price you could have picked one of those up for over the past year. What do you get? Better efficiency, but 2 GB less VRAM. No thanks.
 
Yeah, and on the other hand we get exciting releases like the RX 590 and Radeon VII, both late, slow, and overpriced.
 
I bought my 660 Ti more than five years ago; performance has improved quite a bit since then with no increase in cost or power draw. Progress is still alive and healthy.
 
I bought my 660 Ti more than five years ago; performance has improved quite a bit since then with no increase in cost or power draw. Progress is still alive and healthy.
Well, it seems like you and your fanboy friend are a little bit outdated - the 660 Ti came out 7 years ago. But don't worry, you'll get the idea that people like me are trying to deliver. In 2-3 years or so...
 
Seems like a good buy
 
Ridiculous that anyone should be excited for this release from Nvidia. Same performance as a custom 1070, two years later, at a price you could have picked one of those up for over the past year. What do you get? Better efficiency, but 2 GB less VRAM. No thanks.

I agree. However, I think their focus here is that it offers a "new" solution with 10-20% more performance than the opposition's (read: the 590) for $20-$40 more, so clearly they think there will be significant market share. Whether that holds with AMD allegedly reducing the price of the Vega 56 is another matter - you could argue that cutting the Vega 56's price suggests AMD agrees with them :D
 
Ridiculous that anyone should be excited for this release from Nvidia. Same performance as a custom 1070, two years later, at a price you could have picked one of those up for over the past year. What do you get? Better efficiency, but 2 GB less VRAM. No thanks.
I understand why people would be upset with the price, but performance? Nobody screams bloody murder when Nvidia/AMD release entry-level cards that can't offer ultra performance at 4K.
 
Why? We already have the RTX 2060 on the market for basically the same price - well, $40 more expensive - and it's much faster than the GTX 1660 Ti.
 
Because the 2060 is just the 2070 die, unable to run all 2304 cores, and supply may be too low depending on the yields of quality chips.
The 1660 Ti is cheaper to produce - a 36% smaller die.

And still, why is it so big? The 2070 is 445 mm², this is 284 mm². Almost as if they left the RT/Tensor cores intact. Imagine a 1536 CUDA / 160-bit cut-down version of the 2070 - that would be about 296 mm². And this is 192-bit and 284 mm², so there is definitely something fishy about the 1660 Ti. They wasted good wafer just to add useless RT cores instead of refreshing the 1080/Ti with 16 Gbps GDDR6 on board and calling it a day.
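
For what it's worth, that ~296 mm² figure falls straight out of a naive linear scaling by CUDA core count - a rough back-of-envelope sketch only, since it assumes area shrinks in proportion to core count and ignores memory controllers, caches, and I/O:

```python
# Naive die-area scaling: assume area shrinks in proportion to CUDA core
# count. Memory controllers, caches and I/O do not scale this way, so
# treat the result as a rough estimate, not a real floorplan number.

TU106_AREA_MM2 = 445      # RTX 2070 die (TU106)
TU106_CUDA = 2304
CUT_DOWN_CUDA = 1536      # hypothetical 1536-core derivative from the post

estimated_area = TU106_AREA_MM2 * CUT_DOWN_CUDA / TU106_CUDA
print(f"Estimated cut-down TU106: {estimated_area:.0f} mm^2")  # ~296-297 mm^2

TU116_AREA_MM2 = 284      # actual GTX 1660 Ti die (TU116)
print(f"Actual TU116:             {TU116_AREA_MM2} mm^2")
```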
 
Why? We already have the RTX 2060 on the market for basically the same price - well, $40 more expensive - and it's much faster than the GTX 1660 Ti.

I think you need more fingers and toes to count on. The GTX 1660 Ti is $279 while the RTX 2060 is $349. That's a $70 difference in price, with TPU having the GTX 1660 Ti win performance per dollar at each resolution by a slight edge over the RTX 2060. I'm not knocking the RTX 2060, other than its ray tracing performance, where it really looks handicapped compared to the RTX 2070 & 2080.

[Attached chart: performance per dollar, 1920x1080]
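
A chart like the one above is essentially just relative performance divided by price. A minimal sketch of that calculation - the relative-performance numbers below are illustrative placeholders, not TPU's measured results; the prices are the US MSRPs mentioned above:

```python
# Performance per dollar = relative performance / price.
# Relative performance is normalized to the GTX 1660 Ti = 1.00 and is an
# illustrative assumption, not a measured value.

cards = {
    "GTX 1660 Ti": {"rel_perf": 1.00, "price_usd": 279},
    "RTX 2060":    {"rel_perf": 1.10, "price_usd": 349},  # assumed ~10% faster
}

for name, c in cards.items():
    ppd = c["rel_perf"] / c["price_usd"]
    print(f"{name}: {ppd * 100:.2f} relative performance per $100")
```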
 
Wasn't this card replacing the GTX 1060? So the price should have been lower.
Because new phones are cheaper than the previous models? Because cars are? Or maybe sausages or Coca-Cola bottles?

Why are some people making up new laws of economics all the time? More importantly: why are they surprised that these laws don't work?
Why? We already have the RTX 2060 on the market for basically the same price - well, $40 more expensive - and it's much faster than the GTX 1660 Ti.
Because why not? Why is having a denser product lineup such a problem? Shouldn't we be happy about the choice Nvidia gives us?

Ridiculous that anyone should be excited for this release from Nvidia. Same performance as a custom 1070, two years later, at a price you could have picked one of those up for over the past year. What do you get? Better efficiency, but 2 GB less VRAM. No thanks.
Why would anyone care about efficiency? Leave it to the engineers at Nvidia.
You get smaller coolers and less noise. That's it.
 
I think you need more fingers and toes to count on. The GTX 1660 Ti is $279 while the RTX 2060 is $349. That's a $70 difference in price, with TPU having the GTX 1660 Ti win performance per dollar at each resolution by a slight edge over the RTX 2060. I'm not knocking the RTX 2060, other than its ray tracing performance, where it really looks handicapped compared to the RTX 2070 & 2080.
Who cares about MSRP prices? Real prices here in Europe are different.
 
Who cares about MSRP prices? Real prices here in Europe are different.
Because MSRP is the price a card should be sold for.
It's the best estimator we have for average pricing globally.
Hence, it's the price we reference all the time.

You can't say a card costs X < MSRP because there exists a particular seller that offers it for X.
 
Who cares about MSRP prices? Real prices here in Europe are different.
I'm in the States and the cards are being sold at MSRP, a $70 difference.
 
The power efficiency of this card is tremendous. Just imagine what Turing on 7nm could be...

The price needs to be a little lower, but overall, it's a good card. It will probably be the new sales champion in the mid-range, like the GTX 1060 was!
I am actually not too impressed by Turing. Remember this chip has a die size almost as big as that of the GTX 1080: at 284 mm² it's about 10% smaller than the 1080's 314 mm², but it is on the smaller 12 nm process - and it is 15% slower than the 1080. That doesn't seem like much improvement; progress would mean at least matching 1080 performance at that die size, but it doesn't. I think it would've been way better for the consumer had Nvidia simply rebranded the GTX 1080 rather than spend money on a new chip and tape-out that we, the consumers, end up paying for.

TL;DR: don't just look at core count and compare; look at die size.
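
Putting that die-size argument into numbers - a minimal sketch, using the ~15% performance deficit from the post above as an assumption rather than a measured figure:

```python
# Relative performance per mm^2 of die area, normalized to the GTX 1080.
# The 0.85 figure for the 1660 Ti is the "~15% slower" claim from the
# post above, not a measured value.

chips = {
    "GTX 1080 (GP104, 16 nm)":    {"area_mm2": 314, "rel_perf": 1.00},
    "GTX 1660 Ti (TU116, 12 nm)": {"area_mm2": 284, "rel_perf": 0.85},
}

for name, c in chips.items():
    per_area = c["rel_perf"] / c["area_mm2"]
    print(f"{name}: {per_area * 100:.2f} relative performance per 100 mm^2")
```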
 
TL;DR: don't just look at core count and compare; look at die size.
Why do you care about the die size? It doesn't fit on the PCB or what?

You don't like the card, fine. You're not forced to give a reason, so why make up something so weird?

It's fast, it's efficient, it's frugal, it's pretty cheap.
If it was small as well, what would you point out? You don't like the colour? :)
 
Why do you care about the die size? It doesn't fit on the PCB or what?

You don't like the card, fine. You're not forced to give a reason, so why make up something so weird?

It's fast, it's efficient, it's frugal, it's pretty cheap.
If it was small as well, what would you point out? You don't like the colour? :)

I was specifically replying to someone who was impressed by Turing's efficiency as something "tremendous". It's good and all, but hardly anything impressive, especially compared to a GTX 1080. Also note I was taking a deeper look at the architecture in general. I used die size for the argument because it determines how much performance Nvidia can get out of each mm², which ultimately trickles down to how much it costs to produce, and based on that aspect of things, Turing doesn't seem to improve much on Pascal (unless I am missing something). Up until now it was hard to have an apples-to-apples comparison because the RTX Turing cards all have the extra ray tracing cores that take up space and add functionality.
 
I was specifically replying to someone who was impressed by Turing's efficiency as something "tremendous".
He is right. Efficiency is excellent.
It's good and all, but hardly anything impressive, especially compared to a GTX 1080.
How can it not be impressive compared to 1080? It more or less matches it in performance/Watt despite belonging to a much lower segment.
Also note I was taking a deeper look at the architecture in general
That's exactly what I was talking about. On this forum we're consumers, not tech auditors. We should evaluate products, not their technology - we should be looking at "external" properties: how it performs, how much energy it uses, how much noise and heat it emits. This is what matters.
You think a dense architecture is the sign of technological advancement? Given the choice, would you buy a card that performs worse and is less efficient, just because it has a more dense architecture?

Remember Nvidia is not selling architecture. Their product is the GPU.
So yes, their die size may not be as small as we could hope in 2019. But it's their problem, not ours. It's their job to make an attractive product out of what they have available. And they clearly succeeded. We should actually praise them for being able to beat AMD despite not using a more modern 7nm.

When you go to a restaurant, you either like the meal or not. And even if it's winter and you've ordered a gazpacho, you're happy when the soup you get is tasty, right? They somehow managed it, bravo!
You don't complain that they should've made you wait until summer, because making gazpacho with fresh tomatoes is easier.
I used die size for the argument because it determines how much performance Nvidia can get out of each mm², which ultimately trickles down to how much it costs to produce
But the 1660 Ti costs a lot less than the 1070 did at launch, and even slightly less than a 1070 costs today - 1.5 years after launch. And it's faster. So the price is good, right?
Maybe a smaller die would make it even cheaper, but they somehow managed anyway. So, just like with the soup example, why are you complaining instead of praising Nvidia?
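
A quick sanity check on the performance-per-watt claim above - another rough sketch, reusing the assumed ~15% performance deficit from earlier and reference TDPs rather than measured board power:

```python
# Performance per watt from reference TDPs. TDP is only a coarse proxy
# for actual gaming power draw (measured figures typically narrow the
# gap), and the 0.85 relative-performance figure is the same rough
# "~15% slower than a GTX 1080" assumption as before.

gtx_1080    = {"rel_perf": 1.00, "tdp_w": 180}
gtx_1660_ti = {"rel_perf": 0.85, "tdp_w": 120}

eff_1080 = gtx_1080["rel_perf"] / gtx_1080["tdp_w"]
eff_1660 = gtx_1660_ti["rel_perf"] / gtx_1660_ti["tdp_w"]

print(f"GTX 1080:    {eff_1080 * 100:.2f} relative performance per 100 W")
print(f"GTX 1660 Ti: {eff_1660 * 100:.2f} relative performance per 100 W "
      f"({(eff_1660 / eff_1080 - 1) * 100:+.0f}% vs. GTX 1080)")
```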
 
He is right. Efficiency is excellent.

How can it not be impressive compared to 1080? It more or less matches it in performance/Watt despite belonging to a much lower segment.

That's exactly what I was talking about. On this forum we're consumers, not tech auditors. We should evaluate products, not their technology - we should be looking at "external" properties: how it performs, how much energy it uses, how much noise and heat it emits. This is what matters.
You think a dense architecture is the sign of technological advancement? Given the choice, would you buy a card that performs worse and is less efficient, just because it has a more dense architecture?

Remember Nvidia is not selling architecture. Their product is the GPU.
So yes, their die size may not be as small as we could hope in 2019. But it's their problem, not ours. It's their job to make an attractive product out of what they have available. And they clearly succeeded. We should actually praise them for being able to beat AMD despite not using a more modern 7nm.

When you go to a restaurant, you either like the meal or not. And even if it's winter and you've ordered a gazpacho, you're happy when the soup you get is tasty, right? They somehow managed it, bravo!
You don't complain that they should've made you wait until summer, because making gazpacho with fresh tomatoes is easier.

But the 1660 Ti costs a lot less than the 1070 did at launch, and even slightly less than a 1070 costs today - 1.5 years after launch. And it's faster. So the price is good, right?
Maybe a smaller die would make it even cheaper, but they somehow managed anyway. So, just like with the soup example, why are you complaining instead of praising Nvidia?


Matching the GTX 1080 in performance/watt, yes, but belonging to a lower segment is completely superficial. It belongs to a lower segment because Nvidia wants us to think they are doing more for less. But in reality the production cost for both is very similar, and actually lower for the GTX 1080 if you consider the cost of tape-out etc. Comparing launch prices is meaningless because the GTX 1080 came out in May 2016 on what was then a brand-new process node; producing that same card costs peanuts now compared to back then.

Turing has excellent efficiency only because Pascal has excellent efficiency too - they are both practically on the same level. We perceive them as excellent because they are better than anything the competition has, but the reason I pay attention to the architectural side of things is that it gives us a clear indication of how progress is taking place, and the conclusion there is that this gen Nvidia only added new features but didn't improve efficiency by much, if at all. My argument isn't about wishing the chip was smaller; rather, it is about disappointment at how much GPU progress has slowed down.
We desperately need competition in GPUs, that's for sure.

On another note: perhaps the one reason I can think of for even justifying this new chip is to streamline driver work and optimization by having the whole generation based on Turing rather than supporting two architectures - more to do with the end of life of cards/generations down the road. This approach is partly why Nvidia cards often age pretty badly, but I won't complain here since Pascal is already 3 years old and GPU generations have become few and far between.
 
Matching the GTX 1080 in performance/watt, yes, but belonging to a lower segment is completely superficial. It belongs to a lower segment because Nvidia wants us to think they are doing more for less. But in reality the production cost for both is very similar, and actually lower for the GTX 1080 if you consider the cost of tape-out etc. Comparing launch prices is meaningless because the GTX 1080 came out in May 2016 on what was then a brand-new process node; producing that same card costs peanuts now compared to back then.
Why do you talk about production costs so much? Why do you care? You buy a graphics card. It's characterized by price, performance, size and unwanted emissions.
Are you so worried about production costs when you buy cars? Ovens? Tomatoes?
Turing has excellent efficiency only because Pascal has excellent efficiency too - they are both practically on the same level. We perceive them as excellent because they are better than anything the competition has
All correct. Pascal was great. Turing is slightly better. I'm happy about slightly better. Slightly better is better than slightly worse (e.g. RX580 vs RX480)
but the reason I pay attention to the architectural side of things is that it gives us a clear indication of how progress is taking place
No, it doesn't. We know far too few details. It's a GPU. There's quantum magic going on inside. Don't think about it too much.
GPUs have properties, which we can measure and compare. Some have actual meaning in real life: price, performance, heat etc. Some have little to no meaning for consumers: size, core count, FP32:FP64 ratio.

You've chosen a property that has absolutely no impact on how people use graphics cards. Most people don't even know what the card looks like under the cooler.
And you're trying to argue that there's no progress despite all really important properties suggesting otherwise.
and the conclusion there is that this gen Nvidia only added new features but didn't improve efficiency by much, if at all
The growth on the feature side is possibly the most prolific one in a decade. RTRT, as a technology (since performance is still lacking), is a huge step in GPU evolution.
And the efficiency gain may not be monstrous, but it is certainly visible. Not in some abstract numbers on graphs - in coolers. 1660 Ti coolers are much smaller or much quieter than what we were getting with the 1070.
 