Friday, March 8th 2019

Details on GeForce GTX 1660 Revealed Courtesy of MSI - 1408 CUDA Cores, GDDR5 Memory

Details on NVIDIA's upcoming mainstream GTX 1660 graphics card have been revealed, which should help put its graphics-crunching prowess under scrutiny. The new graphics card from NVIDIA slots in below the recently released GTX 1660 Ti (which provides roughly 5% better performance than NVIDIA's previous GTX 1070 graphics card) and above the yet-to-be-released GTX 1650.

The 1408 CUDA cores in the design amount to roughly an 8% reduction in computing cores compared to the GTX 1660 Ti's 1536, but most of the cost savings (and the performance impact) likely come from the 6 GB of 8 Gbps GDDR5 memory this card is outfitted with, compared to the 1660 Ti's GDDR6 implementation. The amount of GPU resources NVIDIA has cut is so low that we imagine these chips won't come from harvesting defective dies as much as from actually fusing off CUDA cores present in the TU116 chip. Using GDDR5 is still cheaper than the GDDR6 alternative (for now), and this also avoids straining the GDDR6 supply (if that was ever a concern for NVIDIA).
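For a rough sense of what the memory downgrade means, here is a minimal back-of-the-envelope sketch in Python. It assumes the GTX 1660 keeps the 1660 Ti's 192-bit memory bus (not explicitly confirmed in the leak) and uses the 1660 Ti's 12 Gbps GDDR6 as the reference point:

# Back-of-the-envelope peak memory bandwidth comparison.
# Assumption: the GTX 1660 keeps the 192-bit bus of the GTX 1660 Ti.
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times bus width, divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

gtx_1660    = peak_bandwidth_gbs(8.0, 192)   # 8 Gbps GDDR5  -> 192 GB/s
gtx_1660_ti = peak_bandwidth_gbs(12.0, 192)  # 12 Gbps GDDR6 -> 288 GB/s

print(f"GTX 1660:    {gtx_1660:.0f} GB/s")
print(f"GTX 1660 Ti: {gtx_1660_ti:.0f} GB/s")
print(f"Bandwidth deficit: {(1 - gtx_1660 / gtx_1660_ti) * 100:.0f}%")  # ~33% less bandwidth

Under these assumptions the memory subsystem, not the small CUDA core cut, looks like the main differentiator between the two cards.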
The reference clocks of the non-Ti GTX 1660 are expected to be 1530 MHz base and 1785 MHz boost. Custom models, such as the pictured GAMING X, will boost up to 1860 MHz. All graphics cards feature a single 8-pin power connector for increased power delivery. The GTX 1660 is expected to launch on March 14th at the $219 mark.
Source: Videocardz

55 Comments on Details on GeForce GTX 1660 Revealed Courtesy of MSI - 1408 CUDA Cores, GDDR5 Memory

#1
Robcostyle
So, what do we have? A card that should've been the GTX 1160/2060, at a reasonable xx60 price, at the 1070 performance level.
At last?
Or no?
#2
dicktracy
Navi killer is already here. XD
#3
ArbitraryAffection
Robcostyle: So, what do we have? A card that should've been the GTX 1160/2060, at a reasonable xx60 price, at the 1070 performance level.
At last?
Or no?
The 1660 Ti, with 1536 CUDA cores and GDDR6, only just matches the 1070; this will be 15% slower, I think. But honestly, for $219 it will be a fantastic 1080p gaming card and great value. I may get one if they land at or below £200 here.
dicktracy: Navi killer is already here. XD
Ehm, Navi isn't even here. For all we know, Navi could be 2070 performance at $249, which would make all cards under the 2080 useless for most gaming. :)
#4
Fluffmeister
Yeah, Navi is still 6 months away; this is the final nail in the Polaris coffin.
#5
ArbitraryAffection
Fluffmeister: Yeah, Navi is still 6 months away; this is the final nail in the Polaris coffin.
Yeah, and good riddance too, haha. I mean, bless my 570, I do love it, but this damn chip's been hanging around for what, 3 years? XD I think at this point it is more legendary than even Pitcairn, the Immortal GPU.
#6
MEC-777
ArbitraryAffection: Yeah, and good riddance too, haha. I mean, bless my 570, I do love it, but this damn chip's been hanging around for what, 3 years? XD I think at this point it is more legendary than even Pitcairn, the Immortal GPU.
I don't understand this mentality that 'once a GPU is a couple years old, it should die'. If it still performs adequately within its pricing bracket, then it's fine. Age is irrelevant, unless there are some newer features implemented later on that cannot be fully or properly utilized. That's not the case with the current RX 570/580/590 cards. The 570 and current Polaris-based cards are still great in terms of performance and value per dollar.

The long life of AMD's GPUs is a result of their slower development cycles compared to Nvidia, and their ability to continuously bring more performance over the entire life of that GPU through driver optimizations, until its replacement is finally released (and beyond).

Pitcairn, I believe, was around for longer than Polaris has been. I could be wrong, though. I know it started in the HD 7000 series and lasted all the way to what, the R7 360 or 370, IIRC? That is a pretty darn long life for a GPU, lol.
#7
ArbitraryAffection
MEC-777: I don't understand this mentality that 'once a GPU is a couple years old, it should die'. If it still performs adequately within its pricing bracket, then it's fine. Age is irrelevant, unless there are some newer features implemented later on that cannot be fully or properly utilized. That's not the case with the current RX 570/580/590 cards. The 570 and current Polaris-based cards are still great in terms of performance and value per dollar.

The long life of AMD's GPUs is a result of their slower development cycles compared to Nvidia, and their ability to continuously bring more performance over the entire life of that GPU through driver optimizations, until its replacement is finally released (and beyond).

Pitcairn, I believe, was around for longer than Polaris has been. I could be wrong, though. I know it started in the HD 7000 series and lasted all the way to what, the R7 360 or 370, IIRC? That is a pretty darn long life for a GPU, lol.
It's mainly me joking, but Polaris has worse API support and fewer features than newer cards like Vega and Turing. So, in a way, it holds back the adoption of those features if it stays around too long.

Polaris lacks DX12_1 support and cannot accelerate lower precisions. It needs to be replaced. But honestly, with Nvidia's market share being so much higher and Turing having an excellent feature set, those features may be adopted now. :)
#8
crazyeyesreaper
Not a Moderator
MEC-777: I don't understand this mentality that 'once a GPU is a couple years old, it should die'. If it still performs adequately within its pricing bracket, then it's fine. Age is irrelevant, unless there are some newer features implemented later on that cannot be fully or properly utilized. That's not the case with the current RX 570/580/590 cards. The 570 and current Polaris-based cards are still great in terms of performance and value per dollar.

The long life of AMD's GPUs is a result of their slower development cycles compared to Nvidia, and their ability to continuously bring more performance over the entire life of that GPU through driver optimizations, until its replacement is finally released (and beyond).

Pitcairn, I believe, was around for longer than Polaris has been. I could be wrong, though. I know it started in the HD 7000 series and lasted all the way to what, the R7 360 or 370, IIRC? That is a pretty darn long life for a GPU, lol.
I think the problem with the 570/580/590 is that they really don't offer much more than the 470/480, 390, and 290 series did before. VRAM is upgraded and, sure, power consumption is down. But depending on the title, performance remains similar between the 290 / 390 / 480 / 580 series of cards. When you consider the R9 290X released back in 2013, well, that's 6 years ago.

So, other than Fury / Vega, AMD has been stuck at the same performance level for 6 years, having only released 4 GPUs that ended up faster.

Another way to look at it: if you are gaming at 1080p, AMD STILL doesn't really have a replacement for the ancient R9 290X unless you're willing to pay a hefty price. Otherwise, there are no real performance gains to be had from the red team.
#9
xorbe
So the $220-230 price point (1060 => 1660) barely gets any faster this generation.
#10
Robcostyle
MEC-777: I don't understand this mentality that 'once a GPU is a couple years old, it should die'.
I remember my Sapphire HD 3870 DDR4 *special*, which couldn't handle BF3 even at 1280x720 with the lowest preset possible without heavy artifacts (literally, image artifacts), stuttering, and the game constantly freezing within 10-15 minutes.
The card was totally fine - it just couldn't stand Frostbite.

So thanks for that, at least - that your 3-5 year old card can still be relevant. Say hello to the GTX 980 or R9 290X.
#11
flmatter
Here is a quick GPU-Z screenshot of my MSI 1660 Ti Gaming X that my wife bought me. It holds its own against the 1070, and I feel sometimes it is better. YRMV.
#12
ArbitraryAffection
xorbe: So the $220-230 price point (1060 => 1660) barely gets any faster this generation.
IDK, but with Turing's higher performance per CUDA core, other improvements, and 128 more CUDA cores, it should be around 20% faster than the 1060 if I had to guess. So yeah, not really a huge gain, but Turing is expensive and people are having to foot the bill. I think real gains will come when both AMD and NVidia have low-cost 7nm cards available.
flmatter: Here is a quick GPU-Z screenshot of my MSI 1660 Ti Gaming X that my wife bought me. It holds its own against the 1070, and I feel sometimes it is better. YRMV.
Nice, it is a good card, with a better feature set than the 1070 too. My only complaint is 2 GB less VRAM, but now I am of the persuasion that it doesn't really matter a lot at 1080p, which you are gaming at. :) But anyway, enjoy the card, it is a good one. :) I am thinking of the 1660 non-Ti to give me a small bump in perf and a huge one in perf/watt, as I wish to start F@H. :) I would like to spend no more than £200 on a GPU. :x
#13
MEC-777
Robcostyle: I remember my Sapphire HD 3870 DDR4 *special*, which couldn't handle BF3 even at 1280x720 with the lowest preset possible without heavy artifacts (literally, image artifacts), stuttering, and the game constantly freezing within 10-15 minutes.
The card was totally fine - it just couldn't stand Frostbite.

So thanks for that, at least - that your 3-5 year old card can still be relevant. Say hello to the GTX 980 or R9 290X.
The difference in relative performance between the HD 3000 series and the HD 6/7k series, when BF3 released, was far more significant than the generational performance increases we see today. So yeah, I get where the mentality comes from, lol, it's just not like that anymore though.

These days you can easily get away with using cards from 5-6 years ago and still have a decent gaming experience.
#14
flmatter
ArbitraryAffection: My only complaint is 2 GB less VRAM
With the GDDR6, it's less of an issue. I have not noticed any issues running my games with it. The Division and the Division 2 beta run fine; Grim Dawn, PoE, and World of Tanks look great. I had the same concerns, but after putting this card through its paces, 6 GB is fine at 1080p. Valley benchmark: 3481 on the 1660 Ti and 3119 for the 1070, at 1080p ultra/extreme settings.
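As a quick sanity check on those numbers, a minimal sketch using only the two scores posted above (Valley is a single synthetic benchmark, not a guarantee of in-game performance):

# Relative gap in the posted Unigine Valley scores (higher is better).
score_1660_ti = 3481
score_1070 = 3119

lead_pct = (score_1660_ti / score_1070 - 1) * 100
print(f"1660 Ti leads the 1070 by {lead_pct:.1f}% in this run")  # ~11.6%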
#15
TheGuruStud
Intel: We have the most useless and redundant SKUs.

Nvidia: Hold my beer.
#16
ArbitraryAffection
TheGuruStud: Intel: We have the most useless and redundant SKUs.

Nvidia: Hold my beer.
Lol, what? I favour Radeon, but this comment makes no sense. GTX 1660 at $219 makes a lot of sense. It will be 15-20% faster than the RX 590 but 12% cheaper. It is also going to be over twice as efficient. At its current price, the only card that is useless and redundant is the RX 590.
#17
Turmania
Unfortunately, I gave up on my next build being all AMD. They have been a major disappointment on the GPU side. I don't know when Nvidia will release their 7nm process GPUs. I don't think it will be this summer, but till then I will hold on to Pascal.
#18
megamanxtreme
The GTX 1060 6G was still competing with all the refreshes of the RX 480, so I figure the GTX 1660 will reign for just as long, but I'm not going to overlook the Navi cards. Still, we all make our choices based on our own personal priorities. Mine are based on (in any order) heat, power consumption, and performance, and I check what falls in my price range that excels at those requirements. Needing one 8-pin on the 1660 is odd if the 1660 Ti consumes 120W - let alone on the 1660 Ti itself - but it's free boost headroom when needed.
#19
cucker tarlson
ArbitraryAffection: Lol, what? I favour Radeon, but this comment makes no sense. GTX 1660 at $219 makes a lot of sense. It will be 15-20% faster than the RX 590 but 12% cheaper. It is also going to be over twice as efficient. At its current price, the only card that is useless and redundant is the RX 590.
Lol, he's apparently hurt that the RX 590 is an SKU that's been the laughing stock for people all around tech sites and channels lately. Can't blame them, though. It's like AMD said, "let's sell overclocked Polaris under a higher SKU name, but with a bigger price premium than last time."
A slightly cut die with the same amount of memory, but it's GDDR5 this time; it'll end up about 10% slower than the 1660 Ti. Sorta like the 1070 Ti was to the 1080. Meanwhile, a $60 price difference is pretty noticeable for people buying in this segment.
#20
ArbitraryAffection
Turmania: Unfortunately, I gave up on my next build being all AMD. They have been a major disappointment on the GPU side. I don't know when Nvidia will release their 7nm process GPUs. I don't think it will be this summer, but till then I will hold on to Pascal.
For a high-end build, yeah; Radeon isn't amazing right now. I too was let down by Radeon VII; it feels very Fury X/Vega-y. I could go on, but power efficiency, performance, and even the reference design are all quite mediocre. (It pains me to say that, btw.) I'm very happy with my RX 570 8G though, and would've kept it till Navi for sure, but now that my interest in distributed computing is piqued, I would like to put my GPU to work in Folding also. So I am thinking of making the switch to Nvidia in a few months (if no super awesome Navi news comes out). GTX 1660 non-Ti looking very nice. :D

As for Nvidia 7nm, it will be one full cycle after 12nm Turing, I think. 12nm will be a short-lived process. That doesn't mean 12nm Turing is bad, per se; it just means that the next generation is hopefully going to be a lot more compelling. I expect a 2080 replacement card around the middle of next year, with 2080 Ti performance but potentially 2060 power use and 2080 pricing. Nvidia have pulled off some mental perf/watt gains in the past, so I think this is reasonable given the full node shrink. Turing itself is ripe to be shrunk down, so I think the gains will come from similar or more cores on smaller dies and clock speed/bandwidth increases. (Like Maxwell -> Pascal.)

Hopefully, with Navi being good and Intel coming into the dGPU market too, 2020 should be a very interesting year for PC graphics. ^^
#21
cucker tarlson
ArbitraryAffection: For a high-end build, yeah; Radeon isn't amazing right now. I too was let down by Radeon VII; it feels very Fury X/Vega-y. I could go on, but power efficiency, performance, and even the reference design are all quite mediocre. (It pains me to say that, btw.) I'm very happy with my RX 570 8G though, and would've kept it till Navi for sure, but now that my interest in distributed computing is piqued, I would like to put my GPU to work in Folding also. So I am thinking of making the switch to Nvidia in a few months (if no super awesome Navi news comes out). GTX 1660 non-Ti looking very nice. :D

As for Nvidia 7nm, it will be one full cycle after 12nm Turing, I think. 12nm will be a short-lived process. That doesn't mean 12nm Turing is bad, per se; it just means that the next generation is hopefully going to be a lot more compelling. I expect a 2080 replacement card around the middle of next year, with 2080 Ti performance but potentially 2060 power use and 2080 pricing. Nvidia have pulled off some mental perf/watt gains in the past, so I think this is reasonable given the full node shrink. Turing itself is ripe to be shrunk down, so I think the gains will come from similar or more cores on smaller dies and clock speed/bandwidth increases. (Like Maxwell -> Pascal.)

Hopefully, with Navi being good and Intel coming into the dGPU market too, 2020 should be a very interesting year for PC graphics. ^^
What I'm concerned about is them staying with GCN for Navi. They'll be dead in the water. I hope they won't.
#22
ArbitraryAffection
cucker tarlson: What I'm concerned about is them staying with GCN for Navi. They'll be dead in the water. I hope they won't.
Yeah. GCN is pretty much a zombie architecture at this point: it is just shambling along and refuses to die. They need a clean slate, but I don't think Navi is it. Hopefully, though, it will be competitive if they address the major issues with GCN, such as workload distribution in 3D loads, balancing the engine to favour a higher geometry/pixel rate to compute ratio, and improving bandwidth efficiency. So many shaders on Vega are literally doing nothing because the chip is stalled in other parts of the pipeline.
#23
efikkan
MEC-777: I don't understand this mentality that 'once a GPU is a couple years old, it should die'. If it still performs adequately within its pricing bracket, then it's fine. Age is irrelevant, unless there are some newer features implemented later on that cannot be fully or properly utilized.
Age becomes relevant at some point, especially since AMD tends to drop driver support after a few years - in a couple of cases even for products that are still for sale.
MEC-777: The long life of AMD's GPUs is a result of their slower development cycles compared to Nvidia, and their ability to continuously bring more performance over the entire life of that GPU through driver optimizations, until its replacement is finally released (and beyond).
The tale of the continuously improving driver updates from AMD is a myth. Nvidia also work on their drivers, and have made numerous improvements over the last decade. The argument that a lesser card from AMD will overcome a "better" card from the competitor over time has basically never panned out.
#24
cucker tarlson
ArbitraryAffection: Yeah. GCN is pretty much a zombie architecture at this point: it is just shambling along and refuses to die. They need a clean slate, but I don't think Navi is it. Hopefully, though, it will be competitive if they address the major issues with GCN, such as workload distribution in 3D loads, balancing the engine to favour a higher geometry/pixel rate to compute ratio, and improving bandwidth efficiency. So many shaders on Vega are literally doing nothing because the chip is stalled in other parts of the pipeline.
Improve the load distribution? Who needs that if you have enhanced async in the RVII already?
#25
ArbitraryAffection
cucker tarlson: Balance the load distribution? Who needs that if you have enhanced async in the RVII already?
Lol, I have heard that Fiji and Vega have some major issues getting work sent to the compute units effectively enough. Whether this is due to stalling in other parts of the pipeline (geometry) or scheduling/driver issues, IDK. Nvidia also has a similar issue with their larger dies, but they are much, much better at it. GV100 has a similar issue to Vega in 3D workloads - look at the 5120 CC Titan V vs. the 4352 CC 2080 Ti - and they made some massive improvements in that regard with Turing. GV100 can also concurrently execute integer and floating-point ops. (Also check: something about GV100 dropping instruction-level parallelism and the dual-issue dispatch engines, instead focusing on thread-level parallelism - better for compute, I think.)

Given a simple compute task, GCN is pretty damn good. It scales well across the entire compute engine, but 3D graphics are a bit different, I think. "Workload distribution" was a major problem area that Vega was supposed to address, along with primitive rate / geometry (remember Primitive Shaders?). That never came to fruition due to issues with the design, the silicon, and/or the driver. That's what I heard. Vega is unfinished and missed the performance target AMD was aiming for (1080 Ti). But I digress. Ironic that it's called "Graphics" Core Next; it should be called CCN: "Compute Core Next". XD