Tuesday, January 22nd 2019

NVIDIA GeForce GTX 1660 Ti Put Through AoTS, About 16% Faster Than GTX 1060

Thai PC enthusiast TUM Apisak posted a screenshot of an alleged GeForce GTX 1660 Ti Ashes of the Singularity (AoTS) benchmark run. The GTX 1660 Ti, if you'll recall, is an upcoming graphics card based on the TU116 silicon, a derivative of the "Turing" architecture that lacks real-time ray tracing capabilities. Tested on a machine powered by an Intel Core i9-9900K processor, the AoTS benchmark was set to run at 1080p under DirectX 11. At this resolution, the GTX 1660 Ti returned a score of 7,400 points, which is roughly comparable to the previous-generation GTX 1070 and about 16-17 percent faster than the GTX 1060 6 GB. NVIDIA is expected to launch the GTX 1660 Ti sometime in spring-summer 2019 as a sub-$300 successor to the GTX 1060 series.
Source: TUM_APISAK (Twitter)
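
For context, here is a quick sketch (Python) of the arithmetic implied by the quoted numbers. The GTX 1060's AoTS score is not stated in the leak; the baseline below is back-calculated from the claimed 16-17 percent gap, purely for illustration.

```python
# Back-of-the-envelope check of the quoted numbers.
# The GTX 1060's AoTS score is NOT in the leak; it is back-calculated here
# from the claimed 16-17% gap, purely for illustration.

gtx_1660_ti_score = 7400  # leaked AoTS score at 1080p, DirectX 11

for pct_faster in (0.16, 0.17):
    implied_1060_score = gtx_1660_ti_score / (1 + pct_faster)
    print(f"+{pct_faster:.0%} faster -> implied GTX 1060 score of ~{implied_1060_score:,.0f}")
# +16% -> ~6,379 points, +17% -> ~6,325 points
```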

155 Comments on NVIDIA GeForce GTX 1660 Ti Put Through AoTS, About 16% Faster Than GTX 1060

#76
Casecutter
Gone are the days (and in some ways the gamers who remember them) when you got a decent performance jump at a reduced MSRP. If we're being generous, the GTX 1070 8 GB perhaps truly leveled off at around $350 in Nvidia's mind. Will a 16% increase for 15% less cash feel like progress? Let's hope competition from Navi can break such lackluster stagnation.

www.guru3d.com/news-story/nvidia-reduces-price-of-regular-gtx-1070.html
#77
OneMoar
There is Always Moar
I mean, 16% at the same TDP is better than what AMD has done, so yeah...
#78
steen
THANATOS: Having RT cores for anything under the 2060 is pointless in my opinion, so this is a good move. Even keeping the Tensor cores is questionable.
Then what of TU107? DLSS -> tensor cores.

The real question is whether "TU116" is a different die as speculated, or a salvage yield of TU106 (with RTX fused off), given the purported 2060 PCB.
What I don't like about the 2060 and this card is the 192-bit bus.
That is a big disadvantage because they can use either 3 GB or 6 GB of VRAM.
They should have used 256-bit, and then they could use 4 GB or 8 GB of VRAM. With a 33% wider bus they could use cheaper GDDR5 memory, at least for this card.
I understand, especially given the 2060/70 FE share the same PCB, but bandwidth scaling & TU uarch improvements mean it does more with less. For an 8 GB frame buffer there's a less crippled TU106, the 2070, & a TU107 2050 with 4 GB. Marketing, I'm afraid.
bug: The thing is, when you only have one or two competitors, you're really, really careful not to ruin them. Otherwise you'd become a monopoly and subject to all sorts of extra attention. It's less of a hassle to keep them around.
It's likely predatory pricing might draw more undue attention than other marketing programs... ;)
Incidentally (and this is just speculation on my side) that's why Nvidia has surrendered consoles to AMD: to provide AMD a lifeline when they were getting hammered on the PC.
More a case of Nvidia being relegated to the sidelines... There's the issue of not playing well with others and having burnt bridges with the big console makers (also with Apple, Intel, Dell/HP, AIBs, etc.). Add to that, their SoC sucked by comparison & they had no x86 licence. Custom semi is a relatively low-margin but high-volume business that NV might prefer to avoid when they see lower-hanging, higher-margin fruit anyway.
notb: No. Every company wants to be a monopolist, unless it's owned by the government. Being a monopolist ~~sucks~~ is awesome ~~and~~ but makes the whole business much more costly to the rest of the economy.
Fixed. MC=MR.
Not to mention the state would do everything possible to divide such a company anyway
Debatable. It's not the US of the 1900s & Rockefeller only had 90% market power. The legal & regulatory burden of proof requires horizontal & vertical integration with market abuses. Given decades of deregulation & a generally right-leaning Supreme Court, good luck with that.

Sorry for the long multi-quote.
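
As an aside on the 192-bit vs. 256-bit point quoted above, here is a minimal sketch of why bus width constrains the VRAM options, assuming one GDDR chip per 32-bit channel and the 4 Gbit/8 Gbit chip densities common at the time (clamshell configurations ignored):

```python
# One GDDR5/GDDR6 chip per 32-bit channel; chips came in 4 Gbit (0.5 GB)
# and 8 Gbit (1 GB) densities at the time. Clamshell mode and newer
# densities are ignored to keep the sketch simple.

CHIP_DENSITIES_GB = (0.5, 1.0)

def vram_options(bus_width_bits):
    chips = bus_width_bits // 32          # one chip per 32-bit channel
    return [chips * density for density in CHIP_DENSITIES_GB]

for bus in (192, 256):
    options = " or ".join(f"{gb:g} GB" for gb in vram_options(bus))
    print(f"{bus}-bit bus -> {bus // 32} chips -> {options}")
# 192-bit bus -> 6 chips -> 3 GB or 6 GB
# 256-bit bus -> 8 chips -> 4 GB or 8 GB
```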
#79
Ruru
S.T.A.R.S.
unikin: GeForce GTX 1660 Ti 6GB -> 1536 CU + 1770 MHz/6000 MHz -> 5.44 TFLOPs = $279
GeForce GTX 1660 Ti 6GB -> 1280 CU + 1785 MHz/4000 MHz -> 4.57 TFLOPs = $250

If true, NVidia just pooped on us again. Waiting 2+ years for a 15-20% performance increase? Shame, shame, shame on you NGreedia!
I doubt it; a typical gamer probably wouldn't know what he/she is buying. At least 1060 3GB vs 6GB was "more is more", since the 3GB had fewer shaders along with its lower VRAM.
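
For anyone wanting to check the TFLOPS figures in the quoted rumor, this is the standard arithmetic (2 FLOPs per CUDA core per clock); the shader counts and clocks are the rumored, unconfirmed specs:

```python
# TFLOPS = shaders * clock (GHz) * 2 (one fused multiply-add per core per clock).
# The shader counts and clocks are the rumored specs from the quote, not confirmed.

def tflops(shaders, core_clock_mhz):
    return shaders * (core_clock_mhz / 1000) * 2 / 1000

print(f"{tflops(1536, 1770):.2f} TFLOPS")  # ~5.44, the $279 entry
print(f"{tflops(1280, 1785):.2f} TFLOPS")  # ~4.57, the $250 entry
```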
#80
ArbitraryAffection
<£200 and it could be interesting. Btw I think the performance in Ashes isn't a good overall test of this thing. Let's wait for proper reviews. I expect it will be 30%+ faster in most games.

Honestly I don't want to regress in VRAM capacity, and my 570 was £150, already offers good 1080p performance, and has 8GB, so I'm gonna pass on this, thx Nvidia.
#81
OneMoar
There is Always Moar
Nvidia's lossless memory compression more than quadruples effective memory bandwidth in a lot of workloads; bus width doesn't really matter anymore.
#82
illli
This whole generation of cards has sucked. Price/performance has gone way down.
This card should be $199 at most. The 2060 should be a $250 part, the 2070 should be the $350 part and so on.
But no, NV got greedy... and rightly so. They're basically like Intel now, controlling most of the market and demanding inflated prices because they can get away with it.
#83
Blueberries
notb: 2060 priced very affordably? It's a mid-range card, placed above the previous generation (1060 was $250 at launch).
So Nvidia now needs 2 chips below - actual successors to 1050 and 1060.

I expected them to just go for 2030 and 2050, but it seems the 2000-series is RTX only.
1660 is going to replace 1060 and, clearly, there has to be some "1550" in the works as well.
I don't expect to get a 2019 Corvette for the same price as the 2016 Corvette either. The 2060 is priced correctly; I don't know why everyone expects everything handed to them. $350 is not a lot of money, that's a car payment.
#84
Vya Domus
Blueberries: $350 is not a lot of money
Maybe, who is to say?
#85
Blueberries
Vya Domus: Maybe, who is to say?
Anyone who's bought a house, car, refrigerator, a new set of furniture, a mattress...

It's not a lot of money. It's a luxury item. You don't need THE LATEST video card, and if you WANT it, you'll pay a premium. Welcome to the 21st century.
#86
Vya Domus
Blueberries: Anyone who's bought a house, car, refrigerator, a new set of furniture, a mattress...

It's not a lot of money. It's a luxury item. You don't need THE LATEST video card, and if you WANT it, you'll pay a premium. Welcome to the 21st century.
Maybe you also don't need a car or a new set of furniture. It may not be a lot of money for you, but that's not the case for everyone; it's a premium if you think it is. This is purely subjective.
#87
Blueberries
Vya Domus: Maybe you also don't need a car or a new set of furniture. It may not be a lot of money for you, but that's not the case for everyone; it's a premium if you think it is. This is purely subjective.
That was the point.
#88
efikkan
While most agree that Turing could use a little price cut, we still have to acknowledge the fact that production costs in general are increasing. If we are to expect significant improvements in the coming years, then we probably have to accept minor price increases, as new nodes and new memory types are increasingly expensive. We seem to be past the point where the benefits from new nodes are great enough to offset higher wafer costs.

AMD struggled to make money on Vega 56/64 due to high production costs, and it's no accident their upcoming Radeon VII is priced at $700. While the price increase from Pascal to Turing might be a little more than just production costs (2080 Ti in particular), a sensible price is probably somewhere in the middle. While we want a sane competitive market, competition can also push prices too low and push parties out of the market.
#89
Nkd
OneMoar: Nvidia's lossless memory compression more than quadruples effective memory bandwidth in a lot of workloads; bus width doesn't really matter anymore
Are you saying you don't need more VRAM because of that? If so, you can have all the bandwidth you want, but running out of VRAM is just that: running out of it. Check out GTX 1060 ray tracing or ultra-setting benchmarks. HardOCP did an in-depth analysis and the 2060 was choking; those minimum frames were basically it running out of memory. The issue isn't even the 6GB for me; the issue is charging $350-400 for cards with only 6GB of RAM. Hard to recommend the 2060 at this point.

Now, to Nvidia's credit, yes, the dies are bigger and the production cost is higher. But they needed to stop forcing RTX down people's throats with a card that can barely do it and runs out of RAM at 1080p+ ultra settings. There is simply no excuse for the 2060 being where it's at.
#90
OneMoar
There is Always Moar
What on earth do you need more than 6GB of VRAM for at 1440p?
#91
Casecutter
OneMoar: I mean, 16% at the same TDP is better than what AMD has done, so yeah
How do we know the TDP of this supposed GTX 1660... has that been divulged yet?
'Cause the GTX 1060 6GB was a 120W TDP, and the RTX 2060 is a 160W TDP. (Fixed that number, I mistyped it, my bad.)
The RX 580 is a 185W TDP, while the RX 590 is 175W TDP.

www.techpowerup.com/gpu-specs/
efikkan: AMD struggled to make money on Vega 56/64 due to high production costs
At first, perhaps, but their overall sale price on the interposer package is higher, as is the mark-up/profit versus a chip only. That adds to their bottom-line revenue.
#92
OneMoar
There is Always Moar
Casecutter: How do we know the TDP of this supposed GTX 1660... has that been divulged yet?
'Cause the GTX 1060 6GB was a 160W TDP, and the RTX 2060 is a 160W TDP.
The RX 580 is a 185W TDP, while the RX 590 is 175W TDP.

www.techpowerup.com/gpu-specs/

At first, perhaps, but their overall sale price on the interposer package is higher, as is the mark-up/profit versus a chip only. That adds to their bottom-line revenue.
The TDP on the 1060 6GB is 120W. Knowing that, and knowing what the TDP on the 2060 is, it should be in that ballpark.
#93
efikkan
Nkd:
OneMoar: Nvidia's lossless memory compression more than quadruples effective memory bandwidth in a lot of workloads; bus width doesn't really matter anymore
Are you saying you don't need more VRAM because of that? If so, you can have all the bandwidth, but running out of VRAM is just that: running out of it.
I just want to add that, while memory compression doesn't help as much as OneMoar claims, Nvidia does draw more benefit from its compression than AMD does.
This compression saves both bandwidth and memory usage, but the gains depend on the type of data. Most textures will not be compressed at all, but certain types of rendering buffers are mostly emptiness and can be massively compressed.
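
As a toy illustration of that last point (and emphatically not Nvidia's actual compression scheme), a simple run-length encoder shows why a mostly-empty render buffer compresses well while a noisy texture does not:

```python
# Toy run-length encoder -- NOT Nvidia's actual algorithm -- showing why a
# mostly-uniform render target shrinks a lot while a noisy texture does not.

from itertools import groupby

def rle_runs(pixels):
    """Number of (value, run_length) pairs needed to store the block."""
    return sum(1 for _ in groupby(pixels))

mostly_empty_buffer = [0] * 250 + [255, 128, 64] + [0] * 3  # e.g. a nearly cleared buffer
noisy_texture = list(range(256))                            # every pixel different

print(rle_runs(mostly_empty_buffer), "runs for", len(mostly_empty_buffer), "pixels")  # 5 runs
print(rle_runs(noisy_texture), "runs for", len(noisy_texture), "pixels")              # 256 runs, no savings
```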
#94
notb
Blueberries: I don't expect to get a 2019 Corvette for the same price as the 2016 Corvette either. The 2060 is priced correctly; I don't know why everyone expects everything handed to them. $350 is not a lot of money, that's a car payment.
1) Not everyone wants to spend a lot on a PC (it's not that important in their life).
2) $350 could be a "car payment" in one place and a monthly salary in another.

Thing is: Nvidia has been selling huge numbers of $100-200 GPUs and there's no reason to stop doing so now. Not making such cards will only push more people to consoles - a market in which Nvidia is not a big player.
Blueberries: It's not a lot of money. It's a luxury item. You don't need THE LATEST video card, and if you WANT it, you'll pay a premium. Welcome to the 21st century.
A GPU is a luxury item? Seriously? And you mix up "latest" with "greatest", right?

Sure, we could have a model where only $300+ GPUs are released and they go down in price after 2-3 years.
So, for example, instead of buying a GTX 1050Ti, you could get a refreshed GTX 780.
It's a bit like what AMD is doing. And this leads to a situation where all cards (fast and slow) suck a lot of power. And where manufacturers have to support architectures for longer.
Anyway, people have chosen Nvidia's more purpose-built products. I don't understand why anyone would want them to become more like AMD, when we know this doesn't work.
#95
Nkd
OneMoar: What on earth do you need more than 6GB of VRAM for at 1440p?
Really? lol! Games are using more and more VRAM, not less. Look at the HardOCP review and how the card choked on some of the games at ultra settings, especially with RTX on. I wouldn't touch a card with less than 8GB of VRAM these days. Even 1080p ultra can choke the gameplay experience, where you get minimum-fps drops and stutter as the card has to swap data in and out of memory.
#96
Vya Domus
OneMoar: Nvidia's lossless memory compression more than quadruples effective memory bandwidth in a lot of workloads
That is compression for color data. Shaders do a lot more than that; Nvidia has nowhere near the advantage that you claim (by the way, where did you even get that from?). And also, everyone has color compression, even low-power ARM GPU cores.
OneMoar: bus width doesn't really matter anymore
Bus width is critical. Nvidia and AMD would use even more memory chips if they could, but they are limited by PCB layouts. HBM also says hi; that memory technology relies a lot on very wide bus connections.
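
For reference, raw memory bandwidth is just bus width times per-pin data rate; the data rates below are the commonly listed ones for these cards:

```python
# Raw bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps.
# Data rates below are the commonly listed ones for these cards.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(f"GTX 1060 6GB (192-bit GDDR5 @ 8 Gbps):  {bandwidth_gb_s(192, 8):.0f} GB/s")   # 192 GB/s
print(f"RTX 2060     (192-bit GDDR6 @ 14 Gbps): {bandwidth_gb_s(192, 14):.0f} GB/s")  # 336 GB/s
print(f"Hypothetical 256-bit GDDR6 @ 14 Gbps:   {bandwidth_gb_s(256, 14):.0f} GB/s")  # 448 GB/s
```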
#97
EarthDog
Vya Domus: Bus width is critical
To what? High-res gaming shows GDDR5X and GDDR6 have plenty. Remember when HBM was first announced and didn't show gains at low resolutions... but when it went against the same cards at higher res, it closed the gap. Last I recall that doesn't happen with GDDR5X/6... is my memory messing with me?
#98
Vya Domus
EarthDog: To what?
Memory bandwidth performance.
#99
EarthDog
That's what I thought... the rest of my post applies. :)

Edit: Confirmed. At least looking at 2070+ anyway. All make the gap bigger as res goes up. This isn't a totally memory-constrained thing, but that was HBM's big thing... and at least in gaming, it's not doing much... compared to GDDR6 that isn't on a 192-bit bus. :)
#100
Vya Domus
Confirmed what? I seriously don't know what you are on about. I simply said the bus width is a critical factor in deciding how the memory subsystem will perform when coupled with any GPU.