Tuesday, January 22nd 2019
NVIDIA GeForce GTX 1660 Ti Put Through AoTS, About 16% Faster Than GTX 1060
Thai PC enthusiast TUM Apisak posted a screenshot of an alleged GeForce GTX 1660 Ti run of the Ashes of the Singularity (AoTS) benchmark. The GTX 1660 Ti, if you'll recall, is an upcoming graphics card based on the TU116 silicon, a derivative of the "Turing" architecture that lacks real-time ray tracing capabilities. Tested on a machine powered by an Intel Core i9-9900K processor, the AoTS benchmark was run at 1080p under DirectX 11. At this resolution, the GTX 1660 Ti returned a score of 7,400 points, which is roughly comparable to the previous-generation GTX 1070 and about 16-17 percent faster than the GTX 1060 6 GB. NVIDIA is expected to launch the GTX 1660 Ti sometime in spring-summer 2019 as a sub-$300 successor to the GTX 1060 series.
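As a rough sanity check on those percentages, the implied GTX 1060 6 GB baseline works out to about 6,300-6,400 points. A trivial sketch of the arithmetic (the baseline score is not in the screenshot; it is inferred purely from the quoted uplift):

    gtx_1660_ti_score = 7_400  # the score reported in the screenshot

    for uplift in (0.16, 0.17):
        # invert the "X percent faster" claim to recover the implied baseline
        implied_gtx_1060_score = gtx_1660_ti_score / (1 + uplift)
        print(f"{uplift:.0%} faster implies a GTX 1060 score of ~{implied_gtx_1060_score:,.0f}")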
Source:
TUM_APISAK (Twitter)
www.guru3d.com/news-story/nvidia-reduces-price-of-regular-gtx-1070.html
The real question is whether "TU116" is a different die as speculated, or a salvage yield of TU106 (with the RTX hardware fused off), given the purported 2060 PCB. I understand, especially given that the 2060/70 FE share the same PCB, but bandwidth scaling and Turing uarch improvements mean it does more with less.

For an 8 GB frame buffer there's a less-crippled TU106, the 2070, and the TU107 2050 with 4 GB. Marketing, I'm afraid.

It's likely predatory pricing would draw more undue attention than other marketing programs... ;)

More a case of Nvidia being relegated to the sidelines... There's the issue of not playing well with others and having burnt bridges with the big consoles (also with Apple, Intel, Dell/HP, AIBs, etc.). Add to that, their SoC sucked by comparison, and no x86 licence. Custom semi is a relatively low-margin but high-volume business that NV might prefer to avoid when they see lower-hanging, higher-margin fruit anyway.

Fixed. MC=MR.

Debatable. It's not the US of the 1900s, and Rockefeller only had 90% market power. The legal and regulatory burden of proof requires horizontal and vertical integration with market abuses. Given decades of deregulation and a generally right-leaning Supreme Court, good luck with that.
Sorry for the long multi-quote.
Honestly, I don't want to regress in VRAM capacity. My 570 was £150, already offers good 1080p performance, and has 8 GB, so I'm gonna pass on this. Thanks, Nvidia.
This card should be $199 at most. The 2060 should be a $250 part, the 2070 should be the $350 part and so on.
But no, NV got greedy... and rightly so. They're basically like Intel now, controlling most of the market and demanding inflated prices because they can get away with it.
It's not a lot of money. It's a luxury item. You don't need THE LATEST video card, and if you WANT it, you'll pay a premium. Welcome to the 21st century.
AMD struggled to make money on Vega 56/64 due to high production costs, and it's no accident their upcoming Radeon VII is priced at $700. While the price increase from Pascal to Turing might be a little more than just production costs (2080 Ti in particular), a sensible price is probably somewhere in the middle. While we want a sane competitive market, competition can also push prices too low and push parties out of the market.
Now, to Nvidia's benefit, yes, the dies are bigger and the production cost is higher. But they needed to stop forcing RTX down people's throats with a card that can barely do it and runs out of VRAM at 1080p+ ultra settings. There is simply no excuse for the 2060 being where it's at.
Because the GTX 1060 6 GB was a 120 W TDP card, while the RTX 2060 is 160 W TDP. (Fixed that number, I mistyped it, my bad.)
The RX 580 has a 185 W TDP, while the RX 590 is 175 W.
www.techpowerup.com/gpu-specs/
At first, perhaps, but their overall selling price on the interposer package is higher, as is the mark-up/profit versus selling a chip only. That adds to their bottom-line revenue.
This compression saves both bandwidth and memory usage, but the gains depend on the type of data. Most textures will hardly compress at all, but certain types of rendering buffers are mostly emptiness and can be compressed massively.
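To make that data-dependence concrete, here's a minimal Python sketch using CPU-side zlib as a stand-in (GPU delta color compression is a different, hardware-level scheme, but the principle that gains depend on the data is the same): noisy texture-like data barely compresses, while a mostly-empty buffer compresses enormously.

    import os
    import zlib

    size = 1 << 20                     # 1 MiB test buffers
    texture_like = os.urandom(size)    # noisy data, like a detailed texture
    render_buffer = bytearray(size)    # all zeros: a mostly-empty buffer
    render_buffer[:16] = b"\xff" * 16  # a handful of touched pixels

    for name, buf in (("texture-like", texture_like),
                      ("mostly-empty", bytes(render_buffer))):
        compressed = zlib.compress(buf)
        print(f"{name}: {size / len(compressed):.0f}x compression")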
2) $350 could be a "car payment" in some places and a monthly salary in others.
Thing is: Nvidia has been selling huge numbers of $100-200 GPUs, and there's no reason to stop now. Not making such cards will only push more people to consoles - a market in which Nvidia is not a big player. A GPU is a luxury item? Seriously? And you mix up "latest" with "greatest", right?
Sure, we could have a model where only $300+ GPUs are released and they go down in price after 2-3 years.
So, for example, instead of buying a GTX 1050 Ti, you could get a refreshed GTX 780.
It's a bit like what AMD is doing. And this leads to a situation where all cards (fast and slow) suck a lot of power. And where manufacturers have to support architectures for longer.
Anyway, people have chosen Nvidia's more purpose-built products. I don't understand why anyone would want them to become more like AMD, when we know this doesn't work.
Edit: Confirmed. At least looking at the 2070 and up, anyway. All make the gap bigger as the resolution goes up. This isn't a totally memory-constrained thing, but that was HBM's big thing... and at least in gaming, it's not doing much... compared to GDDR6 that isn't on a 192-bit bus. :)