Wednesday, January 16th 2019
NVIDIA Readies GeForce GTX 1660 Ti Based on TU116, Sans RTX
It looks like RTX technology won't make it to sub-$250 market segments, as the GPUs aren't fast enough to handle real-time raytracing, and it makes little economic sense for NVIDIA to add billions of additional transistors for RT cores. The company is hence carving out a sub-class of "Turing" GPUs under the TU11x ASIC series, which will power new GeForce GTX family SKUs, such as the GeForce GTX 1660 Ti, and other GTX 1000-series SKUs. These chips offer "Turing Shaders," which are essentially CUDA cores with IPC and clock speeds rivaling those of existing "Turing" GPUs, but no RTX capabilities. To sweeten the deal, NVIDIA will equip these cards with GDDR6 memory. These GPUs could still include tensor cores, which are needed to accelerate DLSS, a feature highly relevant to this market segment.
The GeForce GTX 1660 Ti will no doubt be slower than the RTX 2060, and will be based on a new ASIC codenamed TU116. According to a VideoCardz report, this 12 nm chip packs 1,536 CUDA cores based on the "Turing" architecture, and exactly the same memory setup as the RTX 2060: 6 GB of GDDR6 memory across a 192-bit wide memory interface. The lack of RT cores and a lower CUDA core count could make the TU116 a significantly smaller chip than the TU106, and something NVIDIA can afford to sell at sub-$300 price-points such as $250. The GTX 1060 6 GB is holding the fort for NVIDIA in this segment, with other GTX 10-series SKUs, such as the GTX 1070, occasionally dropping below the $300 mark at retailers' discretion. AMD recently improved its sub-$300 portfolio with the introduction of the Radeon RX 590, which convincingly outperforms the GTX 1060 6 GB.
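As a back-of-envelope check on the reported memory setup: the report only confirms GDDR6 on a 192-bit bus, so the per-pin data rate below is an assumption, carried over from what the RTX 2060 ships with.

```python
# Sketch: memory bandwidth implied by the reported TU116 setup.
# Assumption: 14 Gbps GDDR6, the RTX 2060's shipping speed; the
# report confirms only the memory type and the 192-bit bus width.

bus_width_bits = 192
data_rate_gbps = 14                       # assumed per-pin data rate

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gb_s:.0f} GB/s")       # 336 GB/s, same as the RTX 2060
```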
Source: VideoCardz
64 Comments on NVIDIA Readies GeForce GTX 1660 Ti Based on TU116, Sans RTX
The biggest question is really just what name they will give it. It seems like they want to separate RTX 2xxx from everything else very clearly, which also hints at RTX's future - not very rosy. If they keep it segmented off as a high-end proposition, it will die. For the current high end, we may yet see them fill up the entire product stack with non-RT-capable Turing cards. Don't be surprised if they do - after all, shareholders rule the game, not gamers. If they can't sell Turing RTX cards, they will start pushing alternatives. They might also not be so keen on lowering prices, because Turing RTX dies are still massive, yields won't be fantastic, and a non-RTX die can easily be 20% smaller.
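The yield argument is easy to quantify. A minimal sketch using the simple Poisson yield model (yield = exp(-D0 * A)), taking TU106's known ~445 mm² die as the baseline; the defect density is an illustrative guess, not a published TSMC figure.

```python
import math

# Poisson yield model: fraction of defect-free dies = exp(-D0 * A).
D0 = 0.15                             # assumed defects per cm^2 (illustrative)
tu106_area_cm2 = 445 / 100            # TU106 is ~445 mm^2
tu11x_area_cm2 = tu106_area_cm2 * 0.8 # the "20% smaller" estimate above

for name, area in [("TU106", tu106_area_cm2), ("TU11x", tu11x_area_cm2)]:
    print(f"{name}: {math.exp(-D0 * area):.1%} of dies defect-free")
# TU106: ~51.3% vs TU11x: ~58.6% under these assumptions - and the
# smaller die also yields more candidates per wafer, compounding the gain.
```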
I just need to look at my own situation: RTRT is a complete dud so far, my 1080 runs everything I throw at it, and if I were high-res gaming, a 2080 would be the only possible move forward - but it's already priced way out of my comfort zone. 1080 Tis are pretty much gone by now. So what's left? A 2070 for nonexistent RT and zero extra performance? A Radeon VII if I'm really desperate? And all that trouble just to get 1080 Ti performance two years after it released? Nvidia can't explain to any shareholder that the entire Pascal high-end customer base has no incentive to upgrade within their product stack. Radeon VII, even if it's only a symbolic launch, is right there ready to throw a spanner in the works up to 2080 levels as well - even just the chance of AMD changing course and pushing more of those out at aggressive pricing can be enough to make buyers wait it out. Shareholders aren't going to buy the RTX marketing BS like some consumers did - they're not going to sit there and wait because Nvidia says so. Money rules: look at their stock chart, factor in Nvidia's outlook, and imagine you're holding a bunch of shares - buy, keep, or sell? I'd be selling right now; Nvidia isn't going to release another Pascal anytime soon, and mining is dead.
Remember, you're looking at a company whose stock price took a nosedive this year, setting it back to 2H2017 levels. All bets are off.
Even on this very forum there's a whole bunch of users who upgraded to a 2070 or 2080 because the 1080 Tis were all gone or priced out of reach. That alone answers your questions perfectly. People don't buy RTX for the RTX; they buy it for the performance over what they had before.
Nvidia sincerely believed they had it all sorted out with RTX: they own the high-end market, and they were going to give it something new to spend on. It is pure arrogance, and it always meets the same end: a slow, painful death. G-Sync is a recent example of that - they're still trying to market it as something on another level than VRR, while everybody with two brain cells knows it's not. PhysX: the only part of it still alive is the shared CPU code, not the proprietary GPU acceleration. GameWorks: the only tech that survives more than a handful of games is what gets picked up as an industry standard - stuff like FXAA, for example. That is also the irony of all that proprietary tech: if it's truly good, it's almost impossible not to make it a standard; the market almost forces you to.
So, why 11xx after 20xx? Right now you've got tons of gamers with fat wallets sitting on their money, waiting for something worth spending on. They still want their high-refresh 1440p, they still want their 4K ultra, and they still want their 1080p120+, and they are done paying a premium for it when there is no need to. 11xx can easily capture that market right now; minds are ready for it. Look at how happy people (even W1zzard) became when the 2060 rivalled 1080 performance at a $350 price point. Progress! An actually interesting product in an otherwise desolate landscape. I will say this: when an Nvidia x60 card is the greatest reason for joy, something is very, very wrong with your proposition :D
i knew it... lol
Meanwhile the prices kept getting worse, and we are guilty too. People are going crazy because the 7th, I repeat, 7th-fastest GPU in the world right now costs "only" €420 here in Europe (the RTX 2060). So they buy them like hot cakes. Pathetic.
It is too soon to bury RTX, my fren. Have you seen the recently released Atomic Heart RTX demo? Man, I never want to see those old geometric fake shadows in a game again. There's a lot of misunderstanding in the community regarding RTX: there's a huge chunk of people who believe RTX is nothing more than a dumb reflection in a puddle. I've been analyzing the RT in the Metro Exodus footage released last year, and it's absolutely magnificent.
The recent Atomic Heart demo showcased a new side of RT, I didn't know it could have such a beautiful effect on one-directional light (and the resulting shadows).
People are only checking the damn FPS numbers, while completely underestimating the beautiful results of RT.
Too soon - I'm not ready to bury RT, and no, it's not a dud. I'll gladly take an FPS hit in exchange for immersive and realistic graphics/shadows.
- I'm looking at 12 nm (already maxed out) and then a 7 nm node after that... and nothing else in the pipeline for the foreseeable future. I still haven't seen how RTX is going to hit the masses at a reasonable price point. The 2060 certainly isn't it, and we will certainly need more than 10 gigarays (2080 Ti) to get that demo we've seen running interactively at decent FPS - decent being 40-60, not even 60 locked. Port Royal averages 37 FPS using those 10 gigarays, on the largest gaming GPU die ever, at a resolution that is old news (see the back-of-envelope sketch after this list).
- There are many other things that benefit the visual appearance of gaming, and they want a LOT of GPU horsepower too - but they are readily available in terms of content and require no extra dev work either. The aforementioned 1440p120+, the 4K60 experience, the ever-larger ultrawides. Those are things people buy other hardware for; it's a primary concern. RTRT is just a quality setting, and it's fighting an uphill battle against these resolutions and refresh rates, because when people buy into those, they want them in their games too - with or without RT. Will you really downscale your content just to have RT at 25% of the resolution you can actually play at (4K > 1080p)? Or suffer bad pixel mapping because 1440p won't scale nicely to 1080p? Oh yeah, and don't forget it's on a per-game basis too.
- Consoles, so far, aren't moving along. The majority of the gaming market is based on console content releases, which ties into the chicken-and-egg situation with RT-enabled content. Today there's barely anything to see, and even the list of games that will use it is very limited, and PC-only. PC-only also means the target market for every dev that uses RTRT is limited - not very attractive to spend money on. It's not going to make you a different game, and you can't charge a premium for it either. And here's the kicker: DXR is DirectX = Xbox only. You know, the console that is losing the war every time, designed by a company that is still pushing all the wrong buttons for gamers at large.
So... nope. Not happening.
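The back-of-envelope sketch referenced above: a loose upper bound on the per-frame ray budget implied by those Port Royal numbers, assuming (generously) that the full quoted 10 gigarays/s is sustained, which real workloads don't achieve.

```python
# Upper bound on the ray budget behind the quoted Port Royal result.
gigarays_per_s = 10e9        # NVIDIA's quoted RTX 2080 Ti figure
fps = 37                     # the Port Royal average cited above
pixels = 2560 * 1440         # Port Royal renders at 1440p

rays_per_frame = gigarays_per_s / fps
rays_per_pixel = rays_per_frame / pixels
print(f"{rays_per_frame / 1e6:.0f} M rays/frame, "
      f"{rays_per_pixel:.0f} rays/pixel at best")
# ~270 M rays/frame, ~73 rays/pixel as a theoretical ceiling - one
# that shrinks fast at 4K (4x the pixels) or at a locked 60+ FPS.
```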
The TU116 is already cut down by 20% at the hardware level, putting it below a plain GTX 1070 if they don't touch the clocks.
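For reference, the 20% figure matches the reported core counts (treating per-core throughput as equal, which Turing's IPC gains over Pascal complicate):

```python
# Where the 20% cut comes from: the reported TU116 core count against
# the GTX 1070's (the RTX 2060 carries the same 1,920-core count).
tu116_cores = 1536
gtx1070_cores = 1920
print(f"{1 - tu116_cores / gtx1070_cores:.0%} fewer CUDA cores")  # 20%
```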
It's just a jab at Polaris v1.3, as the GTX 1060 (GP106- or GP104-based) is on its way out / off the shelves.
This thing, if it's coming at all, is just there to bridge the price and performance gap between the RTX 2050 (if we get one) and the RTX 2060.
www.gamersnexus.net/hwreviews/3427-nvidia-rtx-2060-founders-edition-review-benchmark-vs-vega-56
The lack of "bus width" has nothing to do with frametime consistency; the GDDR6 memory makes up for that.
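For context, a quick comparison of shipping bandwidth figures shows why the narrower bus isn't the bottleneck it sounds like; the data rates below are the speeds each card ships with.

```python
# Effective bandwidth = bus width (bits) * data rate (Gbps) / 8.
cards = {
    "RTX 2060 (192-bit GDDR6)":      (192, 14),
    "GTX 1070 (256-bit GDDR5)":      (256, 8),
    "GTX 1060 6 GB (192-bit GDDR5)": (192, 8),
}
for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {bus_bits * gbps / 8:.0f} GB/s")
# 336 vs 256 vs 192 GB/s: the faster GDDR6 more than offsets the
# narrower bus, which is the point about frametime consistency above.
```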
Frametime inconsistency might happen because of the limited 6 GB framebuffer at 4K, but it's very rare.
The RTX 2060 is a 1080p/1440p card after all, and even if you want to game at 4K with it you're going to lower the settings anyway, so RTX 2060 benchmarks at 4K/ultra in recent demanding games are of little use.
I see your point: some very memory-intensive games, such as Wolfenstein II, do need more than 6 GB to run at higher resolutions without problems, but that doesn't mean the RTX 2060 has frametime issues in general.