
NVIDIA TU106 Chip Support Added to HWiNFO, Could Power GeForce RTX 2060

It's kind of saddening that there are people who don't even own a sample card complaining about Nvidia's new Turing chip. Have they benched all 3 cards? Nope. Have they pre-ordered one and waited patiently? Nope. Have they even signed the NDA? Nope. Are they judging the new GPU's performance just by looking at the paper specs and going "oh, this is not good. Boo!"? Yep. A bunch of triggered, immature kids who own Pascal cards are moaning over what Nvidia has been doing. Don't like it? Keep it to yourselves.

It's even more saddening to see others yell 'pre order nao!' before performance is known. That just needs culling and this is what we're doing. It's all about peer pressure - that is why Nvidia tries to get 'influencers' on the net to join their dark side, using free hardware and bags of money. That should raise eyebrows far more than people here predicting (conservative) performance numbers: an architecture such as Pascal literally just sold itself because it was a leap forward. Turing really is not and this is clear as day.

Realistically, looking at the numbers, we have historically been very accurate at making assumptions about performance, simply because the low-hanging fruit in CUDA really is gone now. We won't see tremendous IPC jumps, and if we do, they will cost something else that also provides performance (such as clock speeds). It's really not rocket science, and the fact is, if you can't predict Turing performance with some accuracy, you just don't know enough about GPUs.
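
To make that concrete, here is a minimal back-of-envelope sketch (in Python) of the kind of paper-spec estimate being described. The shader counts and boost clocks are approximate published figures for the GTX 1080 and RTX 2080, and the IPC factor is a placeholder assumption, not a measurement:

# Naive relative-performance guess from paper specs only.
# Spec numbers are approximate published figures; ipc_factor is an assumption.
def paper_spec_estimate(cores_new, clock_new, cores_old, clock_old, ipc_factor=1.0):
    return (cores_new * clock_new * ipc_factor) / (cores_old * clock_old)

# GTX 1080: 2560 CUDA cores, ~1733 MHz boost
# RTX 2080: 2944 CUDA cores, ~1710 MHz boost (reference)
baseline = paper_spec_estimate(2944, 1710, 2560, 1733)
print(f"~{(baseline - 1) * 100:.0f}% from raw specs")  # roughly +13%

# Even a generous assumed 15% IPC uplift only lands around +30%,
# which is why conservative predictions are easy to defend.
generous = paper_spec_estimate(2944, 1710, 2560, 1733, ipc_factor=1.15)
print(f"~{(generous - 1) * 100:.0f}% with an assumed 15% IPC uplift")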

So let's leave it at that, okay? This has nothing to do with triggered immature kids - in fact, that is the group spouting massive performance gains based on keynote shouts and powerpoint slides. The idiots who believe everything Nvidia feeds them. The Pascal card owners have nothing to be unhappy about - with Turing's release, the resale value of their cards will remain stagnant even though they become older. That is a unique event and one that will cause a lot of profitable used Pascal card sales. I'm not complaining!

If you don't like it, don't visit these threads or tech forums in general...
 
It's even more saddening to see others yell 'pre order nao!' before performance is known. That just needs culling and this is what we're doing. It's all about peer pressure - that is why Nvidia tries to get 'influencers' on the net to join their dark side, using free hardware and bags of money.
Tsukiyomi91 did no such thing, and if you read his/her posts you'll see it was criticism of anyone prejudging the performance.

an architecture such as Pascal literally just sold itself because it was a leap forward. Turing really is not and this is clear as day.
Pascal had the advantage of a node shrink, and it enabled much more aggressive boosting. Turing is a refinement of Volta, which was the first major new architecture since Maxwell; that is clear as day.

Realistically, looking at the numbers, we have historically been very accurate at making assumptions about performance, simply because the low-hanging fruit in CUDA really is gone now. We won't see tremendous IPC jumps, and if we do, they will cost something else that also provides performance (such as clock speeds). It's really not rocket science, and the fact is, if you can't predict Turing performance with some accuracy, you just don't know enough about GPUs.
And those estimates have usually been off when comparing across architectures. Most estimates for Kepler were completely off; that was back when Nvidia moved from their old "hot clock" to a new redesigned SM. Maxwell was not in line with predictions either.
Turing has the largest SM change since Kepler. Throughput is theoretically doubled: expect >50% in compute workloads and probably somewhat less in gaming.
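
To illustrate why a theoretical doubling shrinks in games, here is a small sketch. It assumes the gain comes from issuing INT32 work concurrently with FP32 instead of serially, and the integer-to-FP instruction ratio is a purely illustrative parameter, not a measured figure:

# Simplified model: old SM serializes INT and FP work, new SM overlaps them.
# int_per_fp is an illustrative assumption, not a measured instruction mix.
def concurrent_issue_speedup(int_per_fp):
    old_time = 1.0 + int_per_fp          # INT and FP share issue slots
    new_time = max(1.0, int_per_fp)      # INT runs alongside FP
    return old_time / new_time

print(concurrent_issue_speedup(1.0))     # 2.0x when INT and FP pressure are equal
print(concurrent_issue_speedup(1 / 3))   # ~1.33x for a lighter, more game-like INT mix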
 
Tsukiyomi91 did no such thing, and if you read his/her posts you'll see it was criticism of anyone prejudging the performance.


Pascal had the advantage of a node shrink, and it enabled much more aggressive boosting. Turing is a refinement of Volta, which was the first major new architecture since Maxwell; that is clear as day.


And those estimates have usually been off when comparing across architectures. Most estimates for Kepler were completely off; that was back when Nvidia moved from their old "hot clock" to a new redesigned SM. Maxwell was not in line with predictions either.
Turing has the largest SM change since Kepler. Throughput is theoretically doubled: expect >50% in compute workloads and probably somewhat less in gaming.
Hype, heard of it? You're being contradictory too: double is not 50%, and as ever that's in select workloads. By the same method, with double the compute my Vega would be a winner, but in reality it is not while gaming, though it's not as bad as some say either.

Hype sells.
And picking out edge cases, such as when hot-clock shaders came in, doesn't help; they also brought hot chips (which you neglected to mention) and their own drama.
People don't guess too wide of the mark here imho.
 
DLSS performance could potentially be very important to me since I use a lot of AA even at 1440p on a 24" monitor, but in order to say that 2x figure applies to what I experience, it's gonna have to be supported in more than a bunch of games. Still, if the 20 series has an advantage in DLSS, Vulkan/DX12, and HDR-mode performance, on top of let's say 40% over last gen, its overall performance increase compared to Pascal will look pretty good across the board, even without adding RT into the equation.
 
In the case that all of that positive stuff happens, is it still worth the massive price increase though?
 
No one is looking at paper specs when referring to RT performance, they're basing it off how it actually performed. I'd say it's a lot worse to be extolling Turing's virtues than it is to maintain healthy skepticism when performance is unknown.
That "actually performed" has a huge asterisk next to it. This opinion is based on early versions of Shadow of Tomb Raider, Metro Exodus and Battlefield V, in that order. Assuming that developers (who had the actual cards for around two weeks before the event) do no further optimizations. Convienently things that do work get ignored. Pica Pica comes to mind as do a bunch of professional pieces of software that do not necessarily do real-time raytracing but offer considerable speedups in normal rendering, Autodesk Arnold for example. Because Quadro release was a bit earlier and software companies were more involved they had more time to implement RT bits and pieces.

Anyhow, slides will be out on Friday and reviews next Monday. We will see how things fare.
Games and RT-wise, we will probably have to wait for a couple months until first games with DXR support are out.

In the case that all of that positive stuff happens, is it still worth the massive price increase though?
Nope. It is a balancing act though. RTX cards are priced into the enthusiast range and beyond; this is the target market that traditionally is willing to pay more for state-of-the-art hardware. Whether Nvidia's gamble will pay off in this generation remains to be seen.

DLSS performance could potentially be very important to me since I use a lot of AA even at 1440p on a 24" monitor, but in order to say that 2x figure applies to what I experience, it's gonna have to be supported in more than a bunch of games. Still, if the 20 series has an advantage in DLSS, Vulkan/DX12, and HDR-mode performance, on top of let's say 40% over last gen, its overall performance increase compared to Pascal will look pretty good across the board, even without adding RT into the equation.
40% is optimistic. No doubt Nvidia chose the best performers to put onto their slides. Generational speed increases have been around 25% for a while and it is likely to be around that this time as well, across a large number of titles at least.

DLSS is a real unknown here. In theory, it is free-ish AA (due to being run on Tensor cores), which would be awesome considering how problematic AA is in contemporary renderers. Whether it will pan out as well as current marketing blabber makes it out to be remains to be seen. The potential is there.
 
That "actually performed" has a huge asterisk next to it. This opinion is based on early versions of Shadow of Tomb Raider, Metro Exodus and Battlefield V, in that order. Assuming that developers (who had the actual cards for around two weeks before the event) do no further optimizations. Convienently things that do work get ignored. Pica Pica comes to mind as do a bunch of professional pieces of software that do not necessarily do real-time raytracing but offer considerable speedups in normal rendering, Autodesk Arnold for example. Because Quadro release was a bit earlier and software companies were more involved they had more time to implement RT bits and pieces.

Anyhow, slides will be out on Friday and reviews next Monday. We will see how things fare.
Games and RT-wise, we will probably have to wait for a couple months until first games with DXR support are out.
It's still the only real information we have on these cards. I'd be against pre-ordering these even if we had more information, because it's daft to pre-order a GPU. We should hopefully have 3DMark's RT benchmark at or around launch.
 