
NVIDIA TU106 Chip Support Added to HWiNFO, Could Power GeForce RTX 2060

It's gotten to the point where other methods are probably far superior. And games are going to need 12 threads to run properly, because RT has to be processed on the CPU. Surprise: the video card is worthless for what it was supposedly made for.

I told you, they're scam cores, and it's all for marketing. Real-time reflections aren't new, but robbing this much performance is a joke.

It's time for AMD to have devs add Vega support and release an RT driver, lol. I'm practically convinced AMD's performance can't be worse.

LOL, honestly, when I saw the Battlefield demo I was like, oh cool, and the Metro demo too. But then I played Strange Brigade and I was like, is this thing doing ray tracing on my GTX 1080? Hahaha. I guess I was just paying too much attention to shadows, because it was all about light and shadows. I was like, but Strange Brigade looks just like that, it's so shiny and has great lighting. Hahaha.
 
It should only be used for shadows; then maybe it wouldn't suck. It should be offloading rasterization and increasing performance, not this crap.
 
Well, if the past is any indication, don't worry. When PhysX came out, programmers forgot how to program physics on the CPU and absolutely could not do physics without PhysX. Programmers will soon forget how to do rays and shadows without ray tracing.
 
Would you not want to see RTX support on the most popular GPUs? If the majority of end users won't use it, why would developers support it?

There is a difference between having RTX and actually being able to use it. If the RTX 2080 Ti barely has the power to run things at a fluid framerate, you can be sure the RTX 2060 won't run at any playable framerate with RTX enabled. So, literally, what's the point? It's about the same as having a pixel-shader graphics card back in the day that was barely capable of running games even without pixel shaders, let alone with them, because you ended up watching slideshows instead of a real-time game...

I'd be interested to see how GCN fares with ray tracing. It would be hilarious if it turned out an R9 Fury is as fast as some RTX card with RTX enabled XD
 
So is TU106 1/2 of TU102, or 1/3?

TU104 would not have existed if this generation had been built on 7 nm, you see.

That is, in case the 2070 is TU106. On 7 nm, TU106 would become TU206 with a 192-bit bus, and TU102 becomes TU204 with a 256-bit bus and twice the core count.

TU104 only exists to counter a possible Vega 64 on 7 nm with 1.2 GHz HBM2.
 
IMO, they can't/shouldn't brand a card as RTX if it doesn't have the extra hardware they are claiming for the 2080/Ti. If it doesn't have that, it should stay GTX, or it's very misleading.

Given this is NV though, it would not surprise me if they did this even without the hardware, doing the RT via the regular cores, and as a result had terrible RT performance. This could be "3.5 GB" all over again if they do that.

I'm really hoping AMD will put DXR etc. support in a driver release soon (maybe the big end-of-year one) for Vega. It will be interesting to see how that performs and whether it supports more than one card, e.g. rasterising done on one card and RT overlay effects on the second, or similar.

It would be quite funny if the next driver had it and was released in the next week or so, just before the 20th launch.
 
All the CPU cores you can get still can't achieve the raytracing efficiency of Turing. CPU raytracing is strictly for non-realtime.

What's the point of AMD and Nvidia releasing low-end GPUs with 4 GB of memory and full API support, including tessellation, which they have no realistic way of using? What was the point of AMD boasting about Direct3D 12 support in early GCN models?
It's the chicken-and-egg problem: you need wide hardware support to get games to use a feature, even if it means most cards will have limited to no real use for it.

Useful or not, the support in the Turing GPUs is not going to hurt you. Even if you never use the raytracing, Turing is still by far the most efficient and highest-performing GPU design. The only real disadvantages are the larger dies and a very marginal amount of wasted power.

Full-scene real-time raytracing at a high detail level is probably several GPU generations away, but that doesn't mean it can't be useful before then. We've seen people do the "Cornell box" with real-time voxelised raytracing in CUDA; not as advanced as Nvidia does it with RTX, of course, but still enough to get realistic soft shadows and reflections. With RTX, some games should be able to do good soft shadows and reflections in full scenes while retaining good performance (a rough sketch of why this is so expensive follows below).

I don't get why the tech demos from Nvidia focus so much on sharp reflections. Very few things in the real world have sharp reflections, and not even my car with freshly applied "HD Wax" from AutoGlym achieves the glossiness of that car in the Battlefield V video. I do understand that shiny things sell, but I wish they focused more on realistic use cases.
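To make the soft-shadow point above concrete, here is a tiny Python sketch (a made-up toy scene, not how RTX or any real engine implements it): it estimates how shadowed a point is by firing many jittered shadow rays at an area light and counting the unoccluded ones. The per-pixel ray counts are exactly what make this expensive without dedicated hardware.

```python
# Toy sketch only: estimate a soft shadow by casting many jittered shadow rays
# toward an area light and counting how many reach it. The scene (one spherical
# occluder, one square light) is made up for illustration.
import random

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction (t > 0) hits the sphere."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False
    t = (-b - disc ** 0.5) / (2.0 * a)
    return t > 1e-4  # intersection lies in front of the shaded point

def soft_shadow(point, light_center, light_size, occluder, samples=64):
    """Fraction of the area light visible from `point` (0 = fully shadowed, 1 = fully lit)."""
    visible = 0
    for _ in range(samples):
        # Pick a random target on the square light (lying in the XZ plane).
        target = (light_center[0] + (random.random() - 0.5) * light_size,
                  light_center[1],
                  light_center[2] + (random.random() - 0.5) * light_size)
        direction = tuple(target[i] - point[i] for i in range(3))
        if not ray_hits_sphere(point, direction, *occluder):
            visible += 1
    return visible / samples

# 64 shadow rays for a single light at a single pixel; multiply by pixels and
# lights and the per-frame ray count explodes, which is the work RT hardware targets.
print(soft_shadow(point=(0.0, 0.0, 0.0),
                  light_center=(0.0, 5.0, 0.0), light_size=2.0,
                  occluder=((0.5, 2.5, 0.0), 0.4)))
```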

I'd be interested to see how GCN fares with ray tracing. It would be hilarious if it turned out an R9 Fury is as fast as some RTX card with RTX enabled XD
How could that happen? R9 Fury doesn't have hardware-accelerated raytracing; it only has general compute.
 
While the comparisons in Nvidia's presentation were all over the place, and "6 times faster than Pascal" (whichever GPU they meant) sounded suspicious, on the same slide they also noted Turing (most likely the Quadro RTX 8000) is 18% faster than a DGX (4× Volta V100). And V100s are beasts. No, Fury and Vega do not have a chance.
 

Made-up Nvidia numbers are truth now? Faster at doing one particular calculation that is useless. It's the Intel way of marketing: "Kaby Lake is 15% faster than Skylake!"

Plus, you should know that's physically impossible. Or did the die become 4x larger? What? The dies are basically the same, just cut down from Volta (with some scam cores sprinkled in)? Well, you don't say.

Four V100s would be able to run the game at 4K... not 1080p at 40 fps LOLOLOLOL
 
Plus, you should know that's physically impossible. Or did the die become 4x larger? What?
Dedicated hardware. Much more efficient than generalized hardware.
What did you think RT cores are?
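For anyone wondering what "dedicated hardware" means in practice: public descriptions of RT cores say they evaluate ray/box and ray/triangle tests during BVH traversal in fixed function. Below is a rough Python sketch of the box part (the classic slab test); it is an illustration of the operation, not Nvidia's actual implementation, and the point is simply that a frame needs enormous numbers of these per second, which is where fixed-function units beat general compute.

```python
# Illustration of the kind of test RT cores are described as accelerating:
# the "slab" intersection of a ray with an axis-aligned bounding box, the basic
# step of BVH traversal. Not Nvidia's implementation, just the textbook form.

def ray_intersects_aabb(origin, inv_direction, box_min, box_max):
    """Slab test: True if the ray hits the box. inv_direction holds 1/d per axis."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_direction[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_direction[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# One ray against one BVH node; real-time raytracing runs tests like this
# (plus ray/triangle tests) billions of times per second.
print(ray_intersects_aabb(origin=(0.0, 0.0, 0.0),
                          inv_direction=(2.0, 2.0, 1.0),   # ray direction (0.5, 0.5, 1.0)
                          box_min=(1.0, 1.0, 2.0), box_max=(3.0, 3.0, 4.0)))  # True
```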
 
340 mm² with a 192-bit bus is nonsense, unless they are preparing to shrink all the chips to half their size on 7 nm while keeping the bus width intact.

It is 256-bit at least, so the 2060 is the cut-down version of the 2070, similar to the GTX 760/770.
 
Precisely how is a 192-bit bus nonsense?
GP106 (GTX 1060) uses a 192-bit bus and features 3/6 GB of memory.

A 192-bit memory bus actually means three 64-bit memory controllers, each of which is really a pair of 32-bit controllers. Chips can in fact be designed with any multiple of 32-bit memory controllers.
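To put numbers on that, peak bandwidth scales linearly with the number of 32-bit controllers. A quick sketch (the 14 Gbps GDDR6 per-pin data rate below is just an assumed example; actual memory speeds vary by SKU):

```python
# Peak bandwidth follows directly from the bus width; the 14 Gbps GDDR6 per-pin
# data rate is an assumed example, actual memory speeds vary by SKU.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s = total bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

for controllers in (6, 8, 12):                # any multiple of 32-bit controllers
    width = controllers * 32                  # 192-, 256-, 384-bit buses
    print(f"{width}-bit @ 14 Gbps: {bandwidth_gb_s(width, 14):.0f} GB/s")
# 192-bit: 336 GB/s, 256-bit: 448 GB/s, 384-bit: 672 GB/s
```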
 
IDGAF what people say; that RTX 2060 looks promising in that price range. If it performs as well as a GTX 1070 in non-ray-tracing benchmarks and gaming, then I'd say I'll get one.
 
GDDR6 provides 75% more bandwidth, and 1536 CUDA cores represent a 20% improvement over 1280. In theory that's roughly a 45% improvement overall, the same as 2070 vs 1070. But why not go for the 2070 instead? The 2060 will be too slow at this point in time; the xx60 card usually releases just as it is about to become obsolete. At least mine did. I've only had crushing experiences with these.
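For what it's worth, here is one way the ~45% figure can be reproduced: take the geometric mean of the bandwidth gain and the shader-count gain. This is purely a back-of-envelope guess at the reasoning, not a performance prediction, and CUDA-core counts don't transfer cleanly across architectures anyway.

```python
# Back-of-envelope only: one way to land on ~45% is the geometric mean of the
# bandwidth gain and the CUDA-core gain. This is a guess at the method, not a
# performance prediction.
bandwidth_gain = 1.75          # claimed GDDR6 bandwidth increase
shader_gain = 1536 / 1280      # 1.20x CUDA cores

combined = (bandwidth_gain * shader_gain) ** 0.5
print(f"{combined:.2f}x, i.e. ~{(combined - 1) * 100:.0f}% faster")  # ~1.45x -> ~45%
```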
 
Oooooh, I see. Power consumption isn't important anymore. Let's waste some.
It's still more efficient than anything else on the market, how can you complain about that?

Turing is a new architecture; you should not assume performance based on CUDA-core counts across architectures.

Still, the xx60 cards are only the entry-level product for gaming; if you want more, you have to pay more.
 
It's kinda saddening that there are people who don't even own a sample card complaining about Nvidia's new Turing chip. Have they benched all three cards? Nope. Have they pre-ordered one and waited patiently? Nope. Have they even signed the NDA? Nope. Are they judging the new GPU's performance just by looking at the paper specs and going "oh, this is not good. Boo!"? Yep. A bunch of triggered, immature kids who own Pascal cards are moaning over what Nvidia has been doing. If you don't like it, keep it to yourselves.
 
No one is looking at paper specs when referring to RT performance; they're basing it on how it actually performed. I'd say it's a lot worse to be extolling Turing's virtues than it is to maintain healthy skepticism when performance is unknown.
 
Yes, it's sad how the forums and the usual suspects on YouTube channels keep flooding the Internet with how terrible Turing is… But the same people would have been dancing in the streets if AMD had released it instead. Just look at the insane hype for Polaris, Vega and now Navi (just the other day). Anyone remember how Polaris was supposed to yield 2.5× the performance per watt? And Turing may "only" yield ~35% performance gains. I wonder which company made a turd?
 
Polaris was a disappointment, Vega is a flop. Let's HOPE Navi can bring something good to the table, despite being late.
 
Polaris was a disappointment
Well, no. I'd take 580 over 1060 any day.

Vega is a flop.
Performance-wise it's alright, a GTX 1080 match. It's the very late release date, low availability, price and power consumption that made it an unattractive purchase for most. Even now the V64 is pretty pricey here: it sells at 3000+ PLN while a 1080 is 2300-2400 and the cheapest 1080 Ti starts at 3200. The RTX 2080 Duke is 3500 for preorder.
 
Navi is still going to be GCN, and we still don't know what changes are coming. AMD doesn't have a new architecture scheduled before ~2021. The changes from Pascal to Turing are larger than what we've seen so far within GCN.

Well, no. I'd take 580 over 1060 any day.
You are free to make your own (misguided) choices. GTX 1060 is a better choice than RX 580, unless we are talking of edge cases.

Vega was a flop, primarily because it fell further behind Nvidia in performance and efficiency, but also because AMD failed to make a good profit on it. Most people have long forgotten that AMD actually tried to sell them at $100 over MSRP, but changed course after a lot of bad press.
 
Navi is still going to be GCN, and we still don't know what changes are coming. AMD doesn't have a new architecture scheduled before ~2021. The changes from Pascal to Turing are larger than what we've seen so far within GCN.
Well, word is the next-gen architecture should be coming out in 2020, not 2021. I'm not sure where 2021 started; I think it was WCCFtech throwing dates out there. Their roadmap for the GPU after Navi labeled "next gen" was around 2020. I always thought it was 2020, but started hearing 2021 out of nowhere. I would be very surprised if we don't see a high-end GPU in 2020. There is also a guy on HardForum who is usually fairly spot-on with AMD stuff. Not sure if he works there or what, but he doesn't reveal too much, just little bits of info, and he's pretty damn accurate. He responded to my post just a couple of weeks back saying AMD is busting their ass on next gen, that it's ahead of schedule and should be out in 2020 at the latest. We will see. But I would bet on 2020, since Intel will have their card in 2020 as well.
 
It's also good to see that AMD's R&D funding is back up to 2012 levels and climbing rapidly.

Now that there is more money going into R&D (it looks like over 50% more than in 2014), that should help move things along.
 