
NVIDIA TU106 Chip Support Added to HWiNFO, Could Power GeForce RTX 2060

It's kind of saddening that people who don't even own a sample card are complaining about Nvidia's new Turing chip. Have they benched all three cards? Nope. Have they pre-ordered one and waited patiently? Nope. Have they even signed the NDA? Nope. Are they judging the new GPUs' performance just by looking at the paper specs and going "oh, this is not good. Boo!"? Yep. A bunch of triggered, immature kids who own Pascal cards are moaning over what Nvidia has been doing. If you don't like it, keep it to yourselves.

It's even more saddening to see others yell 'pre-order nao!' before performance is known. That just needs culling, and this is what we're doing. It's all about peer pressure - that is why Nvidia tries to get 'influencers' on the net to join their dark side, using free hardware and bags of money. That should raise eyebrows far more than people here predicting (conservative) performance numbers: an architecture such as Pascal literally just sold itself because it was a leap forward. Turing really is not, and this is clear as day.

Realistically, looking at the numbers, we have historically been very accurate when making assumptions about performance, simply because the low-hanging fruit in CUDA really is gone now. We won't see tremendous IPC jumps, and if we do, they will cost something else that also provides performance (such as clock speeds). It's really not rocket science, and the fact is, if you can't predict Turing performance with some accuracy, you just don't know enough about GPUs.
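For what it's worth, that kind of prediction is just back-of-envelope arithmetic. Here's a minimal Python sketch of it - the shader counts and boost clocks are the published paper specs for the GTX 1080 and RTX 2080, but the 10% architectural efficiency gain is purely an illustrative assumption, not a measurement:

```python
# Back-of-envelope estimate of relative GPU performance from paper specs.
# The efficiency factor below is an illustrative guess, not a benchmark.

def fp32_tflops(shaders, boost_mhz):
    """Theoretical FP32 throughput: shaders * clock * 2 ops (FMA) per cycle."""
    return shaders * boost_mhz * 1e6 * 2 / 1e12

gtx_1080 = fp32_tflops(2560, 1733)   # ~8.9 TFLOPS (paper spec)
rtx_2080 = fp32_tflops(2944, 1710)   # ~10.1 TFLOPS (paper spec)

# Raw FLOPS never translate 1:1 into frames, so fold in a guessed
# architectural gain on top of the raw throughput ratio.
arch_gain = 1.10
uplift = rtx_2080 / gtx_1080 * arch_gain
print(f"Estimated gaming uplift: {(uplift - 1) * 100:.0f}%")   # ~25%
```

Plug in different assumptions if you like; the point is that the output only moves a few percent either way.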

So let's leave it at that, okay? This has nothing to do with triggered immature kids - in fact, that is the group spouting massive performance gains based on keynote shouts and PowerPoint slides, the people who believe everything Nvidia feeds them. Pascal card owners have nothing to be unhappy about: with Turing's release, the resale value of their cards will remain stagnant even as they get older. That is a unique event, and one that will make for a lot of profitable used Pascal card sales. I'm not complaining!

If you don't like it, don't visit these threads, or tech forums in general...
 
It's even more saddening to see others yell 'pre-order nao!' before performance is known. That just needs culling, and this is what we're doing. It's all about peer pressure - that is why Nvidia tries to get 'influencers' on the net to join their dark side, using free hardware and bags of money.
Tsukiyomi91 did no such thing, and if you read his/her posts you'll see it was criticism of anyone prejudging the performance.

an architecture such as Pascal literally just sold itself because it was a leap forward. Turing really is not, and this is clear as day.
Pascal had the advantage of a node shrink, and it enabled much more aggressive boosting. Turing is a refinement of Volta, which is the first major architecture since Maxwell; that is clear as day.

Realistically, looking at the numbers, we have historically been very accurate when making assumptions about performance, simply because the low-hanging fruit in CUDA really is gone now. We won't see tremendous IPC jumps, and if we do, they will cost something else that also provides performance (such as clock speeds). It's really not rocket science, and the fact is, if you can't predict Turing performance with some accuracy, you just don't know enough about GPUs.
And those estimates have usually been off when comparing across architectures. Most estimates for Kepler were completely off - that was back when Nvidia moved from their old "hot clock" to a newly redesigned SM. Maxwell didn't match predictions either.
Turing has the largest SM change since Kepler. Throughput is theoretically doubled; expect >50% in compute workloads and probably somewhat less in gaming.
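To put some arithmetic behind that: the Turing SM can issue FP32 and INT32 instructions concurrently, so the realized gain depends entirely on the instruction mix. A small sketch of that ceiling - the 36-integer-per-100-FP mix is the figure Nvidia has quoted for gaming workloads; the serial-vs-overlapped model is a deliberate simplification:

```python
# Upper-bound speedup from Turing's concurrent FP32 + INT32 issue.
# Simplifying assumption: pre-Turing SMs run the two instruction types
# one after the other, while Turing overlaps them completely.

def overlap_speedup(fp_ops, int_ops):
    serial = fp_ops + int_ops          # pre-Turing: shared issue slots
    overlapped = max(fp_ops, int_ops)  # Turing: issued concurrently
    return serial / overlapped

print(overlap_speedup(100, 100))  # 2.00 - 50/50 mix, the "double" case
print(overlap_speedup(100, 36))   # 1.36 - Nvidia's quoted gaming mix
print(overlap_speedup(100, 0))    # 1.00 - pure FP32, no gain at all
```

That is why "theoretically double" compresses toward far smaller numbers in games.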
 
Tsukiyomi91 did no such thing, and if you read his/her posts you'll see it was criticism of anyone prejudging the performance.


Pascal had the advantage of a node shrink, and it enabled much more aggressive boosting. Turing is a refinement of Volta, which is the first major architecture since Maxwell; that is clear as day.


And those estimates have usually been off when comparing across architectures. Most estimates for Kepler were completely off - that was back when Nvidia moved from their old "hot clock" to a newly redesigned SM. Maxwell didn't match predictions either.
Turing has the largest SM change since Kepler. Throughput is theoretically doubled; expect >50% in compute workloads and probably somewhat less in gaming.
Hype - heard of it? You're contradictory too: double is not 50%, and as ever that's in select workloads. By the same method, with double the compute my Vega's a winner, but it is not in reality while gaming, though it's not as bad as some say either.

Hype sells.
And picking out edge cases like that doesn't help: when hot-clock shaders came in, they also brought hot chips - which you neglected to mention - and their own drama too.
People here don't guess too wide of the mark, IMHO.
 
DLSS performance could potentially be very important to me, since I use a lot of AA even at 1440p 24". But in order to say that 2x figure applies to what I experience, it's going to have to be supported in more than a bunch of games. Still, if the 20 series has an advantage in DLSS, Vulkan/DX12 and HDR-mode performance, on top of, let's say, 40% over last gen, its overall performance increase compared to Pascal will look pretty good across the board, even without adding RT into the equation.
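To put numbers on that coverage problem: a quick weighted-average sketch, where every figure is a made-up assumption just to show how support breadth dilutes the headline number:

```python
# How much a headline uplift matters depends on how much of your game
# library it covers. All numbers here are made-up assumptions.

uplift_plain = 1.40  # the "let's say 40%" gain in ordinary titles
uplift_dlss  = 2.00  # Nvidia's claimed gain in DLSS-enabled titles

def effective_uplift(dlss_share):
    """Library-wide average when only a fraction of games has DLSS."""
    return dlss_share * uplift_dlss + (1 - dlss_share) * uplift_plain

for share in (0.0, 0.25, 0.50, 1.00):
    print(f"{share:>4.0%} DLSS coverage -> {effective_uplift(share):.2f}x overall")
# 0% -> 1.40x, 25% -> 1.55x, 50% -> 1.70x, 100% -> 2.00x
```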
 
If all of that positive stuff happens, is it still worth the massive price increase, though?
 
No one is looking at paper specs when referring to RT performance; they're basing it on how it actually performed. I'd say it's a lot worse to be extolling Turing's virtues than it is to maintain healthy skepticism while performance is unknown.
That "actually performed" has a huge asterisk next to it. This opinion is based on early versions of Shadow of Tomb Raider, Metro Exodus and Battlefield V, in that order. Assuming that developers (who had the actual cards for around two weeks before the event) do no further optimizations. Convienently things that do work get ignored. Pica Pica comes to mind as do a bunch of professional pieces of software that do not necessarily do real-time raytracing but offer considerable speedups in normal rendering, Autodesk Arnold for example. Because Quadro release was a bit earlier and software companies were more involved they had more time to implement RT bits and pieces.

Anyhow, slides will be out on Friday and reviews next Monday. We will see how things fare.
As for games and RT, we will probably have to wait a couple of months until the first games with DXR support are out.

If all of that positive stuff happens, is it still worth the massive price increase, though?
Nope. It is a balancing act, though. RTX cards are priced into the enthusiast range and beyond, and that is the target market that is traditionally willing to pay more for state-of-the-art hardware. Whether Nvidia's gamble will pay off this generation remains to be seen.
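The balancing act is easy to put into rough numbers. A minimal perf-per-dollar sketch - the prices are the widely reported launch MSRPs ($699 GTX 1080 Ti, $1,199 RTX 2080 Ti Founders Edition), while the 35% performance uplift is an assumption pending reviews:

```python
# Performance per dollar: does the uplift justify the price jump?
# Prices are reported launch MSRPs; the uplift figure is an assumption.

cards = {
    # name: (launch_price_usd, relative_performance)
    "GTX 1080 Ti": (699, 1.00),
    "RTX 2080 Ti": (1199, 1.35),  # assumed ~35% faster, Founders pricing
}

base_price, base_perf = cards["GTX 1080 Ti"]
for name, (price, perf) in cards.items():
    value = (perf / price) / (base_perf / base_price)
    print(f"{name}: {value:.2f}x perf per dollar vs GTX 1080 Ti")
# RTX 2080 Ti comes out around 0.79x: faster, but worse value at launch.
```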

DLSS performance could potentially be very important to me, since I use a lot of AA even at 1440p 24". But in order to say that 2x figure applies to what I experience, it's going to have to be supported in more than a bunch of games. Still, if the 20 series has an advantage in DLSS, Vulkan/DX12 and HDR-mode performance, on top of, let's say, 40% over last gen, its overall performance increase compared to Pascal will look pretty good across the board, even without adding RT into the equation.
40% is optimistic. No doubt Nvidia chose the best performers to put on their slides. Generational speed increases have been around 25% for a while, and it is likely to be around that this time as well, at least across a large number of titles.

DLSS is the real unknown here. In theory, it is free-ish AA (by virtue of running on the Tensor cores), which would be awesome considering how problematic AA is in contemporary renderers. Whether it will pan out as well as the current marketing blabber makes it out to be remains to be seen, but the potential is there.
 
That "actually performed" has a huge asterisk next to it. This opinion is based on early versions of Shadow of Tomb Raider, Metro Exodus and Battlefield V, in that order. Assuming that developers (who had the actual cards for around two weeks before the event) do no further optimizations. Convienently things that do work get ignored. Pica Pica comes to mind as do a bunch of professional pieces of software that do not necessarily do real-time raytracing but offer considerable speedups in normal rendering, Autodesk Arnold for example. Because Quadro release was a bit earlier and software companies were more involved they had more time to implement RT bits and pieces.

Anyhow, slides will be out on Friday and reviews next Monday. We will see how things fare.
As for games and RT, we will probably have to wait a couple of months until the first games with DXR support are out.
It's still the only real information we have on these cards. I'd be against pre-ordering these even if we had more information, because it's daft to pre-order a GPU. We should hopefully have 3DMark's RT benchmark at or around launch.
 