| System Name | Avell old monster \ Workstation T1 \ HTPC |
|---|---|
| Processor | i7-3630QM \ i7-5960X \ Ryzen 3 2200G |
| Cooling | Stock |
| Memory | 2x4 GB @ 1600 MHz |
| Video Card(s) | HD 7970M \ EVGA GTX 980 \ Vega 8 |
| Storage | SanDisk Ultra II 480 GB SSD + 1 TB 5400 RPM WD \ 960 GB SSD + 2 TB HDD |
Cost and scalability: Nvidia's Turing dies had to be effin' huge, so an older node was the cheap route. Clocks might play some role, but I think Turing is what it looks to be: a rough cross of Pascal and Volta with RT added, a stepping stone to the real deal in terms of architectural updates. It has likely also provided a wealth of info about what degree of RT performance devs will need going forward.
Note that right now some leaks also point at two nodes coming into use, TSMC 7nm and Samsung 8nm, with the better node reserved for the higher-tier products. I'm not sure how much sense that really makes, but we do know TSMC's 7nm yields are pretty good, and that would lend credibility to making bigger dies on it.
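The die-size/yield/cost trade-off above can be sketched with the standard Poisson yield model, Y = exp(-A·D0). The wafer prices and defect densities below are illustrative assumptions, not published figures; only TU102's ~754 mm² die area is a real number.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough gross die count: wafer area / die area, minus an edge-loss term."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    # Dies straddling the wafer edge are lost; this is a common approximation.
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2, defect_density_per_cm2):
    """Poisson yield model: Y = exp(-A * D0), with A in cm^2."""
    return math.exp(-(die_area_mm2 / 100) * defect_density_per_cm2)

def cost_per_good_die(die_area_mm2, wafer_cost_usd, defect_density_per_cm2):
    good = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2,
                                                        defect_density_per_cm2)
    return wafer_cost_usd / good

# Assumed numbers: $6000 vs $3000 per wafer, 0.1 defects/cm^2 on both nodes.
print(cost_per_good_die(754, 6000, 0.1))  # huge die on the pricier node
print(cost_per_good_die(754, 3000, 0.1))  # same die on a cheaper, mature node
```

The point the model makes: a huge die loses dies twice, first to raw area, then to yield, so the per-wafer price of the node gets multiplied rather than just added. Good yield on TSMC 7nm (a lower D0) softens exactly that second penalty, which is why it matters more for big dies.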
About clocks: in the end, clocking is mostly about power/voltage curves and what you can push through on a certain node. Nvidia already achieved much higher clocks with Pascal on a smaller node, and a big part of that was power-delivery changes. They will probably only improve on that further. Turing pulled clocks back a little bit, however minor, probably related to the introduction of RT/Tensor hardware (power!). Also, TSMC's early 7nm was DUV, which wasn't particularly suited to clocking high.
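Why clocks are "mostly about power/voltage curves" falls out of the classic CMOS dynamic-power relation, P ≈ a·C·V²·f: higher frequency usually needs extra voltage, and that voltage term is squared. A minimal sketch with made-up capacitance/voltage/frequency values (none of these are measured Pascal or Turing figures):

```python
def dynamic_power(capacitance_f, voltage_v, frequency_hz, activity=1.0):
    """Classic CMOS dynamic power: P = a * C * V^2 * f."""
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

# Illustrative baseline: 1 nF switched capacitance, 1.00 V, 1.5 GHz.
base = dynamic_power(1e-9, 1.00, 1.5e9)

# Pushing the clock 20% typically needs a voltage bump too (1.10 V here),
# so power grows faster than linearly with frequency.
oc = dynamic_power(1e-9, 1.10, 1.8e9)

print(oc / base)  # ~1.45x the power for a 1.2x clock bump
```

That superlinear cost is why adding power-hungry RT/Tensor blocks eats into clock headroom, and why better power delivery (as on Pascal) buys real frequency.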
But Nvidia usually takes it easy and chooses the path of efficiency, which keeps some margin for overclocking, the opposite of AMD, which ships its chips almost on the edge. Pushing clocks that hard would just show that Ampere isn't such an impressive architecture after all.