Monday, August 20th 2018
NVIDIA GeForce RTX 2080, 2070, and 2080 Ti Specifications Revealed
Update 1: NVIDIA at its Koln event also revealed that these graphics cards were made for overclocking, with highly improved power regulation and management systems designed for that purpose. Jensen Huang himself added that these new graphics cards, with their dual 13-blade fan design, run at one fifth the noise level of a previous-generation GeForce GTX 1080 Ti.
Product pages and pricing for the GeForce RTX 2080, RTX 2070, and RTX 2080 Ti went up ahead of the formal unveiling. The RTX 2080 packs 2944 CUDA cores, a 1515 MHz GPU clock, 1710 MHz boost, and 14 Gbps memory; the RTX 2070 is equipped with 2304 CUDA cores, a 1410 MHz GPU clock, 1620 MHz boost, and the same 14 Gbps GDDR6 memory. The RTX 2080 Ti leads the pack with 4352 CUDA cores, a 1350 MHz core clock, 1545 MHz boost, and 14 Gbps memory on a wider 352-bit memory interface.
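The quoted memory specs translate directly into peak bandwidth: data rate per pin times bus width, divided by 8 bits per byte. A minimal sketch of that arithmetic follows; the 352-bit bus for the RTX 2080 Ti is from the article, while the 256-bit figure assumed here for the RTX 2080 and 2070 is the widely reported width for those cards, not something stated above.

```python
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate x bus width / 8."""
    return data_rate_gbps * bus_width_bits / 8

# (data rate in Gbps, bus width in bits); 256-bit for 2080/2070 is an assumption
cards = {
    "RTX 2080 Ti": (14, 352),
    "RTX 2080":    (14, 256),
    "RTX 2070":    (14, 256),
}

for name, (rate, width) in cards.items():
    print(f"{name}: {bandwidth_gb_s(rate, width):.0f} GB/s")
# RTX 2080 Ti works out to 616 GB/s; the 2080 and 2070 to 448 GB/s
```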
But considering it will bring more FP64 performance and probably marginally higher clocks, what relevance does it have for the consumer market? At best we're looking at a symbolic top model performing a little better than Vega 64, which by that time will be competing with the GTX 2060 in the $300 segment. Really? In which timeframe? You do know that Pascal offers 80-85% more performance per watt than Polaris and Vega, and Volta and Turing are even better. Even with Nvidia focusing on RTX, they still have a historic gap to close.
But AMD is certainly best at one thing; they have the biggest room for improvement!
I still wish prices were lower though…
Fury X: 8.9 billion transistors, 596 mm², 28 nm TSMC
1080 Ti: 11.8 billion transistors, 471 mm², 16 nm TSMC
Vega 64: 12.5 billion transistors, 510 mm², 14 nm GloFo
2080 Ti: 18.6 billion transistors, 754 mm², 12 nm TSMC
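The list above makes the point sharper if you divide it out into transistor density. A quick sketch using only the figures quoted:

```python
# Transistor density in millions of transistors per mm^2,
# computed from the transistor counts and die areas listed above.
chips = {
    "Fury X (28 nm TSMC)":   (8.9e9, 596),
    "1080 Ti (16 nm TSMC)":  (11.8e9, 471),
    "Vega 64 (14 nm GloFo)": (12.5e9, 510),
    "2080 Ti (12 nm TSMC)":  (18.6e9, 754),
}

for name, (transistors, area_mm2) in chips.items():
    density = transistors / area_mm2 / 1e6
    print(f"{name}: {density:.1f} MTr/mm^2")
```

Note how the 2080 Ti's density (~24.7 MTr/mm²) is close to Vega 64's despite the much larger die; the generational jump is in total budget, not density.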
I think NVIDIA wasting their time on RT/Tensor gave AMD a window to steal the gaming crown.
I still don't understand the relevance of Vega 20 as you referred to, we both know it wouldn't make a real difference vs. Turing.
Literally, the RTX 2070 is the same price as the current GTX 1080.
AMD's problem isn't lack of brute force power, it's lack of resource management.
Above all, Nvidia is trying to propel the industry forward with new technologies and more efficient ways of extracting performance! It's not simply increasing cores and clocks to churn out more frames!
Of course it will not be immediate, this is just the beginning, but no doubt this will be the future of gaming. The trade-off for now is the price. Next year, the density of 7 nm may allow smaller and cheaper chips.
One part that I found very interesting is using the Tensor Cores for anti-aliasing (DLSS), which in theory can lead to minimal or nonexistent losses compared to other AA methods.
It would indeed be fantastic if we could have AA in its full splendor without a performance loss. In the UE4 demo they showed at 4K with AA, the 1080 Ti can only run it at 30 fps, while the RTX card, using its Tensor Cores, ran it at 73 or 78 fps.
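Taking the demo numbers quoted above at face value, the implied speedup is easy to work out:

```python
# Speedup implied by the 4K UE4-with-AA demo figures cited above.
gtx_1080ti_fps = 30
rtx_fps_reported = (73, 78)  # the two figures quoted

for fps in rtx_fps_reported:
    print(f"{fps} fps -> {fps / gtx_1080ti_fps:.2f}x the 1080 Ti")
# roughly a 2.4-2.6x uplift in this one (vendor-provided) demo
```

Worth remembering this is a single vendor demo, so the uplift should be read as a best case rather than a general expectation.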
"Concurrent FP and INT execution: FP for colors, INT for addresses, for example"
No, it's not. It's in Volta; it's new to GeForce. Turing is a revised Volta, and we can look to the Titan V for what to expect.
Also bear in mind that 16 nm TSMC is better than 14 nm GloFo. We don't know how 7 nm GloFo will stack up against 12 nm TSMC. Primitive Shaders? No one uses them. Async compute? I can see it working with ReLive, DirectX 12, and Vulkan. Because no one is saying that other than NVIDIA.
But it's funny to see all the commentators on YouTube having a meltdown over this launch.
This time we have Turing and AMD doesn't even have a public plan about their next step.
Anyways...
With AMD having no luck, or facing time constraints, in releasing their upcoming Navi or shrunken-down Vega cores to keep up with Nvidia's new silicon, I can say they've lost the battle yet again, and the end users, which is us, are paying the price.