Thursday, August 23rd 2018
NVIDIA "TU102" RT Core and Tensor Core Counts Revealed
The GeForce RTX 2080 Ti is indeed based on an ASIC codenamed "TU102." NVIDIA was referring to this 754 mm² chip when it cited an 18.6 billion transistor count in its keynote. The company also provided a breakdown of the chip's various "cores," along with a block diagram. The GPU is still laid out like its predecessors, but each of its 72 streaming multiprocessors (SMs) packs RT cores and Tensor cores in addition to CUDA cores.
The TU102 features six GPCs (graphics processing clusters), each of which packs 12 SMs. Each SM packs 64 CUDA cores, 8 Tensor cores, and 1 RT core. Each GPC packs six geometry units. The GPU also packs 288 TMUs and 96 ROPs. The TU102 supports a 384-bit wide GDDR6 memory bus running at 14 Gbps. There are also two NVLink channels, which NVIDIA plans to roll out later as its next-generation multi-GPU interconnect.
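The per-unit counts above multiply out to the full-die totals. As a quick sanity check, here is a minimal Python sketch; the per-SM figures and bus width are the ones stated in the article:

```python
# Sanity-check sketch of the full TU102 totals implied by the article's
# per-unit counts: 6 GPCs x 12 SMs, with 64 CUDA / 8 Tensor / 1 RT per SM.
GPCS = 6
SMS_PER_GPC = 12

sms = GPCS * SMS_PER_GPC        # 72 SMs
cuda = sms * 64                 # 4608 CUDA cores
tensor = sms * 8                # 576 Tensor cores
rt = sms * 1                    # 72 RT cores

# Peak memory bandwidth: 384-bit bus at 14 Gbps per pin.
bandwidth_gb_s = 384 // 8 * 14  # 672 GB/s

print(sms, cuda, tensor, rt, bandwidth_gb_s)
```

Note these are the totals for the full die as described, not necessarily the counts enabled on any particular retail card.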
Source:
VideoCardz
4352 cuda
544 tensor
68 rt
272 tmus
88 rops
68 sm
All this is is the full-chip spec for the TU102 die, not the 2080 Ti.
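If the commenter is right that the article describes the full die, the 2080 Ti figures quoted above follow from disabling 4 of the 72 SMs. A hedged sketch, assuming 4 TMUs per SM and (not stated anywhere in this thread) a 352-bit bus with 8 ROPs per 32-bit memory controller on the cut-down card:

```python
# Sketch: deriving the quoted RTX 2080 Ti counts from a cut-down TU102.
# Assumptions (not from the article): 68 of 72 SMs enabled, 4 TMUs per SM,
# and a 352-bit bus with 8 ROPs per 32-bit memory controller.
enabled_sms = 68

cuda = enabled_sms * 64    # 4352 CUDA cores
tensor = enabled_sms * 8   # 544 Tensor cores
rt = enabled_sms * 1       # 68 RT cores
tmus = enabled_sms * 4     # 272 TMUs
rops = (352 // 32) * 8     # 88 ROPs

print(cuda, tensor, rt, tmus, rops)
```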
@btarunr
2080 Ti = GTX 570 (from cut downs).
www.techpowerup.com/gpudb/3305/geforce-rtx-2080-ti
Not the exact settings used, not the locations in game.
It is extremely vague and should be disregarded.
The "X2 1080" claim in the title is ridicules.
capitalism 101 at its finest :D
Even if the total numbers don't apply, you can see that Turing does something better in one game than it does in others; that's what I meant, but you took four words and made a fuss. And who is that "ridicules"? Sounds like a mentally challenged brother of Hercules.
1. DLSS was never benched on Pascal.
2. The entire DLSS green bar is nonexistent performance on Pascal; it was never re-benched on Pascal. It's like taking the steering wheel off a car and telling you it doesn't drive without an engine, while the complete car (Turing) does. Just like that sad "Turing = 5X Pascal" statement when it comes to RT performance.
3. Conclusion: take the Shadow of the Tomb Raider bar for a realistic performance scenario of 1080 vs 2080. Give or take 30-35%. In other words, you're better off upgrading to a 1080 Ti.
You can find more hints and confirmations of a 30-odd percent jump when you compare clocks and shader counts between the 1080 and 2080 as well.
Thank me later ;)
You would get
- no cut down chip, all enabled
- GPU RDMA (does it still make sense with NVLink and CPUs like Skylake Xeon / EPYC, which have separated SRIO?)
- more vram and also ecc
- some more OpenGL extensions, plus custom high-performance extensions (does it still make sense with Vulkan + RTX?)
- 4 DisplayPorts (the best feature IMHO, vs 3 DisplayPorts + kinky HDMI)
My bet is there will be a Turing Titan, or a Titan Turing, which will be a good compromise between features most people will not need and a decent price.
Nv's arrogance is going to cost them a gen.
I probably missed it... Don't remember NV mentioning Async Compute...
Oh well, just have to wait a little longer, I guess.
Yeah, we have been here before, big whoop.