Monday, December 5th 2022
January 5 Release Date Predicted for NVIDIA GeForce RTX 4070 Ti
A January 5, 2023 release date is being mooted by retailers for NVIDIA's upcoming GeForce RTX 4070 Ti graphics card, which is widely expected to be a re-branding of what would have been the RTX 4080 12 GB. Italian retailer Drako has started a countdown for an ASUS TUF Gaming RTX 4070 Ti O12G custom-design graphics card that ends on January 5, which lines up with the rumored January 3 announcement of the card. Reviews of the RTX 4070 Ti are expected to go live on January 4.
The GeForce RTX 4080 12 GB was supposed to max out the 4 nm AD104 silicon, featuring 7,680 CUDA cores across 60 SM (streaming multiprocessors), 60 RT cores, 240 Tensor cores, 240 TMUs, and 80 ROPs. The GPU features a 192-bit wide GDDR6X memory interface, which NVIDIA pairs with 21 Gbps-rated memory, yielding 504 GB/s of memory bandwidth. Its most interesting aspect is its power configuration, with a typical board power of 285 W, which makes it technically possible for board partners to use two 8-pin PCIe power connectors, unless they've been asked nicely to implement the 16-pin 12VHPWR connector.
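For reference, the quoted 504 GB/s figure follows directly from the bus width and memory data rate, and the 285 W board power leaves healthy headroom under what two 8-pin connectors plus the PCIe slot can deliver. A minimal sketch of that arithmetic (the 150 W per 8-pin connector and 75 W slot limits are the standard PCIe values, not figures from the article):

```python
# Memory bandwidth: bus width in bytes times per-pin data rate
bus_width_bits = 192
data_rate_gbps = 21                                    # GDDR6X rated speed per pin
bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")   # 504 GB/s

# Power budget: two 8-pin PCIe connectors (150 W each) plus the PCIe slot (75 W)
budget_w = 2 * 150 + 75                                # 375 W available
board_power_w = 285                                    # typical board power quoted above
print(f"Headroom over board power: {budget_w - board_power_w} W")  # 90 W
```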
Sources:
Drako.it, Wccftech, VideoCardz
59 Comments on January 5 Release Date Predicted for NVIDIA GeForce RTX 4070 Ti
I mean, I wouldn't blame them. Who thought Ethereum was going to switch to Proof-of-Stake, effectively crashing the mining market? It had been hanging in the air for years, but nobody knew when it would actually happen, or if it would at all.
Edit: I think this is supported by the fact that Ada is basically a larger, more efficient Ampere made on a smaller node. Not much effort has gone into improving the architecture itself, which makes me believe that gaming wasn't the main focus in development. Sure, there's DLSS 3 and such, but those improvements are usually a side note, not the main focus.
And then they have the audacity to tell us that it's really expensive, so we should be glad it's only a 70% price increase.
In the end it's a chip that works and improves through iterative technological upgrades and adjustments. Both Nvidia and AMD have historically done a dance with what they do or don't enable on these chips, and how that suits different purposes or markets. But at its core it's a huge number of programmable cores that can work in parallel. We've seen Nvidia's CUDA-based architecture evolve over time: GeForce was already thinned down to pure gaming with Pascal, with GPGPU capability removed; then came Volta, and they transplanted its Tensor cores onto the Pascal 'base' (= Turing), added some cache and sauce, and now we have an RT core too. But in usage and purpose, there have always been similarities between everything Nvidia releases. The use of machine learning/'AI' to produce frames is effectively also a gaming improvement that originates from another corner of the market.
I think they've reached the end of the line with the fruits they can harvest for a more efficient gaming GPU based on raster alone. The marketing on GeForce kind of follows that logic: it's getting separated further and further from reality, just like Intel's chaotic explanations and changes to how turbo and frequency work, resulting in product stacks that are largely unusable, offer parts you'd rather undervolt, or are priced way out of comfort because they must live up to inflated marketing claims. You're right that what used to seem like a side note, such as a DLSS version upgrade, is now front and center; but compare it to Maxwell > Pascal! That was similarly mostly just a good shrink plus some improvements to power delivery (Ada has almost the same PowerPoint slides :D) enabling higher frequencies. Except today those barely pay off on their own; diminishing returns set in, especially as lots of raster components in the pipeline are becoming more dynamic.
TL;DR: I agree that the focus has indeed shifted since Volta, but it remains plausible that this was still the best way forward for their gaming lineup.
Remember the GeForce RTX 2080 Ti Cyberpunk 2077 Edition that came out months before the game finally launched, and then turned out to be too slow to run the game's ray tracing properly?
:-D
If Nvidia were actively trying to target miners, there's a lot of GPU silicon they could remove to build dedicated, smaller dies. As it stands, their mining-focused cards use the same dies meant for gaming, albeit with (a lot of) disabled shaders.