Wednesday, November 4th 2020
NVIDIA Reportedly Working on GeForce RTX 3080 Ti Graphics Card with 20 GB GDDR6X VRAM
A leak from renowned (and usually on-point) leaker Kopite7kimi claims that NVIDIA has finally settled on a new graphics card to combat AMD's RX 6800 threat after all. Having previously been reported (though never confirmed) to be working on double-memory configurations of its RTX 3070 and RTX 3080 graphics cards (with 16 GB GDDR6 and 20 GB GDDR6X, respectively), the company is now said to have settled on a 20 GB RTX 3080 Ti to face an (apparently; pending independent reviews) resurgent AMD.
The RTX 3080 Ti specs paint a card with the same CUDA core count as the RTX 3090, with 10496 FP32 cores over the same 320-bit memory bus as the RTX 3080. Kopite includes board and SKU numbers (PG133 SKU 15) alongside a new GPU codename: GA102-250. The performance differentiators against the RTX 3090 stand to be the memory amount, bus width, and ultimately core clockspeed; memory speed and board TGP are reported to mirror those of the RTX 3080, so somewhat reduced clocks compared to that graphics card are expected. That number of CUDA cores means NVIDIA is essentially divvying up the same GA102 die between its RTX 3090 (good luck finding one in stock) and the reported RTX 3080 Ti (so good luck finding one of those in stock as well, should the time come). It is unclear how pricing would work out for this SKU, but pricing comparable to that of the RX 6900 XT is the more sensible speculation. Take this report with the usual amount of NaCl.
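For context, the leaked 320-bit bus combined with the RTX 3080's memory speed implies the same peak bandwidth as the RTX 3080. A quick sketch of the arithmetic (the 19 Gbps figure is borrowed from the RTX 3080, since the leak only says memory speed mirrors that card):

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8 bits per byte) * per-pin data rate in Gbps
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# Reported RTX 3080 Ti: 320-bit bus at the RTX 3080's 19 Gbps GDDR6X
print(peak_bandwidth_gbs(320, 19.0))   # 760.0 GB/s, same as the RTX 3080
# RTX 3090 for comparison: 384-bit bus, 19.5 Gbps GDDR6X
print(peak_bandwidth_gbs(384, 19.5))   # 936.0 GB/s
```

So the reported card would keep the 3090's shader throughput but give up roughly a fifth of its memory bandwidth and over half its VRAM.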
Sources:
Kopite7kimi @ Twitter, via Videocardz
140 Comments on NVIDIA Reportedly Working on GeForce RTX 3080 Ti Graphics Card with 20 GB GDDR6X VRAM
For comparison, my 2080 reports 200-210 W of power draw on the GPU core and about 20 W on the RAM, with minor amounts left over for other stuff. I seem to remember the GPU taking the majority of the power budget in earlier generations as well. Was something changed in how this stuff is reported, or is GDDR6X really this power hungry?
Just as an example, the first 3080 GPU-Z screenshot Google search returned:
Instead they just launch new SKUs to compete.
2080 Ti $999 -> 3080 Ti $999+ probably
2080 $699 -> 3080 $699
2070 $499 -> 3070 $499
2060 Super $399 -> 3060 Ti $399 probably
An xx60 MID RANGE class GPU costing the same as an entire console. Pure madness.
Welcome to the Nvidia Screw-over train, we take your money and screw you over after a little while :roll:
The same happened to Pascal Titan owners: when the GTX 1080 Ti launched, it was cheaper and faster at a lot of things than the Pascal Titan. So that Titan was phased out, because we can't have a card that outperforms our Titan, and then a new Pascal Titan was launched with even more CUDA cores :laugh:
Unfortunately it'll be priced $200 too high most probably.
I am so happy with this news. haha.
another nuclear reactor to warm up the house in the upcoming winter season.
survivor? :(
On my 290X 4 GB I've played titles that "required" 6 or even 8 GB of VRAM just fine. I've dialed texture quality up above the recommended setting when I didn't like how the game looked, and still had no problems from lack of VRAM. So judging what is enough based on game requirements is a bit pointless.
Set a price range, check what meets your performance requirements, then buy the card with the highest amount of VRAM that fits your budget and you're good to go. By the time games look too ugly because you had to lower textures, the card will be long dead FPS-wise.
As for the 970, the problem was never the amount of VRAM. The slow 0.5 GB partition was what caused the problems, as it tanked performance very hard. Once NVIDIA isolated that 0.5 GB in drivers, 970s worked fine even with titles that required 4+ GB of VRAM.
On a technical level, both camps have different approaches to solving VRAM limitations.
NVIDIA's lossless compression allows them to use lower capacities and narrower buses while preserving higher performance, so they fit as little memory as possible for bigger margins.
With GCN, AMD had to throw a lot of memory bandwidth at the problem (the 7970's bus was 384-bit, the 290X's was 512-bit, and Fury, Vega, and the VII used even wider HBM buses) to provide enough "fuel" for the GPU, but it was never enough. Since RDNA, AMD's memory bus has topped out at 256-bit, which was previously reserved for their midrange cards (no doubt the 5700 XT itself is a midrange card), and now with RDNA2 even their top-tier 6900 has a 256-bit bus. Sure, the new cache provides higher speeds, but you still need to feed that cache at adequate speeds, and AMD thinks that what used to be a midrange bus is now enough even for a flagship.
I think the 16 GB of VRAM in AMD's cards is more targeted at feeding the cache (i.e., load all textures into VRAM so the cache has instant access without calls out to RAM/storage), and/or they believe they can get a significant performance boost from direct CPU access to VRAM, so they made sure to provide enough VRAM for devs to play with.
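The trade-off described above (narrow bus plus big on-die cache) can be sketched with a simple effective-bandwidth model. The cache bandwidth and hit rate below are purely illustrative assumptions, not AMD's published figures:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak DRAM bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

def effective_bandwidth_gbs(dram_gbs: float, cache_gbs: float, hit_rate: float) -> float:
    """Blend cache and DRAM bandwidth by hit rate (toy model)."""
    return hit_rate * cache_gbs + (1.0 - hit_rate) * dram_gbs

dram = peak_bandwidth_gbs(256, 16.0)   # 256-bit bus @ 16 Gbps GDDR6 = 512 GB/s
# Hypothetical on-die cache: 1600 GB/s bandwidth, 60% hit rate (assumed values)
print(effective_bandwidth_gbs(dram, 1600.0, 0.60))   # well above the raw 512 GB/s
```

The point is that if most accesses hit the cache, the effective bandwidth can exceed what a much wider DRAM bus would deliver, which is presumably the bet behind pairing a 256-bit bus with a large cache.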
It will be interesting to see if those things will really help AMD. I don't have to imagine anything: they already did it with HairWorks, forced tessellation, GameWorks (or whatever it was called) extensions, and PhysX. I don't remember the outrage, though. :rolleyes:
Now that AMD holds the consoles and devs have to optimize for AMD's hardware, the coin has flipped, and NVIDIA is quite jumpy when something comes close to taking away the "performance crown".
A single game announcement is enough to cause... leakages :rolleyes:
BTW, PhysX has been open source for some time now ;)
The 3090 is a workstation class GPU that will be sold in droves to VFX studios and freelancers, who will buy them in pairs to use NVlink and get a much needed 48 GB for 3D rendering.
The 3080 Ti fills the gap for low-end workstations. The 3080's 10 GB just doesn't cut it for rendering or even complex video editing and FX. AMD's 6900 XT was looking like the right purchase until this announcement.
That's why this 3080 Ti makes tons of sense outside gaming. I for one, will buy it the instant I can find it in stock.
Those VFX studios are buying Mac Pros