Thursday, July 27th 2023

NVIDIA Cancels GeForce RTX 4090 Ti, Next-Gen Flagship to Feature 512-bit Memory Bus

NVIDIA has reportedly shelved short-term plans to release the rumored GeForce RTX 4090 Ti flagship graphics card, according to Kopite7kimi, a reliable source for NVIDIA leaks. The card had been extensively leaked over the past few months as featuring a cinder block-like 4-slot thickness, and a unique PCB that sits parallel to the plane of the motherboard rather than perpendicular to it. From the looks of it, sales and competition in the high-end/halo segment are too slow: the current RTX 4090 remains the fastest graphics card you can buy, and the company seems unfazed by the alleged Radeon RX 7950 series, given that AMD has already maxed out the "Navi 31" silicon, and there are only so many things the red team can try to beat the RTX 4090.

That said, the company is reportedly planning more SKUs based on the AD103 and AD106 silicon. The AD103 powers the GeForce RTX 4080, which nearly maxes it out. The AD104 has been maxed out by the RTX 4070 Ti, and there is a gap between the RTX 4070 Ti and the RTX 4080 that AMD could try to exploit by competitively pricing its RX 7900 series and certain upcoming SKUs. This creates scope for new SKUs based on a cut-down AD103 and the GPU's 256-bit memory bus. The AD106 is nearly maxed out by the RTX 4060 Ti; however, there's still room to unlock its last remaining TPC, use faster GDDR6X memory, and attempt to narrow the wide gap between the RTX 4060 Ti and the RTX 4070.
In related news, Kopite7kimi also claims that NVIDIA's next-generation flagship GPU could feature a 512-bit wide memory interface, in what could be an early hint that the company is sticking with GDDR6X (currently as fast as 23 Gbps), and not transitioning over to the GDDR7 standard (starts at 32 Gbps), which offers double the speeds of GDDR6.
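Those bus-width and data-rate figures translate directly into peak memory bandwidth: pins times per-pin data rate, divided by eight bits per byte. A quick sketch (the 384-bit/21 Gbps configuration is the shipping RTX 4090's; the 512-bit configurations are the rumored ones from this report):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: pin count x per-pin rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# Current RTX 4090: 384-bit GDDR6X at 21 Gbps
print(peak_bandwidth_gbs(384, 21))  # 1008.0 GB/s
# Rumored 512-bit bus with today's fastest GDDR6X (23 Gbps)
print(peak_bandwidth_gbs(512, 23))  # 1472.0 GB/s
# Same 512-bit bus if GDDR7 (32 Gbps) were used instead
print(peak_bandwidth_gbs(512, 32))  # 2048.0 GB/s
```

Even without GDDR7, the wider bus alone would be a sizeable bandwidth jump over the current flagship.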
Sources: VideoCardz, kopite7kimi (Twitter), kopite7kimi (Twitter)

75 Comments on NVIDIA Cancels GeForce RTX 4090 Ti, Next-Gen Flagship to Feature 512-bit Memory Bus

#1
xrli
512-bit wide GDDR6X sounds great. Hopefully the next gen can also get 48GB or more of VRAM, it would benefit the local LLM community massively.
Posted on Reply
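On the capacity point: GDDR6/6X devices have a 32-bit interface, so a 512-bit bus hosts 16 of them, and total VRAM is simply device count times per-device density. A rough sketch (2 GB/16 Gbit is the common density today; the 3 GB/24 Gbit device is hypothetical):

```python
def vram_gb(bus_width_bits: int, chip_gb: float, clamshell: bool = False) -> float:
    """Total VRAM: one 32-bit GDDR device per channel; clamshell mounts two per channel."""
    channels = bus_width_bits // 32  # GDDR6/6X devices are 32 bits wide
    devices = channels * (2 if clamshell else 1)
    return devices * chip_gb

print(vram_gb(512, 2.0))                  # 32.0 GB with 2 GB (16 Gbit) chips
print(vram_gb(512, 3.0))                  # 48.0 GB with hypothetical 3 GB (24 Gbit) chips
print(vram_gb(512, 2.0, clamshell=True))  # 64.0 GB in a clamshell (workstation) layout
```

So 48 GB on a 512-bit consumer card would need either denser chips or a clamshell board.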
#2
TheLostSwede
News Editor
Maybe Nvidia figured no-one wanted to pay $3,000 for a consumer graphics card?
Posted on Reply
#3
nguyen
TheLostSwede: Maybe Nvidia figured no-one wanted to pay $3,000 for a consumer graphics card?
Chips are better used on a $7,000 Quadro.
Posted on Reply
#4
HBSound
TheLostSwede: Maybe Nvidia figured no-one wanted to pay $3,000 for a consumer graphics card?
They just need to come out with the next round, the 50 series, and keep going. I personally never understood the purpose of Ti's.

Get to the next step of hardware.
Posted on Reply
#5
TheDeeGee
Still want to see that cooler and PCB design in the future for other cards, it's too interesting.
Posted on Reply
#6
ARF
In related news, Kopite7kimi also claims that NVIDIA's next-generation flagship GPU could feature a 512-bit wide memory interface, in what could be an early hint that the company is sticking with GDDR6X (currently as fast as 23 Gbps), and not transitioning over to the GDDR7 standard (starts at 32 Gbps), which offers double the speeds of GDDR6.
There is GDDR6 memory at 20 Gbps. 32 is not double of 20.
Posted on Reply
#7
TheLostSwede
News Editor
TheDeeGee: Still want to see that cooler and PCB design in the future for other cards, it's too interesting.
It looks slot breaking...
Posted on Reply
#9
phanbuey
TheLostSwede: It looks slot-breaking...
I bet that's closer to the reason why they cancelled it haha
Posted on Reply
#11
TheLostSwede
News Editor
phanbuey: I bet that's closer to the reason why they cancelled it haha
Maybe even the reinforced slots couldn't handle it?
Posted on Reply
#12
Tomgang
Doesn't matter for me. Just means my RTX 4090 will stay the fastest card until NVIDIA's or AMD's next-gen/refresh cards come out.
Posted on Reply
#13
ARF
TheLostSwede: Maybe even the reinforced slots couldn't handle it?
Can't they make it dual-slot liquid cooled from the get-go?
Posted on Reply
#14
N/A
If GDDR6X remains, perhaps Ada-next is 800 mm², still on N4. They are going for an extremely big die, 2080 Ti-style. And 600 W to max out the 12VHPWR connector.
Posted on Reply
#15
crlogic
ARF: It depends on AMD. If it releases the long-awaited Radeon RX 7950 XTX, then nvidia will have to do something.
www.techpowerup.com/gpu-specs/radeon-rx-7950-xtx.c3980
the company seems unfazed by the alleged Radeon RX 7950 series, given that AMD has already maxed out the "Navi 31" silicon, and there are only so many things the red team can try, to beat the RTX 4090.
Wiz's review shows you can gain about 7% from an overclock on 7900XTX, so I'm not sure how they're going to pull off a 30% uplift when they've already run out of core. Here's hoping though!
Posted on Reply
#16
ARF
crlogic: Wiz's review shows you can gain about 7% from an overclock on 7900XTX, so I'm not sure how they're going to pull off a 30% uplift when they've already run out of core. Here's hoping though!
Fermi is a great example. GF110 vs GF100. New revision, vastly improved performance and even lowered power consumption.

[GTX 480 and GTX 580 performance charts]
Posted on Reply
#17
crlogic
ARF: Fermi is a great example. GF110 vs GF100. New revision, vastly improved performance and even lowered power consumption. [GTX 480 and GTX 580 performance charts]
Those efficiency and IPC gains are a great point, but in that case NVIDIA also added more shaders and TMUs between those two cards, whereas AMD is already using its biggest die with all components enabled. I don't think we'll see a revision like that in a mid-cycle refresh, especially since the RX 6x50 cards were also just an overclock, with no increase in core count or IPC.
Posted on Reply
#18
Darmok N Jalad
Perhaps they want to take an even better crack at the power connector. These would pull the most current through the connector and be even more likely to fail from a bad connection.
Posted on Reply
#19
mxthunder
Was looking forward to the 4080Ti. Really no talk about that. I wanted a cheaper AD102 card to buy.
Posted on Reply
#20
oxrufiioxo
mxthunder: Was looking forward to the 4080Ti. Really no talk about that. I wanted a cheaper AD102 card to buy.
Unless they were planning a $200+ price cut to the 4080, a 4080 Ti doesn't make a lot of sense. Currently it could slot in around $1,400; I doubt that would be overly appealing to most.
Posted on Reply
#21
Prima.Vera
That 4-slot abomination must be the winner of the World's Ugliest Video Card.

That reminded me of another monster, but winner of the World's Coolest Video Card.
Posted on Reply
#22
Noreng
crlogic: Wiz's review shows you can gain about 7% from an overclock on 7900XTX, so I'm not sure how they're going to pull off a 30% uplift when they've already run out of core. Here's hoping though!
The solution is obvious: crank the power limit to 750W (5x 8-pin), and stack some VCache on the MCDs.
Posted on Reply
#23
cvaldes
TheLostSwede: Maybe Nvidia figured no-one wanted to pay $3,000 for a consumer graphics card?
Nah, they figured they could charge double for an AI accelerator card for enterprise customers.
Posted on Reply
#24
HBSound
What is sad is that GPUs are getting more expensive. GPUs are getting larger and larger, while cell phones are getting smaller and smaller!
It is sad not to see a good redesign compress the modern-day GPU. At this rate, every time the GPU is updated (2 years max), you have to get a new case and PSU.
Posted on Reply
#25
swaaye
I seem to remember rumors of an even bigger Navi product than Navi 31, but they apparently decided to not bother with it. I suppose another configuration of chiplets. Maybe that's been used for a Radeon Pro board. Perhaps the scaling just isn't there to get it past 4090 in a reasonable manner.
Posted on Reply