Thursday, July 27th 2023

NVIDIA Cancels GeForce RTX 4090 Ti, Next-Gen Flagship to Feature 512-bit Memory Bus

NVIDIA has reportedly shelved plans, at least in the short term, to release the rumored GeForce RTX 4090 Ti flagship graphics card, according to Kopite7kimi, a reliable source for NVIDIA leaks. The card had been extensively leaked over the past few months as featuring a cinder block-like 4-slot thickness and a unique PCB oriented along the plane of the motherboard, rather than perpendicular to it. From the looks of it, sales and competition in the high-end/halo segment are too slow to justify it: the current RTX 4090 remains the fastest graphics card you can buy, and the company seems unfazed by the alleged Radeon RX 7950 series, given that AMD has already maxed out the "Navi 31" silicon and there are only so many things the red team can try to beat the RTX 4090.

That said, the company is reportedly planning more SKUs based on the AD103 and AD106 silicon. The AD103 powers the GeForce RTX 4080, which nearly maxes it out. The AD104 has been maxed out by the RTX 4070 Ti, leaving a gap between the RTX 4070 Ti and the RTX 4080 that AMD could try to exploit by competitively pricing its RX 7900 series and certain upcoming SKUs. This creates scope for new SKUs based on a cut-down AD103 and the GPU's 256-bit memory bus. The AD106 is nearly maxed out with the RTX 4060 Ti; however, there's still room to unlock its last remaining TPC, use faster GDDR6X memory, and attempt to narrow the vast gap between the RTX 4060 Ti and the RTX 4070.
In related news, Kopite7kimi also claims that NVIDIA's next-generation flagship GPU could feature a 512-bit wide memory interface, in what could be an early hint that the company is sticking with GDDR6X (currently as fast as 23 Gbps) rather than transitioning over to the GDDR7 standard (starting at 32 Gbps), which offers double the speed of GDDR6.
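Peak memory bandwidth scales with bus width and per-pin data rate, so a quick back-of-envelope calculation (an illustration only; the actual next-gen memory configuration is unconfirmed) shows what a 512-bit bus would imply at these speeds:

```python
# Back-of-envelope peak memory bandwidth: bus width (bits) x per-pin data rate (Gbps) / 8 = GB/s.
# Illustrative figures only; next-gen speeds and bus width are rumored, not confirmed.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gbs(384, 21))  # RTX 4090 today: 384-bit @ 21 Gbps GDDR6X = 1008 GB/s
print(peak_bandwidth_gbs(512, 23))  # 512-bit @ 23 Gbps GDDR6X = 1472 GB/s
print(peak_bandwidth_gbs(512, 32))  # 512-bit @ 32 Gbps GDDR7  = 2048 GB/s
```
Even without GDDR7, widening the bus to 512-bit would lift peak bandwidth well past the RTX 4090's roughly 1 TB/s.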
Sources: VideoCardz, kopite7kimi (Twitter), kopite7kimi (Twitter)

75 Comments on NVIDIA Cancels GeForce RTX 4090 Ti, Next-Gen Flagship to Feature 512-bit Memory Bus

#51
chrcoluk
I have resigned myself to the fact that I will probably be buying a used GPU, when those who upgrade every gen sell off their 4000 series cards after the 5000 series is out. If Nvidia do an end-of-gen 1080 Ti-style discount, it might make me buy new at the end of this gen though, as I do need a VRAM upgrade.

Shame AMD have no SGSSAA, else I would have hopped over.
Posted on Reply
#52
Dan.G
TheLostSwede£3.50?
Per 1-bit? :D
And I can sell only 1 kidney...
Posted on Reply
#53
bearClaw5
TheLostSwedeMaybe Nvidia figured no-one wanted to pay $3,000 for a consumer graphics card?
With Nvidia making so much from AI, they probably don't even care about the consumer graphics business.
Posted on Reply
#55
Wye
This should be tagged as rumor by an honest publicist.
But nah, anything for a few more clicks, amirite?
Posted on Reply
#56
Assimilator
WyeThis should be tagged as rumor by an honest publicist.
But nah, anything for a few more clicks, amirite?
It does say "reportedly".
Posted on Reply
#57
Macro Device
Darmok N Jaladif Navi was starved for bandwidth though.
It's EXTREMELY starved. It loses way more performance at 4K compared to competing Team Green GPUs. You don't notice this starvation at obsolete and tiny resolutions such as 720p and 1080p, but at 3440x1440 and beyond, it becomes obvious all AMD GPUs of the latest two gens (RX 6800 non-XT excluded) are vastly suffering from insufficient VRAM performance. The 6700 XT, for example, is very close to the RTX 3070 at 1080p and is slower than the 3060 Ti at 4K. Their "super dooper cache" can only help when the resolution is low. High resolutions require REAL bandwidth, yet AMD hasn't provided it.

As for the RTX 4090 Ti, launching it would've made some sorta "sense" only if AMD or anyone else had something faster than the plain 4090. And that never happened. Jacketguy doesn't have to put any effort into Ada. Focusing on getting as much profit as inhumanly possible from the RTX 5000 series is his only sensible way of investing his time as of now.
Posted on Reply
#58
Lew Zealand
Beginner Micro DeviceIt's EXTREMELY starved. It loses way more performance at 4K compared to competing Team Green GPUs. You don't notice this starvation at obsolete and tiny resolutions such as 720p and 1080p, but at 3440x1440 and beyond, it becomes obvious all AMD GPUs of the latest two gens (RX 6800 non-XT excluded) are vastly suffering from insufficient VRAM performance. The 6700 XT, for example, is very close to the RTX 3070 at 1080p and is slower than the 3060 Ti at 4K. Their "super dooper cache" can only help when the resolution is low. High resolutions require REAL bandwidth, yet AMD hasn't provided it.
No it's not. The most recent GPU review on TPU: www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-16-gb/30.html

The 6700 XT is a little ahead of the 3060 Ti at 1080p and 1440p by a similar margin and falls slightly behind at 4K, where both cards are useless at 46 fps. The 3070 is ~10% faster at both 1080p and 1440p. The 6700 XT is a well-balanced card at the resolutions it's targeted at.

If you want to look at the 6800 XT and 3080, they are separated by less than 3% at all resolutions. The 3080 scales slightly better to 4K but we're talking about a few frames, which is nothing you will notice while playing a game. These are very small differences.
Posted on Reply
#59
80251
Lew ZealandNo it's not. The most recent GPU review on TPU: www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-16-gb/30.html

The 6700 XT is a little ahead of the 3060 Ti at 1080p and 1440p by a similar margin and falls slightly behind at 4K, where both cards are useless at 46 fps. The 3070 is ~10% faster at both 1080p and 1440p. The 6700 XT is a well-balanced card at the resolutions it's targeted at.

If you want to look at the 6800 XT and 3080, they are separated by less than 3% at all resolutions. The 3080 scales slightly better to 4K but we're talking about a few frames, which is nothing you will notice while playing a game. These are very small differences.
Why doesn't the 7900 XTX have a greater performance delta over the 4080? It has a LOT more cache and bandwidth, but its TPU performance metric only has it pegged at 2% faster? The 7900 XTX even has more RT cores than the RTX 4080.
Posted on Reply
#60
Lew Zealand
80251Why doesn't the 7900 XTX have a greater performance delta over the 4080? It has a LOT more cache and bandwidth, but its TPU performance metric only has it pegged at 2% faster? The 7900 XTX even has more RT cores than the RTX 4080.
Not owning a 7900 XTX I really don't know, but the OC potential of some of the XTXs suggests that AMD is conservatively clocking them, resulting in lower performance; see the OC Cyberpunk performance here:

www.techpowerup.com/review/xfx-radeon-rx-7900-xtx-merc-310-oc/39.html

There are good guesses as to why this is happening, but nobody knows for sure outside of AMD. Hell, they don't seem to know either, hence the missing 7700 and 7800 series.
Posted on Reply
#61
dicobalt
Flagship? Those are cool I guess, but I care about the 5060 Ti price range much more. Maybe that'll give me a reason to upgrade my 3060 Ti.
Posted on Reply
#62
wheresmycar
80251Why doesn't the 7900 XTX have a greater performance delta over the 4080? It has a LOT more cache and bandwidth, but its TPU performance metric only has it pegged at 2% faster? The 7900 XTX even has more RT cores than the RTX 4080.
The XTX gets more cache, bandwidth, and memory, but the 4080 uses faster memory... possibly one of the contributing factors?

Outside of architectural advances/hardware prowess, there's no doubt some games are just better optimised for Nvidia cards. There are several contributing factors that put Nvidia in a more rewarding position with developers/game engines. The obvious one: Nvidia's market share (gaming) is MASSIVE!! Almost the whole cake! The bigger the player, the greater the influence (IMO). Nvidia uses this influence through dedicated dev-interaction departments (or think tanks), translating to increased developer/game engine relations, shared proprietary tech for testing/implementation (some games are better optimised on Nvidia tech), and then there's sponsored favouritism - bigger pockets, greater reach. AMD's no different, just with smaller pockets and on a smaller scale (they've got a long way to go to catch up with the king of the hill). In short, a ~2% margin is best ignored.

Nowadays I don't concern myself with dev/GE interactions but question whether there's some level of premeditated conformity between both manufacturers in playing the market. "You scratch my back and I'll scratch yours" is good business sense (under the table, of course). I think I better shut up and go back to being a good law-abiding consumer :respect:
Posted on Reply
#63
80251
@wheresmycar
I'm guessing your "premeditated conformity" between GPU manufacturers only applies to the duopoly right? Because Intel is far too small a player now in the GPU market to shoulder them aside.
Posted on Reply
#64
mama
80251Why doesn't the 7900 XTX have a greater performance delta over the 4080? It has a LOT more cache and bandwidth, but its TPU performance metric only has it pegged at 2% faster? The 7900 XTX even has more RT cores than the RTX 4080.
The rumour mill suggests the card was gimped because of an artifacting issue. The story goes that AMD released higher-than-actual performance estimates before release, expecting this artifacting issue to be resolved. It wasn't at launch, so they released the card with lower performance to avoid a major drama. Personally I give this little credence, as I would have expected such an issue to be resolved by now. Unless you believe another conspiracy theory that goes like this...
Posted on Reply
#65
wheresmycar
80251@wheresmycar
I'm guessing your "premeditated conformity" between GPU manufacturers only applies to the duopoly right? Because Intel is far too small a player now in the GPU market to shoulder them aside.
Yep, a 2-prong hunch. Hope Intel goes the full mile this time around.
Posted on Reply
#66
ToxicTaZ
mxthunderWas looking forward to the 4080Ti. Really no talk about that. I wanted a cheaper AD102 card to buy.
I'm waiting for the RTX 4080 Ti as well, hopefully for Q4 2023. Something should hopefully be said by the Intel 14th Generation launch party in September.

Cheers
Posted on Reply
#67
Ruru
S.T.A.R.S.
Lew ZealandNo it's not. The most recent GPU review on TPU: www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-16-gb/30.html

The 6700 XT is a little ahead of the 3060 Ti at 1080p and 1440p by a similar margin and falls slightly behind at 4K, where both cards are useless at 46 fps. The 3070 is ~10% faster at both 1080p and 1440p. The 6700 XT is a well-balanced card at the resolutions it's targeted at.

If you want to look at the 6800 XT and 3080, they are separated by less than 3% at all resolutions. The 3080 scales slightly better to 4K but we're talking about a few frames, which is nothing you will notice while playing a game. These are very small differences.
I play fine at 4K60 with my 6700 XT OC. Though I use FSR quality if needed.
Posted on Reply
#69
ToxicTaZ
KonceptzSo, what about the 4080ti?
Think there's a lot of us that want an entry-level AD102 GPU... save a few hundred bucks with similar performance to an RTX 4090...

Since there's a huge gap, and Nvidia's timeline now has no new GPUs till 2025, I would expect a refresh launch in Q4 this year or maybe at CES, probably a Super series or something.

Cheers
Posted on Reply
#70
Blueberries
Would love to trade my 4090 in for one.

Nvidia's ability to iterate on PCB and cooler design in such a short period of time is seriously impressive. I hope they keep this momentum going forward.
Posted on Reply
#71
PapaTaipei
Possible reason: the marketing fallout from a 600 to 900 watt card might be more detrimental than any benefit from releasing it.
Posted on Reply
#72
bearClaw5
Beginner Micro DeviceIt's EXTREMELY starved. It loses way more performance at 4K compared to competing Team Green GPUs. You don't notice this starvation at obsolete and tiny resolutions such as 720p and 1080p, but at 3440x1440 and beyond, it becomes obvious all AMD GPUs of the latest two gens (RX 6800 non-XT excluded) are vastly suffering from insufficient VRAM performance. The 6700 XT, for example, is very close to the RTX 3070 at 1080p and is slower than the 3060 Ti at 4K. Their "super dooper cache" can only help when the resolution is low. High resolutions require REAL bandwidth, yet AMD hasn't provided it.

As per RTX 4090 Ti, launching it would've made some sorta "sense" only if AMD or anyone else had something faster than the plain 4090. And this never happened. Jacketguy doesn't have to put an effort in Ada. Focusing on getting as much profit as inhumanly possible from RTX 5000 series is his only sensible way of investing his time as of now.
What is unique about the 6800 non-XT? It has the same memory setup as the other Navi 21 cards.
Posted on Reply
#73
80251
PapaTaipeiPossible reason: the marketing of a 600 to 900 watt card might be more detrimental than releasing it.
According to the TPU GPU DB: "Being a triple-slot card, the NVIDIA GeForce RTX 4090 Ti draws power from 2x 16-pin power connectors, with power draw rated at 600 W maximum."

600 Watts isn't that much more than what 4090s can draw now. My 4090, when playing Dishonored 2 at maxed DSR resolution, posted the following peak power draw stats:
"aftrbrner: 574.9 Watts, GPU 16-pin HVPWR power max.: 551.2 Watts, GPU PCIe +12V Input Power max.: 15.6W"
Posted on Reply
#74
Macro Device
bearClaw5What is unique about the 6800 non-XT?
Its core: it's too weak to actually benefit from having more than 500 GBps of VRAM bandwidth (the gain is real, yet far from being linear) and too strong to have less than 400 GBps of VRAM bandwidth (the losses are very close to linear). The actual b/w is 512 GBps minus the latencies. All other Navi 21 GPUs, on the contrary, are able to make use of more VRAM b/w than they already have. The greater the resolution, the greater the effect.
Posted on Reply
#75
Falcon216
80251According to the TPU GPU DB: "Being a triple-slot card, the NVIDIA GeForce RTX 4090 Ti draws power from 2x 16-pin power connectors, with power draw rated at 600 W maximum."

600 Watts isn't that much more than what 4090s can draw now. My 4090, when playing Dishonored 2 at maxed DSR resolution, posted the following peak power draw stats:
"aftrbrner: 574.9 Watts, GPU 16-pin HVPWR power max.: 551.2 Watts, GPU PCIe +12V Input Power max.: 15.6W"
Peak draw is not the same thing as sustained average. My 1070 Ti can peak at 250 watts, but with the A/F curve set in Afterburner, it averages 180.

Anyway, as for the 4090 Ti "cancellation": Nvidia is not selling anywhere near enough 4090s, and even fewer 4080s. I will bet hard $$$ that slow high-end sales are why the 5000 series is "delayed until 2025" and the 4090 Ti is "cancelled".
Posted on Reply