Friday, October 14th 2022

NVIDIA Cancels GeForce RTX 4080 12GB, To Relaunch it With a Different Name
NVIDIA has decided to cancel the November 2022 launch of the GeForce RTX 4080 12 GB. The company will relaunch the card under a different name, though it has not yet announced what that name will be. The naming of the RTX 4080 12 GB caused much controversy. With the RTX 40-series "Ada," NVIDIA debuted three SKUs: the already-launched RTX 4090, which is in stores right now; the RTX 4080 16 GB; and the RTX 4080 12 GB. Memory size notwithstanding, the RTX 4080 12 GB is a vastly different graphics card from the RTX 4080 16 GB.
The RTX 4080 12 GB and RTX 4080 16 GB don't even share the same silicon. While the 16 GB model is based on the larger "AD103" silicon, with 9,728 CUDA cores and a 256-bit wide GDDR6X memory bus, the RTX 4080 12 GB is based on the smaller "AD104" silicon, with just 7,680 CUDA cores (21% fewer) and a meager 192-bit wide GDDR6X memory bus. This had the potential to confuse buyers, especially given the $900 price. With criticism spanning not just social media but also bad press, NVIDIA decided to pull the plug on the RTX 4080 12 GB. The company will likely re-brand it as a successor to the RTX 3070 Ti, although it will then have a hard time justifying the $900 price tag. The RTX 4080 16 GB, however, remains on track for November 16 availability, with a baseline price of $1,200.
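The percentage gap quoted above can be sanity-checked with a few lines of arithmetic; this is just a quick sketch using the core counts and bus widths stated in the article:

```python
# Sanity-check the spec gap between the two "RTX 4080" SKUs,
# using the figures quoted in the article.
ad103_cores, ad104_cores = 9_728, 7_680   # RTX 4080 16 GB vs 12 GB CUDA cores
ad103_bus, ad104_bus = 256, 192           # memory bus width in bits

core_deficit = (1 - ad104_cores / ad103_cores) * 100
bus_deficit = (1 - ad104_bus / ad103_bus) * 100

print(f"CUDA core deficit: {core_deficit:.0f}%")   # ~21% fewer cores
print(f"Memory bus deficit: {bus_deficit:.0f}%")   # 25% narrower bus
```

Note this counts shader units and bus width only; real-world performance gaps also depend on clocks, cache, and memory speed.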
Source:
NVIDIA
423 Comments on NVIDIA Cancels GeForce RTX 4080 12GB, To Relaunch it With a Different Name
Everyone:
Hmm... "hero... chosen one... stop the evil..."
Intel Arc A770 left the chat...
So yeah, I'd like someone to also document performance gains over driver iterations.
But no pre-orders either. Just nothing. How absolutely pathetic Canada has become, beyond just our general stupidity. Can't even get goods now.
I would have purchased a 4090 if it was cheaper, like $1,200, but at $1,600 USD it's just too much.
I don't have much faith in AMD either honestly.
I've been telling people: want 4K (even if it isn't actually 4K) gaming? Cheap? Get a PS5 or an Xbox Series X.
Right now, even with GPU prices as low as they are, it's still abysmal.
The texture compression and lower quality rendering cheats of the old FX series
The 970 having 3.5GB and not 4GB VRAM
Deliberately selling cards in tiers that make them obsolete faster (1050Ti 4GB vs 1060 3GB - they'd BOTH be better off with the VRAM amounts swapped)
Then the modern shenanigans with FE cards being limited to certain countries, the 4080 12GB being a 4070 at 4080 prices
Hell, look at the laptops for the worst cases, where product names became meaningless: a laptop 1060 could have been just about anything. They used product names to mislead people, plus different TDP variants with drastically differing performance that they did their best to keep hidden.
There's more, and AMD is not above this sort of thing either - I think all brands have done dodgy shit over the years.
Only the 1030 shenanigans with the DDR4 vs GDDR5 versions do I consider a BS move, because even an informed buyer couldn't actually distinguish between the products.
- The 3090 Ti that only came out to milk flagship hunters beyond imagination after they've got their 3090s already,
- The 3080 having 10 GB VRAM initially, with a 12 GB version unexpectedly coming out later,
- The 3070 Ti 16 GB version being scrapped to force buyers into planned obsolescence with a choice of more VRAM on a lesser card or more performance with less VRAM,
- The 3060 having 12 GB VRAM, basically pissing on everything that has only 8 or 10 GB even several tiers above, despite the fact that it doesn't necessarily need that much,
- The 3050 with questionable performance for its tier selling for way more than it should just because it's RTX, and...
- Having nothing below that. The 3050 is still based on a scrappy version of the mid-tier GA106, which means Nvidia is openly pissing on gamers on a budget.
The 20-series Super cards established a trend which Ampere only continued. That trend is "milk gamers now, then make them realise their mistake when the proper version comes out later". I'm pretty sure every "mistake" we see in Ada cards is planned ahead of time, and we'll see them somewhat rectified in Ti/Super releases next year. Only somewhat, because someone will have to buy the 50-series, too.
Edit: Before someone argues it, these are not dealbreaker moves, just plain scummy ones.
Like you said, AMD isn't innocent, either (their laptop chip naming scheme is horrendous), but at least they give you the best performance physically possible (fully enabled chips with decent amounts of VRAM), and not crippled versions that only tease you into the "refresh" coming next year.
Those are a different matter, because the 3800X for example, was a fully enabled chip with full capability compared to the 3800XT. Same as the 6900XT compared to the 6950XT. The only thing you lose with the "lesser" version is a couple percent performance, max, which you can bring back with overclocking if you're lucky. Nvidia's "Super" game is different, because nowadays, you mostly see vanilla (non-Ti) releases with crippled chips, which you cannot necessarily overcome with overclocking, especially if you pair it with crippled VRAM capacity as well. Additionally, my personal note is that anyone who knows a thing or two about GPUs can suspect from the crippled chips that the good ones are reserved for something coming later. They give you the scraps to tease you into the good stuff you didn't wait for - this is the scummy side of it, imo.
Edit: It's like a girlfriend that gives you the worse slice of pizza, then halfway through her better slice, realizes that she's full, and you're left thinking "great, I could have had that one right from the start".
For example, even though they were sold as the same product, if you happened to buy a 1st-gen Ryzen close to launch versus 10 months later, they were completely different products in terms of OCing. My 1600 needed an insane amount of voltage to hit 3.6 GHz, while the two I bought for my friends were casually hitting 4 GHz with minimal voltage.
We get it, you love Nvidia and won't admit that they did something shitty, even though they themselves have admitted the naming was confusing. But like you said, I am sure you know better than a multibillion-dollar company that decided to retract a product because they were too dumb or something, I don't know. What even is an informed consumer? Millions of people play video games; it's evident that most won't have technical knowledge about these things. That's why there are laws against false advertising in the first place.
OCing is a different matter. A 1700X bought at launch is a fully enabled chip, and so is the same 1700X bought a year later, when you compare stock settings. Intel/AMD/Nvidia aren't selling OC - that's something you do for yourself.
What I'm saying is that naming is irrelevant as long as an informed consumer can differentiate between your products on a shelf. That wasn't the case with the two versions of the 1030, for example, but it was the case with the two versions of the 4080.
What do you think will actually change now that they took it back? They'll rename it to 4070 and sell it for the same price. WOW, huge win for the consumer, right? These things don't only hit Nvidia. AMD does it, Intel does it. Sure, you can get down to the technicalities and claim it's different because Nvidia changes the amount of CUDA cores, but is it really different? Is 10% more CUDA cores immoral, but 10% more clockspeed fine?
If I'm mistaken, and they actually do sell it for that price, then I'll agree with you that people are stupid for buying it (besides being disappointed by the human intellect for the Nth time). Sorry, I edited my post a bit late, so I'll write it down again (my bad, really).
Nvidia is the only company I've ever seen to pull off a full launch of an entirely new generation without one single product based on a fully enabled die anywhere in the product stack. Why is that? If Intel can release the 12900K together with the 12700K and 12600K, if AMD can release the 7950X and 7700X together with the 7900X and 7600X (or the 6900XT together with the 6800 and 6800XT for that matter), then why can't Nvidia do the same?