Wednesday, October 19th 2022

NVIDIA GeForce RTX 4070 isn't a Rebadged RTX 4080 12GB, To Be Cut Down

It turns out that NVIDIA didn't just cancel (unlaunch) the GeForce RTX 4080 12 GB last week, but also shelved the SKU until it is needed in the product stack. This is probably because NVIDIA intended to sell it at $900 and would find it difficult to justify an xx70-class SKU at that price-point. A Moore's Law is Dead report goes into the possible reasons NVIDIA shelved the RTX 4080 12 GB, and why it won't be rebadged as the RTX 4070.

The RTX 4070, although expected to be based on the same AD104 silicon as the RTX 4080 12 GB, won't have the same configuration. The RTX 4080 12 GB maxed out the AD104, enabling all 7,680 CUDA cores on the silicon. It's likely that the RTX 4070 will have fewer CUDA cores, even if it retains the 192-bit memory interface and 12 GB memory size. The memory clock could be changed, too. The RTX 4080 12 GB was essentially NVIDIA trying to upsell the successor of the RTX 3070 Ti (maxed-out GA104) as an xx80-class SKU, at a higher price-point. Moore's Law is Dead also showed off possible designs of the RTX 4070 Founders Edition, revealing a compact card that carries over many of the cooler improvements introduced with the RTX 4090 FE, in a strictly 2-slot form factor.
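For context, the memory bandwidth implied by that configuration is easy to work out. A minimal sketch, assuming the rumored card keeps the cancelled RTX 4080 12 GB's 21 Gbps GDDR6X data rate (an assumption; as noted above, the memory clock could change):

```python
# Back-of-the-envelope bandwidth for the rumored RTX 4070 memory setup.
# The 21 Gbps data rate is carried over from the cancelled RTX 4080 12 GB
# spec as an assumption; the clock may well end up different.
bus_width_bits = 192              # 192-bit memory interface
data_rate_gbps = 21               # effective Gbps per pin (assumed)

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"{bandwidth_gbs:.0f} GB/s")  # -> 504 GB/s at these assumed specs
```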
Sources: harukaze5719 (Twitter), Moore's Law is Dead (YouTube)

121 Comments on NVIDIA GeForce RTX 4070 isn't a Rebadged RTX 4080 12GB, To Be Cut Down

#1
thelawnet
"The RTX 4080 12 GB maxed out the AD102, enabling all 7,680 CUDA cores on the silicon."

this should say 'AD104', not 'AD102'
#2
Bwaze
Price / performance chart of Ada cards is going to be hilarious.

Unless Nvidia convinces the reviewers to only look at the frame-doubled performance.
#3
HisDivineOrder
It doesn't really surprise me that they'd launch the 4070 before the 4070 Ti. I assume they'll bring the latter out with the 4080 12G specs around this time next year for a slightly lower price.
#4
Guwapo77
Replying to Source 4: So even the 4070 Ti wouldn't use a full AD104?
#5
AusWolf
I can smell a "cheap" 4070 coming with a severely cut-down chip for $700, then a 4070 Ti with the same config as the 4080 12 GB would have been for $850. And then, Jensen won't understand why everybody isn't happy.

It really puzzles me how Nvidia doesn't have a well-thought-out plan for the whole product stack before the launch of the flagship the way AMD and Intel do.
#6
Tsukiyomi91
they really did shelve the 4080 12GB, huh... vendors are gonna waste their money, time and effort again on damage control. Guess EVGA saying nope to the 40 Series was probably a "good move" on their part. It's no wonder this launch was utter chaos.
#7
evernessince
AusWolf wrote: "I can smell a 'cheap' 4070 coming with a severely cut-down chip for $700, then a 4070 Ti with the same config as the 4080 12 GB would have been for $850. And then, Jensen won't understand why everybody isn't happy.

It really puzzles me how Nvidia doesn't have a well-thought-out plan for the whole product stack before the launch of the flagship the way AMD and Intel do."
I mean, the 4080 12GB was already cut down to 46% of a 4090. How much more can you cut that down and still charge north of $700? At that point you are better off with numerous other cards already on the market. Heck, right now you can get 6900 XTs for under $700.
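For what it's worth, the 46% figure checks out against the published CUDA core counts; a quick sanity check:

```python
# Sanity-checking the "46% of a 4090" claim from published core counts.
rtx_4090_cores = 16_384        # AD102 as shipped on the RTX 4090
rtx_4080_12gb_cores = 7_680    # fully enabled AD104 (cancelled 4080 12 GB)

print(f"{rtx_4080_12gb_cores / rtx_4090_cores:.1%}")  # -> 46.9%
```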
#8
Ando
Guwapo77 wrote: "Replying to Source 4: So even the 4070 Ti wouldn't use a full AD104?"
Same result: they'd be admitting they tried to scam people if they just changed the name or price with the same die.
Honestly, since they went so far as to reveal it at their launch announcement, Nvidia probably would have been better off just going ahead with it. This wishy-washy approach puts them in an even more awkward situation than they were already in. Amazing that so many well-paid individuals at the company couldn't come to the conclusion that this was a terrible idea from the start.
#9
AusWolf
Tsukiyomi91 wrote: "they really did shelve the 4080 12GB, huh... vendors are gonna waste their money, time and effort again on damage control. Guess EVGA saying nope to the 40 Series was probably a 'good move' on their part. It's no wonder this launch was utter chaos."
It appears to me that they didn't launch Ada - they only launched the 4090, with zero idea what the rest of the product stack will look like. It shouldn't be so hard to figure out, since Ada is just Ampere Refresh, which is just Turing Refresh, but for some reason, it is. Maybe the hard part is figuring out how to milk people for small updates on the same architecture again and again, while brute-forcing some improvements into them to keep everybody quiet and praising them for their "efforts".
evernessince wrote: "I mean, the 4080 12GB was already cut down to 46% of a 4090. How much more can you cut that down and still charge north of $700? At that point you are better off with numerous other cards already on the market. Heck, right now you can get 6900 XTs for under $700."
It wasn't cut down. It was a fully enabled AD104.
#10
Chaitanya
So now nGreedia are going to rebadge 60-class chips as 70-series and sell them for upwards of $600.
#11
evernessince
AusWolf wrote: "It wasn't cut down. It was a fully enabled AD104."
It's a fully enabled AD104, which is a cut-down version of AD102. Pointless semantics, you know what I meant.

Instead of replying constructively, you decided to reply with that. The question remains: how much more can you cut down the die compared to the 4090 and sell it for north of $700, disagreements on my usage of 'cut down' aside.
#12
Unregistered
Normally the xx70's performance is equal to the previous generation's highest-performing gaming card, yet this generation that seems not to be the case.
Only the 4090 is a reasonable uplift in performance, but due to the high price it isn't impressive.
AMD should take this opportunity and crush nVidia's disappointing Ada Lovelace GPUs.
#13
john_
Bwaze wrote: "Unless Nvidia convinces the reviewers to only look at the frame-doubled performance."
They don't even need to try. They are already convinced. Tech sites are turning pro-Nvidia again in their articles. Slowly, but they are.

I believe that after AM5's failure to become a clear winner over Intel, the power of the RTX 4090 seen in reviews combined with DLSS 3, the lack of information about RDNA3 cards, plus Intel's entry into the GPU market, they see a lack of ability from AMD to continue as an effective and strong competitor to the other two in the retail market. So, they are starting to play nice with Nvidia. See the latest articles about GPU prices: Nvidia still sells most of its cards above MSRP, the RTX 4090 goes for $1,600 to over $2,000, and yet tech sites post articles with titles saying that GPU prices are going down and that normality in GPU pricing is back. They do say somewhere that AMD is undercutting Nvidia on pricing and that mid-range Radeon cards are better, but those remarks read more like obligatory fine print lost in the articles than something to clearly point the reader's focus at.
#14
N/A
I'd like a 4070 based on AD103, shaving 25% off the CUDA count for 8,064-8,704 cores with regular GDDR6, priced 35% lower at $799.
Use AD104 only for the 4060 Ti, 4060 and 4050, with 192, 160 and 128-bit buses.
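Those numbers are at least self-consistent; a quick check (the AD103 core count is the commonly cited full-die figure, and the $1,199 baseline is the RTX 4080 16 GB MSRP):

```python
# Checking the proposed cut: 25% off AD103's full 10,752 CUDA cores,
# and a price 35% below the RTX 4080 16 GB's $1,199 MSRP.
ad103_full_cores = 10_752

print(int(ad103_full_cores * 0.75))  # -> 8064, the low end of 8064-8704
print(round(1_199 * 0.65))           # -> 779, close to the suggested $799
```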
#15
Pumper
The renamed 12GB at $700 would have pushed it, but they are not even releasing it at those specs? Fuck them.
#16
cvaldes
This whole thing is bizarre.

NVIDIA knows, for each Ampere card, the price, COGS, gross margin, number of each type of core, memory size, memory bus width, memory bandwidth, pixel fillrate, texture fillrate, peak FP32, etc.

And since they made the Ada Lovelace GPUs, they know the same performance figures for the new-generation silicon. It's their freakin' design.

They know what VRAM capacities are available to a particular memory bus width.

All they need to do is put together a simple spreadsheet with model numbers from 3050 to 3090 Ti and adjacent cells listing model numbers from 4050 to 4090 Ti. Look for anomalies, try to fix those, and price accordingly.
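A minimal sketch of the comparison being described, using a few published launch specs and MSRPs; the pairings and the anomaly rule (price rising faster than core count) are illustrative assumptions, not NVIDIA's actual methodology:

```python
# Toy version of the Ampere-vs-Ada spreadsheet: pair up SKUs and flag
# rows where the price rose faster than the core count did.
pairs = [
    # (Ampere model, cores, MSRP $), (Ada model, cores, MSRP $)
    (("RTX 3070 Ti", 6_144, 599),   ("RTX 4080 12GB", 7_680, 899)),
    (("RTX 3080",    8_704, 699),   ("RTX 4080 16GB", 9_728, 1_199)),
    (("RTX 3090",   10_496, 1_499), ("RTX 4090",     16_384, 1_599)),
]

for (old, old_cores, old_price), (new, new_cores, new_price) in pairs:
    core_gain = new_cores / old_cores - 1
    price_gain = new_price / old_price - 1
    flag = "  <-- anomaly" if price_gain > core_gain else ""
    print(f"{old} -> {new}: cores {core_gain:+.0%}, price {price_gain:+.0%}{flag}")
```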
#17
Vayra86
evernessince wrote: "I mean, the 4080 12GB was already cut down to 46% of a 4090. How much more can you cut that down and still charge north of $700? At that point you are better off with numerous other cards already on the market. Heck, right now you can get 6900 XTs for under $700."
The 4080 (both of them) is positioned too far away from the 4090, really; and that happened because the 4090 really isn't that big of a jump where it matters: raster performance.

Now they have no real space above the Ampere stack to do anything meaningful gen-to-gen without looking like clowns. And it appears AMD is being more competent about it. And precisely: last gen is turning highly competitive all of a sudden. Working as intended?! Not if there are sub-$700 top-end cards destroying your new offering...

As said before, my popcorn is ready :D
#18
AusWolf
evernessince wrote: "It's a fully enabled AD104, which is a cut-down version of AD102. Pointless semantics, you know what I meant.

Instead of replying constructively, you decided to reply with that. The question remains: how much more can you cut down the die compared to the 4090 and sell it for north of $700, disagreements on my usage of 'cut down' aside."
It's not pointless semantics. AD104 is not a cut-down version of AD102. It is a much smaller chip to begin with, and as such, a lot more of them can be manufactured per wafer. Chips with defective parts are cut down to make lesser-performing ones. Either you don't understand how chip design and manufacturing works and really believe that every single GPU is a cut-down version of another, or it's you who decided to pick a pointless fight about semantics when you knew very well what I meant. If "cutting down" means what you seem to think it means (using the same architecture to make smaller chips), then what is the 1030 compared to the 1080 Ti, or the 710 compared to the 780 Ti? C'mon...

If only chips with defective parts form the basis of the initial launch, then I want to know where the fully working chips go - probably into storage, to be sold at an even higher price later. This is why I was scratching my head during the Ampere launch, and this is why I'm scratching my head now.
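The dies-per-wafer point is easy to quantify with the standard gross-die approximation; a rough sketch, assuming the commonly cited die sizes (~608 mm² for AD102, ~295 mm² for AD104) on a 300 mm wafer, ignoring yield:

```python
import math

# Gross dies per 300 mm wafer (ignores yield and scribe lines), showing
# why the much smaller AD104 yields far more chips per wafer than AD102.
WAFER_DIAMETER_MM = 300

def gross_dies_per_wafer(die_area_mm2: float) -> int:
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

print(gross_dies_per_wafer(608.4))   # AD102: ~89 candidate dies
print(gross_dies_per_wafer(294.5))   # AD104: ~201 candidate dies
```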
#19
Vayra86
cvaldes wrote: "This whole thing is bizarre.

NVIDIA knows, for each Ampere card, the price, COGS, gross margin, number of each type of core, memory size, memory bus width, memory bandwidth, pixel fillrate, texture fillrate, peak FP32, etc.

And since they made the Ada Lovelace GPUs, they know the same performance figures for the new-generation silicon. It's their freakin' design.

They know what VRAM capacities are available to a particular memory bus width.

All they need to do is put together a simple spreadsheet with model numbers from 3050 to 3090 Ti and adjacent cells listing model numbers from 4050 to 4090 Ti. Look for anomalies, try to fix those, and price accordingly."
This is the struggle of having to cut back on margins, and you can see the company's internal fight bleed out into the product stack. Greed is what happened here, followed by sharp market realities.
#20
AusWolf
Vayra86 wrote: "The 4080 (both of them) is positioned too far away from the 4090, really; and that happened because the 4090 really isn't that big of a jump where it matters: raster performance."
It's not that big of a jump, because it's the same architecture with improvements in the RT and Tensor cores. The rest of the design is just a node shrink and brute force.
#21
siluro818
MLID doesn't do "reports". They only speculate out loud about stuff that seems logical to the average viewer. Tbh, after their Arc "report", everyone should just ignore this "source" forever.
#22
Guwapo77
Ando wrote: "Same result: they'd be admitting they tried to scam people if they just changed the name or price with the same die. Honestly, since they went so far as to reveal it at their launch announcement, Nvidia probably would have been better off just going ahead with it. This wishy-washy approach puts them in an even more awkward situation than they were already in. Amazing that so many well-paid individuals at the company couldn't come to the conclusion that this was a terrible idea from the start."
I ended up watching his video and caught that part. But it seems really odd that they wouldn't use it for the Ti version.
#23
AusWolf
siluro818 wrote: "MLID doesn't do 'reports'. They only speculate out loud about stuff that seems logical to the average viewer. Tbh, after their Arc 'report', everyone should just ignore this 'source' forever."
I couldn't agree more - except that I don't even look at them as a "source" (maybe a source of pointless speculation, but nothing else).
#24
Dr. Dro
It seems obvious to me that the full die will be reserved for the RTX 4070 Ti, to be released in the mid-gen refresh in about a year's time - the same as the RTX 3070 Ti using the full GA104 over the 3070's moderately cut-down configuration.
#25
Bwaze
Nvidia did a full spreadsheet with all the models, performance and prices. They did it when cryptomining was still a thing, and they did it several times afterwards. And they're still doing them, on the go.

They know they own the market. Even if AMD comes out with better price/performance cards, they can't make enough of them to really matter. And scalpers are going to be great levellers of the game now.

And why all the clusterfuck of announcing the RTX 4080 12GB and unannouncing it afterwards? Well, marketing departments are apparently getting so responsive now that we see their reactions in almost real time. Don't worry, it will be us, the buyers, who will pay for all the rebadging and repackaging.

Remember, Turing released with zero price/performance increase compared to Pascal. All you had was a promise of RTX and DLSS - and it took a long time to be even partially fulfilled, and only the top end really had semi-useful ray-tracing capability.

And now you have a frame-doubling promise. Even though it really increases the latency, works only at high FPS (again limiting it to top-end cards), and introduces very noticeable artifacts (moving GUI elements being hit the worst).
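The latency point is easy to illustrate with simple arithmetic. A rough model, assuming interpolation-style frame generation has to buffer roughly one native frame (a simplification for illustration, not a measured DLSS 3 figure):

```python
# Why frame doubling hurts most at low base frame rates: holding one
# native frame for interpolation adds about one native frame time of
# latency, which is largest exactly when the base FPS is lowest.
for base_fps in (30, 60, 120):
    native_frame_ms = 1000 / base_fps
    print(f"{base_fps} fps base -> {base_fps * 2} fps shown, "
          f"~{native_frame_ms:.1f} ms extra latency")
```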

Enough to sell a 4080 16GB for $1,200 and an RTX 4070 for north of $700? Only we, the buyers, will decide that. But that doesn't mean Nvidia has to change the price. They can remain expensive, show falling quarterly revenues, and bitch about the backstabbing gamers for years.

And wait for the real customers. Next crypto wave. Which their not-so-secret crypto department is surely planning.