Wednesday, September 21st 2022

ICYMI, NVIDIA GeForce RTX 4080 12GB Uses 192-bit Memory Bus

Amid the fog of rapid announcements and AIC graphics card launches, this interesting little detail might have escaped you: the new GeForce RTX 4080 12 GB graphics card announced yesterday features a memory bus width of just 192-bit, half that of the RTX 3080 12 GB (384-bit). The card uses 21 Gbps GDDR6X memory, which at 192-bit works out to just 504 GB/s of bandwidth. In comparison, the RTX 3080 12 GB uses 19 Gbps GDDR6X memory across a 384-bit bus, producing 912 GB/s. In fact, even the original RTX 3080, with 10 GB of GDDR6X memory across a 320-bit bus, has 760 GB/s on tap.

The bigger RTX 4080 16 GB variant uses a 256-bit memory bus with faster 23 Gbps GDDR6X memory, producing 736 GB/s of memory bandwidth—which, again, is less than that of the original 10 GB RTX 3080. Only the RTX 4090 has unchanged memory bandwidth over the previous generation: 1008 GB/s, identical to that of the RTX 3090 Ti and a tad higher than the 936 GB/s of the RTX 3090 (non-Ti). Of course, memory bandwidth alone is no way to compare the RTX 40-series to its predecessors; a dozen other factors weigh into performance, and you are getting generationally more memory with the RTX 4080-series. The RTX 4080 12 GB offers 20% more memory than the RTX 3080, and the RTX 4080 16 GB offers 33% more than the RTX 3080 12 GB. NVIDIA tends to deliver significant performance gains with each new generation, and we expect this to hold up.
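All of the bandwidth figures above fall out of one formula: per-pin data rate times bus width, divided by eight to convert bits to bytes. Here is a quick sketch of that arithmetic (the 4090's 21 Gbps/384-bit configuration is an assumption, as the article only quotes its total bandwidth):

```python
# Memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
cards = {
    "RTX 4080 12GB": (21, 192),
    "RTX 4080 16GB": (23, 256),
    "RTX 4090":      (21, 384),  # assumed config; article only quotes 1008 GB/s
    "RTX 3080 10GB": (19, 320),
    "RTX 3080 12GB": (19, 384),
}

def bandwidth_gb_s(rate_gbps: float, bus_bits: int) -> float:
    """Total GB/s moved across the memory bus."""
    return rate_gbps * bus_bits / 8

for name, (rate, bus) in cards.items():
    print(f"{name}: {bandwidth_gb_s(rate, bus):.0f} GB/s")
```

This reproduces the 504, 736, 1008, 760 and 912 GB/s figures quoted in the article.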

81 Comments on ICYMI, NVIDIA GeForce RTX 4080 12GB Uses 192-bit Memory Bus

#51
BoboOOZ
It's just marketing. Last gen NVIDIA introduced a new tier, the 90s, in order to be able to upsell their customers. Now they are calling the 4070 Ti a 4080. Marketing aside, all that matters is price/performance, and that we'll have to see.

The price is already very stiff for a world in recession. Let's hope performance makes up for it, at least partly.
#52
lexluthermiester
hat: They said the same thing when bus width was being reduced, starting with the GTX 280 at 512-bit, then down to 384-bit with the 480 and 580
True, and that was a bit of a thing, but things worked out. And if they work out for this gen in a positive way, then OK.
hat: I'm more concerned with there being two variants of 4080s with no clear distinction
Agreed. This is not good.
hat: Anyway, the reviews will tell the tale and show whether or not the card really does suffer from a narrower bus.
Also agreed. The proof is in the pudding, as they say...
#53
ratirt
Chomiq: They'll cut the PCIe lanes.
Or both: PCIe lanes and memory bandwidth.
#54
GreiverBlade
12 GB over 192-bit is fine, although that puts that 4080 in the league of the RX 7700 XT, if that one keeps the RX 6700 XT memory/bus pattern

well, at least 192-bit does not hinder the RX 6700 XT :oops:

edit: the 4080 12 GB is technically the 4070 o_O i wonder how the 4070 will look .... because if it has better specs on memory and is close to the 4080 12 GB in CUDA cores ... the 4080 12 GB is DOA basically
#55
Vayra86
GreiverBlade: 12gb over 196bits is fine, although that put that 4080 in the league of the RX 7700 XT if that one keep the RX 6700 XT memory/bus pattern

well at least 196bits does not hinder the RX 6700 XT :oops:
Those 4 extra bits do make all the difference! :cool:
#56
GreiverBlade
Vayra86: Those 4 extra bits do make all the difference! :cool:
corrected :laugh:

2 bit eyesight today ahah..ahahah :oops:
#57
john_
What many people are saying: this was the original 4070 that NVIDIA decided to name "4080 12GB", and is now trying to sell for $200 more than what they were probably thinking.
#58
pavle
It's a GTX 970 moment all over again, but amplified and with the x80 chip (what was once the flagship single-chip card of the generation). For shame.
Waiting for a 3dfx-like dramatic deflation of nvidia and a shameful slip into oblivion. :D
#59
DarkS0ul
12 GB and a 192-bit memory bus for $899?! :kookoo: The crazy cryptocurrency-era prices have stuck around. NVIDIA wants to sell roughly an RTX 4060 12GB as an RTX 4080 :nutkick:
The RTX 20xx series was a 5-10% performance increase over the GTX 10xx. RT on the first RTX cards is useless at 4K (and 2K). RT on RTX 30xx cards without DLSS is not practically usable for gaming.
Graphics card prices didn't just rise; they became detached from the hardware market. These are prices from the cryptocurrency market, completely detached from reality.
In addition, the performance without artificial enhancers was tragic: the RTX 3090 Ti offered 30-60 FPS at 4K with RT for $2,000 :banghead: Now the new RTX cards with the new DLSS will probably not be compatible with the two previous RTX series.
First, it could be exactly the same with each series: you either buy the new series or lose support and playability, like losing support when a new Android or iOS version releases.
Second, the prices are so high that you can buy an OLED TV and a console, and the most expensive card—just the GPU—still costs more. PC component prices have gone from basic home entertainment to insane, like some premium goods.
#60
NC37
192-bit is the bare minimum I'd ever go, and have before, in a high midranger. I've just never seen one in a high-end card before... and they charge bundles for it, lol. I can see this card becoming a discount king and possibly devaluing the 4070s. Although, this does raise the question of whether nVidia is going to water down the rest of the 4000 series under the 4080. Does not bode well. I can see them doing just that and charging big bucks for the 4070s and under with 192-bit, 4060s going back to 128-bit, and so on.

AMD might just win this next gen if they can get their act together. The 7000 series was their best era in the past.
#61
thelawnet
Lol.

The 4080 12GB is £950 (USD converted to GBP at the current exchange rate, plus 20% VAT).

The equivalent previous-gen card, the 3060 Ti, cost just £350 on the same basis at launch.

This is almost three times the price.

I hope some of the people ranting about the 6500 XT will give this turd the proper slating it deserves.
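For what it's worth, the arithmetic behind the "almost three times" claim checks out, taking the two GBP figures exactly as the poster states them:

```python
# Price ratio behind the "almost three times the price" claim.
rtx_4080_12gb_gbp = 950  # USD MSRP at current exchange rate + 20% VAT, per the post
rtx_3060_ti_gbp = 350    # same basis at its launch, per the post

ratio = rtx_4080_12gb_gbp / rtx_3060_ti_gbp
print(f"{ratio:.2f}x")  # ~2.71x, i.e. almost three times
```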
#62
ModEl4
GALAX confirms AD102-300, AD103-300 and AD104-400 GPUs for GeForce RTX 4090/4080 series



It was expected, of course, based on the final specs.
So $900 for a 12 GB, 192-bit bus, sub-300 mm² chip!
The leakers are saying only Navi31 this year, and the maximum cut-down Navi31 shouldn't be less than 160 RBs / 8960 SP / 320-bit bus / 20 GB in the worst-case scenario, with at least RTX 4080 16 GB raster performance (and logically a lot more, depending on frequency and how far cut down it is).
So NVIDIA will start this year at $899; I wonder what SRP AMD will give the cut-down Navi31 if it only has Navi31 this year!
#63
mb194dc
If those prices are correct, good luck to them selling them. The 4080 12GB looks like it'll be gimped in the only use-case scenarios where it would make sense to buy one.
#64
londiste
Of course, memory bandwidth is no way to compare the RTX 40-series from its predecessors, there are a dozen other factors that weigh into performance, and what matters is you're getting generationally more memory amounts with the RTX 4080-series.
Something NVIDIA did in A100 was adding 40 MB of L2 cache, compared to 4 MB or 6 MB in previous x100 iterations. Ampere graphics GPUs still had only a couple megabytes of L2, with GA102 topping out at 6 MB. The RTX 4080 has 48 MB and the RTX 4090 has 96 MB of the stuff. This change sounds Infinitely familiar for some reason :D
#65
Haile Selassie
The reason the AD10x chips can get away with lower memory bandwidth is the much-increased L2 cache. Same reason the 6900 XT with a 256-bit bus was able to compete with the 3080 (320-bit) and 3090 (384-bit).
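A toy model of why a big cache offsets a narrow bus: every cache hit is a DRAM access avoided, so the DRAM bandwidth a workload actually needs scales with the miss rate. The hit rates and traffic figure below are illustrative assumptions, not measured numbers for any of these GPUs:

```python
# Toy model: DRAM traffic = total memory requests * (1 - cache hit rate).
# A narrower bus can serve the same workload if a larger cache absorbs
# enough of the traffic. All numbers here are made-up illustrations.
def dram_bandwidth_needed(total_traffic_gb_s: float, hit_rate: float) -> float:
    """GB/s that must actually go out to DRAM after the cache filters requests."""
    return total_traffic_gb_s * (1.0 - hit_rate)

workload = 1200.0  # hypothetical GB/s of raw memory requests

small_cache = dram_bandwidth_needed(workload, 0.25)  # e.g. a small Ampere-sized L2
large_cache = dram_bandwidth_needed(workload, 0.60)  # e.g. a much larger Ada-sized L2

print(f"small cache: {small_cache:.0f} GB/s of DRAM bandwidth needed")
print(f"large cache: {large_cache:.0f} GB/s of DRAM bandwidth needed")
```

Under these assumed hit rates, the larger cache cuts the required DRAM bandwidth roughly in half, which is the same order as the bus-width reduction being discussed.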
#66
steen
GreiverBlade: 12 GB over 192-bit is fine, although that puts that 4080 in the league of the RX 7700 XT, if that one keeps the RX 6700 XT memory/bus pattern

well, at least 192-bit does not hinder the RX 6700 XT :oops:

edit: the 4080 12 GB is technically the 4070 o_O i wonder how the 4070 will look .... because if it has better specs on memory and is close to the 4080 12 GB in CUDA cores ... the 4080 12 GB is DOA basically
7700XT 16GB N32 GCD + 4 MCD = 256bit. 7700 12GB N32 GCD + 3 MCD =192bit. 7600XT 8GB N33 monolithic die is 128bit. All rumored.
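The rumored pattern above follows directly from each MCD carrying a 64-bit GDDR6 interface with 4 GB attached. A quick sketch, treating the chip names, MCD counts, and per-MCD capacity all as the unconfirmed rumors steen lists:

```python
# Rumored RDNA3 chiplet pattern: each MCD contributes a 64-bit GDDR6
# interface with 4 GB attached. All names and counts are rumors, not specs.
def mcd_config(mcds: int) -> tuple[int, int]:
    """Return (bus width in bits, memory capacity in GB) for a given MCD count."""
    bus_bits = mcds * 64
    capacity_gb = mcds * 4
    return bus_bits, capacity_gb

print("7700 XT, 4 MCDs:", mcd_config(4))  # (256, 16)
print("7700,    3 MCDs:", mcd_config(3))  # (192, 12)
```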
#67
ARF
londiste: Something NVIDIA did in A100 was adding 40 MB of L2 cache, compared to 4 MB or 6 MB in previous x100 iterations. Ampere graphics GPUs still had only a couple megabytes of L2, with GA102 topping out at 6 MB. The RTX 4080 has 48 MB and the RTX 4090 has 96 MB of the stuff. This change sounds Infinitely familiar for some reason :D
The higher the resolution, the less that cache helps. This is the reason why the Radeons were always slower at 3840x2160 4K.
#68
ModEl4
Ada's 96 MB of L2 will logically have a lot higher throughput than a 96 MB Infinity Cache (L3), but it will have a higher die-size cost as well (I don't know if my assumption is correct, but I suspect around 83-92 mm² for 96 MB on N4; I may be completely off, we will see).
#69
PapaTaipei
The 40xx prices are absolutely insane. It is, however, what I was expecting. Also, 12 GB for a 4080, while my 1080 Ti from 6 years ago has 11 GB. WTF...

Btw, doesn't 1200 euros mean 1450-1500 with taxes?
#70
TheoneandonlyMrK
wolf: I think saying that in as many words doesn't account for the performance and all the other spec nuances though. Let's say it equals or bests a 3090 Ti and costs significantly less; is it stupid?
Agreed, but I think the 16GB flips it back to ridiculous, personally.
It's a bad play IMHO.
#71
londiste
ARF: The higher the resolution, the less that cache helps. This is the reason why the Radeons were always slower at 3840x2160 4K.
4K does not increase memory dependence all that much. I would argue that Ampere was simply slow at lower resolutions, particularly due to fill-rate concerns: too few units at a comparatively low frequency. At high resolutions, shading power became the limiting factor, and versus RDNA2, Ampere had more of that. In Ada, NVIDIA has basically doubled the ROP count.
PapaTaipei: Btw, doesn't 1200 euros mean 1450-1500 with taxes?
EU prices include taxes.
#72
defaultluser
ratirt: To be fair, if the 4080 is 192-bit, what will the 4070 or 4060 be? 128-bit and 64-bit?
They have typically capped feature cards at 128-bit; that doesn't mean several different chips don't use the same memory bus (see: Maxwell's 750 Ti and 960 shipping with the exact same width!)

I expect the 4070 to be 192-bit (but with only GDDR6 at 18 Gbps to save cost and power), the RTX 4060 Ti and maybe the 4060 at 160-bit, and the 4050 at 128-bit!

We just went through a two-year transition to 2 GB density chips, so I'm not expecting anything less than 128-bit in any of these cards! It's going to be half a decade before we see 8 GB on 64-bit-bus video cards!
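The link between bus width and capacity behind this point: each 32-bit channel hosts one GDDR6 chip, so with today's 2 GB (16 Gb) density parts, capacity is (bus width / 32) × 2 GB, and 8 GB on a 64-bit bus indeed requires future 4 GB-density chips. A sketch, ignoring clamshell (double-sided) configurations, which double capacity:

```python
# One GDDR6 chip per 32-bit channel; capacity scales with bus width and
# per-chip density. Ignores clamshell mode, which doubles capacity.
def vram_gb(bus_bits: int, chip_density_gb: int = 2) -> int:
    """Memory capacity in GB for a given bus width and chip density."""
    return (bus_bits // 32) * chip_density_gb

print(vram_gb(192))  # 12 GB, the expected 4070 config
print(vram_gb(160))  # 10 GB
print(vram_gb(128))  # 8 GB
print(vram_gb(64))   # 4 GB with today's chips
print(vram_gb(64, chip_density_gb=4))  # 8 GB, needs future 4 GB-density parts
```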
#73
RandomWan
The 4080 12GB is just a rebranded 4070 to justify wringing more money out of customers. The card branded 4070 is going to look more like a 4060 spec-wise. This way they sell what was going to be a mid-tier card as if it were a high-end card.

It's to be expected given what Jensen was saying about the stock price and profits at the shareholder meeting: market manipulation to inflate profits. NVIDIA is doing this by misbranding their products to justify a higher price than customers would normally bear for a similar-tier product.

I won't be surprised if more AIBs start dumping NVIDIA partnerships after this generation, as NVIDIA is going to do whatever it can to take more market share with the FE cards.
#74
RedelZaVedno
The 4080 16 GB looks ultra stupid. 9,728 shaders at 2.51 GHz makes it 1.4X the performance of the 3080 based on a TFLOPS calculation, coupled with a 1.4X MSRP increase.
Where's the generation-to-generation price-to-performance improvement we used to get? :banghead:
#75
sumludus
Given how much the cache size has increased vs. the 30 series, I really don't think the 192-bit memory bus matters at all. Nor do I see this as NVIDIA being greedy by charging too much or inflating the specs of a x70 card to x80 status. To anyone who plans on boycotting the generation after what they saw yesterday, I'll bet NVIDIA only has one thing to say to you: good.

NVIDIA doesn't want to sell RTX 40 series cards.

Don't get me wrong, if you still wanna buy a 40 series, they'll be more than happy to sell one to you. But given what I can only assume are still poor inventory levels (supply chains, yields, etc.), that press conference wasn't meant to sell 40 series cards. NVIDIA has two objectives with this launch.

1. Set the goalposts out of reach for RX 7000 by leaning heavily on DLSS 3.0 to promote performance uplifts when they know FSR isn't keeping up.
2. Price the cards out of reach to steer consumers toward the glut of 30 series inventory.

Accomplishing the first will make sure they have the undisputed halo product, which sells more lower-end cards by association. AMD was much closer last generation, in terms of pure rasterization, than I bet NVIDIA anticipated. It makes sense for them to leverage their proprietary portfolio to the fullest where AMD can't compare.

As for the second, once the 30 series is depleted they will have a lot of wiggle room to lower the prices of the vanilla 40 series. Just in time for a Super refresh.