Wednesday, September 21st 2022

ICYMI, NVIDIA GeForce RTX 4080 12GB Uses 192-bit Memory Bus

Amid the fog of rapid announcements and AIC graphics card launches, this small but interesting detail may have slipped past you: the new GeForce RTX 4080 12 GB graphics card announced yesterday features a memory bus width of just 192-bit, half that of the RTX 3080 12 GB (384-bit). The card uses 21 Gbps GDDR6X memory, which at 192-bit bus width works out to just 504 GB/s of bandwidth. In comparison, the RTX 3080 12 GB uses 19 Gbps GDDR6X memory, which at 384-bit bus width produces 912 GB/s. In fact, even the original RTX 3080, with 10 GB of GDDR6X memory across a 320-bit bus, has 760 GB/s on tap.

The bigger RTX 4080 16 GB variant uses a 256-bit memory bus but faster 23 Gbps GDDR6X memory, producing 736 GB/s of memory bandwidth, which again is less than that of the original 10 GB RTX 3080. Only the RTX 4090 keeps memory bandwidth unchanged over the previous generation: 1008 GB/s, identical to that of the RTX 3090 Ti and a tad higher than the 936 GB/s of the RTX 3090 (non-Ti). Of course, memory bandwidth alone is no way to compare the RTX 40-series with its predecessors; a dozen other factors weigh into performance, and what matters is that you're getting generationally larger memory amounts with the RTX 4080 series. The RTX 4080 12 GB offers 20% more memory than the RTX 3080, and the RTX 4080 16 GB offers 33% more than the RTX 3080 12 GB. NVIDIA tends to deliver significant performance gains with each new generation, and we expect this to hold up.
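The bandwidth figures above follow directly from the per-pin data rate and the bus width. As a quick sanity check (a minimal sketch; numbers as quoted in this article):

```python
def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    # Bandwidth in GB/s = per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs(21, 192))  # RTX 4080 12 GB: 504.0 GB/s
print(bandwidth_gbs(19, 384))  # RTX 3080 12 GB: 912.0 GB/s
print(bandwidth_gbs(19, 320))  # RTX 3080 10 GB: 760.0 GB/s
print(bandwidth_gbs(23, 256))  # RTX 4080 16 GB: 736.0 GB/s
print(bandwidth_gbs(21, 384))  # RTX 4090 / RTX 3090 Ti: 1008.0 GB/s
```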

81 Comments on ICYMI, NVIDIA GeForce RTX 4080 12GB Uses 192-bit Memory Bus

#26
ratirt
wolf: You're not wrong; to get 2x my 3080's performance, it's looking like I'll need to spend double... No thanks.

But the wait for full 4080 reviews should coincide with RDNA3, so I can choose based on something less... insane.
You have a third option: don't buy anything and stay with what you have. The 3080 is very much capable; you don't need a 4080, especially at that price. I wonder, though, how much faster that 4080 12GB will be in comparison to the 3080 10/12GB.
#27
Zubasa
wolf: Price is a whooooole different discussion, one which I think is kinda insane.

More just purely: if it has fewer shaders and less memory bandwidth, yet can equal performance vs. a 384-bit GDDR6X subsystem, then that's certainly something.
The shader part was easily made up by the significant clock speed increase. As for bandwidth, I wonder what kind of cache config it has.
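As a rough sketch of why a big enough cache could close that gap (the hit rate below is purely hypothetical): if a fraction of memory traffic is served from on-die cache, only the misses touch DRAM, so effective bandwidth scales with the miss rate.

```python
def effective_bandwidth_gbs(dram_gbs: float, cache_hit_rate: float) -> float:
    # Only cache misses consume DRAM bandwidth, so the same DRAM throughput
    # services 1 / (1 - hit_rate) times as many memory requests.
    return dram_gbs / (1.0 - cache_hit_rate)

# Hypothetical: 504 GB/s of DRAM bandwidth plus a 45% cache hit rate
# would behave roughly like the 3080 12 GB's 912 GB/s.
print(effective_bandwidth_gbs(504, 0.45))  # ~916 GB/s effective
```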
#28
Minus Infinity
How on earth can they call the 12GB a 4080? It's a glorified 4070 Ti (same bus width as a 4070), no doubt, and pointless at that stupid price.
#29
Ferrum Master
Where's the news about the new DLSS 3 requiring Ada? I want rage flaming high in the forum :D
#30
wolf
Better Than Native
lexluthermiester: There is no excuse for a 192-bit VRAM bus on a TOP TIER CARD. It's stupid, plain and simple.
I think saying that in as many words doesn't account for the performance and all the other spec nuances though. Let's say it equals or bests a 3090 Ti and costs significantly less: is it stupid?
#31
Mussels
Freshwater Moderator
Bwaze: Is it possible the lower memory bandwidth will mostly show in titles with older engines, and older titles, which reviews usually don't cover?

I know older titles should pretty much run well even on older cards, but that's not always the case.

You can bog down any card with just 11-year-old Skyrim and high enough textures and mods.

And of course VR, in titles like DCS. There's no DLSS to help you; you need raw power and lots of memory to push the high resolution of an HP Reverb G2 or Valve Index in DCS and other sims that don't change their graphics engines often.
Exactly. If DLSS 3.0 reduces the memory load (and it does), these may perform extremely well in RTX testing, but be nothing exciting in traditional DX12 titles, and potentially worse in VRAM-heavy workloads, be they actual workloads like rendering or AI work, or just people running 16K textures in Skyrim at 4K.
#32
Bwaze
wolf: I think saying that in as many words doesn't account for the performance and all the other spec nuances though. Let's say it equals or bests a 3090 Ti and costs significantly less: is it stupid?
But does it "best the 3090 Ti" if it bests it just in some cases? Cases reviewers will focus on, I'm sure (Nvidia demands it), but it could still lag behind in older titles, older engines, and high-resolution VR demanding raw rasterisation?
#33
Dirt Chip
4080 16GB - Galaxy S22+
4080 12GB - Galaxy S22

This new sub-category will take time to digest, but once compared to the cellphone industry's tier structure, (almost) everyone will accept it.
Still, the confusion regarding performance will stay.
#34
ratirt
Mussels: Exactly. If DLSS 3.0 reduces the memory load (and it does), these may perform extremely well in RTX testing, but be nothing exciting in traditional DX12 titles, and potentially worse in VRAM-heavy workloads, be they actual workloads like rendering or AI work, or just people running 16K textures in Skyrim at 4K.
If that is the case, it may also show bigger boosts when using DLSS vs. native, due to the lower memory bandwidth.
#35
Ferrum Master
Dirt Chip: 4080 16GB - Galaxy S22+
4080 12GB - Galaxy S22

This new sub-category will take time to digest, but once compared to the cellphone industry's tier structure, (almost) everyone will accept it.
Still, the confusion regarding performance will stay.
You cannot even compare. Phone buyers each have their own preferences, mostly aesthetic, like size and looks; the internal specs are not a priority for most. It ain't a card that fits in one slot. The added margin for GPUs, though... sheesh. Phones at least consist of many more components justifying a cost of around $1K. A GPU still consists of four core parts: GPU, memory, VRM, and cooling. That's it. Phones have an ASIC with CPU/RAM, memory, a display and cooling, a bunch of cameras, a battery, cell radio, speakers, etc.
#36
Unregistered
My feeling is that since Turing, NVIDIA doesn't care about gaming all that much; they focus more on professional applications of their hardware.
#37
Vayra86
wolf: Price is a whooooole different discussion, one which I think is kinda insane.

More just purely: if it has fewer shaders and less memory bandwidth, yet can equal performance vs. a 384-bit GDDR6X subsystem, then that's certainly something.
But can it? Because usually that only counts for the special selection Nvidia has in store for us, software-wise. And in that sense, the DX11 era, where anything on that API ran so much better on green, is over...
We've already seen 10GB 3080s drown, and most of the time cards are limited in memory performance-wise, not core, which is the usual MO with Nvidia.

What we have now is 'select DLSS' titles and 'select RT' titles, and every gen wants to nudge us further into that 'reality', when in fact there is a massive truckload of gaming outside of it.

So far, I'm completely unimpressed by these specs, if not appalled.
#38
Dirt Chip
Ferrum Master: You cannot even compare. Phone buyers each have their own preferences, mostly aesthetic, like size and looks; the internal specs are not a priority for most. It ain't a card that fits in one slot. The added margin for GPUs, though... sheesh. Phones at least consist of many more components justifying a cost of around $1K. A GPU still consists of four core parts: GPU, memory, VRM, and cooling. That's it. Phones have an ASIC with CPU/RAM, memory, a display and cooling, a bunch of cameras, a battery, cell radio, speakers, etc.
You can compare it from a marketing point of view.
#39
Vayra86
Xex360: My feeling is that since Turing, NVIDIA doesn't care about gaming all that much; they focus more on professional applications of their hardware.
I think they know there isn't much to gain there, because realistically, rasterized performance has been on point since Pascal. Even the move to 4K isn't lasting enough in terms of performance requirements to keep GPU sales afloat.
#40
wolf
Better Than Native
Vayra86: What we have now is 'select DLSS' titles and 'select RT' titles, and every gen wants to nudge us further into that 'reality', when in fact there is a massive truckload of gaming outside of it.
Absolutely, y'all have a point. I want to see the full reviews of how these cards stack up in everything, not cherry-picked scenarios.

My experience of a 3080 with all the 2020+ bells and whistles has been really good, and older stuff it absolutely obliterates. Here's hoping the 4080 series can do the same: offer those uplifts in the most demanding scenarios and still smash 'normal' (or older) workloads at high res/fps.
#41
Naito
So they've printed the 80-series label on a 60/70-tier GPU with the high price to go with it. This is getting ridiculous :shadedshu:
#42
Bwaze
Naito: So they've printed the 80-series label on a 60/70-tier GPU with the high price to go with it. This is getting ridiculous :shadedshu:
No. They've printed the 80-series label on a 60/70-tier GPU and gave it a 90-tier price. See, better now?
#43
1d10t
Sweet, I predicted the RTX 4070 would be 8GB GDDR6 (non-X), with a 128-bit bus, running on PCIe Gen 4.0 x8, for the price of the current RTX 3080.
#44
Bwaze
wolf: My experience of a 3080 with all the 2020+ bells and whistles has been really good, and older stuff it absolutely obliterates. Here's hoping the 4080 series can do the same: offer those uplifts in the most demanding scenarios and still smash 'normal' (or older) workloads at high res/fps.
I don't have the feeling my RTX 3080 "obliterates" anything in VR. And DCS, a flight sim with an older engine, or Microsoft Flight Simulator struggle even in regular 4K, without everything maxed out.
#45
ARF
1d10t: Sweet, I predicted the RTX 4070 would be 8GB GDDR6 (non-X), with a 128-bit bus, running on PCIe Gen 4.0 x8, for the price of the current RTX 3080.
And with the performance of the old 3070? :kookoo:
#46
lexluthermiester
wolf: I think saying that in as many words doesn't account for the performance and all the other spec nuances though. Let's say it equals or bests a 3090 Ti and costs significantly less: is it stupid?
Limiting VRAM bandwidth is always stupid and always has been, regardless of the reasoning.
#47
Chomiq
ratirt: To be fair, if the 4080 is 192-bit, what will the 4070 or 4060 be? 128-bit and 64-bit?
They'll cut the PCIe lanes.
#48
wolf
Better Than Native
Bwaze: I don't have the feeling my RTX 3080 "obliterates" anything in VR. And DCS, a flight sim with an older engine, or Microsoft Flight Simulator struggle even in regular 4K, without everything maxed out.
VR isn't really what I was referring to; more like any games, say, 2018 and older (for a 2020 high-end card), it has smashed for me at 4K. MSFS 2020? Yeah, good luck there; I hope you have a 5800X3D or 12900K + DDR5 if you expect to relieve the CPU bottleneck and have the GPU actually take a front seat.
lexluthermiester: Limiting VRAM bandwidth is always stupid and always has been, regardless of the reasoning.
I do believe the 12GB card is a 4070 with an 8 stickered over the 7, so in that sense it offers xx70-tier bandwidth or higher. Me personally? Yeah, it's useful, but ultimately the performance defines what I think, not the spec itself. I do respect how you feel however; that's not an invalid take, just one with a bit more nuance for me. There's a heck of a lot more that goes into a card's resultant performance than just the memory subsystem/bandwidth.
#49
lexluthermiester
wolf: I do believe the 12GB card is a 4070 with an 8 stickered over the 7, so in that sense it offers xx70-tier bandwidth or higher.
That's still iffy to me, personally. However, I agree; this would be better if it were a 4070 with 12GB.
wolf: I do respect how you feel however; that's not an invalid take, just one with a bit more nuance for me.
Fair enough, and I do see your points. The reason I feel this way is that I've been watching card makers gimp otherwise solid GPUs with a skimpy VRAM bus for nearly three decades. It's one of those things that irritates the crap out of me.
wolf: There's a heck of a lot more that goes into a card's resultant performance than just the memory subsystem/bandwidth.
True.
#50
hat
Enthusiast
lexluthermiester: There is no excuse for a 192-bit VRAM bus on a TOP TIER CARD. It's stupid, plain and simple.
They said the same thing when bus widths were being reduced, starting with the GTX 280 at 512-bit, then down to 384-bit with the 480 and 580, and finally down to the usual 256-bit with the 680. While 192-bit does seem slim for a high-end card, I'm more concerned with there being two variants of the 4080 with no clear distinction other than the memory size, with the bus width itself only listed if one digs into the specs of the particular card being viewed. It seems like it should be a 4070 to me. Anyway, the reviews will tell the tale and show whether or not the card really does suffer from a narrower bus.