Thursday, December 28th 2023

NVIDIA RTX 4080 SUPER Sticks with AD103 Silicon, 16GB of 256-bit Memory

Recent placeholder listings of unreleased MSI RTX 40-series SUPER graphics cards seem to confirm that the RTX 4070 Ti SUPER is getting 16 GB of memory, likely across a 256-bit memory interface, as NVIDIA is tapping the larger "AD103" silicon to create it; the company had already maxed out the "AD104" silicon with the current RTX 4070 Ti. What's also interesting is that the listings point to the RTX 4080 SUPER having the same 16 GB of 256-bit memory as the RTX 4080. NVIDIA carved the current RTX 4080 out of the "AD103" by enabling 76 of its 80 SMs (38 of 40 TPCs), so it will be interesting to see whether NVIDIA can achieve the performance goals of the RTX 4080 SUPER simply by giving it 512 more CUDA cores (from 9,728 to 10,240). The three other levers NVIDIA has at its disposal are GPU clocks, power limits, and memory speeds. The RTX 4080 uses 22.4 Gbps memory, which could be increased to 23 Gbps.
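For context, peak memory bandwidth follows directly from bus width and per-pin data rate. A quick back-of-envelope sketch (Python, purely illustrative, using the figures above):

```python
# Memory bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 22.4))  # RTX 4080 today: 716.8 GB/s
print(bandwidth_gb_s(256, 23.0))  # hypothetical 4080 SUPER: 736.0 GB/s, ~2.7% more
```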

The current RTX 4080 has a TGP of 320 W, compared to the 450 W of the AD102-based RTX 4090, and RTX 4080 cards tend to include an NVIDIA-designed adapter that converts three 8-pin PCIe connectors to a 12VHPWR connector with signal pins denoting 450 W continuous power capability. In comparison, RTX 4090 cards include a 600 W capable adapter with four 8-pin inputs. Even with the 450 W capable adapter, NVIDIA has plenty of room to raise the TGP of the RTX 4080 SUPER above the 320 W of the current RTX 4080 and increase GPU clocks, in addition to maxing out the "AD103" silicon. NVIDIA is expected to announce the RTX 4070 Ti SUPER and RTX 4080 SUPER on January 8, with the RTX 4080 SUPER scheduled to go on sale toward the end of January.
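The adapter ratings themselves are simple arithmetic, since each PCIe 8-pin input is rated for 150 W. A toy mapping consistent with the figures above (the sense-pin signaling is handled by the adapter, as described):

```python
# Each PCIe 8-pin connector is rated for 150 W; NVIDIA's adapters advertise
# the sum of their inputs to the card via the 12VHPWR sense pins.
PCIE_8PIN_WATTS = 150

def adapter_rating_w(num_8pin_inputs: int) -> int:
    """12VHPWR power capability advertised by an n-input adapter, in watts."""
    return num_8pin_inputs * PCIE_8PIN_WATTS

print(adapter_rating_w(3))  # 450 W -- the RTX 4080 adapter
print(adapter_rating_w(4))  # 600 W -- the RTX 4090 adapter
```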
Source: VideoCardz

60 Comments on NVIDIA RTX 4080 SUPER Sticks with AD103 Silicon, 16GB of 256-bit Memory

#26
ModEl4
With 24 Gbps memory it should be around 6-7% faster than the 4080 at 4K (with slower memory timings even less of a difference, but NVIDIA should be able to hit the specced speed of the Micron MT61K512M32KPA-24 if it uses the same ICs). The problem is that the 4070 Ti SUPER should be only about 12% behind the 4080 at 4K, and with a $799 MSRP it may force NVIDIA to lower the MSRP of the 4080 SUPER to $1,099. This time the performance difference between the 4080 SUPER and 4070 Ti SUPER will be smaller than between the 4080 and 4070 Ti, plus there is no memory advantage any more, so theoretically NVIDIA should lower the 4080 SUPER's price a little to compensate (unless the yields for a fully specced die don't permit such a move). Logically, the new market dynamics will force AMD's partners to gradually lower the price of the 7900 XTX to $879, the 7900 XT to $699, and the 7900 GRE to $599 (the 4070 SUPER should be around 14-15% faster than the 4070 at 4K, depending on GPU frequency).
#27
Dahita
Seeing that some games now require 16 GB of VRAM to play maxed out at 4K, I would have considered it with 20 GB. This upgrade makes no sense to me; I'll wait for next gen.
#28
Dawora
Dahita: Seeing that some games now require 16 GB of VRAM to play maxed out at 4K, I would have considered it with 20 GB. This upgrade makes no sense to me; I'll wait for next gen.
Pls don't be such a newbie; read the facts.
Allocated VRAM is not the same as used or required VRAM.
#29
Dahita
Dawora: Pls don't be such a newbie; read the facts.
Allocated VRAM is not the same as used or required VRAM.
Wow, what a sweet, coherent, well-constructed answer. Happy new year!
#30
efikkan
Clearly not the tone I would have used, but the statement is factual.

Reported memory usage in a game or third-party tool is just allocated memory. GPUs compress memory, and compression has improved with pretty much every new generation. And you can't directly compare e.g. 16 GB across GPU generations or different competitors.

The way to tell whether you need more memory or not is through benchmarking. There isn't much of a gap between approaching the VRAM limit with some stutter and getting something completely unplayable (or, in some cases, glitching), so a reviewer should be able to tell quite easily during a benchmark run. (And if the GPU continues scaling in 4K with overclocked memory or GPU, you know VRAM isn't the limitation.)

In reality, most GPUs run out of other resources far sooner, like bandwidth or GPU processing power, and this resource balance is fixed once the product is designed, so it will remain for as long as the product exists. There is no reason to have extra VRAM for "future-proofing". Most often we see performance tank due to bandwidth long before VRAM allocation becomes a problem, unless you use very sub-optimal texture packs or run settings that push the frame rate far below 60 FPS anyway.
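(If you'd rather watch allocation live than trust an in-game overlay, NVML reports the driver's view. A minimal sketch using the pynvml bindings, assuming an NVIDIA GPU and the package installed; note it still shows allocated memory, not what the game actually needs:)

```python
# pip install pynvml -- poll driver-reported VRAM allocation once per second
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        # This is *allocated* VRAM, not the amount the game truly requires.
        print(f"VRAM: {mem.used / 2**30:.2f} / {mem.total / 2**30:.2f} GiB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```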
#31
Dahita
efikkan: Clearly not the tone I would have used, but the statement is factual.

Reported memory usage in a game or third-party tool is just allocated memory. GPUs compress memory, and compression has improved with pretty much every new generation. And you can't directly compare e.g. 16 GB across GPU generations or different competitors.

The way to tell whether you need more memory or not is through benchmarking. There isn't much of a gap between approaching the VRAM limit with some stutter and getting something completely unplayable (or, in some cases, glitching), so a reviewer should be able to tell quite easily during a benchmark run. (And if the GPU continues scaling in 4K with overclocked memory or GPU, you know VRAM isn't the limitation.)

In reality, most GPUs run out of other resources far sooner, like bandwidth or GPU processing power, and this resource balance is fixed once the product is designed, so it will remain for as long as the product exists. There is no reason to have extra VRAM for "future-proofing". Most often we see performance tank due to bandwidth long before VRAM allocation becomes a problem, unless you use very sub-optimal texture packs or run settings that push the frame rate far below 60 FPS anyway.
Thanks man,

Appreciate the (much more interesting) answer, and you taking the time to explain. Let me ask you this: benchmarks aside, Alan Wake 2, as an example, takes 17 GB+ of VRAM on the 4090 all maxed out. What makes you believe the 4080 wouldn't benefit from having more headroom to work with?

tpucdn.com/review/alan-wake-2-performance-benchmark/images/vram.png

I think I get from what you said that the card is balanced/optimized properly, so the 16 GB of VRAM won't be a bottleneck any more than its GPU power or bandwidth. Re-reading your answer, I think I get it now. I re-read the tests of the 4060 Ti 8 GB vs. 16 GB; it helps visualize the situation.

Thanks mate!
#32
efikkan
Dahita: Thanks man,

Appreciate the (much more interesting) answer, and you taking the time to explain. Let me ask you this: benchmarks aside, Alan Wake 2, as an example, takes 17 GB+ of VRAM on the 4090 all maxed out. What makes you believe the 4080 wouldn't benefit from having more headroom to work with?

tpucdn.com/review/alan-wake-2-performance-benchmark/images/vram.png
It's a fundamental fact; if you want to push more frames, you need bandwidth and processing power to scale with it.
Also, if you actually run out of VRAM, the card will start swapping memory, and the game will behave strangely in extreme cases.

But if you see the card continue to scale in 4K with OC, then VRAM capacity is not the bottleneck.
Whether that's the case for this card, you'll have to look up a few RTX 4080 reviews to find out, but I haven't noticed any of those indicators at stock clocks in TPU's results.

Also, a very good case study for VRAM;
www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-16-gb/31.html
Here you can see the RTX 4060 Ti 8 GB and 16 GB vs. the RTX 4070 12 GB; it's clear that the RTX 4060 Ti 16 GB doesn't get an advantage over the RTX 4070 12 GB. In fact, the 4070's advantage grows in 4K!
Edit: sorry wrong link
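(To make the "continues to scale" check concrete, a toy sketch; the FPS numbers below are placeholders, not TPU results:)

```python
# If a card keeps (or grows) its lead as resolution rises, VRAM capacity is
# unlikely to be its bottleneck; a collapsing lead plus stutter suggests it is.
def relative_lead(fps_a: float, fps_b: float) -> float:
    """Card A's lead over card B, in percent."""
    return (fps_a / fps_b - 1) * 100

results = {  # hypothetical average FPS: (card A, card B)
    "1440p": (100.0, 90.0),
    "4K": (60.0, 52.0),
}
for res, (a, b) in results.items():
    print(f"{res}: card A leads by {relative_lead(a, b):.1f}%")  # 11.1%, 15.4%
```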
#33
Dahita
efikkan: It's a fundamental fact; if you want to push more frames, you need bandwidth and processing power to scale with it.
Also, if you actually run out of VRAM, the card will start swapping memory, and the game will behave strangely in extreme cases.

But if you see the card continue to scale in 4K with OC, then VRAM capacity is not the bottleneck.
Whether that's the case for this card, you'll have to look up a few RTX 4080 reviews to find out, but I haven't noticed any of those indicators at stock clocks in TPU's results.

Also, a very good case study for VRAM;
www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-16-gb/31.html
Here you can see the RTX 4060 Ti 8 GB and 16 GB vs. the RTX 4070 12 GB; it's clear that the RTX 4060 Ti 16 GB doesn't get an advantage over the RTX 4070 12 GB. In fact, the 4070's advantage grows in 4K!
Edit: sorry wrong link
Thanks a bunch, makes a lot of sense
#35
efikkan
TumbleGeorge: RX 7600 XT version with a 256-bit bus and 16 GB VRAM?
And your point being? :)

The only thing that would make such a card especially interesting in the market would be higher bandwidth, not the VRAM size itself; the latter would mostly be a gimmick for gaming.
I think it's unlikely that we'll see a 256-bit bus (~18 Gbps) on it, but it would be an interesting case study to see what overkill memory bandwidth would do to a GPU like this.
A 128-bit bus is far more likely, but it can certainly be faster than 18 Gbps. I've seen 23 Gbps, so ~28% more bandwidth is at least a possibility on a narrow bus with more expensive memory; but again, I doubt they will go for the most expensive memory.
Either way, it's the bandwidth that would make this card interesting, not the extra VRAM (if the card is actually released like this).
#36
f0ssile
Beginner Micro Device: At MSRP, yes.
At current pricing, not really. The only things to bother this GPU are:
• RX 7800 XT, yet y'know, it's already more expensive and less available, and also is an AMD GPU so it doesn't have DLSS, CUDA, power efficiency, and ray tracing performance to brag about.
• RX 6900 XT. Same crap but also even less power efficient.
• RTX 3080. Technically faster than the 4070, yet it consumes almost double the power and has 2 GB less VRAM, so it's not ideal by any stretch of the imagination.
• Aftermarket 3080 Ti and 6950 XT. Both are faster but they are also both used and power hogs.

And that's it. Compared to other price segments, 4070 gets you the best bang per buck as well. So it's not 4070 that's bad, it's the market that's suboptimal.

(but having a $500 GPU that handles any game at 1080p/DLSS1440p and will handle any game at 1080p/DLSS1440p for another five years is really not bad)
Strange that you underline the 10 vs. 12 GB VRAM difference between the 3080 and the 4070, while in the analysis of the 7800 XT (albeit accurate) you forget the detail of the 4 GB of extra VRAM and prefer to take the raster performance for granted. Is that an anomaly, or no accident at all...?

Five years with 12 GB? Sure, perhaps by scaling the textures properly and everything that goes with them (not just the resolution; normal maps, for example, are heavy).
I'd say the underlying assumption is that first of all you have to buy NVIDIA: the pros are professional-grade and the cons are invisible. But I could be wrong...
#37
Beginner Macro Device
f0ssile: you forget the detail of the 4 GB of extra VRAM
At 12+ GB range, these additional 4 GB matter in:

• Ray tracing (which is not the 7800 XT's forte)
• Very high resolutions (which makes short work of either GPU)
• Extremely detailed/poorly compressed textures (far from intended usage)
• Rendering workloads (most of them strictly benefit from CUDA technology, thus 7800 XT is behind despite having more VRAM)
• Mining (not sure if mining resuscitates before both GPUs become hot obsolete garbage)
• Other very specific tasks

When 12 GB becomes insufficient in actual gaming at high (not ultra) settings, both the 4070 and 7800 XT will be too slow to be bothered by the latter's VRAM advantage (the HD 7970 is much faster than the GTX 680 in modern games, yet a 10 to 20 FPS difference is... meaningless).

Whereas the 3080 is exactly the GPU that's almost foul regarding VRAM. 12 GB is fine; 10 GB is not great. I mean, you can still game almost anything at ridiculously high settings with an RTX 3080, but in a year or two it'll hit a hard VRAM wall. The 4070 won't.
f0ssile: you prefer to take the raster performance for granted
The difference exists, and of course the 7800 XT is faster in raster. I wouldn't have selected it otherwise.
#38
TumbleGeorge
efikkan: And your point being? :)

The only thing that would make such a card especially interesting in the market would be higher bandwidth, not the VRAM size itself; the latter would mostly be a gimmick for gaming.
I think it's unlikely that we'll see a 256-bit bus (~18 Gbps) on it, but it would be an interesting case study to see what overkill memory bandwidth would do to a GPU like this.
A 128-bit bus is far more likely, but it can certainly be faster than 18 Gbps. I've seen 23 Gbps, so ~28% more bandwidth is at least a possibility on a narrow bus with more expensive memory; but again, I doubt they will go for the most expensive memory.
Either way, it's the bandwidth that would make this card interesting, not the extra VRAM (if the card is actually released like this).
My point is that it says a 256-bit bus, so I don't need to think of any other way to increase VRAM communication speed. If you read it, more compute units are also assumed. So if you happen to be wrong about what will, in theory, come to market, then all the prerequisites for an excellent offer are there. ;)
#39
f0ssile
Beginner Micro Device: At 12+ GB range, these additional 4 GB matter in:

• Ray tracing (which is not the 7800 XT's forte)
• Very high resolutions (which makes short work of either GPU)
• Extremely detailed/poorly compressed textures (far from intended usage)
• Rendering workloads (most of them strictly benefit from CUDA technology, thus 7800 XT is behind despite having more VRAM)
• Mining (not sure if mining resuscitates before both GPUs become hot obsolete garbage)
• Other very specific tasks

When 12 GB becomes insufficient in actual gaming at high (not ultra) settings, both the 4070 and 7800 XT will be too slow to be bothered by the latter's VRAM advantage (the HD 7970 is much faster than the GTX 680 in modern games, yet a 10 to 20 FPS difference is... meaningless).

Whereas the 3080 is exactly the GPU that's almost foul regarding VRAM. 12 GB is fine; 10 GB is not great. I mean, you can still game almost anything at ridiculously high settings with an RTX 3080, but in a year or two it'll hit a hard VRAM wall. The 4070 won't.

The difference exists, and of course the 7800 XT is faster in raster. I wouldn't have selected it otherwise.
You forgot the VRAM bandwidth, but now we understand why; now there are no more doubts.

In practice the 4070 has the perfect amount of VRAM: more is not needed, less is too little.
The fact that RT uses more VRAM only becomes a disadvantage for the 7800 XT, which performs worse in RT; never a disadvantage for the 4070, which wouldn't be able to take much advantage of it anyway.

If the textures exceed 12 GB, it means they are poorly compressed; never that they are too good for the 4070.
Resolution is not the primary determinant, and you skipped the normal maps just as you minimized the textures.
"Not at ultra": I assume that means they're not needed, right? Let me guess...
Wow, and CUDA, which counts for next to nothing in gaming, you threw in there without fear, eh...

Before, I could have been wrong, before... You are so biased that you don't even realize it.
Whatever is NVIDIA's is right regardless; whatever gets downplayed is wrong.
That is the crux, and over the next five years you'll improvise, I guess; after all, it has never happened that VRAM became scarce over the years, I assume because it has happened more often with NVIDIA.

A bit of quick dialectics is enough to spot the fanboys who pretend to be objective; you don't even need strict philology...
#40
efikkan
Beginner Micro Device: Whereas the 3080 is exactly the GPU that's almost foul regarding VRAM. 12 GB is fine; 10 GB is not great. I mean, you can still game almost anything at ridiculously high settings with an RTX 3080, but in a year or two it'll hit a hard VRAM wall. The 4070 won't.
That's where your thinking is flawed.
The RTX 3080 is and will remain the more powerful card, and the balance between its core resources will not change with software. As you can see in various reviews (pick any one), the trend in RTX 3080 vs. 4070 scaling at 1080p vs. 1440p vs. 4K tells you everything you need to know: with more demanding graphics, the RTX 3080 pulls ahead of the 4070. This trend will continue as games get more demanding; in pure performance the RTX 3080 will pull ahead, until it reaches a point where it lacks proper hardware support for a new feature.

Does this mean I would buy an RTX 3080 over a 4070 today (assuming equal pricing)?
No, we are not talking about large differences here, and the 4070 will remain supported in top-tier drivers for longer, along with more recent codec support, better energy efficiency, etc.

Specs for reference:
RTX 3080: 25.07-29.77 TFLOPS, 760 GB/s (320-bit), 10 GB VRAM, 138.2-164.2 GP/s, 391.68-465.12 GT/s
RTX 4070: 22.6-29.1 TFLOPS, 504 GB/s (192-bit), 12 GB VRAM, 158.4 GP/s, 455.4 GT/s
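(Putting rough ratios on those specs; a back-of-envelope using the boost figures above:)

```python
# Ratio of RTX 3080 to RTX 4070 on the headline resources from the spec list
specs = {
    "TFLOPS (boost)": (29.77, 29.1),
    "Bandwidth (GB/s)": (760.0, 504.0),
    "VRAM (GB)": (10.0, 12.0),
}
for name, (rtx3080, rtx4070) in specs.items():
    print(f"{name}: 3080/4070 = {rtx3080 / rtx4070:.2f}x")
# TFLOPS ~1.02x, bandwidth ~1.51x (the 3080's big edge as loads grow),
# VRAM ~0.83x (its only deficit).
```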
TumbleGeorge: My point is that it says a 256-bit bus, so I don't need to think of any other way to increase VRAM communication speed. If you read it, more compute units are also assumed. So if you happen to be wrong about what will, in theory, come to market, then all the prerequisites for an excellent offer are there. ;)
Your source actually speculates about whether it uses the Navi 33 or the Navi 32 GPU; the latter would open up the possibility of a 256-bit bus. This would be unusual, but not impossible, especially if they happen to have a larger (unexpected) surplus of lower-bin Navi 32 GPUs.

These rumors are based on EEC filings for potential future products.
If you look closely at your own source, it refers to an earlier leak pointing to three contradictory VRAM capacities for the 7600 XT: 10, 12 and 16 GB. Surely, multiple versions are possible, but the far more likely scenario is that one or more of these is a typo, or a filing for a hypothetical product that may not materialize; both have happened before. Remember, all it takes is one or two mistyped letters/digits to give this a completely different meaning, e.g. 7800XT 16GB.

Beyond the naming in the filing, the source only speculates about everything else.
#41
Beginner Macro Device
efikkan: The RTX 3080 is and will remain the more powerful card, and the balance between its core resources will not change with software. As you can see in various reviews (pick any one), the trend in RTX 3080 vs. 4070 scaling at 1080p vs. 1440p vs. 4K tells you everything you need to know: with more demanding graphics, the RTX 3080 pulls ahead of the 4070. This trend will continue as games get more demanding; in pure performance the RTX 3080 will pull ahead, until it reaches a point where it lacks proper hardware support for a new feature.
This is exactly what I said earlier:
Beginner Micro Device: • RTX 3080. Technically faster than the 4070
But 10 GB is 10 GB. Not quite enough to offer longevity. 4070 is balanced a bit better.
f0ssile: You forgot the VRAM bandwidth
I did not. It's incorrect to compare the raw VRAM bandwidth of GPUs of different architectures. Both GPUs handle this resource differently, and I can't say which one does the job better. When I don't know what's right, I'd rather stay silent than spread fallacies.
f0ssile: you skipped the normal maps
Because I don't know how they work, nor how Ada vs. RDNA 3 fares in this department.
f0ssile: You are so biased that you don't even realize it.
Yes, I'm biased towards balance and value. The RX 7800 XT lacks the former yet offers the latter; the 4070 offers both. You're talking like I'm a huge fan of naming what would otherwise be a 4060 Ti at best a "4070", charging 600 USD before taxes, and granting it "godlike" 12 GB. No, I'm not; this GPU is an awful rip-off.

And the 7800 XT is a great GPU in isolation. Fast, furious, a great overclocker, loads of VRAM for the money, but... it's not enough, because the 4070 is just a smarter product. Just like the 7700 XT is a better item than the 4060 Ti. Just like buying a 6800 non-XT is a no-brainer at <400 USD if obtainable brand new.
f0ssile: over the next five years you'll improvise, I guess
Why would I need to? One GPU dies of insufficient VRAM, another dies of insufficient RT performance, and no one can assure you the latter won't be more severe. Just check it out: www.techpowerup.com/review/avatar-fop-performance-benchmark/5.html

I'm not saying things will go exactly the way of increased RT loads in newer games; what I'm saying is that it's more than possible. It's a gamble, but at these odds I'd place my bet on the 4070.
#42
efikkan
Beginner Micro Device: This is exactly what I said earlier:

But 10 GB is 10 GB. Not quite enough to offer longevity. 4070 is balanced a bit better.
Notice what I underlined; your post was mostly correct up to that point.
Your assumption is that the RTX 3080 will run out of its 10 GB before hitting other limitations in future games, but this is not an accurate assessment. Just study the scaling in gaming: heavier loads mean processing power and bandwidth become limitations first, and the RTX 3080 is better in those areas. Even in RT benchmarks, which are balanced differently, the 3080 either pulls ahead of or closes in on the 4070 in seemingly every game, and overall it pulls ahead in 4K vs. 1440p. So the evidence points to the 3080 being clearly better balanced for heavier loads, with VRAM size being the least relevant factor here. Claiming the 4070 is better balanced based on subjective "feelings" about numbers would be dogma, not following the empirical evidence. ;)
#43
theouto
Beginner Micro Device: (but having a $500 GPU that handles any game at 1080p/DLSS1440p and will handle any game at 1080p/DLSS1440p for another five years is really not bad)
A 500 (USD, EUR, GBP, etc.) card should do 1440p native, minimum. I think the 4060 Ti 16 GB has shown that we shouldn't pay 500 coins for 1080p, when 500 coins used to be 1440p territory; now, because of upscaling, it's """1440p""" territory (really it's 960p-and-below territory; upscalers shouldn't be used as "free optimization"). But that's more an issue with the current GPU and gaming market than anything else; games don't even look much better than they used to, yet demand so much more, lol. (I could also talk about TAA, and how TAA vs. upscalers makes upscalers look way better than they actually are, but that requires a thread that would probably end in a lock, so it's better not to, lmao.)
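(On the "really 960p" point: using the commonly published DLSS per-axis scale factors, which are assumptions here rather than anything from this thread, the internal render height at 1440p output works out as follows:)

```python
# Internal render height for DLSS modes at 1440p output,
# using the commonly published per-axis scale factors.
modes = {
    "Quality": 2 / 3,            # 1440p -> 960p
    "Balanced": 0.58,            # 1440p -> ~835p
    "Performance": 0.5,          # 1440p -> 720p
    "Ultra Performance": 1 / 3,  # 1440p -> 480p
}
output_height = 1440
for mode, scale in modes.items():
    print(f"{mode}: renders at {round(output_height * scale)}p")
```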
#44
Beginner Macro Device
theouto: A 500 (USD, EUR, GBP, etc.) card should do 1440p native, minimum
When it's 0 years old? Sure.
When it's 5 years old? 1080p with a couple of settings turned down is fine.
#45
theouto
Beginner Micro Device: When it's 0 years old? Sure.
When it's 5 years old? 1080p with a couple of settings turned down is fine.
At 5 years old, yes, I agree there; a GPU lasting 5 years at that resolution is quite nice.
#46
gmn 17
So there'll be three 16 GB cards; why not make a 20 GB card?
#49
Dahita
gmn 17: A 20 GB card would mean a 320-bit bus, so plenty of extra bandwidth there over 256-bit
You're not taking the GPU's limitations into account. Of course, if you change all the specs, sure, it would work, I guess. But for these specific references, it won't. Look at the 4060 Ti: it doesn't benefit from the extra VRAM. As efikkan pointed out above, take the 4070 with only 12 GB as a comparison; the 4070's advantage over the 4060 Ti 16 GB increases as you raise the resolution.
#50
gmn 17
Dahita: You're not taking the GPU's limitations into account. Of course, if you change all the specs, sure, it would work, I guess. But for these specific references, it won't. Look at the 4060 Ti: it doesn't benefit from the extra VRAM. As efikkan pointed out above, take the 4070 with only 12 GB as a comparison; the 4070's advantage over the 4060 Ti 16 GB increases as you raise the resolution.
Also, an AD102 die for the 20 GB card is a possibility.