
NVIDIA RTX 4080 SUPER Sticks with AD103 Silicon, 16GB of 256-bit Memory

Joined
Feb 3, 2012
Messages
141 (0.03/day)
Location
Medina, Ohio
System Name Daily driver
Processor i9 13900k
Motherboard Z690 Aorus Master
Cooling Custom loop
Memory 2x16 GB GSkill DDR5 @ 6000
Video Card(s) RTX4090 FE
Storage 2x 2TB 990 Pro SSD 1x 2TB 970 evo SSD, 1x 4TB HDD
Display(s) LG 32" 2560x1440
Case Fractal Design Meshify 2 XL
Audio Device(s) onboard
Power Supply beQuiet Dark Power 12 1000W
Mouse Razer Death adder
Keyboard Razer blackwidow v3
VR HMD n/a
Software Windows 11 pro
Benchmark Scores Heaven 4.0 @ 2560x1440 270.5 FPS
I really wanted a further cut-down, AD102-based "4080 Ti" with 20 GB and a 320-bit bus. I really think the 256-bit bus and lack of memory bandwidth kill the 4080. Compared to my 3080 Ti and my 4090, it displays weird performance issues that I really think are due to the lack of memory bandwidth.
 
Joined
Oct 27, 2020
Messages
797 (0.53/day)
With 24 Gbps memory it should be around 6-7% faster than the 4080 at 4K (with slower memory timings even less of a difference, but NV should be able to hit the specced speed of the Micron MT61K512M32KPA-24 if it uses the same ICs). The problem is that the 4070 Ti SUPER should be only about -12% at 4K vs. the 4080, and with a $799 MSRP it may force NV to lower the 4080 SUPER's MSRP to $1,099. This time the performance gap between the 4080S and the 4070 TiS will be smaller than the one between the 4080 and 4070 Ti, and there is no memory advantage any more, so in theory NV should lower the 4080S price a little to compensate (unless the yields for a fully specced die don't permit such a move). Logically, the new market dynamics will force AMD's partners to gradually lower the price of the 7900 XTX to $879, the 7900 XT to $699 and the 7900 GRE to $599. (The 4070S should be around 14-15% faster at 4K vs. the 4070, depending on GPU frequency.)
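For reference, the ~7% figure above follows straight from the arithmetic: peak GDDR bandwidth is the bus width in bytes times the per-pin data rate. A quick sketch (22.4 Gbps is the published stock 4080 rate; the function name is mine):

```python
def gddr_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_4080 = gddr_bandwidth_gb_s(256, 22.4)   # 716.8 GB/s (stock 4080)
rtx_4080s = gddr_bandwidth_gb_s(256, 24.0)  # 768.0 GB/s (rumored 24 Gbps ICs)
print(f"bandwidth uplift: {rtx_4080s / rtx_4080 - 1:.1%}")  # bandwidth uplift: 7.1%
```

With identical GPU clocks and shader counts, the real-world 4K gain would of course be smaller than the raw 7.1% bandwidth uplift.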
 
Joined
Jun 16, 2023
Messages
128 (0.23/day)
Seeing that some games now require 16 GB of VRAM to play maxed out at 4K, I would have considered it with 20 GB. This upgrade makes no sense to me. I'll wait for next gen.
 
Joined
Sep 19, 2014
Messages
73 (0.02/day)
Seeing that some games now require 16 GB of VRAM to play maxed out at 4K, I would have considered it with 20 GB. This upgrade makes no sense to me. I'll wait for next gen.
Please don't be such a newbie; read the facts.
Allocated VRAM is not the same as used or required VRAM.
 
Joined
Jun 10, 2014
Messages
2,995 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Clearly not the tone I would have used, but the statement is factual.

Reported memory usage in a game or a third-party tool is just allocated memory. GPUs compress memory, and compression has improved with pretty much every new generation, so you can't directly compare e.g. 16 GB across GPU generations or different vendors.

The way to tell whether you need more memory is through benchmarking. It doesn't take long from approaching the VRAM limit and getting stutter until things become completely unplayable (or, in some cases, start glitching), so a reviewer should be able to tell quite easily during a benchmark run. (And if the GPU continues scaling at 4K with overclocked memory or GPU, you know VRAM isn't the limitation.)

In reality, most GPUs will run out of other resources far sooner, like bandwidth or GPU processing power, and this resource balance will not change after the product is designed; it will remain for as long as the product exists. So there is no reason to have extra VRAM for "future-proofing". Most often we see performance tank due to bandwidth long before VRAM allocation, unless you use very sub-optimal texture packs or run settings that push the frame rate far below 60 FPS anyway.
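One way to put the "benchmark for stutter" idea into practice is to compare a high percentile of the frame times against the median: VRAM swapping shows up as rare frames that take many times longer than a typical one. A toy sketch (the function, the sample data, and any cutoff you'd pick are my own assumptions, not a standard metric):

```python
def stutter_ratio(frame_times_ms: list[float]) -> float:
    """99th-percentile frame time divided by the median.
    Near 1.0 means smooth pacing; a large ratio suggests swapping/stutter."""
    xs = sorted(frame_times_ms)
    median = xs[len(xs) // 2]
    p99 = xs[min(len(xs) - 1, int(len(xs) * 0.99))]
    return p99 / median

smooth = [16.7] * 99 + [18.0]         # steady ~60 FPS run
stuttery = [16.7] * 95 + [120.0] * 5  # occasional 120 ms spikes
print(stutter_ratio(smooth))    # ~1.08, fine
print(stutter_ratio(stuttery))  # ~7.2, likely hitting a limit
```

Both runs have nearly the same average FPS, which is exactly why averages hide this and percentile-based frame-time analysis catches it.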
 
Joined
Jun 16, 2023
Messages
128 (0.23/day)
Thanks man,

Appreciate the (much more interesting) answer, and you taking the time to explain. Let me ask you this; benchmark aside, Alan Wake 2 as an example takes 17 GB+ of VRAM on the 4090 all maxed out. What makes you believe the 4080 wouldn't benefit from having more headroom to work with?


I think I'm getting what you said: the card is balanced/optimized properly, so there won't be specific bottlenecking from the 16 GB of VRAM any more than from its GPU power or bandwidth. Re-reading your answer, I think I get it now. I also re-read the 4060 Ti 8 GB vs. 16 GB tests; they help visualize the situation.

Thanks mate!
 
Joined
Jun 10, 2014
Messages
2,995 (0.78/day)
Let me ask you this; benchmark aside, Alan Wake 2 as an example takes 17 GB+ of VRAM on the 4090 all maxed out. What makes you believe the 4080 wouldn't benefit from having more headroom to work with?

It's a fundamental fact; if you want to push more frames, you need bandwidth and processing power to scale with it.
Also, if you actually run out of VRAM, the card will start swapping memory, and the game will behave strangely in extreme cases.

But if the card continues to scale at 4K with an OC, then VRAM capacity is not the bottleneck.
Whether that holds for this card, you'll have to look up a few RTX 4080 reviews to find out, but I haven't noticed any of those indicators at stock clocks in TPU's results.

Also, a very good case study for VRAM:
Here you can see the RTX 4060 Ti 8 GB and 16 GB vs. the RTX 4070 12 GB. It's clear that the RTX 4060 Ti 16 GB gains no advantage over the RTX 4070 12 GB; in fact, the 4070's advantage grows at 4K!
Edit: sorry wrong link
 
Joined
Jun 16, 2023
Messages
128 (0.23/day)
Thanks a bunch, makes a lot of sense
 
Joined
Jun 10, 2014
Messages
2,995 (0.78/day)
And your point being? :)

The only thing that would make such a card especially interesting in the market is higher bandwidth, not the VRAM size itself; the extra capacity would mostly be a gimmick for gaming.
I think it's unlikely that we'll see a 256-bit bus (~18 Gbps) on it, but it would be an interesting case study to see what overkill memory bandwidth would do to a GPU like this.
A 128-bit bus is far more likely, but it can certainly run faster than 18 Gbps. I've seen 23 Gbps, so ~28% more bandwidth is at least possible on a narrow bus with more expensive memory; then again, I doubt they will go for the most expensive memory.
Either way, it's the bandwidth that would make this card interesting, not the extra VRAM (if the card is actually released like this).
 

f0ssile

New Member
Joined
Jan 5, 2023
Messages
24 (0.03/day)
Location
Italy
At MSRP, yes.
At current pricing, not really. The only things to bother this GPU are:
• RX 7800 XT, yet y'know, it's already more expensive and less available, and also is an AMD GPU so it doesn't have DLSS, CUDA, power efficiency, and ray tracing performance to brag about.
• RX 6900 XT. Same crap but also even less power efficient.
• RTX 3080. Technically faster than 4070, yet it consumes almost double the power and has two less GB of VRAM so not ideal by any stretch of imagination.
• Aftermarket 3080 Ti and 6950 XT. Both are faster but they are also both used and power hogs.

And that's it. Compared to other price segments, 4070 gets you the best bang per buck as well. So it's not 4070 that's bad, it's the market that's suboptimal.

(but having a $500 GPU that handles any game at 1080p/DLSS1440p and will handle any game at 1080p/DLSS1440p for another five years is really not bad)
Strange that you highlight the VRAM difference from 10 to 12 GB between the 3080 and 4070, while in the analysis of the 7800 XT (albeit accurate) you forget the detail of the 4 GB of extra VRAM, and prefer to take the raster performance for granted. Is that an anomaly, or no coincidence at all...?

5 years with 12 GB? Sure, perhaps by scaling back the textures and everything that goes with them (not just the resolution; normal maps, for example, are heavy).
I would say the assumption is that first of all you have to buy Nvidia: the pros are professional-grade and the cons are invisible. But I could be wrong...
 
Joined
Feb 24, 2023
Messages
3,126 (4.69/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC / FULLRETARD
Processor i5-12400F / 10600KF / C2D E6750
Motherboard Gigabyte B760M DS3H / Z490 Vision D / P5GC-MX/1333
Cooling Laminar RM1 / Gammaxx 400 / 775 Box cooler
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333 / 3 GB DDR2-700
Video Card(s) RX 6700 XT / R9 380 2 GB / 9600 GT
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 / 500 GB HDD
Display(s) Compit HA2704 / MSi G2712 / non-existent
Case Matrexx 55 / Junkyard special / non-existent
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / Corsair CX650M / non-existent
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 11 / 10 / 8
you forget the detail of the 4GB of extra VRAM
In the 12+ GB range, these additional 4 GB matter in:

• Ray tracing (which is not the 7800 XT's forte)
• Very high resolutions (which makes short work of either GPU)
• Extremely detailed/poorly compressed textures (far from intended usage)
• Rendering workloads (most of them strictly benefit from CUDA technology, thus 7800 XT is behind despite having more VRAM)
• Mining (not sure mining comes back to life before both GPUs become hot obsolete garbage)
• Other very specific tasks

When 12 GB becomes insufficient in actual gaming at high (not ultra) settings, both the 4070 and 7800 XT will be too slow for the latter's VRAM advantage to matter (the HD 7970 is much faster than the GTX 680 in modern games, yet a 10-20 FPS difference is... meaningless).

Whereas the 3080 is exactly the GPU that's borderline on VRAM. 12 GB is fine; 10 GB is not great. I mean, you can still play almost anything at ridiculously high settings on an RTX 3080, but in a year or two it'll be starved for VRAM. The 4070 won't.
you prefer to take the raster performance for granted
Difference exists and of course 7800 XT is faster in raster. I wouldn't select it otherwise.
 
Joined
Sep 1, 2020
Messages
2,394 (1.52/day)
Location
Bulgaria
My point is that it says 256-bit bus; I don't need to think of any other way to increase VRAM communication speed. If you read it, more compute units are also assumed. So if you happen to be wrong about what will in theory come to market, then all the prerequisites for an excellent offer are there. ;)
 

f0ssile

New Member
Joined
Jan 5, 2023
Messages
24 (0.03/day)
Location
Italy
You forgot the VRAM bandwidth, but now we understand why; no more doubts there.

In practice, the 4070 has the perfect amount of VRAM: more is not needed, less is too little.
The fact that RT uses more VRAM only becomes a disadvantage for the 7800 XT, which performs worse in RT; never, apparently, a disadvantage for the 4070, which won't be able to take much advantage of it anyway.

If textures exceed 12 GB, it means they are poorly compressed; never that they are too good for the 4070.
Resolution is not the primary determinant, and you skipped the normal maps just as you minimized the textures.
"Not at ultra", I assume, means they're not needed, right? Let me guess...
Wow, and CUDA, which counts for nothing in gaming, you drop right in there without fear, eh...

Before, I could have been wrong. Before... You are so biased that you don't even realize it.
Whatever is on Nvidia's side is right regardless; whatever gets downplayed is wrong.
That is the epicenter, and in the next 5 years you will improvise, I guess; after all, it has never happened that VRAM became scarce over the years, I assume because it has happened more often with Nvidia.

A bit of quick dialectic is enough to spot the fanboys who pretend to be objective; strict philology isn't even needed...
 
Joined
Jun 10, 2014
Messages
2,995 (0.78/day)
Whereas 3080 is exactly the GPU that's almost foul regarding VRAM. 12 GB is fine, 10 GB is not great. I mean, you can still game almost anything at ridiculously high settings with an RTX 3080 but in a year or two it'll run outta VRAM juice hardly. 4070 won't.
That's where your thinking is flawed.
The RTX 3080 is and will remain the more powerful card, and the balance between its core resources will not change with software. As you can see in various reviews (pick any), the scaling trend of the RTX 3080 vs. the 4070 at 1080p vs. 1440p vs. 4K tells you everything you need to know: with more demanding graphics, the RTX 3080 pulls ahead of the 4070. This trend will continue as games get more demanding; in pure performance the RTX 3080 will keep pulling ahead, until it reaches a point where it lacks hardware support for a new feature.

Does this mean I would buy an RTX 3080 over a 4070 today (assuming equal pricing)?
No; we are not talking about large differences here, and the 4070 will remain supported in top-tier drivers for longer, along with more recent codec support, better energy efficiency, etc.

Specs for ref:
RTX 3080 25.07-29.77 TFLOPS, 760 GB/s (320-bit), 10 GB VRAM, 138.2-164.2 GP/s, 391.68-465.12 GT/s
RTX 4070 22.6-29.1 TFLOPS, 504 GB/s (192-bit), 12 GB VRAM, 158.4 GP/s, 455.4 GT/s
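Put as ratios, the specs above make the trade-off concrete: the 3080 brings roughly half again as much bandwidth, while the 4070 brings 20% more VRAM. A trivial check (numbers copied from the spec lines above):

```python
specs = {
    "RTX 3080": {"bandwidth_gb_s": 760, "vram_gb": 10},
    "RTX 4070": {"bandwidth_gb_s": 504, "vram_gb": 12},
}
bw = specs["RTX 3080"]["bandwidth_gb_s"] / specs["RTX 4070"]["bandwidth_gb_s"]
vram = specs["RTX 4070"]["vram_gb"] / specs["RTX 3080"]["vram_gb"]
print(f"3080 bandwidth advantage: {bw - 1:.0%}")  # 3080 bandwidth advantage: 51%
print(f"4070 VRAM advantage: {vram - 1:.0%}")     # 4070 VRAM advantage: 20%
```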

My point is that it says 256-bit bus; I don't need to think of any other way to increase VRAM communication speed. If you read it, more compute units are also assumed. So if you happen to be wrong about what will in theory come to market, then all the prerequisites for an excellent offer are there. ;)
Your source actually speculates about whether it uses the Navi 33 or the Navi 32 GPU; the latter would open the possibility of a 256-bit bus. That would be unusual, but not impossible, especially if they happen to have a larger (unexpected) surplus of lower-bin Navi 32 GPUs.

These rumors are based on EEC filings for potential future products.
If you look closely at your own source, it refers to an earlier leak pointing to three contradictory VRAM capacities for the 7600 XT: 10, 12 and 16 GB. Sure, multiple versions are possible, but the far more likely scenario is that one or more of these is a typo, or a filing for a hypothetical product that may never materialize; both have happened before. Remember, all it takes is one or two mistyped letters/digits to give this a completely different meaning, e.g. 7800XT 16GB.

Beyond the naming in the filing, the source only speculates about everything else.
 
Joined
Feb 24, 2023
Messages
3,126 (4.69/day)
RTX 3080 is and will remain a more powerful card, and the balance between the core features will not change with software, and as you can see in various reviews, pick anyone, the trend in scaling of RTX 3080 vs. 4070 on 1080p vs. 1440p vs. 4K tells you everything you need to know; with more demanding graphics RTX 3080 pulls ahead of 4070. This trend will continue as games get more demanding; in pure performance RTX 3080 will pull ahead, until it gets to a point where it lacks proper hardware support for a new feature.
This is exactly what I said earlier:
• RTX 3080. Technically faster than 4070
But 10 GB is 10 GB. Not quite enough to offer longevity. 4070 is balanced a bit better.
You forgot the VRAM bandwidth
I did not. It's incorrect to compare the raw VRAM bandwidth of GPUs of different architectures; the two handle this resource differently, and I can't say which does the job better. When I don't know what is right, I'd rather stay silent than spread a fallacy.
you skipped the normal maps
Because I don't know how they work, or how Ada vs. RDNA 3 fares in this department.
You are so biased that you don't even realize it.
Yes, I'm biased towards balance and value. The RX 7800 XT lacks the former yet offers the latter; the 4070 offers both. You're talking as if I were a huge fan of naming what would otherwise be a 4060 Ti at best a "4070", at 600 USD before taxes, granting a "godlike" 12 GB. No, I'm not; this GPU is an awful rip-off.

And 7800 XT is a very great GPU in isolation. Fast, furious, great overclocker, loads of VRAM for its $, but... it's not enough because 4070 is just a smarter product. Just like a 7700 XT is a better item than a 4060 Ti. Just like buying a 6800 non-XT is a no-brainer at <400 USD if obtainable brand new.
in the next 5 years you will improvise I guess
Why do I need to? One GPU dies of insufficient VRAM, another dies of insufficient RT performance and no one can assure you the latter won't be more severe. Just check it out: https://www.techpowerup.com/review/avatar-fop-performance-benchmark/5.html

I'm not saying things will go exactly the way of increased RT loads in newer games; what I'm saying is that it's more than possible. It's a gamble, but at these odds I'd place my bet on the 4070.
 
Joined
Jun 10, 2014
Messages
2,995 (0.78/day)
This is exactly what I said earlier:

But 10 GB is 10 GB. Not quite enough to offer longevity. 4070 is balanced a bit better.
Notice what I underlined; your post was mostly correct up to that point.
Your assumption is that the RTX 3080 will run out of its 10 GB before hitting other limitations in future games, but that is not an accurate assessment. Just study the scaling in gaming: heavier loads mean processing power and bandwidth become the limitations first, and the RTX 3080 is better in those areas. Even in RT benchmarks, which are balanced differently, the 3080 either pulls ahead of or closes in on the 4070 in seemingly every game, and overall it pulls ahead at 4K vs. 1440p. So the evidence points to the 3080 being clearly better balanced for heavier loads, with VRAM size the least relevant factor here. Claiming the 4070 is better balanced based on subjective "feelings" about the numbers would be dogma, not following the empirical evidence. ;)
 
Joined
Jun 29, 2023
Messages
571 (1.05/day)
Location
Spain
System Name Gungnir
Processor Ryzen 5 7600X
Motherboard ASUS TUF B650M-PLUS WIFI
Cooling Thermalright Peerless Assasin 120 SE Black
Memory 2x16GB DDR5 CL36 5600MHz
Video Card(s) XFX RX 6800XT Merc 319
Storage 1TB WD SN770 | 2TB WD Blue SATA III SSD
Display(s) 1440p 165Hz VA
Case Lian Li Lancool 215
Audio Device(s) Beyerdynamic DT 770 PRO 80Ohm
Power Supply EVGA SuperNOVA 750W 80 Plus Gold
Mouse Logitech G Pro Wireless
Keyboard Keychron V6
VR HMD The bane of my existence (Oculus Quest 2)
(but having a $500 GPU that handles any game at 1080p/DLSS1440p and will handle any game at 1080p/DLSS1440p for another five years is really not bad)
A 500 (USD, EUR, GBP, etc.) GPU should do 1440p native at minimum. I think the 4060 Ti 16 GB has shown that we shouldn't pay 500 coins for 1080p, when 500 coins used to be 1440p territory; now, because of upscaling, it is """1440p""" territory (really it is 960p-and-below territory; upscalers shouldn't be used as "free optimization"). But that's more an issue with the current GPU and gaming market than anything else; games don't even look much better than they used to, yet demand so much more, lol. (I could also talk about TAA, and how TAA vs. upscalers makes upscalers look way better than they actually are, but that requires a thread that would probably end in a lock, so it's probably better not to, lmao)
 
Joined
Feb 24, 2023
Messages
3,126 (4.69/day)
A 500 (usd, euro, GBP, etc.) should do 1440p at native minimum
When it's 0 years old? Sure.
When it's 5 years old? 1080p with a couple of settings turned down is fine.
 
Joined
Jun 29, 2023
Messages
571 (1.05/day)
When it's 0 years old? Sure.
When it's 5 years old? 1080p with a couple settings downed is fine.
At 5 years old, yes, I agree; a GPU lasting 5 years at that resolution is quite nice.
 
Joined
Nov 25, 2011
Messages
174 (0.04/day)
Location
Australia
So there'll be three 16 GB cards; why not make a 20 GB card?
 
Joined
Jun 16, 2023
Messages
128 (0.23/day)
So therell be 3 16gb cards why not make a 20gb card?
It's been discussed above; these cards wouldn't gain much from 4 additional GB, as the GPU/bandwidth would give out first (check the 4060 Ti 16 GB vs. 8 GB review).
 
Joined
Nov 25, 2011
Messages
174 (0.04/day)
Location
Australia
It's been discussed above; these cards wouldn't gain much from 4 additional GB, as the GPU/bandwidth would give out first (check the 4060 Ti 16 GB vs. 8 GB review).
A 20 GB card would use a 320-bit bus, so there's plenty of extra bandwidth over 256-bit.
 
Joined
Jun 16, 2023
Messages
128 (0.23/day)
A 20 GB card would use a 320-bit bus, so there's plenty of extra bandwidth over 256-bit.
You're not taking the GPU's limitations into account. Of course, if you change all the specs, it would work, I guess. But for these specific parts it won't. Look at the 4060 Ti: it doesn't benefit from the extra VRAM. As Effikan pointed out above, take the 4070 with only 12 GB as a comparison; its lead over the 4060 Ti 16 GB increases as you raise the resolution.
 