
NVIDIA Tunes GeForce RTX 5080 GDDR7 Memory to 32 Gbps, RTX 5070 Launches at CES

Joined
Nov 11, 2016
Messages
3,411 (1.16/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.Skill 6400MT/s CL32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razer DeathAdder V3
Keyboard Razer Huntsman V3 Pro TKL
Software win11
It's all DOA except the 5090, which will sell like hot cakes. All the rest are refreshes with 10-20% improvement. The 5080, with only 5% more CUDA cores than the 4080 Super, is slower than the 4090 for sure.

The 4080 has 10% fewer CUDA cores than the 3090 Ti, yet beats the 3090 Ti by 20% at 4K:
[Chart: relative performance, 3840×2160]
 
Joined
Dec 6, 2022
Messages
382 (0.53/day)
Location
NYC
System Name GameStation
Processor AMD R5 5600X
Motherboard Gigabyte B550
Cooling Arctic Freezer II 120
Memory 16 GB
Video Card(s) Sapphire Pulse 7900 XTX
Storage 2 TB SSD
Case Cooler Master Elite 120
Dear Leader Jensen needs more leather jackets… :D

 
Joined
Aug 12, 2019
Messages
2,180 (1.13/day)
Location
LV-426
System Name Custom
Processor i9 9900k
Motherboard Gigabyte Z390 Aorus Master
Cooling Corsair H150i
Memory 4x8GB 3200MHz Corsair
Video Card(s) Galax RTX 3090 EX Gamer White OC
Storage 500GB Samsung 970 EVO Plus
Display(s) MSi MAG341CQ
Case Lian Li Pc-011 Dynamic
Audio Device(s) Arctis Pro Wireless
Power Supply 850w Seasonic Focus Platinum
Mouse Logitech G403
Keyboard Logitech G110
Anybody get the feeling the 5080 will be slower than the 4090 in terms of raw performance, but will have better AI cores to generate frames? And pretty much everyone has already said the prices are gonna suck.
 
Joined
Sep 17, 2014
Messages
22,438 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
It's not even AI. It's BS that NV and lots of other companies are milking and making billions from, but it's not AI in the slightest, or anywhere close to it. It's a Ponzi scheme, another dotcom-bubble bollocks, and it will bust. Taking a lot of data and having the means to look at it and interpret it so that it looks like intelligence is not artificial intelligence; it's computing. It's faster computing than how we've been doing it up until now, but there's no intelligence involved. You can program it, just the same as any other type of computer model, to come up with any result you want, and that's not artificial intelligence; it's Nvidia and others realising they can do tasks on GPUs/LLMs/"AI" thousands of times faster than on a traditional CPU and telling you it's AI. We are so far away from true AI, and I highly doubt it will even come to pass in our generation. The industry gets hyped up over some new technological BS every few years that is going to change everything and doesn't; it just gets a select few very, very rich and leaves the rest in the dirt. Rinse and repeat.
It's really distributed (computing) acceleration in the cloud, not unlike what we've been doing with Folding@Home for years. I think the progress here is the amount and tweakability of the models they use; it's just that the consumer-facing models are pretty weak and a larger model is very costly to run. For business, though, there's a lot to be gained, potentially. But yeah, its problems are clear. The rapidly expanding data and power hunger of this tech isn't quite suitable for our day and age. It's quite similar to how we're gaming now: on grossly inefficient engines doing more real-time processing to gain a tiny visual advantage over much more efficient, older approaches to rendering an image. There's a bit more detail, but the price of it is ridiculous. Ironically, to make it work better, we take a much lower-quality source image to render and then juice it up with this new technology; the net result once again offers a tiny advantage over ye olde methods. It's a two-steps-forward-one-step-back affair, really. There's some movement forward, though, if you're willing to see it.

I think for gaming the major problem isn't so much AI, or its presence, or Nvidia pushing it, but rather the overall market conditions and the lacking progress of competitors. Those are factors that can and likely will change. Markets don't just stop working; it's like the economy, going up and down in effectiveness.

Anybody get the feeling the 5080 will be slower than the 4090 in terms of raw performance, but will have better AI cores to generate frames? And pretty much everyone has already said the prices are gonna suck.
It will be slower in some games and faster in another handful of selected titles, so Nvidia can maintain they have a 4090 killer at a slightly lower price and win over those who didn't jump on the x90 last time. This, to me, is simply obvious; we know how Nvidia rolls at this point. Which also tells us a lot about the actual changes in the shaders: I think it's going to be clocked higher, and architecturally not much has changed. The improved clocking will make the difference, except where it can't clock higher because the game/app just wants all the power it can get. It's the perfect gray area for Nvidia to sell this on.
 
Joined
Sep 30, 2024
Messages
87 (1.58/day)
Why is this article so positive? 12GB for an $800 card in 2025 is nothing to celebrate, and neither is 16GB on a $1000+ card. How anyone can put a positive spin on this is beyond me.

16GB is the minimum for a mid-to-high-end card in 2025, and 16GB in a top-of-the-range consumer card is ludicrous. There are games already using more than that now, so how will this fare after another two years in a customer's PC?
 
Joined
Aug 12, 2019
Messages
2,180 (1.13/day)
Location
LV-426
System Name Custom
Processor i9 9900k
Motherboard Gigabyte Z390 Aorus Master
Cooling Corsair H150i
Memory 4x8GB 3200MHz Corsair
Video Card(s) Galax RTX 3090 EX Gamer White OC
Storage 500GB Samsung 970 EVO Plus
Display(s) MSi MAG341CQ
Case Lian Li Pc-011 Dynamic
Audio Device(s) Arctis Pro Wireless
Power Supply 850w Seasonic Focus Platinum
Mouse Logitech G403
Keyboard Logitech G110
Why is this article so positive? 12GB for an $800 card in 2025 is nothing to celebrate, and neither is 16GB on a $1000+ card. How anyone can put a positive spin on this is beyond me.

16GB is the minimum for a mid-to-high-end card in 2025, and 16GB in a top-of-the-range consumer card is ludicrous. There are games already using more than that now, so how will this fare after another two years in a customer's PC?
High prices won't change, that's a fact... it's all about spending your money smartly... what you play, your settings, resolution, and current GPU decide whether you should upgrade or not.
I think folks with a 10-series should upgrade... 20-series owners can consider it depending on the circumstances... and I think 30-series, especially x70/x80/x90, can hold off, and 40-series... well, just stay put.
 
Joined
Dec 14, 2011
Messages
1,035 (0.22/day)
Location
South-Africa
Processor AMD Ryzen 9 5900X
Motherboard ASUS ROG STRIX B550-F GAMING (WI-FI)
Cooling Corsair iCUE H115i Elite Capellix 280mm
Memory 32GB G.Skill DDR4 3600MHz CL18
Video Card(s) ASUS GTX 1650 TUF
Storage Sabrent Rocket 1TB M.2
Display(s) Dell S3220DGF
Case Corsair iCUE 4000X
Audio Device(s) ASUS Xonar D2X
Power Supply Corsair AX760 Platinum
Mouse Razer DeathAdder V2 - Wireless
Keyboard Redragon K618 RGB PRO
Software Microsoft Windows 11 - Enterprise (64-bit)
New generation GPUs at these asking prices, should have 5-7 year warranties.
 
Joined
Jun 2, 2017
Messages
9,133 (3.34/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory G.Skill DDR5 32GB 5200 CL30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitech Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64; Steam, GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
Where I live we are already seeing laptops for $4,000-7,000 Canadian, or even higher. Now we have motherboards for over $700 with one PCIe slot, when the separation between motherboard price tiers is supposed to be flexibility. I expect the 5090 to push as high as $5,000; as crazy as that sounds, the most expensive 4090 where I live is just shy of $4,000. I hope they know what they are doing. Where I live, a one-bedroom apartment is about $2,000 a month, but they want to sell these to young affluent gamers, to people with more money than they need, or to people who think Nvidia is good enough to spend the cost of building three capable gaming PCs on one GPU.

The other cards are so meh that the 5080 at $2,000 will not sell well. Ray tracing is nice, but not worth the cost of two cards.

The 5070 leaves room for the 5070 Super or Ti, but those will be expensive too.

I hope the performance justifies the price. Of course, for me the real want is raster performance. 32GB of GDDR7 sounds good, but not if it costs more than some used cars.
 
Joined
Feb 10, 2023
Messages
73 (0.11/day)
System Name Desktop + SteamDeck OLED
Processor i7-10700K @ -0.130V offset
Motherboard MSI MPG Z490 Gaming Edge WiFi
Cooling be quiet! Pure Loop 280mm
Memory Corsair Vengeance RGB Pro 32GB 3200MT/s CL16
Video Card(s) RTX 3080 Ti Founders Edition @ 1830MHz, 800mV
No, the joke is that the 5070 has 6,400 shaders; my 3070 Ti has 6,144.

This thing will only have what, maybe 15% more performance than mine, with the new memory?
Considering the 4070 has ~15% more performance than the 3070 Ti with fewer shaders and less memory bandwidth, the 5070 will obviously be even faster than that. You can't compare shader counts cross-generationally.
 
Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600MHz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
You can't compare shader counts cross-generationally.
Yes you can; it's one of the most reliable indicators of performance. The difference in shaders between the 4070 and the 3070 Ti is very small, except the 4070 has a massive increase in cache, and that's why it's faster. If it weren't for that, the difference between the two would be close to none. GPU cores don't get that much faster between generations; there is not much left to optimize.
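To put rough numbers on the cache point, here's a toy effective-bandwidth model. The bus bandwidths and L2 sizes are from public spec sheets, but the cache bandwidth and hit rates below are illustrative guesses, not vendor data:

```python
# Toy model: a large on-die cache raises "effective" memory bandwidth.
# Bus bandwidths and L2 sizes are public specs; the 2 TB/s cache
# bandwidth and the hit rates are illustrative guesses.

def effective_bandwidth(vram_bw_gbs: float, cache_bw_gbs: float, hit_rate: float) -> float:
    """Blend cache and VRAM bandwidth by the fraction of traffic the cache absorbs."""
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * vram_bw_gbs

# 3070 Ti: 608 GB/s GDDR6X, 4 MB L2  -> assume the tiny L2 barely helps
# 4070:    504 GB/s GDDR6X, 36 MB L2 -> assume ~35% of traffic hits L2
print(effective_bandwidth(608, 2000, 0.05))  # ~678 GB/s
print(effective_bandwidth(504, 2000, 0.35))  # ~1028 GB/s despite the narrower bus
```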
 
Joined
Sep 1, 2020
Messages
2,344 (1.52/day)
Location
Bulgaria
Anybody get the feeling the 5080 will be slower than the 4090 in terms of raw performance, but will have better AI cores to generate frames? And pretty much everyone has already said the prices are gonna suck.
Which is more important... hmm... the trend, RT/PT... or AI?
 
Joined
Dec 6, 2022
Messages
382 (0.53/day)
Location
NYC
System Name GameStation
Processor AMD R5 5600X
Motherboard Gigabyte B550
Cooling Arctic Freezer II 120
Memory 16 GB
Video Card(s) Sapphire Pulse 7900 XTX
Storage 2 TB SSD
Case Cooler Master Elite 120
Why is this article so positive? 12GB for an $800 card in 2025 is nothing to celebrate, and neither is 16GB on a $1000+ card. How anyone can put a positive spin on this is beyond me.

16GB is the minimum for a mid-to-high-end card in 2025, and 16GB in a top-of-the-range consumer card is ludicrous. There are games already using more than that now, so how will this fare after another two years in a customer's PC?
Sadly, it's the new crop of consumers, led by bribed/biased influencers.
 
Joined
Apr 14, 2018
Messages
655 (0.27/day)
The 4080 has 10% fewer CUDA cores than the 3090 Ti, yet beats the 3090 Ti by 20% at 4K:
[Chart: relative performance, 3840×2160]

The average clock speed for a 3090 Ti Founders Edition is 1999MHz; the average for the 4080/4080 Super is 2715MHz. That's a 36% clock speed advantage with nearly identical specs. The 5080 will absolutely be slower than a 4090 if it releases with ~10700 CUDA cores.

It's not just gonna "magic" itself faster without huge IPC gains.
 
Joined
Aug 12, 2010
Messages
130 (0.02/day)
Location
Brazil
Processor Ryzen 7 7800X3D
Motherboard ASRock B650M PG Riptide
Cooling Wraith Max + 2x Noctua Redux NF-P12
Memory 2x16GB ADATA XPG Lancer Blade DDR5-6000 CL30
Video Card(s) Powercolor RX 7800 XT Fighter OC
Storage ADATA Legend 970 2TB PCIe 5.0
Display(s) Dell 32" S3222DGM - 1440P 165Hz + P2422H
Case HYTE Y40
Audio Device(s) Microsoft Xbox TLL-00008
Power Supply Cooler Master MWE 750 V2
Mouse Alienware AW320M
Keyboard Alienware AW510K
Software Windows 11 Pro
Let's see how much faster the 12VHPWR connectors melt with the new cards.
 
Joined
Dec 14, 2011
Messages
1,035 (0.22/day)
Location
South-Africa
Processor AMD Ryzen 9 5900X
Motherboard ASUS ROG STRIX B550-F GAMING (WI-FI)
Cooling Corsair iCUE H115i Elite Capellix 280mm
Memory 32GB G.Skill DDR4 3600MHz CL18
Video Card(s) ASUS GTX 1650 TUF
Storage Sabrent Rocket 1TB M.2
Display(s) Dell S3220DGF
Case Corsair iCUE 4000X
Audio Device(s) ASUS Xonar D2X
Power Supply Corsair AX760 Platinum
Mouse Razer DeathAdder V2 - Wireless
Keyboard Redragon K618 RGB PRO
Software Microsoft Windows 11 - Enterprise (64-bit)
Yes you can; it's one of the most reliable indicators of performance. The difference in shaders between the 4070 and the 3070 Ti is very small, except the 4070 has a massive increase in cache, and that's why it's faster. If it weren't for that, the difference between the two would be close to none. GPU cores don't get that much faster between generations; there is not much left to optimize.

Correct, and it has faster memory. People seem to forget these things; guess that's why the leather jacket man keeps getting away with his BS.
 
Joined
Oct 5, 2024
Messages
82 (1.64/day)
Location
United States of America
Yikes, 12GB for a 5070? I have 12GB on my 6700 XT, and that is enough at 1440p because the hardware is well-tuned for that resolution. I can't imagine how crippled the 5070 will be on 12GB of VRAM.
 
Joined
Feb 1, 2019
Messages
3,590 (1.69/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
The average clock speed for a 3090 Ti Founders Edition is 1999MHz; the average for the 4080/4080 Super is 2715MHz. That's a 36% clock speed advantage with nearly identical specs. The 5080 will absolutely be slower than a 4090 if it releases with ~10700 CUDA cores.

It's not just gonna "magic" itself faster without huge IPC gains.
I do see another clock speed boost coming, given the extra 80W of TDP. That 80W is going to go somewhere; whether it is enough to get anywhere near the 4090, though, remains to be seen. I am expecting a 20-30% raw gain over the 4080, and more via some DLSS/RT enhancement tied to the 5000 series.
 
Joined
Apr 14, 2018
Messages
655 (0.27/day)
I do see another clock speed boost coming, given the extra 80W of TDP. That 80W is going to go somewhere; whether it is enough to get anywhere near the 4090, though, remains to be seen. I am expecting a 20-30% raw gain over the 4080, and more via some DLSS/RT enhancement tied to the 5000 series.

Same process node; pushing clocks is going to land more on the side of diminishing returns when it comes to power. We don't know what's being done with cache or other parts of the die, so where the power is being utilized is up in the air.

I think people are being way too optimistic given the general specs and the trend of the past four years. I don't see IPC and clock speed advances covering a 60% CUDA core gap.
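For a sense of why that is: the textbook CMOS approximation says dynamic power scales with frequency times voltage squared, and voltage has to rise with frequency, so power grows roughly with the cube of clock speed near the top of the voltage/frequency curve. The exponent below is that rule of thumb, not a measured Blackwell number:

```python
# Rule of thumb: dynamic power ~ f * V^2, and V rises with f, so power
# grows roughly as f^3 near the top of the V/f curve. The cubic
# exponent is a textbook approximation, not a measured Blackwell figure.

def relative_power(clock_ratio: float, exponent: float = 3.0) -> float:
    return clock_ratio ** exponent

for bump in (1.05, 1.10, 1.15, 1.20):
    print(f"+{(bump - 1) * 100:3.0f}% clock -> ~{(relative_power(bump) - 1) * 100:3.0f}% power")
# +  5% clock -> ~ 16% power
# + 10% clock -> ~ 33% power
# + 15% clock -> ~ 52% power
# + 20% clock -> ~ 73% power
# So an extra 80 W on a ~320 W card (+25% power) buys only ~8% clock.
```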
 
Joined
Jun 10, 2014
Messages
2,986 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Yet another generation of Nvidia GPUs, and as usual pretty much the entire discussion is people complaining about memory size and bus width, with not a single word about how this may enable more immersive and exciting games :rolleyes:. And as always, people make arbitrary guesses about how much memory a certain tier of GPU needs, especially without knowing anything about the performance characteristics of this upcoming generation. It's the same sad song every time, yet Nvidia has continued to dominate the upper mid-range and high-end segments, offering solid products with remarkable longevity.

As needs to be said every single time: allocated VRAM isn't the same as needed VRAM.
And don't compare VRAM sizes across GPU generations or vendors; just like with cache, comparing them without context makes no sense.

Whether Nvidia has made the right choice will be very obvious in reviews; when GPUs run out of VRAM, things go bad quickly. But if the cards continue to scale at high resolutions/details, then VRAM is not the bottleneck, despite whatever anecdotes reviewers/opinionators might pull out of thin air.
(But like with everything these days, opinions and feelings are more important than facts…)
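For anyone who'd rather measure than argue, here's a minimal sketch of logging actual VRAM allocation during a play session with NVIDIA's NVML bindings (assumes the nvidia-ml-py package; note that NVML reports allocated memory, which is an upper bound on what a game truly needs):

```python
# Minimal VRAM logger using NVIDIA's NVML bindings (pip install nvidia-ml-py).
# NVML reports *allocated* framebuffer memory, an upper bound on "needed".
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 2**30:5.2f} / {mem.total / 2**30:5.2f} GiB")
        time.sleep(2)  # sample every two seconds while the game runs
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```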
 
Joined
Sep 30, 2024
Messages
87 (1.58/day)
Yet another generation of Nvidia GPUs, and as usual pretty much the entire discussion is people complaining about memory size and bus width, with not a single word about how this may enable more immersive and exciting games :rolleyes:. And as always, people make arbitrary guesses about how much memory a certain tier of GPU needs, especially without knowing anything about the performance characteristics of this upcoming generation. It's the same sad song every time, yet Nvidia has continued to dominate the upper mid-range and high-end segments, offering solid products with remarkable longevity.

As needs to be said every single time: allocated VRAM isn't the same as needed VRAM.
And don't compare VRAM sizes across GPU generations or vendors; just like with cache, comparing them without context makes no sense.

Whether Nvidia has made the right choice will be very obvious in reviews; when GPUs run out of VRAM, things go bad quickly. But if the cards continue to scale at high resolutions/details, then VRAM is not the bottleneck, despite whatever anecdotes reviewers/opinionators might pull out of thin air.
(But like with everything these days, opinions and feelings are more important than facts…)
I take it you haven't tried to play many modern games with RT at 4K? 16GB of VRAM is barely enough, and the game still swaps out to main memory at times, causing stuttering. I can play Cyberpunk 2077 and max out a 16GB card very easily.

The main point that seems to escape you, though, is longevity. These cards will probably have to last two years, so do you think games in two years' time will still work well with a 12GB frame buffer, or even a 16GB one? No, they won't, and NV will save the day by launching a 5080 Super, which is the real 5080, with 24GB of VRAM; then the 5070 Ti will come with 16GB... making anyone who spent $700+ on a 5070, which is really a 60-class card already, look pretty stupid for wasting their money, stuck with games turning into a stuttering mess within a year of owning the card.

There is no way of looking at this $1000+ 5080 as offering any kind of value in 2025. It even looks unlikely to match the over-two-year-old 4090, let alone outperform it.

And regarding your comment on reviews: no, I don't trust people who literally get given $3000+ worth of cards to keep so they can "review" them. Yes, there will be lots of pretend whining, sarcasm, and outrage about price-to-performance ratios and how NV is greedy and out of their mind, blah blah, but that won't stop them from spending the next two years reviewing every motherboard, CPU, and game using a free $2000+ card.
 
Joined
Oct 22, 2014
Messages
14,091 (3.82/day)
Location
Sunshine Coast
System Name H7 Flow 2024
Processor AMD 5800X3D
Motherboard Asus X570 Tough Gaming
Cooling Custom liquid
Memory 32 GB DDR4
Video Card(s) Intel ARC A750
Storage Crucial P5 Plus 2TB.
Display(s) AOC 24" Freesync 1m.s. 75Hz
Mouse Lenovo
Keyboard Eweadn Mechanical
Software W11 Pro 64 bit
Yikes, 12GB for a 5070? I have 12GB on my 6700 XT, and that is enough at 1440p because the hardware is well-tuned for that resolution. I can't imagine how crippled the 5070 will be on 12GB of VRAM.
And here we have the problem: people who can't look past the quantity of RAM and won't look into the speed and throughput of that RAM.
 
Joined
Sep 30, 2024
Messages
87 (1.58/day)
And here we have the problem: people who can't look past the quantity of RAM and won't look into the speed and throughput of that RAM.
Because the speed and throughput of the VRAM are meaningless once it has run out... :kookoo:

Please understand that your use case, as well as your definition of value, is maybe not the same as other people's. I personally see very little value in spending $1000+ on a 16GB card in 2025... That's my opinion, based on MY use case.

On a related side note, it seems the best-value NV card next year is going to be the 4070 Ti, but we all know that NV will cancel that card ASAP.
 
Joined
Jul 24, 2024
Messages
220 (1.77/day)
System Name AM4_TimeKiller
Processor AMD Ryzen 5 5600X @ all-core 4.7 GHz
Motherboard ASUS ROG Strix B550-E Gaming
Cooling Arctic Freezer II 420 rev.7 (push-pull)
Memory G.Skill TridentZ RGB, 2x16 GB DDR4, B-Die, 3800 MHz @ CL14-15-14-29-43 1T, 53.2 ns
Video Card(s) ASRock Radeon RX 7800 XT Phantom Gaming
Storage Samsung 990 PRO 1 TB, Kingston KC3000 1 TB, Kingston KC3000 2 TB
Case Corsair 7000D Airflow
Audio Device(s) Creative Sound Blaster X-Fi Titanium
Power Supply Seasonic Prime TX-850
Mouse Logitech wireless mouse
Keyboard Logitech wireless keyboard
Yet another generation of Nvidia GPUs, and as usual pretty much the entire discussion is people complaining about memory size and bus width, with not a single word about how this may enable more immersive and exciting games :rolleyes:. And as always, people make arbitrary guesses about how much memory a certain tier of GPU needs, especially without knowing anything about the performance characteristics of this upcoming generation. It's the same sad song every time, yet Nvidia has continued to dominate the upper mid-range and high-end segments, offering solid products with remarkable longevity.

As needs to be said every single time: allocated VRAM isn't the same as needed VRAM.
And don't compare VRAM sizes across GPU generations or vendors; just like with cache, comparing them without context makes no sense.

Whether Nvidia has made the right choice will be very obvious in reviews; when GPUs run out of VRAM, things go bad quickly. But if the cards continue to scale at high resolutions/details, then VRAM is not the bottleneck, despite whatever anecdotes reviewers/opinionators might pull out of thin air.
(But like with everything these days, opinions and feelings are more important than facts…)
It's about what you get for what you pay. This is the second time Ngreedia has tried to sell us a lower-specified product with a "premium/high-performance product" sticker; same thing as with the two versions of the RTX 4080 before. It's more like: how dare they? 12GB of VRAM on a $600-700 GPU in 2024 is ridiculous, a ripoff. Of course, Nvidia does this on purpose so they can release two more versions of the same card a few months later. The RTX 4080 Super is a fail among fails; that card is not even worth printing the boxes it's stored in.

And here we have the problem: people who can't look past the quantity of RAM and won't look into the speed and throughput of that RAM.
DLSS, RT, and similar features occupy a noticeable amount of VRAM for their own caching purposes; VRAM is not only for textures. Yes, faster memory has higher bandwidth, so it can make up for time lost loading stuff into slower memory, but having more VRAM means that sometimes there's no need for so many loads at all, which lets the disk, DRAM, and CPU focus on other operations.

Some games check for VRAM size and won't let you ramp certain graphical settings up to their highest values if you don't have enough of it.

Some games rely heavily on VRAM size at higher resolutions, as shown in the video above; the lows are much better with more VRAM. Especially take a look at The Last of Us at 4K, where the RTX 4060 Ti 8GB is completely messed up. Please note that to compensate for the lack of VRAM, the driver uses system memory (RAM) instead. RAM is not only slower than VRAM but might be needed for other purposes. So having more VRAM is better, because the card won't leech off the rest of the computer's resources.

As shown in the video above, sometimes it eats more than 3GB of RAM. Gaming with 16GB of RAM and an RTX 4060 Ti 8GB can easily become a stuttering festival past 1080p. The same logic applies at 12GB, 16GB, 20GB, ... when there is not enough video memory on the graphics card, the driver will look for it elsewhere.
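To put rough numbers on how quickly assets alone fill a buffer, here's a sketch with illustrative assumptions: BC7 block compression at 1 byte per texel, roughly a third extra for the mip chain, and a made-up resident material count:

```python
# Rough texture-footprint math. Assumes BC7 block compression
# (1 byte/texel) and ~33% extra for the mip chain; the material
# count is a made-up illustration, not from any real game.

def texture_mib(width: int, height: int, bytes_per_texel: float = 1.0) -> float:
    base_bytes = width * height * bytes_per_texel
    return base_bytes * (4 / 3) / 2**20  # full mip chain adds ~1/3

per_material = 3 * texture_mib(4096, 4096)  # albedo + normal + roughness/metal maps
print(f"one 4K material:        ~{per_material:.0f} MiB")               # ~64 MiB
print(f"150 materials resident: ~{150 * per_material / 1024:.1f} GiB")  # ~9.4 GiB
# And that's before framebuffers, the BVH for ray tracing, and the
# DLSS/frame-generation buffers mentioned above even get counted.
```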
 