
Widespread GeForce RTX 4080 SUPER Card Shortage Reported in North America

Joined
Jan 8, 2017
Messages
9,499 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
AMD adds VRAM because they have to stand out somehow, it's not a big secret.
The reason this is wrong is that Nvidia adds more memory to their cards all the time; case in point, the 4070 Ti Super.

So how am I supposed to interpret this? Nvidia needs to stand out as well now? Why?

Nah, that's not it. Nvidia is simply penny-pinching and crippling the VRAM on their cards intentionally. They've been at it since the early 2000s; that's why their cards have always had these bizarre VRAM configurations: 384 MB, 768 MB, 896 MB, 1.2 GB, 1.5 GB, etc.

AMD isn't trying to stand out as much as Nvidia is trying to skimp on VRAM, that much is clear.
 
Joined
Dec 25, 2020
Messages
6,978 (4.80/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
The reason this is wrong is that Nvidia adds more memory to their cards all the time; case in point, the 4070 Ti Super.

So how am I supposed to interpret this? Nvidia needs to stand out as well now? Why?

Nah, that's not it. Nvidia is simply penny-pinching and crippling the VRAM on their cards intentionally. They've been at it since the early 2000s; that's why their cards have always had these bizarre VRAM configurations: 384 MB, 768 MB, 896 MB, 1.2 GB, 1.5 GB, etc.

AMD isn't trying to stand out as much as Nvidia is trying to skimp on VRAM, that much is clear.

The 4070 Ti Super is built out of a crippled AD103, not AD104. The AD104 on the 4070 Ti is maxed out. The 192-bit memory interface means that it can have either 6 memory chips or 12 in clamshell, and that translates to real memory capacities of 6, 12 and 24 GB with currently available chips. It's not 6 (not enough) or 24 GB (power, thermal, and market segmentation) for obvious reasons. On the other hand, the AD103 has a 256-bit interface, which translates to 8 or 16 chips and 8, 16 and 32 GB memory capacities being possible. The 4070 Ti Super simply uses 8 of the slower 21 Gbps variant of GDDR6X instead to reach its performance and capacity targets. This is impossible to achieve on the AD104 4070 Ti.
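A quick way to sanity-check these configurations is to work out the possible capacities from the bus width; here's a minimal sketch, assuming 32-bit GDDR6/6X chips at today's 1 GB and 2 GB densities:

```python
# Rough sketch: possible VRAM capacities for a given memory bus width,
# assuming 32-bit GDDR6/6X chips of 1 GB or 2 GB each (current densities),
# mounted one per channel or two per channel (clamshell).

def possible_capacities(bus_width_bits, chip_sizes_gb=(1, 2)):
    channels = bus_width_bits // 32          # one 32-bit channel per chip
    capacities = set()
    for size in chip_sizes_gb:
        capacities.add(channels * size)      # normal: 1 chip per channel
        capacities.add(channels * size * 2)  # clamshell: 2 chips per channel
    return sorted(capacities)

print(possible_capacities(192))  # AD104 (4070 Ti): [6, 12, 24]
print(possible_capacities(256))  # AD103 (4080 / 4070 Ti Super): [8, 16, 32]
```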

As the party that holds both the market and mind share, it isn't Nvidia that needs to do something to stand out, it's the other way around.
 
Joined
Jun 18, 2021
Messages
2,567 (2.01/day)
The 192-bit memory interface means that it can have either 6 memory chips or 12 in clamshell, and that translates to real memory capacities of 6, 12 and 24 GB with currently available chips. It's not 6 (not enough) or 24 GB (power, thermal, and market segmentation) for obvious reasons. On the other hand, the AD103 has a 256-bit interface, which translates to 8 or 16 chips and 8, 16 and 32 GB memory capacities being possible. The 4070 Ti Super simply uses 8 of the slower 21 Gbps variant of GDDR6X instead to reach its performance and capacity targets. This is impossible to achieve on the AD104 4070 Ti.

That's their problem, not mine. Asking for $600 and above for a card with 12 GB of VRAM is unreasonable. The 4060 Ti has a 16 GB version (and should only have that version; the regular 4060 Ti with 8 GB is wildly inferior), so why does the card above it end up with less? Nvidia was the one who chose to design AD104 with 192 bits; hell, they even went as far as calling it a 4080 and trying to sell it for $900. It's like they are actively trying to mock consumers to their faces.

They are the ones in control of the design. I think even they get surprised by what they're getting away with sometimes; sometimes they push their luck and have to pull back - like with the 4060 Ti and the 4080 12 GB - but the 4070 seems like a clear example of throwing shit into the market and getting away with it, as it's still selling anyway.
 
Joined
Dec 25, 2020
Messages
6,978 (4.80/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
That's their problem, not mine. Asking for $600 and above for a card with 12 GB of VRAM is unreasonable. The 4060 Ti has a 16 GB version (and should only have that version; the regular 4060 Ti with 8 GB is wildly inferior), so why does the card above it end up with less? Nvidia was the one who chose to design AD104 with 192 bits; hell, they even went as far as calling it a 4080 and trying to sell it for $900. It's like they are actively trying to mock consumers to their faces.

They are the ones in control of the design. I think even they get surprised by what they're getting away with sometimes; sometimes they push their luck and have to pull back - like with the 4060 Ti and the 4080 12 GB - but the 4070 seems like a clear example of throwing shit into the market and getting away with it, as it's still selling anyway.

Not so unreasonable apparently since it's selling anyway. The only thing you can do is vote with your wallet, and it's a losing battle if you ask me.
 
Joined
Dec 16, 2010
Messages
1,668 (0.33/day)
Location
State College, PA, US
System Name My Surround PC
Processor AMD Ryzen 9 7950X3D
Motherboard ASUS STRIX X670E-F
Cooling Swiftech MCP35X / EK Quantum CPU / Alphacool GPU / XSPC 480mm w/ Corsair Fans
Memory 96GB (2 x 48 GB) G.Skill DDR5-6000 CL30
Video Card(s) MSI NVIDIA GeForce RTX 4090 Suprim X 24GB
Storage WD SN850 2TB, Samsung PM981a 1TB, 4 x 4TB + 1 x 10TB HGST NAS HDD for Windows Storage Spaces
Display(s) 2 x Viotek GFI27QXA 27" 4K 120Hz + LG UH850 4K 60Hz + HMD
Case NZXT Source 530
Audio Device(s) Sony MDR-7506 / Logitech Z-5500 5.1
Power Supply Corsair RM1000x 1 kW
Mouse Patriot Viper V560
Keyboard Corsair K100
VR HMD HP Reverb G2
Software Windows 11 Pro x64
Benchmark Scores Mellanox ConnectX-3 10 Gb/s Fiber Network Card
That's their problem, not mine. Asking for $600 and above for a card with 12 GB of VRAM is unreasonable. The 4060 Ti has a 16 GB version (and should only have that version; the regular 4060 Ti with 8 GB is wildly inferior), so why does the card above it end up with less? Nvidia was the one who chose to design AD104 with 192 bits; hell, they even went as far as calling it a 4080 and trying to sell it for $900. It's like they are actively trying to mock consumers to their faces.

They are the ones in control of the design. I think even they get surprised by what they're getting away with sometimes; sometimes they push their luck and have to pull back - like with the 4060 Ti and the 4080 12 GB - but the 4070 seems like a clear example of throwing shit into the market and getting away with it, as it's still selling anyway.
I agree with Dr. Dro in that you should vote with your wallet and (theoretically) companies will respond to that feedback. But also realize that if it is selling well then it could also be that your needs are just different than the needs of most buyers.
 
Joined
Apr 19, 2018
Messages
1,227 (0.50/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11

This is the exact memory chip used on the RTX 4080. At $55,700 for a 2,000-unit bulk order, the per-unit cost is $27.85.

$27.85 * 8 = $222.80 in memory IC costs alone.


The same goes for the slower 21 Gbps chip used in the 4070 Ti and 4090, at $25.31 per unit at bulk pricing:

$25.31 * 6 = $151.86 in memory costs alone.

Not only does the cost add up, but factor in PCB costs, power delivery component costs, etc. - losses, the fact that everyone needs a profit, yada yada, you get what I'm getting at. I'm fairly sure the BoM for a 4070 Ti must be somewhere in the vicinity of $500 as a rough estimate. The rest covers profit, distribution costs, driver software development costs, etc.
Yeah, nGreedia orders millions of memory chips from Digikey every month...
 
Joined
Dec 25, 2020
Messages
6,978 (4.80/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Yeah, nGreedia orders millions of memory chips from Digikey every month...

You overestimate Digikey's margins. Of course Nvidia gets the chips a little cheaper - but here's the kicker: both they and AMD sell AIB partners a validated kit consisting of the GPU ASIC + memory ICs, so both of them charge basically whatever they want for the memory chips.
 
Joined
Jul 13, 2016
Messages
3,321 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
I agree with Dr. Dro in that you should vote with your wallet and (theoretically) companies will respond to that feedback. But also realize that if it is selling well then it could also be that your needs are just different than the needs of most buyers.

The idea that one can "vote with their wallet" implies the customer has enough power to make a difference to begin with. The reality is we exist in a duopolistic market in which Nvidia is increasingly earning more and more of its revenue from sources outside the gaming market. By extension, consumer power to push for change is greatly diminished. This argument might, and I very much stress might, have more pull if Intel becomes competitive, but that assumes Intel too doesn't just join AMD and Nvidia in their pricing structure to maintain high margins. The customer has the right to choose in which way they'd like to be taken over a barrel.

Do any of you even remotely believe what you've been saying? It's been proven time and time again that you need to push unrealistic, unreasonable settings to run out of VRAM on 12 GB GPUs today. Certainly settings far beyond the means of an RTX 4070 Ti, and certainly far beyond the means of AMD's equivalents.

There are two games in W1zzard's suite that, at 4K resolution and ultra-high settings, require more than 12 GB. If you're playing at ultra-high 4K, you aren't running a midrange card.

Do you often purchase $800 graphics cards with the expectation that on day 1 there will be games that already exceed its VRAM buffer? Since when did some PC gamers start finding that acceptable? Oh that's right, during the 7-year period in which Nvidia failed to increase the VRAM allowance across their GPU stack. Sure are fools willing to fight against their own best interests to defend a company that doesn't care about them. That's before considering that the 4070 Ti is the better offering compared to the initial 4070; the 4070 only has 12 GB of VRAM, so you were picking the best possible example of the two. Such a failure to increase VRAM would never have been acceptable in the past, as would paying that kind of money to not have enough VRAM day one, let alone into the future.

I had a 1080 Ti for nearly 7 years, and it wasn't until the tail end of that that a few games started showing up that challenged the VRAM. None of this day 1 lacking VRAM nonsense. That card cost me $650 and was the top-end card of that generation. Now that much money doesn't even get you an xx70 Ti class card; you get a 4070 with a mere 1 GB more VRAM over a 7-year period. Historically, VRAM capacity would have doubled two to three times in the same timespan. The 780 Ti had 3 GB, the 980 Ti had 6 GB, and the 1080 Ti had 11 GB. Imagine that, VRAM nearly doubling or doubling gen over gen was pretty normal, and there was a mere $50 price difference across those 3 generations.

The 1080 Ti also does a good job of demonstrating that if VRAM allowances on cards don't increase, devs won't create games that utilize that additional VRAM. 8 GB was the standard that game devs were forced to target for a whopping 7 years, and heck, even today they still have to heavily consider it given the high price tag of 12 GB+ cards on the Nvidia side (which sells some 80%+ of all video cards).

Also lest we not forget another major factor:

"Hardware leads software, not the other way around. It's backwards to imply that because games don't use x amount of VRAM today, x amount of VRAM isn't worthwhile. That kind of logic only perpetuates stagnation of VRAM capacity. Devs aren't going to make games with VRAM requirements the vast majority of cards can't meet. It's up to the hardware vendors to increase GPU VRAM allowance so that game devs can then create games that utilize that extra capacity.

I will never understand people who fight against the idea of increasing VRAM amounts of video cards. VRAM is cheap, enables devs to do more with their games, and increases card longevity. Whether it benefits you at this very moment is an extremely narrow way to look at things."
 
Last edited:
Joined
Dec 25, 2020
Messages
6,978 (4.80/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Do you often purchase $800 graphics cards with the expectation that on day 1 there will be games that already exceed its VRAM buffer? Since when did some PC gamers start finding that acceptable? Oh that's right, during the 7-year period in which Nvidia failed to increase the VRAM allowance across their GPU stack. Sure are fools willing to fight against their own best interests to defend a company that doesn't care about them. Such a failure to increase VRAM would never have been acceptable in the past, as would paying that kind of money to not have enough VRAM day one, let alone into the future.

I had a 1080 Ti for nearly 7 years, and it wasn't until the tail end of that that a few games started showing up that challenged the VRAM. None of this day 1 lacking VRAM nonsense. That card cost me $650 and was the top-end card of that generation. Now that much money doesn't even get you an xx70 Ti class card; you get a 4070 with a mere 1 GB more VRAM over a 7-year period. Historically, VRAM capacity would have doubled two to three times in the same timespan. The 780 Ti had 3 GB, the 980 Ti had 6 GB, and the 1080 Ti had 11 GB. Imagine that, VRAM nearly doubling or doubling gen over gen was pretty normal, and there was a mere $50 price difference across those 3 generations.

The 1080 Ti also does a good job of demonstrating that if VRAM allowances on cards don't increase, devs won't create games that utilize that additional VRAM. 8 GB was the standard that game devs were forced to target for a whopping 7 years, and heck, even today they still have to heavily consider it given the high price tag of 12 GB+ cards on the Nvidia side (which sells some 80%+ of all video cards).

Also lest we not forget another major factor:

"Hardware leads software, not the other way around. It's backwards to imply that because games don't use x amount of VRAM today, x amount of VRAM isn't worthwhile. That kind of logic only perpetuates stagnation of VRAM capacity. Devs aren't going to make games with VRAM requirements the vast majority of cards can't meet. It's up to the hardware vendors to increase GPU VRAM allowance so that game devs can then create games that utilize that extra capacity.

I will never understand people who fight against the idea of increasing VRAM amounts of video cards. VRAM is cheap, enables devs to do more with their games, and increases card longevity. Whether it benefits you at this very moment is an extremely narrow way to look at things."

I haven't run into VRAM limitations with any of my GPU purchases in the past 8 years with the sole exception of the R9 Fury X (for obvious reasons), so I couldn't really answer. I normally don't buy lower-end GPUs and that I opted for a 4080 this time is most unusual (and frankly motivated by money, as I wanted to buy an expensive display and overhaul the rest of my PC as well), I always buy cards at the halo tier. The few games I played on a 4 GB GPU recently (my laptop's 3050M) did require compromises in texture quality to be playable, but they were playable nonetheless, at settings that I personally consider acceptable for a low-end laptop GPU.

Anyway, let's be realistic for a moment here. When the 1080 Ti launched 7 years ago in the Pascal refresh cycle, 11 GB was truly massive. And we all know that 11 GB was because Nvidia only removed a memory chip to lower the bandwidth and ensure that the 2016 Titan X retained a similar level of performance (despite having slower memory) and that there was a significant enough gap between it and the then-new Titan Xp, which came with the complete GP102 chip and the fastest memory that they had at the time. For most of its lifetime you could simply enable all settings and never worry about that.
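As an aside, the 11 GB figure falls straight out of the bus width; here's a minimal sketch, assuming one 8 Gbit (1 GB) GDDR5X chip per 32-bit channel as on GP102-era cards:

```python
# Rough sketch: capacity from bus width, with and without one 32-bit channel
# disabled, assuming one 8 Gbit (1 GB) GDDR5X chip per channel (GP102 era).

def capacity_gb(bus_width_bits, gb_per_chip=1):
    return (bus_width_bits // 32) * gb_per_chip

print(capacity_gb(384))       # full GP102 (Titan X / Titan Xp): 12 GB
print(capacity_gb(384 - 32))  # one channel disabled (1080 Ti): 11 GB
```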

GPU memory capacities began to rise after the Turing and RDNA generation but we were still targeting 8 GB for the performance segment parts (RTX 2080, 5700 XT), and these cards weren't really all that badly choked by these. Then came RDNA 2 and started to give everyone 12 and 16 GB like candy, but that still didn't give them a very distinct advantage over Ampere, that was still lingering on 8-10 GB, and then eventually 12 GB for the high end. Nvidia positioned the RTX 3090 with 24 GB to basically assume both the role of gaming flagship and a replacement for the Titan RTX, while removing a few perks of the latter and technically lowering the MSRP by $1000 (which was tbh an acceptable tradeoff). We know reality is different, but if we exclude the RTX 3090, then only AMD really offered a high VRAM capacity.

This generation started to bring higher VRAM capacities on both sides, and it's only now with games that were designed first and foremost for the PS5 and Xbox Series X starting to become part of gamers' rosters and benchmark suites, all featuring advanced graphics, new texture streaming systems, tons of high-resolution assets, etc. that we've begun to see that being utilized to a bigger extent. IMO, this places cards like the RTX 3080 in a bad spot, but is it really a deal breaker? With the exception of a noteworthy game I'm about to mention, we've yet to see even the 3080 fall dramatically behind its AMD counterpart despite having 6 GB less.

I agree, more VRAM is better. But realistically speaking, it's not a life or death situation, at least not yet, and I don't think it'll be one for some time to come, especially if you're willing to drop textures from "ultra" to "high" and not run extreme raytracing settings. Perhaps the only game where 10-12 GB has become a true limitation at this point is The Last of Us, which seems to use VRAM so aggressively that it's not going to perform on a GPU that has less than 16 GB of VRAM, as evident here given the RX 6800 XT wins out against the otherwise far superior 4070 Ti.



Frankly, a game that *genuinely* needs so much VRAM probably has more than a few optimization issues, but I digress. We're not "fighting against memory increases"; I'm just making the case that VRAM amounts are currently adequate, if not slightly lacking across most segments. As you can see, even the aforementioned scenario involves 4K and very high graphics settings.
 
Joined
Apr 29, 2014
Messages
4,303 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
It's really not. I'll remind you that Nvidia originally intended for this thing to be a $900 "4080", and most likely no "Super" models (not that they really change anything). They moved the labels around, but that doesn't change what these products were originally intended to be.

Not that it matters; your comment is trying to imply that it's unreasonable to expect such a card to be used for 4K gaming. $700 is a truckload of money; no matter how you spin it, that's not mid-range, nor is it unreasonable to expect it to not be limited by VRAM at 4K ultra.


Strange how AMD despite being the budget option can always include a truckload of VRAM.
I agree, it's not unreasonable anymore to assume people will game at 4K with the 4070 Ti class of cards.
I'm well aware, and like I said, I agree, $700 is a lot of money, but it remains the middle option in Nvidia's product stack. We just have to accept that the days for a $699 flagship aren't coming back, as much as we'd want that to happen.

AMD adds VRAM because they have to stand out somehow, it's not a big secret. Also, they've been discontinuing driver support for their previous generation hardware mercilessly as of late. That alone would justify a cheaper asking price, IMHO.
No, it's intentionally making their products obsolete sooner. Nvidia is very careful when it comes to making cards last for a certain amount of time, as that's their business strategy for getting people to upgrade. VRAM is an easy way to limit a card's future prospects, as we have seen VRAM usage grow exponentially, and it's a reason Nvidia forced vendors to stop putting out special versions of cards with more VRAM. Only the very top of the product stack has reasonable VRAM amounts that would allow you to use the card effectively for longer. Hence why the card had a 192-bit bus at launch and lower VRAM (but now we have a newer version with a 256-bit bus and 16 GB near the end of the cycle), because they are trying to gin up a reason for the holdouts to purchase it.
I haven't run into VRAM limitations with any of my GPU purchases in the past 8 years with the sole exception of the R9 Fury X (for obvious reasons), so I couldn't really answer. I normally don't buy lower-end GPUs and that I opted for a 4080 this time is most unusual (and frankly motivated by money, as I wanted to buy an expensive display and overhaul the rest of my PC as well), I always buy cards at the halo tier. The few games I played on a 4 GB GPU recently (my laptop's 3050M) did require compromises in texture quality to be playable, but they were playable nonetheless, at settings that I personally consider acceptable for a low-end laptop GPU.

Anyway, let's be realistic for a moment here. When the 1080 Ti launched 7 years ago in the Pascal refresh cycle, 11 GB was truly massive. And we all know that 11 GB was because Nvidia only removed a memory chip to lower the bandwidth and ensure that the 2016 Titan X retained a similar level of performance (despite having slower memory) and that there was a significant enough gap between it and the then-new Titan Xp, which came with the complete GP102 chip and the fastest memory that they had at the time. For most of its lifetime you could simply enable all settings and never worry about that.

GPU memory capacities began to rise after the Turing and RDNA generation but we were still targeting 8 GB for the performance segment parts (RTX 2080, 5700 XT), and these cards weren't really all that badly choked by these. Then came RDNA 2 and started to give everyone 12 and 16 GB like candy, but that still didn't give them a very distinct advantage over Ampere, that was still lingering on 8-10 GB, and then eventually 12 GB for the high end. Nvidia positioned the RTX 3090 with 24 GB to basically assume both the role of gaming flagship and a replacement for the Titan RTX, while removing a few perks of the latter and technically lowering the MSRP by $1000 (which was tbh an acceptable tradeoff). We know reality is different, but if we exclude the RTX 3090, then only AMD really offered a high VRAM capacity.

This generation started to bring higher VRAM capacities on both sides, and it's only now with games that were designed first and foremost for the PS5 and Xbox Series X starting to become part of gamers' rosters and benchmark suites, all featuring advanced graphics, new texture streaming systems, tons of high-resolution assets, etc. that we've begun to see that being utilized to a bigger extent. IMO, this places cards like the RTX 3080 in a bad spot, but is it really a deal breaker? With the exception of a noteworthy game I'm about to mention, we've yet to see even the 3080 fall dramatically behind its AMD counterpart despite having 6 GB less.

I agree, more VRAM is better. But realistically speaking, it's not a life or death situation, at least not yet, and I don't think it'll be one for some time to come, especially if you're willing to drop textures from "ultra" to "high" and not run extreme raytracing settings. Perhaps the only game where 10-12 GB has become a true limitation at this point is The Last of Us, which seems to use VRAM so aggressively that it's not going to perform on a GPU that has less than 16 GB of VRAM, as evident here given the RX 6800 XT wins out against the otherwise far superior 4070 Ti.



Frankly, a game that *genuinely* needs so much VRAM probably has more than a few optimization issues, but I digress. We're not "fighting against memory increases"; I'm just making the case that VRAM amounts are currently adequate, if not slightly lacking across most segments. As you can see, even the aforementioned scenario involves 4K and very high graphics settings.
I agree to a point on the optimization argument, as it has become more commonplace for developers to just dump all textures into GPU memory to avoid having to work harder at pulling those assets in when needed. But that unfortunately is the reality, and we now have to contend with that fact when purchasing a GPU.
 
Joined
Jun 18, 2021
Messages
2,567 (2.01/day)
I agree with Dr. Dro in that you should vote with your wallet and (theoretically) companies will respond to that feedback. But also realize that if it is selling well then it could also be that your needs are just different than the needs of most buyers.


GPU memory capacities began to rise after the Turing and RDNA generation but we were still targeting 8 GB for the performance segment parts (RTX 2080, 5700 XT), and these cards weren't really all that badly choked by these. Then came RDNA 2 and started to give everyone 12 and 16 GB like candy, but that still didn't give them a very distinct advantage over Ampere, that was still lingering on 8-10 GB, and then eventually 12 GB for the high end. Nvidia positioned the RTX 3090 with 24 GB to basically assume both the role of gaming flagship and a replacement for the Titan RTX, while removing a few perks of the latter and technically lowering the MSRP by $1000 (which was tbh an acceptable tradeoff). We know reality is different, but if we exclude the RTX 3090, then only AMD really offered a high VRAM capacity.

This generation started to bring higher VRAM capacities on both sides, and it's only now with games that were designed first and foremost for the PS5 and Xbox Series X starting to become part of gamers' rosters and benchmark suites, all featuring advanced graphics, new texture streaming systems, tons of high-resolution assets, etc. that we've begun to see that being utilized to a bigger extent. IMO, this places cards like the RTX 3080 in a bad spot, but is it really a deal breaker? With the exception of a noteworthy game I'm about to mention, we've yet to see even the 3080 fall dramatically behind its AMD counterpart despite having 6 GB less.

Why was 8 GB enough for so long? Want to take a guess at how much VRAM the PS4 had? The PS5 and Xbox Series X launched in November 2020 with 16 GB; we're in 2024, and 12 GB is inexcusable and will be an obvious limitation unless you upgrade often - which you shouldn't need to when you're spending more than $600. It's ridiculous, and Nvidia is doing its best to kill PC gaming and make people turn to consoles.

It's cool that you mention the 3090 as well; that was problematic for an entirely different set of reasons: the Titan cards used to have professional features enabled, and bringing back the x90-class card was supposed to replace the Titan and was even often advertised as such, but they cut off access to those features.

"Vote with your wallet" and all that doesn't matter all that much when mindless people will buy whatever is available, but anyway... here's to hoping the 50 series isn't as bad as this one, but I'm not holding my breath; given the current AI boom, we'll need to wait for an AI winter.
 
Joined
Jul 13, 2016
Messages
3,321 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
Anyway, let's be realistic for a moment here. When the 1080 Ti launched 7 years ago in the Pascal refresh cycle, 11 GB was truly massive. And we all know that 11 GB was because Nvidia only removed a memory chip to lower the bandwidth and ensure that the 2016 Titan X retained a similar level of performance (despite having slower memory) and that there was a significant enough gap between it and the then-new Titan Xp, which came with the complete GP102 chip and the fastest memory that they had at the time. For most of its lifetime you could simply enable all settings and never worry about that.

Compared to the jump in VRAM of prior gens, including from the 780 Ti to the 980 Ti, 11 GB was slightly below normal. The 780 Ti to 980 Ti was a straight doubling, the 1080 Ti a near doubling. The 2080 Ti was zero increase, and the 3080 Ti was a 1 GB increase to 12 GB. It seems revisionist to call the 1080 Ti's VRAM allotment massive; no, it was a pretty normal increase at the time. Only through 7 years of Nvidia cuckery does it appear big.

GPU memory capacities began to rise after the Turing and RDNA generation but we were still targeting 8 GB for the performance segment parts (RTX 2080, 5700 XT), and these cards weren't really all that badly choked by these. Then came RDNA 2 and started to give everyone 12 and 16 GB like candy, but that still didn't give them a very distinct advantage over Ampere, that was still lingering on 8-10 GB, and then eventually 12 GB for the high end. Nvidia positioned the RTX 3090 with 24 GB to basically assume both the role of gaming flagship and a replacement for the Titan RTX, while removing a few perks of the latter and technically lowering the MSRP by $1000 (which was tbh an acceptable tradeoff). We know reality is different, but if we exclude the RTX 3090, then only AMD really offered a high VRAM capacity.

The 2080 and 5700 XT are not in the same class. x700 is distinctly AMD's midrange, whereas xx80 is Nvidia's high end. The 2080 only having 8 GB of VRAM was disgraceful; the 2000 series was a joke of a generation.

The 3090 (and its variants) is the only card that actually increased VRAM on the Nvidia side; the rest of the stack saw little to no movement, especially if you factored in price (with or without inflation). The 3090 was also super expensive.

I agree, more VRAM is better. But realistically speaking, it's not a life or death situation, at least not yet, and I don't think it'll be one for some time to come, especially if you're willing to drop textures from "ultra" to "high" and not run extreme raytracing settings. Perhaps the only game where 10-12 GB has become a true limitation at this point is The Last of Us, which seems to use VRAM so aggressively that it's not going to perform on a GPU that has less than 16 GB of VRAM, as evident here given the RX 6800 XT wins out against the otherwise far superior 4070 Ti.

You don't seem to understand that my argument has never been "hey look at this chart, it shows more VRAM = better". Yes you can show that but the point I've been making, and have repeated 4 times now is:

"Hardware leads software, not the other way around. It's backwards to imply that because games don't use x amount of VRAM today, x amount of VRAM isn't worthwhile. That kind of logic only perpetuates stagnation of VRAM capacity. Devs aren't going to make games with VRAM requirements the vast majority of cards can't meet. It's up to the hardware vendors to increase GPU VRAM allowance so that game devs can then create games that utilize that extra capacity."

There isn't ever going to be a point where games are releasing en masse with requirements that make most people's experience unenjoyable. You don't seem to understand that one has to come before the other, instead arguing that because the latter hasn't come, the former is fine. Again, backwards logic.
 
Joined
Dec 25, 2020
Messages
6,978 (4.80/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
The 2080 and 5700 XT are not in the same class. x700 is distinctly AMD's midrange, whereas xx80 is Nvidia's high end. The 2080 only having 8 GB of VRAM was disgraceful; the 2000 series was a joke of a generation.

The 3090 (and its variants) is the only card that actually increased VRAM on the Nvidia side; the rest of the stack saw little to no movement, especially if you factored in price (with or without inflation). The 3090 was also super expensive.

You don't seem to understand that my argument has never been "hey look at this chart, it shows more VRAM = better". Yes you can show that but the point I've been making, and have repeated 4 times now is:

"Hardware leads software, not the other way around. It's backwards to imply that because games don't use x amount of VRAM today, x amount of VRAM isn't worthwhile. That kind of logic only perpetuates stagnation of VRAM capacity. Devs aren't going to make games with VRAM requirements the vast majority of cards can't meet. It's up to the hardware vendors to increase GPU VRAM allowance so that game devs can then create games that utilize that extra capacity."

There isn't ever going to be a point where games are releasing en masse with requirements that make most people's experience unenjoyable. You don't seem to understand that one has to come before the other, instead arguing that because the latter hasn't come, the former is fine. Again, backwards logic.

The 5700 XT performs similarly to the 2080 (in the things that it can do, anyway), and the 2080 similarly uses the midrange TU104 processor. So in this specific generation, no. As for the 3090, it was supposed to take over the market segment that they extinguished after the Titan RTX (which also had 24 GB), so in that sense, it was a net zero improvement, other than the $1000 decrease in MSRP. Which didn't mean anything during that season anyhow.

And again, let's be honest. Games are generally developed with average hardware in mind, and primarily target consoles.

Why was 8 GB enough for so long? Want to take a guess at how much VRAM the PS4 had? The PS5 and Xbox Series X launched in November 2020 with 16 GB; we're in 2024, and 12 GB is inexcusable and will be an obvious limitation unless you upgrade often - which you shouldn't need to when you're spending more than $600. It's ridiculous, and Nvidia is doing its best to kill PC gaming and make people turn to consoles.

It's cool that you mention the 3090 as well; that was problematic for an entirely different set of reasons: the Titan cards used to have professional features enabled, and bringing back the x90-class card was supposed to replace the Titan and was even often advertised as such, but they cut off access to those features.

"Vote with your wallet" and all that doesn't matter all that much when mindless people will buy whatever is available, but anyway... here's to hoping the 50 series isn't as bad as this one, but I'm not holding my breath; given the current AI boom, we'll need to wait for an AI winter.

Yes, I brought all of that up and I'm in agreement. But remember that the consoles also have a unified memory pool; those 16 GB count for system and graphics memory simultaneously.
 
Joined
Jan 8, 2017
Messages
9,499 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
and the 2080 similarly uses the midrange TU104 processor.
TU104 was 545 mm², pretty huge as far as GPU dies go, definitely not midrange. Navi 10 was a much smaller die but on 7 nm; this is not a good comparison at all.
 
Joined
Jun 18, 2021
Messages
2,567 (2.01/day)
those 16 GB count for system and graphics memory simultaneously.

Architectures vary, but they don't; the PS5 has 512 MB of DDR4 for system functions, and the Xbox also has something similar, besides its weird tiered memory system. But the way it's shared is not comparable with how a desktop computer would share memory; even DDR5 is dogshit slow if the GPU ends up needing to use it.
 
Joined
Dec 25, 2020
Messages
6,978 (4.80/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
TU104 was 545 mm², pretty huge as far as GPU dies go, definitely not midrange. Navi 10 was a much smaller die but on 7 nm; this is not a good comparison at all.

Yes, it was 545 mm², but it also had several hardware capabilities (such as dedicated tensor and raytracing cores), and it was built on 12 nm, while Navi 10 completely lacks those areas and was built on 7 nm. So there's that.

Architectures vary, but they don't; the PS5 has 512 MB of DDR4 for system functions, and the Xbox also has something similar, besides its weird tiered memory system. But the way it's shared is not comparable with how a desktop computer would share memory; even DDR5 is dogshit slow if the GPU ends up needing to use it.

That handles the core OS at best, though.
 
Joined
Jul 13, 2016
Messages
3,321 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
The 5700 XT performs similarly to the 2080 and the 2080 similarly uses the midrange TU104 processor. So in this specific generation, no.

As Vya pointed out, the 2080 was a 545 mm² die. It also cost $800.

(in the things that it can do, anyway),

Sort of like how the 2080 isn't going to be doing RT in any modern game either? Yeah, you can technically do RT on either the 5700 XT or the 2080, and both are going to give you a terrible experience. The 3070 is already half in the same boat as well.

Yes, it was 545 mm², but it also had several hardware capabilities (such as dedicated tensor and raytracing cores), and it was built on 12 nm, while Navi 10 completely lacks those areas and was built on 7 nm. So there's that.

You are trying to make the argument that the 2080 WOULD be a midrange card in your hypothetical situation, but it doesn't work out because 1) it would be even more expensive in that scenario given the increased cost of 7 nm, 2) it'd still be at least an $800+ video card, and 3) your hypothetical is not reality. The 2080 in reality has the price tag, name, and die size of a higher-tier card. Let's call a pig a pig.

As for the 3090, it was supposed to take over the market segment that they extinguished after the Titan RTX (which also had 24 GB), so in that sense, it was a net zero improvement, other than the $1000 decrease in MSRP. Which didn't mean anything during that season anyhow.


xx90 cards do FP64 at half the rate of Titan cards (1:64 vs 1:32). Nvidia wanted people doing scientific calculations, one of the major uses of Titan cards, to spend more money, so they got rid of the Titan cards. The xx90 class is a more gimped version of the Titan cards, focused specifically on the gaming aspect. People didn't get a discount; most people who legitimately needed the features of the Titan cards ended up having to spend more and buy up.

And again, let's be honest. Games are generally developed with average hardware in mind, and primarily target consoles.

It entirely depends on whether the consoles are considered average at the point of measurement in their lifecycle. Console performance tends to go down over time as games become more demanding. Game requirements do not remain fixed, and games released later in a console generation often tend to perform more poorly as we reach the end of a cycle.


Yes, I brought all of that up and I'm in agreement. But remember that the consoles also have a unified memory pool; those 16 GB count for system and graphics memory simultaneously.

Couple of factors you are forgetting:

1) Both consoles have additional reserved memory just for the OS

2) Consoles have less overhead

3) Both consoles have proprietary texture decompression chips that allow them to dynamically stream data off their SSDs at high speeds and low latency, drastically reducing the amount of data that has to be stored in memory / VRAM.
 
Last edited:
Joined
Jan 8, 2017
Messages
9,499 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
xx90 cards do FP64 at half the rate of Titan cards (1:64 vs 1:32).
To be fair, 1:32 is just as useless as 1:64. My 7900 XT can do roughly 1 TFLOP of FP64 (1:32), which is actually about the same as my CPU. Only the very first Titan and the Titan V had good FP64 performance.
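For rough context, peak FP64 throughput is just the FP32 peak divided by the ratio's denominator; a minimal sketch with approximate, assumed FP32 figures for illustration:

```python
# Rough sketch: peak FP64 throughput from an FP32 peak and an FP64:FP32 ratio.
# The FP32 peaks used here are approximate, assumed values for illustration only.

def fp64_tflops(fp32_tflops, ratio_denominator):
    return fp32_tflops / ratio_denominator

print(fp64_tflops(35, 32))  # ~35 TFLOPS FP32 at 1:32 -> ~1.1 TFLOPS FP64
print(fp64_tflops(35, 64))  # same card at 1:64       -> ~0.55 TFLOPS FP64
print(fp64_tflops(15, 2))   # ~15 TFLOPS FP32 at 1:2 (Titan V-class ratio) -> ~7.5 TFLOPS FP64
```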
 
Joined
Jul 13, 2016
Messages
3,321 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
To be fair, 1:32 is just as useless as 1:64. My 7900 XT can do roughly 1 TFLOP of FP64 (1:32), which is actually about the same as my CPU. Only the very first Titan and the Titan V had good FP64 performance.

Agreed, it's really a shame that the Titan cards went from 1:2 to 1:32 in FP64 when the Titan branding was in part built on their scientific and engineering chops.
 
Joined
Dec 30, 2019
Messages
145 (0.08/day)

This is the exact memory chip used on the RTX 4080. At $55,700 for a 2,000-unit bulk order, the per-unit cost is $27.85.

$27.85 * 8 = $222.80 in memory IC costs alone.


The same goes for the slower 21 Gbps chip used in the 4070 Ti and 4090, at $25.31 per unit at bulk pricing:

$25.31 * 6 = $151.86 in memory costs alone.

Not only does the cost add up, but factor in PCB costs, power delivery component costs, etc. - losses, the fact that everyone needs a profit, yada yada, you get what I'm getting at. I'm fairly sure the BoM for a 4070 Ti must be somewhere in the vicinity of $500 as a rough estimate. The rest covers profit, distribution costs, driver software development costs, etc.

If consumers knew the volume discount on these things, there would be riots in the streets.

But, for all we know, nVidia™ might not be able to design a memory controller as well as others, and bad yields in the controller area might be making its die costs way higher than AMD's or Intel's.
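To put the volume-discount point in numbers, here's a minimal sketch of how the memory share of the BoM swings with the assumed per-chip price. The Digikey 2,000-unit figure is from the quoted post; the other per-chip prices are placeholder assumptions, not known contract prices:

```python
# Rough sketch: how a card's memory BoM changes with the assumed per-chip price.
# Chip counts reflect 2 GB GDDR6X devices: 8 chips for a 16 GB 4080-class card,
# 6 chips for a 12 GB 4070 Ti-class card. Only the first price below comes from
# the quoted Digikey figure; the others are placeholder assumptions.

def memory_bom(chip_count, price_per_chip):
    return chip_count * price_per_chip

scenarios = {
    "Digikey 2,000-unit bulk (quoted)": 27.85,
    "Hypothetical large-volume contract": 15.00,
    "Hypothetical spot-market price": 8.00,
}

for label, price in scenarios.items():
    print(f"{label}: 8 chips -> ${memory_bom(8, price):.2f}, "
          f"6 chips -> ${memory_bom(6, price):.2f}")
```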
 
Joined
Apr 19, 2018
Messages
1,227 (0.50/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11
If consumers knew the volume discount on these things, there would be riots in the streets.

But, for all we know, nVidia™ might not be able to design a memory controller as well as others, and bad yields in the controller area might be making its die costs way higher than AMD's or Intel's.
I believe that nGreedia feels that they do good compression work and that this negates the need for a larger frame buffer. In nGreedia's eyes, anyway. But the facts are slightly different. nGreedia only cares about profit, like every other corporation, but nGreedia takes this to the next level, as they simply do not care about offering any kind of value to their customers.

VRAM is the second most costly part of a graphics card, and that right there is the reason we don't see 16 GB as standard on all but the lowest-end cards in 2024. I truly worry about Blackwell, as if nGreedia doesn't up the VRAM for that generation, then we will begin to see a stutter fest in all AAA games from next year onwards. But I suspect that we will see only the unobtanium card getting a 24 or possibly 32 GB frame buffer, while the rest will top out at 16 GB, and the midrange and lower will top out at 12 GB. If so, that generation will be very bad value, even worse than the bad value of this awful generation.
 
Joined
Apr 29, 2014
Messages
4,303 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
I believe that nGreedia feels that they do good compression work and that this negates the need for a larger frame buffer. In nGreedia's eyes, anyway. But the facts are slightly different. nGreedia only cares about profit, like every other corporation, but nGreedia takes this to the next level, as they simply do not care about offering any kind of value to their customers.

VRAM is the second most costly part of a graphics card, and that right there is the reason we don't see 16 GB as standard on all but the lowest-end cards in 2024. I truly worry about Blackwell, as if nGreedia doesn't up the VRAM for that generation, then we will begin to see a stutter fest in all AAA games from next year onwards. But I suspect that we will see only the unobtanium card getting a 24 or possibly 32 GB frame buffer, while the rest will top out at 16 GB, and the midrange and lower will top out at 12 GB. If so, that generation will be very bad value, even worse than the bad value of this awful generation.
AMD and Nvidia are no different in that they are focused on profits, as that is what they are there to do. We can't say that either is objectively worse for wanting to make profits off their cards. Nvidia's major sin is the fact they have gotten complacent with whatever they release. Who can blame them? Their cards always sell very well regardless of what they do, and it's caused the market to be very stagnant/uncompetitive. If people want change, they need to start reading the gaming reviews and buying based on that instead of whatever Nvidia card they can afford. Does it absolve AMD when they make a bad card? No, a bad card should not sell well (the R9 Fury, for instance), but currently they are pretty competitive in more areas than not. If people sent a message in those areas by voting with their wallet, then Nvidia would change and improve.
Agreed, it's really a shame that the Titan cards went from 1:2 to 1:32 in FP64 when the Titan branding was in part built on their scientific and engineering chops.
Yea, it's funny how the Titan series went from a hybrid professional card, to the ultimate gaming card, to just being dropped entirely with the xx90-series branding coming back.
 
Joined
May 18, 2009
Messages
2,983 (0.52/day)
Location
MN
System Name Personal / HTPC
Processor Ryzen 5900x / Ryzen 5600X3D
Motherboard Asrock x570 Phantom Gaming 4 /ASRock B550 Phantom Gaming
Cooling Corsair H100i / bequiet! Pure Rock Slim 2
Memory 32GB DDR4 3200 / 16GB DDR4 3200
Video Card(s) EVGA XC3 Ultra RTX 3080Ti / EVGA RTX 3060 XC
Storage 500GB Pro 970, 250 GB SSD, 1TB & 500GB Western Digital / lots
Display(s) Dell - S3220DGF & S3222DGM 32"
Case CoolerMaster HAF XB Evo / CM HAF XB Evo
Audio Device(s) Logitech G35 headset
Power Supply 850W SeaSonic X Series / 750W SeaSonic X Series
Mouse Logitech G502
Keyboard Black Microsoft Natural Elite Keyboard
Software Windows 10 Pro 64 / Windows 10 Pro 64
I was at Micro Center yesterday picking up a few items, and a guy checking out in front of me was excited that he was able to pick up the last 4080 Super they had in stock (priced at $1,000 for the model he got). Damn... after tax that guy spent close to $1,100 on a 4080 Super. Good for him and all, but I don't know how you could be excited about dropping that much money on a GPU.

How he wants to spend his money is his prerogative, but if you ask me, a fool and his money are soon parted. The card alone cost almost the same as the rest of the items he was getting (SSD, MB, CPU, case, RAM, PSU). In all he was looking at around $2,300 for everything after taxes, based on the prices of the items (it is possible some were priced lower than the sticker cost).

To be fair, I asked a guy in the DIY department how many 4080 Supers they see come in, and he said just a couple here and there, and within a few days the couple that come in do sell. They are selling, but by no means are they coming in by the truckload.
 
Joined
Apr 14, 2018
Messages
697 (0.29/day)
AMD and Nvidia are no different in that they are focused on profits, as that is what they are there to do. We can't say that either is objectively worse for wanting to make profits off their cards. Nvidia's major sin is the fact they have gotten complacent with whatever they release. Who can blame them? Their cards always sell very well regardless of what they do, and it's caused the market to be very stagnant/uncompetitive. If people want change, they need to start reading the gaming reviews and buying based on that instead of whatever Nvidia card they can afford. Does it absolve AMD when they make a bad card? No, a bad card should not sell well (the R9 Fury, for instance), but currently they are pretty competitive in more areas than not. If people sent a message in those areas by voting with their wallet, then Nvidia would change and improve.

Yea, it's funny how the Titan series went from a hybrid professional card, to the ultimate gaming card, to just being dropped entirely with the xx90-series branding coming back.

The problem with that is most buyers end up purchasing Nvidia regardless of the fact that for 95% of games (without RT) AMD offers a better value product in the midrange. Voting with your “wallet” requires thinking about a purchase rather than looking at a halo product and automatically jumping to the conclusion everything down the stack is the best you can get. Eventually consumers are going to price themselves out of PC gaming by accepting whatever ridiculous price Nvidia sets.

It proves time and time again that most people make pretty poor choices about their “needs” when it comes to tech.
 
Joined
Nov 26, 2021
Messages
1,702 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04

This is the exact memory chip used on the RTX 4080. At $55,700 for a 2,000-unit bulk order, the per-unit cost is $27.85.

$27.85 * 8 = $222.80 in memory IC costs alone.


The same goes for the slower 21 Gbps chip used in the 4070 Ti and 4090, at $25.31 per unit at bulk pricing:

$25.31 * 6 = $151.86 in memory costs alone.

Not only does the cost add up, but factor in PCB costs, power delivery component costs, etc. - losses, the fact that everyone needs a profit, yada yada, you get what I'm getting at. I'm fairly sure the BoM for a 4070 Ti must be somewhere in the vicinity of $500 as a rough estimate. The rest covers profit, distribution costs, driver software development costs, etc.
DRAMExchange shows otherwise. This also lines up with other reports.

[attached image: DRAMExchange DRAM spot price data]


Another data point is the price of DDR5 on Digikey. Notice it says 16Gbit. Now we all know that this price is much higher than the actual price of DRAM. DDR5 is now starting at $73 for two 16 GB DIMMs. That is less than a quarter the price of the Digikey quote if they are selling it one IC at a time. The logical conclusion is that the reel, i.e. one unit, doesn't correspond to one DRAM IC.

[attached image: Digikey DDR5 16 Gbit listing]

It goes down to $13.5 when ordering in large quantities.

[attached image: Digikey bulk-quantity pricing]
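To make the kit-price comparison explicit, here's a minimal back-of-the-envelope sketch. The $73 kit price is the one quoted above; the per-unit distributor quote is an assumed illustrative value, since the screenshot figures aren't reproduced here:

```python
# Rough sketch: implied per-IC price from a retail DDR5 kit vs. a per-"unit"
# distributor quote. A 2x16 GB DDR5 kit built from 16 Gbit (2 GB) ICs contains
# 16 DRAM packages, ignoring any extra devices.

kit_price_usd = 73            # 2x16 GB DDR5 kit price quoted above
ics_per_kit = 32 // 2         # 32 GB total / 2 GB (16 Gbit) per IC = 16 ICs

implied_price_per_ic = kit_price_usd / ics_per_kit
print(f"Implied price per 16 Gbit IC from the kit: ${implied_price_per_ic:.2f}")

# Assumed illustrative distributor quote per "unit"; if one unit really were
# one IC, the kit-implied price would be a small fraction of it.
distributor_quote_per_unit = 20.00
print(f"Kit-implied price as a fraction of the quote: "
      f"{implied_price_per_ic / distributor_quote_per_unit:.0%}")
```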
 
Last edited: