Friday, February 9th 2024

Widespread GeForce RTX 4080 SUPER Card Shortage Reported in North America

NVIDIA's decision to shave $200 off its GeForce RTX 4080 GPU tier has caused a run on retail since the launch of SUPER variants late last month. VideoCardz has investigated an apparent North American supply shortage. The adjusted $999 base MSRP appears to be an irresistible prospect for discerning US buyers—today's report explains: "a week after its release, GeForce RTX 4080 SUPER cards are not available at any major US retailer for online orders." At the time of writing, no $999 models are available to purchase via e-tailers (for delivery)—BestBuy and Micro Center have a smattering of baseline MSRP cards (including the Founders Edition), but for in-store pickup only. Across the pond, AD103 SUPER's supply status is a bit different: "On the other hand, in Europe, the situation appears to be more favorable, with several retailers listing the cards at or near the MSRP of €1109."

The cheapest custom GeForce RTX 4080 SUPER SKU, at $1123, seems to be listed by Amazon.com. Almost all of Newegg's product pages are displaying an "Out of Stock" notice—ZOTAC GAMING's GeForce RTX 4080 SUPER Trinity OC White Edition model is on "back order" for $1049.99, while the only "in stock" option is MSI's GeForce RTX 4080 Super Expert card (at $1149.99). VideoCardz notes that GeForce RTX 4070 SUPER and RTX 4070 Ti SUPER models are in plentiful supply, which highlights a big contrast in market conditions for NVIDIA's latest Ada Lovelace families. The report also mentions an ongoing shortage of GeForce RTX 4080 (non-SUPER) cards, going back weeks prior to the official January 31 rollout: "Similar to the RTX 4090, finding the RTX 4080 at its $1200 price point has proven challenging." Exact sales figures are not available to media outlets—it is unusual to see official metrics presented a week or two after a product's launch—so we will have to wait a little longer to find out whether demand has far outstripped supply in the USA.
Source: VideoCardz

95 Comments on Widespread GeForce RTX 4080 SUPER Card Shortage Reported in North America

#76
Dr. Dro
AnotherReaderDRAMExchange shows otherwise. This also lines up with other reports.



Another data point is the price of DDR5 on Digikey. Notice it says 16Gbit. Now we all know that this price is much higher than the actual price of DRAM. DDR5 is now starting at $73 for two 16 GB DIMMs. That is less than a quarter the price of the Digikey quote if they are selling it one IC at a time. The logical conclusion is that the reel, i.e. one unit, doesn't correspond to one DRAM IC.


It goes down to $13.5 when ordering in large quantities.

This isn't the same model or even type of IC, though. Each IC has a different cost, so it's an apples-to-oranges comparison. Sure, using standard ~16-18 Gbps GDDR6 is cheaper, but Nvidia's cards use the specific bin and chip type of GDDR6X that I referred to in the Digikey argument. You can double-check with W1zzard's reviews.
evernessinceAs Vya pointed out, the 2080 was a 545mm2 die. It also cost $800.



Sort of like how the 2080 isn't going to be doing RT on any modern game either? Yeah, you can technically do RT on either the 5700 XT or the 2080, and both are going to give you a terrible experience. The 3070 is already halfway in the same boat as well.



You are trying to make this hypothetical argument that the 2080 WOULD be a midrange card in your hypothetical situation, but it doesn't work out because 1) it would be even more expensive in that scenario given the increased cost of 7nm, 2) it'd still be at least an $800+ video card, and 3) your hypothetical is not reality. The 2080 in reality has the price tag, name, and die size of a higher-tier card. Let's call a pig a pig.




xx90 cards do FP64 at half the rate of Titan cards (1:64 vs 1:32). Nvidia wanted people doing scientific calculations, one of the major uses of Titan cards, to spend more money, so it got rid of Titan cards. The xx90 class is a more gimped version of the Titan cards, focused specifically on the gaming aspect. People didn't get a discount; most people who legitimately needed the features of the Titan cards ended up having to spend more and buy further up the stack.



Entirely depends on whether the consoles are considered average at the point of measure in their lifecycle. Console performance tends to go down over time as games become more demanding. Game requirements do not remain fixed, and games released later in a console generation tend to perform more poorly as we reach the end of a cycle.




Couple of factors you are forgetting:

1) Both consoles have additional reserved memory just for the OS

2) Consoles have less overhead

3) Both consoles have proprietary texture decompression chips that allow them to dynamically stream data off their SSDs at high speeds and low latency, drastically reducing the amount of data that has to be stored in memory / VRAM.
I've not forgotten any of this.

1. No, the RX 5700 XT cannot do ray tracing at all. It's just not compatible with the technology, and AMD has made absolutely no effort to support a software DXR driver on this hardware. This was their choice. The RTX 2080 may lack the performance of its higher-end and contemporary siblings, but it is fully compatible with DirectX 12 Ultimate. AMD has no claim to this prior to RDNA 2.

2. Regardless of die size and cost, the TU104 served as Turing's midrange offering alongside the low-end TU106 and the high-end TU102. Its larger die area is owed to the earlier fabrication process and the presence of extra hardware features that the competition plain didn't support. We know Turing was expensive, but it's the one thing you're completely unwilling to accept: the reality is that midrange GPUs have been costing $800 for some time now. And that's not about to change. The way the market is going - and this includes AMD - is that the pricing "floor" on cards that are worth buying is consistently being raised generation after generation. There's an interesting video that's been making the rounds recently where the guy approaches exactly this problem:


3. Regarding FP64, frankly, who cares? The ratio may have changed to keep this at more or less the same level, but FP64 is increasingly unimportant across all kinds of GPU computing segments. It has been for many years; remember 10 years ago, when Titan X Maxwell removed the dedicated FP64 cores that the Kepler models had? It's not that they were disabled, Maxwell simply didn't support them... there was no demand for that then, and there is no demand for it now. Titan optimizations went beyond FP64: Nvidia simply enabled the optimizations from its enterprise drivers for targeted applications such as SPECviewperf and similar suites, in answer to AMD's Vega Frontier Edition semi-pro GPU. I owned one, and AMD abandoned it, feature-incomplete, far before announcing GCN5 EOL, because they simply did not care to maintain that product and the promise they had made to its buyers.
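For what it's worth, the 1:32 vs 1:64 ratios argued over in this thread are easy to put in absolute terms. The FP32 figures below are hypothetical placeholders (not measured specs for any real card), chosen only to show that a worse ratio on a chip with a much larger FP32 base can still land at a comparable or higher absolute FP64 rate:

```python
def fp64_tflops(fp32_tflops: float, divisor: int) -> float:
    """FP64 throughput when the FP64 units run at 1/divisor of the FP32 rate."""
    return fp32_tflops / divisor

# Placeholder FP32 throughput numbers, purely illustrative:
titan_style = fp64_tflops(16.0, 32)  # older Titan-class card, 1:32 ratio
xx90_style = fp64_tflops(80.0, 64)   # newer xx90-class card, 1:64 ratio
print(titan_style, xx90_style)
```

With these placeholder inputs, the halved ratio is more than offset by the larger FP32 base, which is the sense in which the ratio change can "keep this at more or less the same level".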

4. Neither the PS5 nor the Xbox Series is particularly powerful in comparison to a contemporary PC. Digital Foundry has placed an RTX 4080 SUPER at about 3-3.2x the performance of a PS5. The PS5 may have a few tricks up its sleeve, like the aforementioned dedicated decompression engine, but... on the other hand, we've got far more powerful processors with much faster storage and much faster memory available, so really, that balances out even if you disregard things like DirectStorage.

GhostRyderNo, it's intentionally making their products obsolete sooner. Nvidia is very careful when it comes to making cards last for a certain amount of time, as that's their business strategy for getting people to upgrade. VRAM is an easy way to limit a card's future prospects, as we have seen VRAM usage grow exponentially, and it's a reason Nvidia forced vendors to stop putting out special versions of cards with more VRAM. Only the very top of the product stack has reasonable VRAM amounts that would allow you to use the card effectively for longer. Hence why the card had a 192-bit bus and lower VRAM at launch (but now we have a newer version with a 256-bit bus and 16 GB near the end of the cycle): because they are trying to gin up a reason for the holdouts to purchase it.

I agree to a point on the optimization argument, as it has become more commonplace for developers to just dump all textures into GPU memory to avoid having to work harder to pull those assets when needed. But that unfortunately is the reality, and we now have to contend with that fact when purchasing a GPU.
The "intentionally making their products obsolete sooner" argument flies directly in the face of Nvidia's lengthy and comprehensive software support over the years. I'd argue nothing makes a graphics card more obsolete than ceasing its driver support, and AMD has already axed pretty much everything older than the RX 5700 XT already.

It's not some big scheme; they just upmarked the SKUs in relation to the processor that powered them way too much. AMD did the same, which is why you ended up with such a massive gap between the 7600 (full Navi 33) and the 7800 XT (full Navi 32). They had to shoehorn SKUs between these, and it resulted in a crippled 7700 XT that's got 12 GB of VRAM and became unpopular because of it, and a 7600 XT that's basically a 16 GB 7600 that... no one sane would purchase at the prices being asked, and it was received just as poorly as the 16 GB 4060 Ti.
GhostRyderI don't disagree with that statement, though I would not say 95% of the time, as I think it's a much closer battle than that depending on which cards you are comparing. That being said, I stand by saying the 7900 XTX is a better value at $900 than buying an RTX 4080 S at $1000.
For $100 you're taking:

- NVIDIA's vastly superior software ecosystem (all the bells and whistles) with a much longer support lifecycle
- Identical raster with 20% extra RT performance
- A GPU with a higher power efficiency figure

That's very much up to you... personally, I wouldn't touch the XTX if I was asked to choose between it at $900 and the 4080S at $1K. The XTX needs to be priced at $799 to become a clear winner in my eyes.
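The $900-vs-$1000 disagreement in this back-and-forth can be framed as performance per dollar. The indices below are normalized placeholders (raster set equal for both cards, per the thread's "<5% apart" claim, and a ~20% RT edge for the 4080 SUPER, as stated above), not benchmark results:

```python
# Illustrative value comparison using the prices and relative-performance
# claims quoted in this thread; the 100/120 indices are placeholders.
cards = {
    "RX 7900 XTX":    {"price": 900,  "raster": 100, "rt": 100},
    "RTX 4080 SUPER": {"price": 1000, "raster": 100, "rt": 120},
}
for name, c in cards.items():
    raster_value = c["raster"] / c["price"] * 1000  # perf index per $1000
    rt_value = c["rt"] / c["price"] * 1000
    print(f"{name}: {raster_value:.1f} raster/$1K, {rt_value:.1f} RT/$1K")
```

On these placeholder numbers, the XTX wins raster per dollar (~111 vs 100 per $1K) while the 4080 SUPER wins RT per dollar (~120 vs ~111), which is exactly the trade-off the two sides are arguing over.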
Posted on Reply
#77
AnotherReader
Dr. DroThis isn't the same model or even type of IC, though. Each IC has a different cost, so it's an apples to oranges comparison. Sure using standard ~16-18Gbps GDDR6 is cheaper, but Nvidia's cards have that specific bin, and chip type of GDDR6X that I referred to in the Digikey argument. You can double check with W1zzard's reviews.



I've not forgotten any of this.

1. No, RX 5700 XT cannot do raytracing at all. It's just not compatible with the technology and AMD has made absolutely no effort to support a software DXR driver on this hardware. This was their choice. The RTX 2080 may lack the performance of its higher end and contemporary siblings, but it is fully compatible with DirectX 12 Ultimate. AMD has no claim to this prior to RDNA 2.

2. Regardless of die size and cost, the TU104 served as Turing's midrange offering alongside the low-end TU106 and the high-end TU102. Its larger die area is owed to the earlier fabrication process and the presence of extra hardware features that the competition plain didn't support. We know Turing was expensive, but it's the one thing you're completely unwilling to accept: reality is that midrange GPUs have been costing $800 for some time now. And that's not about to change. The way the market is going - and this includes AMD, is that the pricing "floor" on cards that are worth buying is consistently being raised generation after generation. There's an interesting video that's been making the rounds recently where the guy approaches exactly this problem:


3. Regarding FP64, frankly, who cares? The ratio may have changed to more or less keep this about the same level but FP64 is increasingly unimportant across all kinds of GPU computing segments. Has been for many years, remember 10 years ago when Titan X Maxwell removed the FP64 dedicated cores that the Kepler models had? It's not that they were disabled, Maxwell simply didn't support that... there was no demand for that then, and there is no demand for it now. Titan optimizations went beyond FP64, they simply enabled the optimizations from their enterprise drivers for targeted applications such as specviewperf and similar suites in an answer to AMD's Vega Frontier Edition semi-pro GPU. I owned one and AMD abandoned it, feature incomplete, far before announcing GCN5 EOL because they simply did not care for maintaining that product and the promise they had made to its buyers.

4. Neither the PS5 nor the Xbox Series are particularly powerful in comparison to a contemporary PC. Digital Foundry's placed an RTX 4080 SUPER at about ~3-3.2x the performance of a PS5. The PS5 may have a few tricks up its sleeve like the aforementioned dedicated decomp engine but... on the other hand, we've got far more powerful processors with much faster storage and much faster memory available, so really, it balances that out even if you disregard things like DirectStorage.




The "intentionally making their products obsolete sooner" argument flies directly in the face of Nvidia's lengthy and comprehensive software support over the years. I'd argue nothing makes a graphics card more obsolete than ceasing its driver support, and AMD has already axed pretty much everything older than the RX 5700 XT already.

It's not some big scheme, they just upmarked the SKUs in relation to the processor that powered them way too much. AMD did the same, and why you ended with such a massive gap between the 7600 (full Navi 33) and the 7800 XT (full Navi 32). They had to shoehorn SKUs between these, and it resulted in a crippled 7700 XT that's got 12 GB of VRAM and became unpopular because of it, and a 7600 XT that's basically a 16 GB 7600 that... no one sane would purchase at the prices being asked, and was received just as poorly as the 16 GB 4060 Ti.



For $100 you're taking:

- NVIDIA's vastly superior software ecosystem (all the bells and whistles) with a much longer support lifecycle
- Identical raster with 20% extra RT performance
- A GPU with a higher power efficiency figure

That's very much up to you... personally, I wouldn't touch the XTX if I was asked to choose between it at $900 and the 4080S at $1K. The XTX needs to be priced at $799 to become a clear winner in my eyes.
How is it even remotely realistic for Nvidia to be paying 8 times as much as the memory I linked? By the same logic, DDR5 also seems to cost much more on Digikey. I believe that you're misunderstanding the packaging to refer to one IC, when all the evidence seems to point to it being more than one IC. Tape and reel doesn't contain one IC.
In tape and reel, components are set into specially designed pockets in a long piece of plastic tape (the "tape"). This tape is then sealed to keep the components in place and wound around a central "reel". This method of packaging helps protect the components from damage or dust during storage.
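The objection above also works out numerically. Using the figures quoted earlier in this thread (a $73 retail kit of two 16 GB DDR5 DIMMs built from 16 Gbit ICs), the implied per-IC price is a small fraction of a single-unit Digikey quote; a back-of-the-envelope sketch:

```python
# Implied per-IC price of DDR5 from retail kit pricing. Figures are the
# ones quoted in this thread; treat them as illustrative, not authoritative.
kit_price_usd = 73.0          # two 16 GB DDR5 DIMMs at retail
dimm_capacity_gbit = 16 * 8   # 16 GB per DIMM = 128 Gbit
ic_density_gbit = 16          # 16 Gbit per DRAM IC
ics_per_dimm = dimm_capacity_gbit // ic_density_gbit  # 8 ICs per DIMM
total_ics = 2 * ics_per_dimm                          # 16 ICs in the kit
price_per_ic = kit_price_usd / total_ics
print(f"{total_ics} ICs -> ~${price_per_ic:.2f} per 16 Gbit IC")
```

Even before the DIMM PCB, SPD, and retail margin are subtracted, that lands around $4.56 per IC, far below a per-unit distributor quote, consistent with one reel holding many ICs.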
#78
Dr. Dro
AnotherReaderHow is it even remotely realisitic for Nvidia to be paying 8 times as much as the memory I linked. By the same logic, DDR5 also seems to cost much more on Digikey. I believe that you're misunderstanding the packaging to refer to one IC when all the evidence seems to point to it being more than one IC. Tape and Reel doesn't contain one IC.

It'll vary by the precise chip type, I understand what you're trying to get at, though. I'm sure it's less than even large bulk cost on IC suppliers like Digikey or Mouser, but those are about the best leads we've got without being industry insiders with access to these deals.
#79
AnotherReader
Dr. DroIt'll vary by the precise chip type, I understand what you're trying to get at, though. I'm sure it's less than even large bulk cost on IC suppliers like Digikey or Mouser, but those are about the best leads we've got without being industry insiders with access to these deals.
Yes, we can only guess at upper bounds for cost based on information available from places like DRAMeXchange. Higher speed bins will be more expensive, but not as expensive as one would expect, and then there's the fact that AMD, Intel, and Nvidia enjoy volume discounts.
#80
rv8000
GhostRyderThat is why I bet there is this "shortage"; it's artificial, to make people buy them when they see them in stock. It helps sales when people who constantly see a card sold out are willing to buy it the moment they see it in stock. I was at Micro Center in Dallas this week as well, and they told me they got a very low amount of stock.


I don't disagree with that statement, though I would not say 95% of the time, as I think it's a much closer battle than that depending on which cards you are comparing. That being said, I stand by saying the 7900 XTX is a better value at $900 than buying an RTX 4080 S at $1000.
Considering most game time is going into multiplayer games and MOBAs like COD, Valorant, League, etc., in the massive majority of games being played (lacking RT, or more often than not with it disabled for high FPS), AMD literally offers better or the same rasterized performance at lower price points across the board. I stand by that statement 100%.

Tech enthusiasts overestimate what your average “gamer” is doing with their hardware. Chances are they’re streaming TFT on Twitch and listening to Spotify, not running CP2077 at 8K with path tracing on a $15K rig.
#81
GhostRyder
Dr. DroThe "intentionally making their products obsolete sooner" argument flies directly in the face of Nvidia's lengthy and comprehensive software support over the years. I'd argue nothing makes a graphics card more obsolete than ceasing its driver support, and AMD has already axed pretty much everything older than the RX 5700 XT already.

It's not some big scheme, they just upmarked the SKUs in relation to the processor that powered them way too much. AMD did the same, and why you ended with such a massive gap between the 7600 (full Navi 33) and the 7800 XT (full Navi 32). They had to shoehorn SKUs between these, and it resulted in a crippled 7700 XT that's got 12 GB of VRAM and became unpopular because of it, and a 7600 XT that's basically a 16 GB 7600 that... no one sane would purchase at the prices being asked, and was received just as poorly as the 16 GB 4060 Ti.
I checked AMD's website; the newest product that has no further driver support would be the R9 Fury series from 2015. Everything after that has a driver as new as 1/24/2024. Nvidia stops major support after the next generation comes out and offers a driver that has no improvements or changes for the product after that point. They just keep the driver profile checked off in their downloads. My Titan has seen zero changes since the 2XXX series came out (I am not complaining about that, just pointing it out). Neither company supports its products much after the next gen comes out.

I stand by the argument: at the moment, Nvidia makes its midrange cards only really good for the current generation, not for much of a future, unlike its top products. Yes, the GPUs are less powerful, but memory has been what holds many of the cards back at times. I am mostly aiming this at the second main tier of cards, like the XX70 series. It's totally reasonable for the 4060 Ti and cards around and below it to have less memory.
Dr. DroFor $100 you're taking:

- NVIDIA's vastly superior software ecosystem (all the bells and whistles) with a much longer support lifecycle
- Identical raster with 20% extra RT performance
- A GPU with a higher power efficiency figure

That's very much up to you... personally, I wouldn't touch the XTX if I was asked to choose between it at $900 and the 4080S at $1K. The XTX needs to be priced at $799 to become a clear winner in my eyes.
I again would argue with your first bullet point's definition of "vastly superior", because it all depends on how you look at it. I stand by saying AMD's software center is superior in this day and age, and neither has major driver issues anymore.
The raster-versus-ray-tracing argument is a choice. Yes, there's less than a 5% difference between the RX 7900 XTX and RTX 4080 S overall in raster, with the edge going to the 7900 XTX. But that is general gaming versus a tech that is not in every game, which is where I argue. It's a choice in the end: whether they prefer raster performance (something for every game) or ray tracing performance, which is available in select games and is a performance killer on all GPUs.
The power argument is true, but you're talking about something like a 50-watt difference depending on the load, which in reality is not making much of a difference in someone's electric bill even if they stress the card 24/7. Not to mention people didn't seem to care about that with the RTX 3XXX series (yes, I know people complained, but the products still sold). I generally only use that argument when the difference is over 100 watts for the same performance.
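The electric-bill claim is easy to sanity-check. Assuming a 50 W delta under constant load and a $0.15/kWh electricity rate (both placeholder figures; your wattage gap and local rate will differ):

```python
# Annual running-cost difference for a 50 W power gap, worst case (24/7 load).
delta_watts = 50
hours_per_year = 24 * 365                          # 8760 h
rate_usd_per_kwh = 0.15                            # assumed electricity price
extra_kwh = delta_watts * hours_per_year / 1000    # energy difference in kWh
extra_cost = extra_kwh * rate_usd_per_kwh
print(f"~{extra_kwh:.0f} kWh/year -> ~${extra_cost:.0f}/year")
```

Even in this worst case, it comes out to roughly $66/year; a gaming-only duty cycle of a few hours a day would be a small fraction of that.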
I mean, that's fine, but I am talking about most gamers when making these arguments. I am fine if a person says they want more RT performance for the games they play; then the obvious choice is the 4080 S, but most people I talk to/game with could not care less. They care about stretching their budget as much as possible to get the most performance they can.
rv8000Considering most game time is going into multiplayer games and mobas like COD, valorant, league etc…, in the massive majority of games being played (lacking RT or more often than not disabled for high FPS) AMD literally offers better or the same rasterized performance at lower price points across the board. I stand by that statement 100%.

Tech enthusiasts overestimate what your average “gamer” is doing with their hardware. Chances are theyre streaming TFT on twitch and listening spotify, not running CP2077 at 8k with path tracing on a 15k rig.
I am not even referring to RT in my comparison, just saying the overall difference between the two is less than 5%, and once you drop down to different categories, the differences between them make some better values than others.
#82
iameatingjam
Vya DomusThey probably just didn't make many of these. I doubt there were that many people on the fence at this point in time who were about to drop $1K+ on a video card. Actually, I doubt there are that many people ready to do that, period.
I think there are people who this appeals to... there are always people building new systems, and those need video cards, whenever they happen. I can see people who would have gone for a 4090 before going for a 4080S now. I think I would have when I got my 4090; I just refused to pay that spit-in-the-face price of $1200 (or more like $1600+ here). That was just too much to swallow for a card that was so much slower than the halo product.

I know this isn't a reliable metric or anything, but I have been seeing a lot of people asking about the 4080S on Facebook and Reddit lately.

Though I do agree that low stock does not necessarily equal high demand.
#83
rv8000
iameatingjamI think there's people who this appeals to... there's always people building new systems all the time, and those need video cards, whenever they happen. I can see people who would have gone to a 4090 before, going to a 4080s now. I think I would have when I got my 4090. I just refused to pay that spit the face price of $1200 ( or more like $1600+ here), that was just too much to swallow for a card that was so much slower than the halo product.

I know this isn't a reliable metric or anything but I have been seeing a lot of people asking about 4080S on facebook and reddit lately.

Though I do agree on the whole low stock does not necessarily equal high demand thing.
The new “MSRP” is a load. Trickle-in/low-stock release; give it a few weeks and all of the 4080 SUPER cards are going to sit around $1200 in the US, +/- $50. Not to mention the FE cards probably won’t exist after the first few weeks. It’s more along the lines of a brief price cut for some (bad?) publicity. There’s next to no appeal at all.

The only marginally good SUPER card was the 4070S, and even then that’s a stretch with the awful midrange pricing we now have.
#84
iameatingjam
rv8000The new “msrp” is a load. Trickle in/low stock release, give it a few weeks and all of the 4080 super cards are going to sit around $1200 in the US +/- $50. Not to mention the FE cards probably won’t exist after the first few weeks. It’s more along the lines of a brief price cut for some (bad?) publicity. Theres next to no appeal at all.

The only marginally good super card was the 4070s, and even then thats a stretch with the awful midrange pricing we now have.
You're not wrong that a lot of them are in that price range. That would kind of explain the high interest in the MSRP, or close-to-MSRP, models for those brief moments we see them. I quickly checked the American Newegg, and it looks like you can backorder a Zotac card for $1050 that is said to be back in stock tomorrow. If you were already planning to spend $1000, an extra $50 isn't that hard to eat. Also, it's white, and people seem to like that for whatever reason.
#85
Dr. Dro
GhostRyderI checked AMD's website, the newest product that has no further driver support would be the R9 Fury series from 2015. Everything else after has a driver as new as 1/24/2024. Nvidia stops major support after the next generation comes out and offers a driver that has no improvements or changes to the product after that point. They just keep the driver profile checked off in their downloads. My Titan has seen 0 changes since the 2XXX series came out (I am not complaining about that, just pointing out). Neither company is supporting the products much after next gen comes out.
It's quite misleading, actually. AMD no longer updates the drivers for RX Vega and RX 400/500 GPUs; they're strictly on a quarterly maintenance release, just like the R470 security updates Nvidia still releases for Kepler GPUs. They call it "24.1.1", but the driver under the hood is completely different, as RDNA is supported by the current 23.40 branch. GCN cards never received the 23.20 and newer branches; they're still on the 23.19 series, which is the branch that, if I recall correctly, the "23.9.1" driver was first released from. I don't blame you for missing this detail; although it's very important, it's not made anywhere near as clear as it should have been.

RDNA: (driver download screenshot)

GCN 4/5: (driver download screenshot)

Regarding quality, I'll take your word with a massive grain of salt. I'm well aware of the progress with the AMD drivers, and I must say that so far, I am not yet satisfied; they have much grueling work to do. But I'm hopeful that by the 8900 XTX, or whatever RDNA 4 is called, they'll have a solid thing going on. I doubt it, but I'm probably going to give them a chance if I manage to upgrade while keeping my 4080.

Needless to say, your Pascal card is still very much supported, and fixes are actively developed for it; most recently, they're aware of and developing a fix for configurations with HAGS+SLI having random freeze issues. It may miss out on some of the newer RTX features, but it already supports things like HAGS, which only RDNA 3 has come to support. It received all of the other features that don't rely on tensor cores too, like image sharpening, integer scaling, the software DXR driver, etc., which AMD either doesn't support at all or hasn't backported to its older architectures... and going well beyond that, both Maxwell and Pascal are still getting routine bug fixes and Game Ready profiles, so there's very little to complain about there, IMHO. You even have a fancy Xp... I took a look at Pascal recently (and for the first time) with a 1070 Ti I scored some time ago.
#86
rv8000
Dr. DroIt's quite misleading, actually. AMD no longer updates the drivers for RX Vega and RX 400/500 GPUs, they're strictly on a quarterly maintenance release, just like the R470 security updates Nvidia still releases for Kepler GPUs. They call it "24.1.1", but the driver under the hood is completely different, as the RDNA is supported by the current 23.40 branch. GCN cards never received the 23.20 and newer branches, it's still on 23.19 series, which is the branch that, if I recall correctly, the "23.9.1" driver was first released at. I don't blame you for missing this detail, although it's very important, it's not made anywhere near as clear as it should've been.

RDNA:



GCN 4/5:



Regarding quality, I'll take your word with a massive grain of salt, I'm well aware of the progress with the AMD drivers and I must say that so far, I am not yet satisfied. They have much grueling work to do. But I'm hopeful that by the 8900 XTX or whatever RDNA 4 is called, they'll have a solid thing going on. I doubt it, but I'm probably going to give them a chance if I manage to upgrade while keeping my 4080.

Needless to say, your Pascal card is still very much supported and fixes are actively developed for it - most recently they're aware of and developing a fix for configurations that have HAGS+SLI having random freeze issues. It may miss out on some of the newer RTX features, but it already supports things like HAGS that only RDNA 3 have come to support. It received all of the other features that don't rely on tensor cores too, like image sharpening, integer scaling, software DXR driver, etc. that AMD either doesn't support at all, or hasn't backported to their older architectures... and going well beyond that, both Maxwell and Pascal are still getting routine bugfixes and game ready profiles, so there's very little to complain there, IMHO. You even have a fancy Xp... I took a look at Pascal recently (and for the first time) with a 1070 Ti I scored some time ago.
Currently owning both AMD and Nvidia GPUs, from an overall functionality standpoint AMD drivers are superior, IMO. The user interface and built-in monitoring/OC tools are the icing on the cake.

I haven’t run into any significant driver issues on either side (running a 7900 XTX, 3080, and 2070S).

For a company so often touted as a software company, Nvidia could really stand to update the UI.
#87
Dr. Dro
rv8000Currently owning both amd and nvidia GPUs, from an overall functionality standpoint AMD drivers are superior imo. The user interface and built in monitoring/oc tools are the icing on the cake.

I haven’t ran into any significant driver issues on either side (running 7900XTX, 3080, and 2070s).

For often being touted as a software company, nvidia could really serve to update the UI.
I don't care about the UI; I care about the driver's performance, functionality (and by this I mean the software that is able to run on it), and stability. I'll acknowledge progress, primarily in the functionality area (although we could do without gaffes like the Anti-Lag+ thing getting people banned because some junior dev decided that injecting code haphazardly instead of developing a proper SDK was a great idea!), but the first and last points still stand, IMO.
#88
rv8000
Dr. DroI don't care about the UI, I care about the driver's performance, functionality (and by this I mean, the software that is able to run on it) and the stability. I'll acknowledge progress primarily on the functionality area, but the first and last points still stand IMO.
And you have no hands-on experience with current AMD drivers, so how can you come to this conclusion? Nvidia having superior drivers is just a load of BS that continues to be parroted around. Both have bugs, installation issues, or bad drivers; hell, the other day someone made a post on this forum about Nvidia drivers not detecting cards or installing properly. Clearly superior…
#89
Dr. Dro
rv8000And you have no hands on experience with current AMD drivers so how can you come to this conclusion? Nvidia having superior drivers is just a load of BS that continues to be parroted around. Both have bugs, installation issues, or bad drivers, hell the other day someone made a post on this forum about Nvidia drivers not detecting cards or installing properly. Clearly superior…
The conclusion is about as simple as: "oh, so I wanna enable all the eye candy, get me some DLAA with ray reconstruction and (actually generative) frame generation going, then punch in the GameWorks features like HBAO+; I can do all that and still have my cake and eat it too", all while not worrying whether my GPU is gonna croak and BSOD because I had VLC or something in the background. That's been my nightmare all along. Oh, and I almost forgot: actual low-latency support, because, you know, got that fancy OLED and all. And that's from a gamer's point of view, too. I wanna play a new game? Driver support is ready day one. Always. Remember last year's first quarter, when AMD ignored practically every single AAA release and kept the most recent driver exclusive to the 7900 XTX while everyone else lingered? So... I don't think I need to go much further. I mean, damn, I don't think I'm asking much. They certainly don't need to go as far as releasing an easy-to-use personal LLM engine to run on their GPU like Nvidia already did, but...
Posted on Reply
#90
umeng2002
It just seems more and more that nVidia wants to exit the consumer gaming market. They go from one scam to another: crypto mining to AI. When AI goes bust, I wonder what they'll move to.
Posted on Reply
#91
Dr. Dro
umeng2002It just seems more and more that nVidia wants to exit the consumer gaming market. They go from one scam to another: crypto mining to AI. When AI goes bust, I wonder what they'll move to.
To the next big thing™, just like both Intel and AMD are doing. I mean, both Meteor Lake and Phoenix-HS have their "neural processing capabilities" as the central selling point... it is the big craze right now. Truth be told, it's a bit obvious the tech industry is entering a period of stagnation; we don't really have anything new that is truly groundbreaking, and we haven't had something with a decisive wow factor since Sandy Bridge. Ryzen was responsible for bringing that to the masses, but... I'm sure everyone on a socket AM4 machine with any GPU made in the past 6 years is just dying to upgrade... not
Posted on Reply
#92
stimpy88
iameatingjamI think there are people this appeals to... there are always people building new systems, and those need video cards whenever they happen. I can see people who would have gone for a 4090 before going for a 4080S now. I think I would have when I got my 4090. I just refused to pay that spit-in-the-face price of $1200 (or more like $1600+ here); that was just too much to swallow for a card that was so much slower than the halo product.

I know this isn't a reliable metric or anything but I have been seeing a lot of people asking about 4080S on facebook and reddit lately.

Though I do agree on the whole low stock does not necessarily equal high demand thing.
Erm, a 4080 Super is the same price as a 4080. The price reduction was a two-day myth.
umeng2002It just seems more and more that nVidia wants to exit the consumer gaming market. They go from one scam to another: crypto mining to AI. When AI goes bust, I wonder what they'll move to.
Hopefully the people going all-in on A.I. will soon come to realise the tech is cool, but has no clothes. You simply cannot trust a word any of them tell you; you have to fact-check everything.

Then all nGreedia will have is the gamers they used to be all about, until they shat all over them.
Posted on Reply
#93
iameatingjam
stimpy88Erm, a 4080 Super is the same price as a 4080. The price reduction was a 2 day myth.
I already responded to that yesterday. Just look up a couple of posts.

What I did was briefly check the American Newegg, and pretty quickly found a 4080S on back order, but only by 1 day, which you could buy for $1050. Not perfect, but not quite as spit-in-my-face as $1200+. I might have considered this if I hadn't already got a 4090.

Though I tend to agree the quantity is being limited.

I was even wondering before the launch... how many 4080s will they be able to make where 0% of the cores are defective? At least that leaves a lot of dies for 4070 Ti Supers, I guess...
Posted on Reply
#94
AnotherReader
iameatingjamI already responded to that yesterday. Just look up a couple of posts.

What I did was briefly check the American Newegg, and pretty quickly found a 4080S on back order, but only by 1 day, which you could buy for $1050. Not perfect, but not quite as spit-in-my-face as $1200+. I might have considered this if I hadn't already got a 4090.

Though I tend to agree the quantity is being limited.

I was even wondering before the launch... how many 4080s will they be able to make where 0% of the cores are defective? At least that leaves a lot of dies for 4070 Ti Supers, I guess...
Given that N5's defect density is pretty good, we can assume it is no worse than 0.1 defects per square cm. AD103 is 379 mm², so even worst-case yields should be about 69% for fully functional dies.
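That ~69% figure follows from the classic Poisson yield model, Y = e^(−D·A). A quick sketch of the arithmetic (the 0.1/cm² defect density is the worst-case assumption from the post above, not a published TSMC figure):

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    """Fraction of fully functional dies under the simple Poisson yield model."""
    area_cm2 = die_area_mm2 / 100.0  # 1 cm^2 = 100 mm^2
    return math.exp(-defect_density_per_cm2 * area_cm2)

# AD103: 379 mm^2 at the assumed worst-case N5 defect density of 0.1/cm^2
print(f"{poisson_yield(0.1, 379):.1%}")  # ~68.5%, i.e. roughly 69%
```

Real-world yields differ because partially defective dies can still be harvested for cut-down SKUs (which is exactly the 4070 Ti SUPER point being made).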
Posted on Reply
#95
GhostRyder
Dr. DroIt's quite misleading, actually. AMD no longer updates the drivers for RX Vega and RX 400/500 GPUs; they're strictly on a quarterly maintenance release, just like the R470 security updates Nvidia still releases for Kepler GPUs. They call it "24.1.1", but the driver under the hood is completely different, as RDNA is supported by the current 23.40 branch. GCN cards never received the 23.20 and newer branches; they're still on the 23.19 series, which is the branch that, if I recall correctly, the "23.9.1" driver was first released from. I don't blame you for missing this detail; although it's very important, it's not made anywhere near as clear as it should've been.

RDNA: (screenshot)

GCN 4/5: (screenshot)
I am aware; they both do that. What I was referencing was zero support, meaning not even a Win 11 driver or something you can install right now and run on a modern machine. Yes, the new drivers don't offer anything and are just repackaged old drivers; I thought you were claiming zero support and zero drivers that could be downloaded for a modern machine. But I think it's going to come down to how we define support, because my reference point was reading bug fixes and performance improvements, which I generally saw more often from AMD for previous-generation cards versus Nvidia. AMD's newest control center and many of its features are also added to older-generation cards with the new drivers, depending on what it is, though many of those features are not as noteworthy. I will say Nvidia is better about bringing some of the bigger software-based features down at least two generations. My point, however, is that it comes down to how we define support. I will amend what I said and say both support their cards for a reasonable amount of time in my book.
iameatingjamI already responded to that yesterday. Just look up a couple of posts.

What I did was briefly check the American Newegg, and pretty quickly found a 4080S on back order, but only by 1 day, which you could buy for $1050. Not perfect, but not quite as spit-in-my-face as $1200+. I might have considered this if I hadn't already got a 4090.

Though I tend to agree the quantity is being limited.

I was even wondering before the launch... how many 4080s will they be able to make where 0% of the cores are defective? At least that leaves a lot of dies for 4070 Ti Supers, I guess...
Yeah, they are available. I will say that if at least a few can hold that price point, that will be fine. I am worried that the base price will disappear pretty quickly.
Posted on Reply