
Nvidia's GPU market share hits 90% in Q4 2024 (gets closer to full monopoly)

You seem to be moving the goalposts. The 970 was a bad design choice with its memory subsystem. Nothing like that has ever happened since.
This doesn't have much to do with binning, I guess.
My point there is they lied once, nothing prevents them (or anyone else) from doing it again.

I think they'll never exist, just like the full AD102 die never came to light, simply because they're too big and yields are not known to reach 100% for such big dies.
Yeah, that was really weird. If good Navi 48 outnumbers defective ones 3:1 (judging by launch day Microcenter stock), then where are the full Blackwell dies?

I had already spoken about this one before.

The 5060ti will likely be the full GB206, whereas the 5060 will be a cut-down version of that die. The 5050 is rumored to be the full GB207.
What's the point you're trying to make with such examples?
Nothing is full except for the 5080.


My point is that Super cards are coming with the good dies, and they'll be presented as soooo much value, mark my words. ;)

Well, you said yourself, the 5080 uses the full GB203 die, and that's a bit larger than the Navi 48.
The GB202 is 2.1x bigger than Navi 48.
That's one chip. Where's the rest of the lineup?

The only product that didn't use the full die at launch and got a refresh later was the AD103. And even then, the 4080 Super did not bring much to the table over the 4080 perf-wise, since it only enabled 4 more SMs. The price drop was the thing that made it an interesting product.
Fun fact: the 4080 Super was the only product using the full AD103; not even enterprise products got that configuration, which does lead me to believe that yields for that die were not good enough for such a product at launch.
I believe the above especially applies to Ampere's 3070ti and 3090ti, given how Samsung's node was a mess (albeit cheap).
And now the only full die GPU is the 5080 with the GB203.

I just don't see your point. As I said, it's not like they're withholding good, fully enabled dies.
Yes they are.

They either sell those, or cut those down to segment a product, as any other company.
Refreshes may happen, companies do it all the time, especially as node maturity increases. Are you saying that new products should not be launched?
A full die is not a refresh. It's the same product.
 
My point there is they lied once, nothing prevents them (or anyone else) from doing it again.
I don't think it was an outright lie, but rather sheer incompetence or mismanagement from within the company.
AnandTech did a great write-up on it; it's truly a mess:

They went into full damage control once this happened, and the whole product brought tons of chaos, but I don't think this is something equivalent to "5070 = 4090"; instead I think of it as something closer to Intel's CPUs degrading. Still bad, but for different reasons.
Intel at least gave extra warranty on the CPUs tho lol

Yeah, that was really weird. If good Navi 48 outnumbers defective ones 3:1 (judging by launch day Microcenter stock), then where are the full Blackwell dies?
I don't think you can estimate yields like that.
Plugging GB202's info into a yield calculator, we get almost 70% yield for a full die, and that's assuming a really, really low defect rate.
[attachment: GB202 yield calculator screenshot]

Some places estimate that the yields are closer to 56%, which would correspond to a defect density closer to 0.08 defects/cm²:
[attachment: GB202 yield calculator screenshot at the higher defect density]

If we plug in estimated dimensions for Navi 48 with both defect rates, we get something like this:

[attachments: Navi 48 yield calculator screenshots at both defect densities]

So something between 75 and 84% yield for perfect dies. That seems way better than 56 to 70%.
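For anyone who wants to sanity-check those numbers without the calculator screenshots: a minimal sketch of the simple Poisson yield model (yield = exp(-area × D0)) reproduces the same ballpark. The die areas below are the commonly reported ones and the two defect densities are the 0.05 and 0.08 defects/cm² assumed above; real yield calculators use fancier models (Murphy, Bose-Einstein), so treat this as a rough estimate only.

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Fraction of dies expected to be fully functional under a Poisson defect model."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

# Approximate die areas; defect densities match the two scenarios above.
for name, area in {"GB202": 750, "Navi 48": 357}.items():
    for d0 in (0.05, 0.08):
        print(f"{name} at D0={d0} defects/cm^2: {poisson_yield(area, d0):.0%} perfect dies")
# GB202:   ~69% and ~55%
# Navi 48: ~84% and ~75%
```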

Nothing is full except for the 5080.

The 5060ti uses the full die; it's right there in the link you posted.

My point is that Super cards are coming with the good dies, and they'll be presented as soooo much value, mark my words. ;)
Even then, what's the issue? Refreshes will come next year, a new arch the year after that, and so on and so on.
UDNA will be coming after RDNA4, and it will likely be better value as well, is that a problem?
Anyhow, the only "Super" chip that I can see using a full die is the 5070 version. 5090 won't be getting a refresh, others will either just change the memory subsystem, or use a totally different die.
That's one chip. Where's the rest of the lineup?
I spoke about those right above; mind doing a Ctrl+F for "5060ti" or "5050" within my post?
And now the only full die GPU is the 5080 with the GB203.
And so is the 5060ti, and likely the 5050 will be as well.
Yes they are.
Did you manage to see the full dies stockpiled under the jacketman's bed, where he hoards them away from the hands of gamers and the enterprise alike? :eek:
A full die is not a refresh. It's the same product.
So all this conversation is that you're pissed because a 5070 will likely have a Super variant using the full GB205?
Or is that because the GB205 chip did not have a product using the full die at release, and you really wanted the GB205 and only the full GB205?
 
RDNA4 exclusive, at least so far.

Doesn't really matter given how each vendor has a different API for their GEMM units. Using Nvidia's tensor cores is different from using Intel's XMX units, which are different again from the new RDNA 4 ones.
I'm also under the impression that RDNA4's GEMM units differ from CDNA's matrix cores, but I have no easy way to confirm this.
Maybe so, but XeSS can run on non-Intel cards via DP4a. I'd hope AMD can do the same thing for their older cards.
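For context on why DP4a is the vendor-neutral fallback: it's just a packed 8-bit dot-product-with-accumulate operation that practically every recent GPU exposes, unlike dedicated matrix units (tensor cores, XMX). A conceptual Python sketch of what a single DP4a does (real hardware packs the four lanes into one 32-bit register and handles saturation; this ignores that):

```python
def dp4a(a: list[int], b: list[int], acc: int) -> int:
    """Conceptual DP4a: multiply four signed 8-bit lane pairs and add to a 32-bit accumulator."""
    assert len(a) == len(b) == 4
    assert all(-128 <= x <= 127 for x in a + b)  # int8 lane range
    return acc + sum(x * y for x, y in zip(a, b))

# One 4-wide int8 multiply-accumulate step, e.g. inside a quantized inference kernel:
print(dp4a([12, -7, 33, 5], [4, 9, -2, 8], acc=100))  # 100 + (48 - 63 - 66 + 40) = 59
```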
 
Or is that because
All of this from someone who bought the 6500XT too, so some of the critiques do feel like a stretch but I guess everyone is free to have their own uhhhh... unique criteria for what they like and don't like in a product. After all, the 6500XT was the fully enabled die, the fully enabled die deployed in a massively gimped and terrible product.

I'd love to see FSR4 backported to older RDNA, even if it's through a different path with a different performance cost. Would be potentially a massive boon for existing handhelds.
 
I don't think it was an outright lie, but rather sheer incompetence or mismanagement from within the company.
AnandTech did a great write-up on it; it's truly a mess:

They went into full damage control once this happened, and the whole product brought tons of chaos, but I don't think this is something equivalent to "5070 = 4090"; instead I think of it as something closer to Intel's CPUs degrading. Still bad, but for different reasons.
Intel at least gave extra warranty on the CPUs tho lol
Incompetence? What, like Nvidia doesn't have engineers who know how their GPUs work or something? I find that hard to believe.
Not wanting to mention it, hoping that no one would notice and they could just get away with it sounds more plausible.

I don't think you can estimate yields like that.
Plugging GB202's info into a yield calculator, we get almost 70% yield for a full die, and that's assuming a really, really low defect rate.
[attachment: GB202 yield calculator screenshot]

Some places estimate that the yields are closer to 56%, which would correspond to a defect density closer to 0.08 defects/cm²:
[attachment: GB202 yield calculator screenshot at the higher defect density]

If we plug in estimated dimensions for Navi 48 with both defect rates, we get something like this:

[attachments: Navi 48 yield calculator screenshots at both defect densities]

So something between 75 and 84% yield for perfect dies. That seems way better than 56 to 70%.
So where's 56-70% of those dies, then?

The 5060ti uses the full die; it's right there in the link you posted.
No, it doesn't. It's an x-300 die. Full dies are designated x-400.

Even then, what's the issue? Refreshes will come next year, a new arch the year after that, and so on and so on.
UDNA will be coming after RDNA4, and it will likely be better value as well, is that a problem?
Anyhow, the only "Super" chip that I can see using a full die is the 5070 version. 5090 won't be getting a refresh, others will either just change the memory subsystem, or use a totally different die.
The problem is that it's not a refresh. It's something that they could release right at the start, but they don't because they want to look like the good guys who give you something more when they actually don't.

What if I said I could give you a chocolate bar for $2 right now? You'd be happy, right? What if I came back next week and said "you know what, I actually could have given you two chocolate bars for that price, but I didn't because I was a dick, but if you give me another $2, I'll give you those 2 chocolate bars and we call it even, ok?"

And so is the 5060ti, and likely the 5050 will be as well.
No. See above.

Did you manage to see the full dies stockpiled under the jacketman's bed, where he hoards them away from the hands of gamers and the enterprise alike? :eek:
Nobody saw the good dies. That's the point. Or maybe you know where they are?

So all this conversation is that you're pissed because a 5070 will likely have a Super variant using the full GB205?
Or is that because the GB205 chip did not have a product using the full die at release, and you really wanted the GB205 and only the full GB205?
I'm simply disappointed because I find this practice dishonest and deceitful.

Take the 4080... It came with a cut-down die for $1200. A year later, the 4080 Super got released with the full die for $1000. Where were those full dies all that time? And why wasn't the 4080 $900 or something?
 
I am shocked at how badly the Blackwell launch has gone. I now fully expect the remainder of the launch to be similarly disastrous. Nvidia really has moved on from the PC gamer and enthusiast.
 
Take the 4080... It came with a cut-down die for $1200. A year later, the 4080 Super got released with the full die for $1000. Where were those full dies all that time? And why wasn't the 4080 $900 or something?
…do you actually not understand why there are several absolutely innocuous (by big corpo standards) explanations for this other than "NV bad" that have to do with yields improving over time (as they do)? You realize that even the highly profitable enterprise segment did NOT get full AD103 cards, right? Like, straight up, the yields were THAT poor. The 4080S is a result of said yields improving and NV having enough full chip stockpile to actually generate enough cards to sell. As for why they chose to refresh the 4080 instead of funneling those chips to enterprise, well, that's quite simple - Blackwell was already selling there and as for older Ada chips - most customers were interested in AD102 products by that point. The 4080S exists as a consumer product literally because it could not sell effectively anywhere else, hence the price cut too, just to empty the stockpile.

I am shocked at how badly the Blackwell launch has gone. I now fully expect the remainder of the launch to be similarly disastrous. Nvidia really has moved on from the PC gamer and enthusiast.
Welcome to… *checks* …more than a decade ago. I hope your nap under a dune somewhere in Sahara was nice. But yes, it IS their worst launch in quite a while. They just have the luxury of not really caring. I would not too if I were them.
 
…do you actually not understand why there are several absolutely innocuous (by big corpo standards) explanations for this other than "NV bad" that have to do with yields improving over time (as they do)? You realize that even the highly profitable enterprise segment did NOT get full AD103 cards, right? Like, straight up, the yields were THAT poor. The 4080S is a result of said yields improving and NV having enough full chip stockpile to actually generate enough cards to sell. As for why they chose to refresh the 4080 instead of funneling those chips to enterprise, well, that's quite simple - Blackwell was already selling there and as for older Ada chips - most customers were interested in AD102 products by that point. The 4080S exists as a consumer product literally because it could not sell effectively anywhere else, hence the price cut too, just to empty the stockpile.
Yes, so good Navi 48 chips outnumber the defective ones 3-to-1, but yields on whatever Nvidia makes on the same TSMC node are just poor. Yeah, right...
The example I replied to stated about 56% yields on GB202. So where are the 56% of chips that did get made?

Edit: Also, 9070 and 9070 XT stock at Microcenter on launch day outnumbered 5070 stock a day before (on the 5070 launch day) 9-to-1. I'm sure there's an equally valid business reason for that too, right? :rolleyes:
The 5070 is already on a cut-back die, mind you.
 
Yes, so good Navi 48 chips outnumber the defective ones 3-to-1, but yields on whatever Nvidia makes on the same TSMC node are just poor. Yeah, right...
Navi 48 is a significantly smaller and simpler chip, yes. The bigger you go, the worse it gets. There were no issues with AD107 to AD104 yields either.

The example I replied to stated about 56% yields on GB202. So where are the 56% of chips that did get made?
…in the fucking cards. This IS what they have. You do realize that 5nm allocation NV has isn't infinite? They would rather produce GB100 for now to fulfill their outstanding orders. The GB202 and down is not a priority for now (or ever, depends), hence the low availability. They just don't really care, as they shouldn't. The gaming and enthusiast market is a deeply unserious one for them in big 2025. Meanwhile, AMD doesn't really have much to lose by pumping out at least a respectable amount of N48, which have good yields anyway. Try for a quick cash and market grab, why not. The two companies just are on absolutely different strategies and you comparing them is pointless. Again, let me drive this home - NV does not care what the consumer market wants in the current climate. They have no reason to. They might start paying attention if even the limited stock there is just isn't moving, but that's not happening for now.
 
Navi 48 is a significantly smaller and simpler chip, yes. The bigger you go, the worse it gets. There were no issues with AD107 to AD104 yields either.
Navi 48 is 357 mm2 with 54B transistors, AD103 is 379 mm2 with 46B. Navi 48 is not so much smaller, and definitely not simpler.

…in the fucking cards. This IS what they have. You do realize that 5nm allocation NV has isn't infinite? They would rather produce GB100 for now to fulfill their outstanding orders. The GB202 and down is not a priority for now (or ever, depends), hence the low availability. They just don't really care, as they shouldn't. The gaming and enthusiast market is a deeply unserious one for them in big 2025. Meanwhile, AMD doesn't really have much to lose by pumping out at least a respectable amount of N48, which have good yields anyway. Try for a quick cash and market grab, why not.
What's with the low availability? I was talking about yields. If only partially enabled dies get released, that suggests 0% yields... on the same f*ing node that AMD makes Navi 48 on! Does AMD have no problem with yields, while Nvidia's just magically improve just in time for a Super "refresh" (again, on the same node)? And the Easter bunny exists? What am I, 4, to believe shit like that? :banghead:

The two companies just are on absolutely different strategies and you comparing them is pointless.
Yes, they are. That's why I'm comparing them. Forgive me if one of them seems a touch more honest to me than the other. And I'm not a fan, I'll buy everything that suits my needs.

Again, let me drive this home - NV does not care what the consumer market wants in the current climate. They have no reason to. They might start paying attention if even the limited stock there is just isn’t moving, but that’s not happening for now.
Of course that's not happening if there is no stock whatsoever.
 
What's with the low availability? I was talking about yields. If only partially enabled dies get released, that suggests 0% yields... on the same f*ing node that AMD makes Navi 48 on! Do yields just magically improve just in time for a Super "refresh"? And the Easter bunny exists? What am I, 4, to believe shit like that? :banghead:
This level of delusion is impressive. I… don't even know how to proceed here. Do you even hear yourself? Do you not understand that even having a sub-expectation yield of fully-enabled perfect chips will by itself lead to the decision to release products based on lightly cut-down chips (like the 5070) rather than release nothing at all? And again, you keep harping on about AMD and N48 - we don't actually know for sure the yields there. AMD might have the exact same yields as NV, but since they dedicated more of their allocation to making N48 (which is self-evident) they just have MORE CHIPS. Yes, enough to release a product somewhat confidently.
Oh, and yes, yields "magically" do improve over the 1-1.5 years of production that usually precede GPU refreshes. That is, unironically, exactly what happens. That's how semiconductor fabrication works historically.
 
This level of delusion is impressive. I… don't even know how to proceed here. Do you even hear yourself? Do you not understand that even having a sub-expectation yield of fully-enabled perfect chips will by itself lead to the decision to release products based on lightly cut-down chips (like the 5070) rather than release nothing at all? And again, you keep harping on about AMD and N48 - we don't actually know for sure the yields there. AMD might have the exact same yields as NV, but since they dedicated more of their allocation to making N48 (which is self-evident) they just have MORE CHIPS. Yes, enough to release a product somewhat confidently.
Oh, and yes, yields "magically" do improve over the 1-1.5 years of production that usually precede GPU refreshes. That is, unironically, exactly what happens. That's how semiconductor fabrication works historically.
I'm harping on about N48 because we actually have stock numbers at Microcenter, USA, where the XT outnumbered the non-XT 3-to-1 on launch day, while full GB205 is nowhere to be seen, and we have news of the 5050, 5060 and 5060 Ti also releasing on cut-back dies soon. I'm not babbling on about business stuff. I'm just stating what I see with my own two eyes, because that's the only thing that interests me.
 
I'm harping on about N48 because we actually have stock numbers at Microcenter, USA, where the XT outnumbered the non-XT 3-to-1 on launch day, while full GB205 is nowhere to be seen, and we have news of the 5050, 5060 and 5060 Ti also releasing on cut-back dies soon.
Have you thought that maybe, just maybe, AIBs saw the non-XT for what it was - an upsell designed to screw THEM - and just decided to straight up not procure enough non-XT chips and make fewer non-XT cards, since they can claim higher margins on premium models with the latter? No? Well, I am sure I am just talking crazy, no way, everyone knows that AIB card stock at a couple of shops is definitely indicative of entire massive companies' overall fabrication, allocation and prioritization strategies.

I'm not babbling on about business stuff. I'm just stating what I see with my own two eyes, because that's the only thing that interests me.
That’s cool and all, but that’s not how you come off. It’s also fairly pointless and affects nothing.
 
The problem is that it's not a refresh. It's something that they could release right at the start, but they don't because they want to look like the good guys who give you something more when they actually don't.

What if I said I could give you a chocolate bar for $2 right now? You'd be happy, right? What if I came back next week and said "you know what, I actually could have given you two chocolate bars for that price, but I didn't because I was a dick, but if you give me another $2, I'll give you those 2 chocolate bars and we call it even, ok?"


No. See above.


Nobody saw the good dies. That's the point. Or maybe you know where they are?


I'm simply disappointed because I find this practice dishonest and deceitful.

Take the 4080... It came with a cut-down die for $1200. A year later, the 4080 Super got released with the full die for $1000. Where were those full dies all that time? And why wasn't the 4080 $900 or something?
Wish I would see this kind of criticism towards other companies as well, but nope, it's always vs Nvidia. Let's talk about the X3D chips launching way later than the normal lineup. The 9950X3D launched just now... let me go over the 9950X3D review and see your posts complaining about it :roll: :roll:
 
Have you thought that maybe, just maybe, AIBs saw the non-XT for what it was - an upsell designed to screw THEM - and just decided to straight up not procure enough non-XT chips and make fewer non-XT cards, since they can claim higher margins on premium models with the latter? No? Well, I am sure I am just talking crazy, no way, everyone knows that AIB card stock at a couple of shops is definitely indicative of entire massive companies' overall fabrication, allocation and prioritization strategies.
Let me make sure I get your theory right:
1. Nvidia doesn't have full dies of certain chips because N4 yields are bad. OK...
2. AMD has a lot of full dies on the market because that's what AIBs are buying. Eh...

So where do AIBs get those full AMD dies from if yields are so bad that Nvidia can't make any of theirs on the same node? AMD has a lot of full dies to sell to AIBs, but Nvidia doesn't?
That doesn't make an inch of sense.

That’s cool and all, but that’s not how you come off. It’s also fairly pointless and affects nothing.
I'm just trying to encourage critical thinking instead of unquestioningly believing business babble here.

Wish I would see this kind of criticism towards other companies as well, but nope, it's always vs Nvidia. Let's talk about the X3D chips launching way later than the normal lineup. The 9950X3D launched just now... let me go over the 9950X3D review and see your posts complaining about it :roll: :roll:
You're absolutely right in that. I'm only not going there to complain because the product doesn't interest me in the slightest.

Edit: Shocked by my reaction? ;) Like I just said in another thread, I'm not an AMD supporter. I just have a low tolerance to bullshit.
 
I'm just trying to encourage critical thinking instead of unquestioningly believing business babble here.
No, you are not. You are trying to encourage pooping on Nvidia. AMD releases CPU refreshes all the time (e.g. the XT lineup), GPU refreshes (the 6950 XT), 3D chips way later than the normal lineup. Have you complained about any of these?

You're absolutely right in that. I'm only not going there to complain because the product doesn't interest me in the slightest.
But a 5070 Ti Super interests you? Why, or how? You already bought a card.
 
Let me make sure I get your theory right:
1. Nvidia doesn't have full dies of certain chips because N4 yields are bad. OK...
2. AMD has a lot of full dies on the market because that's what AIBs are buying. Eh...
1. Partially. It's also a matter of allocation and prioritization. If you don't prioritize, say, GB205, then what little output you get will be less likely to contain enough fully functional dies for a viable product. Hence, since you don't want to change your priorities, you release a product based on the more ubiquitous, somewhat scuffed dies by creating a slightly cut-down SKU.
2. That’s exactly what it is. I said before the 9070 series released that the non-XT is a meme designed almost perfectly to screw AIBs and speculated they wouldn’t be too hot on it. I was right.

So where do AIBs get those full AMD dies from if yields are so bad that Nvidia can't make any of theirs on the same node? AMD has a lot of full dies to sell to AIBs, but Nvidia doesn't?
That doesn't make an inch of sense.
It does make perfect sense if you actually read what I wrote and stop ignoring context that I provided and the fact that, again, we are talking two companies running completely different strategies.
 
No, you are not. You are trying to encourage pooping on Nvidia.
That's what you see because that's what you want to see.

AMD releases CPU refreshes all the time (e.g. the XT lineup), GPU refreshes (the 6950 XT), 3D chips way later than the normal lineup. Have you complained about any of these?
Look up an XT CPU "refresh" thread, and you'll see that I have. The only difference is that there wasn't much need to complain about that because everybody knew it was bullshit.

The X3D release is only different because it's a completely different animal than a regular CPU. If you're looking for the latest X3D, you're not gonna settle for a normal one just because it came out sooner.
I still think not having them at the same time is bad, for exactly the above reason.

Edit: Sure, let's talk about the B850 and X870 chipset, too, shall we? Yes, that's bullshit, too. Satisfied? ;)

But a 5070 Ti Super interests you? Why, or how? You already bought a card.
It interests me as a consumer-level graphics card. As a hobby. The same way the latest Ford Mustang interests me. Not as something I want to buy.
 
I'm just trying to encourage critical thinking instead of unquestioningly believing business babble here.
Said business babble IS critical thinking. You are just mindlessly repeating the mantra of “NV bad, all they do is to screw us”, which is absolutely not rational. You, me, all the consumers here are just collateral damage.

Anyway, I vote to stop this bickering. You obviously aren’t willing to compromise on your views, so I would rather end this before it gets ugly.
 
That's what you see because that's what you want to see.


Look up an XT CPU "refresh" thread, and you'll see that I have. The only difference is that there wasn't much need to complain about that because everybody knew it was bullshit.

The X3D release is only different because it's a completely different animal than a regular CPU. If you're looking for the latest X3D, you're not gonna settle for a normal one just because it came out sooner.
I still think not having them at the same time is bad, for exactly the above reason.


It interests me as a consumer-level graphics card. As a hobby. The same way the latest Ford Mustang interests me. Not as something I want to buy.
And the 9950x 3d doesn't interest you as a hobby?

I don't even get your point here; yields improve majorly over time. Not just yields actually, but the quality of the chips themselves improves. Before today's extreme binning processes that AMD, Intel and Nvidia are doing, you could see tremendous clock differences between the same SKU produced on different dates. Early production Ryzen 1 CPUs were horrible overclockers compared to the ones made a year later.
 
1. Partially. It's also a matter of allocation and prioritization. If you don't prioritize, say, GB205, then what little output you get will be less likely to contain enough fully functional dies for a viable product. Hence, since you don't want to change your priorities, you release a product based on the more ubiquitous, somewhat scuffed dies by creating a slightly cut-down SKU.
2. That’s exactly what it is. I said before the 9070 series released that the non-XT is a meme designed almost perfectly to screw AIBs and speculated they wouldn’t be too hot on it. I was right.


It does make perfect sense if you actually read what I wrote and stop ignoring context that I provided and the fact that, again, we are talking two companies running completely different strategies.
So what you're suggesting is that by making a much smaller quantity of chips, you have to sell all of them as one SKU so as not to segment the already low quantity of chips even further?
So that if Nvidia has one wafer allocated to GB205, then all of them have to be the same SKU because there's not enough chips in total, while if AMD has 10 wafers for Navi 48, they can afford to sell one wafer's worth as a non-XT and the rest as XT?

If so, that kind of makes sense. Still not any nicer to gamers, but oh well...
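To put rough numbers on that wafer-allocation picture: combining the usual first-order dies-per-wafer estimate with the Poisson yield model from earlier in the thread gives a feel for how many full dies one 300 mm wafer can supply. The Navi 48 area is the published ~357 mm²; the GB205 area (~263 mm²) and the 0.08 defects/cm² figure are just the assumptions floated earlier, so this is purely illustrative.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """First-order estimate: wafer area / die area, minus an edge-loss correction term."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Fraction of dies expected to be defect-free under a Poisson defect model."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

for name, area in {"Navi 48 (~357 mm^2)": 357, "GB205 (assumed ~263 mm^2)": 263}.items():
    total = dies_per_wafer(area)
    good = total * poisson_yield(area, 0.08)  # assumed defect density, defects/cm^2
    print(f"{name}: ~{total} candidate dies/wafer, ~{good:.0f} fully functional at D0=0.08")
```

So ten wafers of Navi 48 would mean on the order of a thousand-plus full dies, while a single GB205 wafer is a couple hundred at best, which is the scale the "allocation, not yields" argument hinges on.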

And the 9950x 3d doesn't interest you as a hobby?
No, it doesn't. It's too expensive, and I don't care about CPUs with mixed cores / mixed CCDs. If both CCDs had X3D cache, I'd find it a lot more interesting.

I don't even get your point here; yields improve majorly over time. Not just yields actually, but the quality of the chips themselves improves. Before today's extreme binning processes that AMD, Intel and Nvidia are doing, you could see tremendous clock differences between the same SKU produced on different dates. Early production Ryzen 1 CPUs were horrible overclockers compared to the ones made a year later.
That's clocks, not the hardware itself.
 
we don’t actually know for sure the yields there.
Can make an educated guess, though. It's on N4C, yields should be similar to or better than N4P at a lower price.
And the 9950x 3d doesn't interest you as a hobby?

I don't even get your point here; yields improve majorly over time. Not just yields actually, but the quality of the chips themselves improves. Before today's extreme binning processes that AMD, Intel and Nvidia are doing, you could see tremendous clock differences between the same SKU produced on different dates. Early production Ryzen 1 CPUs were horrible overclockers compared to the ones made a year later.
I thought the new 9950X3D would have V-Cache on each of the CCDs, that's lame.
 
I have to agree with this on drivers at least. I can get drivers that have just been released for Win 10/11 for my GTX 980 (Driver Results | GeForce GTX 980 | Windows 11 | NVIDIA), where the card was released in 9/2014, but cannot get a recent driver for a Vega 56 that was released in 8/2017 (Radeon™ RX Vega 56 Drivers).
Since Turing released, Nvidia has deserved the fine wine title more than AMD. No competition when it comes to upscaling between the 20 series and RDNA. No mesh shaders means there are games that won't even run on RDNA, or if they do, don't run properly, e.g. Alan Wake 2, Final Fantasy VII Rebirth, Indy Jones, the upcoming Doom: The Dark Ages. Even the ones you can get running run poorly compared to the 2070 Super. Steve Walton was arguing with randos on X that it is not a fair comparison as the Super cost $100 more at launch on average. The counter to that is you can still get that money back selling used compared to 5700 XT used prices, as aging better has helped it hold its value.

The drivers are a mixed bag. R.ID helps out old AMD cards. Whereas with old Nvidia cards, while they are officially supported, support sometimes does not extend beyond ensuring a game does not throw up an outdated driver warning. I've seen Guardians of the Galaxy and the Halo Infinite campaign be broken on newer drivers and require rolling back to much older ones to get the games even running on the 980 Ti. Any performance optimizations are basically abandoned as well. Nevertheless, it is still more than AMD does for older cards. Hell, you can still buy AMD APUs and devices with Vega iGPUs that are only receiving security updates.
 
Since Turing released, Nvidia has deserved the fine wine title more than AMD.
QFT. That trope is long dead for AMD now, more like sour milk for a few generations and products, which is an unfortunate turnaround because they held so much promise... or was it VRAM?
 
You seem to be moving the goalposts. The 970 was a bad design choice with its memory subsystem. Nothing like that has ever happened since.
This doesn't have much to do with binning, I guess.

I think they'll never exist, just like the full AD102 die never came to light, simply because they're too big and yields are not known to reach 100% for such big dies.

I had already spoken about this one before.

The 5060ti will likely be the full GB206, whereas the 5060 will be a cut-down version of that die. The 5050 is rumored to be the full GB207.
What's the point you're trying to make with such examples?

It really depends. As we have seen before with the 480/580 vs the 590, it was the same die on a better node, so better clocks and efficiency overall.
They would also change the memory subsystem, which is not something a 9070xt owner will be able to achieve.

Well, you said yourself, the 5080 uses the full GB203 die, and that's a bit larger than the Navi 48.
The GB202 is 2.1x bigger than Navi 48.

The 5090 won't have any refresh, more than likely.
If a refresh of the 5080 happens, they'll only change the memory subsystem, or move it to be a die cut of the GB202 die.
5070 is likely to have a refresh, yeah.
Don't think the same will happen to the smaller models tho.

The only product that didn't use the full die at launch and got a refresh later was the AD103. And even then, the 4080 Super did not bring much to the table over the 4080 perf-wise, since it only enabled 4 more SMs. The price drop was the thing that made it an interesting product.
Fun fact: the 4080 Super was the only product using the full AD103; not even enterprise products got that configuration, which does lead me to believe that yields for that die were not good enough for such a product at launch.
I believe the above especially applies to Ampere's 3070ti and 3090ti, given how Samsung's node was a mess (albeit cheap).

I just don't see your point. As I said, it's not like they're withholding good, fully enabled dies. They either sell those, or cut those down to segment a product, as any other company.
Refreshes may happen, companies do it all the time, especially as node maturity increases. Are you saying that new products should not be launched?


Not gonna happen, each GPU has its own specific ISA and higher level APIs.
Your best bet would be one of the higher-level APIs supporting it, such as Vulkan. Nvidia has already contributed an extension that abstracts those, which has been adopted by the Khronos group:

Seems like Valve has added support for it recently in Mesa for RDNA4:
Why was the 3GB 1060 cut down from the 6GB 1060 then? Pretty deceptive to call both chips the same when one is weaker all over...
 