Please don't beat me up too hard; this is just my personal take.
GPU compute for the datacenter and AI isn't particularly latency-sensitive, so the latency penalty of a chiplet MCM approach is almost irrelevant, and those workloads benefit hugely from the raw compute bandwidth.
GPU for high-fps gaming is extremely latency-sensitive, so the latency penalty of chiplet MCM is a total dealbreaker.
AMD hasn't reduced inter-chiplet latency enough for chiplets to be suitable for a real-time graphics pipeline yet, but that doesn't mean they won't.
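To make the latency-sensitivity point concrete, here's a back-of-envelope sketch. All the numbers (hop penalty, hop count, wafer-style figures) are invented assumptions for illustration, not measured values for any real GPU:

```python
# Back-of-envelope sketch: why an inter-chiplet latency penalty hurts
# real-time graphics far more than throughput-bound datacenter compute.
# All numbers below are assumptions for illustration only.

def frame_budget_ms(fps: float) -> float:
    """Total time available to render one frame, in milliseconds."""
    return 1000.0 / fps

def latency_overhead_ms(dependent_hops: int, hop_penalty_ns: float) -> float:
    """Accumulated cost of serialized cross-chiplet hops, in milliseconds."""
    return dependent_hops * hop_penalty_ns / 1e6

budget = frame_budget_ms(240)            # high-fps gaming: ~4.17 ms per frame
overhead = latency_overhead_ms(
    dependent_hops=50_000,               # assumed serialized cross-die accesses per frame
    hop_penalty_ns=100.0,                # assumed extra latency per cross-die hop
)
print(f"frame budget: {budget:.2f} ms, chiplet overhead: {overhead:.2f} ms "
      f"({100 * overhead / budget:.0f}% of the budget)")
```

With these made-up numbers the serialized hop penalty alone exceeds a 240 fps frame budget, while a batch-oriented AI job amortizes the same per-hop cost across huge, latency-tolerant transfers, so throughput (and HBM capacity) dominates instead.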
Yes. This is why they put band-aids on RDNA3 in the form of RDNA4, while doing RDNA5 from scratch.
Also, I'd guess AI isn't that latency-sensitive, because those products carry tons of HBM memory, which mitigates part of the issue.
You may be disappointed to hear that by the time any such bubble pops, they will remain a multi-trillion-dollar corporation.
$NVDA is priced as it is because they provide both the hardware and the software tools for AI companies to develop their products. OpenAI, for example, is a private corporation (similar to Valve), and AI is widely considered to be in its infancy. If there's one lesson here, it's not to mock a solid ecosystem.
Indeed. They've already amassed so much wealth that even if the bubble burst today, they could calmly sip drinks in a warm bath. They just want to push the margins even higher while they can.
Also, OpenAI recently got some HW from JHH, so I doubt they're that "Open" after all. Not to mention the data sellout to MS, etc. If the AI guys want any progress, they should build something truly independent, as the cartel lobby is already established.
Never, for sure.
It's simply a question of cost: low-end parts need to be cheap, which means using expensive nodes for them makes absolutely zero sense.
I can confidently say that it has never happened in the entire history of AMD graphics cards, going back to the early ATi Mach cards 35 years ago!
Look at the manufacturing-node column: the low end of each generation is always last year's product rebranded, or, if it's actually a new product rather than a rebrand, it's always on an older process node to save money.
So yes, please drop it. I don't know how I can explain it any more clearly to you. Low-end parts don't get made on top-tier, expensive, flagship manufacturing nodes, because it's simply not economically viable. Companies aiming to make a profit will not waste their limited quantity of flagship-node wafer allocations on low-end shit; that would be corporate suicide!
If Pirelli came across a super-rare, super-expensive, extra-sticky rubber available only in limited quantity, they could use it to make 1000 of the best Formula 1 racing tyres ever seen and give their brand a huge marketing boost and recognition,
OR they could waste it making 5000 more boring, cheap, everyday tyres for commuter workhorse cars like your grandma's Honda Civic.
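The wafer economics behind the argument above can be sketched with crude arithmetic. Every price, yield figure, and the edge-loss factor below is a made-up assumption for illustration; real foundry pricing is confidential:

```python
import math

# Rough cost-per-good-die sketch illustrating why fabbing a low-end die
# on a flagship node is a bad deal. All prices/yields are assumptions.

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Crude estimate: wafer area over die area, minus ~15% edge/scribe loss."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2 * 0.85)

def cost_per_good_die(wafer_price_usd: float, die_area_mm2: float,
                      yield_rate: float) -> float:
    """Wafer price spread across the dies that actually work."""
    return wafer_price_usd / (dies_per_wafer(300, die_area_mm2) * yield_rate)

# A 200 mm^2 low-end die on a hypothetical $17k flagship wafer
# vs a hypothetical $7k mature-node wafer:
flagship = cost_per_good_die(17_000, 200, 0.80)
mature = cost_per_good_die(7_000, 200, 0.90)
print(f"flagship node: ${flagship:.0f}/die, mature node: ${mature:.0f}/die")
```

Under these assumed numbers the same low-end die costs roughly 2.7x more per good unit on the flagship node, before even counting the opportunity cost of the wafer allocation itself.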
True. But if you recall events that far back, you can also see that these lower nodes were always the bread and butter, at least for AMD, and for nVidia until the Ada generation. There's nothing wrong with having simpler SKUs made from lower-end chips on cheaper, stable nodes. Heck, even nVidia managed to produce and sell tens of millions of hot-garbage chips on Samsung's dog-shit 8nm (really 10nm-class) node.
What is expensive today won't necessarily be expensive tomorrow. Wafer prices fall, and N4 will be ancient technology in 5 or 10 years.
Saying never means you must have an alternative in mind. What is it? Making the RX 7600 on 6nm for 20 more years?
www.anandtech.com
In the dynamic landscape of semiconductor manufacturing, the first half of 2024 has seen a continued trend of subdued demand for mature process node wafers, as reported by Taiwanese media sources. During the first quarter, some foundries specializing in mature process nodes have witnessed a decline…
www.linkedin.com
Not for 20 years, but if the older, less refined node doesn't hinder performance and power efficiency, then IMHO it's quite a viable solution. It's better to sell more chips akin to the 7600 on N6 than to make a few expensive, broken top-end chips on the finest node that nobody wants to buy.
Ohhh, you mean on N4 once N4 is old and cheap?
Sure, that'll eventually happen. That's where N6 is right now - but it's not relevant to this discussion, is it?
Why not? At least for AMD it's still relevant, since they've hit an invisible wall/threshold in their GPU architecture where the node doesn't bring an advantage anymore, at least for current products. I'd even dare to say that if AMD had made the entire Radeon RX 7000 series monolithic and on 6nm, it would have been more viable than the broken 5nm MCM. And it would have bought them time to fix and refine their MCM approach, so that RDNA4 would have been bug-free.
This is especially essential in view of the current horrible situation with TSMC allocations, where all the top nodes have been completely consumed by Apple and by nVidia with its "AI"-oriented chips. So they could, e.g., make something decent that isn't held back by being on older nodes.
Don't get me wrong. I'm all for stopping manufacturers from churning out broken, inferior chips and products for the sake of profits, especially since it takes a lot of materials and resources that could otherwise go into more advanced, more stable, and more powerful products. But there should be some middle ground.
At least some portion of "inferior" older-N6-etc. products could be made, at reasonable prices, just to meet demand as a temporary solution. So many people are sitting on ancient HW that needs replacing, but they withhold the purchase because only overpriced and pointless products fill the market.
Yeah, I would really like to see the BOM cost; if it were high, it would make me feel better lol.
Everyone would. But that won't happen anytime soon. There's a reason margins are around 60% for nVidia, and were for AMD until recently.
They won't disclose it, as it would shatter the "premium" brand image that they've both managed to maintain despite being called out for their shenanigans. It has happened many times that nVidia turned out to have cheaped out on the design while still asking a huge premium. Until nVidia's and AMD's reputation, public image, and blind following shatter, nothing will change.
I think the 5080 will be overpriced again, in the $1200 range, with the 5090 looking great at $1600-1800 given it's 50-60% faster....
I guess nVidia won't make it "cheaper" or sell it for the same price. They made it perfectly clear about five years ago that they would stack their newer, more powerful solutions above the previous-gen stuff while keeping the previous gen's price, since newer means greater and thus more expensive. I can't find the reference, but AFAIR it was around the RTX inception.
Aside from the 5090, I don't think there's much more Nvidia can charge. They've already priced their products at what the market will bear; there's only so much money regular consumers have to spend on a graphics card. It's more likely that Nvidia will give customers less rather than charge more. It's shrinkflation, basically. Of course, it's possible that Nvidia increases prices anyway, because frankly they'd be just fine selling more dies to the AI and enterprise markets.
Hard to tell what's going to happen this gen, although I do agree it's likely AMD and Nvidia price around each other again instead of competing. Intel is another wildcard as well; they might have some presence in the midrange if they get a decent uArch out the door.
Regular consumers, no. But there are a lot of crypto substitutes AKA "AI" players that would gladly buy any compute power for any money. As would the dumb rich folks and YT influencers, who create a public image of the prices being "acceptable".
1. Err, I am against chiplets if I have to sacrifice a significant amount of performance.
2. Like I said previously, they can use older processes with higher IPC architectures in order to offset the transistor count deficit linked to using an older process.
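Point 2 can be sketched as simple throughput arithmetic. The IPC, clock, and unit counts below are invented for illustration, not figures for any real product:

```python
# Performance ~ IPC x clock x unit count. An older node limits the
# transistor budget (fewer units), but a higher-IPC architecture can
# claw some of that back. All figures are illustrative assumptions.

def relative_perf(ipc: float, clock_ghz: float, units: int) -> float:
    """Abstract throughput score: per-unit work rate times unit count."""
    return ipc * clock_ghz * units

new_node = relative_perf(ipc=1.00, clock_ghz=2.6, units=60)   # cutting-edge node
old_node = relative_perf(ipc=1.25, clock_ghz=2.4, units=52)   # older node, +25% IPC
print(f"old-node part reaches {100 * old_node / new_node:.0f}% "
      f"of the new-node part")
```

Under these assumed numbers, a +25% IPC uplift fully offsets having ~13% fewer units and a slightly lower clock, which is the kind of trade-off the point above is describing.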
And still, nothing stops them from making a second revision of Navi 31 as a larger, ~700 mm^2 monolithic die, passing as much of the cost as possible on to gamers, and putting the profit margin at or around zero, like they have already done with the consoles.
Sadly, MCM isn't going anywhere, since it means higher profit margins for AMD, and they'd do anything to keep it that way. Although it's cheaper to produce, that never manifests in the final price.