Monday, May 13th 2024

AMD RDNA 5 a "Clean Sheet" Graphics Architecture, RDNA 4 Merely Corrects a Bug Over RDNA 3

AMD's future RDNA 5 graphics architecture will bear a "clean sheet" design, and may not even carry the RDNA branding, says WJM47196, a source of AMD leaks on ChipHell. Two generations ahead of the current RDNA 3 architecture powering the Radeon RX 7000 series discrete GPUs, RDNA 5 could see AMD reimagine the GPU and its key components, much the same way RDNA did over the older "Vega" architecture, bringing a significant performance/Watt jump, which AMD built upon with its successful RDNA 2-powered Radeon RX 6000 series.

Performance per Watt is the biggest metric on which a generation of GPUs can be assessed, and analysts believe that RDNA 3 missed the mark on generational performance/Watt gains despite the switch from the 7 nm DUV process to the more advanced 5 nm EUV process. AMD's decision to disaggregate the GPU, with some of its components built on the older 6 nm node, may have also impacted the performance/Watt curve. The leaker also makes a sensational claim that "Navi 31" was originally supposed to feature 192 MB of Infinity Cache, which would have meant 32 MB per memory cache die (MCD). The company instead went with 16 MB per MCD, or just 96 MB per GPU, which only gets reduced further as AMD segmented the RX 7900 XT and RX 7900 GRE by disabling one or two MCDs.
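To make the cache arithmetic concrete, here is a quick sketch of how total Infinity Cache scales with the number of active MCDs and the per-MCD capacity; the MCD counts per SKU reflect AMD's published Navi 31 configurations, while the 32 MB-per-MCD case is only the leaked claim, not a shipping product.

```python
# Infinity Cache arithmetic for Navi 31: total cache = active MCDs x MB per MCD.
# MCD counts per SKU are AMD's published configurations; the 32 MB/MCD case is
# only the leaked "original" design, not a shipping product.
navi31_skus = {
    "RX 7900 XTX": 6,  # all six MCDs enabled
    "RX 7900 XT":  5,  # one MCD disabled
    "RX 7900 GRE": 4,  # two MCDs disabled
}

for mb_per_mcd in (16, 32):  # 16 MB shipped; 32 MB per the rumored original spec
    for sku, mcds in navi31_skus.items():
        print(f"{sku}: {mcds} MCDs x {mb_per_mcd} MB = {mcds * mb_per_mcd} MB Infinity Cache")
```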
The upcoming RDNA 4 architecture will correct some of the glaring component-level problems causing the performance/Watt curve to waver on RDNA 3, and the top RDNA 4 part could end up with performance comparable to the current RX 7900 series while being from a segment lower and a smaller GPU overall. In case you missed it, AMD will not make a big GPU that succeeds "Navi 31" and "Navi 21" for the RDNA 4 generation, but will rather focus on the performance segment, offering more bang for the buck well under the $800 mark, so it could claw back some market share from NVIDIA in the performance, mid-range, and mainstream product segments. While it remains to be seen if RDNA 5 will get AMD back into the enthusiast segment, it is expected to bring a significant gain in performance due to the re-architected design.

One rumored aspect of RDNA 4 that even this source agrees with is that AMD is working to significantly improve its performance in ray tracing workloads by redesigning its hardware. While RDNA 3 builds on the Ray Accelerator component AMD introduced with RDNA 2, with certain optimizations yielding a 50% generational improvement in ray testing and intersection performance, RDNA 4 could see AMD put more of the ray tracing workload through fixed-function accelerators, unburdening the shader engines. This significant improvement in ray tracing performance, performance/Watt improvements at an architectural level, and the switch to a newer foundry node such as 4 nm or 3 nm are how AMD ends up with a new generation on its hands.

AMD is expected to unveil RDNA 4 this year, and if we're lucky, we might see a teaser at Computex 2024 next month.
Sources: wjm47196 (ChipHell), VideoCardz

169 Comments on AMD RDNA 5 a "Clean Sheet" Graphics Architecture, RDNA 4 Merely Corrects a Bug Over RDNA 3

#101
oxrufiioxo
nguyenNot sure if we can take MLiD word for it but he said the BOM cost on 4090 is like 1100usd, Nvidia is upholding to their 60% margins LOL.
1000-1100, given that the cooler probably cost way too much (200 USD ish), sounds about right honestly.
Posted on Reply
#102
nguyen
oxrufiioxo1000-1100 given that the cooler probably cost way too much 200 usd ish sounds about right honestly.
Next up, Nvidia charges 70% margins while Radeon charges 30%; who needs gamers anyway, right :D
Posted on Reply
#103
oxrufiioxo
nguyenNext up, Nvidia charges 70% margins while Radeon charges 30%; who needs gamers anyway, right :D
I think the 5080 will be overpriced again in the 1200 range with the 5090 looking great at 1600-1800 but 50-60% faster....
Posted on Reply
#104
Dr. Dro
nguyenNot sure if we can take MLiD word for it but he said the BOM cost on 4090 is like 1100usd, Nvidia is upholding to their 60% margins LOL.
I think MLID is wrong; that 1100 USD estimation is way, way too high. They should be much cheaper to manufacture than that. What you're paying for is not only a high margin, it's also the cost of software development over time (driver engineers cost money) and their hardware R&D costs.

If I had to shoot blindly, I'd shoot at around ~400 USD for a finished, packaged 4090 FE.
oxrufiioxoI think the 5080 will be overpriced again in the 1200 range with the 5090 looking great at 1600-1800 but 50-60% faster....
Well, if it provides a sufficient lead in performance, efficiency or feature set, it definitely will.
Posted on Reply
#105
evernessince
nguyenThe duopoly must continue, Nvidia is pricing their gaming GPU just high enough to make sure of that.

It's so easy for AMD and Nvidia to figure out the minimum prices of their competitor, given that they share the same chip manufacturer (TSMC), same GDDR manufacturer (Samsung), same PCB manufacturer (AIBs).

Who knows, perhaps Nvidia will charge higher margins next-gen, just so Radeon can improve their terrible margins.
Aside from the 5090, I don't think there's much more Nvidia can charge. They have already priced their products at what the market will bear. There's only so much money regular consumers have to spend on a graphics card. It's more likely that Nvidia will give customers less rather than charge more. It's shrinkflation, basically. Of course, it is possible that Nvidia increases prices anyway, because frankly they'd be just fine selling more dies to the AI and enterprise markets.

Hard to tell what's going to happen this gen although I do agree it's likely AMD and Nvidia price around each other again instead of competing. Intel is another wildcard as well, might have some presence in the midrange if they get a decent uArch out the door.
Posted on Reply
#106
nguyen
Dr. DroI think MLID is wrong; that 1100 USD estimation is way, way too high. They should be much cheaper to manufacture than that. What you're paying for is not only a high margin, it's also the cost of software development over time (driver engineers cost money) and their hardware R&D costs.

If I had to shoot blindly, I'd shoot at around ~400 USD for a finished, packaged 4090 FE.
Well, the AD102 chip already cost 300-400 USD per chip (using a silicon cost calculator), not to mention GDDR6 cost 13 USD per GB back in 2022 (GDDR6 prices have fallen since then). A 1000-1100 USD BOM cost on the 4090 back in 2022 is quite a realistic figure; the BOM cost is probably less in 2024, but not in the 400-500 USD range. Selling AD102 in workstation cards (like the 7000 USD RTX 6000 Ada) increases the profit margins massively.

A quick Google search shows Nvidia has 29,600 employees, meanwhile AMD has 26k employees and Intel has 120k employees (that's some impressive revenue generated per employee at Nvidia). This means Nvidia's software development costs should not be that much higher than AMD's.

TL;DR: Nvidia can maintain super high margins because they use everything effectively.
Posted on Reply
#107
Dr. Dro
nguyenWell, the AD102 chip already cost 300-400 USD per chip (using a silicon cost calculator), not to mention GDDR6 cost 13 USD per GB back in 2022 (GDDR6 prices have fallen since then). A 1000-1100 USD BOM cost on the 4090 back in 2022 is quite a realistic figure; the BOM cost is probably less in 2024, but not in the 400-500 USD range. Selling AD102 in workstation cards (like the 7000 USD RTX 6000 Ada) increases the profit margins massively.

A quick Google search shows Nvidia has 29,600 employees, meanwhile AMD has 26k employees and Intel has 120k employees (that's some impressive revenue generated per employee at Nvidia). This means Nvidia's software development costs should not be that much higher than AMD's.

TL;DR: Nvidia can maintain super high margins because they use everything effectively.
Mm, it's that we don't know how much the wafers cost, or the cost of packaging and assembly and all. Most of the other components you can extrapolate from bulk pricing (it's cheaper than what you'd pay through Mouser or Digikey, even in large quantities, but how much exactly?), and PCB costs also decrease with bulk purchases... it's hard without having insider info, but I have a relatively hard time picturing each AD102 going for $300+ at this point in time. It's 2-year-old silicon, after all. Besides, 4090s have a megaton of disabled cache and cores; I'd wager most of the bad AD102s are going into 4090s as is.

It's really complicated.
Posted on Reply
#108
oxrufiioxo
Dr. DroMm, it's that we don't know how much the wafers cost, or the cost of packaging and assembly and all. Most of the other components you can extrapolate from bulk pricing (it's cheaper than what you'd pay through Mouser or Digikey, even in large quantities, but how much exactly?), and PCB costs also decrease with bulk purchases... it's hard without having insider info, but I have a relatively hard time picturing each AD102 going for $300+ at this point in time. It's 2-year-old silicon, after all. Besides, 4090s have a megaton of disabled cache and cores; I'd wager most of the bad AD102s are going into 4090s as is.

It's really complicated.
They pay the same amount per die regardless of what they have to disable... It's a per wafer cost not per die.
Posted on Reply
#109
nguyen
Dr. DroMm, it's that we don't know how much the wafers cost, or the cost of packaging and assembly and all. Most of the other components you can extrapolate from bulk pricing (it's cheaper than what you'd pay through Mouser or Digikey, even in large quantities, but how much exactly?), and PCB costs also decrease with bulk purchases... it's hard without having insider info, but I have a relatively hard time picturing each AD102 going for $300+ at this point in time. It's 2-year-old silicon, after all. Besides, 4090s have a megaton of disabled cache and cores; I'd wager most of the bad AD102s are going into 4090s as is.

It's really complicated.
Well, from 2022 data, TSMC charges per wafer:
10k USD for 7 nm
16k USD for 5 nm
At 70% yield, each AD102 should cost ~260 USD.
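For anyone who wants to reproduce that figure, below is a minimal sketch of the standard dies-per-wafer approximation; the 16k USD wafer price and 70% yield are the numbers quoted above, while the 300 mm wafer and ~609 mm² AD102 die area are public figures rather than insider data.

```python
# Rough per-good-die cost estimate, reproducing the roughly 260 USD figure above.
# Inputs are this thread's assumptions (16k USD/wafer, 70% yield) plus the
# public ~609 mm^2 AD102 die size on a standard 300 mm wafer.
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic approximation: usable wafer area minus edge losses."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius**2 / die_area_mm2 - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross)

wafer_cost_usd = 16_000   # quoted 2022 TSMC 5 nm-class wafer price
die_area_mm2 = 609        # AD102 die size
yield_rate = 0.70         # assumed fraction of good dies

gross_dies = dies_per_wafer(die_area_mm2)        # ~89 candidate dies
good_dies = int(gross_dies * yield_rate)         # ~62 good dies
cost_per_good_die = wafer_cost_usd / good_dies   # ~258 USD

print(f"{gross_dies} candidates, {good_dies} good dies, ~${cost_per_good_die:.0f} per good die")
```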
Posted on Reply
#110
ARF
AusWolfDo you think chiplets are about gamers? Far from it. The post you replied to demonstrates that it's a cost saving technique, nothing else. Better yields on smaller chips, the ability to link chips made on different nodes, etc.
1. Err, I am against the chiplets if I have to sacrifice some significant amounts of performance.
2. Like I said previously, they can use older processes with higher IPC architectures in order to offset the transistor count deficit linked to using an older process.
And still, no one will stop them from making a second revision of Navi 31 as a larger, ~700 mm^2 monolithic die, transferring the cost as far as possible onto the gamers, and then putting the profit margin at negative values or around zero, like they have already done with the consoles.
Posted on Reply
#111
stimpy88
nguyenNot sure if we can take MLiD word for it but he said the BOM cost on 4090 is like 1100usd, Nvidia is upholding to their 60% margins LOL.
I wouldn't mind betting a year's salary on it being much closer to $600-$700 to manufacture and ship it if you assume it's a founders card, direct from nVidia. Not talking about profits, R&D, driver dev etc.

$120 for the PCB and all components
$300 for the chip
$150 for the memory
$60 for the cooler
$20 for the packaging
$5 accessories
$15 shipping
=$670

Obviously, this is manufacturer specific, as the larger ones will get these items cheaper, and we don't know what the relationship is between nGreedia and the OEMs, and how much they charge to supply a GPU chip. But we can easily assume nGreedia charge at least double to the OEMs. So $1100 could well be it for the OEMs, as they probably get the PCB, components and memory far cheaper than I stated.
Posted on Reply
#112
ARF
nguyenNot sure if we can take MLiD word for it but he said the BOM cost on 4090 is like 1100usd, Nvidia is upholding to their 60% margins LOL.
stimpy88I wouldn't mind betting a year's salary on it being much closer to $600-$700 to manufacture and ship it if you assume it's a founders card, direct from nVidia. Not talking about profits, R&D, driver dev etc.

$120 for the PCB and all components
$300 for the chip
$150 for the memory
$60 for the cooler
$20 for the packaging
$5 accessories
$15 shipping
=$670

Obviously, this is manufacturer specific, as the larger ones will get these items cheaper, and we don't know what the relationship is between nGreedia and the OEMs, and how much they charge to supply a GPU chip. But we can easily assume nGreedia charge at least double to the OEMs. So $1100 could well be it for the OEMs.
I would correct it even further:

90 PCB and all components
200 the chip
100 the memory
60 the cooler
10 the packaging
10 accessories
15 shipping
=485$
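Taking the thread's BOM guesses at face value, here is a quick back-of-the-envelope check of the gross margin each would imply against the 4090 Founders Edition's $1,599 launch MSRP; none of these BOM figures are confirmed, they are just the estimates floated above.

```python
# Implied gross margin for each (speculative) 4090 BOM estimate in this thread.
# Only the Founders Edition launch MSRP is a hard number; the BOMs are guesses.
bom_estimates_usd = {
    "MLID rumour":    1100,
    "stimpy88 guess":  670,
    "ARF guess":       485,
}
msrp_usd = 1599  # RTX 4090 Founders Edition launch price

for source, bom in bom_estimates_usd.items():
    margin = (msrp_usd - bom) / msrp_usd
    print(f"{source}: BOM ${bom} -> implied gross margin {margin:.0%}")
```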
Posted on Reply
#113
stimpy88
ARFI would correct it even further:

90 PCB and all components
200 the chip
100 the memory
60 the cooler
10 the packaging
10 accessories
15 shipping
=485$
Yeah, I guess if we assume an 80% chip yield, the chip would be around the $200-ish point, so you are probably closer. I'm sure the big OEMs have huge discounts on everything but the GPU itself, compared to the 1,000-unit prices we get to see that are designed to make us feel false value in our purchases.
Posted on Reply
#114
ARF
stimpy88Yeah, I guess if we assume an 80% chip yield, the chip would be around the $200-ish point, so you are probably closer. I'm sure the big OEMs have huge discounts on everything but the GPU itself, compared to the 1,000-unit prices we get to see that are designed to make us feel false value in our purchases.
Also, the used chip is a salvaged chip with disabled shaders, and probably nvidia pays for the working dies, not for the wasted wafers.
Posted on Reply
#115
Chrispy_
oxrufiioxoI think the 5080 will be overpriced again in the 1200 range with the 5090 looking great at 1600-1800 but 50-60% faster....
Based on the past trend of the 3090 and historic 4090 pricing over the product cycle so far, the 4090 is going to stay at $2000 when the 5090 launches at a new, even higher price.

I'd love to be wrong, but Nvidia are charging what people will pay, and people will continue to pay $2000 for a 4090 regardless of what else is out there. Nvidia are not going to, say, undermine their own profits on their current highest-margin part just because they've developed an even more profitable one! That's basic free-market capitalism, which is Nvidia's bible.
Posted on Reply
#116
Random_User
Don't beat me hard please, this is just my personal take.
Chrispy_GPU compute for the datacenter and AI isn't particularly latency sensitive, so the latency penalty of a chiplet MCM approach is almost irrelevant and the workloads benefit hugely from the raw compute bandwidth.

GPU for high-fps gaming is extremely latency-sensitive, so the latency penalty of chiplet MCM is 100% a total dealbreaker.

AMD hasn't solved/evolved the inter-chiplet latency well enough for them to be suitable for a real-time graphics pipeline yet, but that doesn't mean they won't.
Yes. This is why they put the band-aids on RDNA 3 a la RDNA 4, while doing RDNA 5 from scratch.
Also, I guess AI isn't that latency-sensitive, because their products have tons of HBM memory, which mitigates part of the issue.
Dr. DroYou may be disappointed to hear that by the time any such bubble pops they will remain a multi trillion corporation.

$NVDA is priced as is because they provide both the hardware and software tools for AI companies to develop their products. OpenAI for example is a private corporation (similar to Valve), and AI is widely considered to be in its infancy. It's the one lesson not to mock a solid ecosystem.
Indeed. They have already got so much wealth that even if the bubble burst today, they would be able to calmly sip drinks while having a warm bath. They just want to increase the margins even more while they can.
Also, OpenAI recently got some HW from JHH, so I doubt they are that "Open" after all. Not to mention the data sellout to MS, etc. If the AI guys want any progress, they should do something really independent, as the cartel lobby has already been established.
Chrispy_Never, for sure.
It's simply a question of cost because low end parts need to be cheap, which means using expensive nodes for them makes absolutely zero sense.

I can confidently say that it's not happened in the entire history of AMD graphics cards, going back to the early ATi Mach cards, 35 years ago!
en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#Desktop_GPUs
Look at the column for manufacturing node; the low end of each generation is always last year's product rebranded, or - if it's actually a new product rather than a rebrand - it's always an older process node to save money.

So yes, please drop it. I don't know how I can explain it any more clearly to you. Low end parts don't get made on top-tier, expensive, flagship manufacturing nodes, because it's simply not economically viable. Companies aiming to make a profit will not waste their limited quantity of flagship node wafer allocations on low-end shit - that would be corporate suicide!

If Pirelli came across a super-rare, super-expensive, extra-sticky rubber but there was a limited quantity of the stuff - they could use it to make 1000 of the best Formula 1 racing tyres ever seen and give their brand a huge marketing boost and recognition, OR they could waste it making 5000 more boring, cheap, everyday tyres for commuter workhorse cars like your grandma's Honda Civic.
True. But if you recall events that far back, you can also see that these lower nodes were always the bread and butter, at least for AMD, and for nVidia until the Ada generation. There's nothing wrong with having simpler SKUs, made from lower-end chips on cheaper, stable nodes. Heck, even nVidia managed to produce and sell dozens of millions of hot garbage chips on Samsung's dog-shit 8 nm (10 nm) node.
ARFWhat is expensive today, will not necessarily be expensive tomorrow. Wafer prices fall, N4 will be an ancient technology in 5 or 10 years.
Saying never means that you must have an alternative in mind? What is it? Making the RX 7600 on 6 nm for 20 more years?

www.anandtech.com/show/21371/tsmc-preps-lower-cost-4nm-n4c-process-for-2025
www.linkedin.com/pulse/tech-news-mature-process-node-wafer-foundry-prices-set-drop-tiroc
Not for 20 more years, but if the older, less refined node doesn't hinder performance and power efficiency, then IMHO it's quite a viable solution. It's better to sell more cards akin to the 7600 on N6 than to make a few expensive, broken top-end chips on the finest node that nobody would want to buy.
Chrispy_Ohhh, you mean on N4 once N4 is old and cheap?
Sure, that'll eventually happen. That's where N6 is right now - but it's not relevant to this discussion, is it?
Why not? At least for AMD it's still relevant, since they've hit the invisible wall/threshold in their GPU architecture where the node doesn't bring an advantage anymore. At least for current products. I would even dare to say that if AMD had made the entire Radeon RX 7000 series monolithic and on 6 nm, it would have been more viable than the broken 5 nm MCM. And it would have bought them time to fix and refine their MCM approach, so that RDNA 4 would have been bug-free.
This is especially essential in view of the current horrible situation with TSMC allocations, where all the top nodes were completely consumed by Apple and by nVidia with its "AI"-oriented chips. So, e.g., making something decent that isn't held back by the older nodes.

Don't get me wrong. I'm all for stopping the manufacturers from farting out broken, inferior chips and products for the sake of profits. Especially since it requires a lot of materials and resources, which could otherwise be put into more advanced, more stable and more powerful products. But there should be some middle ground.
At least some portion of "inferior" older N6 etc. products could be made, for reasonable prices, just to meet the demand as a temporary solution. Since so many people are sitting on ancient HW that needs to be replaced, but withhold the purchase, as only overpriced and pointless products fill the market.
oxrufiioxoYeah, I would really like to see a BOM cost; if it was high it would make me feel better lol.
Everyone would. But that won't happen anytime soon. There's a reason why margins are about 60% for nVidia, and were for AMD until recently.
They won't disclose it, as it would shatter the "premium" brand image that they have both managed to maintain despite being called out for their shenanigans. It has happened many times that nVidia ended up cheaping out on the design while still asking a huge premium. Until both nVidia's and AMD's reputation, public image and blind following shatter, nothing will change.
oxrufiioxoI think the 5080 will be overpriced again in the 1200 range with the 5090 looking great at 1600-1800 but 50-60% faster....
I guess nVidia won't make it "cheaper" or sell it for the same price. They made it perfectly clear about five years ago that they would stack their newer, more powerful solutions above the previous-gen stuff while keeping the previous generation's prices, since newer is greater and thus more expensive. I can't find the reference, but AFAIR it was during the RTX inception.
evernessinceAside from the 5090, I don't think there's much more Nvidia can charge. They have already priced their products at what the market will bear. There's only so much money regular consumers have to spend on a graphics card. It's more likely that Nvidia will give customers less rather than charge more. It's shrinkflation, basically. Of course, it is possible that Nvidia increases prices anyway, because frankly they'd be just fine selling more dies to the AI and enterprise markets.

Hard to tell what's going to happen this gen although I do agree it's likely AMD and Nvidia price around each other again instead of competing. Intel is another wildcard as well, might have some presence in the midrange if they get a decent uArch out the door.
Regular consumers, no. But there are a lot of crypto substitutes, AKA "AI", that would gladly buy any compute power for any money. As would the dumb rich folks and YT influencers, who create a public image of this being "acceptable".
ARF1. Err, I am against the chiplets if I have to sacrifice some significant amounts of performance.
2. Like I said previously, they can use older processes with higher IPC architectures in order to offset the transistor count deficit linked to using an older process.
And still, no one will stop them from making a second revision of Navi 31 as a larger, ~700 mm^2 monolithic die, transferring the cost as far as possible onto the gamers, and then putting the profit margin at negative values or around zero, like they have already done with the consoles.
Sadly, MCM won't go anywhere, since it means higher profit margins for AMD, and they would do anything to keep it this way. Although it is cheaper to produce, that doesn't manifest itself in the final pricing.
Posted on Reply
#117
evernessince
ARFErr, I am against the chiplets if I have to sacrifice some significant amounts of performance.
The University of Toronto's paper demonstrates that the benefits of chiplets scale as you increase the number of CPU cores, including having lower latency than a monolithic chip if an active interposer is used. I'd imagine that applies to GPUs as well, which is a market that would be more tolerant of the higher price tag of an active interposer.

For large silicon products chiplets are a must-have, as the cost savings, increased yields, ability to modularize your product (the benefits of which could be its own subject alone), and better binning (you bin out of every chiplet you make and not just a specific SKU), among other things, provide a revolution in chip design.

Interposers will also prove a very fruitful field for performance gains in the near future as latency, bandwidth, and energy usage, among other factors, are improved over time, enabling more chiplets at lower latencies using less energy to transmit data between said chiplets.
Posted on Reply
#118
Dr. Dro
Random_UserIndeed. They have already got so much wealth that even if the bubble burst today, they would be able to calmly sip drinks while having a warm bath. They just want to increase the margins even more while they can.
Also, OpenAI recently got some HW from JHH, so I doubt they are that "Open" after all. Not to mention the data sellout to MS, etc. If the AI guys want any progress, they should do something really independent, as the cartel lobby has already been established.
Oh they're far from "Open" nowadays. Elon Musk is even suing the company for breach of founding contract, since they were intended to be a non-profit organization and nowadays they're very much a for-profit.
Posted on Reply
#119
AusWolf
ARF1. Err, I am against the chiplets if I have to sacrifice some significant amounts of performance.
2. Like I said previously, they can use older processes with higher IPC architectures in order to offset the transistor count deficit linked to using an older process.
And still, no one will stop them from making a second revision of Navi 31 as a larger, ~700 mm^2 monolithic die, transferring the cost as far as possible onto the gamers, and then putting the profit margin at negative values or around zero, like they have already done with the consoles.
1. Chiplet or no chiplet, I don't care as long as it works and doesn't cost an arm and leg.
2. Older processes would throw efficiency out of the window. Do you want a 400-450+ Watt 7900 XT? I don't.
Posted on Reply
#120
oxrufiioxo
AusWolf1. Chiplet or no chiplet, I don't care as long as it works and doesn't cost an arm and leg.
2. Older processes would throw efficiency out of the window. Do you want a 400-450+ Watt 7900 XT? I don't.
The one thing that confuses me: if RDNA 4 is just fixed RDNA 3 with better RT, why not do a full lineup? Maybe chiplets were so bad for them that they don't want to make a large die, is all I can think of. Going back to the RDNA 1 gameplan just seems defeatist to me.
Posted on Reply
#121
AusWolf
oxrufiioxoThe one thing that confuses me: if RDNA 4 is just fixed RDNA 3 with better RT, why not do a full lineup? Maybe chiplets were so bad for them that they don't want to make a large die, is all I can think of. Going back to the RDNA 1 gameplan just seems defeatist to me.
I read somewhere that there are two teams working on GPUs at AMD. The one that worked on RDNA 2 is working on RDNA 4, and the one that worked on RDNA 3 is working on RDNA 5. I don't know how true this is, though, and I can't remember the source.

Edit: Maybe AMD saw that chiplets aren't good for a GPU, yet, so they put the project on hold until they figure things out (speculation).
Posted on Reply
#122
oxrufiioxo
AusWolfI read somewhere that there are two teams working on GPUs at AMD. The one that worked on RDNA 2 is working on RDNA 4, and the one that worked on RDNA 3 is working on RDNA 5. I don't know how true this is, though, and I can't remember the source.

Edit: Maybe AMD saw that chiplets aren't good for a GPU, yet, so they put the project on hold until they figure things out (speculation).
Part of me wonders if RDNA 4 is only a thing because Sony wanted better RT and some sort of bespoke AI silicon, and AMD figures they might as well use a similar but larger version on desktop.
Posted on Reply
#123
evernessince
oxrufiioxoThe one thing that confuses me: if RDNA 4 is just fixed RDNA 3 with better RT, why not do a full lineup? Maybe chiplets were so bad for them that they don't want to make a large die, is all I can think of. Going back to the RDNA 1 gameplan just seems defeatist to me.
The only way it makes sense to me is if AMD is going to have multiple GCDs on a single GPU / product (possibly AMD has other enterprise products with a combination of CPU / XCU / GCD). Otherwise, as you say, there's no reason AMD couldn't have just refined the 7000 series. Of course, there's no guarantee that multiple GCDs are coming to the consumer market if that is the case either; it could be that AMD is able to do it with CoWoS packaging but not with an organic substrate, which would restrict it to enterprise only. That would make sense if AMD's strategy is to rapidly push for gains in the AI / enterprise markets. Of course, the downside is that it'll undoubtedly take a hit in the consumer market unless it prices aggressively.
Posted on Reply
#124
oxrufiioxo
evernessinceThe only way it makes sense to me is if AMD is going to have multiple GCDs on a single GPU / product (possibly AMD has other enterprise products with a combination of CPU / XCU / GCD). Otherwise, as you say, there's no reason AMD couldn't have just refined the 7000 series. Of course, there's no guarantee that multiple GCDs are coming to the consumer market if that is the case either; it could be that AMD is able to do it with CoWoS packaging but not with an organic substrate, which would restrict it to enterprise only. That would make sense if AMD's strategy is to rapidly push for gains in the AI / enterprise markets. Of course, the downside is that it'll undoubtedly take a hit in the consumer market unless it prices aggressively.
It would be interesting to see how a B100 gaming variant would work with basically 2 GPU dies... Guessing for gaming it's not practical from a cost perspective, but they could bring back Titan and charge 3-4k for it, assuming it works lol... Not holding my breath.
Posted on Reply
#125
Firedrops
AusWolfDo you think chiplets are about gamers? Far from it. The post you replied to demonstrates that it's a cost saving technique, nothing else. Better yields on smaller chips, the ability to link chips made on different nodes, etc.
Are these cost savings in the room with us right now?
Posted on Reply