Friday, May 31st 2024

AMD Wants to Tap Samsung Foundry for 3 nm GAAFET Process

According to a report by KED Global, Korean chipmaking giant Samsung is ramping up its efforts to compete with rivals TSMC and Intel. The latest partnership on the horizon is a collaboration with AMD, which is planning to utilize Samsung's cutting-edge 3 nm technology for its future chips. More specifically, AMD wants to use Samsung's gate-all-around FETs (GAAFETs). During ITF World 2024, AMD CEO Lisa Su noted that the company intends to use 3 nm GAA transistors for its future products, and Samsung is currently the only foundry offering GAAFETs on a 3 nm process, which lends the KED report additional credibility.

While we don't have any official information, AMD's use of a second foundry as a manufacturing partner would be a first for the company in years. This strategic move would signify a shift toward dual-sourcing, aiming to diversify its supply chain and reduce dependency on a single manufacturer, previously TSMC. We still don't know which specific AMD products will use GAAFETs. AMD could use them for CPUs, GPUs, DPUs, FPGAs, and even data center accelerators like the Instinct MI series.
Sources: KED Global, via Tom's Hardware

18 Comments on AMD Wants to Tap Samsung Foundry for 3 nm GAAFET Process

#1
ARF
Good decision, since the competition is intense and AMD is stuck with the old TSMC N5 process, which is already several years old (it entered risk production five years ago o_O).
#2
ratirt
Diversifying. That is good for AMD and hopefully for customers. I'm not sure how advanced Samsung's 3 nm is or how it compares directly to TSMC's, but from what I read here, Samsung's 3 nm gate-all-around FET is nothing to scoff at.
#3
ARF
ratirtDiversifying. That is good for AMD and hopefully for customers. I'm not sure how advanced Samsung's 3 nm is or how it compares directly to TSMC's, but from what I read here, Samsung's 3 nm gate-all-around FET is nothing to scoff at.
The problem is that there is stagnation and AMD isn't delivering any technological progress. Hopefully, this move will help break that stagnation and get new products released.
The Radeon RX 7900 XTX is slow, the Ryzen 9 7950X is slow. We need new products.
#4
AVATARAT
ARFThe problem is that there is stagnation and AMD isn't delivering any technological progress. Hopefully, this move will help break that stagnation and get new products released.
The Radeon RX 7900 XTX is slow, the Ryzen 9 7950X is slow. We need new products.
There are Threadripper and EPYC for those who need a faster CPU, or rather one with more cores.
As for GPUs, rumours whisper that the next one will be like Threadripper, with a lot of chiplets, probably RDNA 5.
#5
Fatalfury
Always good to have choices or competitors..

But Samsung's foundry needs to prove that its node can be efficient as well as provide better performance per watt.

For example: RTX 3090 (Samsung 8 nm) vs. RTX 4090 (TSMC 5 nm). The latter made a huge difference in performance and efficiency, when everyone was guessing the 4090 would consume 800 W for 30% more performance.
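As a rough illustration of what "performance per watt" means here, a minimal sketch in Python; to be clear, the board-power and relative-performance figures below are ballpark assumptions for illustration, not measured numbers:

```python
# Rough perf-per-watt comparison (illustrative ballpark numbers, not measurements)
cards = {
    "RTX 3090 (Samsung 8 nm)": {"relative_perf": 1.0, "board_power_w": 350},
    "RTX 4090 (TSMC 5 nm)":    {"relative_perf": 1.7, "board_power_w": 450},
}

baseline = cards["RTX 3090 (Samsung 8 nm)"]
base_ppw = baseline["relative_perf"] / baseline["board_power_w"]

for name, c in cards.items():
    ppw = c["relative_perf"] / c["board_power_w"]
    # Perf/W relative to the 3090: higher means more work per watt consumed
    print(f"{name}: perf/W = {ppw / base_ppw:.2f}x vs. 3090")
```

Even with these conservative assumptions the newer node comes out well ahead on perf/W, despite the higher absolute board power.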
#6
AnarchoPrimitiv
ARFThe problem is that there is stagnation and AMD isn't delivering any technological progress. Hopefully, this move will help break that stagnation and get new products released.
The Radeon RX 7900 XTX is slow, the Ryzen 9 7950X is slow. We need new products.
AMD doesn't deliver technological progress?

Weren't they the first to use HBM? First to use chiplets in a CPU, something that Intel eventually copied? Aren't they the first to use chiplets in a consumer GPU? The first to have an 8-core mainstream desktop chip? The first to have a 16-core consumer chip? The first to have 3D V-Cache?....I could go on. It's arguable that AMD has done more to shape the x86 industry, and certainly the consumer DIY market, than anybody else since the release of Ryzen.

...oh, and AMD has done all that with an R&D budget that is currently over 3x smaller than Intel's, and in the past was over 7x smaller.
#7
ARF
AnarchoPrimitivAMD doesn't deliver technological progress?
AnarchoPrimitivWeren't they the first to use HBM?
This is not progress, this is a method.
AnarchoPrimitivFirst to use chiplets in a CPU, something that Intel eventually copied?
This is not progress, this is a method.
AnarchoPrimitivAren't they the first to use chiplets in a consumer GPU?
AnarchoPrimitivThe first to have an 8-core mainstream desktop chip?
The first to have a 16-core consumer chip?
When? 2018? It's already six years later, and there is nothing new. And no new manufacturing process. No 3 nm, no 2 nm, only the 10-year-old 7 nm relabeled as 7+ nm, and 5 nm from 2019.
AnarchoPrimitivThe first to have 3D V-Cache?
And Intel has a ring bus connecting its monolithic core arrangement and has always been faster in gaming, hence AMD needed to do something to narrow the performance gap.
#8
Onasi
Well, hopefully newer processes will go better for Samsung than what we’ve seen before. Most issues with Ampere were caused by its 8 nm node, if we’re being realistic.
ARFAnd Intel has a ring bus connecting its monolithic core arrangement and has always been faster in gaming, hence AMD needed to do something to narrow the performance gap.
Intel is done with monolithic CPUs. It’s a pointless talking point at this stage. For all its drawbacks, AMD's chiplet approach proved superior in the long run. Not to mention that, optics aside, gaming performance is really, REALLY not what either company cares about presently. The real battle is for the server/datacenter/supercomputer market. And that one is all about how many cores you can fit on a single socket.
#9
Denver
ratirtDiversifying. That is good for AMD and hopefully for customers. I'm not sure how advanced Samsung's 3 nm is or how it compares directly to TSMC's, but from what I read here, Samsung's 3 nm gate-all-around FET is nothing to scoff at.
Its density is on par with TSMC's 3 nm, but the yields must be terrible, according to the rumors.

Those rumors are old, though, so yields have likely improved, and given the shortage of customers Samsung is facing, AMD could get a massive discount. AMD GPUs need larger chips or MCMs to compete at the high end, and Samsung could certainly help with both.
#10
Random_User
ratirtDiversifying. That is good for AMD and hopefully for customers. I'm not sure how advanced Samsung's 3 nm is or how it compares directly to TSMC's, but from what I read here, Samsung's 3 nm gate-all-around FET is nothing to scoff at.
Even if Samsung's 3 nm GAAFET is slightly inferior, it may be cheaper than TSMC's extortionate prices, which only behemoths like Apple and NVIDIA can pay by throwing billions at allocation. It's still better than sitting around with no products, waiting for spare foundry time, instead of participating in the market. There's nothing wrong with making a significant number of products of "lesser" importance, even on a somewhat inferior node, e.g. some low-end GPUs at more appealing prices.

Don't get me wrong, though! I'm all for progress, and especially for power efficiency, which is crucial to me for personal reasons. But when the supply of cards is non-existent and the rival outsells you at a 9:1 ratio, the idea of getting some alternative allocation elsewhere, at least as an experiment, is not a bad one.
ARFThe problem is that there is stagnation and AMD isn't delivering any technological progress. Hopefully, this move will help break that stagnation and get new products released.
The Radeon RX 7900 XTX is slow, the Ryzen 9 7950X is slow. We need new products.
You're kidding, aren't you? The 7950X is an astonishing chip, and the 7900 XTX is much the same. They are both incredibly good at the tasks they do.
FatalfuryAlways good to have choices or competitors..

But Samsung's foundry needs to prove that its node can be efficient as well as provide better performance per watt.

For example: RTX 3090 (Samsung 8 nm) vs. RTX 4090 (TSMC 5 nm). The latter made a huge difference in performance and efficiency, when everyone was guessing the 4090 would consume 800 W for 30% more performance.
You're missing the fact that the dog-sh*t node didn't prevent NVIDIA from making a huge fortune on their absolute furnaces of cards. They sold tens of millions of them, like hot cakes. And that was the sole reason they could move to the much more expensive 5 nm N4 node later.
AnarchoPrimitivAMD doesn't deliver technological progress?

Weren't they the first to use HBM? First to use chiplets in a CPU, something that Intel eventually copied? Aren't they the first to use chiplets in a consumer GPU? The first to have an 8-core mainstream desktop chip? The first to have a 16-core consumer chip? The first to have 3D V-Cache?....I could go on. It's arguable that AMD has done more to shape the x86 industry, and certainly the consumer DIY market, than anybody else since the release of Ryzen.

...oh, and AMD has done all that with an R&D budget that is currently over 3x smaller than Intel's, and in the past was over 7x smaller.
Some people either don't get it, or are deliberate Wintelvidia trolls.
ARFThis is not progress, this is a method.
This is not progress, this is a method.
When? 2018? It's already six years later, and there is nothing new. And no new manufacturing process. No 3 nm, no 2 nm, only the 10-year-old 7 nm relabeled as 7+ nm, and 5 nm from 2019.
And Intel has a ring bus connecting its monolithic core arrangement and has always been faster in gaming, hence AMD needed to do something to narrow the performance gap.
Intel's monolithic chips are long in the past. They've joined the "snake oil" "glued-chips" manufacturing for almost all their products. The glorified "superior" Intel 12th, 13th and 14th gens are nowhere near a solid single piece of silicon.
The sole reason the Wintel cartel still exists, along with their "collab" failure called W11, is that Intel is incapable of making 16 performance cores in the same power envelope as AMD. If they could, they wouldn't put a bunch of e-waste cores under the same lid. Because if the efficiency is there, P cores can operate with the efficiency of E cores while keeping their high performance.

Node superiority is sometimes overrated, especially for mass-market lower-end products, which are the bread and butter, like the 7600-7700 XT. Efficiency isn't the problem there, as those consume very little and don't have the power to set benchmark records. Many would swap their old Polaris/Pascal cards in the blink of an eye if the price and specs were there. Unfortunately, paying about $400 for an entry-level GPU is absurd. Had it been made on something less expensive than TSMC 5 nm, while keeping a wider bus, it would have become a bestseller overnight.

And NVIDIA keeps its monolithic design because they can throw literally any amount of money at it while keeping their margins astronomically high.
OnasiWell, hopefully newer processes will go better for Samsung than what we’ve seen before. Most issues with Ampere were caused by its 8 nm node, if we’re being realistic.
Intel is done with monolithic CPUs. It’s a pointless talking point at this stage. For all its drawbacks, AMD's chiplet approach proved superior in the long run. Not to mention that, optics aside, gaming performance is really, REALLY not what either company cares about presently. The real battle is for the server/datacenter/supercomputer market. And that one is all about how many cores you can fit on a single socket.
I mean, the mobile GPU in Samsung's SoCs for smartphones/tablets might still resemble the chiplet design after all. That could still be useful overall, as it provides better scalability in the long run: an MCM GPU design can be used in a much wider range of products, from phones and handhelds to premium dGPUs.
#11
Dave65
AnarchoPrimitivAMD doesn't deliver technological progress?

Weren't they the first to use HBM? First to use chiplets in a CPU, something that Intel eventually copied? Aren't they the first to use chiplets in a consumer GPU? The first to have an 8-core mainstream desktop chip? The first to have a 16-core consumer chip? The first to have 3D V-Cache?....I could go on. It's arguable that AMD has done more to shape the x86 industry, and certainly the consumer DIY market, than anybody else since the release of Ryzen.

...oh, and AMD has done all that with an R&D budget that is currently over 3x smaller than Intel's, and in the past was over 7x smaller.
THIS
#12
alwayssts
Given that ARM's new design is catered toward both TSMC and Samsung 3 nm, and its performance core for the mainstream market is targeted at 3.8 GHz, I don't think Samsung's node is that bad.

Think of Apple, who put their similar-market chip on N3B at 3.78 GHz. I think that makes for a sufficient comparison.

Probably similar to or slightly better than N3B, worse than N3E/P, but not a huge difference...and especially not a huge deal given (as mentioned) Samsung may cut deals that TSMC simply does not need to.

That could allow for cheaper and/or otherwise unfeasible products (at this point for AMD), given NVIDIA/Apple likely have TSMC 3 nm locked down for roughly the next couple of years.

If I had to guess, it's probably similar to N4X in performance, which is said to be slightly better than N3B, with better area/power characteristics than the former.

*(FWIW wrt some trolls in this thread...yeah. I'm not going to address those things, but thanks to those who took the time to do so and didn't let them control the narrative [which breeds wider ignorance].)*
#13
Geofrancis
There is plenty of stuff AMD can fab at Samsung to free up TSMC wafers for enterprise use. Next-gen game consoles could be fabbed there, along with I/O dies or something like the cache chips the 7900 XTX uses.
#14
alwayssts
GeofrancisThere is plenty of stuff AMD can fab at Samsung to free up TSMC wafers for enterprise use. Next-gen game consoles could be fabbed there, along with I/O dies or something like the cache chips the 7900 XTX uses.
I don't know about game consoles (as I don't know how far they'll stray into MCM etc.), but let's think for a second (not to imply this isn't what you're trying to say) about everything else they make:

A 7 nm (or 6 nm?) V-cache chip is something like 36 mm². A (6 nm) MCD is something like 37 mm². The 6 nm I/O die for Zen 4 is 122 mm². An 8-core Zen 4 chiplet is 66 mm². A 16-core Zen 4c chiplet is ~73 mm².

I doubt N44, for instance, is much, if any, larger than AD107 (which is 159 mm²), and it practically has to be smaller than AD106 (188 mm²) on 4 nm to make sense, including the memory controller/Infinity Cache, and is likely 2048 (4096) SP.

Assuming the way forward is chiplets/MCM, and that an ideal GPU chiplet is probably something like 1536 SP, AMD would not need to make any big chips at all...probably just stack/wire them together.

The premise behind risk manufacturing on early (and now mobile-focused) nodes used to be whether you could fab a 100 mm² chip at 70% yield.

I don't know how big the mining chips they make are, but one would hope Samsung is able to accomplish that by now in a productive manner.

In reality, does AMD need to be able to do much more than that? What's the *most* they would need, especially if the cache is separated out on GPUs?

It kind of boggles the mind if you think about it.

A 128-bit memory controller with twice the cache, or the actual GPU logic, would each probably be roughly the same size as a current Zen chiplet, or together about the size of the current 6 nm Zen I/O die, on 3 nm AFAICT.

I think Samsung could make literally almost EVERYTHING....again, except consoles...because I don't think anyone knows for sure what process nodes/packaging will make sense and/or be cost-efficient at that time yet.
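For a rough sense of scale, here is a back-of-the-envelope sketch in Python of gross dies per wafer and yield for the die sizes quoted above. To be clear, this is illustrative only: the 300 mm wafer, the classic edge-loss approximation, and a Poisson yield model seeded from the "100 mm² at 70%" rule of thumb are assumptions, not foundry data.

```python
import math

WAFER_DIAMETER_MM = 300.0  # standard 300 mm wafer (assumption)

# Defect density implied by the "100 mm^2 chip at 70% yield" rule of thumb,
# via a simple Poisson yield model: Y = exp(-A * D0)  =>  D0 = -ln(Y) / A
D0_PER_CM2 = -math.log(0.70) / 1.0  # ~0.36 defects per cm^2 (assumption)

def gross_dies_per_wafer(die_area_mm2: float) -> int:
    """Classic dies-per-wafer approximation: wafer area over die area,
    minus an edge-loss term that scales with the wafer circumference."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float) -> float:
    """Poisson yield model: bigger dies catch more defects, so yield falls."""
    return math.exp(-(die_area_mm2 / 100.0) * D0_PER_CM2)

# Die sizes quoted above (mm^2)
dies = {
    "V-cache chiplet": 36,
    "RDNA3 MCD": 37,
    "Zen 4 I/O die": 122,
    "Zen 4 CCD (8-core)": 66,
    "Zen 4c CCD (16-core)": 73,
}

for name, area in dies.items():
    gross = gross_dies_per_wafer(area)
    good = int(gross * poisson_yield(area))
    print(f"{name}: ~{gross} gross dies/wafer, ~{good} good dies")
```

Even under this pessimistic defect density, the small chiplets yield far better than a big monolithic die would, which is exactly why they are plausible candidates for a second-source node.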
#15
Geofrancis
alwaysstsI don't know about game consoles (as I don't know how far they'll stray into MCM etc.), but let's think for a second (not to imply this isn't what you're trying to say) about everything else they make:

A 7 nm (or 6 nm?) V-cache chip is something like 36 mm². A (6 nm) MCD is something like 37 mm². The 6 nm I/O die for Zen 4 is 122 mm². An 8-core Zen 4 chiplet is 66 mm². A 16-core Zen 4c chiplet is ~73 mm².

I doubt N44, for instance, is much, if any, larger than AD107 (which is 159 mm²), and it practically has to be smaller than AD106 (188 mm²) on 4 nm to make sense, including the memory controller/Infinity Cache, and is likely 2048 (4096) SP.

Assuming the way forward is chiplets/MCM, and that an ideal GPU chiplet is probably something like 1536 SP, AMD would not need to make any big chips at all...probably just stack/wire them together.

The premise behind risk manufacturing on early (and now mobile-focused) nodes used to be whether you could fab a 100 mm² chip at 70% yield.

I don't know how big the mining chips they make are, but one would hope Samsung is able to accomplish that by now in a productive manner.

In reality, does AMD need to be able to do much more than that? What's the *most* they would need, especially if the cache is separated out on GPUs?

It kind of boggles the mind if you think about it.

A 128-bit memory controller with twice the cache, or the actual GPU logic, would each probably be roughly the same size as a current Zen chiplet, or together about the size of the current 6 nm Zen I/O die, on 3 nm AFAICT.

I think Samsung could make literally almost EVERYTHING....again, except consoles...because I don't think anyone knows for sure what process nodes/packaging will make sense and/or be cost-efficient at that time yet.
There is also all the FPGA stuff that AMD now makes.
#16
remixedcat
Because of how fragile the political situation with Taiwan is, this move makes a lot of sense.
#17
ratirt
ARFThe problem is that there is stagnation and AMD isn't delivering any technological progress. Hopefully, this move will help break that stagnation and get new products released.
The Radeon RX 7900 XTX is slow, the Ryzen 9 7950X is slow. We need new products.
I totally disagree with you on both products. First, you need to define slow, because something tells me it's more like "not the fastest" rather than slow.
Random_UserEven if Samsung's 3 nm GAAFET is slightly inferior, it may be cheaper than TSMC's extortionate prices, which only behemoths like Apple and NVIDIA can pay by throwing billions at allocation. It's still better than sitting around with no products, waiting for spare foundry time, instead of participating in the market. There's nothing wrong with making a significant number of products of "lesser" importance, even on a somewhat inferior node, e.g. some low-end GPUs at more appealing prices.

Don't get me wrong, though! I'm all for progress, and especially for power efficiency, which is crucial to me for personal reasons. But when the supply of cards is non-existent and the rival outsells you at a 9:1 ratio, the idea of getting some alternative allocation elsewhere, at least as an experiment, is not a bad one.
I disagree with the "only behemoths can pay" thing here. AMD is huge; they can pay, but diversification is the key. If TSMC fails to deliver, you have an alternative. From what I read, Samsung's 3 nm with GAAFET is top notch. I'm pretty sure it is not 9:1 for gaming GPUs, but OK. I disagree for a variety of reasons, one of which you already know; I will only mention that gaming GPUs are not the only products out there. No company throws money around without good reason. If you think these behemoths waste money without a plan for what it will bring back in cash, then you are delusional.
#18
alwayssts
I've been genuinely curious over the past few days...with Samsung pulling 2 nm forward to 2025 mass production (with a much-reported mobile chip [perhaps as big a chip as AMD may need] coming beginning of '26 for the S26)...

...and incorporating BSPD (backside power delivery, which TSMC won't incorporate until beyond 2 nm), which initially wasn't planned until '1.7 nm', a node apparently cancelled because its major enhancement was the BSPD they pulled in...

...what are the odds of a 2026 AMD Samsung 2 nm CPU and/or GPU coup?

Don't get me wrong, I'm waiting for Samsung's SAFE conference in a couple of weeks to get a better idea of whether that's conceivable, but it appears...possible?

I mean, they already have the tough part (gate-all-around FETs) done for 3 nm, where yields appear to be improving...and they've already successfully tested chips with BSPD.

It's just a thought...a kind of exciting thought, if you ask me.

IIRC, the quote from Dr. Su mentioned something to the effect of 'Samsung's gate-all-around transistors', not necessarily the 3 nm process. The latter, I think, was inferred by the press, but think about it.

Given their product cadence, it *could* happen...and actually be pretty exciting (depending on how TSMC's 2 nm performs without BSPD and on the timetable/performance/availability of the version with it; also Intel's 18A).

Before someone says it: yes, I am aware Samsung has ambitiously been trying to catch up (with little success), even starting 3 nm production earlier than TSMC...but you never know...it might just work out!