Friday, May 31st 2024
AMD Wants to Tap Samsung Foundry for 3 nm GAAFET Process
According to a report by KED Global, Korean chipmaking giant Samsung is ramping up its efforts to compete with TSMC and Intel, and the latest partnership on the horizon is with AMD. AMD is reportedly planning to utilize Samsung's cutting-edge 3 nm technology for its future chips; more specifically, it wants Samsung's gate-all-around FETs (GAAFETs). During ITF World 2024, AMD CEO Lisa Su noted that the company intends to use 3 nm GAA transistors for its future products, and since Samsung is currently the only foundry offering GAAFETs on a 3 nm process, the KED report gains additional credibility.
While we don't have any official information, AMD's use of a second foundry as a manufacturing partner would be a first for the company in years. The strategic move would signify a shift towards dual-sourcing, aimed at diversifying its supply chain and reducing dependency on a single manufacturer, previously TSMC. We still don't know which specific AMD products will use GAAFETs; candidates include CPUs, GPUs, DPUs, FPGAs, and even data center accelerators like the Instinct MI series.
Sources:
KED Global, via Tom's Hardware
18 Comments on AMD Wants to Tap Samsung Foundry for 3 nm GAAFET Process
Radeon RX 7900 XTX is slow, Ryzen 9 7950X is slow. We need new products.
As for GPUs, rumours whisper that the next generation will be built like Threadripper, with a lot of chiplets, probably RDNA 5.
But Samsung's foundry needs to prove that its node can be efficient as well as provide better performance per watt.
For example, compare the RTX 3090 (Samsung 8 nm) with the RTX 4090 (TSMC 5 nm): the latter made a huge difference in performance and efficiency, when everyone was guessing the 4090 would consume 800 W for 30% more performance.
Weren't they the first to use HBM? First to use chiplets in a CPU, something that Intel eventually copied? Aren't they the first to use chiplets in a consumer GPU? The first to have an 8-core mainstream desktop chip? The first to have a 16-core consumer chip? The first to have 3D V-Cache? I could go on. It's arguable that AMD has done more to shape the x86 industry, and certainly the consumer DIY market, than anybody else since the release of Ryzen.
...oh, and AMD has done all that while having an R&D budget that is currently over 3x smaller than Intel's, and in the past was over 7x smaller.
But those rumors are old; Samsung must have improved since then, and given the shortage of customers Samsung is facing, AMD could get a massive discount. AMD GPUs need larger chips or MCMs to compete at the high end, and Samsung could certainly help with both.
Don't get me wrong, though! I'm all for the progress, and especially for the power efficiency, which is crucial, for personal reasons. But when the supply of the cards is non-existent, and the rival outsells you at a 9:1 ratio, then the idea of getting some alternative allocation elsewhere, at least as an experimental effort, is not a bad one.
You're kidding, aren't you? The 7950X is an astonishing chip. The 7900 XTX is much the same. They are both incredibly good at the tasks they do. You're missing the fact that the dog-sh*t node didn't prevent nVidia from making a huge fortune on their absolute furnaces of cards. They sold them by the dozens of millions, like hotcakes. And that was the sole reason they went for the much more expensive 5 nm N4 node later. Some people either don't get it, or are deliberate Wintelvidia trolls. Intel's monolithic chips are long in the past; they've joined the "snake oil" "glued" chip manufacturing for almost all their products. The glorified "superior" Intel 12th, 13th, and 14th gen is nowhere near a solid silicon chip.
The sole reason the Wintel cartel still exists, along with their "collab" failure called W11, is that Intel is incapable of making 16 performance cores within the same power envelope as AMD. If they could, they wouldn't need to put a bunch of e-waste cores under the same lid, because if the efficiency is there, the P-cores are able to operate with the efficiency of E-cores while keeping their high performance.
Node superiority is sometimes overrated, especially for en-masse lower-end products, which are the bread and butter, like the 7600-7700 XT. Efficiency isn't the problem there, as these consume a tiny bit and don't have the power to set benchmark records. Many would swap their old Polaris/Pascal cards in the blink of an eye if the price/specs were there. Unfortunately, paying about $400 for an entry-level GPU is absurd. Were it built on something less expensive than TSMC 5 nm, while maintaining a wider bus, it would be much more... no, it would have become a bestseller overnight.
And nVidia is keeping their monolithic design because they can throw literally any amount of money at it while keeping their margins astronomically high. I mean, the mobile GPU in the Samsung SoC for smartphones/tablets might still resemble the chiplet design after all. This might still be useful overall, as it provides better scalability in the long run, since an MCM GPU design can be used in a much wider range of products, from phones and handhelds to premium dGPUs.
Think of Apple, who put their similar-market chip on N3B at 3.78 GHz. I think that gives a sufficient comparison.
Probably similar to, or slightly better than, N3B; worse than N3E/P, but not a huge difference...especially not a huge deal given (as mentioned) they may cut some deals that TSMC simply does not need to.
Which may allow for cheaper and/or otherwise unfeasible products (at this point for AMD) given nVIDIA/Apple likely have TSMC 3nm locked down for roughly the next couple years.
If I had to guess, it's probably similar to N4X performance, which is said to be slightly better than N3B, with better area/power characteristics than the former.
*(FWIW wrt some trolls in this thread...yeah. I'm not going to address those things, but thanks to those that took the time to do so and didn't let them control the narrative [which breeds wider ignorance].)*
A (6?) 7nm v-cache chip is something like 36mm2. A (6nm) MCD is something like 37mm2. A 6nm I/O die for Zen 4 is 122mm2. An 8-core zen 4 chiplet is 66mm2. A 16-core Zen 4c chiplet is ~73mm2.
I doubt N44, for instance, is much, if any, larger than AD107 (which is 159mm2), and it literally has to be smaller than AD106 (188mm2) on 4 nm to make sense, including the memory controllers/Infinity Cache, and is likely 2048(4096)sp.
Assuming the way forward is chiplets/MCM, and an ideal GPU chiplet is probably something like 1536sp, AMD would not need to make any big chips at all...probably just stack/wire them together.
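To put some rough numbers on that, here's a quick back-of-the-envelope sketch in Python using the die sizes quoted above. The "hypothetical MCM" configuration is purely my own illustration (one compute die roughly the size of a Zen 4 CCD plus two MCDs), not anything AMD has announced.

```python
# Rough die-area arithmetic using the figures quoted above.
# The hypothetical MCM configuration below is purely illustrative.

die_area_mm2 = {
    "vcache_7nm": 36,    # 3D V-Cache die
    "mcd_6nm": 37,       # RDNA 3 memory cache die
    "io_die_6nm": 122,   # Zen 4 I/O die
    "zen4_ccd": 66,      # 8-core Zen 4 chiplet
    "zen4c_ccd": 73,     # 16-core Zen 4c chiplet
}

# Hypothetical small MCM GPU: one compute chiplet roughly the size of a
# Zen 4 CCD plus two MCDs for a 128-bit bus and cache.
hypothetical_mcm = ["zen4_ccd", "mcd_6nm", "mcd_6nm"]
total = sum(die_area_mm2[d] for d in hypothetical_mcm)

print(f"Hypothetical MCM silicon: {total} mm^2")  # 140 mm^2
print("For reference: AD107 = 159 mm^2, AD106 = 188 mm^2 (monolithic)")
```

Even with generous assumptions, the total silicon lands in the same ballpark as today's small monolithic GPUs, which is the point: no single die needs to be big.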
The premise behind risk manufacturing on (now mobile-focused) early nodes used to be whether they could fab a ~100 mm² chip at 70% yield.
I don't know how big those mining chips they make are, but one would hope Samsung is able to accomplish that by now in a productive manner.
In reality, do AMD need to be able to do much more than that? What's the *most* they would need, especially if the cache is separated wrt GPUs?
Kind of boggles the mind if you think about it.
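For what it's worth, that yield premise is easy to play with using the simple Poisson yield model, Y = exp(-A x D0). The defect density below is just back-calculated from the "100 mm² at 70%" figure mentioned above; it's an illustration, not a published Samsung number.

```python
import math

# Simple Poisson yield model: Y = exp(-A * D0), with A in cm^2.
# Back-calculate defect density D0 from the "100 mm^2 chip at 70% yield"
# premise above; this is an illustration, not Samsung data.
ref_area_cm2 = 100 / 100                # 100 mm^2 = 1 cm^2
d0 = -math.log(0.70) / ref_area_cm2     # ~0.36 defects per cm^2

def poisson_yield(area_mm2: float, defect_density: float = d0) -> float:
    """Estimated functional-die yield for a given die area."""
    return math.exp(-(area_mm2 / 100) * defect_density)

for area in (37, 66, 100, 159, 188, 300):
    print(f"{area:4d} mm^2 -> ~{poisson_yield(area):.0%} yield")
```

Run it and the chiplet argument writes itself: a ~66 mm² die comes out near 80% yield at the same defect density that gives a 300 mm² die only about a third, which is why small dies make a struggling node usable.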
A 128-bit memory controller with twice the cache or the actual GPU logic would each probably be roughly the same size as a current zen chiplet, or together the size of the current 6nm zen i/o die on 3nm afaict.
I think Samsung could make literally almost EVERYTHING....again, except consoles...bc I don't think anyone knows for sure what process nodes/packaging will make sense and/or be cost-efficient at that time yet.
...and incorporating BSPD (backside power delivery, which TSMC won't adopt until beyond 2 nm). Samsung initially didn't plan BSPD until '1.7 nm', a node that was apparently cancelled because its major enhancement, BSPD, was pulled in earlier...
...What are the odds of a 2026 AMD Samsung 2nm CPU and/or GPU coup?
Don't get me wrong, I'm waiting for Samsung's (SAFE) conference in a couple weeks to get a better idea if that's conceivable, but it appears...possible?
I mean, they already have the tough part done (gate-all-around FETs) for 3 nm, for which yields appear to be improving...and they've already successfully tested chips with BSPD.
It's just a thought...A kind of exciting thought, if you ask me.
IIRC, I believe the quote from Dr. Su mentioned something to the effect of 'Samsung's gate-around transistors', not necessarily the 3nm process. That, I think, was inferred by the press, but think about it.
Given their product cadence, it *could* happen...and actually be pretty exciting (depending on how TSMC 2 nm performs without BSPD, and on the timetable/performance/availability of the process with it; also Intel's 18A).
Before someone says it, yes, I am aware Samsung has ambitiously been trying to catch up (with little success), even with starting 3 nm production earlier than TSMC...but you never know...It might just work out!