
AMD Wants to Tap Samsung Foundry for 3 nm GAAFET Process

AleksandarK

News Editor
Staff member
Joined
Aug 19, 2017
Messages
2,651 (0.99/day)
According to a report by KED Global, Korean chipmaking giant Samsung is ramping up its efforts to compete with global giants like TSMC and Intel. The latest partnership on the horizon is AMD's collaboration with Samsung. AMD is planning to utilize Samsung's cutting-edge 3 nm technology for its future chips. More specifically, AMD wants to utilize Samsung's gate-all-around FETs (GAAFETs). During ITF World 2024, AMD CEO Lisa Su noted that the company intends to use 3 nm GAA transistors for its future products. The only company offering GAAFETs on a 3 nm process is Samsung. Hence, this report from KED gains more credibility.

While we don't have any official information, AMD's use of a second foundry as a manufacturing partner would be a first for the company in years. This strategic move signifies a shift towards dual-sourcing, aiming to diversify its supply chain and reduce dependency on a single manufacturer, currently TSMC. We still don't know which specific AMD products will use GAAFETs. AMD could use them for CPUs, GPUs, DPUs, FPGAs, and even data center accelerators like the Instinct MI series.



View at TechPowerUp Main Site | Source
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
Good decision, since the competition is intense, and AMD is stuck with the old TSMC N5 process, which is already several years old (it entered risk production 5 years ago o_O).
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500GB / Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Diversifying. That is good for AMD and hopefully for customers. I'm not sure how advanced Samsung's 3nm is or how it compares directly to TSMC's, but from what I read here, Samsung's 3nm gate-all-around FET is nothing to scoff at.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
Diversifying. That is good for AMD and hopefully for customers. I'm not sure how advanced Samsung's 3nm is or how it compares directly to TSMC's, but from what I read here, Samsung's 3nm gate-all-around FET is nothing to scoff at.

The problem is that there is stagnation, and AMD isn't delivering any technological progress. Hopefully, this move will help relieve that stagnation somewhat, and AMD can begin releasing new products.
Radeon RX 7900 XTX is slow, Ryzen 9 7950X is slow. We need new products.
 
Joined
Sep 20, 2021
Messages
468 (0.39/day)
Processor Ryzen 7 9700x
Motherboard Asrock B650E PG Riptide WiFi
Cooling Underfloor CPU cooling
Memory 2x32GB 6200MT/s
Video Card(s) 4080 SUPER Noctua OC Edition
Storage Kingston Fury Renegade 1TB, Seagate Exos 12TB
Display(s) MSI Optix MAG301RF 2560x1080@200Hz
Case Phanteks Enthoo Pro
Power Supply NZXT C850 850W Gold
Mouse Bloody W95 Max Naraka
The problem is that there is stagnation, and AMD isn't delivering any technological progress. Hopefully, this move will help relieve that stagnation somewhat, and AMD can begin releasing new products.
Radeon RX 7900 XTX is slow, Ryzen 9 7950X is slow. We need new products.
There are Threadripper and Epyc for those who need a faster CPU, or rather one with more cores.
As for GPUs, rumours whisper that the next one will be like TR, with a lot of chiplets, probably RDNA5.
 
Joined
Feb 15, 2018
Messages
260 (0.10/day)
Always good to have choices and competitors..

but Samsung's foundry needs to prove that its node can be efficient as well as provide better performance per watt.

For example: RTX 3090 (Samsung 8nm) vs RTX 4090 (TSMC 5nm). The latter made a huge difference in performance and efficiency, when everyone was guessing the 4090 would consume 800W for 30% more performance.
 
Joined
Nov 6, 2016
Messages
1,773 (0.60/day)
Location
NH, USA
System Name Lightbringer
Processor Ryzen 7 2700X
Motherboard Asus ROG Strix X470-F Gaming
Cooling Enermax Liqmax Iii 360mm AIO
Memory G.Skill Trident Z RGB 32GB (8GBx4) 3200Mhz CL 14
Video Card(s) Sapphire RX 5700XT Nitro+
Storage Hp EX950 2TB NVMe M.2, HP EX950 1TB NVMe M.2, Samsung 860 EVO 2TB
Display(s) LG 34BK95U-W 34" 5120 x 2160
Case Lian Li PC-O11 Dynamic (White)
Power Supply BeQuiet Straight Power 11 850w Gold Rated PSU
Mouse Glorious Model O (Matte White)
Keyboard Royal Kludge RK71
Software Windows 10
The problem is that there is stagnation, and AMD isn't delivering any technological progress. Hopefully, this move will help relieve that stagnation somewhat, and AMD can begin releasing new products.
Radeon RX 7900 XTX is slow, Ryzen 9 7950X is slow. We need new products.
AMD doesn't deliver technological progress?

Weren't they the first to use HBM? First to use chiplets in a CPU, something that Intel eventually copied? Aren't they the first to use chiplets in a consumer GPU? The first to have an 8-core mainstream desktop chip? The first to have a 16-core consumer chip? The first to have 3D V-Cache? ...I could go on. It's arguable that AMD has done more to shape the x86 industry, and certainly the consumer DIY market, than anybody else since the release of Ryzen.

...oh, and AMD has done all that with an R&D budget that is currently over 3x smaller than Intel's, and in the past was over 7x smaller.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
AMD doesn't deliver technological progress?

Weren't they the first to use HBM?

This is not progress, this is a method.

First to use chiplets in a CPU, something that Intel eventually copied?

This is not progress, this is a method.

Aren't they the first to use chiplets in a consumer GPU?

The first to have an 8 core mainstream desktop chip?

The first to have a 16 core consumer chip?

When? 2018? It's already 6 years later, and there is nothing new. And no new manufacturing process. No 3nm, no 2nm, only the 10-year-old 7nm relabeled as 7+nm, and 5nm from 2019.

The first to have 3D v-cache?

And Intel has a ring bus connecting its monolithic core arrangement and has always been faster in gaming; hence AMD needed to do something to close the performance gap.
 
Joined
Nov 27, 2023
Messages
2,500 (6.39/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent (Solid)
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original) on a X-Raypad Equate Plus V2
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (24H2)
Well, hopefully newer processes will go better for Samsung than what we’ve seen before. Most issues with Ampere were caused by their 8nm, if we’re being realistic.

And Intel has a ring bus connecting its monolithic core arrangement and has always been faster in gaming; hence AMD needed to do something to close the performance gap.
Intel is done with monolithic CPUs. It's a pointless talking point at this stage. For all its drawbacks, AMD's chiplet approach proved superior in the long run. Not to mention that, optics aside, gaming performance is really, REALLY not what either company cares about presently. The real battle is for the server/datacenter/supercomputer market. And that one is all about how many cores you can fit on a single socket.
 
Joined
Oct 6, 2021
Messages
1,605 (1.37/day)
Diversifying. That is good for AMD and hopefully for customers. I'm not sure how advanced Samsung's 3nm is or how it compares directly to TSMC's, but from what I read here, Samsung's 3nm gate-all-around FET is nothing to scoff at.
Its density is on a par with TSMC's 3nm, but the yields must be terrible, according to the rumors.

Those rumors are old, though; yields must have improved since, and given the shortage of customers Samsung is facing, AMD could get a massive discount. AMD GPUs need larger chips or MCMs to compete at the high end, and Samsung could certainly help with both.
 
Joined
Jun 1, 2010
Messages
392 (0.07/day)
System Name Very old, but all I've got ®
Processor So old, you don't wanna know... Really!
Diversifying. That is good for AMD and hopefully for customers. I'm not sure how advanced Samsung's 3nm is or how it compares directly to TSMC's, but from what I read here, Samsung's 3nm gate-all-around FET is nothing to scoff at.
Even if Samsung's 3nm GAAFET is slightly inferior, it may be cheaper than TSMC's extortionate prices, which only behemoths like Apple and NVIDIA can pay by throwing billions at allocation. It's still better than sitting and waiting for spare foundry time, with no products with which to participate in the market. There's nothing wrong with making a significant amount of products of "lesser" importance on a slightly inferior node, e.g. some low-end GPUs at more appealing prices.

Don't get me wrong, though! I'm all for progress, and especially for power efficiency, which is crucial for personal reasons. But when the supply of cards is non-existent, and the rival outsells you at a 9:1 ratio, then getting some alternative allocation elsewhere, at least as an experiment, is not a bad idea.
The problem is that there is stagnation, and AMD isn't delivering any technological progress. Hopefully, this move will help relieve that stagnation somewhat, and AMD can begin releasing new products.
Radeon RX 7900 XTX is slow, Ryzen 9 7950X is slow. We need new products.
You're kidding, aren't you? The 7950X is an astonishing chip, and the 7900 XTX is much the same. Both are incredibly good at the tasks they do.
Always good to have choices and competitors..

but Samsung's foundry needs to prove that its node can be efficient as well as provide better performance per watt.

For example: RTX 3090 (Samsung 8nm) vs RTX 4090 (TSMC 5nm). The latter made a huge difference in performance and efficiency, when everyone was guessing the 4090 would consume 800W for 30% more performance.
You're missing the fact that the dog-sh*t node didn't prevent NVIDIA from making a huge fortune on their absolute furnaces of cards. They sold tens of millions of them, like hot cakes. And that was the sole reason they could go for the much more expensive 5nm N4 node later.
AMD doesn't deliver technological progress?

Weren't they the first to use HBM? First to use chiplets in a CPU, something that Intel eventually copied? Aren't they the first to use chiplets in a consumer GPU? The first to have an 8-core mainstream desktop chip? The first to have a 16-core consumer chip? The first to have 3D V-Cache? ...I could go on. It's arguable that AMD has done more to shape the x86 industry, and certainly the consumer DIY market, than anybody else since the release of Ryzen.

...oh, and AMD has done all that with an R&D budget that is currently over 3x smaller than Intel's, and in the past was over 7x smaller.
Some people either don't get it, or are deliberate Wintelvidia trolls.
This is not progress, this is a method.



This is not progress, this is a method.







When? 2018? It's already 6 years later, and there is nothing new. And no new manufacturing process. No 3nm, no 2nm, only the 10-year-old 7nm relabeled as 7+nm, and 5nm from 2019.



And Intel has a ring bus connecting its monolithic core arrangement and has always been faster in gaming; hence AMD needed to do something to close the performance gap.
Intel's monolithic chips are long in the past. They've joined the "snake oil" "glued-chip" manufacturing for almost all their products. The glorified "superior" Intel 12th, 13th and 14th gen are nowhere near a solid silicon chip.
The sole reason the Wintel cartel still exists, along with their "collab" failure called W11, is that Intel is incapable of making 16 performance cores in the same power envelope as AMD. If they could, they wouldn't need to put a bunch of e-waste cores under the same lid. Because if the efficiency were there, the P cores would be able to operate with the efficiency of E cores while keeping their high performance.

Node superiority is sometimes overrated, especially for en-masse lower-end products, which are the bread and butter, like the 7600-7700 XT. Efficiency isn't the problem there, as these consume very little and don't have the power to set benchmark records. Many would swap their old Polaris/Pascal cards in the blink of an eye if the price/specs were there. Unfortunately, paying about $400 for an entry-level GPU is absurd. Were it made on something less expensive than TSMC 5nm, while maintaining a wider bus, it would have become a bestseller overnight.

And NVIDIA is keeping its monolithic design because it can throw literally any amount of money at it while keeping its margins astronomically high.
Well, hopefully newer processes will go better for Samsung than what we’ve seen before. Most issues with Ampere were caused by their 8nm, if we’re being realistic.


Intel is done with monolithic CPUs. It's a pointless talking point at this stage. For all its drawbacks, AMD's chiplet approach proved superior in the long run. Not to mention that, optics aside, gaming performance is really, REALLY not what either company cares about presently. The real battle is for the server/datacenter/supercomputer market. And that one is all about how many cores you can fit on a single socket.
I mean, the mobile GPU in Samsung's SoCs for smartphones/tablets might still resemble the chiplet design after all. That could still be useful overall, as it provides better scalability in the long run: an MCM GPU design can be used in a much wider range of products, from phones and handhelds to premium dGPUs.
 
Last edited:
Joined
Mar 7, 2010
Messages
993 (0.18/day)
Location
Michigan
System Name Daves
Processor AMD Ryzen 3900x
Motherboard AsRock X570 Taichi
Cooling Enermax LIQMAX III 360
Memory 32 GiG Team Group B Die 3600
Video Card(s) Powercolor 5700 xt Red Devil
Storage Crucial MX 500 SSD and Intel P660 NVME 2TB for games
Display(s) Acer 144htz 27in. 2560x1440
Case Phanteks P600S
Audio Device(s) N/A
Power Supply Corsair RM 750
Mouse EVGA
Keyboard Corsair Strafe
Software Windows 10 Pro
AMD doesn't deliver technological progress?

Weren't they the first to use HBM? First to use chiplets in a CPU, something that Intel eventually copied? Aren't they the first to use chiplets in a consumer GPU? The first to have an 8-core mainstream desktop chip? The first to have a 16-core consumer chip? The first to have 3D V-Cache? ...I could go on. It's arguable that AMD has done more to shape the x86 industry, and certainly the consumer DIY market, than anybody else since the release of Ryzen.

...oh, and AMD has done all that with an R&D budget that is currently over 3x smaller than Intel's, and in the past was over 7x smaller.
THIS
 
Joined
May 13, 2008
Messages
762 (0.13/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
Given ARM's new design is catered toward both TSMC/Samsung 3nm, and their performance core for the mainstream market is targeted at 3.8ghz, I don't think Samsung's node is that bad.

Think of Apple, who put their similar-market chip on N3B at 3.78GHz. I think that gives a sufficient comparison.

Probably similar to slightly better than N3B, worse than N3E/P, but not a huge difference...especially not a huge deal given (as mentioned) they may cut some deals that TSMC simply does not need to do.

Which may allow for cheaper and/or otherwise unfeasible products (at this point for AMD) given nVIDIA/Apple likely have TSMC 3nm locked down for roughly the next couple years.

If I had to guess, it's probably similar to N4X performance, which is said to be slightly better than N3B, with better area/power characteristics than the former.

*(FWIW wrt some trolls in this thread...yeah. I'm not going to address those things, but thanks to those that took the time to do so and didn't let them control the narrative [which breeds wider ignorance].)*
 
Last edited:
Joined
Mar 12, 2009
Messages
1,142 (0.20/day)
Location
SCOTLAND!
System Name Machine XX
Processor Ryzen 7600
Motherboard MSI X670E GAMING PLUS
Cooling 120mm heatsink
Memory 32GB DDR5 6000 CL30
Video Card(s) RX5700XT 8Gb
Storage 280GB Optane 900p
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) Soundblaster Z
Power Supply 800W
Software Windows 11
There is plenty of stuff AMD can fab at Samsung to free up TSMC wafers for enterprise. Next-gen game consoles could be fabbed there, along with I/O dies or something like the cache chips the 7900 XTX uses.
 
Joined
May 13, 2008
Messages
762 (0.13/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
There is plenty of stuff AMD can fab at Samsung to free up TSMC wafers for enterprise. Next-gen game consoles could be fabbed there, along with I/O dies or something like the cache chips the 7900 XTX uses.
I don't know about game consoles (as I don't know how far they'll stray into MCM etc), but let's think for a second (not to imply this isn't what you're trying to say) about everything else they make:

A (6?) 7nm v-cache chip is something like 36mm2. A (6nm) MCD is something like 37mm2. A 6nm I/O die for Zen 4 is 122mm2. An 8-core zen 4 chiplet is 66mm2. A 16-core Zen 4c chiplet is ~73mm2.

I doubt N44, for instance, is much, if any larger than AD107 (which is 159mm2) and literally has to be smaller than AD106 (188mm2) on 4nm to make sense inc mc/ic, and is likely 2048(4096)sp.

Assuming the way forward is chiplets/MCM, and an ideal GPU chiplet is probably something like 1536sp, AMD would not need to make any big chips at all...probably just stack/wire them together.

The premise behind risk manufacturing (and now mobile-focused) early nodes used to be whether they could fab a 100mm² chip at 70% yield.

I don't know how big those mining chips they make are, but one would hope Samsung is able to accomplish that by now in a productive manner.

In reality, do AMD need to be able to do much more than that? What's the *most* they would need, especially if the cache is separated wrt GPUs?

Kind of boggles the mind if you think about it.

A 128-bit memory controller with twice the cache or the actual GPU logic would each probably be roughly the same size as a current zen chiplet, or together the size of the current 6nm zen i/o die on 3nm afaict.

I think Samsung could make literally almost EVERYTHING....again, except consoles...bc I don't think anyone knows for sure what process nodes/packaging will make sense and/or be cost-efficient at that time yet.
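For the curious, the chiplet argument above can be sanity-checked with a back-of-the-envelope calculation. This sketch uses the die sizes quoted in this post; the defect density (0.5 defects/cm²) is a purely assumed, illustrative figure, not a real Samsung number, and the classic Poisson model is itself only a rough approximation:

```python
# Rough illustration of why small chiplets are attractive on an immature node.
# Die sizes come from the post above; the defect density is an assumption.
import math

def poisson_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-area_cm2 * d0_per_cm2)

def dies_per_300mm_wafer(area_mm2: float) -> int:
    """Gross die count from raw area, ignoring edge loss and scribe lines."""
    wafer_area_mm2 = math.pi * (300 / 2) ** 2
    return int(wafer_area_mm2 / area_mm2)

for name, area in [("Zen 4 CCD", 66), ("Zen 4 I/O die", 122), ("AD107-class GPU", 159)]:
    y = poisson_yield(area, d0_per_cm2=0.5)  # assumed defect density
    good = int(dies_per_300mm_wafer(area) * y)
    print(f"{name:>16}: {area:4d} mm2, yield ~{y:.0%}, ~{good} good dies/wafer")
```

Under those assumptions, a 66 mm² chiplet yields above 70% while a ~159 mm² monolithic die falls well below that, which is the whole economic case for fabbing small chiplets on a node with immature yields.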
 
Last edited:
Joined
Mar 12, 2009
Messages
1,142 (0.20/day)
Location
SCOTLAND!
System Name Machine XX
Processor Ryzen 7600
Motherboard MSI X670E GAMING PLUS
Cooling 120mm heatsink
Memory 32GB DDR5 6000 CL30
Video Card(s) RX5700XT 8Gb
Storage 280GB Optane 900p
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) Soundblaster Z
Power Supply 800W
Software Windows 11
I don't know about game consoles (as I don't know how far they'll stray into MCM etc), but let's think for a second (not to imply this isn't what you're trying to say) about everything else they make:

A (6?) 7nm v-cache chip is something like 36mm2. A (6nm) MCD is something like 37mm2. A 6nm I/O die for Zen 4 is 122mm2. An 8-core zen 4 chiplet is 66mm2. A 16-core Zen 4c chiplet is ~73mm2.

I doubt N44, for instance, is much, if any larger than AD107 (which is 159mm2) and literally has to be smaller than AD106 (188mm2) on 4nm to make sense inc mc/ic, and is likely 2048(4096)sp.

Assuming the way forward is chiplets/MCM, and an ideal GPU chiplet is probably something like 1536sp, AMD would not need to make any big chips at all...probably just stack/wire them together.

The premise behind risk manufacturing (and now mobile-focused) early nodes used to be whether they could fab a 100mm² chip at 70% yield.

I don't know how big those mining chips they make are, but one would hope Samsung is able to accomplish that by now in a productive manner.

In reality, do AMD need to be able to do much more than that? What's the *most* they would need, especially if the cache is separated wrt GPUs?

Kind of boggles the mind if you think about it.

A 128-bit memory controller with twice the cache or the actual GPU logic would each probably be roughly the same size as a current zen chiplet, or together the size of the current 6nm zen i/o die on 3nm afaict.

I think Samsung could make literally almost EVERYTHING....again, except consoles...bc I don't think anyone knows for sure what process nodes/packaging will make sense and/or be cost-efficient at that time yet.
There is also all the FPGA stuff that AMD now makes.
 
Joined
May 13, 2010
Messages
6,081 (1.14/day)
System Name RemixedBeast-NX
Processor Intel Xeon E5-2690 @ 2.9Ghz (8C/16T)
Motherboard Dell Inc. 08HPGT (CPU 1)
Cooling Dell Standard
Memory 24GB ECC
Video Card(s) Gigabyte Nvidia RTX2060 6GB
Storage 2TB Samsung 860 EVO SSD//2TB WD Black HDD
Display(s) Samsung SyncMaster P2350 23in @ 1920x1080 + Dell E2013H 20 in @1600x900
Case Dell Precision T3600 Chassis
Audio Device(s) Beyerdynamic DT770 Pro 80 // Fiio E7 Amp/DAC
Power Supply 630w Dell T3600 PSU
Mouse Logitech G700s/G502
Keyboard Logitech K740
Software Linux Mint 20
Benchmark Scores Network: APs: Cisco Meraki MR32, Ubiquiti Unifi AP-AC-LR and Lite Router/Sw:Meraki MX64 MS220-8P
Because of how fragile the political situation with Taiwan is, this move makes a lot of sense.
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500GB / Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
The problem is that there is stagnation, and AMD isn't delivering any technological progress. Hopefully, this move will help relieve that stagnation somewhat, and AMD can begin releasing new products.
Radeon RX 7900 XTX is slow, Ryzen 9 7950X is slow. We need new products.
I totally disagree with you on both products. First you need to define slow, because something tells me it's more like "not the fastest" rather than slow.

Even if Samsung's 3nm GAAFET is slightly inferior, it may be cheaper than TSMC's extortionate prices, which only behemoths like Apple and NVIDIA can pay by throwing billions at allocation. It's still better than sitting and waiting for spare foundry time, with no products with which to participate in the market. There's nothing wrong with making a significant amount of products of "lesser" importance on a slightly inferior node, e.g. some low-end GPUs at more appealing prices.

Don't get me wrong, though! I'm all for progress, and especially for power efficiency, which is crucial for personal reasons. But when the supply of cards is non-existent, and the rival outsells you at a 9:1 ratio, then getting some alternative allocation elsewhere, at least as an experiment, is not a bad idea.
I disagree with the "only behemoths can pay" thing here. AMD is huge; they can pay, but diversification is key. If TSMC fails to deliver, you have an alternative. From what I read, Samsung's 3nm with GAAFET is top notch. I'm pretty sure it is not 9:1 with gaming GPUs, but OK. I disagree for a variety of reasons, one of which you already know. I will only mention that gaming GPUs are not the only products out there. No company throws money around without good reason. If you think these behemoths spend money without a plan for what it will bring back in cash, then you are delusional.
 
Joined
May 13, 2008
Messages
762 (0.13/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
I've been genuinely curious over the past few days...with Samsung pulling 2nm forward to 2025 mass production (with a much-reported mobile chip [perhaps as big a chip as AMD may need] coming beginning '26 for the S26)...

...and incorporating BSPD (backside power delivery, which TSMC won't incorporate until beyond 2nm), which initially wasn't planned until '1.7nm'; that node was apparently cancelled because its major enhancement was the BSPD they pulled in...

...What are the odds of a 2026 AMD Samsung 2nm CPU and/or GPU coup?

Don't get me wrong, I'm waiting for Samsung's (SAFE) conference in a couple weeks to get a better idea if that's conceivable, but it appears...possible?

I mean, they already have the tough part (gate-all-around FETs) done for 3nm, for which yields appear to be improving...and they've already successfully tested chips with BSPD.

It's just a thought...A kind of exciting thought, if you ask me.

IIRC, I believe the quote from Dr. Su mentioned something to the effect of 'Samsung's gate-around transistors', not necessarily the 3nm process. That, I think, was inferred by the press, but think about it.

Given their product cadence, it *could* happen...and actually be pretty exciting (depending upon on how TSMC 2nm perform w/o BSPD, the timetable/perf/availability of the process with it; also Intel's 18A).

Before someone says it, yes, I am aware Samsung has ambitiously been trying to catch up (with little success), even starting 3nm production earlier than TSMC...but you never know...it might just work out!
 