Wednesday, January 1st 2025

TSMC Is Getting Ready to Launch Its First 2nm Production Line

TSMC is making progress with its most advanced 2 nm (N2) node. A recent report from MoneyDJ, citing industry sources, indicates that the company is setting up a test production line at its Baoshan fab (Fab 20) in Hsinchu, Taiwan. In the early stages, TSMC is aiming for a small monthly output of about 3,000-3,500 wafers. The company has bigger plans, however: combining production from its two sites in Hsinchu and Kaohsiung, TSMC expects to deliver more than 50,000 wafers monthly by the end of 2025 and projects around 125,000 wafers per month by the end of 2026. Breaking it down by location, the Hsinchu fab should reach 20,000-25,000 wafers monthly by late 2025, growing to about 60,000-65,000 by early 2027, while the Kaohsiung fab is expected to produce 25,000-30,000 wafers monthly by late 2025, also increasing to 60,000-65,000 by early 2027.
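As a quick sanity check, the per-site figures bracket the headline totals. Here is a minimal sketch of the arithmetic (the wafer ranges come straight from the report; the code itself is purely illustrative):

```cpp
#include <cstdio>

int main() {
    // Per-site N2 ramp targets from the MoneyDJ report (wafers per month).
    int hsinchu_2025[2]   = {20000, 25000};  // Hsinchu, late 2025
    int kaohsiung_2025[2] = {25000, 30000};  // Kaohsiung, late 2025
    int per_site_2027[2]  = {60000, 65000};  // each site, early 2027

    // Late 2025: 45,000-55,000 combined, consistent with ">50,000 monthly".
    printf("Late 2025:  %d-%d wafers/month\n",
           hsinchu_2025[0] + kaohsiung_2025[0],
           hsinchu_2025[1] + kaohsiung_2025[1]);

    // Early 2027: 120,000-130,000 combined, consistent with ~125,000 monthly.
    printf("Early 2027: %d-%d wafers/month\n",
           2 * per_site_2027[0], 2 * per_site_2027[1]);
    return 0;
}
```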

TSMC's chairman C.C. Wei says there is more demand for these 2 nm chips than there was for 3 nm. This increased "appetite" for 2 nm chips is likely due to the significant improvements the technology brings: it uses 24-35% less power, can run 15% faster at the same power level, and can fit 15% more transistors in the same space compared to 3 nm chips. Apple will be the first company to use these chips, followed by other major tech companies such as MediaTek, Qualcomm, Intel, NVIDIA, AMD, and Broadcom.
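For a rough sense of what those figures mean for efficiency, here is a back-of-the-envelope sketch. It assumes the 24-35% power reduction is quoted at matched performance, the usual convention for node comparisons, though the report does not say so explicitly:

```cpp
#include <cstdio>

int main() {
    // TSMC's quoted N2-over-N3 gains: 24-35% lower power at the same speed,
    // or 15% more speed at the same power, plus ~15% higher density.
    double power_cut_lo = 0.24, power_cut_hi = 0.35;

    // At matched performance, perf/watt improves by 1 / (1 - power cut),
    // i.e. roughly 1.32x to 1.54x.
    printf("Iso-performance perf/watt gain: %.2fx to %.2fx\n",
           1.0 / (1.0 - power_cut_lo), 1.0 / (1.0 - power_cut_hi));
    return 0;
}
```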
Sources: TrendForce, MoneyDJ

34 Comments on TSMC Is Getting Ready to Launch Its First 2nm Production Line

#1
A Computer Guy
Wow 2nm, are we nearly reaching the limit to how small we can go?
#2
THU31
15% more transistors? That's terrible. Aren't they supposed to be charging 50% more for this node compared to 3 nm (30k vs. 20k)?

Things are not looking good.
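To put rough numbers on it (the wafer prices are rumors, not confirmed figures): a 50% higher wafer price divided by only 15% more transistors means the cost per transistor rises about 30%.

```cpp
#include <cstdio>

int main() {
    // Rumored wafer prices (unconfirmed): ~$30k for N2 vs. ~$20k for N3.
    double n3_wafer = 20000.0, n2_wafer = 30000.0;
    double density  = 1.15;  // ~15% more transistors in the same area

    // Cost per transistor scales with (wafer price / density):
    // 1.50 / 1.15 = ~1.30, i.e. ~30% more per transistor on N2.
    printf("N2 cost per transistor vs. N3: %.2fx\n",
           (n2_wafer / n3_wafer) / density);
    return 0;
}
```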
#3
oxrufiioxo
A Computer Guy: Wow 2nm, are we nearly reaching the limit to how small we can go?
It will be interesting, once they do hit a limit, to see what actually happens with the hardware we love. This is probably why Nvidia is investing so much in other technologies that at a minimum give the perception that a game is running faster. Imagine a frame-gen-like technology where you could set the actual framerate with no artifacts, or at the very least none visible to the naked eye, and no latency penalty... I think stuff like this will be the only way forward in a couple of generations. I personally hope that I am wrong, but I guess we will see.
THU31: 15% more transistors? That's terrible. Aren't they supposed to be charging 50% more for this node compared to 3 nm (30k vs. 20k)?

Things are not looking good.
Yeah, the power savings are nice, but that is about it. 15% more performance at the same power sucks for what this costs, but when you are the only player in town, you set the market.
#4
Prima.Vera
A Computer Guy: Wow 2nm, are we nearly reaching the limit to how small we can go?
2nm is just a marketing name. There is nothing that small inside a chip. The gate size is actually around 45nm, while the smallest metal pitch is ~20nm.
en.wikipedia.org/wiki/2_nm_process
#5
Marcus L
THU31: 15% more transistors? That's terrible. Aren't they supposed to be charging 50% more for this node compared to 3 nm (30k vs. 20k)?

Things are not looking good.
$4k+ RTX 6090/7090 incoming :fear:
#6
oxrufiioxo
Marcus L: $4k+ RTX 6090/7090 incoming :fear:
I doubt Nvidia will use N2, or whatever TSMC actually calls it, for GPUs, and if they do, it will be discounted by then. We are probably 24-28 months or so from the next GPU launch after the 5000 series, and the way the lack of competition is going, maybe longer... Think about it: at what point will AMD even have a 4090 competitor, the way it's going... 18-24 months from now, maybe?
#7
kondamin
A Computer Guy: Wow 2nm, are we nearly reaching the limit to how small we can go?
It's not unlikely we're going to be seeing stacked compute pretty soon.
SRAM and IO right on top of the cores, or below.
#8
Wirko
kondamin: It's not unlikely we're going to be seeing stacked compute pretty soon.
SRAM and IO right on top of the cores, or below.
We already have that (if "we" are the companies who buy MI300).
If you mean monolithic stacking, the road ahead is winding and unclear. CFETs may come in a few years but that's only two layers, and it doesn't seem anyone has an idea how to stack more.
Prima.Vera: 2nm is just a marketing name. There is nothing that small inside a chip. The gate size is actually around 45nm, while the smallest metal pitch is ~20nm.
en.wikipedia.org/wiki/2_nm_process
IBM made a test chip on 2 nm about three and a half years ago. If you're an optimist, you'll appreciate the fact that some insulation or passivation layer actually measured 2 nm thick in X-ray images (but of course that's not the reason IBM called it 2 nm).
THU31: 15% more transistors? That's terrible. Aren't they supposed to be charging 50% more for this node compared to 3 nm (30k vs. 20k)?

Things are not looking good.
The intended customers won't call it terrible as long as perf/watt keeps going up.
#9
matar
2-nm then 1-nm then 0.1-nm then skynet.nm
#10
windwhirl
THU31: 15% more transistors? That's terrible. Aren't they supposed to be charging 50% more for this node compared to 3 nm (30k vs. 20k)?

Things are not looking good.
The more cutting-edge the technology, the more expensive it gets. And that price curve rarely, if ever, scales proportionally with the actual improvement.

For this kind of thing in particular, you're not just paying the operating costs of making the product, but also the R&D for this process and for future processes. And TSMC's margins on top, they're not a charity after all.

That aside, I assume Apple is going to be the first one to use this fab process, and maybe the only one to use it for some time. IIRC they had a bit of an exclusivity deal on 3nm for a little while?
#11
bonehead123
Only 2 more steps to go on the (current) silicon road, then what?

Prefix                  Measurement           Scientific notation
Milli-                  0.001 m               1 x 10^-3 m
Micro-                  0.000001 m            1 x 10^-6 m
Nano- (chip sizes now)  0.000000001 m         1 x 10^-9 m
Pico-                   0.000000000001 m      1 x 10^-12 m
Femto-                  0.000000000000001 m   1 x 10^-15 m
#12
kondamin
windwhirl: The more cutting-edge the technology, the more expensive it gets. And that price curve rarely, if ever, scales proportionally with the actual improvement.

For this kind of thing in particular, you're not just paying the operating costs of making the product, but also the R&D for this process and for future processes. And TSMC's margins on top, they're not a charity after all.

That aside, I assume Apple is going to be the first one to use this fab process, and maybe the only one to use it for some time. IIRC they had a bit of an exclusivity deal on 3nm for a little while?
Kinda. N3B is a bad node; other than Apple and Intel, no one was interested. I think it's why they moved away from the M3 lineup as fast as they did, in favour of N3E, which is a lot better.
#13
Frank_100
A Computer Guy: Wow 2nm, are we nearly reaching the limit to how small we can go?
A gate-all-around transistor is as efficient, and can be packed as densely, as a 2nm MOSFET would be if 2nm MOSFETs were possible.

At least that is the current story.
#14
freeagent
bonehead123: Only 2 more steps to go on the (current) silicon road, then what?

Prefix                  Measurement           Scientific notation
Milli-                  0.001 m               1 x 10^-3 m
Micro-                  0.000001 m            1 x 10^-6 m
Nano- (chip sizes now)  0.000000001 m         1 x 10^-9 m
Pico-                   0.000000000001 m      1 x 10^-12 m
Femto-                  0.000000000000001 m   1 x 10^-15 m
#15
TheinsanegamerN
THU31: 15% more transistors? That's terrible. Aren't they supposed to be charging 50% more for this node compared to 3 nm (30k vs. 20k)?

Things are not looking good.
On the plus side, the stagnation of hardware means that what you buy should last longer. Six years out of a GPU isn't out of the realm of possibility now, and maybe we'll be seeing 8-10 years as GPU cadence slows down further, with new generations focusing more on Maxwell-style efficiency.
#16
oxrufiioxo
TheinsanegamerN: On the plus side, the stagnation of hardware means that what you buy should last longer. Six years out of a GPU isn't out of the realm of possibility now, and maybe we'll be seeing 8-10 years as GPU cadence slows down further, with new generations focusing more on Maxwell-style efficiency.
I do think they will figure out how to use multiple smaller dies on one chip, and that will likely be the way forward... I was hoping AMD's push into chiplet GPUs would lead to this, but they seemingly gave up after one generation. I guess we will see what UDNA brings.
#17
Macro Device
oxrufiioxo: multiple smaller dies on one chip
My uneducated guess is we need to work around this humongous latency penalty before this becomes a thing. Most likely it requires game engines to be reworked from scratch to rely on this "minigun" model. 'parently we gotta wait till Team Green make their move; they're the boss here.
With gaming effectively becoming a byproduct, I don't really know what prevents this from being developed, though. The latency isn't that big of an issue in most non-gaming tasks.

And yes, I personally don't care how many nanometres they're using. How much I get for my money is what matters, and if they can't shrink the node in a way that's more cost-efficient than the previous iteration, then it doesn't really make me happy. Yes, sure, many people need whatever performance they can get at whatever cost, so those guys (ahem, more like corporations) will be satisfied.
#18
oxrufiioxo
Macro Device: My uneducated guess is we need to work around this humongous latency penalty before this becomes a thing. Most likely it requires game engines to be reworked from scratch to rely on this "minigun" model. 'parently we gotta wait till Team Green make their move; they're the boss here.
With gaming effectively becoming a byproduct, I don't really know what prevents this from being developed, though. The latency isn't that big of an issue in most non-gaming tasks.

And yes, I personally don't care how many nanometres they're using. How much I get for my money is what matters, and if they can't shrink the node in a way that's more cost-efficient than the previous iteration, then it doesn't really make me happy. Yes, sure, many people need whatever performance they can get at whatever cost, so those guys (ahem, more like corporations) will be satisfied.
The hope would be that they figure out how to make Windows/games just view it as one large GPU. People much smarter than me will hopefully figure this out, because large dies on consumer GPUs might be going the way of the dodo, or getting so expensive that 99% of buyers won't be able to purchase them; one could say that is already the case... The days of the xx70 card matching the flagship one gen later are seemingly dead, at a minimum.

All speculation at this point; who knows what will actually happen.
#19
TheinsanegamerN
oxrufiioxo: The hope would be that they figure out how to make Windows/games just view it as one large GPU. People much smarter than me will hopefully figure this out, because large dies on consumer GPUs might be going the way of the dodo, or getting so expensive that 99% of buyers won't be able to purchase them; one could say that is already the case... The days of the xx70 card matching the flagship one gen later are seemingly dead, at a minimum.

All speculation at this point; who knows what will actually happen.
The thing that gets me is... we have that technology! It's called DX12 multi-GPU. It works across vendors and, as Ashes of the Singularity showed, doesn't have the latency or driver issues of SLI/CrossFire of old.

Why this tech has just been sidelined is beyond me. The simplest answer is that multiple smaller dies would be more efficient as node shrinks stop being possible.
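For what it's worth, the enumeration side really is baked into the core API. A minimal Windows-only sketch of the explicit multi-adapter starting point (my own illustrative code, not from any shipping game), which just finds every GPU and creates a device on each:

```cpp
// Enumerate every GPU and create an independent D3D12 device on each.
// Real multi-GPU rendering (cross-adapter heaps, fences, splitting the
// frame) is built by the application on top of these per-adapter devices.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#include <cwchar>
#pragma comment(lib, "d3d12.lib")
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip WARP

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"Created device on: %s\n", desc.Description);
            devices.push_back(device);
        }
    }
    // From here the application, not the driver, decides how work is split
    // across `devices`, and that is exactly the part developers have to write.
    return 0;
}
```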
#20
oxrufiioxo
TheinsanegamerN: The thing that gets me is... we have that technology! It's called DX12 multi-GPU. It works across vendors and, as Ashes of the Singularity showed, doesn't have the latency or driver issues of SLI/CrossFire of old.

Why this tech has just been sidelined is beyond me. The simplest answer is that multiple smaller dies would be more efficient as node shrinks stop being possible.
My guess is developers do not want to implement it; from my understanding, it is completely on the developer and is a lot more work. They can't even get shader compilation and traversal stutter solved, so I doubt the implementations would be all that great across the board. I agree with you, though: this being abandoned has sucked, although I can't imagine rocking two 4080/90s lmao... At least not unless I was rocking an open test bench.
#22
TheinsanegamerN
freeagent: SLI and CrossFire were fun, RIP.
I still have my 1200w platinum PSU and big case, just begging for new GPUs.....
oxrufiioxo: My guess is developers do not want to implement it; from my understanding, it is completely on the developer and is a lot more work. They can't even get shader compilation and traversal stutter solved, so I doubt the implementations would be all that great across the board. I agree with you, though: this being abandoned has sucked, although I can't imagine rocking two 4080/90s lmao... At least not unless I was rocking an open test bench.
I can't imagine it's hard, since it's part of the API now, right? I got the impression that it's far easier to work with than SLI/CrossFire.

I'm surprised that MS, for instance, doesn't mandate its use in their games. THEY made the API. Why can't I rock dual GPUs in Halo Infinite or Gears or Forza? What about EA? They used to support SLI; let's see some dual-GPU action in Battlefield! Especially with ray tracing and all sorts of new demanding tech, games are begging for two or even three GPUs running in sync.

I'm just saying, imagine three 16GB 4060s running in sync. That would be something.

We could handle the heat. We handled three or even four GTX 580s back in the day; those were 350 watts apiece and didn't have the thermal transfer issues of modern hardware, so they were DUMPING out heat. Side fans on cases worked absolute wonders.
#23
N/A
oxrufiioxo: I doubt Nvidia will use N2, or whatever TSMC actually calls it, for GPUs, and if they do, it will be discounted by then. We are probably 24-28 months or so from the next GPU launch after the 5000 series, and the way the lack of competition is going, maybe longer... Think about it: at what point will AMD even have a 4090 competitor, the way it's going... 18-24 months from now, maybe?
Nvidia needs to use A16 to deliver some meaningful generational improvement and shrink the 102 die to 600 mm² again... 744 mm² is ridiculously huge.
The next GPU can't be that late... so it might be even shorter than 24 months.
#24
oxrufiioxo
N/A: Nvidia needs to use A16 to deliver some meaningful generational improvement and shrink the 102 die to 600 mm² again... 744 mm² is ridiculously huge.
The next GPU can't be that late... so it might be even shorter than 24 months.
If it is shorter, it's because they want to supply AI startups with it... Most generations are 24 months, with some going slightly longer. I think this one has been 26 months, assuming at least one 5000-series card launches in January, and it's on basically the same node...
TheinsanegamerN: I can't imagine it's hard, since it's part of the API now, right? I got the impression that it's far easier to work with than SLI/CrossFire.
Nixxes tried it with Rise of the Tomb Raider, then abandoned it in all future releases. That's really the only high-profile game I can remember actually using it. I don't even remember how it performed.
#25
Random_User
TheinsanegamerN: On the plus side, the stagnation of hardware means that what you buy should last longer. Six years out of a GPU isn't out of the realm of possibility now, and maybe we'll be seeing 8-10 years as GPU cadence slows down further, with new generations focusing more on Maxwell-style efficiency.
Yes, and the node R&D is already paid off, and mature nodes usually come with better efficiency, bigger supply, and fewer defects. Thus it should be cheaper.
But we all know the silicon companies do not like simple and cheap stuff, as there's nowhere to stuff their 60-70% margins, especially if the foundries are flooded with defect-free chips. They won't sell more; they will still manufacture scarcity.
Another shortage here, another accident there...