Wednesday, January 1st 2025

TSMC Is Getting Ready to Launch Its First 2nm Production Line

TSMC is making progress with its most advanced 2 nm (N2) node. A recent report from MoneyDJ, citing industry sources, indicates that the company is setting up a test production line at the Baoshan fab (Fab 20) in Hsinchu, Taiwan. In the early stages, TSMC aims for a small monthly output of about 3,000-3,500 wafers. However, the company has bigger plans: combining production from its two sites in Hsinchu and Kaohsiung, TSMC expects to deliver more than 50,000 wafers monthly by the end of 2025, and projects around 125,000 wafers per month by the end of 2026. Breaking it down by location, the Hsinchu fab should reach 20,000-25,000 wafers monthly by late 2025, growing to about 60,000-65,000 by early 2027, while the Kaohsiung fab is expected to produce 25,000-30,000 wafers monthly by late 2025, also increasing to 60,000-65,000 by early 2027.
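As a quick sanity check on those figures (just back-of-envelope arithmetic on the numbers quoted above, nothing from TSMC itself), summing the low and high ends of the two fabs' estimates lines up with the combined totals:

#include <cstdio>

int main() {
    // Per-fab monthly wafer estimates from the MoneyDJ report (wafers per month).
    const int hsinchu_2025[2]   = {20000, 25000};
    const int kaohsiung_2025[2] = {25000, 30000};
    const int hsinchu_2027[2]   = {60000, 65000};
    const int kaohsiung_2027[2] = {60000, 65000};

    // Late 2025: 45,000-55,000 combined, consistent with ">50,000".
    std::printf("Late 2025:  %d-%d wafers/month combined (report: >50,000)\n",
                hsinchu_2025[0] + kaohsiung_2025[0],
                hsinchu_2025[1] + kaohsiung_2025[1]);

    // Early 2027: 120,000-130,000 combined, consistent with "~125,000 by end of 2026".
    std::printf("Early 2027: %d-%d wafers/month combined (report: ~125,000 by end of 2026)\n",
                hsinchu_2027[0] + kaohsiung_2027[0],
                hsinchu_2027[1] + kaohsiung_2027[1]);
    return 0;
}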

TSMC's chairman C.C. Wei says there's more demand for these 2 nm chips than there was for the 3 nm. This increased "appetite" for 2 nm chips is likely due to the significant improvements this technology brings: it uses 24-35% less power, can run 15% faster at the same power level, and can fit 15% more transistors in the same space compared to the 3 nm chips. Apple will be the first company to use these chips, followed by other major tech companies like MediaTek, Qualcomm, Intel, NVIDIA, AMD, and Broadcom.
Sources: TrendForce, MoneyDJ

38 Comments on TSMC Is Getting Ready to Launch Its First 2nm Production Line

#26
THU31
TheinsanegamerN: On the plus side, the stagnation of hardware means that what you buy should last longer. 6 years out of GPUs isn't out of the realm of possibility now, and maybe we'll be seeing 8-10 years as GPU cadence slows down further, with new generations focusing more on Maxwell-style efficiency.
I don't know if stagnation is good when you have such gigantic differences within a generation. The 5090 will most likely be 4 times faster than the 5060. How is ray tracing ever supposed to become popular when the entry-level cards can only run it theoretically?

I never had a problem upgrading GPUs every generation when prices were steady. I always got a big improvement, and the cost wasn't very high after selling the old GPU. But these days you have to pay more to get a small improvement, which is also a result of diminishing returns in graphics technology. You need the latest card to run a brand new game that looks marginally better than games from 5 years ago.
Hardware is expensive to R&D and manufacture, and games are expensive to develop (and take a long time); it's a slippery slope. Something needs to change before the gaming industry crashes. But I guess these companies don't really care about gaming; the entire focus is on professional markets, which are eating up all these new chips no matter the price.
#27
windwhirl
TheinsanegamerN: I can't imagine it's hard since it's part of the API now, right? I got the impression that it's far easier to work with than SLI/CrossFire.
Most likely, the development cost doesn't make sense compared to the number of people that would actually use mGPU.

Not sure it's that much easier either, since it seems like it's low level code?

You'd have to ask the big game engine makers (UE5, Unity, idTech, Godot, and such) whether they actually provide mGPU support, as well.
#28
Daven
Prima.Vera: 2 nm is just a marketing name. There is nothing that small inside a chip. The gate pitch is actually around 45 nm, while the smallest metal pitch is ~20 nm.
en.wikipedia.org/wiki/2_nm_process
I’m not sure if our fellow tech enthusiasts or tech journalists will ever drop the ‘nm’ from these articles. Even the fabs don’t use ‘nm’, calling nodes names like 18A, SF2, N2P, etc. But thank you for continuing to push the real feature sizes.

Edit: Here is a great article if you want to know everything about chip sizes.

www.angstronomics.com/p/the-truth-of-tsmc-5nm
#29
maximumterror
Daven: I’m not sure if our fellow tech enthusiasts or tech journalists will ever drop the ‘nm’ from these articles. Even the fabs don’t use ‘nm’, calling nodes names like 18A, SF2, N2P, etc. But thank you for continuing to push the real feature sizes.

Edit: Here is a great article if you want to know everything about chip sizes.

www.angstronomics.com/p/the-truth-of-tsmc-5nm
It is an X nm process (PROCESS). This means that, according to this process, there are roughly that many transistors per square millimeter:
A 7 nm process has ~100 MTr/mm²
A 5 nm process has ~130 MTr/mm²
A 3 nm process has ~200 MTr/mm²

Actually, my brain can't process such numbers: 200,000,000 transistors per square millimeter.
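To make those densities a bit more tangible, here is a small back-of-envelope sketch; the 100 mm² die area is an assumption for illustration only, and the density figures are just the round numbers quoted above:

#include <cstdio>

int main() {
    // Approximate logic densities quoted above, in millions of transistors
    // per square millimeter (MTr/mm^2).
    const struct { const char* node; double mtr_per_mm2; } nodes[] = {
        {"7 nm-class", 100.0},
        {"5 nm-class", 130.0},
        {"3 nm-class", 200.0},
    };

    const double die_area_mm2 = 100.0;  // assumed die size, purely for illustration

    for (const auto& n : nodes) {
        const double total   = n.mtr_per_mm2 * 1e6 * die_area_mm2;  // transistors on the whole die
        const double per_um2 = n.mtr_per_mm2;                       // 1 MTr/mm^2 = 1 transistor/um^2
        std::printf("%s: ~%.0f billion transistors on a %.0f mm^2 die, ~%.0f per square micrometer\n",
                    n.node, total / 1e9, die_area_mm2, per_um2);
    }
    return 0;
}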
#30
Wirko
Daven: Even the fabs don’t use ‘nm’, calling nodes names
True, they have become a bit shy, but TSMC for example still tells us that variants of N5 and N4 nodes belong in the 5 nm family.
maximumterror: Actually, my brain can't process such numbers: 200,000,000 transistors per square millimeter
Imagine 200 transistors per square micrometer. 200 is easy. A micrometer is easy too, right?
#31
maximumterror
Wirko: True, they have become a bit shy, but TSMC for example still tells us that variants of N5 and N4 nodes belong in the 5 nm family.


Imagine 200 transistors per square micrometer. 200 is easy. A micrometer is easy too, right?
Yes, that's definitely "True". Except none of them said that the transistor is 5 nm in size; it's a 5 nm process/technology.

P.S. But again, I have a feeling I'm "off topic".
Wirko: Imagine 200 transistors per square micrometer. 200 is easy. A micrometer is easy too, right?
If it's that easy, go ahead and make 200 million transistors per square millimeter.
#32
Vayra86
TheinsanegamerN: The thing that gets me is... we have that technology! It's called DX12 multi-GPU. It works across vendors and, as Ashes of the Singularity showed, doesn't have the latency or driver issues of SLI/CrossFire of old.

Why this tech has just been sidelined is beyond me. The simplest answer is that multiple smaller dies would be more efficient as node shrinks stop being possible.
What you need from the API, though, is some sort of abstraction layer that eliminates the dev work needed to use mGPU. Or you need logic for that in each GPU, which I think is far more plausible; the hardware itself needs to adjust for the best results in terms of latency. There's also inevitably going to be some scaling penalty.

It's been tried... even before DX12. I can't remember the name, but there was a thing that wanted to use your iGPU alongside your dGPU.
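For what it's worth, the adapter-enumeration half of DX12 multi-GPU really is simple. Below is a minimal, Windows-only sketch (assuming the Windows 10 SDK headers dxgi1_4.h and d3d12.h) that finds every hardware GPU and creates a D3D12 device on each; the genuinely hard part that games would still have to write, such as cross-adapter heaps, fences and splitting the frame between GPUs, is left out:

#include <cstdio>
#include <vector>
#include <wrl/client.h>
#include <dxgi1_4.h>
#include <d3d12.h>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip the WARP software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            std::wprintf(L"Usable GPU %u: %ls\n", i, desc.Description);
            devices.push_back(device);  // each device would get its own queues and command lists
        }
    }
    std::wprintf(L"%u GPU(s) could be driven independently.\n",
                 static_cast<unsigned>(devices.size()));
    return 0;
}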
#34
3valatzy
THU31: 15% more transistors? That's terrible. Aren't they supposed to be charging 50% more for this node compared to 3 nm (30k vs. 20k)?

Things are not looking good.
Author: it uses 24-35% less power, can run 15% faster at the same power level, and can fit 15% more transistors in the same space compared to the 3 nm chips
This means it is a "plus" process node. Not a full-node shrink, not a half-node shrink, but rather less than a quarter-node shrink.

The good news is that if you buy something decent today, such as a Ryzen AI 9 and a Radeon RX 7600 or Radeon RX 9070, you will be fine not upgrading for a long time, since you will never get an upgrade-worthy performance jump from the next-generation CPUs and GPUs.

That said, 3 nm Radeons/Ryzens are a 2027 thing, and 2 nm Radeons/Ryzens are more of a 2030 thing.
#35
user556
TSMC's chairman C.C. Wei says there's more demand for these 2 nm chips than there was for the 3 nm. This increased "appetite" for 2 nm chips is likely due to the significant improvements this technology brings: it uses 24-35% less power, can run 15% faster at the same power level, and can fit 15% more transistors in the same space compared to the 3 nm chips. Apple will be the first company to use these chips, followed by other major tech companies like MediaTek, Qualcomm, Intel, NVIDIA, AMD, and Broadcom.
We've got some conflicting info now. Rumours have Apple, Nvidia and Qualcomm all pulling out. Although, if there's anything to it at all, I suspect it's only grumbling about price, i.e. they haven't really pulled out.
#36
Wirko
3valatzy: This means it is a "plus" process node. Not a full-node shrink, not a half-node shrink, but rather less than a quarter-node shrink.
No. This means that the era of full node shrinks is over. My law, "Just add thirty", applies here, even if very roughly. Going from N3 to N2 is more like going from 33 nm to 32 nm.
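Both readings can be sanity-checked with a couple of lines of back-of-envelope math (my own arithmetic, treating a full node as a 2x density jump and using the ~15% density figure from the article): the N3-to-N2 step works out to roughly a fifth of a full node, or an equivalent linear shrink from about 33 nm to about 31 nm:

#include <cmath>
#include <cstdio>

int main() {
    const double density_gain  = 1.15;                       // N3 -> N2, ~15% per the article
    const double node_fraction = std::log2(density_gain);    // a "full node" assumed to be a 2x density jump
    const double linear_shrink = std::sqrt(density_gain);    // density scales roughly with 1/length^2

    std::printf("A %.0f%% density gain is ~%.2f of a full node.\n",
                (density_gain - 1.0) * 100.0, node_fraction);
    std::printf("Equivalent linear shrink: ~%.1f%%, i.e. roughly 33 nm -> %.1f nm.\n",
                (1.0 - 1.0 / linear_shrink) * 100.0, 33.0 / linear_shrink);
    return 0;
}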
#37
kapone32
TheinsanegamerN: On the plus side, the stagnation of hardware means that what you buy should last longer. 6 years out of GPUs isn't out of the realm of possibility now, and maybe we'll be seeing 8-10 years as GPU cadence slows down further, with new generations focusing more on Maxwell-style efficiency.
What stagnation are we talking about? My current CPU trounces my last CPU, and if you think a 6800 XT is as fast as a 7900 XT, you would be wrong. My current PC is the fastest I have ever owned, and progress feels pretty good to me.
TheinsanegamerN: I still have my 1200 W platinum PSU and big case, just begging for new GPUs...

I can't imagine it's hard since it's part of the API now, right? I got the impression that it's far easier to work with than SLI/CrossFire.

I'm surprised that MS, for instance, doesn't mandate its use in their games. THEY made the API. Why can't I rock dual GPUs in Halo Infinite or Gears or Forza? What about EA? They used to support SLI; let's see some dual-GPU action in Battlefield! Especially with ray tracing and all sorts of new demanding tech, games are begging for two or even three GPUs running in sync.

I'm just saying, imagine three 16 GB 4060s running in sync. That would be something.

We could handle the heat. We handled three or even four GTX 580s back in the day; those were 350 watts apiece and didn't have the thermal transfer issues of modern hardware, so they were DUMPING out the heat. Side fans on cases worked absolute wonders.
Star Wars Jedi supports dual GPUs. Good luck finding a driver from AMD or NVIDIA. We have moved on to GPU features. The funniest thing about upscaling is that, about two years before DLSS became a thing, the TRIXX software from Sapphire had a very similar technology. It is the same thing as custom resolution in AMD software now, but nothing really is new. CrossFire would be a success today, as AMD got it to the driver level, just like HYPR-RX. If TW still supported CrossFire, I actually would run two GPUs. Imagine what two 7900 XTs would do against anything with CrossFire support. I don't trust NVIDIA with multi-GPU support, as they are the party that basically killed it. For whoever is going to comment about stuttering, that is why you went with the wiki page and bought games on that list. Games like Shadow of Mordor and Sleeping Dogs worked great with CrossFire.
#38
A Computer Guy
kapone32: What stagnation are we talking about? My current CPU trounces my last CPU, and if you think a 6800 XT is as fast as a 7900 XT, you would be wrong. My current PC is the fastest I have ever owned, and progress feels pretty good to me.


Star Wars Jedi supports dual GPUs. Good luck finding a driver from AMD or NVIDIA. We have moved on to GPU features. The funniest thing about upscaling is that, about two years before DLSS became a thing, the TRIXX software from Sapphire had a very similar technology. It is the same thing as custom resolution in AMD software now, but nothing really is new. CrossFire would be a success today, as AMD got it to the driver level, just like HYPR-RX. If TW still supported CrossFire, I actually would run two GPUs. Imagine what two 7900 XTs would do against anything with CrossFire support. I don't trust NVIDIA with multi-GPU support, as they are the party that basically killed it. For whoever is going to comment about stuttering, that is why you went with the wiki page and bought games on that list. Games like Shadow of Mordor and Sleeping Dogs worked great with CrossFire.
Sometimes I wonder if CrossFire could be useful in some way with an iGPU and a dGPU, now that AMD essentially ships an iGPU across the board.