
TSMC Is Getting Ready to Launch Its First 2nm Production Line

Joined
Jun 1, 2010
Messages
417 (0.08/day)
System Name Very old, but all I've got ®
Processor So old, you don't wanna know... Really!
On the plus side, the stagnation of hardware means that what you buy should last longer. Six years out of a GPU isn't out of the realm of possibility now, and maybe we'll be seeing 8-10 years as the GPU cadence slows down further, with new generations focusing more on Maxwell-style efficiency.
Yes, and the node R&D is already paid off, and mature nodes usually have better efficiency, bigger supply, and fewer defects, so chips should cost less.
But we all know the silicon companies don't like simple and cheap stuff, as there's nowhere to stuff their 60-70% margins, especially if the foundries are flooded with defect-free chips. They won't sell more; they'll still manufacture scarcity.
Another shortage here, another accident there...
 
Joined
Dec 12, 2012
Messages
780 (0.18/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
On the plus side, the stagnation of hardware means that what you buy should last longer. Six years out of a GPU isn't out of the realm of possibility now, and maybe we'll be seeing 8-10 years as the GPU cadence slows down further, with new generations focusing more on Maxwell-style efficiency.

I don't know if stagnation is good when you have such gigantic differences within a generation. The 5090 will most likely be four times faster than the 5060. How is ray tracing ever supposed to become popular when the entry-level cards can only run it theoretically?

I never had a problem upgrading GPUs every generation when prices were steady. I always got a big improvement, and the cost wasn't very high after selling the old GPU. But these days you have to pay more to get a small improvement, which is also a result of diminishing returns in graphics technology. You need the latest card to run a brand new game that looks marginally better than games from 5 years ago.
Hardware is expensive to R&D and manufacture, games are expensive to develop (and take a long time); it's a slippery slope. Something needs to change before the gaming industry crashes. But I guess these companies don't really care about gaming; the entire focus is on professional markets, which are eating up all these new chips no matter the price.
 
Joined
Dec 16, 2017
Messages
2,950 (1.14/day)
System Name System V
Processor AMD Ryzen 5 3600
Motherboard Asus Prime X570-P
Cooling Cooler Master Hyper 212 // a bunch of 120 mm Xigmatek 1500 RPM fans (2 ins, 3 outs)
Memory 2x8GB Ballistix Sport LT 3200 MHz (BLS8G4D32AESCK.M8FE) (CL16-18-18-36)
Video Card(s) Gigabyte AORUS Radeon RX 580 8 GB
Storage SHFS37A240G / DT01ACA200 / ST10000VN0008 / ST8000VN004 / SA400S37960G / SNV21000G / NM620 2TB
Display(s) LG 22MP55 IPS Display
Case NZXT Source 210
Audio Device(s) Logitech G430 Headset
Power Supply Corsair CX650M
Software Whatever build of Windows 11 is being served in Canary channel at the time.
Benchmark Scores Corona 1.3: 3120620 r/s Cinebench R20: 3355 FireStrike: 12490 TimeSpy: 4624
I can't imagine it's hard since it's part of the API now, right? I got the impression that it's far easier to work with than SLI/CrossFire.
Most likely, the development cost doesn't make sense compared to the number of people who would actually use mGPU.

Not sure it's that much easier either, since it seems like it's low-level code?

You'd also have to ask the big game engine makers (UE5, Unity, idTech, Godot and such) whether they actually provide mGPU support.
 
Joined
Dec 12, 2016
Messages
1,998 (0.68/day)
2 nm is just a marketing name. There is nothing that small inside a chip. The gate pitch is actually around 45 nm, while the smallest metal pitch is ~20 nm.
I'm not sure if our fellow tech enthusiasts or tech journalists will ever drop the 'nm' from these articles. Even the fabs don't use 'nm', calling their nodes names like 18A, SF2, N2P, etc. But thank you for continuing to push the real feature sizes.

Edit: Here is a great article if you want to know everything about chip sizes.

 
Joined
May 14, 2024
Messages
40 (0.17/day)
I'm not sure if our fellow tech enthusiasts or tech journalists will ever drop the 'nm' from these articles. Even the fabs don't use 'nm', calling their nodes names like 18A, SF2, N2P, etc. But thank you for continuing to push the real feature sizes.

Edit: Here is a great article if you want to know everything about chip sizes.

It is an X nm process (PROCESS). This means that, on that process, there are roughly this many transistors per square millimeter:
7 nm process: ~100 MTr/mm²
5 nm process: ~130 MTr/mm²
3 nm process: ~200 MTr/mm²

Actually, my brain can't process such numbers: 200,000,000 transistors per square millimeter.
 
Joined
Jan 3, 2021
Messages
3,665 (2.50/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
Even the fabs don’t use ‘nm’ calling nodes names
True, they have become a bit shy, but TSMC for example still tells us that variants of N5 and N4 nodes belong in the 5 nm family.

Actually, my brain can't process such numbers, 200,000,000 transistors per square millimeter
Imagine 200 transistors per square micrometer. 200 is easy. A micrometer is easy too, right?
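If it helps to make that concrete: a million transistors per square millimeter is numerically the same as one transistor per square micrometer, so the densities quoted a few posts up convert directly. A throwaway sketch (the figures are the rough ones quoted above, not official specs):

```cpp
#include <cstdio>

int main() {
    // Rough logic densities quoted above, in million transistors per mm^2.
    const double mtr_per_mm2[] = {100.0, 130.0, 200.0};   // "7 nm", "5 nm", "3 nm"
    const char*  node[]        = {"7 nm", "5 nm", "3 nm"};

    for (int i = 0; i < 3; ++i) {
        // 1 mm^2 = 1,000,000 um^2, so MTr/mm^2 maps 1:1 to transistors/um^2.
        double tr_per_um2 = mtr_per_mm2[i] * 1e6 / 1e6;
        std::printf("%s: ~%.0f transistors per square micrometer\n",
                    node[i], tr_per_um2);
    }
    return 0;
}
```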
 
Joined
May 14, 2024
Messages
40 (0.17/day)
True, they have become a bit shy, but TSMC for example still tells us that variants of N5 and N4 nodes belong in the 5 nm family.


Imagine 200 transistors per square micrometer. 200 is easy. A micrometer is easy too, right?
Yes, that's definitely "True". Except none of them said that the transistor is 5 nm in size; it's a 5 nm process/technology.

P.S. But again, I have a feeling I'm going "off topic".

Imagine 200 transistors per square micrometer. 200 is easy. A micrometer is easy too, right?
If it's that easy, go ahead and make 200 million transistors per square millimeter.
 
Joined
Sep 17, 2014
Messages
22,800 (6.06/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
The thing that gets me is... we have that technology! It's called DX12 multi-GPU. It works across vendors, and as Ashes of the Singularity showed, it doesn't have the latency or driver issues of SLI/CrossFire of old.

Why this tech has just been sidelined is beyond me. The simplest answer is that multiple smaller dies would be more efficient as node shrinks stop being possible.
What you need from the API, though, is some sort of abstraction layer that eliminates the dev work needed to use mGPU. Or you need that logic in each GPU, which I think is far more plausible: the hardware itself needs to adjust for the best results in terms of latency. There's also inevitably going to be some scaling penalty.

It's been tried... even before DX12. I can't remember the name, but there was a thing that wanted to use your iGPU alongside your dGPU.
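For what it's worth, the explicit multi-adapter path really is part of the D3D12 API, but everything hard is left to the application. A minimal sketch of just the enumeration side (error handling omitted; splitting work, cross-adapter heaps, and fence synchronization are where the real dev cost lives):

```cpp
// Minimal D3D12 "explicit multi-adapter" enumeration sketch.
// Link with d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory2(0, IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;   // skip WARP

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            std::wprintf(L"GPU %u: %ls\n", i, desc.Description);
            devices.push_back(device);   // one independent device per physical GPU
        }
    }
    // From here nothing is automatic: the app must split frames or workloads,
    // copy resources across adapters, and synchronize with shared fences --
    // which is exactly the dev work engines rarely take on.
    return 0;
}
```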
 
Joined
Dec 16, 2017
Messages
2,950 (1.14/day)
System Name System V
Processor AMD Ryzen 5 3600
Motherboard Asus Prime X570-P
Cooling Cooler Master Hyper 212 // a bunch of 120 mm Xigmatek 1500 RPM fans (2 ins, 3 outs)
Memory 2x8GB Ballistix Sport LT 3200 MHz (BLS8G4D32AESCK.M8FE) (CL16-18-18-36)
Video Card(s) Gigabyte AORUS Radeon RX 580 8 GB
Storage SHFS37A240G / DT01ACA200 / ST10000VN0008 / ST8000VN004 / SA400S37960G / SNV21000G / NM620 2TB
Display(s) LG 22MP55 IPS Display
Case NZXT Source 210
Audio Device(s) Logitech G430 Headset
Power Supply Corsair CX650M
Software Whatever build of Windows 11 is being served in Canary channel at the time.
Benchmark Scores Corona 1.3: 3120620 r/s Cinebench R20: 3355 FireStrike: 12490 TimeSpy: 4624
Joined
Jan 27, 2024
Messages
394 (1.14/day)
Processor Ryzen AI
Motherboard MSI
Cooling Cool
Memory Fast
Video Card(s) Matrox Ultra high quality | Radeon
Storage Chinese
Display(s) 4K
Case Transparent left side window
Audio Device(s) Yes
Power Supply Chinese
Mouse Chinese
Keyboard Chinese
VR HMD No
Software Android | Yandex
Benchmark Scores Yes
15% more transistors? That's terrible. Aren't they supposed to be charging 50% more for this node compared to 3 nm ($30k vs. $20k)?

Things are not looking good.

Author: it uses 24-35% less power, can run 15% faster at the same power level, and can fit 15% more transistors in the same space compared to the 3 nm chips

This means it is a "plus" process node. Not a full-node shrink, not a half-node shrink, but less than even a quarter-node shrink.

The good news is that if you buy something decent today, such as the Ryzen AI 9 and Radeon RX 7600 or Radeon RX 9070, you will be fine never upgrading, since you will never get an upgrade-worthy performance jump from the next-generation CPUs and GPUs.

That said, 3 nm Radeons/Ryzens are a 2027 thing; 2 nm Radeons/Ryzens are more toward 2030.
 
Joined
Feb 11, 2020
Messages
255 (0.14/day)
TSMC's chairman C.C. Wei says there's more demand for these 2 nm chips than there was for the 3 nm. This increased "appetite" for 2 nm chips is likely due to the significant improvements this technology brings: it uses 24-35% less power, can run 15% faster at the same power level, and can fit 15% more transistors in the same space compared to the 3 nm chips. Apple will be the first company to use these chips, followed by other major tech companies like MediaTek, Qualcomm, Intel, NVIDIA, AMD, and Broadcom.
We've got some conflicting info now. Rumours have Apple, NVIDIA and Qualcomm all pulling out. Although, if there's anything to it at all, I suspect it's only grumbling about price, i.e. they haven't really pulled out.
 
Joined
Jan 3, 2021
Messages
3,665 (2.50/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
This means it is a "plus" process node. Not a full-node shrink, not a half-node shrink, but rather a less and worse than a quarter-node shrink.
No. This means that the era of full node shrinks is over. My law, "Just add thirty", applies here, even if very roughly. Going from N3 to N2 is more like going from 33 nm to 32 nm.
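Rough numbers, using only the figures from the article: a 15% density bump implies only about a 7% linear shrink if it came purely from geometric scaling, versus the ~0.7x linear (~2x density) of a classic full-node step:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // N3 -> N2 is quoted at ~15% higher transistor density.
    const double density_gain = 1.15;

    // If that gain were pure geometric scaling, linear features would shrink by:
    double linear_scale = 1.0 / std::sqrt(density_gain);                // ~0.93x
    std::printf("implied linear scale: %.2fx\n", linear_scale);

    // A classic full-node shrink is ~0.7x linear, i.e. roughly 2x density:
    std::printf("full-node density gain: %.2fx\n", 1.0 / (0.7 * 0.7));  // ~2.04x
    return 0;
}
```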
 
Joined
Jun 2, 2017
Messages
9,475 (3.41/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
On the plus side, the stagnation of hardware means that what you buy should last longer. Six years out of a GPU isn't out of the realm of possibility now, and maybe we'll be seeing 8-10 years as the GPU cadence slows down further, with new generations focusing more on Maxwell-style efficiency.
What stagnation are we talking about? My current CPU trounces my last CPU and if you think a 6800XT is as fast as a 7900XT, you would be wrong. My current PC is the fastest I have ever owned and progress feels pretty good to me.

I still have my 1200 W Platinum PSU and big case, just begging for new GPUs...

I can't imagine it's hard since it's part of the API now, right? I got the impression that it's far easier to work with than SLI/CrossFire.

I'm surprised that MS, for instance, doesn't mandate its use in their games. THEY made the API. Why can't I rock dual GPUs in Halo Infinite or Gears or Forza? What about EA? They used to support SLI; let's see some dual-GPU action in Battlefield! Especially with ray tracing and all sorts of new demanding tech, games are begging for two or even three GPUs running in sync.

I'm just saying, imagine three 16GB 4060s running in sync. That would be something.

We could handle the heat. We handled three or even four GTX 580s back in the day; those were 350 watts apiece and didn't have the thermal-transfer issues of modern hardware, so they were DUMPING out heat. Side fans on cases worked absolute wonders.
Star Wars Jedi supports dual GPUs. Good luck finding a driver from AMD or NVIDIA, though. We have moved on to GPU features. The funniest thing about upscaling is that about two years before DLSS became a thing, TRIXX software from Sapphire had a very similar technology. It is the same thing as custom resolution in AMD software now; nothing really is new. CrossFire would be a success today, as AMD got it to the driver level, just like HYPR-RX. If TW still supported CrossFire, I actually would run two GPUs. Imagine what two 7900 XTs with CrossFire support would do against anything. I don't trust NVIDIA with multi-GPU support, as they are the party that basically killed it. For whoever is going to comment about stuttering: that is why you went by the wiki page and bought games on that list. Games like Shadow of Mordor worked great with CrossFire, as did Sleeping Dogs.
 
Joined
Jul 30, 2019
Messages
3,373 (1.70/day)
System Name Still not a thread ripper but pretty good.
Processor Ryzen 9 7950x, Thermal Grizzly AM5 Offset Mounting Kit, Thermal Grizzly Extreme Paste
Motherboard ASRock B650 LiveMixer (BIOS/UEFI version P3.08, AGESA 1.2.0.2)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, D5 PWM, EK-CoolStream PE 360, XSPC TX360
Memory Micron DDR5-5600 ECC Unbuffered Memory (2 sticks, 64GB, MTC20C2085S1EC56BD1) + JONSBO NF-1
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 4TB 980 PRO, 2 x Optane 905p 1.5TB (striped), AMD Radeon RAMDisk
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Audio Device(s) Corsair Commander Pro for Fans, RGB, & Temp Sensors (x4)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores RIP Ryzen 9 5950x, ASRock X570 Taichi (v1.06), 128GB Micron DDR4-3200 ECC UDIMM (18ASF4G72AZ-3G2F1)
What stagnation are we talking about? My current CPU trounces my last CPU and if you think a 6800XT is as fast as a 7900XT, you would be wrong. My current PC is the fastest I have ever owned and progress feels pretty good to me.


Star Wars Jedi supports dual GPUs. Good luck finding a driver from AMD or NVIDIA, though. We have moved on to GPU features. The funniest thing about upscaling is that about two years before DLSS became a thing, TRIXX software from Sapphire had a very similar technology. It is the same thing as custom resolution in AMD software now; nothing really is new. CrossFire would be a success today, as AMD got it to the driver level, just like HYPR-RX. If TW still supported CrossFire, I actually would run two GPUs. Imagine what two 7900 XTs with CrossFire support would do against anything. I don't trust NVIDIA with multi-GPU support, as they are the party that basically killed it. For whoever is going to comment about stuttering: that is why you went by the wiki page and bought games on that list. Games like Shadow of Mordor worked great with CrossFire, as did Sleeping Dogs.
Sometimes I wonder if CrossFire could be useful in some form with an iGPU and dGPU, now that AMD CPUs essentially come with an iGPU across the board.
 