Monday, September 26th 2022

ASML CTO Expects Post High-NA Lithography to be Prohibitively Costly

In an interview with Bits & Chips, ASML CTO Martin van den Brink said he believes we might be reaching the end of the road for current semiconductor lithography technology in the not-so-distant future. For the time being, however, ASML is executing on its roadmap: after EUV, the next step is high-NA, or high numerical aperture, and ASML is currently planning to have its first research high-NA scanner ready for a joint R&D venture with Imec in 2023. Assuming everything goes to plan, ASML then intends to deliver the first R&D machines to its customers in 2024, followed by delivery of the first high-NA volume production machines sometime in 2025. Van den Brink points out that current supply chain uncertainties could affect that timing, especially since demand for ASML's EUV machines is high and the two technologies share a lot of components.

As such, current orders are the priority and high-NA development might be put on the back burner if need be, or as Van den Brink puts it, "today's meal takes priority over tomorrow's." High-NA scanners are expected to be even more power hungry than EUV machines, pulling around two megawatts in total, with the stages accounting for the extra half-megawatt on top of the EUV light source's 1.5 megawatts. The next step in the evolution of semiconductor lithography is where ASML expects things to get problematic, as what the company is currently calling hyper-NA is expected to be prohibitively costly to manufacture and use. "If the cost of hyper-NA grows as fast as we've seen in high-NA, it will pretty much be economically unfeasible," Van den Brink said. ASML hopes to overcome the cost issues, but for now the company has a plan for the next decade, and things could very well change during that time and remove some of the obstacles currently being seen.
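For readers wondering what the "NA" actually buys, scanner resolution roughly follows the Rayleigh criterion, CD = k1 x wavelength / NA: the larger the numerical aperture, the smaller the printable feature. Below is a minimal back-of-the-envelope sketch; the 13.5 nm EUV wavelength and the 0.33/0.55 NA values are public figures, while the k1 of 0.3 and the ~0.75 hyper-NA value are assumptions for illustration only.

# Rayleigh criterion: minimum printable half-pitch CD = k1 * wavelength / NA.
# k1 = 0.3 and the ~0.75 hyper-NA value are assumptions, not ASML figures.
WAVELENGTH_NM = 13.5  # EUV light source wavelength

def min_half_pitch_nm(na: float, k1: float = 0.3) -> float:
    """Smallest printable half-pitch in nanometres for a given numerical aperture."""
    return k1 * WAVELENGTH_NM / na

for label, na in [("EUV (0.33 NA)", 0.33),
                  ("high-NA (0.55 NA)", 0.55),
                  ("hyper-NA (~0.75 NA, speculative)", 0.75)]:
    print(f"{label}: ~{min_half_pitch_nm(na):.1f} nm")

Under those assumptions that works out to roughly 12 nm, 7 nm and 5.5 nm half-pitch respectively, which is why each NA step is considered worth the enormous engineering and cost effort.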
Sources: Bits & Chips, via @dnystedt

18 Comments on ASML CTO Expects Post High-NA Lithography to be Prohibitively Costly

#1
Valantar
Two megawatts per scanner? :eek:
Posted on Reply
#2
Space Lynx
Astronaut
Valantar: Two megawatts per scanner? :eek:
I mean, there are entire coal plants that have been started up to fund ASIC mining of Bitcoin... no one seems to be gaping a mouth about that, because they are still up and running.
Posted on Reply
#3
TheLostSwede
News Editor
Valantar: Two megawatts per scanner? :eek:
Yeah, it's really quite something.
Finally, Van den Brink doesn’t want to underestimate the complexity of a system that’s larger than a typical transit bus. “It’s a monstrosity. Back in the day, a scanner needed a few hundred kilowatts. For EUV, it’s 1.5 megawatts, primarily because of the light source. We use the same light source for high-NA, but we need an additional 0.5 megawatts for the stages. We use water-cooled copper wires to power them. We push a lot of engineering.”
Posted on Reply
#4
P4-630
We use water-cooled copper wires to power them.
The future of powering next-gen GPUs....
Posted on Reply
#5
Daven
Silicon-based semiconductors have had a nice run. I expect improvements will continue into the next decade, but they will be very incremental at best. Stacking chips might extend the technology's life a little more, at which point a total paradigm shift (e.g. photonic chips) will be needed to continue technological improvements. IMHO, the mid-2030s will see quite a few C-level executives with worried looks on their faces.
Posted on Reply
#6
bug
Rubbish, I routinely build sub-nm transistors for pennies at home. But I keep losing them, being so tiny and everything :P
Posted on Reply
#7
Valantar
bug: Rubbish, I routinely build sub-nm transistors for pennies at home. But I keep losing them, being so tiny and everything :p
Have you checked your vacuum cleaner?

Also, if each transistor costs "pennies", plural, that doesn't bode well for the cost of complex ICs with billions of them. Even a single penny would arguably be orders of magnitude too expensive. You need to cut your costs, man!
CallandorWoT: I mean, there are entire coal plants that have been started up to fund ASIC mining of Bitcoin... no one seems to be gaping a mouth about that, because they are still up and running.
Somewhat true, but those are powering tens of thousands of GPUs - warehouses full of them, across huge areas. They aren't powering a single ~5*5*20m box.
Posted on Reply
#8
Space Lynx
Astronaut
Valantar: Have you checked your vacuum cleaner?

Also, if each transistor costs "pennies", plural, that doesn't bode well for the cost of complex ICs with billions of them. Even a single penny would arguably be orders of magnitude too expensive. You need to cut your costs, man!


Somewhat true, but those are powering tens of thousands of GPUs - warehouses full of them, across huge areas. They aren't powering a single ~5*5*20m box.
GPUs don't mine Bitcoin, giant ASIC machines do. So you have it wrong entirely. Also, either way it doesn't matter... it's disgusting...
Posted on Reply
#9
Prima.Vera
Why not build layered CPUs and GPUs, just like 3D NAND flash memories with their 200+ layers?
Most likely huge power consumption and extremely low yields?
Posted on Reply
#10
Valantar
CallandorWoT: GPUs don't mine Bitcoin, giant ASIC machines do. So you have it wrong entirely. Also, either way it doesn't matter... it's disgusting...
Well, for bitcoin you're right, though those coal plants are AFAIK for crypto mining more generally. Miners don't tend to be picky.

Also, ASIC miners aren't "giant", they're the size of a small suitcase at most.
Posted on Reply
#11
Space Lynx
Astronaut
Prima.Vera: Why not build layered CPUs and GPUs, just like 3D NAND flash memories with their 200+ layers?
Most likely huge power consumption and extremely low yields?
Maybe they plan to? They have to milk incremental gains to appease the shareholders. If you came out with your best possible CPU today, why would I invest in your company 4 years from now, when the market is saturated and no other CPU even matters by then?
Posted on Reply
#12
Valantar
Prima.Vera: Why not build layered CPUs and GPUs, just like 3D NAND flash memories with their 200+ layers?
Most likely huge power consumption and extremely low yields?
Thermals. NAND has next to no power consumption and heat output, while CPUs and GPUs generate tons of heat. Trapping that heat under layers of more silicon is generally what you would call a bad idea. Even if each layer consumed just a few watts you'd reach uncontrollable thermals very quickly.
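A rough back-of-the-envelope illustration of why (every number below is an assumption, not a figure from the thread): stack even "a few watts" per layer on a typical logic die and the heat flux quickly passes what a conventional cooler can remove.

# Stacking-thermals estimate; all figures are assumed, illustrative values.
DIE_AREA_CM2 = 1.0            # assumed ~100 mm^2 logic die
WATTS_PER_LAYER = 5.0         # "just a few watts" per stacked logic layer
COOLER_LIMIT_W_PER_CM2 = 100  # rough heat flux a conventional cooler copes with

for layers in (1, 4, 8, 16, 32):
    flux = layers * WATTS_PER_LAYER / DIE_AREA_CM2  # W/cm^2 leaving the stack
    verdict = "manageable" if flux <= COOLER_LIMIT_W_PER_CM2 else "beyond conventional cooling"
    print(f"{layers:>2} layers: {flux:6.1f} W/cm^2 -> {verdict}")
# And this still ignores that the inner layers have no direct path to the cooler.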
Posted on Reply
#13
bug
Valantar: Have you checked your vacuum cleaner?

Also, if each transistor costs "pennies", plural, that doesn't bode well for the cost of complex ICs with billions of them. Even a single penny would arguably be orders of magnitude too expensive. You need to cut your costs, man!


Somewhat true, but those are powering tens of thousands of GPUs - warehouses full of them, across huge areas. They aren't powering a single ~5*5*20m box.
You mean 2p x 10M transistors = 1 expensive CPU? I figure once you start making them en masse, you can halve the production cost :P
(damn, I'm on fire today)
Posted on Reply
#14
TheLostSwede
News Editor
Prima.Vera: Why not build layered CPUs and GPUs, just like 3D NAND flash memories with their 200+ layers?
Most likely huge power consumption and extremely low yields?
Because it's hard to get the same performance? Cooling is also an issue, as NAND flash runs ice cold in comparison to CPUs and GPUs.
This is why things like CoWoS are more likely to happen first.
Posted on Reply
#15
Prima.Vera
Valantar: Thermals. NAND has next to no power consumption and heat output, while CPUs and GPUs generate tons of heat. Trapping that heat under layers of more silicon is generally what you would call a bad idea. Even if each layer consumed just a few watts you'd reach uncontrollable thermals very quickly.
Liquid cooling on BOTH sides of a CPU/GPU will solve some of those problems. GPUs are already getting ridiculously expensive anyway....
Posted on Reply
#16
Valantar
Prima.Vera: Liquid cooling on BOTH sides of a CPU/GPU will solve some of those problems. GPUs are already getting ridiculously expensive anyway....
Sounds like a great concept. Maybe we could patent a liquid chip substrate? I'm sure having hundreds or thousands of wires running through a cooling liquid is completely fine!
bug: You mean 2p x 10M transistors = 1 expensive CPU? I figure once you start making them en masse, you can halve the production cost :p
(damn, I'm on fire today)
You might want to pitch this production process to Nvidia, they would no doubt be interested.
Posted on Reply
#17
mechtech
Compact Disk x CTO x k1 x Lambda/North America = pretty much be economically unfeasible

wow

who knew?!?! ;)
Posted on Reply
#18
Frank_100
P4-630: The future of powering next-gen GPUs....
:roll:
Posted on Reply