
Artificial Intelligence Helped Tape Out More than 200 Chips

AleksandarK

News Editor
Staff member
In its recent fiscal year 2023 second-quarter earnings call, Synopsys shared interesting information about chip developers' recent adoption of artificial intelligence. According to the call, more than 200 chips have been taped out using the Synopsys DSO.ai place-and-route (PnR) tool, making it a commercially proven AI chip design tool. DSO.ai uses AI to optimize the placement and routing of a chip's transistors so that the layout is compact and efficient while meeting the strict timing constraints of a modern design. According to Aart J. de Geus, CEO of Synopsys, "By the end of 2022, adoption, including 9 of the top 10 semiconductor vendors have moved forward at great speed with 100 AI-driven commercial tape-outs. Today, the tally is well over 200 and continues to increase at a very fast clip as the industry broadly adopts AI for design from Synopsys."
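To give a rough idea of what a place-and-route optimizer is chewing on, here is a minimal, purely illustrative sketch. Synopsys describes DSO.ai as searching the design space with machine learning, and its internals are proprietary, so this toy instead uses plain simulated annealing on an invented six-cell netlist (the grid size, the nets and the cost function are all assumptions for the example), minimizing total wirelength as a crude stand-in for timing and power:

```python
import math
import random

# Toy stand-in for the placement step of place-and-route (PnR):
# put cells on a grid so that connected cells end up close together.
# Real tools like DSO.ai search a vastly larger space with machine
# learning; this sketch just uses simulated annealing for illustration.

GRID = 8  # 8x8 placement grid (hypothetical)
# Netlist: pairs of cell IDs that are wired together (hypothetical)
NETS = [(0, 1), (1, 2), (2, 3), (0, 3), (3, 4), (4, 5), (1, 5)]
NUM_CELLS = 6

def wirelength(pos):
    """Total Manhattan distance over all nets -- the cost to minimize.
    Shorter wires roughly mean less delay and less power."""
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in NETS)

def anneal(steps=20000, t0=5.0):
    # Random initial placement: each cell gets a distinct grid slot.
    slots = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                          NUM_CELLS)
    pos = {c: slots[c] for c in range(NUM_CELLS)}
    cost = wirelength(pos)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3   # cooling schedule
        a, b = random.sample(range(NUM_CELLS), 2)
        pos[a], pos[b] = pos[b], pos[a]      # propose: swap two cells
        new_cost = wirelength(pos)
        # Accept improvements always; accept regressions with a
        # probability that shrinks as the "temperature" drops.
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            pos[a], pos[b] = pos[b], pos[a]  # revert the swap
    return pos, cost

placement, final_cost = anneal()
print("final wirelength:", final_cost)
```

A production tool evaluates placements against real timing, congestion and power models across millions of cells; the point here is only the shape of the search problem.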

This is an interesting data point, and it suggests that customers are seeing the benefits of AI-assisted tools like DSO.ai. The company is not stopping there, however, and a whole suite of tools is getting an AI makeover. "We unveiled the industry's first full-stack AI-driven EDA suite, Synopsys.ai," noted the CEO, adding: "Specifically, in parallel to second-generation advances in DSO.ai we announced VSO.ai, which stands for verification space optimization; and TSO.ai, test space optimization. In addition, we are extending AI across the design stack to include analog design and manufacturing." Synopsys' partners in this effort include NVIDIA, TSMC, MediaTek, Renesas, and IBM Research, all of which have used AI-assisted tools in their chip design work. A much wider range of industry players is expected to adopt these tools as chip design costs continue to soar with every node shrink. With a future 3 nm GPU estimated to cost $1.5 billion to design, and software accounting for roughly 40% of that, Synopsys plans to take a cut of that share.
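The quote only names the tools, so as a hedged intuition for what "verification space optimization" or "test space optimization" might mean in practice, here is a classic greedy set-cover heuristic that trims a regression suite down to a few tests that still hit every coverage point. The test names and coverage sets are invented for illustration; the actual VSO.ai/TSO.ai algorithms are not public:

```python
# Rough intuition for "test space optimization": pick a small subset of
# tests that still hits all coverage points. This is the textbook greedy
# set-cover heuristic; the real TSO.ai/VSO.ai methods are proprietary
# and almost certainly far more sophisticated.

# Hypothetical tests and the coverage points each one hits.
tests = {
    "t_smoke":     {"alu_add", "mem_read"},
    "t_alu_full":  {"alu_add", "alu_sub", "alu_overflow", "flags_zero", "flags_carry"},
    "t_alu_basic": {"alu_add", "alu_sub"},
    "t_mem_rw":    {"mem_read", "mem_write"},
    "t_branch":    {"branch_taken", "branch_not_taken"},
    "t_regr_mix":  {"alu_add", "mem_read", "branch_taken"},
}

def select_tests(tests):
    remaining = set().union(*tests.values())  # all coverage points
    chosen = []
    while remaining:
        # Greedily take the test covering the most still-uncovered points.
        best = max(tests, key=lambda t: len(tests[t] & remaining))
        if not tests[best] & remaining:
            break  # guard: leftover points no test can cover
        chosen.append(best)
        remaining -= tests[best]
    return chosen

print(select_tests(tests))  # picks 3 of the 6 tests here
```

Greedy set cover is only the baseline for this kind of suite reduction; an ML-driven tool would additionally learn which tests are likely to expose new coverage before running them.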



View at TechPowerUp Main Site | Source
 
As interesting as it sounds, maybe I'm just jumping the gun here, but to quote Detective Spooner:

"Robots building robots? Now that's just stupid."
 
As interesting as it sounds, maybe I'm just jumping the gun here, but to quote Detective Spooner:

"Robots building robots? Now that's just stupid."

People have been using computers to build computers for a long time. The pandemic-related semiconductor shortage also affected the chipmaking industry.

You have to go back to the Seventies to find computer scientists designing computers with slide rules, pencils and notepads: visionaries like Gene Amdahl (RIP).

Newer machine learning algorithms are better optimized for processing the large number of calculations required by today's semiconductor designs, but there's nothing new about using computer chips to design computer chips.

You can even invest in companies that create products to design and manufacture semiconductors, such as Cadence Design, Applied Materials, ASML, etc.
 

Fourstaff

Moderator
Staff member
I wonder if the prevalence of this design tool will cause the performance of all chips to converge to similar perf/W or perf/mm². It would be interesting to compare this against a competing tool.
 

Deleted member 185088

Guest
People have been using computers to build computers for a long time. The pandemic-related semiconductor shortage also affected the chipmaking industry.

You have to go back to the Seventies to find computer scientists designing computers with slide rules, pencils and notepads: visionaries like Gene Amdahl (RIP).

Newer machine learning algorithms are better optimized for processing the large number of calculations required by today's semiconductor designs, but there's nothing new about using computer chips to design computer chips.

You can even invest in companies that create products to design and manufacture semiconductors, such as Cadence Design, Applied Materials, ASML, etc.
What's interesting is that Concorde was designed in the '50s and '60s, the Blackbird not much later, and the Soviet and American space programmes no later than that, yet their achievements remain unmatched. Now, with better knowledge (though limited by the stupidity of copyright systems) and better technology, we don't innovate as much.
 
I wonder if the prevalence of this design tool will cause the performance of all chips to converge to similar perf/W or perf/mm². It would be interesting to compare this against a competing tool.
You still have to design the chip; this sounds like it's mostly there to aid in optimizing the traces, which is just one component of VLSI design.
 
What's interesting is that Concorde was designed in the '50s and '60s, the Blackbird not much later, and the Soviet and American space programmes no later than that, yet their achievements remain unmatched. Now, with better knowledge (though limited by the stupidity of copyright systems) and better technology, we don't innovate as much.
If IP is not protected with copyright law, then where is the incentive to innovate?
 

Space Lynx

Astronaut
I wonder if the prevalence of this design tool will cause the performance of all chips to converge to similar perf/W or perf/mm². It would be interesting to compare this against a competing tool.

I don't think anything will change, regardless of how good this AI is.

Capitalism has built-in slowing factors to milk maximum profit. Intel and AMD are a duopoly and will continue to throw punches at each other that only do light damage. They probably could have done heavy damage one way or another by releasing some node way ahead of schedule, but that means fewer sales in the long term, because eventually the node gets so small they no longer have anywhere to go.
 

Fourstaff

Moderator
Staff member
You still have to design the chip; this sounds like it's mostly there to aid in optimizing the traces, which is just one component of VLSI design.
Sure, but then we will increasingly rely on AI to optimise chip design.

I don't think anything will change, regardless of how good this AI is.

Capitalism has built-in slowing factors to milk maximum profit. Intel and AMD are a duopoly and will continue to throw punches at each other that only do light damage. They probably could have done heavy damage one way or another by releasing some node way ahead of schedule, but that means fewer sales in the long term, because eventually the node gets so small they no longer have anywhere to go.
Ten years ago a lot of people were upgrading their phones every year or two; nowadays people stick with their phone for four years or more. If AMD/Intel/NVIDIA don't improve, they will find that no one will buy their newest and greatest.
 
What's interesting is that Concorde was designed in the '50s and '60s, the Blackbird not much later, and the Soviet and American space programmes no later than that, yet their achievements remain unmatched. Now, with better knowledge (though limited by the stupidity of copyright systems) and better technology, we don't innovate as much.
Low-hanging fruit is picked first.

Nations have been working on scramjet propulsion for decades, and there is also the issue of having the flesh bags survive speeds of Mach 10, never mind having them do maneuvers that generate g-forces that will make blood squirt out of your toes.
 
Surely no one seriously thinks humans can design the layout of a multi-billion-transistor chip without help; AI is just the latest form of assistance. I read a comment from one chip designer several years ago who said that humans cannot possibly comprehend the full layout, and that it would be impossible for them to decide how to connect so many transistors.
 
Low-hanging fruit is picked first.

Nations have been working on scramjet propulsion for decades, and there is also the issue of having the flesh bags survive speeds of Mach 10, never mind having them do maneuvers that generate g-forces that will make blood squirt out of your toes.
Speed is irrelevant to survivability; only acceleration matters.
 
I wonder if the prevalence of this design tool will cause the performance of all chips to converge to similar perf/W or perf/mm². It would be interesting to compare this against a competing tool.

You can run thousands of simulations to find the best perf/mm² before you actually tape the chip out. And of course, a well-trained AI could outscore a team of 150 engineers if done properly.
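As a hedged sketch of that "run thousands of simulations and keep the best" idea, here is a toy random search over made-up design parameters scoring a made-up perf/mm² model. The parameter names and the cost formula are pure assumptions; a real flow would invoke synthesis and place-and-route instead of an analytic expression:

```python
import random

# Toy version of "run thousands of simulations and keep the best
# perf/mm^2". The analytic model below is completely invented;
# a real flow would run synthesis/PnR and measure the result.

def simulate(freq_ghz, cores, cache_mb):
    """Hypothetical model: estimate performance and die area."""
    perf = freq_ghz * cores * (1 + 0.05 * cache_mb)               # arbitrary units
    area_mm2 = 10 + 4 * cores + 1.5 * cache_mb + 2 * freq_ghz**2  # arbitrary units
    return perf / area_mm2

best, best_cfg = 0.0, None
for _ in range(10_000):  # thousands of cheap "simulations"
    cfg = (round(random.uniform(2.0, 5.0), 2),   # frequency in GHz
           random.randint(2, 16),                # core count
           random.choice([4, 8, 16, 32]))        # cache size in MB
    score = simulate(*cfg)
    if score > best:
        best, best_cfg = score, cfg

print(f"best perf/mm^2 {best:.3f} at freq/cores/cache = {best_cfg}")
```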

The side effect is that you might start to get CPUs with functions nobody understands, or that go beyond our understanding.

It's dangerous.
 
What's interesting is that Concorde was designed in the '50s and '60s, the Blackbird not much later, and the Soviet and American space programmes no later than that, yet their achievements remain unmatched. Now, with better knowledge (though limited by the stupidity of copyright systems) and better technology, we don't innovate as much.
I think a really big part of that is just the fact that once you've discovered something, there is one thing less left to discover.

Things are not infinite; diminishing returns happen. You see it everywhere. And there is even a point where the investment exceeds the benefits: look at fossil fuels and the climate.

What we're really good at is incrementally expanding our toolset to get a step further, but we're also seeing that every step forward is becoming a harder, bigger one to make. Look at chip nodes! It's more visible there than anywhere else. The problem behind that is that we've built societies, economies and systems on the idea that we can keep going forward, ever bigger. At the same time, it's only natural that we want a complete understanding of all things, being who we are.

I also think that we don't innovate less; we innovate more, but the innovations themselves are more often of a dubious, hit-or-miss nature, and the factors that decide their fate aren't always the merits of the innovation itself, but more the environment, mindset, time, etc. around it. Even the tremendous speed at which all these small innovations follow one another is a factor we can't deny; there simply isn't enough time to consume all the info. A great example of that is the LHC. No innovation, you say? This project puts the finger on it perfectly: we're trying to find the last puzzle pieces, and it's damn hard.

 

Deleted member 185088

Guest
I think a really big part of that is just the fact that once you've discovered something, there is one thing less left to discover.

Things are not infinite; diminishing returns happen. You see it everywhere. And there is even a point where the investment exceeds the benefits: look at fossil fuels and the climate.

What we're really good at is incrementally expanding our toolset to get a step further, but we're also seeing that every step forward is becoming a harder, bigger one to make. Look at chip nodes! It's more visible there than anywhere else. The problem behind that is that we've built societies, economies and systems on the idea that we can keep going forward, ever bigger. At the same time, it's only natural that we want a complete understanding of all things, being who we are.

I also think that we don't innovate less; we innovate more, but the innovations themselves are more often of a dubious, hit-or-miss nature, and the factors that decide their fate aren't always the merits of the innovation itself, but more the environment, mindset, time, etc. around it. Even the tremendous speed at which all these small innovations follow one another is a factor we can't deny; there simply isn't enough time to consume all the info. A great example of that is the LHC. No innovation, you say? This project puts the finger on it perfectly: we're trying to find the last puzzle pieces, and it's damn hard.

What I mean is that we have the means, but we don't use them because we rely too much on the private sector, which only cares about money for its shareholders. The LHC is a good example because it doesn't follow the inefficient private system but rather international collaboration, the same as Concorde before it.
 