News Posts matching #Research

Artificial Intelligence Helped Tape Out More than 200 Chips

In its recent fiscal year 2023 second-quarter earnings call, Synopsys shared interesting information about chip developers' recent adoption of artificial intelligence. As the call notes, more than 200 chips have been taped out using the Synopsys DSO.ai place-and-route (PnR) tool, making it a commercially proven AI chip design tool. DSO.ai uses AI to optimize the placement and routing of a chip's standard cells so that the layout is compact and efficient while meeting the strict timing constraints of modern chips. According to Aart J. de Geus, CEO of Synopsys, "By the end of 2022, adoption, including 9 of the top 10 semiconductor vendors have moved forward at great speed with 100 AI-driven commercial tape-outs. Today, the tally is well over 200 and continues to increase at a very fast clip as the industry broadly adopts AI for design from Synopsys."

This suggests that customers are seeing real benefits from AI-assisted tools like DSO.ai. However, the company is not stopping there, and a whole suite of tools is getting an AI makeover. "We unveiled the industry's first full-stack AI-driven EDA suite, Synopsys.ai," noted the CEO, adding: "Specifically, in parallel to second-generation advances in DSO.ai we announced VSO.ai, which stands for verification space optimization; and TSO.ai, test space optimization. In addition, we are extending AI across the design stack to include analog design and manufacturing." Synopsys' partners in this effort include NVIDIA, TSMC, MediaTek, Renesas, and IBM Research, all of which have used AI-assisted tools in their chip design work. A much wider range of industry players is expected to adopt these tools as chip design costs continue to soar with each node shrink. With a future 3 nm GPU design estimated to cost $1.5 billion, about 40% of it attributable to software, Synopsys plans to capture a share of that spending.
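The kind of search a place-and-route optimizer automates can be illustrated with a toy example. The sketch below is purely illustrative and is not Synopsys's algorithm: it uses simulated annealing to swap cell positions on a grid, minimizing total half-perimeter wirelength, which is the classic cost function that P&R tools (AI-driven or not) try to reduce.

```python
import math
import random

def wirelength(placement, nets):
    # Total half-perimeter wirelength (HPWL) over all nets.
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_placement(cells, nets, grid, steps=5000, t0=2.0, seed=0):
    rng = random.Random(seed)
    # Start from a random legal placement: one cell per grid slot.
    slots = [(x, y) for x in range(grid) for y in range(grid)]
    rng.shuffle(slots)
    placement = dict(zip(cells, slots))
    cost = wirelength(placement, nets)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6  # linear cooling schedule
        a, b = rng.sample(cells, 2)
        placement[a], placement[b] = placement[b], placement[a]
        new_cost = wirelength(placement, nets)
        # Accept improvements always; accept regressions with
        # Boltzmann probability so the search can escape local minima.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            placement[a], placement[b] = placement[b], placement[a]
    return placement, cost
```

Real tools search a vastly larger space (timing, congestion, power, not just wirelength); what DSO.ai adds on top is using AI to steer this kind of search automatically instead of relying on engineer-tuned heuristics.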

IonQ Aria Now Available on Amazon Braket Cloud Quantum Computing Service

Today at Commercialising Quantum Global 2023, IonQ (NYSE: IONQ), an industry leader in quantum computing, announced the availability of IonQ Aria on Amazon Braket, AWS's quantum computing service. This expands upon IonQ's existing presence on Amazon Braket, following the debut of IonQ's Harmony system on the platform in 2020. With broader access to IonQ Aria, IonQ's flagship system with 25 algorithmic qubits (#AQ)—more than 65,000 times more powerful than IonQ Harmony—users can now explore, design, and run more complex quantum algorithms to tackle some of the most challenging problems of today.
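The "more than 65,000 times more powerful" figure follows from how IonQ defines algorithmic qubits: the usable computational space scales roughly as 2^#AQ. Assuming Harmony's rating of #AQ 9 (a value taken here for illustration; the article only states Aria's #AQ of 25), the gap of 16 algorithmic qubits works out as:

```python
# Algorithmic qubits (#AQ) gauge the usable circuit width/depth;
# the reachable computational space grows as 2 ** AQ.
aq_harmony = 9   # assumed #AQ rating for IonQ Harmony
aq_aria = 25     # IonQ Aria's stated #AQ

power_ratio = 2 ** (aq_aria - aq_harmony)
print(power_ratio)  # 65536, i.e. "more than 65,000 times"
```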

"We are excited for IonQ Aria to become available on Amazon Braket, as we expand the ways users can access our leading quantum computer on the most broadly adopted cloud service provider," said Peter Chapman, CEO and President, IonQ. "Amazon Braket has been instrumental in commercializing quantum, and we look forward to seeing what new approaches will come from the brightest, most curious, minds in the space."

Sharp Working on LCD Screen for Next Generation Gaming Console

Sharp Corporation's Chief Executive Officer, Robert Wu, has revealed that the Japanese company's display division is involved in the research and development of an LCD screen intended for an "upcoming" gaming console. This small detail emerged during an earnings call with analysts (on May 11), and Wu was careful not to name the key client: "I can't comment on any details regarding specific customers. But as to a new gaming console, we've been involved in its R&D stage." Bloomberg reports that information about a gaming device was removed from the presentation material soon after the conclusion of the call. Sharp expects to launch pilot LCD-panel production lines by the end of this fiscal year.

Given the secrecy surrounding this business partnership and the projected time frame for the new line of gaming-oriented LCD screens, games industry analysts point to Nintendo as the "unnamed" client. Sharp is the current supplier of the 6.2-inch LCD panel featured on the standard Nintendo Switch (2017) handheld console. Samsung provides the 7-inch screen fitted to the premium Switch OLED (2021), and InnoLux makes the 5.5-inch LCD display sported by the entry-level Switch Lite (2019). A next-generation Switch is rumored to be deep into development but is not expected to arrive this year - Nintendo's CEO, Shuntaro Furukawa, informed investors (last week) that new hardware is due at some point after April 2024.

AMD faulTPM Exploit Targets Zen 2 and Zen 3 Processors

Researchers at the Technical University of Berlin have published a paper called "faulTPM: Exposing AMD fTPMs' Deepest Secrets," showing that AMD's firmware-based Trusted Platform Module (fTPM) is susceptible to a new exploit targeting Zen 2 and Zen 3 processors. The faulTPM attack exploits the AMD Secure Processor's (SP) vulnerability to voltage fault injection attacks. This allows an attacker to extract a chip-unique secret from the targeted CPU, which is then used to derive the storage and integrity keys protecting the fTPM's non-volatile data stored on the BIOS flash chip. The attack consists of a manual parameter determination phase and a brute-force search for a final delay parameter. The first step requires around 30 minutes of manual attention, but it can potentially be automated. The second phase consists of repeated attack attempts searching for the last remaining parameter and executing the attack's payload.

Once these steps are completed, the attacker can extract any cryptographic material stored or sealed by the fTPM, regardless of authentication mechanisms such as Platform Configuration Register (PCR) validation or passphrases with anti-hammering protection. Notably, BitLocker can rely on the TPM as its security anchor, so faulTPM compromises such systems. The researchers demonstrated the attack on Zen 2 and Zen 3 CPUs; Zen 4 was not covered in the paper. The attack requires several hours of physical access, so it cannot be exploited remotely. Below, you can see the $200 setup used for this attack and an illustration of the physical connections required.
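The second, brute-force phase can be sketched abstractly. Everything below is hypothetical scaffolding, not code from the paper: a real attack drives fault-injection hardware against the target board, whereas here `try_glitch` is a stub that merely reports whether the payload executed for a given candidate delay.

```python
import random

def search_glitch_delay(try_glitch, delay_range, attempts_per_delay=10):
    """Brute-force the final delay parameter of a voltage-fault injection.

    `try_glitch(delay)` stands in for firing the glitcher once and
    reporting whether the payload ran; faults land unreliably, so each
    candidate delay is retried several times before moving on.
    """
    for delay in delay_range:
        for _ in range(attempts_per_delay):
            if try_glitch(delay):  # payload executed: parameter found
                return delay
    return None  # no working delay in the searched range

# Illustrative stub: pretend the fault only lands at delay 1337,
# and even then only succeeds intermittently.
rng = random.Random(42)

def fake_glitcher(delay):
    return delay == 1337 and rng.random() < 0.5

found = search_glitch_delay(fake_glitcher, range(1300, 1400))
```

The retry loop is why this phase takes hours: each candidate needs multiple physical glitch attempts, and the search space left over after the manual phase can still span thousands of delay values.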

NVIDIA DGX H100 Systems are Now Shipping

Customers from Japan to Ecuador and Sweden are using NVIDIA DGX H100 systems like AI factories to manufacture intelligence. They're creating services that offer AI-driven insights in finance, healthcare, law, IT and telecom—and working to transform their industries in the process. Among the dozens of use cases, one aims to predict how factory equipment will age, so tomorrow's plants can be more efficient.

Called Green Physics AI, it adds information like an object's CO2 footprint, age and energy consumption to SORDI.ai, which claims to be the largest synthetic dataset in manufacturing.

MIT Researchers Grow Transistors on Top of Silicon Wafers

MIT researchers have developed a groundbreaking technology that allows for the growth of 2D transition metal dichalcogenide (TMD) materials directly on fully fabricated silicon chips, enabling denser integrations. Conventional methods require temperatures of about 600°C, which can damage silicon transistors and circuits as they break down above 400°C. The MIT team overcame this challenge by creating a low-temperature growth process that preserves the chip's integrity, allowing 2D semiconductor transistors to be directly integrated on top of standard silicon circuits. The new approach grows a smooth, highly uniform layer across an entire 8-inch wafer, unlike previous methods that involved growing 2D materials elsewhere before transferring them to a chip or wafer. This process often led to imperfections that negatively impacted device and chip performance.

Additionally, the novel technology can grow a uniform layer of TMD material in less than an hour over 8-inch wafers, a significant improvement from previous methods that required over a day for a single layer. The enhanced speed and uniformity of this technology make it suitable for commercial applications, where 8-inch or larger wafers are essential. The researchers focused on molybdenum disulfide, a flexible, transparent 2D material with powerful electronic and photonic properties ideal for semiconductor transistors. They designed a new furnace for the metal-organic chemical vapor deposition process, which has separate low and high-temperature regions. The silicon wafer is placed in the low-temperature region while vaporized molybdenum and sulfur precursors flow into the furnace. Molybdenum remains in the low-temperature region, while the sulfur precursor decomposes in the high-temperature region before flowing back into the low-temperature region to grow molybdenum disulfide on the wafer surface.

43rd Symposium on VLSI Technology & Circuits to Focus on Multi-chiplet Devices and Packaging Innovations as Moore's Law Buckles

The 43rd edition of the Symposium on VLSI Technology & Circuits, held annually in Kyoto, Japan, is charting the way forward for the devices of the future. Held between June 11-16, 2023, this year's symposium will see structured presentations, Q&A, and discussions on some of the biggest technological developments in the logic chip world. The lead (plenary) sessions drop a major hint on the way the wind is blowing. Leading from the front is an address by Suraya Bhattacharya, Director, System-in-Package, A*STAR, IME, on "Multi-Chiplet Heterogeneous Integration Packaging for Semiconductor System Scaling."

Companies such as AMD and Intel have read the tea leaves: Moore's Law is buckling, and it is no longer economically feasible to build large monolithic processors at the kind of prices they commanded a decade ago. This has caused companies to ration their allocation of the latest foundry node to only the specific components of their chip design that benefit most from it, identify components that don't benefit as much, and disaggregate those into separate dies built on older foundry nodes, which are then connected through innovative packaging technologies.

YMTC Using Locally Sourced Equipment for Advanced 3D NAND Manufacturing

According to South China Morning Post (SCMP) sources, Yangtze Memory Technologies Corp (YMTC) has been planning to manufacture its advanced 3D NAND flash using locally sourced equipment. As the source notes, YMTC has placed big orders with local equipment makers in a secret project codenamed Wudangshan, named after the Taoist mountain in the company's home province of Hubei. Last year, YMTC announced significant progress towards creating 200+ layer 3D NAND flash ahead of other 3D NAND makers like Micron and SK Hynix. Called X3-9070, the chip is a 232-layer 3D NAND based on the company's advanced Xtacking 3.0 architecture.

As the SCMP finds, YMTC has placed big orders with Beijing-based Naura Technology Group, a maker of etching tools and competitor to Lam Research, to manufacture its advanced flash memory. Additionally, YMTC has reportedly asked all its tool suppliers to remove all logos and other markings from equipment to avoid additional US sanctions holding the development back. This significant order block comes after the state invested 7 billion US dollars in YMTC to boost its production capacity, and we see the company utilizing those resources right away. However, industry analysts have identified a few "choke points" on YMTC's path to independent manufacturing, as there are still no viable domestic alternatives to US-based tool makers in areas such as metrology tools, where KLA is the dominant player, and lithography tools, where ASML, Nikon, and Canon are the noteworthy suppliers. How the Wuhan-based Wudangshan project will deal with those choke points remains under wraps.

Google Merges its AI Subsidiaries into Google DeepMind

Google has announced that the company is officially merging its subsidiaries focused on artificial intelligence to form a single group. More specifically, Google Brain and DeepMind companies are now joining forces to become a single unit called Google DeepMind. As Google CEO Sundar Pichai notes: "This group, called Google DeepMind, will bring together two leading research groups in the AI field: the Brain team from Google Research, and DeepMind. Their collective accomplishments in AI over the last decade span AlphaGo, Transformers, word2vec, WaveNet, AlphaFold, sequence to sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX for expressing, training and deploying large scale ML models."

Demis Hassabis, previously CEO of DeepMind, will lead the combined group as its CEO, working alongside Jeff Dean, who has been promoted to Google's Chief Scientist and will report to Sundar Pichai. In his new role, Jeff Dean will serve as Chief Scientist for both Google Research and Google DeepMind, setting the direction of AI research at both units. This corporate restructuring should help the two previously separate teams work to a single plan and advance AI capabilities faster. We are eager to see what these teams accomplish next.

University of Chicago Molecular Engineering Team Experimenting With Stretchable OLED Display

A research team operating out of the Pritzker School of Molecular Engineering (PME) at the University of Chicago is developing a special type of material that can simultaneously emit fluorescent patterns and undergo deformation by being stretched or bent. This thin piece of experimental elastic can function as a digital display even under great force - its creators claim that the material can be stretched to twice its original length without any deterioration or failure.

Sihong Wang (assistant professor of molecular engineering) has led this research project, with Juan de Pablo (Liew Family Professor of Molecular Engineering) providing senior supervision. The team predicts that the polymer-based display will offer a wide range of applications, including use in foldable computer screens, UI-driven wearables, and health monitoring equipment. Solid OLED displays are featured in many modern devices that we use daily, but traditional OLED technology is not suited to flexible form factors due to its inherent "tight chemical bonds and stiff structures". Wang hopes to address these problems with his new polymer type: "The materials currently used in these state-of-the-art OLED displays are very brittle; they don't have any stretchability. Our goal was to create something that maintained the electroluminescence of OLED but with stretchable polymers."

Arm-based PCs to Nearly Double Market Share by 2027, Says Report

Personal computers (PCs) based on the Arm architecture will grow in popularity, and their market share will almost double from 14% now to 25% by 2027, according to Counterpoint Research's latest projections. The ability of Arm-based hardware to run macOS has allowed Apple to capture 90% of the Arm-based notebook computer market. However, full support for Windows and Office 365 and the speed of native Arm-based app adoption are also critical factors in determining the Arm SoC penetration rate in PCs. Once these factors are addressed, Arm-based PCs will become a viable option for both everyday users and businesses.

As more existing PC OEMs/ODMs and smartphone manufacturers enter the market, they will bring their expertise in Arm-based hardware and software, which will further boost the popularity of Arm-based PCs. The availability of more native Arm-based apps will also increase user comfort and familiarity with the platform. Overall, the trend towards Arm-based PCs is expected to continue and their market share will likely increase significantly in the coming years.

Mitsui and NVIDIA Announce World's First Generative AI Supercomputer for Pharmaceutical Industry

Mitsui & Co., Ltd., one of Japan's largest business conglomerates, is collaborating with NVIDIA on Tokyo-1—an initiative to supercharge the nation's pharmaceutical leaders with technology, including high-resolution molecular dynamics simulations and generative AI models for drug discovery.

Announced today at the NVIDIA GTC global AI conference, the Tokyo-1 project features an NVIDIA DGX AI supercomputer that will be accessible to Japan's pharma companies and startups. The effort is poised to accelerate Japan's $100 billion pharma industry, the world's third largest following the U.S. and China.

UK Government Seeks to Invest £900 Million in Supercomputer, Native Research into Advanced AI Deemed Essential

The UK Treasury has set aside a budget of £900 million to invest in the development of a supercomputer that would be powerful enough to chew through more than one billion billion simple calculations a second. A new exascale computer would fit the bill, for utilization by newly established advanced AI research bodies. It is speculated that one key goal is to establish a "BritGPT" system. The British government has been keeping tabs on recent breakthroughs in large language models, the most notable example being OpenAI's ChatGPT. Ambitions to match such efforts were revealed in a statement, with the emphasis: "to advance UK sovereign capability in foundation models, including large language models."
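"More than one billion billion simple calculations a second" is the defining threshold of exascale computing: 10^18 operations per second. The arithmetic is simple enough to check directly:

```python
# Exascale = 10**18 operations per second, i.e. "a billion billion".
billion = 10 ** 9
exa = 10 ** 18

print(billion * billion == exa)   # True
print(f"{exa:.0e} ops/s")         # 1e+18 ops/s
```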

The current roster of United Kingdom-based supercomputers looks unfit for the task of training complex AI models. With other countries ramping up their supercomputer budgets, the UK Government outlined its own future investment: "Because AI needs computing horsepower, I today commit around £900 million of funding, for an exascale supercomputer," said the chancellor, Jeremy Hunt. The government has also declared that quantum technologies will receive an investment of £2.5 billion over the next decade. Proponents say the technology will supercharge machine learning.