News Posts matching #AI


Apple Trained its Apple Intelligence Models on Google TPUs, Not NVIDIA GPUs

Apple has disclosed that its newly announced Apple Intelligence features were developed using Google's Tensor Processing Units (TPUs) rather than NVIDIA's widely adopted hardware accelerators such as the H100. This unexpected choice was detailed in an official Apple research paper, shedding light on the company's approach to AI development. The paper outlines how systems equipped with Google's TPUv4 and TPUv5 chips played a crucial role in creating Apple Foundation Models (AFMs). These models, including AFM-server and AFM-on-device, are designed to power both online and offline Apple Intelligence features introduced at WWDC 2024. For the training of the 6.4 billion parameter AFM-server, Apple's largest language model, the company utilized an impressive array of 8,192 TPUv4 chips, provisioned as eight slices of 1,024 chips each. The training process involved a three-stage approach, processing a total of 7.4 trillion tokens. Meanwhile, the more compact 3 billion parameter AFM-on-device model, optimized for on-device processing, was trained on 2,048 TPUv5p chips.
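The scale figures quoted in the paper are easy to sanity-check; note that the per-chip token count below is derived from the numbers above, not a figure Apple states:

```python
# Back-of-the-envelope check of the AFM-server training scale.
slices = 8
chips_per_slice = 1024
total_chips = slices * chips_per_slice        # the 8,192 TPUv4 chips quoted
total_tokens = 7.4e12                         # 7.4 trillion tokens, three stages
tokens_per_chip = total_tokens / total_chips  # roughly 0.9 billion per chip
print(total_chips, f"{tokens_per_chip / 1e9:.2f}B tokens/chip")
```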

Apple's training data came from various sources, including the Applebot web crawler and licensed high-quality datasets. The company also incorporated carefully selected code, math, and public datasets to enhance the models' capabilities. Benchmark results shared in the paper suggest that both AFM-server and AFM-on-device excel in areas such as Instruction Following, Tool Use, and Writing, positioning Apple as a strong contender in the AI race despite its relatively late entry. However, Apple's path into the AI market is more complex than that of any other competitor. Given Apple's massive user base and the millions of devices compatible with Apple Intelligence, the AFMs have the potential to permanently change how users interact with their devices, especially for everyday tasks. Hence, refining the models for these tasks is critical before mass deployment. Another surprise is the transparency from Apple, a company typically known for its secrecy. The AI boom is changing some of Apple's ways, and this glimpse into its inner workings makes for interesting reading.

Dynabook Unveils 14-inch Portégé X40-M Laptops with Intel Core Ultra CPUs and Copilot AI

Dynabook Americas, Inc., the gold standard for long-lasting, professional-grade laptops, today unveiled its latest addition to the premium Portégé family - the all-new 14-inch Portégé X40-M. Engineered with Intel Core Ultra processors (Series 1) and advanced AI capabilities, this new Windows 11 Pro laptop impeccably blends the most powerful modern hardware with sophisticated AI to maximize workplace productivity.

"The Portégé X40 has been one of Dynabook's best-selling laptops for years, and I believe that this latest iteration of an already-winning formula will further solidify the Portégé line as an indispensable booster of business efficiency and productivity," said James Robbins, General Manager, Dynabook Americas Inc. "With the integration of Intel's latest Core Ultra processors and Copilot AI enhancements, this laptop continues our tradition of delivering premium, cutting-edge capabilities that empower professionals."

MaxLinear to Showcase Panther III at Future of Memory and Storage 2024 Trade Show

MaxLinear, Inc., a leading provider of data storage acceleration solutions for enterprise and data center applications, today announced it will demonstrate the advanced compression, encryption, and security performance of its storage acceleration solution, Panther III, at the Future of Memory and Storage (FMS) 2024 trade show from August 6-8, 2024. The demos will show that Panther III can achieve up to 40 times higher throughput, up to 190 times lower latency, and up to 1,000 times lower CPU utilization than a software-only solution, leading to significant savings in the flash drives and CPU cores required.

MaxLinear's Panther III creates a bold new product category for maximizing the performance of data storage systems - a comprehensive, all-in-one "storage accelerator." Unlike encryption and/or compression solutions, MaxLinear's Panther III consolidates a comprehensive suite of storage acceleration functions, including compression, deduplication, encryption, data protection, and real-time validation, in a single hardware-based solution. Panther III is engineered to offload and expedite specific data processing tasks, thus providing a significant performance boost, storage cost savings, and energy savings compared to traditional software-only, FPGA, and other competitive solutions.

Marvell Introduces Breakthrough Structera CXL Product Line to Address Server Memory Bandwidth and Capacity Challenges in Cloud Data Centers

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today launched the Marvell Structera product line of Compute Express Link (CXL) devices that enable cloud data center operators to overcome memory performance and scaling challenges in general-purpose servers.

To address memory-intensive applications, data center operators add extra servers to get higher memory bandwidth and higher memory capacity. The compute capabilities from the added processors are typically not utilized for these applications, making the servers inefficient from cost and power perspectives. The CXL industry standard addresses this challenge by enabling new architectures that can efficiently add memory to general-purpose servers.

Report: AI Software Sales to Experience Massive Growth with 40.6% CAGR Over the Next Five Years

The market for artificial intelligence (AI) platforms software grew at a rapid pace in 2023 and is projected to maintain its remarkable momentum, driven by the increasing adoption of AI across many industries. A new International Data Corporation (IDC) forecast shows that worldwide revenue for AI platforms software will grow to $153.0 billion in 2028 with a compound annual growth rate (CAGR) of 40.6% over the 2023-2028 forecast period.
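Working backwards from the 2028 figure and the 40.6% CAGR gives a rough sense of the 2023 starting point; this is a derived estimate, not a number stated in the forecast:

```python
# Implied 2023 base, from revenue_2028 = base * (1 + CAGR)^5.
revenue_2028 = 153.0   # billions USD, IDC's 2028 forecast
cagr = 0.406           # 40.6% compound annual growth rate

base_2023 = revenue_2028 / (1 + cagr) ** 5
print(f"Implied 2023 revenue: ${base_2023:.1f}B")  # roughly $28B
```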

"The AI platforms market shows no signs of slowing down. Rapid innovations in generative AI are changing how companies think about their products, how they develop and deploy AI applications, and how they leverage technology themselves for reinventing their business models and competitive positioning," said Ritu Jyoti, group vice president and general manager of IDC's Artificial Intelligence, Automation, Data and Analytics research. "IDC expects this upward trajectory will continue to accelerate with the emergence of unified platforms for predictive and generative AI that support interoperating APIs, ecosystem extensibility, and responsible AI adoption at scale."

NVIDIA Announces Generative AI Models and NIM Microservices for OpenUSD Language, Geometry, Physics and Materials

NVIDIA today announced major advancements to Universal Scene Description, or OpenUSD, that will expand adoption of the universal 3D data interchange framework to robotics, industrial design and engineering, and accelerate developers' abilities to build highly accurate virtual worlds for the next evolution of AI.

Through new OpenUSD-based generative AI and NVIDIA-accelerated development frameworks built on the NVIDIA Omniverse platform, more industries can now develop applications for visualizing industrial design and engineering projects, and for simulating environments to build the next wave of physical AI and robots.

NVIDIA Accelerates Humanoid Robotics Development

To accelerate humanoid development on a global scale, NVIDIA today announced it is providing the world's leading robot manufacturers, AI model developers and software makers with a suite of services, models and computing platforms to develop, train and build the next generation of humanoid robotics.

Among the offerings are new NVIDIA NIM microservices and frameworks for robot simulation and learning, the NVIDIA OSMO orchestration service for running multi-stage robotics workloads, and an AI- and simulation-enabled teleoperation workflow that allows developers to train robots using small amounts of human demonstration data.

ASUS Announces Complete Portfolio of AMD Ryzen AI Laptops

ASUS announced availability of its new lineup of AMD Ryzen AI laptops, featuring advanced AI capabilities with 50 TOPS NPU AI engines.

ASUS ProArt P16 / ProArt PX13
The new ASUS ProArt laptop lineup is designed to empower every creator — whether they are everyday users, outdoor content creators, or professionals — to transform their precious life moments into enduring stories. The lightweight, durable, and powerful laptops allow users to create anywhere, create faster, and create smarter.

Intel Releases AI Playground, a Unified Generative AI and Chat App for Intel Arc GPUs

Intel on Monday rolled out the first public release of AI Playground, an AI productivity suite the company showcased at its 2024 Computex booth. AI Playground is a well-packaged suite of generative AI applications and a chatbot, designed to leverage Intel Arc discrete GPUs with at least 8 GB of video memory. All utilities in the suite are built on the OpenVINO framework and take advantage of the XMX cores of Arc A-series discrete GPUs. Currently, only three GPU models in the lineup come with 8 GB or more of video memory: the A770, A750, and A580, plus their mobile variants. The company is working on a variant of the suite for Intel Core Ultra-H series processors, which will use a combination of the NPU and the iGPU for acceleration. AI Playground is open source. Intel has made the suite as client-friendly as possible by giving it a packaged installer that handles all software dependencies.

Intel AI Playground's tools include a generative image AI that can turn prompts into standard or HD images, based on Stable Diffusion with the DreamShaper 8 and Juggernaut XL models. It also supports Phi3, LCM LoRA, and LCM LoRA SDXL. All of these have been optimized for acceleration on Arc "Alchemist" GPUs. The suite also includes an AI image enhancement utility for upscaling, detail reconstruction, styling, inpainting and outpainting, and certain kinds of image manipulation. The third major tool is an AI text chatbot supporting the popular LLMs.

DOWNLOAD: Intel AI Playground

Lattice Introduces Certus NX-28 and Certus NX-09 Small FPGAs

Lattice Semiconductor, the low power programmable leader, today announced the addition of new, logic-optimized Lattice Certus-NX FPGA devices to its leadership small FPGA portfolio. The new offering includes two new capacity points, the Certus-NX-28 and Certus-NX-09, and multiple package options that offer class-leading power efficiency, small size, and reliability with flexible migration options. These devices are designed to accelerate a broad range of Communications, Computing, Industrial, and Automotive applications.

"Lattice is committed to delivering continued innovation in small, low power FPGAs to empower our customers with optimized solutions for space-constrained applications ranging from sensor interfacing to co-processing to low power AI," said Dan Mansur, Corporate Vice President, Product Marketing, Lattice Semiconductor. "We're excited to expand our Nexus-based small FPGA offerings by adding more migratable logic and package options including 0.8 mm pitch, ideal for Industrial applications."

OpenAI in Talks with Broadcom About Developing Custom AI Chips to Power Next Generation Models

According to The Information, OpenAI is reportedly in talks with Broadcom about developing a custom AI accelerator to meet OpenAI's growing demand for high-performance compute. Broadcom is a fabless chip designer known for a wide range of silicon solutions, spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The latter is what OpenAI wants to focus on, but all the aforementioned IP developed by Broadcom is of use in a data center. Should OpenAI adopt Broadcom's solutions, the fabless silicon designer offers a complete vertical stack of products: inter-system communication over protocols such as PCIe, system-to-system communication over Ethernet networking with Broadcom Tomahawk 6 and future revisions, alongside storage solutions and many other complementary elements of a data center.

Skilled in building a wide range of IP, Broadcom also designs ASIC solutions for other companies and assisted Google in creating its Tensor Processing Unit (TPU), now in its sixth generation. Google's TPUs are massively successful: Google deploys millions of them and provides AI solutions to billions of users across the globe. Now, OpenAI wants to enter the AI chip game, and Broadcom could help, drawing on its established AI success and broad data center componentry to build a custom AI accelerator powering the infrastructure OpenAI needs for its next generation of AI models. With each new AI model OpenAI releases, compute demand spikes dramatically, and an AI accelerator that exactly matches its needs would help the company move faster and run even bigger AI models.

Dynabook Unveils New AI-Infused 14- and 16-inch Tecra A-Series Business Laptops

Dynabook Americas, Inc., the gold standard for long-lasting, professional-grade laptops, today unveiled its latest Copilot-enhanced professional laptops - the all-new 14-inch Tecra A40-M and 16-inch Tecra A60-M. Engineered with Intel Core Ultra processors (Series 1) these new Windows 11 Pro laptops fuse cutting-edge hardware with advanced AI to redefine productivity and performance for modern professionals and educators.

"These new Tecra laptops exemplify Dynabook's commitment to delivering premium, cutting-edge solutions that empower professionals and educators," said James Robbins, General Manager, Dynabook Americas Inc. "With the integration of Intel's latest Core Ultra processors and Copilot AI capabilities, these laptops set new standards for efficiency, security, and user experience in the business and education sectors."

Ex-Xeon Chief Lisa Spelman Leaves Intel and Joins Cornelis Networks as CEO

Cornelis Networks, a leading independent provider of intelligent, high-performance networking solutions, today announced the appointment of Lisa Spelman as its new chief executive officer (CEO), effective August 15. Spelman joins Cornelis from Intel Corporation, where she held executive leadership roles for more than two decades, including leading the company's core data center business. Spelman will succeed Philip Murphy, who will assume the role of president and chief operating officer (COO).

"Cornelis is unique in having the products, roadmap, and talent to help customers address this issue. I look forward to joining the team to bring their innovations to even more organizations around the globe."

Tenstorrent Launches Next Generation Wormhole-based Developer Kits and Workstations

Tenstorrent is launching its next-generation Wormhole chip in PCIe cards and workstations designed for developers interested in scaling multi-chip development using Tenstorrent's powerful open-source software stacks.

These Wormhole-based cards and systems are now available for immediate order on tenstorrent.com:
  • Wormhole n150, powered by a single processor
  • Wormhole n300, powered by two processors
  • TT-LoudBox, a developer workstation powered by four Wormhole n300s (eight processors)

Femtosense Launches AI-ADAM-100, a System in Package (SiP) for Consumer Applications

Femtosense, in partnership with ABOV Semiconductor, today launched the AI-ADAM-100, an artificial intelligence microcontroller unit (AI MCU) built on sparse AI technology to enable on-device AI features such as voice-based control in home appliances and other products. On-device AI provides immediate, no-latency user responses with low power consumption, security, operational stability, and low cost compared to GPUs or cloud-based AI.

The AI-ADAM-100 integrates the Femtosense Sparse Processing Unit 001 (SPU-001), a neural processing unit (NPU), and an ABOV Semiconductor MCU to provide deep learning-powered AI voice processing and voice-cleanup capabilities on-device at the edge. With language processing, appliances can implement "say what you mean" voice interfaces that allow users to speak naturally and express their intent freely in multiple ways. For example, "Turn the lights off", "Turn off the lights," and "Lights off" all convey the same intent and are understood as such.

Gigabyte AI TOP Utility Reinventing Your Local AI Fine-tuning

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of motherboards, graphics cards, and hardware solutions, released its exclusive AI TOP Utility. With reworked workflows, a user-friendly interface, and real-time progress monitoring, AI TOP Utility reinvents local AI model training and fine-tuning. It features a variety of groundbreaking technologies that beginners and experts alike can easily adopt, works with the most common open-source LLMs, and runs anywhere, even on a desktop PC.

GIGABYTE AI TOP is an all-round solution for local AI model fine-tuning. Running AI training and fine-tuning locally on sensitive data provides greater privacy and security, along with maximum flexibility and real-time adjustment. Pairing GIGABYTE AI TOP hardware with the AI TOP Utility addresses the common constraint of insufficient GPU VRAM when fine-tuning locally. With GIGABYTE AI TOP series motherboards, PSUs, and SSDs, as well as a GIGABYTE graphics card lineup covering the NVIDIA GeForce RTX 40 Series, AMD Radeon RX 7900 Series, and Radeon Pro W7900 and W7800 series, open-source LLMs of up to 236B parameters and beyond can now be fine-tuned locally.

Global AI Server Demand Surge Expected to Drive 2024 Market Value to US$187 Billion; Represents 65% of Server Market

TrendForce's latest industry report on AI servers reveals that high demand for advanced AI servers from major CSPs and brand clients is expected to continue in 2024. Meanwhile, TSMC, SK hynix, Samsung, and Micron's gradual production expansion has significantly eased shortages in 2Q24. Consequently, the lead time for NVIDIA's flagship H100 solution has decreased from the previous 40-50 weeks to less than 16 weeks.

TrendForce estimates that AI server shipments in the second quarter will increase by nearly 20% QoQ, and has revised the annual shipment forecast up to 1.67 million units—marking a 41.5% YoY growth.
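TrendForce's year-over-year figure implies last year's shipment base; the 2023 number below is derived, not stated in the report:

```python
# Implied 2023 AI server shipments from the 41.5% YoY growth figure.
shipments_2024 = 1.67e6                  # revised 2024 forecast, units
shipments_2023 = shipments_2024 / 1.415  # back out the prior-year base
print(f"~{shipments_2023 / 1e6:.2f}M units in 2023")  # roughly 1.18M
```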

ASML Reports €6.2 Billion Total Net Sales and €1.6 Billion Net Income in Q2 2024

Today, ASML Holding NV (ASML) has published its 2024 second-quarter results.
  • Q2 total net sales of €6.2 billion, gross margin of 51.5%, net income of €1.6 billion
  • Quarterly net bookings in Q2 of €5.6 billion of which €2.5 billion is EUV
  • ASML expects Q3 2024 total net sales between €6.7 billion and €7.3 billion and a gross margin between 50% and 51%
CEO statement and outlook
"Our second-quarter total net sales came in at €6.2 billion, at the high-end of our guidance, with a gross margin of 51.5% which is above guidance, both primarily driven by more immersion systems sales. In line with previous quarters, overall semiconductor inventory levels continue to improve, and we also see further improvement in litho tool utilization levels at both Logic and Memory customers. While there are still uncertainties in the market, primarily driven by the macro environment, we expect industry recovery to continue in the second half of the year. We expect third-quarter total net sales between €6.7 billion and €7.3 billion with a gross margin between 50% and 51%. ASML expects R&D costs of around €1,100 million and SG&A costs of around €295 million. Our outlook for the full year 2024 remains unchanged. We see 2024 as a transition year with continued investments in both capacity ramp and technology. We currently see strong developments in AI, driving most of the industry recovery and growth, ahead of other market segments," said ASML President and Chief Executive Officer Christophe Fouquet.

Qualcomm Snapdragon X "Copilot+" AI PCs Only Accounted for 0.3% of PassMark Benchmark Runs

The much-anticipated revolution in AI-powered personal computing seems to be off to a slower start than expected. Qualcomm's Snapdragon X CPUs, touted as game-changers in the AI PC market, have struggled to gain significant traction since their launch. Recent data from PassMark, a popular benchmarking software, reveals that Snapdragon X CPUs account for a mere 0.3% of submissions in the past 30 days. This stands in stark contrast to the 99.7% share held by traditional x86 processors from Intel and AMD, and raises questions about the immediate future of ARM-based PCs. The underwhelming adoption comes despite bold predictions from industry leaders: Qualcomm CEO Cristiano Amon had projected that ARM-based CPUs could capture up to 50% of the Windows PC market by 2029, and ARM's CEO similarly anticipated a shift away from x86's long-standing dominance.

However, it turns out that these PCs are primarily bought for their battery life, not their AI capabilities. Of course, it is premature to declare Arm's Windows venture a failure. The AI PC market is still in its infancy, and upcoming mid-tier laptops featuring Snapdragon X Elite CPUs could boost adoption rates, though it will take considerable time before these PCs ship in the millions of units that x86 makers achieve. The true test will come with the launch of AMD's Ryzen AI 300 and Intel's Lunar Lake CPUs, providing a clearer picture of how ARM-based options compare in AI performance. As the AI PC landscape evolves, Qualcomm faces mounting pressure: NVIDIA's anticipated entry into the market and significant performance improvements in next-generation x86 processors from Intel and AMD pose a massive challenge. The coming months will be crucial in determining whether Snapdragon X CPUs can live up to their initial hype and carve out a significant place in the AI PC ecosystem.

2.1 Billion Pixels in Las Vegas Sphere are Powered by 150 NVIDIA RTX A6000 GPUs

The city of Las Vegas late last year added another attraction: the Sphere, a 1.2 million pixel outdoor display venue famous for its massive size and its 18,600-seat auditorium. The auditorium space is a feat of its own, with features like a 16K x 16K resolution wraparound interior LED screen, speakers with beamforming and wave field synthesis technologies, and 4D physical effects. We have recently learned that NVIDIA GPUs power the Sphere, and not just a handful of them: 150 NVIDIA RTX A6000 cards drive both the Sphere's 1.2 million exterior pixels, spread over 54,000 m², and its 16 inner 16K displays with a total output of 2.1 billion pixels. Interestingly, the 150 NVIDIA RTX A6000 cards offer a combined 600 DisplayPort 1.4a outputs.

With each card carrying 48 GB of memory, the system totals 7.2 TB of GDDR6 ECC memory. With the Sphere being a $2.3 billion project, it is expected to have an infotainment system capable of driving the massive venue, and it certainly delivers. Most massive media projects are powered by only a handful of cards, so deployment at this scale is something we are seeing for the first time in a non-AI processing system. The only comparable scale today is the massive thousand-GPU clusters used for AI processing, so seeing a different and interesting application is refreshing.
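The quoted hardware figures are internally consistent, as a quick check shows; the per-card output count follows from the totals above:

```python
# Cross-check the Sphere's GPU numbers.
gpus = 150
vram_per_gpu_gb = 48   # RTX A6000: 48 GB GDDR6 ECC each
total_dp_ports = 600   # combined DisplayPort 1.4a outputs

total_vram_tb = gpus * vram_per_gpu_gb / 1000  # 7.2 TB system-wide
ports_per_gpu = total_dp_ports // gpus         # 4 outputs per card
print(total_vram_tb, ports_per_gpu)
```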

SoftBank Group Acquires Graphcore to Build Next-Generation of AI Compute

Graphcore today announced that the company has been acquired by SoftBank Group Corp. Under the deal, Graphcore becomes a wholly owned subsidiary of SoftBank and will continue to operate under the Graphcore name.

"This is a tremendous endorsement of our team and their ability to build truly transformative AI technologies at scale, as well as a great outcome for our company," said Graphcore co-founder and CEO Nigel Toon. "Demand for AI compute is vast and continues to grow. There remains much to do to improve efficiency, resilience, and computational power to unlock the full potential of AI. In SoftBank, we have a partner that can enable the Graphcore team to redefine the landscape for AI technology."

AMD Plans to Use Glass Substrates in its 2025/2026 Lineup of High-Performance Processors

AMD reportedly plans to incorporate glass substrates into its high-performance system-in-packages (SiPs) sometime between 2025 and 2026. Glass substrates offer several advantages over traditional organic substrates, including superior flatness, thermal properties, and mechanical strength. These characteristics make them well-suited for advanced SiPs containing multiple chiplets, especially in data center applications where performance and durability are critical. The adoption of glass substrates aligns with the industry's broader trend towards more complex chip designs. As leading-edge process technologies become increasingly expensive and yield gains diminish, manufacturers are turning to multi-chiplet designs to improve performance. AMD's current EPYC server processors already incorporate up to 13 chiplets, while its Instinct AI accelerators feature 22 pieces of silicon. A more extreme example is Intel's Ponte Vecchio, which utilized 63 tiles in a single package.

Glass substrates could enable AMD to create even more complex designs without relying on costly interposers, potentially reducing overall production expenses. This technology could further boost the performance of AI and HPC accelerators, a growing market that requires constant innovation. The glass substrate market is heating up, with major players like Intel, Samsung, and LG Innotek also investing heavily in this technology. Market projections suggest explosive growth, from $23 million in 2024 to $4.2 billion by 2034. Last year, Intel committed to investing up to 1.3 trillion won (almost one billion USD) to start applying glass substrates to its processors by 2028. Everything suggests that glass substrates are the future of chip design, and we await the first high-volume production designs.
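Those market projections imply an extraordinary growth rate; the figure below is derived from the $23 million and $4.2 billion endpoints above, not stated in the projection:

```python
# Implied compound annual growth rate of the glass substrate market,
# from $23M (2024) to $4.2B (2034) over ten years.
start, end, years = 23e6, 4.2e9, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # roughly 68% per year
```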

AMD Readies Ryzen 7 8745HS Hawk Point APU with Disabled NPU

According to a recent leak from Golden Pig on Weibo, AMD is gearing up to introduce the Ryzen 7 8745HS, a modified version of the existing Ryzen 7 8845HS APU. The key difference in this new chip lies in its neural processing capabilities. While the 8845HS boasts AMD's XDNA-based NPU (Neural Processing Unit), the upcoming 8745HS is rumored to have this feature disabled. Specifications for the 8745HS are expected to closely mirror its predecessor, featuring eight Zen 4 cores, 16 threads, and a configurable TDP range of 35-54 W. The chip will likely retain the Radeon 780M integrated GPU with 12 Compute Units. However, it is possible that AMD might introduce slight clock speed reductions to differentiate the new model further.

It is also worth pointing out that the Hawk Point generation is not Copilot+ certified, as the first-generation XDNA NPU delivers only 16 TOPS of the 40 TOPS required, so having an NPU does not help AMD advertise these processors as Copilot+ ready. The success of this new variant will largely depend on its pricing and adoption by laptop/mobile OEMs. Without the NPU, the 8745HS could offer a more budget-friendly option for users who don't require extensive local AI processing capabilities. After all, AI workloads remain a niche segment in consumer computing, and many users may find the 8745HS an attractive alternative if its price is reduced, especially given the availability of cloud-based AI tools.

TSMC to Raise Wafer Prices by 10% in 2025, Customers Seemingly Agree

Taiwanese semiconductor giant TSMC is reportedly planning to increase its wafer prices by up to 10% in 2025, according to a Morgan Stanley note cited by investor Eric Jhonsa. The move comes as demand for cutting-edge processors in smartphones, PCs, AI accelerators, and HPC continues to surge. Industry insiders reveal that TSMC's state-of-the-art 4 nm and 5 nm nodes, used for AI and HPC customers such as AMD, NVIDIA, and Intel, could see up to 10% price hikes. This increase would push the cost of 4 nm-class wafers from $18,000 to approximately $20,000, representing a significant 25% rise since early 2021 for some clients and an 11% rise from the last price hike. Talks about price hikes with major smartphone manufacturers like Apple have proven challenging, but there are indications that modest price increases are being accepted across the industry. Morgan Stanley analysts project a 4% average selling price increase for 3 nm wafers in 2025, which are currently priced at $20,000 or more per wafer.

Mature nodes like 16 nm are unlikely to see price increases due to sufficient capacity. However, TSMC is signaling potential shortages in leading-edge capacity to encourage customers to secure their allocations. Adding to the industry's challenges, advanced chip-on-wafer-on-substrate (CoWoS) packaging prices are expected to rise by 20% over the next two years, following previous increases in 2022 and 2023. TSMC aims to boost its gross margin to 53-54% by 2025, anticipating that customers will absorb these additional costs. The impact of these price hikes on end-user products remains uncertain. Competing foundries like Intel and Samsung may seize this opportunity to offer more competitive pricing, potentially prompting some chip designers to consider alternative manufacturing options. Additionally, TSMC's customers could reportedly be unable to secure their capacity allocation without "appreciating TSMC's value."
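The reported wafer-price figures are mutually consistent; the early-2021 price below is derived from the quoted 25% cumulative rise, not stated in the note:

```python
# Cross-check the 4nm-class wafer prices: an $18,000 -> $20,000 step is
# an ~11% hike, and a cumulative 25% rise since early 2021 implies a
# starting price of about $16,000.
price_now, price_2025 = 18_000, 20_000
step = price_2025 / price_now - 1  # ~0.111, i.e. the "11% rise"
implied_2021 = price_2025 / 1.25   # derived early-2021 price
print(f"step: {step:.1%}, early-2021 price: ${implied_2021:,.0f}")
```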

AMD to Acquire Silo AI to Expand Enterprise AI Solutions Globally

AMD today announced the signing of a definitive agreement to acquire Silo AI, the largest private AI lab in Europe, in an all-cash transaction valued at approximately $665 million. The agreement represents another significant step in the company's strategy to deliver end-to-end AI solutions based on open standards and in strong partnership with the global AI ecosystem. The Silo AI team consists of world-class AI scientists and engineers with extensive experience developing tailored AI models, platforms and solutions for leading enterprises spanning cloud, embedded and endpoint computing markets.

Silo AI CEO and co-founder Peter Sarlin will continue to lead the Silo AI team as part of the AMD Artificial Intelligence Group, reporting to AMD senior vice president Vamsi Boppana. The acquisition is expected to close in the second half of 2024.
