News Posts matching #Broadcom


Google Teams up with MediaTek for Next-Generation TPU v7 Design

According to Reuters, citing The Information, Google will collaborate with MediaTek to develop its seventh-generation Tensor Processing Unit (TPU), which is also known as TPU v7. Google maintains its existing partnership with Broadcom despite the new MediaTek collaboration. The AI accelerator is scheduled for production in 2026, and TSMC is handling manufacturing duties. Google will lead the core architecture design while MediaTek manages I/O and peripheral components, as Economic Daily News reports. This differs from Google's ongoing relationship with Broadcom, which co-develops core TPU architecture. The MediaTek partnership reportedly stems from the company's strong TSMC relationship and lower costs compared to Broadcom.

There is also a possibility that MediaTek could design inference-focused TPU v7 chips while Broadcom focuses on training architecture. Nonetheless, TPU development is a massive undertaking: Google deploys so many chips that it could, hypothetically, engage a third design partner. The TPU program continues Google's vertical integration strategy for AI infrastructure. By designing proprietary AI chips for internal R&D and cloud operations, Google reduces its dependency on NVIDIA hardware, while competitors like OpenAI, Anthropic, and Meta rely heavily on NVIDIA's processors for AI training and inference. At Google's scale, serving billions of queries a day, custom silicon makes sense both financially and technologically: Google has spent years profiling its own specific workloads and translating them into hardware acceleration.

Global Top 10 IC Design Houses See 49% YoY Growth in 2024, NVIDIA Commands Half the Market

TrendForce reveals that the combined revenue of the world's top 10 IC design houses reached approximately US$249.8 billion in 2024, marking a 49% YoY increase. The booming AI industry has fueled growth across the semiconductor sector, with NVIDIA leading the charge, posting an astonishing 125% revenue growth, widening its lead over competitors, and solidifying its dominance in the IC industry.

Looking ahead to 2025, advancements in semiconductor manufacturing will further enhance AI computing power, with LLMs continuing to emerge. Open-source models like DeepSeek could lower AI adoption costs, accelerating AI penetration from servers to personal devices. This shift positions edge AI devices as the next major growth driver for the semiconductor industry.

TSMC Continues to Explore Joint Venture for Intel Foundry Ownership

TSMC is still considering a strategic joint venture to operate Intel's manufacturing capacity, according to four sources familiar with the discussions who spoke to Reuters. The proposed arrangement would limit TSMC's ownership to less than 50% and potentially distribute stakes to major American chip designers, including AMD, Broadcom, NVIDIA, and Qualcomm. The initiative emerged following direct intervention from the Trump administration, which has prioritized revitalizing domestic semiconductor manufacturing while maintaining American control of critical technology infrastructure. Under the proposed framework, Intel would spin off its Intel Foundry division, with TSMC acquiring a minority stake and bringing in partner companies as co-investors.

Apple, TSMC's largest customer, is absent from these preliminary discussions, suggesting careful strategic positioning within the competitive ecosystem. Significant technical and operational challenges face the potential joint venture, however. Intel's manufacturing and real estate assets are valued at approximately $108 billion, requiring substantial capital commitments from prospective partners. More fundamentally, technological integration presents massive obstacles: Intel and TSMC use fundamentally different manufacturing processes, with distinct equipment configurations and material requirements. The complex negotiations remain at an early stage, with significant technical, financial, and regulatory hurdles to overcome before any formal agreement materializes, and Intel has yet to give a clear green light to the spin-off rumors.

Meta Reportedly Reaches Test Phase with First In-house AI Training Chip

According to a Reuters technology report, Meta's engineering department is testing its "first in-house chip for training artificial intelligence systems." Two inside sources describe this as a significant development milestone, involving a small-scale deployment of early samples. The owner of Facebook could ramp up production if the initial batches pass muster. Despite recently showcasing an open-architecture NVIDIA "Blackwell" GB200 system for enterprise, Meta leadership is reported to be pursuing proprietary solutions. Multiple big players in the field of artificial intelligence are attempting to break away from total reliance on Team Green. Last month, press outlets concentrated on OpenAI's alleged finalization of an in-house design, with rumored involvement from Broadcom and TSMC.

One of the Reuters industry moles believes that Meta has signed up with TSMC—supposedly, the Taiwanese foundry was responsible for producing the test batches. Tom's Hardware reckons that Meta and Broadcom worked together on the tape-out of the social media giant's "first AI training accelerator." Development of the company's "Meta Training and Inference Accelerator" (MTIA) series stretches back a couple of years—according to Reuters, the multi-part project "had a wobbly start for years, and at one point scrapped a chip at a similar phase of development...Meta last year, started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds." Leadership is reportedly aiming to get custom silicon solutions up and running for AI training by next year. Past examples of MTIA hardware were deployed with open-source RISC-V cores (for inference tasks), but it is not clear whether this architecture will form the basis of Meta's latest AI chip design.

NVIDIA and Broadcom Testing Intel 18A Node for Chip Production

TSMC appears to be in for a competitive period, as sources close to Reuters note that both NVIDIA and Broadcom have tested Intel's 18A node with initial test chips. These tests are early indicators of whether Intel can successfully pivot into the contract manufacturing sector currently dominated by TSMC. Intel's 18A technology—featuring RibbonFET transistors and PowerVia backside power delivery—continues progressing through its development roadmap. The technology's performance characteristics reportedly sit between TSMC's current and next-generation nodes, creating a narrow window of competitive opportunity that Intel must capitalize on. What makes these particular tests significant is their positioning relative to actual production commitments. Chip designers typically run multiple test phases before allocating high-volume manufacturing contracts, with each progression reducing technical risk.

Reuters also reported a six-month qualification delay for third-party IP blocks, a critical vulnerability in Intel's foundry strategy, since it potentially undermines the company's ability to serve smaller chip designers who rely on these standardized components. However, once this IP (PHYs, controllers, PCIe interfaces, etc.) is qualified for the 18A node, it is expected to go into many SoCs, adding up to millions of shipped chips. Additionally, the geopolitical dimensions of Intel's foundry efforts ease the concerns of US-based chip designers, who gain a valuable domestic manufacturing partner in their supply chain. If the 18A node proves competitive with TSMC's offerings, Intel's financial trajectory stands to be the biggest beneficiary. With foundry revenues declining 60% year-over-year and profitability pushed beyond 2027, the company must demonstrate commercial viability to investors increasingly skeptical of its capital-intensive manufacturing strategy. Securing high-profile customers like NVIDIA could provide the market validation necessary to sustain continued investment in its foundry infrastructure.

ASUSTOR Launches Enterprise-Grade SAS JBOD Xpanstor 12R and AS-SAS8e SAS HBA

ASUSTOR Inc. today announced the enterprise-grade Xpanstor 12R JBOD expansion unit and the AS-SAS8e SAS HBA. ASUSTOR NAS devices equipped with the AS-SAS8e can attach the Xpanstor 12R to provide convenient and robust file sharing, remote access, and backup functionality. When additional storage capacity is required, pairing a NAS with the Xpanstor 12R connects additional hard drives to the ASUSTOR NAS over a SAS link, creating a robust enterprise storage environment for managing large volumes of data.

Effortless Configuration
Storage Manager in ADM enables easy management and configuration between an ASUSTOR NAS and an attached ASUSTOR expansion unit. Storage Manager also supports the creation of multiple volume types and snapshot features, further enhancing data security and reliability.

Intel Faces Potential Breakup as TSMC and Broadcom Explore Acquisition

According to sources close to the Wall Street Journal, Intel is weighing preliminary acquisition offers that could split the company into two parts: product and foundry. TSMC and Broadcom are independently exploring deals that would divide Intel's chip design and manufacturing operations. Broadcom has initiated informal discussions regarding Intel's chip design and marketing divisions, while TSMC is considering assembling an investor consortium to acquire Intel's facilities. The latter is improbable, as Intel's fabs are strategically among the most critical parts of the US semiconductor supply chain: Intel manufactures custom chips for the US Department of Defense, making foreign ownership of its fabs unacceptable. News of the acquisition interest comes as Intel grapples with manufacturing setbacks, including a total $13.4 billion loss in its foundry segment during 2024 and a significant erosion of market share in the AI processor market.

The acquisition talks face substantial regulatory hurdles, particularly regarding national security concerns. The US government has signaled resistance to foreign ownership of Intel's domestic manufacturing capabilities, which are deemed strategically vital to American technological sovereignty. This could particularly impact TSMC's bid for Intel's plants despite the Taiwanese company's position as the world's leading contract chipmaker. Intel's vulnerability to acquisition follows a series of strategic missteps under former leadership, including delayed manufacturing innovations and an increasing reliance on government subsidies for facility expansion. The company's share price has declined 60% from its 2021 highs amid these challenges, attracting potential buyers despite the complexity of any potential deal structure. Successful execution would require navigating both regulatory approval and the practical difficulties of disaggregating Intel's deeply integrated design and manufacturing operations.

Report Suggests OpenAI Finalizing Proprietary GPU Design

Going back a year, we started hearing about an OpenAI proprietary AI chip project—this (allegedly) highly ambitious endeavor included grand plans for a dedicated fabrication network. TSMC was reportedly in the equation, but indirectly laughed at the AI research organization's ardent requests. Fast-forward to the present day; OpenAI appears to be actively pursuing a proprietary GPU design through traditional means. A Reuters exclusive report points to 2025 being an important year for the company's aforementioned "in-house" AI chip—the publication believes that OpenAI's debut silicon design has reached the finalization stage. Insiders have divulged that the project is only months away from being submitted to TSMC for "taping out." The foundry's advanced 3-nanometer process technology is reported to be on the cards. A Reuters source reckons that the unnamed chip features: "a commonly used systolic array architecture with high-bandwidth memory (HBM)...and extensive networking capabilities."
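The "systolic array" dataflow mentioned in the report is easy to illustrate in a few lines of code. The toy simulation below is purely our own sketch, not a description of OpenAI's actual design (array size, dataflow variant, and number formats are all unknown): each processing element (i, j) owns one output of the matrix product, and skewed operands stream past it so that A[i][k] and B[k][j] meet at cycle t = i + j + k.

```python
# Illustrative sketch only: a toy output-stationary systolic matrix multiply.
# Nothing here reflects the rumored chip; it just shows the dataflow idea.
def systolic_matmul(A, B):
    M, K, N = len(A), len(A[0]), len(B[0])
    C = [[0] * N for _ in range(M)]          # one accumulator per PE
    # At cycle t, PE (i, j) receives A[i][k] from the left and B[k][j]
    # from above, where k = t - i - j: the "skew" that lines operands up.
    for t in range(M + N + K - 2):
        for i in range(M):
            for j in range(N):
                k = t - i - j
                if 0 <= k < K:               # this PE is active this cycle
                    C[i][j] += A[i][k] * B[k][j]
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

After M + N + K − 2 cycles every element of the product has been accumulated in place, which is why systolic designs pair naturally with the high-bandwidth memory the report mentions: the array only needs operands fed in at its edges.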

Broadcom is reportedly assisting with the development of OpenAI's in-house design—we heard about rumored negotiations taking place last summer. Jim Keller's tempting offer—to create an AI chip for less than $1 trillion—was ignored early last year; OpenAI has instead assembled its own internal team of industry veterans. The October 2024 news cycle posited that former Google TPU engineers were drafted in as team leaders, with mass production targeted for 2026. The latest Reuters article reiterates this projected timeframe, albeit dependent on the initial tape-out going "smoothly." OpenAI's chip department has grown to around forty individuals in recent months, according to industry moles—a small number relative to the headcounts at "Google or Amazon's AI chip program."

Broadcom Delivers Quantum Resistant Network Encryption for Real-time Ransomware Detection

Broadcom Inc. today announced an industry first: the new, innovative Emulex Secure Fibre Channel Host Bus Adapters (HBAs), a cost-effective, easy-to-manage solution that encrypts all data as it moves between servers and storage.

Encrypting mission-critical data is no longer a nice-to-have, but a must-have. The cost of ransomware attacks continues to rise, with attacks in 2024 costing $5.37 million on average per incident. Emerging generative AI and quantum computers magnify the risk if data is not encrypted at all points in the data center, including the network.

Trump Administration Plans to Impose 25-100% Tariffs on Taiwan-Sourced Chips, Including TSMC

The United States, currently led by the Trump administration, could be preparing a surprise package for its close silicon ally—Taiwan. During a House GOP issues conference in Florida, US President Donald Trump announced that he would impose 25% to 100% tariffs on Taiwan-made chips, including those from the world's leading silicon manufacturer, TSMC. Trump addressed the conference, saying, "In the very near future, we are going to be placing tariffs on foreign production of computer chips, semiconductors, and pharmaceuticals to return production of these essential goods to the United States. They left us and went to Taiwan; we want them to come back. We do not want to give them billions of dollars like this ridiculous program that Biden has given everybody billions of dollars. They already have billions of dollars. […] They did not need money. They needed an incentive. And the incentive is going to be they [do not want to] pay a 25%, 50% or even a 100% tax."

The issue for TSMC is its heavy reliance on US companies to drive revenue. The majority of its cutting-edge silicon goes to only a handful of companies, including Apple, NVIDIA, Qualcomm, and Broadcom. Tariffs would break the supply chain economics of the semiconductor world: the US is TSMC's most significant export market, and US companies with trillions of dollars in combined market capitalization rely on Taiwanese silicon. As a result, TSMC would most likely raise its wafer prices, with the increase trickling down into higher product prices from US companies. TSMC plans to bring its advanced manufacturing to American soil, but given that these tariffs might break the economic model it currently operates under, that may need to happen sooner. The Taiwan-based silicon giant had planned to keep its US facilities a generation or two behind its domestic fabs, which produce the newest nodes. If Trump goes through with the tariffs, TSMC could make additional changes to its US-based manufacturing plans.

Solidigm Extends Agreement with Broadcom on High-Capacity SSD Controllers for AI

Solidigm, a leading provider of innovative NAND flash memory solutions, today announced a multi-year extension of its agreement with Broadcom Inc. on the use of high-capacity solid-state drive (SSD) controllers to support artificial intelligence (AI) and data-intensive workloads. Solidigm is the leading provider of high-capacity storage for AI, and Broadcom's custom controllers have served as a critical component of Solidigm SSDs for more than a decade. With more than 120 million units of Solidigm SSDs shipped featuring Broadcom controllers, the partnership between the two companies has continued through key industry SSD milestones including Serial ATA (SATA), Serial-Attached SCSI (SAS) and Non-Volatile Memory Express (NVMe).

The agreement also includes collaboration on Solidigm's recently announced 122 TB (terabyte) Solidigm D5-P5336 data center SSD, the world's highest capacity PCIe SSD that delivers industry-leading storage efficiency from the core data center to the edge. "With our new 122 TB SSD, Solidigm further extends our high-capacity QLC (quad-level cell) leadership from 8 to 122 TB drives that all share the same controller from Broadcom, making our drives easier for customers to qualify," said Solidigm Co-CEO Kevin Noh. "Our relationship with Broadcom is pivotal to Solidigm as we collectively work to help our customers achieve efficiency benefits in the buildout of AI infrastructure."

New Raspberry Pi 5 With 16 GB Goes On Sale At $120

We first announced Raspberry Pi 5 back in the autumn of 2023, with just two choices of memory density: 4 GB and 8 GB. Last summer, we released the 2 GB variant, aimed at cost-sensitive applications. And today we're launching its bigger sibling, the 16 GB variant, priced at $120.

Why 16 GB, and why now?
We're continually surprised by the uses that people find for our hardware. Many of these fit into 8 GB (or even 2 GB) of SDRAM, but the threefold step up in performance between Raspberry Pi 4 and Raspberry Pi 5 opens up use cases like large language models and computational fluid dynamics, which benefit from having more memory per core. And while Raspberry Pi OS has been tuned to have low base memory requirements, heavyweight distributions like Ubuntu benefit from additional memory capacity for desktop use cases. The optimized D0 stepping of the Broadcom BCM2712 application processor includes support for memories larger than 8 GB. And our friends at Micron were able to offer us a single package containing eight of their 16 Gbit LPDDR4X die, making a 16 GB product feasible for the first time.
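The capacity arithmetic behind the new package is simple enough to sketch; the constants below come straight from the paragraph above (eight 16 Gbit LPDDR4X die in one package):

```python
# Why eight 16 Gbit die add up to a 16 GB Raspberry Pi 5.
GBIT_PER_DIE = 16       # Micron LPDDR4X die density, in gigabits
DIES_PER_PACKAGE = 8    # die stacked in the single memory package

total_gbit = GBIT_PER_DIE * DIES_PER_PACKAGE  # 128 Gbit in the package
total_gb = total_gbit // 8                    # 8 bits per byte -> 16 GB
print(total_gb)  # 16
```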

HighPoint M.2/E1.S NVMe RAID AICs Deliver Unparalleled Speed and Scalability for GPU Server Applications

HighPoint Technologies, a leader in advanced PCIe Switch and RAID AIC, Adapter and Storage Enclosure solutions, has announced an extensive line of M.2 and E1.S RAID AICs engineered to accommodate high-performance GPU-centric workloads. Designed for enterprise and datacenter class computing environments, HighPoint NVMe RAID AICs deliver class-leading performance and unmatched scalability, enabling modern x86/AMD and Arm platforms to support 4 to 16 NVMe SSDs via a single PCIe Gen 4 or Gen 5 x16 slot. State-of-the-art PCIe Switching Architecture and flexible RAID technology enable administrators to custom tailor M.2 and E1.S storage configurations for a broad range of data-intensive applications, and seamlessly scale or expand existing storage configurations to meet the needs of evolving workflows.

Unprecedented Storage Density
HighPoint NVMe AICs have established a new milestone for M.2 NVMe storage. HighPoint's revolutionary Dual-Width AIC architecture enables a single PCIe Gen 4 or Gen 5 x16 slot to directly host up to 16 M.2 NVMe SSDs and 128 TB of storage capacity, at speeds up to 28 GB/s; a truly unprecedented advancement in compact, single-device storage expansion. State-of-the-art PCIe switching technology and advanced cooling systems maximize transfer throughput and ensure M.2 configurations operate at peak efficiency by stopping the performance-sapping threat of thermal throttling in its tracks.
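The headline figures can be sanity-checked with a little arithmetic. The sketch below assumes 8 TB per M.2 SSD (implied by 16 drives totaling 128 TB, but not stated explicitly) and uses the standard PCIe 4.0 link parameters of 16 GT/s per lane with 128b/130b encoding:

```python
# Sanity check of the headline numbers (8 TB/drive is an assumption).
drives = 16
tb_per_drive = 8
capacity_tb = drives * tb_per_drive            # 128 TB, matching the claim

# Raw PCIe 4.0 x16 ceiling: 16 GT/s per lane, 128b/130b line code.
gts_per_lane = 16
lanes = 16
encoding = 128 / 130
ceiling_gbs = gts_per_lane * lanes * encoding / 8   # ~31.5 GB/s

print(capacity_tb, round(ceiling_gbs, 1))
```

The quoted 28 GB/s sits plausibly under the ~31.5 GB/s raw ceiling of a Gen 4 x16 link once switch and protocol overheads are accounted for; a Gen 5 x16 slot roughly doubles that ceiling.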

Fujitsu Previews Monaka: 144-Core Arm CPU Made with Chiplets

Fujitsu has previewed its next-generation Monaka processor, a 144-core powerhouse for the data center. Satoshi Matsuoka of the RIKEN Center for Computational Science showcased a mechanical sample on social media platform X. The Monaka processor is developed in collaboration with Broadcom and employs an innovative 3.5D eXtreme Dimension System-in-Package architecture featuring four 36-core chiplets manufactured on TSMC's N2 process. These chiplets are stacked face-to-face with SRAM tiles through hybrid copper bonding, with the cache layer built on TSMC's N5 process. A distinguishing feature of the Monaka design is its approach to memory architecture: rather than incorporating HBM, Fujitsu has opted for pure cache dies below the compute logic, combined with DDR5 DRAM compatibility, potentially leveraging advanced modules like MR-DIMM and MCR-DIMM.

The processor's I/O die supports cutting-edge interfaces, including DDR5 memory, PCIe 6.0, and CXL 3.0, for seamless integration with modern data center infrastructure. Security is handled through Armv9-A's Confidential Computing Architecture for enhanced workload isolation. Fujitsu has set ambitious goals for Monaka: the company aims to achieve twice the energy efficiency of current x86 processors by 2027 while retaining air cooling. The processor targets both AI and HPC workloads with Arm SVE2 support, which enables vector lengths of up to 2048 bits. Scheduled for release during Fujitsu's fiscal year 2027 (April 2026 to March 2027), the Monaka processor is shaping up as a competitor to AMD's EPYC and Intel's Xeon processors.

Broadcom Delivers Industry's First 3.5D F2F Technology for AI XPUs

Broadcom Inc. today announced the availability of its 3.5D eXtreme Dimension System in Package (XDSiP) platform technology, enabling consumer AI customers to develop next-generation custom accelerators (XPUs). The 3.5D XDSiP integrates more than 6000 mm² of silicon and up to 12 high bandwidth memory (HBM) stacks in one packaged device to enable high-efficiency, low-power computing for AI at scale. Broadcom has achieved a significant milestone by developing and launching the industry's first Face-to-Face (F2F) 3.5D XPU.

The immense computational power required for training generative AI models relies on massive clusters of 100,000 XPUs, growing toward 1 million. These XPUs demand increasingly sophisticated integration of compute, memory, and I/O capabilities to achieve the necessary performance while minimizing power consumption and cost. Traditional methods like Moore's Law and process scaling are struggling to keep up with these demands. Therefore, advanced system-in-package (SiP) integration is becoming crucial for next-generation XPUs. Over the past decade, 2.5D integration, which places up to 2500 mm² of chiplet silicon and up to eight HBM stacks on an interposer, has proven valuable for XPU development. However, as new and increasingly complex LLMs are introduced, their training necessitates 3D silicon stacking for better size, power, and cost. Consequently, 3.5D integration, which combines 3D silicon stacking with 2.5D packaging, is poised to become the technology of choice for next-generation XPUs in the coming decade.

Intel 18A Process Node Clocks an Abysmal 10% Yield: Report

In case you're wondering why Intel went with TSMC 3 nm to build the Compute tile of its "Arrow Lake" processor and the SoC tile of "Lunar Lake," instead of Intel 3 or even Intel 20A, perhaps there's more to the recent story about Broadcom voicing its disappointment in the Intel 18A foundry node. The September 2024 report didn't put a number on the 18A yields that spooked Broadcom, but we now have some idea of just how bad things are. Korean publication Chosun, which tracks developments in the electronics and ICT industries, reports that yields on the Intel 18A foundry node stand at an abysmal 10%, making it unfit for mass production. Broadcom validated Intel 18A while prospecting for a cutting-edge node for its high-bandwidth network processors.

The report also hints that Intel's in-house foundry nodes going off the rails could have been an important factor in the Board's decision to let go of former CEO Pat Gelsinger, as huge second-order effects will be felt across the company's entire product stack in development. For example, company roadmaps put Intel's next-generation "Clearwater Forest" server processor, slated for 2025, on the Intel 18A node. Unless Intel Foundry can pull off a miracle, an effort must be underway to redesign the chip for whichever TSMC node is cutting-edge in 2025.

Raspberry Pi Compute Module 5 Officially Launches With Broadcom BCM2712 Quad-Core SoC

Today we're happy to announce the much-anticipated launch of Raspberry Pi Compute Module 5, the modular version of our flagship Raspberry Pi 5 single-board computer, priced from just $45.

An unexpected journey
We founded the Raspberry Pi Foundation back in 2008 with a mission to give today's young people access to the sort of approachable, programmable, affordable computing experience that I benefitted from back in the 1980s. The Raspberry Pi computer was, in our minds, a spiritual successor to the BBC Micro, itself the product of the BBC's Computer Literacy Project. But just as the initially education-focused BBC Micro quickly found a place in the wider commercial computing marketplace, so Raspberry Pi became a platform around which countless companies, from startups to multi-billion-dollar corporations, chose to innovate. Today, between seventy and eighty percent of Raspberry Pi units go into industrial and embedded applications.

OpenAI Designs its First AI Chip in Collaboration with Broadcom and TSMC

According to a recent Reuters report, OpenAI is continuing its moves in the custom silicon space, expanding beyond its reported talks with Broadcom into a broader strategy involving multiple industry leaders. Broadcom is a fabless chip designer known for a wide range of silicon solutions, spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The company behind ChatGPT is actively working with both Broadcom and TSMC to develop its first proprietary AI chip, focused specifically on inference operations. Getting a custom chip to handle training runs is a more complex task, and OpenAI is leaving that to its current partners until it works out all the details. Even with an inference-only chip, the scale at which OpenAI operates and serves its models makes it financially sensible for the company to develop custom solutions tailored to its infrastructure needs.

This time, the initiative represents a more concrete and nuanced approach than previously understood. Rather than just exploratory discussions, OpenAI has assembled a dedicated chip team of approximately 20 people, led by former Google TPU engineers Thomas Norrie and Richard Ho. The company has secured manufacturing capacity with TSMC, targeting a 2026 timeline for its first custom-designed chip. While Broadcom's involvement leverages its expertise in helping companies optimize chip designs for manufacturing and in managing data movement between chips—crucial for AI systems running thousands of processors in parallel—OpenAI is simultaneously diversifying its compute strategy. This includes adding AMD's Instinct MI300X chips to its infrastructure alongside its existing NVIDIA deployments. Meta takes a similar approach: it now trains its models on NVIDIA GPUs and serves them to the public (inference) using AMD Instinct MI300X.

Intel's Silver Lining is $8.5 Billion CHIPS Act Funding, Possibly by the End of the Year

Intel's recent financial woes have pushed the company into severe cost-cutting measures, including job cuts and project delays. However, a silver lining remains: Intel is reportedly in the final stages of securing $8.5 billion in direct funding from the US government under the CHIPS Act, to be delivered by the end of the year. The potential financing comes at a crucial time for Intel, which has been grappling with financial challenges; the company reported a $1.6 billion loss in the second quarter of 2024, leading to short-term setbacks. According to sources close to the Financial Times, Intel's funding would represent the CHIPS Act's largest single share, a massive boost to US-based semiconductor manufacturing.

Looking ahead, the potential CHIPS Act funding could serve as a catalyst for Intel's resurgence, reassuring both investors and customers about the company's future. A key element of Intel's recovery strategy lies in the ramp-up of production for its advanced 18A node, which should become the primary revenue driver for its foundry unit. This advancement, coupled with the anticipated government backing, positions Intel to potentially capture market share from established players like TSMC and Samsung. The company has already secured high-profile customers such as Amazon and (allegedly) Broadcom, hinting at its growing appeal in the foundry space. Moreover, Intel's enhanced domestic manufacturing capabilities align well with potential US government mandates for companies like NVIDIA and Apple to produce processors locally, a consideration driven by escalating geopolitical tensions.

Intel 20A Node Cancelled for Foundry Customers, "Arrow Lake" Mainly Manufactured Externally

Intel has announced the cancellation of its 20A node for Foundry customers, as well as the shift of the majority of "Arrow Lake" production to external foundries. The tech giant will instead focus its resources on the more advanced 18A node while relying on external partners for Arrow Lake production, likely tapping TSMC or Samsung for their 2 nm nodes. The decision follows Intel's successful release of the 18A Process Design Kit (PDK) 1.0 in July, which garnered positive feedback from the ecosystem, according to the company. Intel reports that the 18A node is already operational, booting operating systems and yielding well, keeping the company on track for a 2025 launch. This early success has enabled Intel to reallocate engineering resources from 20A to 18A sooner than anticipated. As a result, the "Arrow Lake processor family will be built primarily using external partners and packaged by Intel Foundry".

The 20A node, while now cancelled for Arrow Lake, has played a crucial role in Intel's journey towards 18A. It served as a testbed for new techniques, materials, and transistor architectures essential for advancing Moore's Law. The 20A node successfully integrated both RibbonFET gate-all-around transistor architecture and PowerVia backside power delivery for the first time, providing valuable insights that directly informed the development of 18A. Intel's decision to focus on 18A is also driven by economic factors. With the current 18A defect density already at D0 <0.40, the company sees an opportunity to optimize its engineering investments by transitioning now. However, challenges remain, as evidenced by recent reports of Broadcom's disappointment in the 18A node. Despite these hurdles, Intel remains optimistic about the future of its foundry services and the potential of its advanced manufacturing processes. The coming months will be crucial as the company works to demonstrate the capabilities of its 18A node and secure more partners for its foundry business.

Broadcom's Testing of Intel 18A Node Signals Disappointment, Still Not Ready for High-Volume Production

According to a recent Reuters report, Intel's 18A node doesn't appear to be production-ready. As the sources indicate, Broadcom has reportedly been testing Intel's 18A node on its internal company designs, which span an extensive range of products from AI accelerators to networking switches. However, when Broadcom received the initial production run from Intel, the 18A node turned out to be in a worse state than expected. After powering on and testing the wafers, Broadcom reportedly concluded that the 18A process is not yet ready for high-volume production. Since Broadcom's assessment concerns high-volume production specifically, it suggests that the 18A node is not yet producing yields that would satisfy external customers.

While this is not a good sign for Intel Foundry's contract business, it suggests the node is presumably in a good state in terms of power and performance. Intel's CEO Pat Gelsinger confirmed that 18A is now at a defect density (D0) of 0.4 and called it a "healthy process." However, alternatives exist at TSMC, which is proving a very challenging competitor to take on: its N7 and N5 nodes had a defect density of 0.33 during development and 0.1 in high-volume production. Lower defect density translates into better yields and lower costs for the contracting party, resulting in higher profits. Ultimately, it is up to Intel to improve its production process further to satisfy customers. Gelsinger wants Intel Foundry to be "manufacturing ready" by the end of the year, with the first designs reaching volume production in 2025. There are still a few months left to improve the node, and we expect to see changes implemented by the end of the year.
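To see why these D0 figures matter so much to customers, a common first-order estimate is the Poisson yield model, where the fraction of good dies is Y = exp(-A x D0) for die area A (cm²) and defect density D0 (defects/cm²). A minimal sketch comparing the quoted values (the 1 cm² die area is an illustrative assumption, not a figure from the report):

```python
import math

def poisson_yield(defect_density: float, die_area_cm2: float) -> float:
    """Estimated fraction of good dies under the Poisson yield model:
    Y = exp(-A * D0), with D0 in defects/cm^2 and A in cm^2."""
    return math.exp(-die_area_cm2 * defect_density)

# D0 values as quoted in the article; die area of 1 cm^2 is illustrative.
for label, d0 in [("Intel 18A (in development)", 0.40),
                  ("TSMC N7/N5 (development)", 0.33),
                  ("TSMC N7/N5 (high volume)", 0.10)]:
    print(f"{label}: ~{poisson_yield(d0, 1.0):.0%} estimated yield")
```

Under these assumptions, the gap between a D0 of 0.4 and 0.1 is roughly the difference between two-thirds and nine-tenths of dies being usable, which directly drives the per-chip cost a foundry customer pays.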

OpenAI in Talks with Broadcom About Developing Custom AI Chips to Power Next Generation Models

According to The Information, OpenAI is reportedly in talks with Broadcom about developing a custom AI accelerator to meet OpenAI's growing demand for high-performance silicon. Broadcom is a fabless chip designer known for a wide range of solutions, spanning networking, PCIe, SSD controllers, and PHYs, all the way up to custom ASICs. The latter is what OpenAI wants to focus on, but all of the aforementioned Broadcom IP is useful in a data center. Should OpenAI adopt Broadcom's solutions, the fabless silicon designer offers a complete vertical stack of products: intra-system communication over protocols such as PCIe, system-to-system communication over Ethernet with Broadcom Tomahawk 6 and future revisions, alongside storage solutions and many other complementary elements of a data center.

Skilled in a broad range of IP, Broadcom also designs ASIC solutions for other companies and has assisted Google in developing its Tensor Processing Unit (TPU), now in its sixth generation. Google's TPUs are massively successful: Google deploys millions of them and uses them to provide AI services to billions of users across the globe. Now OpenAI wants to be part of the AI chip game, and Broadcom could come to the rescue with its established AI silicon track record and broad data center portfolio, helping build a custom AI accelerator to power the infrastructure needed for OpenAI's next generation of AI models. With each new AI model OpenAI releases, compute demand grows by orders of magnitude, and an accelerator tailored exactly to its workloads would help the company move faster and run even bigger AI models.

ByteDance and Broadcom to Collaborate on Advanced AI Chip

ByteDance, TikTok's parent company, is reportedly working with American chip designer Broadcom to develop a cutting-edge AI processor. This collaboration could secure a stable supply of high-performance chips for ByteDance, according to Reuters. Sources claim the joint project involves a 5 nm Application-Specific Integrated Circuit (ASIC), designed to comply with U.S. export regulations. TSMC is slated to manufacture the chip, though production is not expected to begin this year.

This partnership marks a significant development in U.S.-China tech relations, as no public announcements of such collaborations on advanced chips have been made since Washington implemented stricter export controls in 2022. For ByteDance, this move could reduce procurement costs and ensure a steady chip supply, crucial for powering its array of popular apps, including TikTok and the ChatGPT-like AI chatbot "Doubao." The company has already invested heavily in AI chips, reportedly spending $2 billion on NVIDIA processors in 2023.

Broadcom Unveils Newest Innovations for VMware Cloud Foundation

Broadcom Inc. today unveiled the latest updates to VMware Cloud Foundation (VCF), the company's flagship private cloud platform. The latest advancements in VCF support customers' digital innovation with faster infrastructure modernization, improved developer productivity, and better cyber resiliency and security with low total cost of ownership.

"VMware Cloud Foundation is the industry's first private-cloud platform to offer the combined power of public and private clouds with unmatched operational simplicity and proven total cost of ownership value," said Paul Turner, Vice President of Products, VMware Cloud Foundation Division, Broadcom. "With our latest release, VCF is delivering on key requirements driven by customer input. The new VCF Import functionality will be a game changer in accelerating VCF adoption and improving time to value. We are also delivering a set of new capabilities that helps IT more quickly meet the needs of developers without increasing business risk. This latest release of VCF puts us squarely on the path to delivering on the full promise of VCF for our customers."

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed and low latency communication for scale-up AI systems linking in Data Centers.

Called the Ultra Accelerator Link (UALink), this initial group will define and establish an open industry standard that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.