News Posts matching #Broadcom


Robust AI Demand Drives 6% QoQ Growth in Revenue for Top 10 Global IC Design Companies in 1Q25

TrendForce's latest investigations reveal that 1Q25 revenue for the global IC design industry reached US$77.4 billion, marking a 6% QoQ increase and setting a new record high. This growth was fueled by early stocking ahead of new U.S. tariffs on electronics and the ongoing construction of AI data centers around the world, which sustained strong chip demand despite the traditional off-season.

NVIDIA remained the top-ranking IC design company, with Q1 revenue surging to $42.3 billion—up 12% QoQ and 72% YoY—thanks to increasing shipments of its new Blackwell platform. Although its H20 chip is constrained by updated U.S. export controls and is expected to incur losses in Q2, the higher-margin Blackwell is poised to replace the Hopper platform gradually, cushioning the financial impact.

TSMC Prepares "CoPoS": Next-Gen 310 × 310 mm Packages

As demand for ever-growing AI compute continues to rise and advanced-node manufacturing becomes more difficult, packaging is enjoying a golden era of development. Today's advanced accelerators often rely on TSMC's CoWoS modules, which are built on wafer cuts measuring no more than 120 × 150 mm. In response to the need for more space, TSMC has unveiled plans for CoPoS, or "Chips on Panel on Substrate," which could expand substrate dimensions to 310 × 310 mm and beyond. By shifting from round wafers to rectangular panels, CoPoS offers more than five times the usable area. This extra surface makes it possible to integrate additional high-bandwidth memory stacks, multiple I/O chiplets, and compute dies in a single package. It also brings panel-level packaging (PLP) to the fore. Unlike wafer-level packaging (WLP), PLP assembles components on large, rectangular panels, delivering higher throughput and lower cost per unit, which makes it viable for volume production runs and allows faster iteration than WLP.
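The "more than five times" claim is simple geometry on the dimensions quoted above; a quick illustrative check (Python, rough numbers only, ignoring edge exclusion and panel utilization losses):

```python
# Area comparison between TSMC's current CoWoS substrate limit
# (roughly 120 x 150 mm, per the article) and the proposed CoPoS
# panel (310 x 310 mm). The ">5x" figure is a plain area ratio.
cowos_area_mm2 = 120 * 150      # 18,000 mm^2
copos_area_mm2 = 310 * 310      # 96,100 mm^2

ratio = copos_area_mm2 / cowos_area_mm2
print(f"CoPoS / CoWoS usable area ratio: {ratio:.2f}x")  # ~5.34x
```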

TSMC will establish a CoPoS pilot line in 2026 at its Visionchip subsidiary. In 2027, the pilot facility will focus on refining the process to meet partner requirements by the end of that year. Mass production is projected to begin between the end of 2028 and early 2029 at TSMC's Chiayi AP7 campus. That site, chosen for its modern infrastructure and ample space, is also slated to host production of multi-chip modules and System-on-Wafer technologies. NVIDIA is expected to be the launch partner for CoPoS. The company plans to leverage the larger panel area to accommodate up to 12 HBM4 chips alongside several GPU chiplets, offering significant performance gains for AI workloads. At the same time, AMD and Broadcom will continue using TSMC's CoWoS-L and CoWoS-R variants for their high-end products. Beyond simply increasing size, CoPoS and PLP may work in tandem with other emerging advances, such as glass substrates and silicon photonics. If development proceeds as planned, the first CoPoS-enabled devices could reach the market by late 2029.

Synopsys Achieves PCIe 6.x Interoperability Milestone with Broadcom's PEX90000 Series Switch

Synopsys, Inc. today announced that its collaboration with Broadcom has achieved interoperability between Synopsys' PCIe 6.x IP solution and Broadcom's PEX90000 series switch. As a cornerstone of next-generation AI infrastructures, PCIe switches play a critical role in enabling the scalability required to meet the demands of modern AI workloads. This milestone demonstrates that future products integrating PCIe 6.x solutions from Synopsys and Broadcom will operate seamlessly within the ecosystem, reducing design risk and accelerating time-to-market for high-performance computing and AI data center systems.

The interoperability demonstration with Broadcom features a Synopsys PCIe 6.x IP solution, including PHY and controller, operating as a root complex and an endpoint running at 64 GT/s with Broadcom's PEX90000 switch. Synopsys will showcase this interoperability demonstration at PCI-SIG DevCon 2025 at booth #13, taking place June 11 and 12, where attendees can see a variety of successful Synopsys PCIe 7.0 and PCIe 6.x IP interoperability demonstrations in both the Synopsys booth and partners' booths.
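The 64 GT/s figure above is the PCIe 6.x per-lane transfer rate; for a typical x16 link it translates to roughly 128 GB/s per direction. A back-of-the-envelope sketch (illustrative only, ignoring FLIT/FEC and protocol overhead):

```python
# Rough PCIe 6.x link bandwidth at 64 GT/s. PCIe 6.0 moved to PAM4
# signaling with FLIT-based encoding, so the raw line rate converts
# almost directly to data rate (overheads ignored here).
rate_gt_s = 64          # transfers per second per lane (billions)
lanes = 16              # common width for accelerators and switch ports

gb_per_s_per_dir = rate_gt_s * lanes / 8   # bits -> bytes
print(f"x{lanes} @ {rate_gt_s} GT/s ≈ {gb_per_s_per_dir:.0f} GB/s per direction")
# roughly double that figure for bidirectional traffic
```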

HighPoint Announces the Industry's 1st Gen5 NVMe RAID AIC at $999

We are thrilled to announce a landmark pricing strategy for our Gen 5 x16 4x M.2 NVMe AIC series, making cutting-edge performance more accessible than ever. Now available for US$999 and $899 respectively, the Rocket 7604A RAID AIC and Rocket 1604A Switch AIC represent exceptional value, doubling the performance of conventional 4-port NVMe solutions with over 50 GB/s of real-world transfer speed.

This significant pricing offer reflects our commitment to accelerating market adoption of today's leading SSD media and enabling a wider range of industrial and high-performance computing applications to leverage the full power of Gen 5 NVMe storage technology.

Broadcom Ships Tomahawk 6 Switch Chip Series with 102.4 Tbps

Broadcom Inc. announced today that it is now shipping the Tomahawk 6 switch series, delivering the world's first 102.4 Terabits/sec of switching capacity in a single chip - double the bandwidth of any Ethernet switch currently available on the market. With unprecedented scale, energy efficiency, and AI-optimized features, Tomahawk 6 is built to power the next generation of scale-up and scale-out AI networks, delivering unmatched flexibility with support for 100G/200G SerDes and co-packaged optics (CPO). It offers the industry's most comprehensive set of AI routing features and interconnect options, designed to meet the demands of AI clusters with more than one million XPUs.
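Given the 100G/200G SerDes options mentioned above, the 102.4 Tb/s aggregate can be carved up in several ways; the following is illustrative arithmetic only (actual port configurations depend on Broadcom's SKU options), ignoring encoding overhead:

```python
# How 102.4 Tb/s of switching capacity maps onto SerDes lanes and
# common Ethernet port speeds (simple division, no overheads).
capacity_gbps = 102_400

for serdes_gbps in (100, 200):
    print(f"{serdes_gbps}G SerDes lanes needed: {capacity_gbps // serdes_gbps}")

for port_gbps in (800, 1600):
    print(f"{port_gbps}G ports supported: {capacity_gbps // port_gbps}")
```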

"Tomahawk 6 is not just an upgrade - it's a breakthrough," said Ram Velaga, senior vice president and general manager, Core Switching Group, Broadcom. "It marks a turning point in AI infrastructure design, combining the highest bandwidth, power efficiency, and adaptive routing features for scale-up and scale-out networks into one platform. Demand from customers and partners has been unprecedented. Tomahawk 6 is poised to make a rapid and dramatic impact on the deployment of large AI clusters."

QNAP Introduces New Dual-port 10GbE SFP+ Network Cards Supporting SR-IOV and RDMA

QNAP Systems, Inc., a leading computing, networking and storage solution innovator, today launched the new QXG-10G2SF-NXE 10GbE SFP+ network expansion card. Equipped with the advanced Broadcom 57412 Ethernet Controller, this PCIe Gen 3 card can be installed in a QNAP NAS (including the TS-x64 and TS-x64U series) or in Windows/Linux PCs and servers, instantly augmenting connectivity with two high-speed 10GbE ports.

The QXG-10G2SF-NXE comes with two 10GbE SFP+ (10G/1G) network ports. Users can utilize SMB Multichannel or Port Trunking to combine bandwidth, providing up to 20 Gbps of data transfer potential and thereby accelerating large file sharing and intensive data transmission. The QXG-10G2SF-NXE supports RDMA to enhance data transfer efficiency and reduce latency, and supports SR-IOV, which improves network resource allocation for VMware virtualization applications, reducing network bandwidth consumption and significantly lowering CPU usage on virtual machine servers (hypervisors).
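The 20 Gbps figure above is simple line-rate aggregation of the two ports; a quick sketch (illustrative only, real-world throughput will be lower due to protocol overhead):

```python
# Aggregate bandwidth from combining both SFP+ ports via SMB
# Multichannel or Port Trunking. This is the line-rate ceiling,
# not a measured transfer speed.
ports = 2
per_port_gbps = 10

total_gbps = ports * per_port_gbps
total_gbyte_s = total_gbps / 8   # gigabits -> gigabytes
print(f"{ports} x {per_port_gbps}GbE = {total_gbps} Gbps (~{total_gbyte_s:.1f} GB/s line rate)")
```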

AMD Splits Instinct MI SKUs: MI450X Targets AI, MI430X Tackles HPC

AMD is gearing up to expand its Instinct MI family in the latter half of 2026 with two purpose‑built UDNA‑based accelerators. These new models, MI430X UL4 and MI450X, will cater respectively to high‑precision HPC tasks and large‑scale AI workloads. The MI430X UL4 is designed for applications that rely heavily on double‑precision FP64 floating‑point performance, such as scientific simulations, climate modeling, and others. It features a large array of FP64 tensor cores, ensuring consistent throughput for tightly coupled compute jobs. Because a dedicated UALink switch from partners like Astera Labs or Broadcom won't be available at launch, AMD has chosen a four‑GPU point‑to‑point mesh configuration for the MI430X UL4. This limited scale‑up approach keeps latency low and synchronization tight, making it ideal for small cluster deployments. On the AI side, the MI450X embraces Ethernet‑based Ultra Ethernet for scale‑out.

With UEC-ready switches already on the market, this open-standard networking solution lets users build expansive AI farms across dozens or even hundreds of nodes. By relying on established Ethernet technology instead of the new UALink ecosystem, AMD provides customers with immediate, hardware-accelerated networking for both inference and training. While UALink promises an open, vendor-neutral accelerator-to-accelerator fabric, its progress has been slow; committee reviews and limited investment in switch silicon have delayed switch availability. Broadcom, viewed as the primary UALink switch supplier, has been cautious about its commitment, given expected lower volumes compared to Ethernet alternatives. AMD's accelerator segmentation addresses these realities: the MI430X UL4 delivers high-precision compute in compact GPU meshes, while the MI450X offers large-scale AI connectivity via Ethernet. Should UALink switch development catch up in the future, AMD could revisit native GPU-to-GPU fabrics.

Intel Foundry's 18A Process Reportedly Generates Much Praise from ASIC Customers

As revealed during a recent Q1 earnings call, Intel leadership mentioned that "external clients are getting their ASICs designs tested." The company's foundry business is working towards the finalization of its much-discussed 18A node process, with alleged trial samples receiving an "impressive performance rating." According to Ctee Taiwan, Team Blue's foundry service has submitted test subjects to the likes of NVIDIA, Broadcom and Faraday Technology. The latter organization has (reportedly) disclosed that the 18A platform tape-out was completed last October—since then, received samples have been "successfully connected." Industry moles believe that NVIDIA and Broadcom are in the middle of conducting manufacturing tests. Additional whispers suggest that 18A prototypes have been delivered to IBM and several other unnamed partner companies, with insiders indicating impressive "verification results." Contrary to reports from other sources, Ctee has picked up on insider chatter about Intel's next-gen Nova Lake compute tile design being "not entirely outsourced." Further conjecture points to Team Blue becoming increasingly confident in its own manufacturing techniques.

Intel's Foundry Eyes NVIDIA and Broadcom as Clients for Future Growth

According to a note from investment bank UBS, two industry titans—NVIDIA and Broadcom—are potential future clients that could significantly enhance Intel's Foundry business revenue. To revitalize Intel, newly appointed CEO Lip-Bu Tan reportedly aims to forge strategic alliances with the two AI chip makers. Tan, who assumed leadership earlier this month, is determined to rebuild the company's reputation by focusing on customer satisfaction and accelerating the development of its foundry business. UBS analyst Tim Arcuri suggests that while Broadcom might join the client roster, NVIDIA appears to be the more likely candidate. Rather than initially manufacturing NVIDIA's AI GPUs, Intel is expected to begin production with gaming GPUs; NVIDIA could then move AI GPU production to Intel's fabs if satisfied with the results.

Despite some early optimism, Intel's new CEO is now committed to addressing power-consumption issues in Intel's manufacturing processes. UBS analyst Tim Arcuri noted that the firm is pushing hard to introduce a lower-power version of its 18A process, the so-called 18AP, which has reportedly struggled to meet energy requirements. Additionally, Intel is working to improve its advanced packaging techniques to rival TSMC's CoWoS (S/L/R variants) technology, aiming to overcome packaging constraints that have slowed AI chip production. Analysts speculate that Intel might also become a secondary supplier to tech giant Apple. A promising partnership with Taiwan's United Microelectronics (UMC) could pave the way for Intel's chips to find their way into future Apple products. What ultimately materializes remains to be seen. Switching foundries entirely from TSMC to Intel is not feasible for any of the aforementioned fabless designers, so dual-sourcing is the likely first step, with some non-flagship SKUs getting a full port to Intel 18A.

Google Teams up with MediaTek for Next-Generation TPU v7 Design

According to Reuters, citing The Information, Google will collaborate with MediaTek to develop its seventh-generation Tensor Processing Unit (TPU), which is also known as TPU v7. Google maintains its existing partnership with Broadcom despite the new MediaTek collaboration. The AI accelerator is scheduled for production in 2026, and TSMC is handling manufacturing duties. Google will lead the core architecture design while MediaTek manages I/O and peripheral components, as Economic Daily News reports. This differs from Google's ongoing relationship with Broadcom, which co-develops core TPU architecture. The MediaTek partnership reportedly stems from the company's strong TSMC relationship and lower costs compared to Broadcom.

There is also a possibility that MediaTek could design inference-focused TPU v7 chips while Broadcom focuses on training architecture. Google's TPU program is in any case large enough, given the volume of chips the company consumes, that it could hypothetically engage a third design partner. The TPU effort continues Google's vertical-integration strategy for AI infrastructure: Google reduces its dependency on NVIDIA hardware by designing proprietary AI chips for internal R&D and cloud operations, while competitors like OpenAI, Anthropic, and Meta rely heavily on NVIDIA's processors for AI training and inference. At Google's scale, serving billions of queries a day, designing custom chips makes sense both financially and technologically. As Google develops its own specific workloads, translating them into hardware acceleration is a game Google has been playing for years.

Global Top 10 IC Design Houses See 49% YoY Growth in 2024, NVIDIA Commands Half the Market

TrendForce reveals that the combined revenue of the world's top 10 IC design houses reached approximately US$249.8 billion in 2024, marking a 49% YoY increase. The booming AI industry has fueled growth across the semiconductor sector, with NVIDIA leading the charge, posting an astonishing 125% revenue growth, widening its lead over competitors, and solidifying its dominance in the IC industry.

Looking ahead to 2025, advancements in semiconductor manufacturing will further enhance AI computing power, with LLMs continuing to emerge. Open-source models like DeepSeek could lower AI adoption costs, accelerating AI penetration from servers to personal devices. This shift positions edge AI devices as the next major growth driver for the semiconductor industry.

TSMC Continues to Explore Joint Venture for Intel Foundry Ownership

TSMC is still considering a strategic joint venture to operate Intel's manufacturing capacity, according to four sources familiar with the discussions who spoke to Reuters. The proposed arrangement would limit TSMC's ownership to less than 50% and potentially distribute stakes to major American chip designers, including AMD, Broadcom, NVIDIA, and Qualcomm. The initiative emerged following direct intervention from the Trump administration, which has prioritized revitalizing domestic semiconductor manufacturing while maintaining American control of critical technology infrastructure. Under the proposed framework, Intel would spin off its Intel Foundry division, with TSMC acquiring a minority stake and bringing in partner companies as co-investors.

Apple, TSMC's largest customer, is absent from these preliminary discussions, suggesting careful strategic positioning within the competitive ecosystem. Significant technical and operational challenges nonetheless face the potential joint venture. Intel's manufacturing and real estate assets are valued at approximately $108 billion, requiring substantial capital commitments from prospective partners. More fundamentally, technological integration presents massive obstacles, as Intel and TSMC utilize fundamentally different manufacturing processes with distinct equipment configurations and material requirements. The complex negotiations remain in the early stages, with significant technical, financial, and regulatory hurdles to overcome before any formal agreement materializes. Intel, for its part, has yet to give a clear green light to the spin-off rumors.

Meta Reportedly Reaches Test Phase with First In-house AI Training Chip

According to a Reuters technology report, Meta's engineering department is testing its "first in-house chip for training artificial intelligence systems." Two inside sources describe this as a significant development milestone, involving a small-scale deployment of early samples. The owner of Facebook could ramp up production if the initial batches pass muster. Despite a recent showcasing of an open-architecture NVIDIA "Blackwell" GB200 system for enterprise, Meta leadership is reported to be pursuing proprietary solutions. Multiple big players in the field of artificial intelligence are attempting to break away from total reliance on Team Green. Last month, press outlets concentrated on OpenAI's alleged finalization of an in-house design, with rumored involvement from Broadcom and TSMC.

One of the Reuters industry moles believes that Meta has signed up with TSMC—supposedly, the Taiwanese foundry was responsible for the production of test batches. Tom's Hardware reckons that Meta and Broadcom worked together on the tape-out of the social media giant's "first AI training accelerator." Development of the company's "Meta Training and Inference Accelerator" (MTIA) series has stretched back a couple of years—according to Reuters, this multi-part project: "had a wobbly start for years, and at one point scrapped a chip at a similar phase of development...Meta last year, started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds." Leadership is reportedly aiming to get custom silicon solutions up and running for AI training by next year. Past examples of MTIA hardware were deployed with open-source RISC-V cores (for inference tasks), but it is not clear whether this architecture will form the basis of Meta's latest AI chip design.

NVIDIA and Broadcom Testing Intel 18A Node for Chip Production

TSMC appears to be in for a competitive period, as sources close to Reuters note that both NVIDIA and Broadcom have tested Intel's 18A node with initial test chips. These tests are early indicators of whether Intel can successfully pivot into the contract manufacturing sector currently dominated by TSMC. Intel's 18A technology—featuring RibbonFET transistors and PowerVia backside power delivery—continues progressing through its development roadmap. The technology's performance characteristics reportedly sit between TSMC's current and next-generation nodes, creating a narrow window of competitive opportunity that Intel must capitalize on. What makes these particular tests significant is their positioning relative to actual production commitments. Chip designers typically run multiple test phases before allocating high-volume manufacturing contracts, with each progression reducing technical risk.

Reuters also reported a six-month qualification delay for third-party IP blocks, which represents a critical vulnerability in Intel's foundry strategy, potentially undermining its ability to serve smaller chip designers who rely on these standardized components. However, once this IP (PHY, controller, PCIe interface, etc.) is qualified for the 18A node, it is expected to go into many SoCs, collectively amounting to millions of shipped chips. Additionally, the geopolitical dimensions of Intel's foundry efforts ease the concerns of US-based chip designers, who gain a valuable domestic manufacturing partner in their supply chain. If the 18A node proves competitive with TSMC's offerings, Intel's financial trajectory stands to be the biggest beneficiary. With foundry revenues declining 60% year-over-year and profitability pushed beyond 2027, the company must demonstrate commercial viability to investors increasingly skeptical of its capital-intensive manufacturing strategy. Securing high-profile customers like NVIDIA could provide the market validation necessary to sustain continued investment in its foundry infrastructure.

ASUSTOR Launches Enterprise-Grade SAS JBOD Xpanstor 12R and AS-SAS8e SAS HBA

ASUSTOR Inc. today announced the enterprise-grade Xpanstor 12R JBOD expansion unit and the AS-SAS8e SAS HBA. ASUSTOR NAS devices equipped with the AS-SAS8e can attach the Xpanstor 12R to provide convenient and robust file sharing, remote access, and backup functionality. When additional storage capacity is required, pairing a NAS with the Xpanstor 12R allows extra hard drives to be connected over a SAS link, creating a robust enterprise storage environment for managing large volumes of data.

Effortless Configuration
Storage Manager in ADM enables easy management and configuration between an ASUSTOR NAS and an attached ASUSTOR expansion unit. Storage Manager also supports the creation of multiple volume types and snapshot features, further enhancing data security and reliability.

Intel Faces Potential Breakup as TSMC and Broadcom Explore Acquisition

According to sources close to the Wall Street Journal, Intel is weighing preliminary acquisition offers that could split the company into two parts: product and foundry. TSMC and Broadcom are independently exploring deals that would divide Intel's chip design and manufacturing operations. Broadcom has initiated informal discussions regarding Intel's chip design and marketing divisions, while TSMC is considering assembling an investor consortium to acquire Intel's facilities. This solution is improbable, as Intel's fabs are strategically one of the most critical aspects of the US semiconductor supply chain. Intel manufactures custom chips for the US Department of Defense; hence, having a foreign owner of fabs is not acceptable. The news about the acquisition comes as Intel grapples with manufacturing setbacks, including a total $13.4 billion loss in its foundry segment during 2024 and a significant erosion of market share in the AI processor market.

The acquisition talks face substantial regulatory hurdles, particularly regarding national security concerns. The US government has signaled resistance to foreign ownership of Intel's domestic manufacturing capabilities, which are deemed strategically vital to American technological sovereignty. This could particularly impact TSMC's bid for Intel's plants despite the Taiwanese company's position as the world's leading contract chipmaker. Intel's vulnerability to acquisition follows a series of strategic missteps under former leadership, including delayed manufacturing innovations and an increasing reliance on government subsidies for facility expansion. The company's share price has declined 60% from its 2021 highs amid these challenges, attracting potential buyers despite the complexity of any potential deal structure. Successful execution would require navigating both regulatory approval and the practical difficulties of disaggregating Intel's deeply integrated design and manufacturing operations.

Report Suggests OpenAI Finalizing Proprietary GPU Design

Going back a year, we started hearing about an OpenAI proprietary AI chip project—this (allegedly) highly ambitious endeavor included grand plans for a dedicated fabrication network. TSMC was reportedly in the equation, but indirectly laughed at the AI research organization's ardent requests. Fast-forward to the present day; OpenAI appears to be actively pursuing a proprietary GPU design through traditional means. A Reuters exclusive report points to 2025 being an important year for the company's aforementioned "in-house" AI chip—the publication believes that OpenAI's debut silicon design has reached the finalization stage. Insiders have divulged that the project is only months away from being submitted to TSMC for "taping out." The foundry's advanced 3-nanometer process technology is reported to be on the cards. A Reuters source reckons that the unnamed chip features: "a commonly used systolic array architecture with high-bandwidth memory (HBM)...and extensive networking capabilities."

Broadcom is reportedly assisting with the development of OpenAI's in-house design—we heard about rumored negotiations taking place last summer. Jim Keller's tempting offer—to create an AI chip for less than $1 trillion—was passed over early last year; OpenAI has instead assembled its own internal team of industry veterans. The October 2024 news cycle posited that former Google TPU engineers were drafted in as team leaders, with mass production targeted for 2026. The latest Reuters article reiterates this projected timeframe, albeit dependent on the initial tape-out going "smoothly." OpenAI's chip department has grown to around forty individuals in recent months, according to industry moles—a small number relative to the headcounts at "Google or Amazon's AI chip program."

Broadcom Delivers Quantum Resistant Network Encryption for Real-time Ransomware Detection

Broadcom Inc. today announced an industry first—the new Emulex Secure Fibre Channel Host Bus Adapters (HBAs)—a cost-effective, easy-to-manage solution that encrypts all data as it moves between servers and storage.

Encrypting mission-critical data is no longer a nice-to-have, but a must-have. The cost of ransomware continues to rise, with attacks in 2024 costing US$5.37 million on average per incident. Upcoming generative AI and quantum computers magnify the risk if data is not encrypted at every point in the data center, including the network.

Trump Administration Plans to Impose 25-100% Tariffs on Taiwan-Sourced Chips, Including TSMC

The United States, currently led by the Trump administration, could be preparing a surprise package to its close silicon ally—Taiwan. During a House GOP issues conference in Florida, US President Donald Trump announced that he would impose 25% to 100% tariffs on Taiwan-made chips, including the world's leading silicon manufacturer, TSMC. Trump addressed the conference, saying, "In the very near future, we are going to be placing tariffs on foreign production of computer chips, semiconductors, and pharmaceuticals to return production of these essential goods to the United States. They left us and went to Taiwan; we want them to come back. We do not want to give them billions of dollars like this ridiculous program that Biden has given everybody billions of dollars. They already have billions of dollars. […] They did not need money. They needed an incentive. And the incentive is going to be they [do not want to] pay a 25%, 50% or even a 100% tax."

The issue for TSMC is its massive reliance on US companies to drive revenue. The majority of its cutting-edge silicon goes to only a handful of companies, including Apple, NVIDIA, Qualcomm, and Broadcom. With tariffs, supply chain economics, especially in the semiconductor world, will break. The US is TSMC's most significant export market, and US companies with trillions of dollars in combined market capitalization rely on Taiwanese silicon. As a result, TSMC would most likely raise its wafer prices, with the increases trickling down into higher product prices from US companies. TSMC plans to bring its advanced manufacturing to American soil, but given that these tariffs might break the economic model it currently operates under, that may need to happen sooner. The Taiwan-based silicon giant had planned to keep its US facilities trailing a generation or two behind in advanced manufacturing, while its domestic facilities produce the newest nodes. If Trump decides to go through with tariffs, TSMC could make additional changes to its US-based manufacturing plans.

Solidigm Extends Agreement with Broadcom on High-Capacity SSD Controllers for AI

Solidigm, a leading provider of innovative NAND flash memory solutions, today announced a multi-year extension of its agreement with Broadcom Inc. on the use of high-capacity solid-state drive (SSD) controllers to support artificial intelligence (AI) and data-intensive workloads. Solidigm is the leading provider of high-capacity storage for AI, and Broadcom's custom controllers have served as a critical component of Solidigm SSDs for more than a decade. With more than 120 million units of Solidigm SSDs shipped featuring Broadcom controllers, the partnership between the two companies has continued through key industry SSD milestones including Serial ATA (SATA), Serial-Attached SCSI (SAS) and Non-Volatile Memory Express (NVMe).

The agreement also includes collaboration on Solidigm's recently announced 122 TB (terabyte) Solidigm D5-P5336 data center SSD, the world's highest capacity PCIe SSD that delivers industry-leading storage efficiency from the core data center to the edge. "With our new 122 TB SSD, Solidigm further extends our high-capacity QLC (quad-level cell) leadership from 8 to 122 TB drives that all share the same controller from Broadcom, making our drives easier for customers to qualify," said Solidigm Co-CEO Kevin Noh. "Our relationship with Broadcom is pivotal to Solidigm as we collectively work to help our customers achieve efficiency benefits in the buildout of AI infrastructure."

New Raspberry Pi 5 With 16 GB Goes On Sale At $120

We first announced Raspberry Pi 5 back in the autumn of 2023, with just two choices of memory density: 4 GB and 8 GB. Last summer, we released the 2 GB variant, aimed at cost-sensitive applications. And today we're launching its bigger sibling, the 16 GB variant, priced at $120.

Why 16 GB, and why now?
We're continually surprised by the uses that people find for our hardware. Many of these fit into 8 GB (or even 2 GB) of SDRAM, but the threefold step up in performance between Raspberry Pi 4 and Raspberry Pi 5 opens up use cases like large language models and computational fluid dynamics, which benefit from having more memory per core. And while Raspberry Pi OS has been tuned to have low base memory requirements, heavyweight distributions like Ubuntu benefit from additional memory capacity for desktop use cases. The optimized D0 stepping of the Broadcom BCM2712 application processor includes support for memories larger than 8 GB. And our friends at Micron were able to offer us a single package containing eight of their 16Gbit LPDDR4X die, making a 16 GB product feasible for the first time.
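The die arithmetic above is easy to trip over because DRAM density is quoted in gigabits while system memory is quoted in gigabytes; a quick check of why eight 16 Gbit die yield a 16 GB product:

```python
# Eight 16 Gbit LPDDR4X die in one package: convert bits to bytes.
die_gbit = 16
die_count = 8

total_gbit = die_gbit * die_count        # 128 Gbit in the package
total_gbyte = total_gbit // 8            # 8 bits per byte
print(f"{die_count} x {die_gbit} Gbit = {total_gbyte} GB")  # 16 GB
```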

HighPoint M.2/E1.S NVMe RAID AICs Deliver Unparalleled Speed and Scalability for GPU Server Applications

HighPoint Technologies, a leader in advanced PCIe Switch and RAID AIC, Adapter and Storage Enclosure solutions, has announced an extensive line of M.2 and E1.S RAID AICs engineered to accommodate high-performance GPU-centric workloads. Designed for enterprise and datacenter class computing environments, HighPoint NVMe RAID AICs deliver class-leading performance and unmatched scalability, enabling modern x86/AMD and Arm platforms to support 4 to 16 NVMe SSDs via a single PCIe Gen 4 or Gen 5 x16 slot. State-of-the-art PCIe Switching Architecture and flexible RAID technology enable administrators to custom tailor M.2 and E1.S storage configurations for a broad range of data-intensive applications, and seamlessly scale or expand existing storage configurations to meet the needs of evolving workflows.

Unprecedented Storage Density
HighPoint NVMe AICs set a new milestone for M.2 NVMe storage. HighPoint's dual-width AIC architecture enables a single PCIe Gen 4 or Gen 5 x16 slot to directly host up to 16 M.2 NVMe SSDs and 128 TB of storage capacity at speeds up to 28 GB/s; an unprecedented advancement in compact, single-device storage expansion. State-of-the-art PCIe switching technology and advanced cooling maximize transfer throughput and keep M.2 configurations operating at peak efficiency by stopping the performance-sapping threat of thermal throttling in its tracks.
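The headline figures can be sanity-checked with simple arithmetic. The 8 TB per-drive capacity below is an assumption (16 drives at 8 TB each gives the quoted 128 TB), and the bandwidth formula is the standard theoretical ceiling for PCIe 3.0 and later with 128b/130b encoding.

```python
# Sanity-check the headline numbers: 16 M.2 drives behind one x16 slot.
def pcie_x16_gbps(gt_per_s: float, lanes: int = 16, enc: float = 128 / 130) -> float:
    """Theoretical one-direction bandwidth in GB/s (PCIe 3.0+ 128b/130b encoding)."""
    return gt_per_s * lanes * enc / 8

drives, capacity_tb = 16, 8           # assumed 8 TB per M.2 SSD
total_tb = drives * capacity_tb       # 16 x 8 TB = 128 TB, matching the article
gen4_ceiling = pcie_x16_gbps(16.0)    # ~31.5 GB/s; the quoted 28 GB/s sits below it
```

So the 28 GB/s figure is plausible: it lands under the Gen 4 x16 theoretical ceiling, meaning the slot, not the drives, is the aggregate bottleneck in a fully populated configuration.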

Fujitsu Previews Monaka: 144-Core Arm CPU Made with Chiplets

Fujitsu has previewed its next-generation Monaka processor, a 144-core powerhouse for data centers. Satoshi Matsuoka of the RIKEN Center for Computational Science showcased a mechanical sample on social media platform X. Developed in collaboration with Broadcom, Monaka employs an innovative 3.5D eXtreme Dimension System-in-Package architecture featuring four 36-core chiplets manufactured on TSMC's N2 process. These chiplets are stacked face-to-face on SRAM tiles through hybrid copper bonding, with the cache layer built on TSMC's N5 process. A distinguishing feature of the Monaka design is its memory architecture: rather than incorporating HBM, Fujitsu has opted for pure cache dies below the compute logic combined with DDR5 DRAM compatibility, potentially leveraging advanced modules like MR-DIMM and MCR-DIMM.

The processor's I/O die supports cutting-edge interfaces, including DDR5 memory, PCIe 6.0, and CXL 3.0, for seamless integration with modern data center infrastructure. Security is addressed through Armv9-A's Confidential Computing Architecture, which provides enhanced workload isolation. Fujitsu has set ambitious goals for Monaka: the company aims to achieve twice the energy efficiency of current x86 processors by 2027 while remaining air-cooled. The processor targets both AI and HPC workloads through Arm SVE2 support, which enables vector lengths of up to 2048 bits. Scheduled for release during Fujitsu's fiscal year 2027 (April 2026 to March 2027), Monaka is shaping up as a competitor to AMD's EPYC and Intel's Xeon processors.
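The core count and the SVE2 vector widths quoted above reduce to simple arithmetic. This sketch only restates the article's figures; the per-element width of 32 bits is an assumed example (fp32), and the architectural bounds reflect SVE's 128-to-2048-bit range.

```python
# Rough numbers behind the Monaka description: chiplet core count and
# SVE2 lane counts at different vector lengths (SVE permits 128..2048 bits).
chiplets, cores_per_chiplet = 4, 36
total_cores = chiplets * cores_per_chiplet    # 4 x 36 = 144 cores

def sve_lanes(vector_bits: int, element_bits: int = 32) -> int:
    """Data elements processed per vector instruction (fp32 assumed by default)."""
    return vector_bits // element_bits

# At the maximum 2048-bit vector length, one instruction covers 64 fp32
# elements, versus 4 for the 128-bit minimum implementation.
```

Because SVE code is vector-length agnostic, the same binary scales across implementations from 128 to 2048 bits, which is what makes the wide-vector claim relevant to both AI and HPC kernels.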

Broadcom Delivers Industry's First 3.5D F2F Technology for AI XPUs

Broadcom Inc. today announced the availability of its 3.5D eXtreme Dimension System in Package (XDSiP) platform technology, enabling consumer AI customers to develop next-generation custom accelerators (XPUs). The 3.5D XDSiP integrates more than 6000 mm² of silicon and up to 12 high bandwidth memory (HBM) stacks in one packaged device to enable high-efficiency, low-power computing for AI at scale. Broadcom has achieved a significant milestone by developing and launching the industry's first Face-to-Face (F2F) 3.5D XPU.

The immense computational power required for training generative AI models relies on massive clusters of 100,000 XPUs, growing toward 1 million. These XPUs demand increasingly sophisticated integration of compute, memory, and I/O capabilities to achieve the necessary performance while minimizing power consumption and cost. Traditional approaches such as Moore's Law process scaling are struggling to keep up with these demands, so advanced system-in-package (SiP) integration is becoming crucial for next-generation XPUs. Over the past decade, 2.5D integration, which places multiple chiplets totaling up to 2500 mm² of silicon and up to 8 HBM modules on an interposer, has proven valuable for XPU development. However, as ever-larger and more complex LLMs are introduced, their training necessitates 3D silicon stacking for better size, power, and cost. Consequently, 3.5D integration, which combines 3D silicon stacking with 2.5D packaging, is poised to become the technology of choice for next-generation XPUs in the coming decade.
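The step from 2.5D to 3.5D can be quantified directly from the figures quoted in this announcement. The sketch below just compares those integration envelopes; the 2.5D numbers are the typical interposer limits cited above, not a specific product.

```python
# Comparing the integration envelopes quoted for 2.5D vs Broadcom's 3.5D XDSiP.
silicon_25d_mm2, hbm_25d = 2500, 8    # typical 2.5D interposer limits (per article)
silicon_35d_mm2, hbm_35d = 6000, 12   # 3.5D XDSiP figures (per article)

area_gain = silicon_35d_mm2 / silicon_25d_mm2   # 2.4x more silicon per package
hbm_gain = hbm_35d / hbm_25d                    # 1.5x more HBM stacks per package
```

The area gain outpacing the HBM gain is consistent with the stated motivation: stacking compute dies in 3D grows logic area faster than the interposer perimeter available for memory.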

Intel 18A Process Node Clocks an Abysmal 10% Yield: Report

In case you're wondering why Intel went with TSMC 3 nm to build the Compute tile of its "Arrow Lake" processor, and the SoC tile of "Lunar Lake," instead of Intel 3 or even Intel 20A, perhaps there's more to the recent story about Broadcom voicing its disappointment with the Intel 18A foundry node. The September 2024 report didn't put a number on the Intel 18A yields that spooked Broadcom, but we now have some idea of just how bad things are. Korean publication Chosun, which tracks developments in the electronics and ICT industries, reports that yields on the Intel 18A foundry node stand at an abysmal 10%, making the node unfit for mass production. Broadcom had been validating Intel 18A while prospecting for a cutting-edge node for its high-bandwidth network processors.
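To put the 10% figure in perspective, the simple Poisson yield model relates die yield to die area and defect density. This is purely illustrative: the 1 cm² die area is a hypothetical example, and real foundry yield models are more elaborate.

```python
import math

# Illustrative Poisson yield model: Y = exp(-A * D0), where A is die area
# in cm^2 and D0 is defect density in defects/cm^2.
def implied_defect_density(yield_fraction: float, die_area_cm2: float) -> float:
    """Defect density implied by an observed yield under the Poisson model."""
    return -math.log(yield_fraction) / die_area_cm2

# A hypothetical 1 cm^2 die yielding 10% implies roughly 2.3 defects/cm^2,
# an order of magnitude above what mature high-volume nodes achieve.
d0 = implied_defect_density(0.10, 1.0)
```

The same model also shows why large dies suffer most: at a fixed defect density, yield falls exponentially with die area, which is especially punishing for big server parts like "Clearwater Forest."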

The report also hints that the derailment of Intel's in-house foundry nodes could have been a key event leading up to the company's board letting go of former CEO Pat Gelsinger, as the second-order effects will be felt across Intel's entire product stack in development. For example, company roadmaps place the next-generation "Clearwater Forest" server processor, slated for 2025, on the Intel 18A node. Unless Intel Foundry can pull off a miracle, an effort must be underway to redesign the chip for whichever TSMC node is cutting-edge in 2025.