News Posts matching #XPU

Marvell Announces Breakthrough Custom HBM Compute Architecture to Optimize Cloud AI Accelerators

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced that it has pioneered a new custom HBM compute architecture that enables XPUs to achieve greater compute and memory density. The new technology is available to all of Marvell's custom silicon customers to improve the performance, efficiency, and total cost of ownership (TCO) of their custom XPUs. Marvell is collaborating with its cloud customers and the leading HBM manufacturers Micron, Samsung Electronics, and SK hynix to define and develop custom HBM solutions for next-generation XPUs.

HBM is a critical component integrated within the XPU using advanced 2.5D packaging technology and high-speed, industry-standard interfaces. However, XPU scaling is limited by today's standard interface-based architecture. The new Marvell custom HBM compute architecture replaces those standard interfaces with tailored ones that optimize performance, power, die size, and cost for specific XPU designs, treating the compute silicon, HBM stacks, and packaging as a single co-designed system. By customizing the HBM memory subsystem, including the stack itself, Marvell is advancing customization in cloud data center infrastructure, and it is working with the major HBM makers to implement the new architecture and meet cloud data center operators' needs.
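The claimed benefit is essentially a silicon-budget trade: a tailored die-to-die HBM interface occupies less die-edge (beachfront) area and burns less power than a standard PHY, and the savings can be re-spent on compute. The short Python sketch below illustrates that arithmetic; every figure in it (die area, stack count, per-stack PHY areas) is a hypothetical assumption chosen for illustration, not a Marvell specification.

```python
# Back-of-the-envelope model of the trade-off described above:
# swapping a standard HBM interface for a tailored one to reclaim
# die area for compute. All numbers are illustrative assumptions.

XPU_DIE_AREA_MM2 = 800.0   # assumed total compute-die area
HBM_STACKS = 8             # assumed HBM stacks in the package

STD_PHY_AREA_MM2 = 12.0    # assumed area of a standard (JEDEC-style) PHY, per stack
CUSTOM_PHY_AREA_MM2 = 5.0  # assumed area of a tailored die-to-die PHY, per stack

std_iface = HBM_STACKS * STD_PHY_AREA_MM2
custom_iface = HBM_STACKS * CUSTOM_PHY_AREA_MM2
reclaimed = std_iface - custom_iface

print(f"Standard interface area   : {std_iface:.0f} mm^2")
print(f"Custom interface area     : {custom_iface:.0f} mm^2")
print(f"Area reclaimed for compute: {reclaimed:.0f} mm^2 "
      f"({reclaimed / XPU_DIE_AREA_MM2:.1%} of the die)")
```

Under these made-up figures, the custom interface frees about 56 mm², roughly 7% of the assumed die, for additional compute logic; the real gain depends entirely on the specific XPU design.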

Broadcom Delivers Industry's First 3.5D F2F Technology for AI XPUs

Broadcom Inc. today announced the availability of its 3.5D eXtreme Dimension System in Package (XDSiP) platform technology, enabling consumer AI customers to develop next-generation custom accelerators (XPUs). The 3.5D XDSiP integrates more than 6000 mm² of silicon and up to 12 high bandwidth memory (HBM) stacks in one packaged device to enable high-efficiency, low-power computing for AI at scale. Broadcom has achieved a significant milestone by developing and launching the industry's first Face-to-Face (F2F) 3.5D XPU.

The immense computational power required for training generative AI models relies on massive clusters of 100,000 XPUs today, growing toward 1 million. These XPUs demand increasingly sophisticated integration of compute, memory, and I/O capabilities to achieve the necessary performance while minimizing power consumption and cost. Transistor-density gains from traditional Moore's Law process scaling are struggling to keep up with these demands, so advanced system-in-package (SiP) integration is becoming crucial for next-generation XPUs. Over the past decade, 2.5D integration, which places multiple chiplets totaling up to 2500 mm² of silicon and up to eight HBM stacks on an interposer, has proven valuable for XPU development. However, as new and increasingly complex LLMs are introduced, training them demands 3D silicon stacking to improve size, power, and cost. Consequently, 3.5D integration, which combines 3D silicon stacking with 2.5D packaging, is poised to become the technology of choice for next-generation XPUs in the coming decade.
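The generational step can be made concrete with the package budgets quoted above: roughly 2500 mm² of silicon and 8 HBM stacks for a 2.5D interposer, versus more than 6000 mm² and 12 stacks for the 3.5D XDSiP. The sketch below tabulates those source figures and derives an aggregate-bandwidth estimate from an assumed per-stack rate; the 1024 GB/s per-stack figure is purely illustrative (roughly HBM3-class), not a Broadcom specification.

```python
# Compares the 2.5D and 3.5D package budgets cited in the article.
# Silicon areas and stack counts come from the text; the per-stack
# bandwidth is an illustrative assumption, not a Broadcom figure.

ASSUMED_GBPS_PER_STACK = 1024  # hypothetical per-stack bandwidth, GB/s

packages = {
    "2.5D interposer (last decade)": {"silicon_mm2": 2500, "hbm_stacks": 8},
    "3.5D XDSiP":                    {"silicon_mm2": 6000, "hbm_stacks": 12},
}

for name, p in packages.items():
    agg_tbps = p["hbm_stacks"] * ASSUMED_GBPS_PER_STACK / 1000
    print(f"{name:30s} {p['silicon_mm2']:>5d} mm^2 silicon, "
          f"{p['hbm_stacks']:>2d} HBM stacks, ~{agg_tbps:.1f} TB/s aggregate")
```

On these numbers the 3.5D package offers roughly 2.4x the silicon area and 1.5x the HBM stack count (and, under the assumed per-stack rate, 1.5x the aggregate memory bandwidth) of the 2.5D baseline.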