News Posts matching #SOCAMM


ASUS Unveils the Latest ASUS AI POD Featuring NVIDIA GB300 NVL72

ASUS today joined GTC 2025 as a Diamond sponsor to showcase the latest ASUS AI POD with the NVIDIA GB300 NVL72 platform. The company is also proud to announce that it has already garnered substantial order placements, marking a significant milestone in the technology industry. At the forefront of AI innovation, ASUS also presents the latest AI servers in the Blackwell and HGX family line-up. These include ASUS XA NB3I-E12 powered by NVIDIA B300 NVL16, ASUS ESC NB8-E11 with NVIDIA DGX B200 8-GPU, ASUS ESC N8-E11V with NVIDIA HGX H200, and ASUS ESC8000A-E13P/ESC8000-E12P supporting NVIDIA RTX PRO 6000 Blackwell Server Edition with MGX architecture. With a strong focus on fostering AI adoption across industries, ASUS is positioned to provide comprehensive infrastructure solutions in combination with the NVIDIA AI Enterprise and NVIDIA Omniverse platforms, empowering clients to accelerate their time to market.

By integrating the immense power of the NVIDIA GB300 NVL72 server platform, ASUS AI POD offers exceptional processing capabilities—empowering enterprises to tackle massive AI challenges with ease. Built with NVIDIA Blackwell Ultra, GB300 NVL72 leads the new era of AI with optimized compute, increased memory, and high-performance networking, delivering breakthrough performance.

Micron Innovates From the Data Center to the Edge With NVIDIA

Secular growth of AI is built on the foundation of high-performance, high-bandwidth memory solutions. These high-performing memory solutions are critical to unlock the capabilities of GPUs and processors. Micron Technology, Inc., today announced it is the world's first and only memory company shipping both HBM3E and SOCAMM (small outline compression attached memory module) products for AI servers in the data center. This extends Micron's industry leadership in designing and delivering low-power DDR (LPDDR) for data center applications.

Micron's SOCAMM, a modular LPDDR5X memory solution, was developed in collaboration with NVIDIA to support the NVIDIA GB300 Grace Blackwell Ultra Superchip. The Micron HBM3E 12H 36 GB is also designed into the NVIDIA HGX B300 NVL16 and GB300 NVL72 platforms, while the HBM3E 8H 24 GB is available for the NVIDIA HGX B200 and GB200 NVL72 platforms. The deployment of Micron HBM3E products in NVIDIA Hopper and NVIDIA Blackwell systems underscores Micron's critical role in accelerating AI workloads.

SK hynix Showcases Industry-Leading Memory Technology at GTC 2025

SK hynix Inc. announced today that it will participate in GTC 2025, a global AI conference taking place March 17-21 in San Jose, California, with a booth titled "Memory, Powering AI and Tomorrow". The company will present HBM and other memory products for AI data centers, on-device AI solutions, and memory products for the automotive business, all essential for the AI era.

Among the industry-leading AI memory technologies to be displayed at the show are 12-high HBM3E and SOCAMM (Small Outline Compression Attached Memory Module), a new memory standard for AI servers.

NVIDIA Preparing "SOCAMM" Memory Standard for AI PCs Similar to Project DIGITS

NVIDIA and its memory partners, SK hynix, Samsung, and Micron, are preparing a new memory form factor called System on Chip Advanced Memory Module, or SOCAMM for short. The technology adapts the now well-known CAMM memory module standard and aims to bring additional memory density to NVIDIA systems. Taking inspiration from NVIDIA's Project DIGITS, it has been developed independently by NVIDIA, outside of any official memory consortium such as JEDEC. Utilizing a detachable module design, SOCAMM delivers superior specifications compared to existing solutions: 694 I/O ports (versus LPCAMM's 644 and traditional DRAM's 260), direct LPDDR5X integration on the memory substrate, and a more compact form factor.

For reference, the current-generation Project DIGITS uses the NVIDIA GB10 Grace Blackwell Superchip, capable of delivering one petaFLOP of FP4 compute, paired with 128 GB of LPDDR5X memory. This configuration limits the size of AI models that can run locally on the device to roughly 200 billion parameters on a single Project DIGITS AI PC; two stacked Project DIGITS units are needed for models like Llama 3.1 405B, with its 405 billion parameters. Memory capacity is the primary limiting factor, which is why NVIDIA is devoting its efforts to the more memory-dense SOCAMM standard. Because SOCAMM remains compatible with LPDDR5X, the same memory-controller silicon IP can be reused, with only the SoC substrate modified to fit the new module. With NVIDIA rumored to enter the consumer PC market this year, we could also see an early implementation in consumer PC products, but the next-generation Project DIGITS is the primary target.
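The capacity numbers above can be sanity-checked with some quick back-of-the-envelope arithmetic. The sketch below is a rough, illustrative estimate that counts only model weights at 4-bit (FP4) precision, i.e. 0.5 bytes per parameter, and ignores activations, KV cache, and runtime overhead; the 128 GB figure is the Project DIGITS capacity quoted above.

```python
import math

# Rough check of why 128 GB caps local model size on one Project DIGITS unit.
# Weights only, at FP4 precision (0.5 bytes per parameter); real deployments
# also need memory for activations, KV cache, and runtime overhead.

FP4_BYTES_PER_PARAM = 0.5
DIGITS_MEMORY_GB = 128  # LPDDR5X capacity of a single Project DIGITS unit


def weights_gb(params_billion: float) -> float:
    """Approximate weight footprint in GB at FP4 precision."""
    return params_billion * 1e9 * FP4_BYTES_PER_PARAM / 1e9


def units_needed(params_billion: float) -> int:
    """How many 128 GB units are needed to hold the weights alone."""
    return math.ceil(weights_gb(params_billion) / DIGITS_MEMORY_GB)


print(weights_gb(200))    # 100.0 GB of weights, fits in one 128 GB unit
print(units_needed(405))  # 2, matching the two stacked units for Llama 3.1 405B
```

This lines up with the article: a 200-billion-parameter model at FP4 needs about 100 GB for weights and fits in a single unit, while Llama 3.1 405B needs roughly 202.5 GB and therefore two stacked machines.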
