News Posts matching #SXM5


Supermicro Previews New Max Performance Intel-based X14 Servers

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is previewing new, completely redesigned X14 server platforms that will leverage next-generation technologies to maximize performance for compute-intensive workloads and applications. Building on the success of Supermicro's efficiency-optimized X14 servers that launched in June 2024, the new systems feature significant upgrades across the board, supporting an unprecedented 256 performance cores (P-cores) in a single node, MRDIMM memory at up to 8,800 MT/s, and next-generation SXM, OAM, and PCIe GPUs. This combination can drastically accelerate AI and compute workloads as well as significantly reduce the time and cost of large-scale AI training, high-performance computing, and complex data analytics tasks. Approved customers can secure early access to complete, full-production systems via Supermicro's Early Ship Program or test them remotely with Supermicro JumpStart.
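For a rough sense of what the MRDIMM-8800 figure means, the short sketch below works through the peak per-module bandwidth it implies. This is a back-of-the-envelope estimate, assuming a conventional 64-bit (8-byte) data path per module and ignoring any platform-specific overheads; it is not a Supermicro-published figure.

```python
# Rough peak-bandwidth estimate for an MRDIMM-8800 module.
# Assumption: a conventional 64-bit (8-byte) data path per module,
# as on standard DDR5 DIMMs; sustained real-world throughput will be lower.

transfer_rate_mts = 8800        # mega-transfers per second (MT/s)
bytes_per_transfer = 8          # 64-bit data path -> 8 bytes per transfer

peak_bw_mb_s = transfer_rate_mts * bytes_per_transfer   # MB/s
peak_bw_gb_s = peak_bw_mb_s / 1000                       # GB/s (decimal)

print(f"Peak theoretical bandwidth per module: {peak_bw_gb_s:.1f} GB/s")
# -> Peak theoretical bandwidth per module: 70.4 GB/s
```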

"We continue to add to our already comprehensive Data Center Building Block solutions with these new platforms, which will offer unprecedented performance, and new advanced features," said Charles Liang, president and CEO of Supermicro. "Supermicro is ready to deliver these high-performance solutions at rack-scale with the industry's most comprehensive direct-to-chip liquid cooled, total rack integration services, and a global manufacturing capacity of up to 5,000 racks per month including 1,350 liquid cooled racks. With our worldwide manufacturing capabilities, we can deliver fully optimized solutions which accelerate our time-to-delivery like never before, while also reducing TCO."

NVIDIA Allegedly Preparing H100 GPU with 94 and 64 GB Memory

NVIDIA's compute and AI-oriented H100 GPU is supposedly getting an upgrade. The H100 is NVIDIA's most powerful offering and comes in a few different flavors: H100 PCIe, H100 SXM, and H100 NVL (a duo of two GPUs). Currently, the H100 comes with 80 GB of memory in both the PCIe and SXM5 versions of the card (HBM2E on the former, HBM3 on the latter). A notable exception is the H100 NVL, which comes with 188 GB of HBM3, but that is spread across two cards, making it 94 GB per GPU. However, we could soon see NVIDIA enable 94 GB and 64 GB options for the H100 accelerator, as the latest PCI ID Repository entries show.

According to the PCI ID Repository listing, two requests have been posted: "Kindly help to add H100 SXM5 64 GB into 2337." and "Kindly help to add H100 SXM5 94 GB into 2339." These two messages indicate that NVIDIA could be preparing its H100 in more variations. In September 2022, we saw NVIDIA prepare an H100 variant with 120 GB of memory, but that still isn't official. These PCI IDs could simply come from engineering samples that NVIDIA is testing in its labs, and the cards may never reach the market. So, we will have to wait and see how it plays out.
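For readers who want to check whether such entries ever land in the public database, the hedged sketch below scans a local copy of the pci.ids file (the plain-text database behind the PCI ID Repository) for NVIDIA's vendor ID 10de and the 2337/2339 device IDs quoted above. The file path is an assumption (it varies by distribution), and the entries may simply be absent if the requests were never merged.

```python
# Minimal sketch: look up the rumored H100 device IDs in a local pci.ids file.
# Assumptions: /usr/share/misc/pci.ids exists (common on Linux; path varies),
# and the 2337/2339 entries may be missing if the requests were never merged.

NVIDIA_VENDOR_ID = "10de"
RUMORED_DEVICE_IDS = {"2337", "2339"}   # H100 SXM5 64 GB / 94 GB, per the listing

def find_devices(path="/usr/share/misc/pci.ids"):
    in_nvidia = False
    hits = {}
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            if not line.startswith("\t"):                     # vendor line
                in_nvidia = line.lower().startswith(NVIDIA_VENDOR_ID)
            elif in_nvidia and not line.startswith("\t\t"):   # device line
                dev_id, _, name = line.strip().partition("  ")
                if dev_id.lower() in RUMORED_DEVICE_IDS:
                    hits[dev_id.lower()] = name.strip()
    return hits

if __name__ == "__main__":
    for dev_id, name in find_devices().items():
        print(f"10de:{dev_id} -> {name}")
```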

Giga Computing Leaked Server Roadmap Points to 600 W CPUs & 700 W GPUs

A leaked roadmap, seemingly authored by Giga Computing, provides an interesting peek into the future of next-generation enterprise-oriented CPUs and GPUs. TDP details of Intel, AMD, and NVIDIA hardware are featured within the presentation slide, and all indications point to a trend of continued power consumption growth. Intel's server CPU lineups, including fourth-generation Sapphire Rapids-SP and fifth-generation Emerald Rapids-SP Xeon chips, are projected to hit maximum TDPs of 350 W by mid-2024. Team Blue's sixth-generation Granite Rapids is expected to arrive in the latter half of 2024, and Gigabyte's leaked roadmap points to a push into 500 W territory going into 2025.

AMD's Zen 5-based Turin server CPUs are expected to ship in the second half of 2024, and power consumption is estimated to hit a maximum of 600 W, representing a 50% increase over the Zen 4-based Genoa family. NVIDIA's 2024 PCIe GPU lineup is expected to reach TDPs of up to 500 W; these enterprise cards are rumored to be based on the Blackwell architecture, set to succeed the current-generation H100 "Hopper" PCIe accelerators (which feature 350-450 W TDPs). AMD's Instinct-class PCIe accelerator family, rated at up to 400 W, could become the direct competition, while the AMD Instinct MI250 OAM category has a maximum rating of 560 W. The NVIDIA Grace and Grace Hopper Superchips are said to feature 600 W and 1,000 W TDPs, respectively.
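As a quick sanity check on the generational jumps implied by the slide, the sketch below verifies the quoted percentages. Note that the 400 W Genoa baseline is inferred from the stated "50% increase" rather than read directly off the roadmap, so treat it as an assumption.

```python
# Back-of-the-envelope check of the power jumps implied by the leaked slide.
# The 400 W Genoa baseline is inferred from the "50% increase" claim;
# the other figures are the maxima quoted in the article.

def pct_increase(old_w, new_w):
    """Percentage increase from old_w to new_w (both in watts)."""
    return (new_w - old_w) / old_w * 100

print(f"Turin (600 W) vs. Genoa (400 W, inferred): +{pct_increase(400, 600):.0f}%")            # +50%
print(f"Granite Rapids (500 W) vs. Emerald Rapids (350 W): +{pct_increase(350, 500):.0f}%")    # +43%
```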