TheLostSwede
News Editor
GIGABYTE Technology and Giga Computing, a subsidiary of GIGABYTE and an industry leader in enterprise solutions, will showcase their enterprise offerings at GIGABYTE booth #1224 at NVIDIA GTC, a global AI developer conference running through March 21. The event gives GIGABYTE the chance to connect with its valued partners and customers and to explore together what the future of computing holds.
The GIGABYTE booth will focus on enterprise products that demonstrate AI training and inference on versatile computing platforms built around NVIDIA solutions, as well as direct liquid cooling (DLC) for improved compute density and energy efficiency. Also not to be missed, at the NVIDIA booth, is the MGX Pavilion, which features a rack of GIGABYTE servers built for the NVIDIA GH200 Grace Hopper Superchip architecture.
Powering AI Breakthroughs with GPU Clusters
Highlighting one of the most important AI platforms, GIGABYTE's booth includes a compact GPU cluster scalable unit: a rack of GIGABYTE G593-SD2 servers tailored to the NVIDIA HGX H100 8-GPU design and backed by dual 5th Gen Intel Xeon Scalable processors. The HGX platform is among the most powerful accelerator platforms used for generative AI and LLMs, with optimized software supporting a wealth of science applications. This rack-scale infrastructure is designed to scale out as the GPU cluster expands with demand. The server is an NVIDIA-Certified System, tested for predictable performance and fast deployment.
Optimized for NVIDIA Omniverse
The GIGABYTE G493-SB0, also an NVIDIA-Certified System, is one of the first systems validated as an NVIDIA OVX server. Because many data centers put greater weight on scaling up rather than out, there is a need for multiple expansion slots for PCIe Gen 5 cards, NICs, or DPUs. GIGABYTE's OVX-based server is configured to support the ideal ratio of NVIDIA GPUs to DPUs to NICs. The OVX platform is purpose-built to power the creation and operation of applications developed on the NVIDIA Omniverse platform at data center scale. This more modular approach is best exemplified by the G493-SB0, which can support two CPUs, four NVIDIA L40S GPUs, two NVIDIA ConnectX-7 NICs, and one NVIDIA BlueField-3 DPU, all in a single 4U server.
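As a rough sketch, the component counts described above can be expressed as a simple configuration check. The 4:2:1 GPU-to-NIC-to-DPU ratio is inferred from the counts in the text, not an official specification:

```python
# Hypothetical sketch of the G493-SB0 component counts described above.
# The 4:2:1 GPU:NIC:DPU ratio is inferred from the text, not an official spec.
from dataclasses import dataclass
from math import gcd


@dataclass
class ServerConfig:
    cpus: int
    gpus: int        # NVIDIA L40S
    nics: int        # NVIDIA ConnectX-7
    dpus: int        # NVIDIA BlueField-3
    rack_units: int

    def gpu_nic_dpu_ratio(self) -> tuple[int, int, int]:
        """Reduce the GPU:NIC:DPU counts to their simplest ratio."""
        g = gcd(gcd(self.gpus, self.nics), self.dpus)
        return (self.gpus // g, self.nics // g, self.dpus // g)


g493_sb0 = ServerConfig(cpus=2, gpus=4, nics=2, dpus=1, rack_units=4)
print(g493_sb0.gpu_nic_dpu_ratio())  # (4, 2, 1)
```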
Symphony of an Interconnected CPU & GPU
GIGABYTE's XH23-VG0 is a 2U server with one GH200 Superchip that provides additional I/O slots for BlueField-3 DPUs and ConnectX-7 NICs. The NVIDIA GH200 Grace Hopper platform is NVIDIA's first heterogeneous CPU-GPU unit on a single module; it delivers 900 GB/s of bandwidth between CPU and GPU via the NVIDIA NVLink-C2C (Chip-to-Chip) interconnect, giving the system up to 624 GB of fast memory. As a result, the powerful NVIDIA H100 GPU is no longer limited to GPU clusters and can be used in new AI and HPC applications in conjunction with an NVIDIA Grace CPU.
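The memory figure quoted above can be sanity-checked with simple arithmetic. The breakdown below assumes the commonly cited GH200 configuration of 480 GB of CPU-attached LPDDR5X plus 144 GB of GPU-attached HBM3e; treat the numbers as illustrative rather than a spec sheet:

```python
# Hypothetical breakdown of the GH200's "up to 624 GB of fast memory".
# The 480 GB + 144 GB split is an assumption, not stated in the article.
grace_lpddr5x_gb = 480    # CPU-attached LPDDR5X
hopper_hbm3e_gb = 144     # GPU-attached HBM3e (HBM3e variant of GH200)
total_fast_memory_gb = grace_lpddr5x_gb + hopper_hbm3e_gb
print(total_fast_memory_gb)  # 624

# NVLink-C2C bandwidth vs. a PCIe Gen 5 x16 link (~128 GB/s bidirectional)
nvlink_c2c_gbps = 900
pcie_gen5_x16_gbps = 128
print(round(nvlink_c2c_gbps / pcie_gen5_x16_gbps))  # roughly 7x
```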
Advanced Liquid Cooling Solutions for Superchips
With greater performance come greater power and cooling requirements. GIGABYTE is ready to provide its customers with DLC solutions beyond the existing G593 series to support NVIDIA platforms. GIGABYTE offers cold plate kits for the NVIDIA Grace CPU and GH200 Superchips, with a custom design that quickly removes heat from a compute-dense 2U 4-node system. This multi-node system is the GIGABYTE H263 series server, which supports DLC on each node: either four Grace CPU Superchips in the GIGABYTE H263-V60 or four GH200 Superchips in the GIGABYTE H263-V11. Liquid cooling allows all processors to run at full performance without throttling, even in a dense form factor.
Supporting the Flagship NVIDIA Blackwell GPU
GIGABYTE will support the Blackwell GPU that succeeds the Hopper GPU and will have enterprise servers ready for market in line with NVIDIA's production schedule. The new NVIDIA B200 Tensor Core GPU for generative AI and accelerated computing will bring significant benefits, especially in LLM inference workloads, and GIGABYTE will offer products for HGX baseboards, Superchips, and PCIe cards. More details will be provided later this year.
"Our servers ensure exceptional AI computing capabilities to meet the most demanding workloads," said Etay Lee, CEO of GIGABYTE. "Designed to support various CPU and GPU architectures, we're able to supercharge training and inference. By also supporting NVIDIA's MGX modular architecture we can speed up time to market for different server configurations to meet unique customer needs."
"GIGABYTE's solutions feature NVIDIA HGX H100 GPUs and GH200 Superchips to drive AI breakthroughs," said Kaustubh Sanghani, vice president of GPU product management at NVIDIA. "With its versatile accelerated computing platforms and advanced liquid cooling technology, GIGABYTE offers customers the advanced computing they need to propel innovation forward."
View at TechPowerUp Main Site | Source