Sunday, June 21st 2020
ASUS Announces ESC4000A-E10 GPGPU Server with NVIDIA A100 Tensor Core GPUs
ASUSTeK, a leading IT company in server systems, server motherboards and workstations, today announced the new NVIDIA A100-powered ESC4000A-E10 server, built to accelerate and optimize data centers for high utilization and low total cost of ownership through PCIe Gen 4 expansion, OCP 3.0 networking, faster compute and better GPU performance. ASUS continues to build a strong partnership with NVIDIA to deliver unprecedented acceleration and flexibility to power the world's highest-performing elastic data centers for AI, data analytics, and HPC applications.
The ASUS ESC4000A-E10 is a 2U server powered by AMD EPYC 7002 series processors, which deliver up to 2x the performance and 4x the floating-point capability in a single socket versus the previous 7001 generation. Targeted at AI, HPC and VDI applications in data center or enterprise environments that require powerful CPU cores, support for more GPUs and faster transmission speeds, the ESC4000A-E10 focuses on delivering GPU-optimized performance with support for up to four dual-slot high-performance GPUs or eight single-slot GPUs, including the latest NVIDIA Ampere-architecture A100 as well as Tesla and Quadro cards. This also benefits virtualization, consolidating GPU resources into a shared pool so users can utilize them more efficiently.
PCIe 4.0 runs at 16 GT/s per lane, doubling the bandwidth of PCIe 3.0 while delivering lower power consumption, better lane scalability and backwards compatibility. The ESC4000A-E10 offers true PCIe Gen 4 capability with up to eleven PCIe Gen 4 slots for compute, graphics, storage and networking expansion. For networking, the ESC4000A-E10 supports an OCP NIC 3.0 card delivering speeds of up to 200 GbE to meet the demands of high-bandwidth applications. With its flexible chassis design, the ESC4000A-E10 accommodates up to eight hot-swappable 3.5-inch or 2.5-inch drives, four of which can optionally be configured for NVMe SSDs.
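To put the "doubles the bandwidth" claim in concrete terms, the back-of-the-envelope sketch below (plain Python, assuming the 128b/130b line encoding used by both PCIe 3.0 and 4.0) estimates per-direction throughput of an x16 link for each generation; the figures are approximate and ignore protocol overhead beyond line coding.

```python
# Approximate per-direction PCIe link bandwidth, ignoring packet/protocol
# overhead beyond the physical-layer line encoding.

def pcie_bandwidth_gbytes(transfer_rate_gt_s: float, lanes: int,
                          encoding_efficiency: float) -> float:
    """Return usable bandwidth in GB/s for one direction of a PCIe link."""
    bits_per_second = transfer_rate_gt_s * 1e9 * encoding_efficiency * lanes
    return bits_per_second / 8 / 1e9  # bits -> bytes, then to GB/s

# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
gen3_x16 = pcie_bandwidth_gbytes(8.0, 16, 128 / 130)
# PCIe 4.0: 16 GT/s per lane, same 128b/130b encoding
gen4_x16 = pcie_bandwidth_gbytes(16.0, 16, 128 / 130)

print(f"PCIe 3.0 x16: ~{gen3_x16:.1f} GB/s per direction")  # ~15.8 GB/s
print(f"PCIe 4.0 x16: ~{gen4_x16:.1f} GB/s per direction")  # ~31.5 GB/s
```

Doubling the per-lane transfer rate while keeping the same encoding is what yields roughly 31.5 GB/s per direction on a Gen 4 x16 slot versus about 15.8 GB/s on Gen 3.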
NVIDIA A100 PCIe GPU
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration and flexibility to power the world's highest-performing elastic data centers for AI, data analytics, and HPC applications. As the engine of the NVIDIA data center platform, A100 provides up to 20X higher performance over V100 GPUs and can efficiently scale up to thousands of GPUs, or be partitioned into seven isolated GPU instances with the new Multi-Instance GPU (MIG) capability to accelerate workloads of all sizes. A100's third-generation Tensor Core technology supports a broad range of math precisions, providing a unified workload accelerator for data analytics, AI training, AI inference, and HPC. Accelerating both scale-up and scale-out workloads on one platform enables elastic data centers that can dynamically adjust to shifting application workload demands; this simultaneously boosts throughput and drives down data center cost. Combined with the NVIDIA software stack, A100 accelerates all major deep learning and data analytics frameworks and over 700 HPC applications, and containerized software from NGC helps developers get up and running easily.
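For context on how MIG partitioning surfaces to software, here is a minimal sketch (not part of the ASUS or NVIDIA announcement) that assumes the pynvml bindings from the nvidia-ml-py package and an NVIDIA driver new enough to expose the MIG API; it reports whether MIG mode is enabled on each GPU and counts the MIG devices currently exposed.

```python
# Minimal sketch: enumerate GPUs and report MIG status via NVML.
# Assumes the nvidia-ml-py package (import name: pynvml) and a
# driver generation that supports MIG (450.xx or later).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        try:
            current, _pending = pynvml.nvmlDeviceGetMigMode(handle)
        except pynvml.NVMLError:
            print(f"GPU {i} ({name}): MIG not supported")
            continue

        if current != pynvml.NVML_DEVICE_MIG_ENABLE:
            print(f"GPU {i} ({name}): MIG disabled")
            continue

        # Walk the MIG device slots; empty slots raise an NVML error.
        mig_count = 0
        for j in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)):
            try:
                pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, j)
                mig_count += 1
            except pynvml.NVMLError:
                continue
        print(f"GPU {i} ({name}): MIG enabled, {mig_count} MIG device(s)")
finally:
    pynvml.nvmlShutdown()
```

On an A100 with MIG enabled, an administrator can create up to seven such instances (for example via nvidia-smi's mig subcommands), and each instance then appears to CUDA applications as an isolated device with its own memory and compute slice.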
ASUS continues to grow its hardware and software integration solutions
ASUS servers deliver top performance and power efficiency, making them an excellent choice for data center solutions. ASUS 2P and 1P servers continue to achieve top-ranking performance, with over 700 world records based on the SPEC CPU 2017 benchmark on SPEC.org. ASUS also offers servers designed specifically for power efficiency with ASUS exclusive power-saving tuning technology; these servers have achieved the No. 1 ranking on the SPECpower benchmark on both Windows and Linux. ASUS will continue to collaborate with Intel to deliver holistic solutions for customers across HCI/storage, HPC (genomics analytics) and analytics. In addition to Intel Select Solutions for Microsoft Azure Stack HCI, ASUS will launch more verified solutions with Intel to enable customers to build solutions easily, with less time spent on hardware and software selection.
Source: VideoCardz