News Posts matching #Tesla V100s


TYAN Launches AI-Optimized Server Platforms Powered by NVIDIA V100S Tensor Core GPUs

TYAN, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Computing Technology Corporation, has launched its latest GPU server platforms supporting the NVIDIA V100S Tensor Core and NVIDIA T4 GPUs for a wide variety of compute-intensive workloads, including AI training, inference, and supercomputing applications. "The use of AI is increasingly infusing data centers. More organizations plan to invest in AI infrastructure that supports rapid business innovation," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. "TYAN's GPU server platforms with NVIDIA V100S GPUs as the compute building block enable enterprises to power their AI infrastructure deployments and help solve the most computationally intensive problems."

TYAN's new Thunder HX FT83-B7119 features high-density local storage within a 4U, 10-GPU server platform. The system is based on dual-socket 2nd Gen Intel Xeon Scalable processors and supports up to 10 NVIDIA V100S or 20 T4 GPUs alongside 12 hot-swap 3.5" drive bays. It provides a spare PCIe x16 slot in addition to the 10 double-wide PCIe x16 slots, and supports high-speed networking such as 100 Gb/s EDR InfiniBand or Ethernet. The chassis features tool-less drive trays for added ease of service.

NVIDIA Unveils Tesla V100s Compute Accelerator

NVIDIA updated its compute accelerator product stack with the new Tesla V100s. Available only in the PCIe add-in card (AIC) form-factor for now, the V100s is positioned above the V100 PCIe, and is equipped with faster memory, besides a few silicon-level changes (possibly higher clock-speeds), to facilitate significant increases in throughput. To begin with, the V100s is equipped with 32 GB of HBM2 memory across a 4096-bit memory interface, with a higher 1106 MHz memory clock (2.2 Gbps effective data rate), compared to the 876 MHz memory clock of the V100. This yields a memory bandwidth of roughly 1,134 GB/s, compared to 900 GB/s on the V100 PCIe.
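The bandwidth figures above follow directly from the bus width and memory clock. A minimal sketch of the arithmetic, assuming the standard double-data-rate HBM2 formula (bus width in bytes × clock × 2 transfers per clock):

```python
def hbm2_bandwidth_gbs(bus_width_bits: int, clock_mhz: float) -> float:
    """Peak memory bandwidth in GB/s for a double-data-rate HBM2 interface."""
    bytes_per_transfer = bus_width_bits // 8      # 4096-bit bus -> 512 bytes
    transfers_per_sec = clock_mhz * 1e6 * 2       # DDR: two transfers per clock
    return bytes_per_transfer * transfers_per_sec / 1e9

print(hbm2_bandwidth_gbs(4096, 876))   # V100 PCIe: ~897 GB/s (quoted as 900 GB/s)
print(hbm2_bandwidth_gbs(4096, 1106))  # V100s: ~1133 GB/s (quoted as 1,134 GB/s)
```

Note that a 553 MHz memory clock on the same 4096-bit bus would only yield about 566 GB/s, which is why the V100s clock must be read as 1106 MHz (DDR) against the V100's 876 MHz.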

NVIDIA did not detail changes to the GPU's core clock-speed, but mentioned the performance throughput numbers on offer: 8.2 TFLOP/s double-precision floating-point performance versus 7 TFLOP/s on the original V100 PCIe; 16.4 TFLOP/s single-precision compared to 14 TFLOP/s on the V100 PCIe; and 130 TFLOP/s deep-learning ops versus 112 TFLOP/s on the V100 PCIe. Company-rated power figures remain unchanged at 250 W typical board power. The company didn't reveal pricing.
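Taken together, NVIDIA's rated figures imply a fairly uniform generational uplift, which is consistent with a clock-speed bump rather than a wider GPU. A quick check of the percentages (figures from the paragraph above):

```python
# Rated throughput in TFLOP/s, per NVIDIA's figures quoted above.
v100_pcie = {"fp64": 7.0, "fp32": 14.0, "deep_learning": 112.0}
v100s     = {"fp64": 8.2, "fp32": 16.4, "deep_learning": 130.0}

for metric in v100_pcie:
    uplift_pct = (v100s[metric] / v100_pcie[metric] - 1) * 100
    print(f"{metric}: +{uplift_pct:.1f}%")  # roughly 16-17% across the board
```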