News Posts matching #GPUDirect


Micron Launches 6550 ION 60TB PCIe Gen5 NVMe SSD Series

Micron Technology, Inc., today announced it has begun qualification of the 6550 ION NVMe SSD with customers. The Micron 6550 ION is the world's fastest 60 TB data center SSD and the industry's first E3.S and PCIe Gen 5 60 TB SSD. It follows the success of the award-winning 6500 ION and is engineered to provide best-in-class performance, energy efficiency, endurance, security, and rack density for exascale data center deployments. The 6550 ION excels in high-capacity NVMe workloads such as networked AI data lakes, ingest, data preparation and checkpointing, file and object storage, public cloud storage, analytic databases, and content delivery.

"The Micron 6550 ION achieves a remarkable 12 GB/s while using just 20 watts of power, setting a new standard in data center performance and energy efficiency," said Alvaro Toledo, vice president and general manager of Micron's Data Center Storage Group. "Featuring a first-to-market 60 TB capacity in an E3.S form factor and up to 20% better energy efficiency than competitive drives, the Micron 6550 ION is a game-changer for high-capacity storage solutions to address the insatiable capacity and power demands of AI workloads."

IBM Unleashes the Potential of Data and AI with its Next-Generation IBM Storage Scale System 6000

Today, IBM introduced the new IBM Storage Scale System 6000, a cloud-scale global data platform designed to meet today's data intensive and AI workload demands, and the latest offering in the IBM Storage for Data and AI portfolio.

For the seventh consecutive year, IBM has been named a Leader in the 2022 Gartner Magic Quadrant for Distributed File Systems and Object Storage, recognized for its vision and execution. The new IBM Storage Scale System 6000 seeks to build on IBM's leadership position with an enhanced high-performance parallel file system designed for data-intensive use cases. It provides up to 7 million IOPS and up to 256 GB/s throughput for read-only workloads per system in a 4U (four rack units) footprint.

ASUS Unveils ESC N8-E11, an HGX H100 Eight-GPU Server

ASUS today announced ESC N8-E11, its most advanced HGX H100 eight-GPU AI server, along with a comprehensive PCI Express (PCIe) GPU server portfolio—the ESC8000 and ESC4000 series empowered by Intel and AMD platforms to support higher CPU and GPU TDPs to accelerate the development of AI and data science.

ASUS is one of the few HPC solution providers with its own all-dimensional resources that consist of the ASUS server business unit, Taiwan Web Service (TWS) and ASUS Cloud—all part of the ASUS group. This uniquely positions ASUS to deliver in-house AI server design, data-center infrastructure, and AI software-development capabilities, plus a diverse ecosystem of industrial hardware and software partners.

ASUS Announces NVIDIA-Certified Servers and ProArt Studiobook Pro 16 OLED at GTC

ASUS today announced its participation in NVIDIA GTC, a developer conference for the era of AI and the metaverse. ASUS will offer comprehensive NVIDIA-certified server solutions that support the latest NVIDIA L4 Tensor Core GPU—which accelerates real-time video AI and generative AI—as well as the NVIDIA BlueField-3 DPU, igniting unprecedented innovation for supercomputing infrastructure. ASUS will also launch the new ProArt Studiobook Pro 16 OLED laptop with the NVIDIA RTX 3000 Ada Generation Laptop GPU for mobile creative professionals.

Purpose-built GPU servers for generative AI
Generative AI applications enable businesses to develop better products and services, and deliver original content tailored to the unique needs of customers and audiences. ASUS ESC8000 and ESC4000 are fully certified NVIDIA servers that support up to eight NVIDIA L4 Tensor Core GPUs, which deliver universal acceleration and energy efficiency for AI with up to 2.7X more generative AI performance than the previous GPU generation. ASUS ESC and RS series servers are engineered for HPC workloads, with support for the NVIDIA BlueField-3 DPU to transform data center infrastructure, as well as NVIDIA AI Enterprise applications for streamlined AI workflows and deployment.

Micron Technology Announces 9400 Series Enterprise NVMe SSDs

Micron Technology today announced the Micron 9400 NVMe SSD is in volume production and immediately available from channel partners and to global OEM customers for use in servers requiring the highest levels of storage performance. The Micron 9400 is designed to manage the most demanding data center workloads, particularly in artificial intelligence (AI) training, machine learning (ML) and high-performance computing (HPC) applications. The drive delivers an industry-leading 30.72 terabytes (TB) of storage capacity, superior workload performance versus the competition, and 77% higher input/output operations per second (IOPS). The Micron 9400 is the world's fastest shipping PCIe Gen4 data center U.3 drive and delivers consistently low latency at all capacity points.

"High performance, capacity and low latency are critical features for enterprises seeking to maximize their investments in AI/ML and supercomputing systems," said Alvaro Toledo, vice president and general manager of data center storage at Micron. "Thanks to its industry-leading 30 TB capacity and stunning performance with over 1 million IOPS in mixed workloads, the Micron 9400 SSD packs larger datasets into each server and accelerates machine learning training, which equips users to squeeze more out of their GPUs."

Supermicro Adds New 8U Universal GPU Server for AI Training, NVIDIA Omniverse, and Meta

Super Micro Computer, Inc. (SMCI), a global leader in enterprise computing, storage, networking solutions, and green computing technology, is announcing its most advanced GPU server, incorporating eight NVIDIA H100 Tensor Core GPUs. Due to its advanced airflow design, the new high-end GPU system allows increased inlet temperatures, reducing a data center's overall Power Usage Effectiveness (PUE) while maintaining the highest performance profile. In addition, Supermicro is expanding its GPU server lineup, already the industry's largest, with this new Universal GPU server. Supermicro now offers three distinct Universal GPU systems: the 4U, 5U, and new 8U eight-GPU servers. The Universal GPU platforms support both current and future Intel and AMD CPUs with TDPs of 350 W, 400 W, and higher.

"Supermicro is leading the industry with an extremely flexible and high-performance GPU server, which features the powerful NVIDIA A100 and H100 GPUs," said Charles Liang, president and CEO of Supermicro. "This new server will support the next generation of CPUs and GPUs and is designed with maximum cooling capacity using the same chassis. We constantly look for innovative ways to deliver total IT solutions to our growing customer base."

KIOXIA Announces Production Availability of Native Ethernet Flash-Based SSDs

KIOXIA America, Inc. today announced the production availability of its EM6 Series Enterprise NVMe-oF solid state drives (SSDs) for Ethernet Bunch of Flash (EBOF) systems. Using the Marvell 88SN2400 NVMe-oF SSD converter controller, which converts an NVMe SSD into a dual-ported 25 Gb/s NVMe-oF SSD, KIOXIA EM6 Series drives expose the entire SSD bandwidth to the network.

Due to their ability to scale the performance of NVMe SSDs, native NVMe-oF architectures are well-suited for applications such as artificial intelligence (AI)/machine learning (ML), high performance computing (HPC) and storage expansion. In HPC, the Lustre file system, which provides high-bandwidth parallel access to compute clusters, benefits from NVMe-oF-based storage such as EBOF systems with EM6 SSDs, which enable high-availability (HA) configurations. An example HPC HA configuration consists of multiple, redundant network connections between a compute host and an EBOF with 88SN2400-connected NVMe SSDs, delivering throughput that scales with the number of SSDs.

NVIDIA Quantum-2 Takes Supercomputing to New Heights, Into the Cloud

NVIDIA today announced NVIDIA Quantum-2, the next generation of its InfiniBand networking platform, which offers the extreme performance, broad accessibility and strong security needed by cloud computing providers and supercomputing centers.

The most advanced end-to-end networking platform ever built, NVIDIA Quantum-2 is a 400 Gbps InfiniBand networking platform that consists of the NVIDIA Quantum-2 switch, the ConnectX-7 network adapter, the BlueField-3 data processing unit (DPU) and all the software that supports the new architecture.

NVIDIA Launches A100 PCIe-Based Accelerator with 80 GB HBM2E Memory

During this year's ISC 2021 event, as part of the company's exhibition portfolio, NVIDIA has decided to launch an updated version of the A100 accelerator. Last November, NVIDIA launched an 80 GB HBM2E version of the A100 accelerator in the proprietary SXM4 form factor. Today, we are getting the same upgraded GPU in a more standard dual-slot PCIe card. Featuring a GA100 GPU built on TSMC's 7 nm process, this SKU has 6912 CUDA cores. To pair with the beefy amount of computing, the GPU needs appropriate memory: this time, there is as much as 80 GB of HBM2E. The memory achieves a bandwidth of 2039 GB/s, with the dies running at an effective 3.186 Gbps per pin. An important note is that the TDP of the GPU has been lowered to 250 W, compared to the 400 W SXM4 solution.
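As a sanity check, the quoted 2039 GB/s follows from the per-pin data rate and the A100's 5120-bit HBM2E bus; the bus width is not stated in the article, but is a known GA100 specification:

```python
# Back-of-the-envelope check of the A100 80 GB memory-bandwidth figure.
# Assumes the A100's 5120-bit HBM2E bus width (not stated in the article).

BUS_WIDTH_BITS = 5120          # five active HBM2E stacks x 1024 bits each
PIN_SPEED_GBPS = 3.186         # effective data rate per pin, in Gbps

bandwidth_gbs = BUS_WIDTH_BITS * PIN_SPEED_GBPS / 8  # bits -> bytes
print(f"{bandwidth_gbs:.0f} GB/s")  # -> 2039 GB/s
```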

To pair with the new upgrade, NVIDIA made another announcement today: GPUDirect Storage, an enterprise counterpart to Microsoft's DirectStorage. It gives applications a direct data path between storage and the GPU's 80 GB of super-fast HBM2E memory, bypassing the CPU.

NVIDIA and Global Partners Launch New HGX A100 Systems to Accelerate Industrial AI and HPC

NVIDIA today announced it is turbocharging the NVIDIA HGX AI supercomputing platform with new technologies that fuse AI with high performance computing, making supercomputing more useful to a growing number of industries.

To accelerate the new era of industrial AI and HPC, NVIDIA has added three key technologies to its HGX platform: the NVIDIA A100 80 GB PCIe GPU, NVIDIA NDR 400G InfiniBand networking, and NVIDIA Magnum IO GPUDirect Storage software. Together, they provide the extreme performance to enable industrial HPC innovation.

KIOXIA PCIe 4.0 NVMe SSDs Now Qualified for NVIDIA Magnum IO GPUDirect Storage

KIOXIA today announced that its lineup of CM6 Series PCIe 4.0 enterprise NVMe SSDs has been successfully tested and certified to support NVIDIA's Magnum IO GPUDirect Storage. Modern AI and data science applications are synonymous with massive datasets - as are the storage requirements that go along with them. Part of the NVIDIA Magnum IO subsystem designed for GPU-accelerated compute environments, NVIDIA Magnum IO GPUDirect Storage allows the GPU to bypass the CPU and communicate directly with NVMe SSD storage. This improves overall system performance while reducing the impact on host CPU and memory resources. Through rigorous testing conducted by NVIDIA, KIOXIA's CM6 drives have been confirmed to meet the demanding storage requirements of GPU-intensive applications.
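As a rough illustration of why bypassing the CPU helps, consider a toy model in which data crosses each link serially; the link bandwidths below are hypothetical illustrations, not figures from the article:

```python
# Illustrative model (not NVIDIA's API): with a bounce buffer, data moves
# SSD -> host RAM -> GPU; GPUDirect Storage removes the host-RAM hop.
# Link bandwidths are hypothetical, chosen only for illustration.

def transfer_time(size_gb, hop_bandwidths_gbs):
    """Total time when the data crosses each hop serially."""
    return sum(size_gb / bw for bw in hop_bandwidths_gbs)

SIZE_GB = 64                                     # example dataset size
bounce = transfer_time(SIZE_GB, [7.0, 25.0])     # SSD->host, then host->GPU
direct = transfer_time(SIZE_GB, [7.0])           # SSD->GPU directly

print(f"bounce-buffer path: {bounce:.1f} s, direct path: {direct:.1f} s")
```

The direct path is faster in this model, and in practice it also frees the host CPU and memory bus from shuttling the data at all.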

"Large AI/ML, HPC modeling and data analytics datasets need to be moved and processed in real-time, pushing performance requirements through the roof," said Neville Ichhaporia, vice president, SSD marketing and product management for KIOXIA America, Inc. "By delivering speeds up to 16.0 gigatransfers per second per lane, our CM6 Series SSDs enable NVIDIA's Magnum IO GPUDirect Storage to work with increasingly large and distributed datasets, thereby improving overall application performance and providing a path to scaling dataset sizes even further."

GIGABYTE Introduces a Broad Portfolio of G-series Servers Powered by NVIDIA A100 PCIe

GIGABYTE, an industry leader in high-performance servers and workstations, announced its G-series servers' validation plan. Following the NVIDIA A100 PCIe GPU announcement today, GIGABYTE has completed compatibility validation of the G481-HA0 / G292-Z40 and added the NVIDIA A100 to the support list for these two servers. The remaining G-series servers will be divided into two waves to complete their respective compatibility tests soon. At the same time, GIGABYTE also launched a new G492 series server based on the AMD EPYC 7002 processor family, which provides PCIe Gen4 support for up to 10 NVIDIA A100 PCIe GPUs. The G492 is the server with the highest computing power for AI model training on the market today. GIGABYTE will offer two SKUs for the G492: the G492-Z50 will be at a more approachable price point, whereas the G492-Z51 will be geared towards higher performance.

The G492 is GIGABYTE's second-generation 4U G-series server. Building on the first-generation G481 (Intel architecture) / G482 (AMD architecture) servers, it further optimizes user-friendly design and scalability. In addition to supporting two 280 W 2nd Gen AMD EPYC 7002 processors, its 32 DDR4 memory slots support up to 8 TB of memory at 3200 MT/s. The G492 has built-in PCIe Gen4 switches, which provide more PCIe Gen4 lanes. PCIe Gen4 has twice the I/O performance of PCIe Gen3 and fully enables the computing power of the NVIDIA A100 Tensor Core GPU, or it can be applied to PCIe storage to provide a storage upgrade path that is native to the G492.
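The "twice the I/O performance" claim follows directly from the signaling rates: PCIe Gen4 runs at 16 GT/s per lane versus Gen3's 8 GT/s, with the same 128b/130b encoding. A quick sketch of the per-lane math:

```python
# Per-lane PCIe throughput: Gen4 doubles Gen3's signaling rate.
# Both generations use 128b/130b encoding, so usable bandwidth
# scales directly with the GT/s figure.

def lane_gbs(gigatransfers_per_s, encoding=128 / 130):
    """Usable GB/s per lane, per direction."""
    return gigatransfers_per_s * encoding / 8    # GT/s -> GB/s

gen3 = lane_gbs(8.0)     # ~0.985 GB/s per lane
gen4 = lane_gbs(16.0)    # ~1.969 GB/s per lane

print(f"x16 link: Gen3 {gen3 * 16:.1f} GB/s vs Gen4 {gen4 * 16:.1f} GB/s")
```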

NVIDIA Announces Industry's First Secure SmartNIC Optimized for 25G

NVIDIA today launched the NVIDIA Mellanox ConnectX-6 Lx SmartNIC—a highly secure and efficient 25/50 gigabit per second (Gb/s) Ethernet smart network interface controller (SmartNIC)—to meet surging growth in enterprise and cloud scale-out workloads.

ConnectX-6 Lx, the 11th generation product in the ConnectX family, is designed to meet the needs of modern data centers, where 25 Gb/s connections are becoming standard for handling demanding workflows, such as enterprise applications, AI and real-time analytics. The new SmartNIC extends accelerated computing by leveraging software-defined, hardware-accelerated engines to offload more security and network processing from CPUs.
NVIDIA Mellanox ConnectX-6 Lx SmartNIC

NVIDIA Unveils the Quadro M6000 24GB Graphics Card

NVIDIA announced the Quadro M6000, its new high-end workstation single-GPU graphics card. Based on the GM200 silicon, and leveraging the "Maxwell" GPU architecture, the M6000 maxes out all the hardware features of the chip, featuring 3,072 CUDA cores, 192 TMUs, 96 ROPs, and a 384-bit wide GDDR5 memory interface, holding 24 GB of memory, double that of the GeForce GTX TITAN X. Its peak single-precision floating point performance is rated at 7 TFLOP/s.
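The article does not list the M6000's clock speed, but it can be inferred from the quoted core count and throughput, assuming the usual 2 FLOPs per core per cycle from fused multiply-add:

```python
# Inferring the Quadro M6000's boost clock from the article's figures.
# Assumes 2 FLOPs per CUDA core per cycle (fused multiply-add).

CORES = 3072          # CUDA cores quoted in the article
SP_TFLOPS = 7.0       # rated single-precision throughput

implied_clock_mhz = SP_TFLOPS * 1e6 / (CORES * 2)
print(f"implied boost clock: ~{implied_clock_mhz:.0f} MHz")
```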

Where the M6000 differs from the GTX TITAN X is its workstation-grade features. It drops the HDMI 2.0 connector for a total of four DisplayPort 1.2 connectors, supporting a total of four 4K Ultra HD displays. The dual-link DVI connector stays on. There's also an optional stereoscopic 3D connector. The nView MultiDisplay tech provides more flexible display-head configurations than the ones you find on NVIDIA's consumer GPUs; you also get NVIDIA GPUDirect support, which gives better memory-sharing access for multi-GPU systems. The M6000 supports most modern 3D APIs, such as DirectX 12, OpenGL 4.5, and Vulkan, with compute capabilities over CUDA, OpenCL, and DirectCompute. NVIDIA didn't reveal pricing.

Tesla K20 GPU Compute Processor Specifications Released

Specifications of NVIDIA's Tesla K20 GPU compute processor, which was launched way back in May, are finally disclosed. We've known since then that the K20 is based on NVIDIA's large GK110 GPU, a chip not yet used to power a GeForce graphics card. Apparently, NVIDIA is leaving some headroom on the silicon, which allows it to harvest chips better. According to a specifications sheet compiled by Heise.de, the Tesla K20 will feature 13 SMX units, compared to the 15 available on the GK110 silicon.

With 13 streaming multiprocessor (SMX) units, the K20 will be configured with 2,496 CUDA cores (as opposed to the 2,880 physically present on the chip). The core will be clocked at 705 MHz, yielding single-precision floating point performance of 3.52 TFLOP/s and double-precision floating point performance of 1.17 TFLOP/s. The card packs 5 GB of GDDR5 memory with a bandwidth of 200 GB/s. Dynamic parallelism, Hyper-Q, and GPUDirect with RDMA are part of the new feature set. The TDP of the GPU is rated at 225 W, and understandably, it uses a combination of 6-pin and 8-pin PCI-Express power connectors. Built on the 28 nm process, the GK110 packs a whopping 7.1 billion transistors.
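The quoted throughput figures check out against the core count and clock; each GK110 SMX carries 192 single-precision CUDA cores, and the chip's double-precision rate is one third of single precision:

```python
# Verifying the Tesla K20 spec-sheet numbers from first principles.
# GK110 SMX: 192 SP CUDA cores; 2 FLOPs/core/cycle via fused multiply-add.

SMX_COUNT = 13
CORES_PER_SMX = 192
CLOCK_GHZ = 0.705

cores = SMX_COUNT * CORES_PER_SMX            # 2496 CUDA cores
sp_tflops = cores * 2 * CLOCK_GHZ / 1000     # single-precision TFLOP/s
dp_tflops = sp_tflops / 3                    # GK110 DP rate is 1/3 of SP

print(f"{cores} cores, {sp_tflops:.2f} SP TFLOP/s, {dp_tflops:.2f} DP TFLOP/s")
```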

NVIDIA Releases CUDA 5

NVIDIA today made available the NVIDIA CUDA 5 production release, a powerful new version of the world's most pervasive parallel computing platform and programming model for accelerating scientific and engineering applications on GPUs. It can be downloaded for free from the NVIDIA Developer Zone website.

With more than 1.5 million downloads, supporting more than 180 leading engineering, scientific and commercial applications, the CUDA programming model is the most popular way for developers to take advantage of GPU-accelerated computing.

NVIDIA Partners Make Ultra-Low Latency a Reality with NVIDIA GPUDirect for Video

NVIDIA and industry-leading I/O board partners such as AJA, Blackmagic Design, Bluefish444, Deltacast, DVS, and Matrox are providing unprecedented real-time video production capabilities leveraging NVIDIA GPUDirect for Video. The technology provides application developers and their customers seamless, fast access to the graphics and image processing power of NVIDIA Quadro and Tesla professional graphics processing units (GPUs), with ultra-low latency input and output across a wide range of I/O devices.

NVIDIA GPUDirect for Video technology is the fastest, most deterministic way to get video data in and out of the GPU. Software vendors can now harness the graphics and image processing power of GPUs without the latency, often as much as ten frames, previously associated with third-party video I/O boards. With this wide range of I/O vendors, customers can choose the best system for their needs.
