News Posts matching #DGX-2


NVIDIA DGX A100 is its "Ampere" Based Deep-learning Powerhouse

NVIDIA will give its DGX line of pre-built deep-learning research workstations its next major update in the form of the DGX A100. The system will likely pack a number of the company's upcoming Tesla A100 scalar compute accelerators based on its next-generation "Ampere" architecture and "GA100" silicon. The A100 came to light through fresh trademark applications by the company. Specs and numbers are not yet known. The "Volta" based DGX-2 packs up to sixteen "GV100" based Tesla boards, adding up to 81,920 CUDA cores and 512 GB of HBM2 memory; one can expect NVIDIA to beat those counts. The leading "Ampere" part could be HPC-focused, featuring large CUDA and Tensor core counts alongside exotic memory such as HBM2E. We should learn more at the upcoming GTC 2020 online event.
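The DGX-2 totals quoted above follow directly from the per-board specs of the Tesla V100 in its 32 GB refresh (5,120 CUDA cores per "GV100" GPU is a published V100 figure, not stated in the article itself); a quick arithmetic check:

```python
# Sanity-check the DGX-2 aggregate figures from per-board Tesla V100 specs.
boards = 16
cuda_cores_per_board = 5_120   # Tesla V100 "GV100" (published spec)
hbm2_per_board_gb = 32         # 32 GB V100 refresh used in the DGX-2

total_cores = boards * cuda_cores_per_board
total_hbm2_gb = boards * hbm2_per_board_gb

print(total_cores)    # 81920 CUDA cores
print(total_hbm2_gb)  # 512 GB of HBM2
```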

NVIDIA Introduces RAPIDS Open-Source GPU-Acceleration Platform

NVIDIA today announced a GPU-acceleration platform for data science and machine learning, with broad adoption from industry leaders, that enables even the largest companies to analyze massive amounts of data and make accurate business predictions at unprecedented speed.

RAPIDS open-source software gives data scientists a giant performance boost as they address highly complex business challenges, such as predicting credit card fraud, forecasting retail inventory and understanding customer buying behavior. Reflecting the growing consensus about the GPU's importance in data analytics, an array of companies is supporting RAPIDS - from pioneers in the open-source community, such as Databricks and Anaconda, to tech leaders like Hewlett Packard Enterprise, IBM and Oracle.

NVIDIA Turing SDKs Now Available

NVIDIA's Turing architecture is one of the biggest leaps in computer graphics in 20 years. Here's a look at the latest developer software releases to take advantage of this cutting-edge GPU. CUDA 10: CUDA 10 includes support for Turing GPUs, performance optimized libraries, a new asynchronous task-graph programming model, enhanced CUDA & graphics API interoperability, and new developer tools. CUDA 10 also provides all the components needed to build applications for NVIDIA's most powerful server platforms for AI and high performance computing (HPC) workloads, both on-prem (DGX-2) and in the cloud (HGX-2).

TensorRT 5 - Release Candidate: TensorRT 5 delivers up to 40x faster inference performance over CPUs through new optimizations, APIs and support for Turing GPUs. It dramatically optimizes mixed-precision inference across apps such as recommenders, neural machine translation, speech and natural language processing. TensorRT 5 highlights include INT8 APIs offering new flexible workflows, optimization for depthwise separable convolution, and support for Xavier-based NVIDIA DRIVE platforms and the NVIDIA DLA accelerator. In addition, TensorRT 5 brings support for the Windows and CentOS operating systems.
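The INT8 inference mentioned above rests on a simple idea: map a tensor's floating-point range onto the 8-bit integer range, run the math in INT8, and scale back. The sketch below shows symmetric per-tensor quantization in plain NumPy; it illustrates the concept only and is not the TensorRT API:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: max|x| maps to 127."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map INT8 values back to approximate float32 values."""
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.0, 0.25, 1.27], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# x_hat matches x to within one quantization step (the scale)
```

Real calibrators (as in TensorRT) pick the scale from representative data rather than a single tensor's max, trading a little clipping for finer resolution.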

Let's Go Driverless: Daimler, Bosch Select NVIDIA DRIVE for Robotaxi Fleets

(Editor's Note: NVIDIA continues to spread its wings in the AI and automotive markets, where it has rapidly become the de facto player. While the company's gaming products have certainly been the ones to project its image - and profits - on the way to becoming one of the world's leading tech companies, it's hard to deny that AI and datacenter accelerators have become chief profit drivers for NVIDIA. The company's vision for Level 4 and Level 5 autonomous driving and the future of our connected cities is an inspiring one, straight out of yesterday's science fiction. Here's hoping the human mind, laws and city design efforts accompany these huge technological leaps, or at least don't strangle them too much.)

Press a button on your smartphone and go. Daimler, Bosch and NVIDIA have joined forces to bring fully automated and driverless vehicles to city streets, and the effects will be felt far beyond the way we drive. While the world's billion cars travel 10 trillion miles per year, most of the time these vehicles are sitting idle, taking up valuable real estate while parked. And when driven, they are often stuck on congested roadways. Mobility services will solve these issues plaguing urban areas, capture underutilized capacity and revolutionize the way we travel.

NVIDIA Introduces HGX-2, Fusing HPC and AI Computing into Unified Architecture

NVIDIA today introduced the HGX-2, the first unified computing platform for both artificial intelligence and high performance computing. The HGX-2 cloud server platform, with multi-precision computing capabilities, provides unique flexibility to support the future of computing. It allows high-precision calculations using FP64 and FP32 for scientific computing and simulations, while also enabling FP16 and INT8 for AI training and inference. This unprecedented versatility meets the requirements of the growing number of applications that combine HPC with AI.
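The multi-precision trade-off is easy to see in code. A minimal NumPy sketch (NumPy's CPU dtypes stand in here for the GPU's hardware formats) shows how rounding error grows as precision drops from FP64 to FP16:

```python
import numpy as np

# The same constant stored at three of the precisions HGX-2 supports.
# FP64/FP32 suit scientific computing; FP16 (and INT8) suit AI workloads.
pi64 = np.float64(np.pi)   # ~15-16 significant decimal digits
pi32 = np.float32(pi64)    # ~7 significant digits
pi16 = np.float16(pi64)    # ~3 significant digits (10 mantissa bits)

# Rounding error grows as precision drops.
err32 = abs(float(pi32) - np.pi)
err16 = abs(float(pi16) - np.pi)
```

Deep-learning training tolerates (and even benefits from) the coarser FP16 grid, while a fluid-dynamics solver iterated millions of times would accumulate unacceptable error, which is why one platform spanning both formats matters.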

A number of leading computer makers today shared plans to bring to market systems based on the NVIDIA HGX-2 platform. "The world of computing has changed," said Jensen Huang, founder and chief executive officer of NVIDIA, speaking at the GPU Technology Conference Taiwan, which kicked off today. "CPU scaling has slowed at a time when computing demand is skyrocketing. NVIDIA's HGX-2 with Tensor Core GPUs gives the industry a powerful, versatile computing platform that fuses HPC and AI to solve the world's grand challenges."

NVIDIA Announces the DGX-2 System - 16x Tesla V100 GPUs, 30 TB NVMe Memory for $400K

NVIDIA's DGX-2 is likely the reason why NVIDIA seems to be slightly less enamored with the consumer graphics card market as of late. Let's be honest: just look at that price-tag, and imagine the rivers of money NVIDIA is making on each of these systems sold. The data center and deep learning markets have been pouring money into NVIDIA's coffers, and so, the company is focusing its efforts in this space. Case in point: the DGX-2, which sports performance of 1,920 TFLOPs of Tensor processing; 480 TFLOPs of FP16; half that value at 240 TFLOPs for FP32 workloads; and 120 TFLOPs of FP64.
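Those throughput figures scale linearly from the per-GPU numbers; dividing the quoted totals by the system's 16 GPUs recovers per-V100 values (120 Tensor / 30 FP16 / 15 FP32 / 7.5 FP64 TFLOPs), which the arithmetic below re-aggregates:

```python
# Reconstruct the DGX-2 throughput totals from per-GPU figures
# (per-GPU values inferred by dividing the article's totals by 16).
gpus = 16
per_gpu_tflops = {"Tensor": 120, "FP16": 30, "FP32": 15, "FP64": 7.5}

totals = {kind: tflops * gpus for kind, tflops in per_gpu_tflops.items()}
print(totals)  # {'Tensor': 1920, 'FP16': 480, 'FP32': 240, 'FP64': 120.0}
```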

NVIDIA's DGX-2 builds upon the original DGX-1 in every way imaginable. NVIDIA positions these as ready-to-deploy processing powerhouses: everything a prospective user with gargantuan compute requirements needs, in a single system. And the DGX-2 just runs laps around the DGX-1 (which originally sold for $150K) in all aspects: it features 16x 32 GB Tesla V100 GPUs (the DGX-1 featured 8x 16 GB Tesla GPUs); 1.5 TB of system RAM (against the DGX-1's paltry 0.5 TB); 30 TB of NVMe system storage (the DGX-1 sported 8 TB); and even a pair of Xeon Platinum CPUs (admittedly, the smallest performance increase in the whole system).