News Posts matching #DGX A100

NVIDIA to Showcase AI-generated "Large Nature Model" at GTC 2024

The ecosystem around NVIDIA's technologies has always been verdant—but this is absurd. After a stunning premiere at the World Economic Forum in Davos, immersive artworks based on Refik Anadol Studio's Large Nature Model will come to the U.S. for the first time at NVIDIA GTC. Offering a deep dive into the synergy between AI and the natural world, Anadol's multisensory work, "Large Nature Model: A Living Archive," will be situated prominently on the main concourse of the San Jose Convention Center, where the global AI event is taking place, from March 18-21.

Fueled by NVIDIA's advanced AI technology, including powerful DGX A100 stations and high-performance GPUs, the exhibit offers a captivating journey through our planet's ecosystems with stunning visuals, sounds and scents. These scenes are rendered in breathtaking clarity across screens with a total output of 12.5 million pixels, immersing attendees in an unprecedented digital portrayal of Earth's ecosystems. Refik Anadol, recognized by The Economist as "the artist of the moment," has emerged as a key figure in AI art. His work, notable for its use of data and machine learning, places him at the forefront of a generation pushing the boundaries between technology, interdisciplinary research and aesthetics. Anadol's influence reflects a wider movement in the art world towards embracing digital innovation, setting new precedents in how art is created and experienced.

NVIDIA DGX H100 Systems are Now Shipping

Customers from Japan to Ecuador and Sweden are using NVIDIA DGX H100 systems like AI factories to manufacture intelligence. They're creating services that offer AI-driven insights in finance, healthcare, law, IT and telecom—and working to transform their industries in the process. Among the dozens of use cases, one aims to predict how factory equipment will age, so tomorrow's plants can be more efficient.

Called Green Physics AI, it adds information like an object's CO2 footprint, age and energy consumption to SORDI.ai, which claims to be the largest synthetic dataset in manufacturing.

ASUS Announces NVIDIA-Certified Servers and ProArt Studiobook Pro 16 OLED at GTC

ASUS today announced its participation in NVIDIA GTC, a developer conference for the era of AI and the metaverse. ASUS will offer comprehensive NVIDIA-certified server solutions that support the latest NVIDIA L4 Tensor Core GPU—which accelerates real-time video AI and generative AI—as well as the NVIDIA BlueField-3 DPU, igniting unprecedented innovation for supercomputing infrastructure. ASUS will also launch the new ProArt Studiobook Pro 16 OLED laptop with the NVIDIA RTX 3000 Ada Generation Laptop GPU for mobile creative professionals.

Purpose-built GPU servers for generative AI
Generative AI applications enable businesses to develop better products and services, and deliver original content tailored to the unique needs of customers and audiences. ASUS ESC8000 and ESC4000 are fully certified NVIDIA servers that support up to eight NVIDIA L4 Tensor Core GPUs, which deliver universal acceleration and energy efficiency for AI with up to 2.7X more generative AI performance than the previous GPU generation. ASUS ESC and RS series servers are engineered for HPC workloads, with support for the NVIDIA BlueField-3 DPU to transform data center infrastructure, as well as NVIDIA AI Enterprise applications for streamlined AI workflows and deployment.

ORNL's Exaflop Machine Frontier Keeps Top Spot, New Competitor Leonardo Breaks the Top10 List

The 60th edition of the TOP500 reveals that the Frontier system is still the only true exascale machine on the list.

With an HPL score of 1.102 EFlop/s, the Frontier machine at Oak Ridge National Laboratory (ORNL) did not improve upon the score it reached on the June 2022 list. That said, Frontier's near-tripling of the HPL score of the second-place system is still a major victory for computer science. On top of that, Frontier demonstrated a score of 7.94 EFlop/s on the HPL-MxP benchmark, which measures performance for mixed-precision calculation. Frontier is based on the HPE Cray EX235a architecture and relies on AMD EPYC 64C 2 GHz processors. The system has 8,730,112 cores, a power efficiency rating of 52.23 gigaflops/watt, and uses the HPE Slingshot-11 interconnect for data transfer.
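
The efficiency and score figures quoted above also pin down roughly how much power Frontier drew during the benchmark run; a quick back-of-the-envelope check (the wattage below is derived from those figures, not an official number):

```python
# Back-of-the-envelope check using only the TOP500 figures quoted above.
hpl_eflops = 1.102                  # HPL score, EFlop/s
efficiency_gflops_per_watt = 52.23  # reported power efficiency
cores = 8_730_112                   # total core count

hpl_gflops = hpl_eflops * 1e9                               # 1 EFlop/s = 1e9 GFlop/s
power_mw = hpl_gflops / efficiency_gflops_per_watt / 1e6    # watts -> megawatts
print(f"Implied power draw during HPL: ~{power_mw:.1f} MW")           # ~21.1 MW
print(f"HPL throughput per core: ~{hpl_gflops / cores:.0f} GFlop/s")  # ~126 GFlop/s
```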

TOP500 Update Shows No Exascale Yet, Japanese Fugaku Supercomputer Still at the Top

The 58th edition of the TOP500 saw little change in the Top10. The Microsoft Azure system called Voyager-EUS2 was the only machine to shake up the top spots, claiming No. 10. Based on AMD EPYC processors with 48 cores running at 2.45 GHz, working together with NVIDIA A100 GPUs with 80 GB of memory, Voyager-EUS2 also utilizes Mellanox HDR InfiniBand for data transfer.

While there were no other changes to the positions of the systems in the Top10, Perlmutter at NERSC improved its performance to 70.9 Pflop/s. Housed at the Lawrence Berkeley National Laboratory, Perlmutter's increased performance couldn't move it from its previously held No. 5 spot.

NVIDIA Launches UK's Most Powerful Supercomputer

NVIDIA today officially launched Cambridge-1, the United Kingdom's most powerful supercomputer, which will enable top scientists and healthcare experts to use the powerful combination of AI and simulation to accelerate the digital biology revolution and bolster the country's world-leading life sciences industry. Dedicated to advancing healthcare, Cambridge-1 represents a $100 million investment by NVIDIA. Its first projects with AstraZeneca, GSK, Guy's and St Thomas' NHS Foundation Trust, King's College London and Oxford Nanopore Technologies include developing a deeper understanding of brain diseases like dementia, using AI to design new drugs and improving the accuracy of finding disease-causing variations in human genomes.

Cambridge-1 brings together decades of NVIDIA's work in accelerated computing, AI and life sciences, where NVIDIA Clara and AI frameworks are optimized to take advantage of the entire system for large-scale research. An NVIDIA DGX SuperPOD supercomputing cluster, it ranks among the world's top 50 fastest computers and is powered by 100 percent renewable energy.

NVIDIA Announces New DGX SuperPOD, the First Cloud-Native, Multi-Tenant Supercomputer, Opening World of AI to Enterprise

NVIDIA today unveiled the world's first cloud-native, multi-tenant AI supercomputer—the next-generation NVIDIA DGX SuperPOD featuring NVIDIA BlueField-2 DPUs. Fortifying the DGX SuperPOD with BlueField-2 DPUs—data processing units that offload, accelerate and isolate users' data—provides customers with secure connections to their AI infrastructure.

The company also announced NVIDIA Base Command, which enables multiple users and IT teams to securely access, share and operate their DGX SuperPOD infrastructure. Base Command coordinates AI training and operations on DGX SuperPOD infrastructure to enable the work of teams of data scientists and developers located around the globe.

TOP500 Expands Exaflops Capacity Amidst Low Turnover

The 56th edition of the TOP500 saw the Japanese Fugaku supercomputer solidify its number one status in a list that reflects a flattening performance growth curve. Although two new systems managed to make it into the top 10, the full list recorded the smallest number of new entries since the project began in 1993.

The entry level to the list moved up to 1.32 petaflops on the High Performance Linpack (HPL) benchmark, a small increase from 1.23 petaflops recorded in the June 2020 rankings. In a similar vein, the aggregate performance of all 500 systems grew from 2.22 exaflops in June to just 2.43 exaflops on the latest list. Likewise, average concurrency per system barely increased at all, growing from 145,363 cores six months ago to 145,465 cores in the current list.
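
Expressed as percentages, those increments make the flattening growth curve obvious; a small calculation using only the figures quoted above:

```python
# Growth between the June 2020 and November 2020 TOP500 lists,
# computed from the numbers quoted in the paragraph above.
figures = {
    "entry level (PFlop/s)":           (1.23, 1.32),
    "aggregate performance (EFlop/s)": (2.22, 2.43),
    "average cores per system":        (145_363, 145_465),
}
for label, (prev, now) in figures.items():
    print(f"{label}: +{100 * (now - prev) / prev:.1f}%")
# entry level: +7.3%, aggregate performance: +9.5%, average cores: +0.1%
```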

NVIDIA Announces the A100 80GB GPU for AI Supercomputing

NVIDIA today unveiled the NVIDIA A100 80 GB GPU—the latest innovation powering the NVIDIA HGX AI supercomputing platform—with twice the memory of its predecessor, providing researchers and engineers unprecedented speed and performance to unlock the next wave of AI and scientific breakthroughs. The new A100 with HBM2E technology doubles the A100 40 GB GPU's high-bandwidth memory to 80 GB and delivers over 2 terabytes per second of memory bandwidth. This allows data to be fed quickly to A100, the world's fastest data center GPU, enabling researchers to accelerate their applications even faster and take on even larger models and datasets.

"Achieving state-of-the-art results in HPC and AI research requires building the biggest models, but these demand more memory capacity and bandwidth than ever before," said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. "The A100 80 GB GPU provides double the memory of its predecessor, which was introduced just six months ago, and breaks the 2 TB per second barrier, enabling researchers to tackle the world's most important scientific and big data challenges."

NVIDIA Building UK's Most Powerful Supercomputer, Dedicated to AI Research in Healthcare

NVIDIA today announced that it is building the United Kingdom's most powerful supercomputer, which it will make available to U.K. healthcare researchers using AI to solve pressing medical challenges, including those presented by COVID-19.

Expected to come online by year end, the "Cambridge-1" supercomputer will be an NVIDIA DGX SuperPOD system capable of delivering more than 400 petaflops of AI performance and 8 petaflops of Linpack performance, which would rank it No. 29 on the latest TOP500 list of the world's most powerful supercomputers. It will also rank among the world's top 3 most energy-efficient supercomputers on the current Green500 list.
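
The 400-petaflops headline figure is consistent with a straightforward node-count calculation; a sketch that assumes the 80-node DGX A100 configuration NVIDIA has described for Cambridge-1, together with the 5 petaflops of AI performance quoted per DGX A100 system elsewhere on this page:

```python
# Assumed configuration: 80 DGX A100 nodes at 5 PFLOPS of AI performance each
# (the node count is an assumption; it is not stated in the announcement above).
nodes = 80
ai_pflops_per_node = 5
print(f"{nodes * ai_pflops_per_node} petaflops of aggregate AI performance")  # 400
```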

AMD Reports Second Quarter 2020 Financial Results

AMD (NASDAQ:AMD) today announced revenue for the second quarter of 2020 of $1.93 billion, operating income of $173 million, net income of $157 million and diluted earnings per share of $0.13. On a non-GAAP basis, operating income was $233 million, net income was $216 million and diluted earnings per share was $0.18. "We delivered strong second quarter results, led by record notebook and server processor sales as Ryzen and EPYC revenue more than doubled from a year ago," said Dr. Lisa Su, AMD president and CEO. "Despite some macroeconomic uncertainty, we are raising our full-year revenue outlook as we enter our next phase of growth driven by the acceleration of our business in multiple markets."

NVIDIA to Build Fastest AI Supercomputer in Academia

The University of Florida and NVIDIA Tuesday unveiled a plan to build the world's fastest AI supercomputer in academia, delivering 700 petaflops of AI performance. The effort is anchored by a $50 million gift: $25 million from alumnus and NVIDIA co-founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

"We've created a replicable, powerful model of public-private cooperation for everyone's benefit," said Malachowsky, who serves as an NVIDIA Fellow, in an online event featuring leaders from both the UF and NVIDIA. UF will invest an additional $20 million to create an AI-centric supercomputing and data center.

AMD EPYC Processors Ecosystem Continues to Grow with Integration into New NVIDIA DGX A100

AMD today announced that the NVIDIA DGX A100, the third generation of the world's most advanced AI system, is the latest high-performance computing system featuring 2nd Gen AMD EPYC processors. Delivering 5 petaflops of AI performance, the elastic architecture of the NVIDIA DGX A100 enables enterprises to accelerate diverse AI workloads such as data analytics, training, and inference.

The NVIDIA DGX A100 leverages the high-performance capabilities of two AMD EPYC 7742 processors, which together provide 128 cores, DDR4-3200 memory and PCIe 4.0 support, and run at boost speeds of up to 3.4 GHz. The 2nd Gen AMD EPYC processor is the first and only current x86-architecture server processor that supports PCIe 4.0, providing leadership high-bandwidth I/O that's critical for high performance computing and connections between the CPU and other devices like GPUs.
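
The platform-level numbers follow directly from the per-socket specifications of the EPYC 7742 (64 cores and eight DDR4-3200 memory channels per socket); a quick sketch of what the two-socket host side of a DGX A100 provides:

```python
# Per-socket EPYC 7742 figures: 64 cores, 8 x DDR4-3200 memory channels.
sockets = 2
cores_per_socket = 64
channels_per_socket = 8
ddr4_mts = 3200            # mega-transfers per second
bytes_per_transfer = 8     # 64-bit memory channel

total_cores = sockets * cores_per_socket                                             # 128
peak_ddr4_gbs = sockets * channels_per_socket * ddr4_mts * bytes_per_transfer / 1000
print(f"{total_cores} cores, ~{peak_ddr4_gbs:.0f} GB/s theoretical DDR4 bandwidth")  # 128 cores, ~410 GB/s
```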

NVIDIA Announces Financial Results for First Quarter Fiscal 2021

NVIDIA today reported revenue for the first quarter ended April 26, 2020, of $3.08 billion, up 39 percent from $2.22 billion a year earlier, and down 1 percent from $3.11 billion in the previous quarter. GAAP earnings per diluted share for the quarter were $1.47, up 130 percent from $0.64 a year ago, and down 4 percent from $1.53 in the previous quarter. Non-GAAP earnings per diluted share were $1.80, up 105 percent from $0.88 a year earlier, and down 5 percent from $1.89 in the previous quarter.

NVIDIA completed its acquisition of Mellanox Technologies Ltd. on April 27, 2020, for a transaction value of $7 billion. It also transitioned its GPU Technology Conference to an all-digital format, drawing more than 55,000 registered participants, while NVIDIA founder and CEO Jensen Huang's keynote videos were viewed 3.8 million times in their first three days.

NVIDIA RTX 3080 Ti and GA102 "Ampere" Specs, Other Juicy Bits Revealed

PC hardware-focused YouTube channel Moore's Law is Dead published a juicy tech-spec reveal of NVIDIA's next-generation "Ampere" based flagship consumer graphics card, the GeForce RTX 3080 Ti, citing correspondence with sources within NVIDIA. The report talks of big changes to NVIDIA's Founders Edition (reference) board design, as well as what's on the silicon. To begin with, the RTX 3080 Ti reference-design card features a triple-fan cooling solution, unlike the RTX 20-series. This cooler is reportedly quieter than the RTX 2080 Ti FE cooling solution. The card pulls power from a pair of 8-pin PCIe power connectors. Display outputs include three DisplayPort connectors, and one each of HDMI and VirtualLink USB-C. The source confirms that "Ampere" will implement a PCI-Express 4.0 x16 host interface.

With "Ampere," NVIDIA is developing three tiers of high-end GPUs, with the "GA102" leading the pack and succeeding the "TU102," the "GA104" holding the upper-performance segment and succeeding today's "TU104," but a new silicon between the two, codenamed "GA103," with no predecessor from the current-generation. The "GA102" reportedly features 5,376 "Ampere" CUDA cores (up to 10% higher IPC than "Turing"). The silicon also taps into the rumored 7 nm-class silicon fabrication node to dial up GPU clock speeds well above 2.20 GHz even for the "GA102." Smaller chips in the series can boost beyond 2.50 GHz, according to the report. Even with the "GA102" being slightly cut-down for the RTX 3080 Ti, the silicon could end up with FP32 compute performance in excess of 21 TFLOPs. The card uses faster 18 Gbps GDDR6 memory, ending up with 863 GB/s of memory bandwidth that's 40% higher than that of the RTX 2080 Ti (if the memory bus width ends up 384-bit). Below are screengrabs from the Moore's Law is Dead video presentation, and not NVIDIA slides.

NVIDIA DGX A100 is its "Ampere" Based Deep-learning Powerhouse

NVIDIA will give its DGX line of pre-built deep-learning research workstations its next major update in the form of the DGX A100. This system will likely pack a number of the company's upcoming Tesla A100 scalar compute accelerators based on its next-generation "Ampere" architecture and "GA100" silicon. The A100 came to light through fresh trademark applications by the company. As for specs and numbers, we don't know yet. The "Volta" based DGX-2 has up to sixteen "GV100" based Tesla boards adding up to 81,920 CUDA cores and 512 GB of HBM2 memory. One can expect NVIDIA to beat this count. The leading "Ampere" part could be HPC-focused, featuring large CUDA and Tensor core counts, besides exotic memory such as HBM2E. We should learn more about it at the upcoming GTC 2020 online event.
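
For reference, the DGX-2 totals quoted above break down per accelerator as follows (a full "GV100" carries 5,120 CUDA cores, and each Tesla V100 board in the DGX-2 has 32 GB of HBM2):

```python
# DGX-2 per-accelerator breakdown behind the totals quoted above.
gpus = 16
cuda_cores_per_gpu = 5120   # full GV100
hbm2_gb_per_gpu = 32        # Tesla V100 32 GB variant
print(f"{gpus * cuda_cores_per_gpu:,} CUDA cores")  # 81,920
print(f"{gpus * hbm2_gb_per_gpu} GB of HBM2")       # 512
```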