Wednesday, November 18th 2020

NVIDIA Announces Financial Results for Third Quarter Fiscal 2021

NVIDIA (NASDAQ: NVDA) today reported record revenue for the third quarter ended October 25, 2020, of $4.73 billion, up 57 percent from $3.01 billion a year earlier, and up 22 percent from $3.87 billion in the previous quarter. GAAP earnings per diluted share for the quarter were $2.12, up 46 percent from $1.45 a year ago, and up 114 percent from $0.99 in the previous quarter. Non-GAAP earnings per diluted share were $2.91, up 63 percent from $1.78 a year earlier, and up 33 percent from $2.18 in the previous quarter.
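For readers who want to verify the headline percentages, here is a minimal Python sketch that reproduces them from the stated dollar figures (rounding to whole percentages, as the release does):

```python
# Sanity check of the headline growth figures quoted above.
# Revenue in billions of dollars, EPS in dollars, all taken from the release.
def pct_change(current, prior):
    """Percent change from prior to current, rounded to a whole percent."""
    return round((current - prior) / prior * 100)

metrics = {
    "revenue":      {"q3_fy21": 4.73, "q3_fy20": 3.01, "q2_fy21": 3.87},
    "GAAP EPS":     {"q3_fy21": 2.12, "q3_fy20": 1.45, "q2_fy21": 0.99},
    "non-GAAP EPS": {"q3_fy21": 2.91, "q3_fy20": 1.78, "q2_fy21": 2.18},
}

for name, m in metrics.items():
    yoy = pct_change(m["q3_fy21"], m["q3_fy20"])
    qoq = pct_change(m["q3_fy21"], m["q2_fy21"])
    print(f"{name}: up {yoy}% year over year, up {qoq}% quarter over quarter")
# Output matches the release: 57%/22%, 46%/114%, 63%/33%.
```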

"NVIDIA is firing on all cylinders, achieving record revenues in Gaming, Data Center and overall," said Jensen Huang, founder and CEO of NVIDIA. "The new NVIDIA GeForce RTX GPU provides our largest-ever generational leap and demand is overwhelming. NVIDIA RTX has made raytracing the new standard in gaming. "We are continuing to raise the bar with NVIDIA AI. Our A100 compute platform is ramping fast, with the top cloud companies deploying it globally. We swept the industry AI inference benchmark, and our customers are moving some of the world's most popular AI services into production, powered by NVIDIA technology.
"We announced the NVIDIA DPU programmable data center processor, and the planned acquisition of Arm, creator of the world's most popular CPU. We are positioning NVIDIA for the age of AI, when computing will extend from the cloud to trillions of devices."

NVIDIA paid $99 million in quarterly cash dividends in the third quarter. It will pay its next quarterly cash dividend of $0.16 per share on December 29, 2020, to all shareholders of record on December 4, 2020.

NVIDIA's outlook for the fourth quarter of fiscal 2021 is as follows (a rough net-income calculation implied by these midpoints is sketched after the list):
  • Revenue is expected to be $4.80 billion, plus or minus 2 percent.
  • GAAP and non-GAAP gross margins are expected to be 62.8 percent and 65.5 percent, respectively, plus or minus 50 basis points.
  • GAAP and non-GAAP operating expenses are expected to be approximately $1.64 billion and $1.18 billion, respectively.
  • GAAP and non-GAAP other income and expense are both expected to be an expense of approximately $55 million.
  • GAAP and non-GAAP tax rates are both expected to be 8 percent, plus or minus 1 percent, excluding any discrete items. GAAP discrete items include excess tax benefits or deficiencies related to stock-based compensation, which are expected to generate variability on a quarter-by-quarter basis.
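Taking the outlook at its midpoints, one can sketch the net income it implies. This back-of-the-envelope Python calculation is illustrative only; it is not company guidance, and it ignores the stated ranges, discrete tax items and share count (so no EPS is derived):

```python
# Implied Q4 FY2021 net income from the outlook midpoints above (in billions).
# Illustrative only: ignores the +/- ranges and discrete tax items.
def implied_net_income(revenue, gross_margin, opex, other_expense, tax_rate):
    gross_profit = revenue * gross_margin
    operating_income = gross_profit - opex
    pretax_income = operating_income - other_expense  # "other" is an expense here
    return pretax_income * (1 - tax_rate)

revenue, other_expense, tax_rate = 4.80, 0.055, 0.08
print(f"GAAP:     ~${implied_net_income(revenue, 0.628, 1.64, other_expense, tax_rate):.2f}B")
print(f"non-GAAP: ~${implied_net_income(revenue, 0.655, 1.18, other_expense, tax_rate):.2f}B")
# Roughly $1.21B GAAP and $1.76B non-GAAP net income at the midpoints.
```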
Highlights
During the third quarter, NVIDIA announced a definitive agreement to acquire Arm Limited from SoftBank Capital Limited and SVF Holdco (UK) Limited in a transaction valued at $40 billion. The transaction will combine NVIDIA's leading AI computing platform with Arm's vast ecosystem to create the premier computing company for the age of AI. The transaction - which is expected to be immediately accretive to NVIDIA's non-GAAP gross margin and non-GAAP earnings per share - is expected to close in the first quarter of calendar 2022.

NVIDIA also announced plans to build a world-class AI lab in Cambridge, England - including a powerful AI supercomputer based on NVIDIA and Arm technology - and provide research fellowships and partnerships with local institutions and AI training courses. Separately, it plans to build Cambridge-1, the U.K.'s most powerful AI supercomputer, based on an NVIDIA DGX SuperPOD system and designed for AI research in healthcare and drug discovery.

NVIDIA also achieved progress since its previous earnings announcement in these areas:

Data Center
  • Third-quarter revenue was a record $1.90 billion, up 8 percent from the previous quarter and up 162 percent from a year earlier.
  • Shared news that Amazon Web Services and Oracle Cloud Infrastructure announced general availability of cloud computing instances based on the NVIDIA A100 GPU, following Google Cloud Platform and Microsoft Azure.
  • Announced the NVIDIA DGX SuperPOD Solution for Enterprise - the world's first turnkey AI infrastructure - which is expected to be installed by year-end in Korea, the U.K., India and Sweden.
  • Announced that five supercomputers backed by EuroHPC - including "Leonardo," the world's fastest AI supercomputer built by the Italian inter-university consortium CINECA - will use NVIDIA's data center accelerators or networking.
  • Introduced the NVIDIA BlueField-2 DPU (data processing unit) - supported by NVIDIA DOCA, a novel data-center-infrastructure-on-a-chip architecture - to bring breakthrough networking, storage and security performance to every data center.
  • Announced a broad partnership with VMware to create an end-to-end enterprise platform for AI and a new architecture for data center, cloud and edge using NVIDIA DPUs, benefiting 300,000-plus VMware customers.
  • Unveiled NVIDIA Maxine, an AI video-streaming platform that enhances streaming quality and offers such AI-powered features as gaze correction, super-resolution, noise cancellation and face relighting.
  • Introduced the NVIDIA RTX A6000 and NVIDIA A40 GPUs, built on the NVIDIA Ampere architecture and featuring new RT Cores, Tensor Cores and CUDA cores.
  • Extended its lead on MLPerf performance benchmarks for inference, winning every test across all six application areas for data center and edge computing systems.
  • Announced a partnership with GSK to integrate computing platforms for imaging, genomics and AI into the drug and vaccine discovery process.
  • Introduced at SC20 three powerful advances in AI technology: the NVIDIA A100 80 GB GPU, powering the NVIDIA HGX AI supercomputing platform with twice the memory of its predecessor; the NVIDIA DGX Station A100, the world's only petascale workgroup server, for machine learning and data science workloads; and the next generation of NVIDIA Mellanox InfiniBand, for the fastest networking performance.
Gaming
  • Third-quarter revenue was a record $2.27 billion, up 37 percent from the previous quarter and up 37 percent from a year earlier.
  • Unveiled the GeForce RTX 30 Series GPUs, powered by the NVIDIA Ampere architecture and the second generation of RTX, with up to 2x the performance of the previous Turing-based generation.
  • Announced that Fortnite - the world's most popular video game - will support NVIDIA RTX real-time ray tracing and DLSS AI super-resolution, joining more than two dozen other titles including Cyberpunk 2077, Call of Duty: Black Ops Cold War and Watch Dogs: Legion.
  • Introduced NVIDIA Reflex, a suite of technologies that improves reaction time in games by reducing system latency, which is available in Fortnite, Valorant, Call of Duty: Warzone and Apex Legends, among other titles.
  • Unveiled NVIDIA Broadcast, a plugin that enhances microphone, speaker and webcam quality with RTX-accelerated AI effects.
Professional Visualization
  • Third-quarter revenue was $236 million, up 16 percent from the previous quarter and down 27 percent from a year earlier.
  • Brought to open beta NVIDIA Omniverse, the world's first NVIDIA RTX-based 3D simulation and collaboration platform.
  • Announced NVIDIA Omniverse Machinima, which lets creators build cinematics using video game assets animated by NVIDIA AI technologies.
  • Collaborated with Adobe to bring GPU-accelerated neural filters, a set of AI-powered tools, to Adobe Photoshop.
Automotive
  • Third-quarter revenue was $125 million, up 13 percent from the previous quarter and down 23 percent from a year earlier.
  • Announced with Mercedes-Benz that NVIDIA is powering the next-generation MBUX AI cockpit system, to be featured first in the new S-class sedan, with such features as an augmented reality heads-up display, AI voice assistant and interactive graphics.
  • Announced with Hyundai Motor Group that the Korean automaker's entire lineup of Hyundai, Kia and Genesis models will come standard with NVIDIA DRIVE in-vehicle infotainment systems, starting in 2022.
  • Announced that China's Li Auto will develop its next generation of electric vehicles using NVIDIA DRIVE AGX Orin, a software-defined platform for autonomous vehicles.

38 Comments on NVIDIA Announces Financial Results for Third Quarter Fiscal 2021

#26
Vya Domus
sith'ari said: "I have underlined what you said and I answered it. You said: """""It had little to do with the architecture, AMD was in trouble when they were stuck using GloFo...""""" and I answered by pointing out that the Radeon VII and the RX 5700 series were manufactured on TSMC 7nm, not by GloFo."
Bro, stop adding a million quotation marks, you are going to give me a seizure if your logic doesn't kill me first.

Radeon VII is the same Vega architecture that was designed on GloFo 14nm. The Radeon VII was 23% faster than Vega 64 with fewer shaders while using less power, proving just how bad the GloFo node was.
nguyen said: "Best node doesn't mean good GPU."
Funny how almost every good GPU also happens to be on a good node. Reality doesn't confirm your ideas.
#27
nguyen
Vya Domus said: "Bro, stop adding a million quotation marks, you are going to give me a seizure if your logic doesn't kill me first.

Radeon VII is the same Vega architecture that was designed on GloFo 14nm. The Radeon VII was 23% faster than Vega 64 with fewer shaders while using less power, proving just how bad the GloFo node was. Funny how almost every good GPU is also on a good node. Reality doesn't confirm your ideas."
Yeah sure, like Navi on 7nm does anything better than Turing on 16/12nm :roll:.

Anyway, Nvidia is basically printing money as fast as they can with Samsung 8N, so there's no point in discussing the downfall of Samsung 8N.
#28
sith'ari
Vya Domus said: "... Radeon VII is the same Vega architecture that was designed on GloFo 14nm. The Radeon VII was 23% faster than Vega 64 with fewer shaders while using less power, proving just how bad the GloFo node was."
It was 7nm, yet it couldn't compete with nVIDIA's 12nm.
This proves my point when I said: """"""And with their previous lines, AMD had massive performance issues when compared to nVIDIA's architectures"""""",
and this is what you refuted in your first reply to me.
Can we even understand each other or not??? Look again at what your first reply to me was...
#29
Vya Domus
nguyen said: "Yeah sure, like Navi on 7nm does anything better than Turing on 16/12nm."
Yeah, AMD's 250mm^2 chip performing the same as Nvidia's 450mm^2 chip, that was a pretty bad showing for the TSMC 7nm node.
nguyen said: "There's no point in discussing the downfall of Samsung 8N."
You sure love to go back and forth on it, though.
sith'ari said: "It was 7nm, yet it couldn't compete with nVIDIA's 12nm."
It was a 300 mm^2 chip, and Nvidia was already pushing 750 mm^2 chips with a lot more transistors, by the way. Considering the die sizes and transistor budgets, AMD's 7nm designs were already superior by then. If you insist on ignoring everything besides performance, then you can no longer claim an architecture was worse or better, because you need comparable transistor budgets and nodes to work that out. Nice try, though.

Hello, maybe use your brain before you make these baffling comparisons?
sith'ari said: "Can we even understand each other or not?"
Sadly not, I can barely make out what you write. Needs more quotation marks.
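To put numbers on the perf-per-area argument: a quick sketch using the die sizes claimed in this exchange (these are the posters' figures, not official measurements), assuming equal gaming performance:

```python
# Back-of-the-envelope perf-per-area comparison from the figures in this thread.
# Both die sizes are the posters' claims, not official numbers.
navi_area_mm2 = 250    # claimed AMD 7nm die size
turing_area_mm2 = 450  # claimed Nvidia 12nm die size

# With equal absolute performance, perf/mm^2 scales inversely with die area.
ratio = turing_area_mm2 / navi_area_mm2
print(f"Implied perf-per-area advantage for the 7nm part: {ratio:.1f}x")  # ~1.8x

# Caveat, per the argument above: some of that 1.8x comes from 7nm density
# rather than the architecture, which is why same-node comparisons are needed.
```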
#30
sith'ari
Vya Domus said: "Yeah, AMD's 250mm^2 chip performing the same as Nvidia's 450mm^2 chip, that was a pretty bad showing for the TSMC 7nm node. It was a 300 mm^2 chip, and Nvidia was already pushing 750 mm^2 chips with a lot more transistors, by the way. Considering the die sizes and transistor budgets, AMD's 7nm designs were already superior by then."
It was AMD's problem that their architecture couldn't scale up so they could create a larger GPU that could compete, even though they had a 7nm process advantage. So, just like I said time and time again:
""""""And with their previous lines, AMD had massive performance issues when compared to nVIDIA's architectures"""""".
Apparently we fail to communicate, and I won't waste any more of my time.
Cheers.
#31
Vya Domus
sith'ari said: "Apparently we fail to communicate, and I won't waste any more of my time."
We fail to communicate because you don't understand how one goes about comparing architectures: you need, at the very least, a similar manufacturing process to isolate differences in architectural efficiency.
sith'ari said: "even though they had a 7nm process advantage"
The design was conceived on 14nm and was unchanged; it was simply ported to 7nm. Of course it wasn't scaled up, it was just a node shrink. Genius.

Anyway, cheers. And a lot of """"""""""".
#32
TristanX
nguyen said: "Seems like Ampere is earning Nvidia so much money despite the poor availability many people have claimed? It's like they are printing money there."
Not so. Look at the shops: all the high-end Turing cards (2080 Ti, 2080(S), 2070(S), 2060(S)) are sold out.
#33
nguyen
TristanX said: "Not so. Look at the shops: all the high-end Turing cards (2080 Ti, 2080(S), 2070(S), 2060(S)) are sold out."
Nvidia discontinued them back in July, so yeah, sold out is expected?
#34
95Viper
Stop your back-and-forth bickering... stay on topic.
Report the post, and stop posting responses after you report it.
A quote from the Guidelines (you might want to familiarize yourself with them):
Reporting and complaining
All posts and private messages have a "report post" button on the bottom of the post, click it when you feel something is inappropriate. Do not use your report as a "wild card invitation" to go back and add to the drama and therefore become part of the problem.
Thank You and Have a Nice Day
#35
medi01
I wonder when that Q3 is and where the YoY revenue spike comes from.
#36
Ashtr1x
Vya Domus said: "Faster, probably not by much, but it would have been a lot more power efficient. And yes, they have literally saved a few bucks per chip; you can use a wafer calculator and look up estimated prices for TSMC and Samsung and see for yourself. Of course, all of this seems dumb and a mystery to people who don't know anything about this; I don't blame you for believing that.

But then what's the alternative? Say Samsung's node is as fucking amazing as you think it is; the massive gap in efficiency between them and AMD must then be explained somehow, and the only other explanation is that instead of the node being garbage, their architecture is garbage.

And by the way, the 'process made specifically for Nvidia' is actually the high-performance version of Samsung's 8nm node, the same way 12nm was just TSMC's 16nm."

I never said Samsung's node is amazing. TSMC had a lot of funding from Apple alone, and they had a huge advantage over Samsung in yield and experience thanks to multiple contracts. I know 8N is an extension of 10nm, and it doesn't use EUV like 7N does. I'm talking only about the performance side, not efficiency. On top of that, 7N is denser than 8N. Their GPUs' throttling temperature (a 13MHz clock drop) is now at 88C; I don't know whether going with 7N would even reduce that, or how much it would help performance, considering everything Nvidia had to weigh, including competition from AMD and the supply and demand for 7N capacity.

And I don't know where to calculate wafer prices and such, I'm not much into that, but a quick Google search turned up the figures below.

"In other words, in the best of cases NVIDIA would be paying $ 5,600 per wafer." - Samsung 8N (Article that mentions this along with more details incl. perf vs nodes)
Here's TSMC price of the 7N wafer cost from the leaker that we all know ~$9300 / wafer.
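For anyone who wants to reproduce the wafer math without a web calculator, here is a minimal sketch using the prices quoted above. The die size is GA102's public ~628 mm^2 figure; the yield is purely an assumption, since real yields are not disclosed:

```python
import math

# Rough cost-per-good-die comparison using the quoted (unofficial) wafer prices.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard die-per-wafer approximation (ignores scribe lines and edge exclusion)."""
    radius = wafer_diameter_mm / 2
    return math.floor(math.pi * radius**2 / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

die_area_mm2 = 628.0   # GA102 (RTX 3080/3090) die size, a public figure
assumed_yield = 0.6    # pure assumption; real yields are not disclosed

for node, wafer_cost in [("Samsung 8N", 5600), ("TSMC 7nm", 9300)]:
    good_dies = dies_per_wafer(die_area_mm2) * assumed_yield
    print(f"{node}: ~${wafer_cost / good_dies:.0f} per good die")
# Note: a 7nm version of the same chip would be denser and thus smaller,
# so this overstates the 7nm cost per die; it only bounds the comparison.
```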

On top of all this, Nvidia's Ampere never seemed to put an insane focus on rasterization performance, considering how they have been approaching the HPC side of things since Volta. I think that was apparent once they put out the details and leaks, and from their changes to FP32 compute too.
#37
wolf
For all the stock-versus-demand shortfalls and the inferior-node business, it would appear that revenue has still been excellent. I wonder what the Ampere refresh will bring next year.
#38
nguyen
wolf said: "For all the stock-versus-demand shortfalls and the inferior-node business, it would appear that revenue has still been excellent. I wonder what the Ampere refresh will bring next year."
Well, this is like the second coming of the mining craze: any and all GPUs will get gobbled up, so whoever produces more cards wins this round :laugh:. It seems AMD is struggling to produce the RX 6000 while delivering Ryzen 5000 and the PS5/Xbox Series X/S at the same time.
The PS5/Xbox contract might have bitten AMD's hand; they would rather have all that TSMC 7nm capacity making Ryzen 5000/RX 6000.