Intel CEO Pat Gelsinger considers NVIDIA "extraordinarily lucky" to be leading the AI hardware industry. In a recent public discussion with students at MIT's engineering school about the state of the semiconductor industry, Gelsinger said that Intel should be the one leading AI, but that NVIDIA got lucky instead. We respectfully disagree. What Gelsinger glosses over with this train of thought is how NVIDIA got here. What NVIDIA has in 2023 is the distinction of being one of the hottest tech stocks behind Apple, the highest market share in a crucial hardware resource driving the AI revolution, and of course the little things, like leadership of the gaming GPU market. What it doesn't have is access to the x86 processor IP.
NVIDIA has long aspired to be a CPU company, from its rumored attempt to merge with AMD in the early-to-mid 2000s, to its stint with smartphone application processors with Tegra, an assortment of Arm-based products along the way, and most recently, its spectacularly unsuccessful attempt to acquire Arm from SoftBank. Despite limited luck in the CPU industry, where it never leveled up to Intel, AMD, or even Qualcomm and MediaTek, NVIDIA never lost sight of its goal to be a compute hardware superpower, which is why, in our opinion, it owns the AI hardware market. NVIDIA isn't lucky; it spent 16 years getting here.
NVIDIA's journey to AI hardware leadership began back in the late 2000s, when it saw the potential for the GPU to be a general-purpose processor, since programmable shaders essentially made the GPU a many-core processor with a small amount of fixed-function raster hardware on the side. The vast majority of an NVIDIA GPU's die area is made up of streaming multiprocessors—the GPU's programmable SIMD muscle.
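To make that idea concrete, here is a minimal sketch of the kind of data-parallel code such a processor runs: a SAXPY kernel in CUDA C++, where thousands of lightweight threads each apply the same operation to a different element. This is an illustrative example written for this editorial, not code from NVIDIA's documentation.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of y = a*x + y. The streaming
// multiprocessors schedule these threads in lockstep groups (warps),
// which is what makes the hardware behave like a wide SIMD machine.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same source compiles for every generation of NVIDIA hardware since 2007, which hints at why the software stack, not the silicon alone, became the moat.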
NVIDIA's primordial attempts to break into the HPC market with its GPUs bore fruit with its "Tesla" GPU and the Compute Unified Device Architecture, or CUDA. NVIDIA's unique software stack that lets developers build and accelerate applications on its hardware dates all the way back to 2007. CUDA set in motion a long and exhaustive journey leading up to NVIDIA's first bets on accelerated AI on its GPUs a decade later, beginning with "Volta." NVIDIA realized that despite the vast number of CUDA cores on its GPUs and HPC processors, it needed some fixed-function hardware to speed up deep-learning neural network building, training, and inference, and so it developed the Tensor core.
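For a sense of what that fixed-function hardware exposes to programmers, below is a hedged sketch using CUDA's warp matrix (WMMA) API from mma.h, which is how code targets Tensor cores directly on "Volta" and later GPUs. The 16×16×16 tile shape with FP16 inputs and FP32 accumulation is one commonly supported configuration; treat this as an illustration rather than production code.

```cpp
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp multiplies a 16x16 FP16 tile of A by a 16x16 FP16 tile of B,
// accumulating into a 16x16 FP32 tile of C on the Tensor cores.
// Compile with -arch=sm_70 ("Volta") or newer.
__global__ void tensor_core_tile(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);        // zero the accumulator tile
    wmma::load_matrix_sync(a_frag, a, 16);    // load A tile (leading dimension 16)
    wmma::load_matrix_sync(b_frag, b, 16);    // load B tile
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // C += A * B on Tensor cores
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}
```

A real matrix multiply tiles many such fragments across warps and thread blocks; libraries like cuBLAS and cuDNN do that internally, which is why most AI frameworks get Tensor-core speedups without their developers ever writing WMMA code themselves.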
In all this time, Intel continued to behave like a CPU company and not a compute company—the majority of its revenue came from client CPUs, followed by server CPUs, and it consistently held accelerators at a lower priority. Even as Tesla and CUDA took off in 2007, Intel had its first blueprints for a SIMD accelerator, codenamed "Larrabee," as early as 2008. The company never accorded Larrabee the focus it needed as a nascent hardware technology. But that's on Intel. AMD has been a CPU + GPU company since its acquisition of ATI in 2006, and has tried to play catch-up with NVIDIA by combining its Stream compute architecture with open compute software technologies. The reason AMD's Instinct CDNA processors aren't as successful as NVIDIA's A100 and H100 processors is the same reason Intel never stood a chance in this market with its "Ponte Vecchio"—both were slow to market, and neither company nurtured an ecosystem around its silicon quite like NVIDIA did.
Hardware is only a fraction of NVIDIA's growth story—the company has an enormous, top-down software stack, including its own programming language, APIs, and prebuilt compute and AI models, and a thriving ecosystem of independent developers and ISVs that it has nurtured over the years. So by the time AI took off at scale as a revolution in computing, NVIDIA was ready with the fastest hardware and the largest community of developers who could put it to use. We began this editorial by noting that NVIDIA lacks access to the x86 processor IP; it's a good thing it never acquired an x86 license in the early 2000s, because that left it free to switch gears and look inward at the one thing it was already making that could crunch numbers at scale—GPUs with programmable shaders. What NVIDIA is extraordinarily lucky about is that it didn't get stuck with an x86 license.
You can watch Pat Gelsinger's interview over at MIT's YouTube channel, here.