Thursday, December 21st 2023
Intel Should be Leading the AI Hardware Market: Pat Gelsinger on NVIDIA Getting "Extraordinarily Lucky"
Intel CEO Pat Gelsinger considers NVIDIA "extraordinarily lucky" to be leading the AI hardware industry. In a recent public discussion with students of MIT's engineering school on the state of the semiconductor industry, Gelsinger said that Intel should be the one leading AI, but that NVIDIA got lucky instead. We respectfully disagree. What Gelsinger glosses over with this train of thought is how NVIDIA got here. What NVIDIA has in 2023 is the distinction of being one of the hottest tech stocks behind Apple, the highest market share in a crucial hardware resource driving the AI revolution, and of course the little things, like leadership of the gaming GPU market. What it doesn't have is access to the x86 processor IP.
NVIDIA has long aspired to be a CPU company, from its rumored attempt to merge with AMD in the early-to-mid 2000s, to its stint with smartphone application processors with Tegra, an assortment of Arm-based products along the way, and most recently, its spectacularly unsuccessful attempt to acquire Arm from SoftBank. Despite its limited luck in the CPU industry measuring up to Intel, AMD, or even Qualcomm and MediaTek, NVIDIA never lost sight of its goal to be a compute hardware superpower, which is why, in our opinion, it owns the AI hardware market. NVIDIA isn't lucky; it spent 16 years getting here.

NVIDIA's journey to AI hardware leadership began in the late 2000s, when it saw the potential for the GPU to be a general-purpose processor: programmable shaders had essentially made the GPU a many-core processor with a small amount of fixed-function raster hardware on the side. The vast majority of an NVIDIA GPU's die area is made up of streaming multiprocessors, the GPU's programmable SIMD muscle.
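To make the "many-core SIMD processor" point concrete, here is a minimal CUDA sketch of our own (the kernel name, sizes, and values are illustrative, not from the article): a single scalar kernel body is executed by roughly a million threads, scheduled across the GPU's streaming multiprocessors.

// Minimal sketch of GPU data parallelism: one trivial kernel body,
// run by ~1M threads spread across the streaming multiprocessors.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique global thread index
    if (i < n) y[i] = a * x[i] + y[i];              // each thread handles one element
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // 4096 blocks of 256 threads each; the hardware scheduler spreads them over all SMs.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 5.0
    cudaFree(x); cudaFree(y);
    return 0;
}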
NVIDIA's primordial attempts to break into the HPC market with its GPUs bore fruit with its "Tesla" GPU and the Compute Unified Device Architecture, or CUDA. NVIDIA's unique software stack that lets developers build and accelerate applications on its hardware dates all the way back to 2007. CUDA set in motion a long, exhausting journey leading up to NVIDIA's first bets on accelerated AI on its GPUs a decade later, beginning with "Volta." NVIDIA realized that despite the vast number of CUDA cores on its GPUs and HPC processors, it needed some fixed-function hardware to speed up deep learning neural network building, training, and inference, and so it developed the Tensor core.
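For a sense of what that fixed-function hardware exposes to programmers, here is a hedged sketch using CUDA's public WMMA API (a Volta-or-newer GPU and nvcc targeting sm_70 or later are assumed; the kernel name is our own): one warp cooperatively hands a 16x16x16 half-precision matrix multiply to a Tensor core.

// Sketch: one warp drives a 16x16x16 half-precision matrix multiply on a
// Tensor core via the WMMA API. Launch with a single warp: wmma_tile<<<1, 32>>>.
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void wmma_tile(const half *a, const half *b, float *c) {
    // Per-warp tile fragments that the Tensor core operates on.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // C = 0
    wmma::load_matrix_sync(a_frag, a, 16);           // load the A tile (leading dim 16)
    wmma::load_matrix_sync(b_frag, b, 16);           // load the B tile
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // C += A x B, one hardware operation
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

In practice few developers write this by hand; libraries like cuDNN and cuBLAS drive the same hardware, which is the ecosystem point made below.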
In all this time, Intel continued to behave like a CPU company and not a compute company: the majority of its revenue came from client CPUs, followed by server CPUs, and it consistently held accelerators at a lower priority. Even as Tesla and CUDA took off in 2007, Intel had its first blueprints for a SIMD accelerator, codenamed "Larrabee," as early as 2008. But the company never accorded Larrabee the focus it needed as a nascent hardware technology, and that's on Intel. AMD has been a CPU + GPU company since its acquisition of ATI in 2006, and has tried to play catch-up with NVIDIA by combining its Stream compute architecture with open compute software technologies. The reason AMD's Instinct CDNA processors aren't as successful as NVIDIA's A100 and H100 processors is the same reason Intel never stood a chance in this market with its "Ponte Vecchio": each was slow to market, and neither company nurtured an ecosystem around its silicon quite like NVIDIA did.
Hardware is only a fraction of NVIDIA's growth story. The company has an enormous, top-down software stack, including its own programming language, APIs, and prebuilt compute and AI models, plus a thriving ecosystem of independent developers and ISVs that it has nurtured over the years. So by the time AI took off at scale as a revolution in computing, NVIDIA was ready with the fastest hardware and the largest community of developers who could put it to use. We began this editorial by noting that NVIDIA lacks access to x86 processor IP, and that turned out to be a good thing: denied a CPU business in the early 2000s, the company switched gears and looked inward at the one thing it was already making that could crunch numbers at scale, GPUs with programmable shaders. What NVIDIA is extraordinarily lucky about is that it didn't get stuck with an x86 license.
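As one small illustration of that prebuilt stack, the hedged sketch below (matrix sizes and values are our own; link with -lcublas) performs a matrix multiply through cuBLAS, NVIDIA's GPU BLAS library, rather than through a hand-written kernel. This layering is what let developers adopt the hardware without becoming GPU architects.

// Sketch: a single-precision matrix multiply through cuBLAS instead of a
// hand-written kernel. Sizes and values are illustrative.
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 2;                      // 2x2 matrices, for brevity
    float hA[] = {1, 2, 3, 4};            // column-major, as cuBLAS expects
    float hB[] = {5, 6, 7, 8};
    float hC[4] = {0};

    float *dA, *dB, *dC;
    cudaMalloc(&dA, sizeof(hA)); cudaMalloc(&dB, sizeof(hB)); cudaMalloc(&dC, sizeof(hC));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C; all matrices n x n with leading dimension n.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC, dC, sizeof(hC), cudaMemcpyDeviceToHost);
    printf("C[0][0] = %f\n", hC[0]);      // expect 23 (1*5 + 3*6, column-major)
    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}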
You can watch Pat Gelsinger's full interview on MIT's YouTube channel.
Source: ExtremeTech
55 Comments on Intel Should be Leading the AI Hardware Market: Pat Gelsinger on NVIDIA Getting "Extraordinarily Lucky"
NVIDIA very much could be the reason hardware-accelerated machine learning got to where it is today. I remember attending a GTC in 2011 where it was already in the air.
While AMD was struggling to survive and NVIDIA was investing in innovation, Intel was paying big dividends.
www.techpowerup.com/61702/university-of-antwerp-makes-4k-eur-supercomputer-with-four-geforce-9800-gx2-cards
All of that "lucky" BrookGPU -> CUDA -> cuDNN roadmap that took decades to execute on.
They even have the marketing slides.
Just adding a couple of things here.
Nvidia tried to get an x86 license. Intel said no.
Huang once said (I think; I read it many, many years ago), I believe around 2005 or even earlier: "Nvidia is a software company that also happens to build the hardware where that software will run."
And one more thought.
Creative was the king of audio 20+ years ago. With onboard audio solutions becoming the norm, Creative just became an old name to remember.
I think Huang saw that, and making the GPU a powerful coprocessor wasn't only a great vision, but also a necessary transformation of the simple graphics chip, which could otherwise have been replaced in the future by cheap onboard solutions good enough for the majority of consumers.
Development, tools, and community are why. NVIDIA actively runs workshops and courses, and actively makes the tools available on hardware anyone can buy (even your local computer store's gaming GPU) to learn and perform ML work. No server-grade or expensive hardware needed.
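To the commenter's point, a hedged sketch: the same CUDA runtime API reports any installed GPU, a store-bought GeForce included, with the compute capability that ML frameworks check for.

// Sketch: enumerate whatever CUDA GPUs are installed; a consumer GeForce
// card reports through the same runtime API as a datacenter part.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    for (int i = 0; i < count; i++) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        printf("GPU %d: %s, compute capability %d.%d, %d SMs\n",
               i, p.name, p.major, p.minor, p.multiProcessorCount);
    }
    return 0;
}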
We should ALL be praying somebody knocks Nvidia off their high horse. Monopoly and AI alone are issues that represent many dangers, but together... that could be truly dangerous.
Going back to the formation of the "Gen" graphics division (separate from Larrabee; those were two distinct teams), Intel invested heavily in the development of fixed-function accelerators for video with Quick Sync. They also began work on AMX back in 2017, but made the poor decision to only incorporate that hardware in Sapphire Rapids, which saw a historic 30+ months of delays before finally crawling over the finish line. So it's not that they sat back and did nothing; they simply weren't strategic enough in the use of their technologies.
In any case it makes him look like a fool. If a competitor has done well through decades of careful planning and competent execution, you acknowledge that; to do otherwise makes you look graceless, petty, and small.
Reminds him of what he also lost... Apple... and now they're outperforming them with their own silicon that does better with creative tasks like music production.
Clearly upset that he's riding on leather coattails into the AI market, while simultaneously being shown how to engineer a great CPU with fewer resources, by a team a third their size. It is Intel that is "lucky" to have this new market created by others, where they can now dump their me-too "AI" products.
Intel failed so hard on tablets. So much waste.