Thursday, December 21st 2023
Intel Should be Leading the AI Hardware Market: Pat Gelsinger on NVIDIA Getting "Extraordinarily Lucky"
Intel CEO Pat Gelsinger considers NVIDIA "extraordinarily lucky" to be leading the AI hardware industry. In a recent public discussion with students at MIT's engineering school on the state of the semiconductor industry, Gelsinger said that Intel should be the one leading AI, but that NVIDIA got lucky instead. We respectfully disagree. What Gelsinger glosses over with this train of thought is how NVIDIA got here. What NVIDIA has in 2023 is the distinction of being one of the hottest tech stocks behind Apple, the highest market share in a crucial hardware resource driving the AI revolution, and of course the little things, like leadership of the gaming GPU market. What it doesn't have is access to the x86 processor IP.
NVIDIA has long aspired to be a CPU company, from its rumored attempt to merge with AMD in the early-to-mid 2000s, to its stint with smartphone application processors with Tegra, an assortment of Arm-based products along the way, and most recently, its spectacularly unsuccessful attempt to acquire Arm from SoftBank. Despite limited luck in the CPU industry at leveling up to Intel, AMD, or even Qualcomm and MediaTek, NVIDIA never lost sight of its goal to be a compute hardware superpower, which is why, in our opinion, it owns the AI hardware market. NVIDIA isn't lucky; it spent 16 years getting here.
NVIDIA's journey to AI hardware leadership began back in the late 2000s, when it saw the potential for the GPU to be a general-purpose processor, since programmable shaders essentially made the GPU a many-core processor with a small amount of fixed-function raster hardware on the side. The vast majority of an NVIDIA GPU's die area is made up of streaming multiprocessors—the GPU's programmable SIMD muscle.
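To make the "many-core processor" point concrete, here is a minimal sketch of the data-parallel SIMT model those streaming multiprocessors run: the classic SAXPY kernel, in which thousands of threads each handle one array element. The example is ours, for illustration only; nothing in it comes from the article.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of y = a*x + y; the GPU schedules
// thousands of these threads across its streaming multiprocessors.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // one thread per element
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

That same data-parallel structure is what makes these chips useful far beyond rasterization, and it is the pivot the rest of this story turns on.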
NVIDIA's primordial attempts to break into the HPC market with its GPUs bore fruit with its "Tesla" GPU and the Compute Unified Device Architecture, or CUDA. This software stack, which lets developers build and accelerate applications on NVIDIA hardware, dates all the way back to 2007. CUDA set in motion a long, exhausting journey leading up to NVIDIA's first bets on accelerated AI on its GPUs a decade later, beginning with "Volta." NVIDIA realized that despite the vast number of CUDA cores on its GPUs and HPC processors, it needed fixed-function hardware to speed up building, training, and running inference on deep neural networks, and so developed the Tensor core.
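For a sense of what that fixed-function hardware looks like from the programmer's side, below is a hedged sketch using the warp-level WMMA intrinsics CUDA exposes for Tensor cores on "Volta" and newer GPUs (compute capability 7.0+). The 16x16x16 tile shape and managed-memory setup are our illustrative choices, not anything from the article.

```cuda
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// One warp cooperatively computes D = A*B + C for a single 16x16 tile
// on the Tensor cores, with FP16 inputs and FP32 accumulation.
__global__ void wmma_tile(const half *a, const half *b, float *c)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);              // C = 0
    wmma::load_matrix_sync(a_frag, a, 16);          // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); // the Tensor-core op
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

int main()
{
    half *a, *b;
    float *c;
    cudaMallocManaged(&a, 256 * sizeof(half));
    cudaMallocManaged(&b, 256 * sizeof(half));
    cudaMallocManaged(&c, 256 * sizeof(float));
    for (int i = 0; i < 256; ++i) { a[i] = __float2half(1.0f); b[i] = __float2half(1.0f); }

    wmma_tile<<<1, 32>>>(a, b, c);  // exactly one warp per WMMA tile
    cudaDeviceSynchronize();
    printf("c[0] = %.1f\n", c[0]);  // each element is a 16-term dot product: 16.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

One mma_sync call performs an entire 16x16x16 matrix multiply-accumulate in hardware, the kind of operation that dominates neural-network training and inference.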
In all this time, Intel continued to behave like a CPU company and not a compute company—the majority of its revenue came from client CPUs, followed by server CPUs, and it consistently held accelerators at a lower priority. Even as Tesla and CUDA took off in 2007, Intel had its first blueprints for a SIMD accelerator, codenamed "Larrabee," as early as 2008. The company never accorded Larrabee the focus it needed as a nascent hardware technology. But that's on Intel. AMD has been a CPU + GPU company since its acquisition of ATI in 2006, and has tried to play catch-up with NVIDIA by combining its Stream compute architecture with open compute software technologies. The reason AMD's Instinct CDNA processors aren't as successful as NVIDIA's A100 and H100 processors is the same reason Intel never stood a chance in this market with its "Ponte Vecchio"—both were slow to market, and neither company nurtured an ecosystem around its silicon quite like NVIDIA did.
Hardware is only a fraction of NVIDIA's growth story—the company has an enormous, top-down software stack, including its own programming language, APIs, and prebuilt compute and AI models; and a thriving ecosystem of independent developers and ISVs that it has nurtured over the years. So by the time AI took off at scale as a revolution in computing, NVIDIA was ready with the fastest hardware and the largest community of developers that could put it to use. We began this editorial by noting that NVIDIA lacks access to the x86 processor IP, and that turned out to be a blessing. Without x86, it had to switch gears and look inward at the one thing it was already making that could crunch numbers at scale—GPUs with programmable shaders. If NVIDIA is extraordinarily lucky about anything, it's that it didn't get stuck with an x86 license.
You can watch Pat Gelsinger's interview over at MIT's YouTube channel, here.
Source: ExtremeTech
55 Comments
Yeah, one of them made a lot of bucks off miners "a few times"; you can guess which one got that easy money to blow on AI :laugh:
Intel was too busy on ++++++ refreshes every 6 months.
If you want to see what he actually has to say about how Intel messed it up, watch from 13:00 to 20:50 here:
Summarized: Intel was working on massively parallel compute when he left. After he left, Intel more or less killed that project (only Xeon Phi continued). Gelsinger considers that the mistake: it would have been an expensive 10-year project, and not seeing it through was Intel's undoing. Now Intel is trying to catch up, but with far more limited resources.
"AMD is using glue for their chips. Snake oil." "WE SHOULD BE THE LEADER OF AI, Nvidia is extraordinarily LUCKY." LUCKY!?!? Pfffffffahhahahahahahahahaha. As much as I give s$&t to Nvidia, luck is definitely not it LOL
Intel and AMD software also tends to be free, open source, and easy to port to other platforms. I wonder if Nvidia's accountants are a lot more willing to spend on software knowing that everyone who uses it will have to buy Nvidia hardware. We criticize Nvidia a lot for proprietary software, but Nvidia created adaptive sync, AI upscaling, and other cool things. Then again, AMD gave us Mantle, which became Vulkan and led to DirectX 12.
Nvidia switched to being a service/software company in addition to hardware too; clairvoyance.
Same for cloud and HPC; Nvidia invested heavily in those areas, and early on.
Intel is in very good financial shape, and their foundry business is really strategic. The USA and EU are very interested in getting as many domestic foundries as possible.
Now they are investing at the same time in foundries, GPUs, cloud, HPC, AI/ML... and catching up on CPU efficiency. That's the issue, I feel: while Intel really had the monopoly and supremacy, it under-invested and lacked clairvoyance. Others caught up, and it weakened Intel in many strategic areas where it could have had a much bigger footprint, true.
Spin it however you want, Pat, but NVIDIA got your playbook and ran the plays way better than you did.
Nvidia executes Jensen's vision with hardware and software, and the stock price is a byproduct.
Intel itself had, and still has, a wide backbone of kinda pro-Intel ISVs, media, business and end-user suppliers, and ecosystems, but those leftovers are shrinking.
Still, those were heavy weapons to sandbag against AMD in the x86 space, and some are still active today,
like a shield for the remaining pro-Intel ecosystems, if I may call them that.
That behavior was 100% the Intel mindset for many years .... and whoosh ... there is CUDA and Ryzen ... Intel must have been thinking they were invincible, untouchable, or something.
But no, sorry ... the others were only lucky. :kookoo:
Both Intel and AMD, their greatest rivals, do so routinely. Intel relies on its tick/tock cadence, and if that fails, its entire business goes belly up (see 14nm++++ and, more recently, the quote-unquote "14th gen" BS they have come up with). AMD, meanwhile, has an honest-to-God corporate culture problem, which is why no matter how great a product it seems to create, it will always be the second option: either because the software falters (and would you believe me if I told you their devs are not idiots? They just have their hands bound by the suits; this is why I am so fed up with AMD!) or because marketing oversold it, and at the wrong time as well.
They hadn't been doing acceleration targeted at neural nets before Google made TensorFlow.
It's pretty basic to be trying to create demand for your accelerators.
Should I say that Intel "was lucky enough to lead in fab processes before 14 nm"???