Wednesday, March 19th 2025

Pat Gelsinger Repeats Observation that NVIDIA CEO "Got Lucky" with AI Industry Boom

Pat Gelsinger has quite bravely stepped into the belly of the beast this week. The former Intel boss was an invited guest at NVIDIA's GTC 2025 conference, currently taking place in San Jose, California. Technology news outlets have extracted key quotes from Gelsinger's musings during an in-person appearance on Acquired's "Live at GTC" video podcast. In the past, the ex-Team Blue chief held the belief that NVIDIA was "extraordinarily lucky" with its market-leading position. During yesterday's panel discussion, Gelsinger repeated that long-held opinion: "the CPU was the king of the hill, and I applaud Jensen for his tenacity in just saying, 'No, I am not trying to build one of those; I am trying to deliver against the workload starting in graphics.' You know, it became this broader view. And then he got lucky with AI, and one time I was debating with him, he said: 'No, I got really lucky with AI workload because it just demanded that type of architecture.' That is where the center of application development is (right now)."

The American businessman and electrical engineer reckons that AI hardware costs are climbing to unreasonable levels: "today, if we think about the training workload, okay, but you have to give away something much more optimized for inferencing. You know a GPU is way too expensive; I argue it is 10,000 times too expensive to fully realize what we want to do with the deployment of inferencing for AI and then, of course, what's beyond that." Despite the "failure" of a much older Intel design, Gelsinger delved into some rose-tinted nostalgia: "I had a project that was well known in the industry called Larrabee and which was trying to bridge the programmability of the CPU with a throughput oriented architecture (of a GPU), and I think had Intel stay on that path, you know, the future could have been different... I give Jensen a lot of credit (as) he just stayed true to that throughput computing or accelerated (vision)." Following the recent cancellation of the "Falcon Shores" chip design, Intel's AI GPU division is likely regrouping around its next-generation "Jaguar Shores" project; industry watchers reckon that this rack-scale platform will arrive in 2026.
Pat Gelsinger appears at the 38-minute mark of the "Live at NVIDIA GTC with Acquired" video podcast.
Sources: NVIDIA YouTube Channel, Tom's Hardware, Wccftech, DigiTimes Asia

19 Comments on Pat Gelsinger Repeats Observation that NVIDIA CEO "Got Lucky" with AI Industry Boom

#1
Event Horizon
It felt like Intel was reacting to CUDA and the rise of GPGPU, but without any long-term plans to push the field forward. Even 10 years ago, running machine learning on GPUs was a hot topic. It just wasn't all over the place like AI is nowadays. Intel could have and should have captured a slice of the pie. Better late than never I guess.
#2
TechBuyingHavoc
Come on, Pat, don't be a sore loser. I was with you on your corporate strategy being the right move for Intel, just one that came too late, but this just makes you look pathetic. Love Jensen or hate him, he was planning for AI long before it became popular to use AI as a marketing move.
#3
kondamin
He's correct, and Intel was in a luxury position where it could afford to make a heck of a lot of mistakes.
They lost that position.

I hope they manage to recover their former glory, but that hope is very, very slim.
#4
Vayra86
I think it's commendable that he points out Jensen was on the right path as well. It's not just being sore here; more of an observation, I'd say.

I mean, the numbers don't lie anyway. He's got bigger issues to worry about.
#5
trsttte
Unnecessarily clickbaity title lol.

Yeah, Intel leadership back in the 2010s was just terrible! They completely wasted their market dominance.
#6
AGlezB
Event Horizon: It felt like Intel was reacting to CUDA and the rise of GPGPU, but without any long-term plans to push the field forward. Even 10 years ago, running machine learning on GPUs was a hot topic. It just wasn't all over the place like AI is nowadays. Intel could have and should have captured a slice of the pie. Better late than never I guess.
Intel did the exact same thing IBM did before them: sit on their behinds thinking nothing could possibly shake them from market leadership.

It happens every time someone installs a salesman as CEO of a tech company: by the time they realize their mistake and try to pass the ball to someone with an actual understanding of the products they make, it is usually too late and the competition is so far ahead that it's not even fair to say they're "competing".
#7
InVasMani
If they hadn't started running the company like an Amway for a decade, maybe they'd be in a better position today.
#8
Daven
Sorry, Patty Cakes, but Nvidia Tesla data center COMPUTE units were around way before AI. Your precious company stopped competing a long time ago and only 'delivered against the workload' through anti-competitive behavior.
#9
Darmok N Jalad
Well of course success comes with being in the right place at the right time, but that’s not the end of it. You also have to be in a position to do something about it. Intel was also in the right place at the right time and did something about it, and then the empire faded. They are still important, but no longer the dominant industry leader.
#10
boidsonly
For the last decade, Intel has traded innovation for cheap H-1B labor and lost the race in the process.
#11
lexluthermiester
I agree with Pat; NVidia really got lucky. But they recognized that lucky break quickly and ran with it.
#12
user556
The price tag on the environment is a far bigger worry than the price tag of installing the hardware.
#13
Scrizz
Daven: Sorry, Patty Cakes, but Nvidia Tesla data center COMPUTE units were around way before AI.
AI has been around way before NVidia existed.... ;)
#14
R0H1T
Right, so Intel got lucky with *dozer & the OEM bribes ~ no wait maybe that doesn't count :slap:
InVasMani: If they hadn't started running the company like an Amway for a decade, maybe they'd be in a better position today.
They're still doing it today! How come you need to change your mobo every second gen? Yeah, and for the Intel apologists: 12th-14th gen is virtually the same uarch.
#15
TumbleGeorge
Well, Pat is just crying about his huge salary, bonuses, and benefits that he was getting from Intel. He could blame himself if he wasn't too narcissistic to do so.
#16
AusWolf
I think he's right in his comment but... Big but...

Intel moves too slow. Larrabee took ages to develop and never saw the light of day. A-series Arc was extremely late and came with too many problems that should have been ironed out before release. I've heard the B-series is much better, but it's still late, with only a single lower-midrange card available, while even the delayed 9070 (XT) has been out for half a month now, and the 9060 and 5060 are coming soon.

So sorry, but it's not just Nvidia's luck, but also Intel's snail pace in the GPU space.
#17
tpa-pr
AusWolf: I think he's right in his comment but... Big but...

Intel moves too slow. Larrabee took ages to develop and never saw the light of day. A-series Arc was extremely late and came with too many problems that should have been ironed out before release. I've heard the B-series is much better, but it's still late, with only a single lower-midrange card available, while even the delayed 9070 (XT) has been out for half a month now, and the 9060 and 5060 are coming soon.

So sorry, but it's not just Nvidia's luck, but also Intel's snail pace in the GPU space.
I think some credit is due to Nvidia for their development speed and hardware as well. Intel might've been slow on the uptake, but Nvidia went all in FAST, and it shows in the quality of their AI stack. Everyone else is playing catch-up, after all.
#18
TechBuyingHavoc
boidsonly: For the last decade, Intel has traded innovation for cheap H-1B labor and lost the race in the process.
I don't think Intel's lack of innovation is because they had an excess proportion of H-1B labor. It's more that Intel chose to keep profit margins high instead of reinvesting those profits in new lines of business like handhelds (the NUC chips were a start, but Intel never took them seriously), phones (the Atoms were a start, but Intel was never OK with tiny profit margins and rapid development efforts), consoles (Intel was never willing to accept the low margins and work intensively with Sony and Microsoft to make custom hardware), SSDs (a product type they dominated but were not willing to lower margins and prices to compete in), etc., etc.

All of this was in the early 2010s, well before crypto or AI. Even if Intel never made a GPU and missed the whole crypto/AI bubbles, they would have far better fab utilization rates if they had been "hungry" for new business.

They were not hungry, and to make the situation even worse, they took all those immense profits at their peak and wasted them on boondoggles like McAfee, which went absolutely nowhere, and on squandered opportunities like Altera and Mobileye, which could have been promising, but the larger management issues prevented Intel from ever harvesting the fruits of such expensive acquisitions.

Intel was at the cutting edge of SSDs for a while with great early-era innovations, but they abandoned this field because they were not OK with low margins. (Are you noticing a pattern here...?)

All of this is separate from the massive issues they had on the infamous 10 nm node(s). That latter part is a technical issue, but everything else is a result of bad or incorrect business management.
#19
AGlezB
tpa-pr: I think some credit is due to Nvidia for their development speed and hardware as well. Intel might've been slow on the uptake, but Nvidia went all in FAST, and it shows in the quality of their AI stack. Everyone else is playing catch-up, after all.
nVidia has been ahead of the curve the whole time; it's just that people don't know or don't make the connection, because after so many years and repetitions it's easy to forget that the DL in DLSS stands for Deep Learning and that the Super Sampling work is done by a neural network. Now look at the dates: DLSS was announced with the RTX 20 series in September 2018, just 2 months after the release of GPT-1, and DLSS 1.0 was released in February 2019, five months later. What's more, nVidia invests a ton in AI research. If you watch Two Minute Papers, you'll know that about half of the papers published every week on graphics processing, light transport, etc. come from them. Here is a recent example:
