Thursday, November 2nd 2023

AMD Instinct MI300X Could Become Company's Fastest Product to Rake In $1 Billion in Sales

AMD stated in its post-Q3 2023 financial results call that it expects the Instinct MI300X accelerator to be the fastest product in AMD history to rake in $1 billion in sales, measured as the time it takes a product, from launch, to register $1 billion in cumulative sales. With the MI300 series, the company hopes to finally break into the AI-driven HPC accelerator market dominated by NVIDIA, and at scale. This growth is attributable to two distinct factors: the first is that NVIDIA is supply-bottlenecked, and customers looking for alternatives have finally found a suitable one in the MI300 series; the second is that with the MI300 series, AMD has finally ironed out the software ecosystem backing hardware that looks incredible on paper.

It's also worth noting that AMD is rumored to be sacrificing its market presence in the enthusiast-class gaming GPU segment with its next generation, with the goal of maximizing its foundry allocation for HPC accelerators such as the MI300X. HPC accelerators are a significantly higher-margin class of products than gaming GPUs such as the Radeon RX 7900 XTX. The RX 7900 XTX and its refresh under the RX 7950 series are not expected to have a successor in the RDNA4 generation. "We now expect datacenter GPU revenue to be approximately $400 million in the fourth quarter and exceed $2 billion in 2024 as revenue ramps throughout the year," said Dr. Lisa Su, CEO of AMD, at the company's earnings call with analysts and investors. "This growth would make MI300 the fastest product to ramp to $1 billion in sales in AMD history."
Sources: Tom's Hardware, The Motley Fool

31 Comments on AMD Instinct MI300X Could Become Company's Fastest Product to Rake In $1 Billion in Sales

#1
Patriot
They aren't sacrificing much presently.
Posted on Reply
#2
mclarenfanN7
Those are bold claims. Can't say with much confidence that it's from a diverse set of clients. More likely it's down to a few clients, the likes of national labs ordering by contract for a supercomputer.

CUDA is really entrenched in universities, especially in the US; it is even part of the curriculum in some unis for their undergrads/grads. Once these future professionals graduate, they take that CUDA mindset to their workplaces. If they get into steering roles at all, they will push the Nvidia ecosystem. Then when those people hire the next group of engineers, these seniors will be looking for CUDA experience. That's not brand loyalty, it's just preference for familiarity.

AMD won't make much headway even with relatively cheaper hardware when their university drives are next to nothing. Professors who asked their postdocs to go with AMD did so only because the grants were limited for their work.

Then there is the imminent reality that this AI bubble will be burst by startups and other companies putting out specialized ASICs that handle training and inferencing much quicker and far more efficiently. Inferencing on analog designs is already a thing and vastly more efficient.
Posted on Reply
#3
Guwapo77
Exactly why I'm going Nvidia next round for the extremist level 5090TI Extra Super!
Posted on Reply
#4
Bwaze
I'm not a native English speaker; what is the significance of stating so many times in an article that this is the "fastest product in AMD history to rake in $1 billion in sales" - if it's the only product that will presumably reach $1 billion in sales?

In our country you could jokingly say you reached $1 billion in sales three times - first time, last time and the only time!
Posted on Reply
#5
KrazyT
It's also worth noting that AMD is rumored to be sacrificing its market presence in the enthusiast-class gaming GPU segment with its next generation, with the goal of maximizing its foundry allocation for HPC accelerators such as the MI300X.

Nothing more than a normal day in the business world ...
Posted on Reply
#6
Space Lynx
Astronaut
I'm really starting to regret not buying that RTX 4090 my IRL friend offered me for $1150. He was just trying to help me out and I should have said yes. FML

I forgot GPUs are on a 5-year cycle now, whether it be crypto, AI, and next up, WW3 supply chain issues. Technically WW3 is already here, it's just in the form of trade wars and Cold War-style skirmishes.
Posted on Reply
#7
Pumper
BwazeI'm not a native English speaker; what is the significance of stating so many times in an article that this is the "fastest product in AMD history to rake in $1 billion in sales" - if it's the only product that will presumably reach $1 billion in sales?

In our country you could jokingly say you reached $1 billion in sales three times - first time, last time and the only time!
Pretty sure it means that AMD will reach 1b in sales with it faster than with any other product, it's just worded strangely.
Posted on Reply
#8
Bwaze
PumperPretty sure it means that AMD will reach 1b in sales with it faster than with any other product, it's just worded strangely.
But they haven't reached 1b in sales with a single product before, and even this one is just expected to reach 1b. And if it's overall revenue we're talking about - they rake in 5 billion dollars quarterly, and nobody really cares about the timeline when the deals are made and paid, so it would be a strange thing to single out the fastest time to a fifth of overall revenue?
Posted on Reply
#9
Redwoodz
Um, yeah. 3 tweets by 3 random people on the internet from 3 months ago confirm AMD has changed....nothing. I can't believe how easy it is to state a complete rumor as fact these days.
Posted on Reply
#11
Mahboi
mclarenfanN7Those are bold claims. Can't say with much confidence that it's from a diverse set of clients. More likely it's down to a few clients, the likes of national labs ordering by contract for a supercomputer.
Read the latest SemiAnalysis article on MI300. The free tier namedrops way more than just "a few clients".

Also, the CUDA meme is irrelevant. HIP is basically a slightly higher-level but functionally identical copy of CUDA. ROCm is also not that different. And besides, a lot of the workload will be using nothing other than PyTorch.
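
To illustrate, here's a minimal HIP vector-add sketch (a hypothetical example, nothing from the article); the CUDA version of the same code differs only in the header and the hip/cuda prefixes on the runtime calls:

#include <hip/hip_runtime.h>  // <cuda_runtime.h> in the CUDA version
#include <cstdio>
#include <vector>

// Kernel syntax and built-ins (threadIdx, blockIdx, blockDim) are the same as CUDA.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);
    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));  // cudaMalloc in the CUDA version
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);
    vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // same launch syntax as CUDA
    hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);  // expect 3.0
    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}

Compile it with hipcc and the same source runs on an AMD card, or on an NVIDIA one, where HIP simply forwards to the CUDA runtime.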
Posted on Reply
#12
Denver
Like I said before, follow the money.
Posted on Reply
#13
lexluthermiester
BwazeI'm not a native English speaker; what is the significance of stating so many times in an article that this is the "fastest product in AMD history to rake in $1 billion in sales" - if it's the only product that will presumably reach $1 billion in sales?
Because it isn't the only AMD product to reach $1 billion in sales. Not even close.
Posted on Reply
#14
Chrispy_
Ugh. AI is the new ETH mining fad for the industry.
I'm preparing for a repeat of the 2019-2021 GPU crisis.
Posted on Reply
#15
lexluthermiester
Chrispy_Ugh. AI is the new ETH mining fad for the industry.
I'm preparing for a repeat of the 2019-2021 GPU crisis.
Never gonna happen. There are WAY too many ASICs dedicated to AI and deep learning. GPUs are a less-than-great choice for such tasks.
Posted on Reply
#16
Arpeegee
I can't help but think that the new AI trend is another bubble for GPU manufacturers. Once it crashes they'll be scrambling for customers again, or worse yet, those AI programs will help corporations develop their own chips for their needs and cut the legacy companies out.
Posted on Reply
#17
AnarchoPrimitiv
lexluthermiesterNever gonna happen. There are WAY too many ASICs dedicated to AI and deep learning. GPUs are a less-than-great choice for such tasks.
Then why are they flying off the shelves?
Posted on Reply
#18
Chrispy_
lexluthermiesterNever gonna happen. There are WAY too many ASICs dedicated to AI and deep learning. GPUs are a less-than-great choice for such tasks.
It's already happening, and it's going to get worse because this bubble is still growing.

If it wasn't going to happen, why are both AMD and Nvidia switching their GPU focus to AI features, and why are consumer products being sidelined to use their premium TSMC allocation on AI compute GPGPUs for the datacenter?
Posted on Reply
#19
evernessince
mclarenfanN7Those are bold claims. Can't say with much confidence that it's from a diverse set of clients. More likely it's down to a few clients, the likes of national labs ordering by contract for a supercomputer.

CUDA is really entrenched in universities, especially in the US; it is even part of the curriculum in some unis for their undergrads/grads. Once these future professionals graduate, they take that CUDA mindset to their workplaces. If they get into steering roles at all, they will push the Nvidia ecosystem. Then when those people hire the next group of engineers, these seniors will be looking for CUDA experience. That's not brand loyalty, it's just preference for familiarity.

AMD won't make much headway even with relatively cheaper hardware when their university drives are next to nothing. Professors who asked their postdocs to go with AMD did so only because the grants were limited for their work.

Then there is the imminent reality that this AI bubble will be burst by startups and other companies putting out specialized ASICs that handle training and inferencing much quicker and far more efficiently. Inferencing on analog designs is already a thing and vastly more efficient.
AMD has been investing heavily in its own foundation. You can run CUDA code pretty performantly on AMD cards nowadays via AMD's ROCm HIP, which is designed specifically for that job. AMD is not dumb; they know they are not going to get people to stop programming in CUDA overnight.
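
For a rough sense of how close the two APIs actually are (a hypothetical sketch, not AMD's own sample code), here is the HIP version of the classic device-query snippet; every call maps one-to-one onto its cuda* counterpart, and ROCm's hipify tools can usually do that renaming on existing CUDA sources mechanically before you build with hipcc:

#include <hip/hip_runtime.h>  // <cuda_runtime.h> on the NVIDIA side
#include <cstdio>

int main() {
    int count = 0;
    hipGetDeviceCount(&count);             // cudaGetDeviceCount
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;              // cudaDeviceProp
        hipGetDeviceProperties(&prop, i);  // cudaGetDeviceProperties
        printf("GPU %d: %s, %zu MB VRAM, %d compute units\n", i, prop.name,
               prop.totalGlobalMem / (1024 * 1024), prop.multiProcessorCount);
    }
    return 0;
}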
Posted on Reply
#20
L'Eliminateur
lexluthermiesterNever gonna happen. There are WAY too many ASICs dedicated to AI and deep learning. GPUs are a less-than-great choice for such tasks.
This is not a matter of the GPU being "less great" for AI, but that INSTEAD of making gaming GPUs they use their fab quota to make those profitable AI ASICs and enterprise GPUs.

The end result is the same: not enough gaming GPUs for you, and what trickles down will be overpriced at whatever the hell la-la magic pricing they want.

It's the same as the crypto craze and pandemic sleaze, but worse, as these are now issues dictated by the manufacturers themselves and their bottomless greed.

I hope this stupid AI bubble bursts soon.
Posted on Reply
#21
lexluthermiester
AnarchoPrimitivThen why are they flying off the shelves?
They're not.
Chrispy_It's already happening
Then why have prices gone down in the last few months?
Chrispy_If it wasn't going to happen, why are both AMD and Nvidia switching their GPU focus to AI features, and why are consumer products being sidelined to use their premium TSMC allocation on AI compute GPGPUs for the datacenter?
I'm not commenting on those points or getting into that debate.
L'EliminateurThis is not a matter of the GPU being "less great" for AI, but that INSTEAD of making gaming GPUs they use their fab quota to make those profitable AI ASICs and enterprise GPUs.

The end result is the same: not enough gaming GPUs for you, and what trickles down will be overpriced at whatever the hell la-la magic pricing they want.

It's the same as the crypto craze and pandemic sleaze, but worse, as these are now issues dictated by the manufacturers themselves and their bottomless greed.

I hope this stupid AI bubble bursts soon.
Except that prices have gone down, not up.

Seriously with this nonsense?
Posted on Reply
#22
L'Eliminateur
It's not nonsense, it's sense, and prices will go up.
Even with our "prices have gone down" GPU are still savagely overpriced, if you overprice something 300%, and then lower it to 250%, yes prices went down but they're still overpriced.

It's also what they have said: allocation will go to profiteering, not gamer GPUs.
Posted on Reply
#23
FeelinFroggy
BwazeBut they haven't reached 1b in sales with a single product before, and even this one is just expected to reach 1b. And if it's overall revenue we're talking about - they rake in 5 billion dollars quarterly, and nobody really cares about the timeline when the deals are made and paid, so it would be a strange thing to single out the fastest time to a fifth of overall revenue?
Are you sure they have not had a product reach one billion? While AMD may not be as big as some of their competitors, they are not some small mom-and-pop operation; they have a market cap of $175 billion.

I'll admit, I am not an investor and know little of AMD, but I'd bet they have had several products reach $1 billion in sales over the life of the product. If I had to bet, I'd say the 580 sold $1 billion. That was a hugely popular card for gaming and mining, and it sold for a long run.
Posted on Reply
#24
mrnagant
FeelinFroggyAre you sure they have not had a product reach one billion? While AMD may not be as big as some of their competitors, they are not some small mom-and-pop operation; they have a market cap of $175 billion.

I'll admit, I am not an investor and know little of AMD, but I'd bet they have had several products reach $1 billion in sales over the life of the product. If I had to bet, I'd say the 580 sold $1 billion. That was a hugely popular card for gaming and mining, and it sold for a long run.
This isn't claiming it's the first product to reach $1B, but rather the "fastest". The 580 at $200 needs to sell 5M units. The MI300X at $36k needs to sell 28k units. The largest supercomputer has >36k MI250X chips. The MI300X is likely to sell like hotcakes for AMD.
Posted on Reply
#25
Avro Arrow
With Radeon Instinct already powering two of the three fastest supercomputers in the world (and soon to be three of the top four), it's quite clear that CUDA is more or less inconsequential at the supercomputer level.
mclarenfanN7Those are bold claims. Can't say with much confidence that it's from a diverse set of clients. More likely it's down to a few clients, the likes of national labs ordering by contract for a supercomputer.
Sales are sales.
mclarenfanN7CUDA is really entrenched in universities, especially in the US; it is even part of the curriculum in some unis for their undergrads/grads. Once these future professionals graduate, they take that CUDA mindset to their workplaces. If they get into steering roles at all, they will push the Nvidia ecosystem. Then when those people hire the next group of engineers, these seniors will be looking for CUDA experience. That's not brand loyalty, it's just preference for familiarity.
These supercomputers calculate extremely complex atomic and molecular interactions like protein folding and nuclear chain reactions. Two of the three fastest supercomputers in the world are powered by EPYC and Radeon Instinct, those being Frontier (currently the fastest on Earth) and LUMI (3rd place). When El Capitan comes online, the two fastest supercomputers in the world will be powered by EPYC and Radeon Instinct. CUDA is of no consequence at this level, and it shows by the very presence of Frontier and LUMI in the #1 and #3 spots, respectively. The fastest supercomputer with nVidia hardware is Leonardo, and it's in 4th place. These supercomputers aren't used for running Blender, and the components are chosen by some of the top computer engineers in the world. They don't care what colour the box is when the parts arrive, so nVidia doesn't have its "clueless noob" advantage like it does in the gaming space. These computers are used by engineers at a level far above what you would find in a university, engineers from agencies like NASA. Do you think that NASA engineers give a damn about a bunch of university professors using CUDA in their CGI courses?

Here's a hint... they don't. They want whatever does the job they need done, and CUDA clearly has nothing to do with these purposes, as we've seen from the part selection made over the past few years. If EPYC and Instinct do the job they need done better than the offerings from Intel and/or nVidia, they're going to use them. This is the US government we're talking about here, you know, the one with the unlimited budget (unless it comes to giving their citizens universal healthcare). If nVidia hardware was better (as you seem to be trying so hard to claim), then that's what they'd be using. Oh, except that they're not...
mclarenfanN7AMD won't make much headway even with relatively cheaper hardware when their university drives are next to nothing. Professors who asked their postdocs to go with AMD did so only because the grants were limited for their work.
I don't know what to say to this because (other than what universities use being completely irrelevant) the very existence of the top-3 supercomputers, Frontier, Fugaku and LUMI, proves your statement to be 100% false. None of these supercomputers use nVidia hardware for anything. Fugaku is the odd one out because it uses Fujitsu-designed Armv8.2 cores and doesn't require separate GPU cores. The other two have been extremely successful in their roles with EPYC and Instinct for the USA and Finland. After their experience with Frontier, if there were any drawbacks to using Radeon Instinct GPUs, there's not a chance that the US government would be investing even more money into an even more expensive and powerful Cray like El Capitan. They can spend limitless amounts of money on whatever they want, which means that if they chose Radeon Instinct, then it is either the best for the task or, at the very least, nVidia would be no better.
mclarenfanN7Then there is the imminent reality that this AI bubble will be burst by startups and other companies putting out specialized ASICs that handle training and inferencing much quicker and far more efficiently. Inferencing on analog designs is already a thing and vastly more efficient.
What does that have to do with the MI300X being the fastest AMD product to reach $1,000,000,000? You must be a real fanboy to bring up things that have nothing to do with the article in your attempt to say "But, but, but, CUDAAAA!!!".
Posted on Reply