Monday, July 3rd 2023

AMD "Vega" Architecture Gets No More ROCm Updates After Release 5.6

AMD's "Vega" graphics architecture, which powers graphics cards such as the Radeon VII and Radeon PRO VII, is being retired from maintenance in the ROCm GPU programming software stack. The release notes of ROCm 5.6 state that the AMD Instinct MI50 accelerator, the Radeon VII client graphics card, and the Radeon PRO VII pro-vis graphics card, collectively referred to as "gfx906," will reach EOM (end of maintenance) starting Q3 2023, which aligns with the release of ROCm 5.7. Developer "EwoutH" on GitHub, who discovered this, remarks that gfx906 is barely 5 years old, with the Radeon PRO VII and Instinct MI50 accelerator still being sold on the market. The most recent AMD product powered by "Vega" is the "Cezanne" desktop processor, whose iGPU is based on the architecture; that chip was released in Q2 2021.
Source: EwoutH (ROCm GitHub)

42 Comments on AMD "Vega" Architecture Gets No More ROCm Updates After Release 5.6

#26
TheoneandonlyMrK
bug: That's the sad part: it doesn't matter how capable the hardware is if the software sucks.

If Ferrari were so great, everybody would be buying Ferraris, nobody would buy anything else, right? ;)

It's an exaggeration, of course. But it shows why you can sell even when there's a considerable gap between you and the competition.
You say stuff implying you use CUDA a lot, and also imply it's the only thing to use.

The market will not suffer one supplier.

See Tenstorrent: Nvidia may have a stranglehold, but others will come along to relieve it. Tesla didn't stick with Nvidia for long.

El Capitan and Frontier also exist.

As do other AI systems with no sign of CUDA or AMD on them.
Posted on Reply
#27
john_
dj-electric: People really stay under large rocks.
If you are NOT in the business, you need to lift no rock. It's fine :p

Seriously. AI and ML are on a totally different level today. Nvidia was building the compute ecosystem from the introduction of their first GPU, and later with CUDA and everything else. Today, someone with huge resources can go with any hardware they choose. And everyone is targeting that market. CUDA is the de facto option for individuals, but I was never talking about individuals. You keep pointing out to me what someone can do with a GeForce card or a $200 board. And still you haven't commented on MosaicML's announcement, while being in the business, as you say. Maybe you are just tech support in that business? Nothing to do with AI and ML programming?

PS We are same timezone. ;)
bug: The fact that CUDA doesn't command 100% market share is not a guarantee ROCm is just as serviceable.
You are looking at it the wrong way. I didn't say that CUDA will magically be replaced. It just seems they managed to fix a couple of things these last months. And probably one of the reasons for getting serious about it (or should I say the main reason) is the $11 billion Nvidia announced for this quarter. See this:
Lisa Su Reaffirms Commitment To Improving AMD ROCm Support, Engaging The Community - Phoronix
#28
Vayra86
john_: Individuals and startups are not the target audience for Nvidia, Intel, AMD, Google, Tenstorrent etc.
Dude, individuals and start-ups are where the real innovation happens these days. And then they get bought by corporate.

OpenAI is the best recent example.

This is exactly why governments and market regulators are fighting giants like Google and MS over anti-competitive practices.

It's the same strategy as is used in education. Why do you think Apple and Google deliver stuff for that purpose to be used in schools? It's simple: they want that Microsoft pie, where people grew up with Windows and then land in enterprise Windows at their day job. People get what they know because the barrier to entry is lower. Despite that, yes, we have macOS alongside Windows, just like we have CUDA alongside whatever else.
john_: the main reason is the $11 billion Nvidia announced for this quarter. See this:
Lisa Su Reaffirms Commitment To Improving AMD ROCm Support, Engaging The Community - Phoronix
Ah yes, they are going to engage the community again. Lovely, but pointless.

This is the AMD vibe all over again; it's like the company works like a bad employee: the manager says to work better, the bad employee puts full focus into the next work week, and then he's back to his old ways. It echoes everywhere; RDNA2 > RDNA3 is another such example. It could've been so much more.
#29
bug
john_: You are looking at it the wrong way. I didn't say that CUDA will magically be replaced. It just seems they managed to fix a couple of things these last months. And probably one of the reasons for getting serious about it (or should I say the main reason) is the $11 billion Nvidia announced for this quarter. See this:
Lisa Su Reaffirms Commitment To Improving AMD ROCm Support, Engaging The Community - Phoronix
Which proves what I am saying: where Nvidia has a working solution, AMD has a marketing statement. Which is aimed squarely at you, because companies that actually buy compute hardware don't make decisions based on what Ms Su tells the press.

If I were to be mean, I would highlight how Ms Su "reaffirms commitment" and a month later ROCm announces more hardware will go unsupported. But I won't do that.
Vayra86: Dude, individuals and start-ups are where the real innovation happens these days. And then they get bought by corporate.

OpenAI is the best recent example.

This is exactly why governments and market regulators are fighting giants like Google and MS over anti-competitive practices.

It's the same strategy as is used in education. Why do you think Apple and Google deliver stuff for that purpose to be used in schools? It's simple: they want that Microsoft pie, where people grew up with Windows and then land in enterprise Windows at their day job. People get what they know because the barrier to entry is lower. Despite that, yes, we have macOS alongside Windows, just like we have CUDA alongside whatever else.
This guy/gal gets it. :rockout:
#30
john_
@Vayra86 @bug

To both of you: the future isn't always a continuation of the past. Sometimes change happens. And we are not talking about individuals, again. Those who get bought by bigger corporations will have to play under different rules. If the bigger corporation that bought them says "We need a supercomputer and we need it now to use your solutions" and Nvidia replies "You will have to wait 6 months and pay X," they will have a problem. They can just wait 6 months, or check for alternatives. MosaicML did just that: it checked whether AMD today can be an alternative, because in the recent past it wasn't, or at least it was a problematic alternative. So if those companies go to AMD (or Intel or someone else) and that someone tells them "We can offer you 80% of the performance at 3/4 of X the cost today, and we guarantee that the software you need is provided by us," they might go in that direction. They only need third-party verification that the software provided will not be garbage.

As for the bad employee: if the good employee next to him gets a 50% raise for being a good employee, that could be a good reason to become a good employee and stay focused for more than a week.
#31
bug
@john_ Try a little harder and you can make a crappy software stack into an advantage.
#32
Vayra86
john_: As for the bad employee: if the good employee next to him gets a 50% raise for being a good employee, that could be a good reason to become a good employee and stay focused for more than a week.
Exactly right! So where is that 50% raise from AMD?
#33
john_
Vayra86: Exactly right! So where is that 50% raise from AMD?
I was talking about that 50% raise in Nvidia's case. If that $11 billion in Nvidia's forecast for this quarter doesn't make AMD get serious about software support, then maybe I should just go and buy an Nvidia GPU for my next upgrade.
bug: @john_ Try a little harder and you can make a crappy software stack into an advantage.
It's a pity seeing you become rude once again, by implication (my English might be worse here than usual), in an effort to be Nvidia's advocate. I thought this time you were doing nicely in this conversation, only to ruin it at the end. Pity.
#34
Vya Domus
dj-electric: Not everyone is rushing to get 20 4090s
Why not? That's exactly what you're saying.
dj-electric: but small offices will already start equipping their employees with machines that allow them to train smaller models locally.
What do you mean, "will"? Lol, CUDA has been used for ML for over a decade.
dj-electric: I'm not speaking out of nowhere or in hypotheticals; I'm in this business myself
Then you're not a great businessman.

Explain to me why any business would invest thousands in dedicated machines when they can get access to more capable hardware at a fraction of the cost per month in the cloud, unless they just didn't know any better.

This is also, by the way, why both Nvidia and AMD stopped making their highest-end compute-oriented hardware available to consumers; for example, the last GPU truly dedicated to compute that Nvidia sold to consumers was the Titan V. It makes no sense from a financial point of view. The hardware has diverged as well: Hopper and CDNA are clearly dedicated to compute and ML, and will probably never make their way even to professional products; the gap between what you can do with consumer and dedicated hardware will widen even more with time.
#35
bug
john_: It's a pity seeing you become rude once again, by implication (my English might be worse here than usual), in an effort to be Nvidia's advocate. I thought this time you were doing nicely in this conversation, only to ruin it at the end. Pity.
I'm not advocating for Nvidia, I have no reason to do that (I don't even own Nvidia shares - I think). I am always saying I want AMD to raise the bar wrt compute. In the past couple of months, we've got Ms Su's "reaffirmed commitment" and this hardware support removal. I think I can be excused if I don't see a silver lining here.
#36
dj-electric
Vya Domus: Explain to me why any business would invest thousands in dedicated machines when they can get access to more capable hardware at a fraction of the cost per month in the cloud, unless they just didn't know any better.
Often, businesses that want to train models do it with incredibly large, locally produced datasets; it could even be hundreds of terabytes. During development, such things could be incredibly expensive to train on, and to fetch results from, with cloud services. For these purposes, machines are often built on site to give developers direct access for training and retrieving results. Model training involves a lot of back and forth, and there are some very obvious disadvantages to working with cloud services.

As far as businesses are concerned, a $5,000 machine containing two RTX 4090 cards (say $3,200; the rest is relatively simple, with an 8-12 core CPU and a modest amount of RAM) is worth its weight in gold for model training. You get your ROI quite quickly.
Vya Domus: Why not? That's exactly what you're saying.
Semantics. Are you terminally online just to argue with people in forums? Grow up; have fruitful discussions like an adult. Throughout my 6 years working in this field, I have seen at least 5 nearby businesses build training machines as a first response to their ideas, for the reasons stated above.
Vya Domus: What do you mean, "will"? Lol, CUDA has been used for ML for over a decade.
It's a figure of speech. It refers to current actions being taken by businesses, like the one I've described above.
Vya Domus: Then you're not a great businessman.
I'm not a businessman. I'm a system integration and product engineer.
#37
Vya Domus
dj-electric: Semantics.
The semantics of saying something word for word and then claiming it's not like that?

Lol, talk about being terminally online and not having fruitful discussions like an adult.
#38
GoldenX
Meanwhile, if you install Linux on a Nintendo Switch, you can use CUDA.
#39
Fluffmeister
Good old Vega, late to the market yet first to die.
#40
LabRat 891
I was certain that Vega would be one of the longest-supported graphics architectures on PC; this news is concerning, regardless of whether I'd use the feature or not.
I'm not sure how consequential this is, but seeing support of any kind dropped on a still-actively-sold architecture is troubling.
#41
eidairaman1
The Exiled Airman
ZoneDymo: What does this mean? idk what ROCm even is.
An alternative to CUDA.
LabRat 891: I was certain that Vega would be one of the longest-supported graphics architectures on PC; this news is concerning, regardless of whether I'd use the feature or not.
I'm not sure how consequential this is, but seeing support of any kind dropped on a still-actively-sold architecture is troubling.
Considering it was a flop out of the gate, overpromised by Raja, they are focusing on RDNA.
#42
Patriot
bug: It's a little debatable how "focused" they are, considering they still put Vega in current products. I wish they'd drop this habit once and for all, or at least stick to a couple of generations.

It's their Linux compute stack, i.e. what keeps them from being taken seriously for AI/ML :(

It may seem odd, but it's really not. People dive into AI/ML using the hardware they have, they don't buy professional adapters for a hobby. This, in turn, determines what skills are readily available in the market when you're looking to hire AI/ML engineers.
It is why it was important for them to put matrix accelerators in RDNA3 to bring them up to CDNA2 compatibility.
ROCm is partly coming to Windows for the 7900 XTX/XT and workstation cards this year... sometime. It was originally in the ROCm 5.6 alpha but didn't make it into this release.
I am disappointed the MI50/MI60 is getting support dropped; I found it odd that they dropped the MI25 at ROCm 4.5 and left the other Vegas enabled.
That said, they want a unified architecture for compute, and now only tensor/matrix-math-enabled cards will exist going forward.

I will have to run some benchmarks comparing ResNet-50 on the MI60 vs the MI100 to show the matrix acceleration.
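Side note for anyone reproducing numbers like that: ROCm builds of PyTorch expose devices through the familiar torch.cuda API, so the quickest way to tell which backend a wheel was built against is torch.version.hip. A minimal sketch (the helper name is mine, and it degrades gracefully if torch isn't installed):

```python
# Sketch: distinguish a ROCm (HIP) PyTorch build from a CUDA one.
# On ROCm wheels, devices still appear under the torch.cuda API,
# but torch.version.hip is set instead of torch.version.cuda.
def backend_flavor():
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if getattr(torch.version, "hip", None):
        return f"ROCm/HIP {torch.version.hip}"
    if getattr(torch.version, "cuda", None):
        return f"CUDA {torch.version.cuda}"
    return "CPU-only build"

print(backend_flavor())
```

Handy before benchmarking, since a CPU-only wheel will happily "run" the same script at a fraction of the speed.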