Tuesday, December 6th 2016

AMD Cripples Older GCN GPUs of Async-Compute Support?

AMD allegedly disabled asynchronous compute support on GPUs based on older generations of its Graphics Core Next (GCN) architecture, starting with Radeon Software 16.9.2. With the newer drivers, "Ashes of the Singularity" no longer supports asynchronous compute, a feature that improves the game's performance, on GPUs based on the first-generation GCN architecture, such as the Radeon R9 280X.

"Ashes of the Singularity" benchmarks run by Beyond3D forum members on GCN 1.0 hardware, comparing older drivers to version 16.9.2 shows that the game supports async-compute on the older drivers, and returns improved performance. AMD, on its part, is pointing users to a patch change-list from the developers of "Ashes..." which reads that the game supports DirectX 12 async-compute only on GCN 1.1 (eg: Radeon R9 290) and above.
Source: Reddit

97 Comments on AMD Cripples Older GCN GPUs of Async-Compute Support?

#76
Prima.Vera
Seems like nowadays both nVidia and AMD only support the current and -1 generations, while intentionally crippling performance on older generations. It's just been confirmed, no need to bitch about it. Both companies are doing it.
In all truth, nVidia was worse, crippling performance on the 7xx series while the 9xx series was still their current generation...
Posted on Reply
#77
AsRock
TPU addict
KweeI see a lot of misunderstanding since I posted my findings.

In the first place, I started to look into why GCN 1.0 was not supported by the last Rise of the Tomb Raider patch.

When Maxwell was accused of not supporting Async Compute, someone created a program for testing Async Compute. It worked on GCN 1.0, 1.1 and 1.2 back then.

So I just wanted to verify whether that was still the case, and that's how I found that Async Compute was disabled on the new drivers. Then I posted on Reddit.

After that, many asked me to test a game that supports Async Compute on GCN 1.0, so I tried Ashes of the Singularity. You know the rest. That just confirmed that Async Compute was disabled on the new drivers. Old drivers perform way better because of Async Compute.

DirectX 12, driver 16.3.1, Async Compute off: i.imgur.com/aiV1pSg.png

DirectX 12, driver 16.3.1, Async Compute on: i.imgur.com/CGrb4yM.png

DirectX 12, driver 16.9.2 or newer, Async Compute off: i.imgur.com/yiSSRCE.png

DirectX 12, driver 16.9.2 or newer, Async Compute on: i.imgur.com/Fch5V8w.png
For all we know at this point, it could be that AMD tweaked the drivers and is waiting on the game devs to update their code. Although if that is the case they could have said something, but they are not required to.

Shit, maybe AMD accidentally broke it; who knows for sure yet?
Sempron GuyWell, as you said, the 280X is nothing more than a 7970 GHz Edition. The 290X is a newer architecture. A four-year lifespan for a card is more than enough. AMD is not running a charity, nor do they have the luxury to do so. The effort to still support a four-year-old architecture is best spent elsewhere.
Don't know about that, it all depends. Dropping the 290 from support would be a terrible thing to do at this time.
Posted on Reply
#78
Tatty_Two
Gone Fishing
AquinusIt's an indicator, but it's certainly not definitive, as software can check whether a driver version is below a certain version. For example, he said he tested:

16.3.1 to 16.9.2 is a really big gap, and if it stopped working on a minor version update, it could still be the case that the game checks to make sure certain features are only used when the driver is in a certain range. I don't want to make assumptions about the software, though. The simple fact is that we don't really know. An interesting observation, though, is that the 16.9.2 results sit somewhere between the two 16.3.1 results (async on and off).

Either way, I think more testing is in order to determine whether AMD actually gimped their drivers or not, or whether AMD gimped async compute the way they gimped HDMI. ;)
Hence why I said "he believes". My point is that many seem to be once again jumping to conclusions, some believing it could be the drivers, some thinking not. Your statement earlier led me to believe you thought it was just more anti-AMD FUD (you may be right, of course), however the only actual tangible information we appear to have indicates otherwise. Now, that evidence may be inaccurate, but it seems to have been dismissed by some already without consideration, so when I read the thread all I see is those defending AMD and those not; either way, it's unbalanced.
Posted on Reply
#79
Fluffmeister
KweeI spent all night installing every driver between 16.3.1 and 16.9.2. The break point is driver 16.4.2 (released in April). After this driver, there is no more Async Compute on GCN 1.0. So Nixxes was aware that Async Compute was not active on GCN 1.0: they released the Async Compute patch for Rise of the Tomb Raider in July, specifying that only GCN 1.1 and newer can take advantage of Async Compute.

I received a good amount of results from users all around the world who tested for me, and their results just confirmed my findings. Thank you all.
All credit to you for actually making the effort to test multiple drivers.
Posted on Reply
#80
Aquinus
Resident Wat-man
Tatty_OneHence why I said "he believes". My point is that many seem to be once again jumping to conclusions, some believing it could be the drivers, some thinking not. Your statement earlier led me to believe you thought it was just more anti-AMD FUD (you may be right, of course), however the only actual tangible information we appear to have indicates otherwise. Now, that evidence may be inaccurate, but it seems to have been dismissed by some already without consideration, so when I read the thread all I see is those defending AMD and those not; either way, it's unbalanced.
Sure, this problem is a little harder to gauge than the HDMI one, which was obviously misrepresented. It's very possible that the drivers were gimped, but I think it's also possible that AMD could have done some async-compute magic in the driver outside of the application explicitly using it (hence why the non-async-compute score on the newer driver is halfway in between the non-async and async results on the old driver). So is it really not using async compute, or is it using it differently? We don't know. I do find it interesting, though, that on the newer driver the scores don't change when enabling/disabling async compute, but the results are (in general) higher than the non-async results on older drivers.

Either way, even if something did happen, GCN 1.0 parts are starting to get to the age that many VLIW parts were at when AMD decided to ditch support. It wouldn't surprise me if they actually did remove support to further optimize the driver for GCN 1.1+, but even with a claim like that, we still don't know.

Edit: The only reason the title is okay is because there is a question mark at the end. :p
Posted on Reply
#82
Aquinus
Resident Wat-man
Kweeithardware.pl/testyirecenzje/czy_amd_rzeczywiscie_wykastrowalo_gcn_1_0_z_async_compute_weryfikujemy_doniesienia-1425-1.html
That shows the gains from async compute were already very minor in the first place. The same test would need to be run multiple times to see how close the results are to each other, to determine whether (in this case) it actually gained anything or whether the numbers were just a little different. If the performance loss is 6% and the margin of error is something like 3-5%, then you're not really showing much of a difference between the runs.

Something other than Ashes of the Singularity probably should be used to confirm this. We're putting a lot of faith in a single application to make a very broad claim if we take this at face value.
Posted on Reply
#83
Kwee
AquinusThat shows the gains from async compute were already very minor in the first place. The same test would need to be run multiple times to see how close the results are to each other, to determine whether (in this case) it actually gained anything or whether the numbers were just a little different. If the performance loss is 6% and the margin of error is something like 3-5%, then you're not really showing much of a difference between the runs.

Something other than Ashes of the Singularity probably should be used to confirm this. We're putting a lot of faith in a single application to make a very broad claim if we take this at face value.
Margin of error is between about 0.5% and 2% max.

The gain from Async Compute is between 7% and 10% on the R9 270X, and 5% to 7% on the R9 280X (maybe more with the same drivers as the R9 270X).

The Async Compute implementation in Ashes is very light.

Sure, not that much (images.anandtech.com/doci/9124/Async_Perf.jpg), but higher than that.
Posted on Reply
#85
Aquinus
Resident Wat-man
Kweeimgur.com/Xr0Jhjo

AMD presentation from June 2016; Async Compute retired on GCN 1.0 in April.
I'm still not sure where you're pulling the proof that they retired Async Compute from that image. All I see from it is that they introduced async compute in GCN 1.0 and that it was going to continue to be updated as time progressed, and the same exists in the same image for GCN 1.1. Nothing here says anything about its retirement or about dropping support for anything, only that certain features were going to be updated over time. You really need to stop pulling assumptions from thin air.
Posted on Reply
#86
bug
AquinusI'm still not sure where you're pulling the proof that they retired Async Compute from that image. All I see from it is that they introduced async compute in GCN 1.0 and that it was going to continue to be updated as time progressed, and the same exists in the same image for GCN 1.1. Nothing here says anything about its retirement or about dropping support for anything, only that certain features were going to be updated over time. You really need to stop pulling assumptions from thin air.
Did you miss the part where Kwee tested every driver since March and found the support is not there past April?
Posted on Reply
#87
Aquinus
Resident Wat-man
bugDid you miss the part where Kwee tested every driver since March and found the support is not there past April?
No, I'm just not sure how the image he posted ties into all of this. Once again, how about testing with something other than Ashes of the Singularity? Version ranges can have a lot to do with what a game engine thinks the GPU is capable of. And what about the improved performance without async compute enabled? As I said before, it's possible that the driver could be automagically using it under the hood when it can, as opposed to it just being something that can be turned on. Simply put, there are too many variables to rule out the application or changes in how the driver operates.
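To illustrate the kind of check being described: a game engine can key a feature off the reported driver version and silently leave it off outside the range it was validated against, which from the outside would look like driver-level removal. A rough, hypothetical C++ sketch of that gating logic (the function names and the version range are invented for illustration; nothing here is taken from Ashes or from AMD's driver):

// Hypothetical illustration of driver-version gating in a game engine.
// The names and version strings are invented; the point is only that an
// application can silently disable a feature outside an expected driver range.
#include <cstdio>
#include <string>

// Turn a "major.minor.patch" driver string into a comparable integer key.
static long versionKey(const std::string& v)
{
    int major = 0, minor = 0, patch = 0;
    std::sscanf(v.c_str(), "%d.%d.%d", &major, &minor, &patch);
    return major * 1000000L + minor * 1000L + patch;
}

// Enable the feature only when the reported driver sits inside a range the
// engine was tested against; anything outside is treated as unsupported.
static bool asyncComputeAllowed(const std::string& driverVersion)
{
    const long v = versionKey(driverVersion);
    return v >= versionKey("16.3.1") && v <= versionKey("16.4.2");
}

int main()
{
    std::printf("16.3.1 -> %d\n", asyncComputeAllowed("16.3.1")); // 1: enabled
    std::printf("16.9.2 -> %d\n", asyncComputeAllowed("16.9.2")); // 0: disabled
    return 0;
}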
Posted on Reply
#88
bug
AquinusNo, I'm just not sure how the image he posted ties into all of this. Once again, how about testing with something other than Ashes of the Singularity? Version ranges can have a lot to do with what a game engine thinks the GPU is capable of. And what about the improved performance without async compute enabled? As I said before, it's possible that the driver could be automagically using it under the hood when it can, as opposed to it just being something that can be turned on. Simply put, there are too many variables to rule out the application or changes in how the driver operates.
How about staying on topic? When Sony removed a feature from the PS3 (Linux support), they caught a lot of flak for it. I think it turned into a class action or something.
What AMD seems to have done here is awfully similar. No one can sue, though, since I think async was not an advertised feature; it was enabled down the road. As for "they improved performance to make up for it", that is what Nvidia has done, too, yet people still burn them at the stake because they don't have "proper async".

Personally, I couldn't care less if AMD dropped async support. Yet I cannot help noticing they drop everything they can, as fast as they can (just try to figure out which hardware is supported by which driver and with which kernel under Linux). They're short on resources, and this trend has me worried.
Posted on Reply
#89
Kwee
AquinusNo, I'm just not sure how the image he posted ties into all of this. Once again, how about testing with something other than Ashes of the Singularity? Version ranges can have a lot to do with what a game engine thinks the GPU is capable of. And what about the improved performance without async compute enabled? As I said before, it's possible that the driver could be automagically using it under the hood when it can, as opposed to it just being something that can be turned on. Simply put, there are too many variables to rule out the application or changes in how the driver operates.
The first tool I used to determine whether Async Compute works or not wasn't Ashes of the Singularity. It was a tool developed just for testing Async Compute. Only after that did I test Ashes of the Singularity, which showed a regression in performance. Many tools were tested after that, like D3D12NBodyGravity.
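For what it's worth, the usual way such stand-alone tools decide whether async compute is active is a timing comparison: run a graphics workload alone, a compute workload alone, and then both submitted together on separate queues; if the combined run takes about as long as the longer of the two individual runs rather than their sum, the work overlapped. A minimal, hypothetical C++ sketch of just that decision step (the figures and the function are illustrative, not taken from any of the tools mentioned):

// Illustrative-only sketch of the timing comparison an async-compute test can
// use. In a real tool the millisecond values would come from GPU timestamp
// queries around three runs: graphics alone, compute alone, both together.
#include <algorithm>
#include <cstdio>

// Combined time near max(gfx, compute) -> the workloads overlapped (async
// compute active); combined time near gfx + compute -> they were serialized.
static bool ranConcurrently(double gfxMs, double computeMs, double combinedMs,
                            double toleranceMs = 0.5)
{
    const double overlapped = std::max(gfxMs, computeMs);
    const double serialized = gfxMs + computeMs;
    return (combinedMs - overlapped) < (serialized - combinedMs) + toleranceMs;
}

int main()
{
    // Made-up numbers, purely for illustration.
    std::printf("overlapped run: %d\n", ranConcurrently(10.0, 8.0, 11.0)); // 1
    std::printf("serialized run: %d\n", ranConcurrently(10.0, 8.0, 17.5)); // 0
    return 0;
}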
Posted on Reply
#90
bug
KweeThe first tool I used to determine whether Async Compute works or not wasn't Ashes of the Singularity. It was a tool developed just for testing Async Compute. Only after that did I test Ashes of the Singularity, which showed a regression in performance. Many tools were tested after that, like D3D12NBodyGravity.
Don't bother. You literally drew him a picture and he still (pretends he) doesn't get it.
Posted on Reply
#91
Aquinus
Resident Wat-man
KweeThe first tool I used to determine whether Async Compute works or not wasn't Ashes of the Singularity. It was a tool developed just for testing Async Compute. Only after that did I test Ashes of the Singularity, which showed a regression in performance. Many tools were tested after that, like D3D12NBodyGravity.
If that's the case, why aren't we seeing the results of those as well, to make this more definitive rather than treating it as speculation?
bugDon't bother. You literally drew him a picture and he still (pretends he) doesn't get it.
A picture that shows async compute will get updates over time and says nothing about retiring it? Okay, buddy.
Posted on Reply
#92
Kwee
AquinusIf that's the case, why aren't we seeing the results of those as well, to make this more definitive rather than treating it as speculation?

A picture that shows async compute will get updates over time and says nothing about retiring it? Okay, buddy.
Just click on the source, mate; I'm not going to hold your hand and show you what's going on. If you don't want to know, then be my guest. img4.hostingpics.net/pics/489141lestroissinges3.jpg
Posted on Reply
#93
Aquinus
Resident Wat-man
KweeJust click on the source, mate; I'm not going to hold your hand and show you what's going on. If you don't want to know, then be my guest. img4.hostingpics.net/pics/489141lestroissinges3.jpg
I did. Maybe you need to read my post again instead of being an arrogant ass.
AquinusA picture that shows async compute will get updates over time and says nothing about retiring it? Okay, buddy.
Posted on Reply
#94
flame21
KweeJust click on the source, mate; I'm not going to hold your hand and show you what's going on. If you don't want to know, then be my guest. img4.hostingpics.net/pics/489141lestroissinges3.jpg
Hi Kwee,
I think we should all thank you for your tests.

I'm very disappointed to see AMD pulling such a bad trick. If the new driver is simply not optimized for older GCN 1.0 cards, I can understand. But the functionality was already there, and now it's disabled by the new driver? That's a whole other story.
Nvidia used to do that; I was disappointed and came to AMD. Now this has happened, so the next time I buy a new video card I should think again. Who knows, maybe one day AMD will secretly disable another function on another card because they think it is "old"?
Posted on Reply
#95
AsRock
TPU addict
Kweeithardware.pl/testyirecenzje/czy_amd_rzeczywiscie_wykastrowalo_gcn_1_0_z_async_compute_weryfikujemy_doniesienia-1425-1.html

community.amd.com/message/2765237

community.amd.com/message/2765216
Well, one guy says AMD is looking into it (2nd link). If they did say that, it would mean they did not disable it intentionally; however, I have no idea where that was said.

As for the 3rd link, it shows that it is working, but the user failed to include what card he was using.
Posted on Reply
#96
flame21
AquinusNo, I'm just not sure how the image he posted ties into all of this. Once again, how about testing with something other than Ashes of the Singularity? Version ranges can have a lot to do with what a game engine thinks the GPU is capable of. And what about the improved performance without async compute enabled? As I said before, it's possible that the driver could be automagically using it under the hood when it can, as opposed to it just being something that can be turned on. Simply put, there are too many variables to rule out the application or changes in how the driver operates.
Is there another way to test, other than installing the whole Ashes of the Singularity, to quickly find out whether a new driver version fixed async or not?
Posted on Reply
#97
Broudka
Issue fixed in Crimson 17.2.1
Posted on Reply