
QA Consultants Determines AMD's Most Stable Graphics Drivers in the Industry

So, thanks to mining, I finally bought some Radeon cards again for the first time since the good old AGP era. I now have 290/390 cards and must say the 15.x and 16.x drivers are pretty stable compared to the multiple driver versions I've gone through for my 980/1070. I also have some old Quadro 2000 cards, and they are rock stable in any scenario I put them in. That stability of the Radeons and Quadros comes down to the memory management inside the drivers, especially when your physical VRAM is eaten up by applications or can't be used due to silly limitations like DX9's restriction to only use 4GB.

The good:
Radeons and Quadros can split workload and VRAM consumption correctly if multiple applications try to use the GPU (multiple browsers + YT video + video encoding + online gaming + mining isn't a problem, just slow); GeForces can't handle that

Radeons and Quadros render games fairly; GeForces often cheat (even in the simplest games...)

The bad:
Back in the good old days of AGP and early PCI-E cards, Radeons had so many issues with modeling and video editing software that I firmly decided to switch to green. Despite driver quality and software changes that have made AMD and nVidia products equal in general usage, Radeons are still ill suited for most prosumer tasks: OpenCL is just a word, while CUDA is used almost everywhere.
 
great, too bad "Radeon Software" is one of the worst GUIs I have ever worked with
 
Shouldn't the fails be 21 vs 36?

There are a lot of fails that are not really fails but runs marked 'Not attempted (due to hang)'. It just so happens that AMD cards have 10 of these and Nvidia cards 40. These don't represent quality so much as luck: it's better if the driver hangs later in the automated test run.

I would be much more interested in why the drivers failed.
 
I know I shouldn't be so skeptical. I am sure this is probably on the up and up. But AMD paid for this research. What would the outcome be if Nvidia paid for it?
As if we don't know...

Still, wth am I reading here? The tests are not described at all. One pass/fail per day? What is this? Were some tests passing one day only to fail the day after? The whole report is mostly 100 pages of listed configurations :wtf:

Edit: I see now, they cherry-picked the stress test out of the whole suite. Well, if that's what floats AMD's boat...
 
Seems like bullshit to me
"Mission-critical test", so you just (randomly) grab newest drivers from both side? You'd be fired if you do such bullshit on a corporate server.
The configuration of a production environment is strictly controlled in every corporation. Configurations must be intensely tested prior to installation, and tweaked, if necessary. After a configuration passed such tests, this is where "stability" kicks in -- is such success steadily reproducible in the following runs?

What these researchers did, IMHO, is identify 6 erroneous configurations and the superior out-of-the-box usability of AMD cards. Nothing to do with "mission-critical" stability.
 
Well, they used Gigabyte's NVIDIA cards, that explains a lot :roll::roll: : SOURCE
[image: lol_giga.png]

On the other hand they used MSI RX580 GAMING and ASUS STRIX RX560.

Also, LegitReviews asked QA Consultants whether their samples were supplied by AMD, and the answer is YES: http://www.legitreviews.com/amd-cla...but-supplied-all-graphics-cards-tested_206705
 
Kinda strange, since the 18.7.1 drivers are broken for my system. The HDMI driver is corrupted or something. Tried re-downloading, tried clean uninstalls... reboots and re-installs. Then on a whim I downloaded 18.5.1 and ran that installer... So far, 100% fine.
 
I've been saying this about Nvidia's driver team for a while. Thank goodness we have @qubit to beta test them for us before I try them! :laugh:

Last year I made another foray into AMD land and had a 480. My experience was that AMD has problems with drivers too. Their driver would frequently crash, and if I was lucky it would carry on with the basic Windows version.

So, they both need work, in my experience.

Edit: yes, I know my experience is anecdotal, not scientific.

Ask Vanguard dude.
 
No wonder, since nVidia releases a driver fix after EACH release...
 
Kinda strange, since the 18.7.1 drivers are broken for my system. The HDMI driver is corrupted or something. Tried re-downloading, tried clean uninstalls... reboots and re-installs. Then on a whim I downloaded 18.5.1 and ran that installer... So far, 100% fine.

18.7.1 is a beta driver; don't download beta drivers unless you already have issues >_<
 
My question about all this, since it's not said in the story: what kind of software were they running for these tests? Did AMD have a say in what software was used during the tests? The answer to that second question would tell a lot about the results, if that is the case.

It was never proven that CTS was paid by Intel; stop making shit up just to spite people. Also, FCAT is just a tool; I've literally never heard any complaint about it.
FCAT only did what all other FPS counters did, which was put an overlay on video frames coming out of the game engine before they are sent to the GPU. In reality, that tool helped AMD solve the CrossFire stuttering issue that had plagued them for many years without any sign of being fixed.

AMD came out on top in an AMD-funded test... as Joe Pesci said in "My Cousin Vinny": "Oh, there's a $#*&^%$ surprise."
It's like being shocked that an xxx-sponsored game performs better on xxx cards.
Also, LegitReviews asked QA Consultants whether their samples were supplied by AMD, and the answer is YES: http://www.legitreviews.com/amd-cla...but-supplied-all-graphics-cards-tested_206705
The part that does jump out about that story is quoted below. Did they just click uninstall on the drivers and then install the Nvidia drivers? In the off chance, that could cause crashing issues. Besides that, the fact that AMD supplied the cards definitely puts the test in doubt, as they could do what both sides do when sending cards to reviewers and cherry-pick the best of the bunch. Most failures came out of one card for Nvidia, which could just mean the card was defective to start with. Without a wider range of testing, say 100 cards of each model, this whole test should be taken with a grain of salt.

The tests were done on twelve machines over a 12-day period, with the graphics cards in the AMD systems being switched to the NVIDIA systems at the halfway point. At the end of the testing, AMD products passed 93% of scheduled tests, whereas the aggregate of NVIDIA products passed 82% of scheduled tests.
 
18.7.1 is a beta driver; don't download beta drivers unless you already have issues >_<

Well, the whole point of downloading a beta driver is to test for bugs, lol. I was being slightly sarcastic, though. It kinda left me stumped. I'll try a second build of the 18.7.1 drivers in a week or so. (AMD has a habit of updating their driver packages on the site without changing the version #.)

Not that it bothers me. Although, I am glad they are going to keep Vega around for a bit longer on the AI/Compute side of things. Us Vega adopters might get some of that R&D thrown our way.
 
The part that does jump out about that story is quoted below. Did they just click uninstall on the drivers and then install the Nvidia drivers? In the off chance, that could cause crashing issues. Besides that, the fact that AMD supplied the cards definitely puts the test in doubt, as they could do what both sides do when sending cards to reviewers and cherry-pick the best of the bunch. Most failures came out of one card for Nvidia, which could just mean the card was defective to start with. Without a wider range of testing, say 100 cards of each model, this whole test should be taken with a grain of salt.

Read the PDF.

They had 12 systems. Runs 1-6 were done, then the GPUs were swapped (AMD/Nvidia) to eliminate system bias, then runs 7-12 were done.

If you're alluding to driver overlap issues, they would have affected both, if there were any.

Most of the GeForce issues happened before the swap, on runs 1-6. Funnily enough, after the swap the 1060 only had one fail.
 
Based on these results, both companies should fire their professional driver teams. It makes it even funnier that the report is being shared by AMD.
Fire them, no, but they need to take a second look at the lower-end cards to see why they have a ridiculously high failure rate.

OpenCL is just a word, while CUDA is used almost everywhere.
Because NVIDIA decided not to implement any more than OpenCL 1.2 to force developers to use CUDA on their hardware; meanwhile, AMD supports OpenCL 2.0.

There are a lot of fails that are not really fails but runs marked 'Not attempted (due to hang)'. It just so happens that AMD cards have 10 of these and Nvidia cards 40. These don't represent quality so much as luck: it's better if the driver hangs later in the automated test run.
Drivers should never hang. It's just a different kind of failure.
 
Because NVIDIA decided not to implement any more than OpenCL 1.2 to force developers to use CUDA on their hardware; meanwhile, AMD supports OpenCL 2.0.
The problem doesn't seem to be who implements it, but rather OpenCL itself. I have seen tests where Nvidia's OpenCL 1.2 beats AMD's OpenCL 2.0. Maybe the whole thing is hard to implement correctly (something along Vulkan/DX12 lines)?
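For what it's worth, you don't have to trust spec sheets to see which version each vendor's driver actually exposes. Below is a minimal sketch in plain C against the standard OpenCL host API (it assumes the OpenCL headers and an ICD loader are installed and that you link with -lOpenCL; error checking is omitted). On current drivers the NVIDIA device typically reports an "OpenCL 1.2 ..." string, while GCN Radeons report "OpenCL 2.0 ...":

```c
/* Minimal sketch: print the OpenCL version string each GPU driver reports.
 * The exact strings (e.g. "OpenCL 1.2 CUDA", "OpenCL 2.0 AMD-APP") are
 * driver-dependent. */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;  /* this platform has no GPU devices */

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256], version[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(version), version, NULL);
            printf("%s -> %s\n", name, version);  /* driver-reported OpenCL version */
        }
    }
    return 0;
}
```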
 
I can't find any recent, neutral benchmark results for OpenCL.

OpenCL isn't particularly hard to implement. It's more that with CPUs having so many idle cores these days, it makes more sense to multithread the code on the CPU.
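Just to illustrate the "use the idle CPU cores instead" option: here is a tiny sketch of the same kind of data-parallel loop expressed with OpenMP rather than an OpenCL kernel. The scale() helper and its arguments are made up for the example; compile with an OpenMP-capable compiler (e.g. gcc -fopenmp):

```c
/* Sketch: a data-parallel loop multithreaded on the CPU with OpenMP,
 * instead of going through an OpenCL device, context and buffers. */
#include <stddef.h>
#include <omp.h>

void scale(float *data, size_t n, float factor)
{
    /* Each available CPU core picks up a chunk of the loop; no device
     * setup, no buffer copies, no GPU driver involved. */
    #pragma omp parallel for
    for (long i = 0; i < (long)n; ++i)
        data[i] *= factor;
}
```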
 
OpenCL and CUDA performance depends on the hardware configuration. The same code can run worse on more powerful hardware simply because things such as the number of ALUs per CU or the register file/cache size have not been taken into account properly. And there are many other architectural differences that complicate matters, such as the fact that GCN has additional scalar ALUs that can't be explicitly used through OpenCL (there's a quick sketch of what I mean at the end of this post).

I can't find any recent, neutral benchmark results for OpenCL.

Well , what I wrote above is the reason why.

it makes more sense to multithread the code on the CPU.

What you do on a CPU isn't really the same kind of multithreading you typically try to implement on a GPU. It makes sense to implement each task on whichever of the two it fits well.
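To make the architecture point above concrete, here is a hedged sketch (plain C, standard OpenCL host API) of asking the driver how a particular kernel wants to be scheduled on a particular device, instead of hard-coding a work-group size tuned for one vendor. The print_tuning_hints() helper is hypothetical and assumes you already have a built cl_kernel and its cl_device_id; error checking is omitted:

```c
/* Sketch: query per-device and per-kernel scheduling characteristics. */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>

void print_tuning_hints(cl_device_id dev, cl_kernel kernel)
{
    cl_uint compute_units = 0;
    cl_ulong local_mem = 0;
    size_t max_wg = 0, preferred_multiple = 0;

    /* Device-wide characteristics: number of CUs/SMs and local (shared)
     * memory available per CU. */
    clGetDeviceInfo(dev, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(compute_units), &compute_units, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_LOCAL_MEM_SIZE,
                    sizeof(local_mem), &local_mem, NULL);

    /* Kernel-specific limits: this kernel's register pressure caps max_wg,
     * and preferred_multiple is effectively the wavefront/warp width. */
    clGetKernelWorkGroupInfo(kernel, dev, CL_KERNEL_WORK_GROUP_SIZE,
                             sizeof(max_wg), &max_wg, NULL);
    clGetKernelWorkGroupInfo(kernel, dev, CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
                             sizeof(preferred_multiple), &preferred_multiple, NULL);

    printf("CUs: %u, local mem: %llu bytes, max WG for this kernel: %zu, preferred multiple: %zu\n",
           compute_units, (unsigned long long)local_mem, max_wg, preferred_multiple);
}
```

The preferred work-group size multiple is typically 64 on GCN and 32 on NVIDIA, which is exactly the kind of difference that makes one hard-coded launch configuration a good fit on one card and a poor one on another.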
 
This picture is pretty telling:
[image: embed.png]

The Radeon cards are being held back by the memory subsystem. In tests where memory performance isn't the bottleneck, the Radeon cards do well. In instances where it is, they perform poorly.

Still, this isn't my point. There's an open standard out there for compute and NVIDIA deliberately doesn't update it because they would rather promote their proprietary solution (just like G-SYNC).


OpenCL 2.0 features a new shared memory subsystem that vastly accelerates memory accesses:
https://www.anandtech.com/show/7161...pengl-44-opencl-20-opencl-12-spir-announced/3
I wouldn't be surprised if AMD jumped on OpenCL 2.0 for that reason and NVIDIA are dragging their heels because it makes CUDA look bad.

Not seeing any benchmarks that compare OpenCL 1.2 and OpenCL 2.0 performance.
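For reference, this is roughly what the OpenCL 2.0 shared virtual memory path mentioned above looks like on the host side: one allocation visible to both host and device, no clCreateBuffer and no explicit copies. A minimal sketch in C, assuming a 2.0-capable cl_context, cl_command_queue and cl_kernel already exist and that the kernel's first argument is a global float pointer; run_with_svm() is a made-up helper and error checking is omitted:

```c
/* Sketch: coarse-grained shared virtual memory (OpenCL 2.0). */
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>

void run_with_svm(cl_context ctx, cl_command_queue q, cl_kernel k, size_t n)
{
    /* One allocation shared between host and device. */
    float *data = (float *)clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);

    /* Coarse-grained SVM still requires map/unmap around host access. */
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float), 0, NULL, NULL);
    for (size_t i = 0; i < n; ++i)
        data[i] = (float)i;
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);

    /* The kernel takes the raw pointer directly -- no cl_mem buffer objects. */
    clSetKernelArgSVMPointer(k, 0, data);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);

    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, data, n * sizeof(float), 0, NULL, NULL);
    /* ... read results on the host ... */
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);
    clFinish(q);

    clSVMFree(ctx, data);
}
```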
 
This picture is pretty telling:
[image: embed.png]

The Radeon cards are being held back by the memory subsystem. In tests where memory performance isn't the bottleneck, the Radeon cards do well. In instances where it is, they perform poorly.

Still, this isn't my point. There's an open standard out there for compute and NVIDIA deliberately doesn't update it because they would rather promote their proprietary solution (just like G-SYNC).


OpenCL 2.0 features a new shared memory subsystem that vastly accelerates memory accesses:
https://www.anandtech.com/show/7161...pengl-44-opencl-20-opencl-12-spir-announced/3
I wouldn't be surprised if AMD jumped on OpenCL 2.0 for that reason and NVIDIA are dragging their heels because it makes CUDA look bad.

Not seeing any benchmarks that compare OpenCL 1.2 and OpenCL 2.0 performance.
These very benchmarks use OpenCL 2.0 on AMD hardware. It says so on the first page (in tiny text, of course). They don't compare OpenCL 1.2 to 2.0 on the same card, but they do compare OpenCL 2.0 on AMD with OpenCL 1.2 on Nvidia.
 
Talking about the picture? What a terrible test. All cards should have been tested on 1.2. The bad memory performance could easily be down to 2.0 code that isn't optimized for it.
 
Talking about the picture? What a terrible test. All cards should have been tested on 1.2. The bad memory performance could easily be down to 2.0 code that isn't optimized for it.
All I'm saying is, that's all AMD can squeeze out of OpenCL 2.0.
Feel free to browse that website; the guy has done tons of OpenCL tests over time. I'm at work now, so I posted the first thing I could find and can't look more closely.
 
If you read these results, you'd think driver stability was actually a big problem, and it really isn't. They make it sound like crashes are happening constantly, but using cards from both camps on a daily basis, I can say that in reality crashing is really uncommon. I can't even remember the last crash I had, that's how long ago it happened.

And stability is only part of what makes a good driver: there are also annoying bugs and how quickly they are fixed, how often drivers are updated (it can be too often as well as not often enough), etc. And again, as someone who uses drivers from both sides daily, I'm going to say right now that they are both about even. I honestly cannot say I prefer one over the other as they are today.
 