Monday, February 18th 2019

AMD Radeon VII Retested With Latest Drivers

Just two weeks ago, AMD released their Radeon VII flagship graphics card. It is based on the new Vega 20 GPU, which is the world's first graphics processor built using a 7 nanometer production process. Priced at $699, the new card offers performance levels 20% higher than Radeon RX Vega 64, which should bring it much closer to NVIDIA's GeForce RTX 2080. In our testing we still saw a 14% performance deficit compared to RTX 2080. For the launch-day reviews AMD provided media outlets with a press driver dated January 22, 2019, which we used for our review.

Since the first reviews went up, people in online communities have been speculating that these were early drivers and that newer drivers would significantly boost the performance of Radeon VII, making up lost ground against the RTX 2080. There's also the near-mythical "fine wine" phenomenon, in which the performance of Radeon GPUs keeps improving incrementally over time. We've put these theories to the test by retesting Radeon VII with AMD's latest Adrenalin 2019 19.2.2 drivers, using our full suite of graphics card benchmarks.
In the chart below, we show the performance deltas compared to our original review; for each title, three resolutions are tested: 1920x1080, 2560x1440, and 3840x2160 (in that order).



Please do note that these results include performance gained from the washer mod and thermal paste change we had to do when reassembling the card. These changes reduced hotspot temperatures by around 10°C, allowing the card to boost a little higher. To verify which performance improvements were due to the new driver and which were due to the thermal changes, we first retested the card using the original press driver (with the washer mod and new TIM). The result was a +0.2% performance improvement.

Using the latest 19.2.2 drivers added another +0.45% on top of that, for a total improvement of +0.653%. Taking a closer look at the results, we can see that two specific titles have seen significant gains from the new driver version: Assassin's Creed Odyssey and Battlefield V both achieve improvements of several percent, so it looks like AMD has worked some magic in those games to unlock extra performance. The remaining titles see small but statistically significant gains, suggesting there are some "global" tweaks AMD can implement to improve performance across the board; unsurprisingly, these gains are smaller than the title-specific optimizations.
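To illustrate the arithmetic behind these figures, here is a minimal, hypothetical sketch of how per-title/per-resolution deltas and a combined improvement could be computed. The FPS values, titles, and helper names below are invented placeholders for illustration only, not our actual benchmark data or tooling:

```python
# Hypothetical sketch: per-config deltas from matched FPS results
# (original review vs. retest). All numbers are invented placeholders.
results = {
    # (title, resolution): (original_review_fps, retest_fps)
    ("Assassin's Creed Odyssey", "3840x2160"): (45.0, 47.1),
    ("Battlefield V", "3840x2160"): (60.0, 62.0),
    ("The Witcher 3", "3840x2160"): (55.0, 55.2),
}

def delta_pct(old: float, new: float) -> float:
    """Relative performance change in percent."""
    return (new / old - 1.0) * 100.0

per_config = {key: delta_pct(old, new) for key, (old, new) in results.items()}
overall = sum(per_config.values()) / len(per_config)  # simple average across configs

for (title, res), d in per_config.items():
    print(f"{title} @ {res}: {d:+.2f}%")
print(f"Overall: {overall:+.2f}%")

# The two retest steps described above (thermal changes with the press driver,
# then the new driver) combine multiplicatively, e.g. +0.2% followed by +0.45%:
combined = ((1 + 0.002) * (1 + 0.0045) - 1) * 100
print(f"Combined: {combined:+.2f}%")  # roughly +0.65%
```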

Looking further ahead, it seems plausible that AMD can increase the performance of Radeon VII down the road, though we have doubts that enough optimizations can be found to match the RTX 2080, unless a lot of developers suddenly jump on the DirectX 12 bandwagon (which seems unlikely). It's also a question of resources: AMD can't afford to spend time and money micro-optimizing every single title out there. Rather, the company seems to be doing the right thing: investing in optimizations for big, popular titles like Battlefield V and Assassin's Creed. Given how many new titles are coming out using Unreal Engine 4, and how much AMD is lagging behind in those titles, I'd focus on optimizations for UE4 next.

182 Comments on AMD Radeon VII Retested With Latest Drivers

#26
EarthDog
I've heard that the crashing was fixed!

Seems like those who were begging for a retest over performance issues didn't leave the wine in the cask long enough. :)

Edit: Hilarious this gets downvoted. Another polarizing toxic fanboy goes on ignore at tpu. I wonder if there is a max amount of users one can ignore here...lol
Posted on Reply
#27
medi01
notb: vendetta
You just need to stop projecting. It isn't hard.
notb: It refers to AMD not being able to provide proper drivers at the time of launch. So with Nvidia you get that extra 10% the first day, and with AMD you have to wait.
I mean, just how shameless can it become, seriously?
You got what performance upfront, when 960 beat 780Ti ($699), come again?

AMD perf improves over time; nVidia falls behind not only AMD, but also its own newer cards.
As the card you bought gets older, NV doesn't give a flying sex act.
It needs quite a twisting to turn this into something positive.

What's rather unusual this time is AMD being notably worse on the perf/$ front, at least with the game list picked at TP.

290x was slower than 780Ti at launch, but it cost $549 vs $699, so there goes "I get 10% at launch" again.
Posted on Reply
#28
cucker tarlson
medi01: AMD perf improves over time; nVidia falls behind not only AMD, but also its own newer cards.
factually wrong.
EarthDog: I've heard that the crashing was fixed!

Seems like those who were begging for a retest over performance issues didn't leave the wine in the cask long enough. :)

Edit: Hilarious this gets downvoted. Another polarizing toxic fanboy goes on ignore at tpu. I wonder if there is a max amount of users one can ignore here...lol
as one guy in the adoredtv thread said, you can "speak only positively of amd or piss off". I guess that involves disproved theories too, as long as they're flattering to amd or harming to nvidia. Who cares if it's a debunked theory from 5 years ago. If there's an option to leave venomous comments and binge-downvote people, they're taking that opportunity.
Posted on Reply
#29
bug
medi01: Remind us about the 5 generations we had between the 290x release and today.
200 series -> 300 series -> 400 series -> 500 series -> Vega
Posted on Reply
#30
cucker tarlson
bug: 200 series -> 300 series -> 400 series -> 500 series -> Vega
Fury and another rx480 refresh (rx590) in between them.
Posted on Reply
#31
las
medi01: You mean, when you buy 780Ti, you expect it to be beaten by 960 later on, right?


Why the hell not!
You might discover interesting things, if you check how results were changing over time in TP charts.
Fury used to beat 980Ti only at 4K at launch (and even then, barely); at 1440p it was about 10% behind:
www.techpowerup.com/reviews/AMD/R9_Fury_X/31.html


And where are we now:
www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_64/31.html


From 10% behind to several % ahead. Not bad, is it?

And one more point: FineWine in general refers to the graceful aging of AMD cards, especially in contrast with what nVidia does to its customers.
It's not about some magic dust coming into play later on, or AMD purposefully crippling it at launch (the way a certain other company does).
Haha, you seem to forget that AMD called Fury X an overclocker's dream, while 980 Ti performs ~20% faster out of the box for custom cards and easily gains 10-20% more performance on top of that. A fully overclocked 980 Ti custom card completely wrecks a Fury X today. 2GB more VRAM also helps with this.

Performance per watt on recent AMD top-end cards has been a disaster too. Vega 64 does not beat the 1080, despite pulling 100% more power.

Radeon VII is just as bad.

The only reason the 290 series aged well was that the 390 series was a rebrand and AMD kept optimizing them to try and keep up with Nvidia. Notoriously bad performance per watt on both series.

Once again, 7970 was the last good AMD top-end GPU.
Posted on Reply
#32
SIGSEGV
Given how many new titles are coming out using Unreal Engine 4, and how much AMD is lagging behind in those titles, I'd focus on optimizations for UE4 next.
hahaha....
cucker tarlson: factually wrong.
but I hate their (NVIDIA) approach of reducing performance of older GPUs through driver updates.
That's beyond my comprehension.
Posted on Reply
#33
medi01
cucker tarlson: factually wrong.
In your green face, from page 1:

las: Haha, but excuses...
There's no need to lose face over it, Huang is not worth it.
bug: 200 series -> 300 series -> 400 series -> 500 series -> Vega
cucker tarlson: Fury and another rx480 refresh (rx590) in between them.
The 200 series can't count as "between", can it?
Should I count the 500 series twice? I don't know why, but there might be green reasons.
I would count rebrands as what they are: zero.

Now, if you'd stop twisting reality, we'd have to mention:

=> Polaris => Vega => Vega 7nm

That's all the releases we had between the 290x and now. On the green side of things:

=> Pascal => [.... Volta never materialized in consumer market....] => Turing
Posted on Reply
#34
las
medi01: In your green face, from page 1:




There's no need to lose face over it, Huang is not worth it.
Excuses? It's a fact that 980 Ti wrecks Fury X today.

Every single person with knowledge of GPUs knows that the 980 Ti reference performs MUCH worse than custom cards out of the box (yet even the reference can make Fury X look weak post-OC).

www.techpowerup.com/reviews/Gigabyte/GTX_980_Ti_G1_Gaming/33.html
www.techpowerup.com/reviews/MSI/GTX_980_Ti_Lightning/26.html

35-40% OC gain in the end, over 980 Ti reference. Completely wrecks Fury X at the end of the day.

Meanwhile Fury X can gain 3% from OC, and bumps up the power usage like crazy tho. OVERCLOCKERS DREAM!!!
Posted on Reply
#35
notb
medi01: AMD perf improves over time; nVidia falls behind not only AMD, but also its own newer cards.
But is this important? With Nvidia you get a lot more performance up front.
Imagine the amount of "finewine" that has to happen until Radeon VII matches Turing on efficiency. They would have to make it run on happy thoughts (which, I imagine, you have in spades).

And sure, since Nvidia optimizes for their newest architecture, an older product may become a little slower.
It would look exactly the same with AMD, if they actually changed the architecture from time to time.
But since they're still developing GCN, it's inevitable that older cards will get better over time.
We may see this change when they switch to a new architecture (if we live long enough, that is...).
medi01: As the card you bought gets older, NV doesn't give a flying sex act.
They made it very good at launch. I'm fine with that. I buy every product as is. I have no guarantee of future improvements and I don't assume they will happen.

What other durable goods that you buy become better during their lifetime? Why do you think this should be happening?
Do your shoes become better? Furniture? Cars?

When my car is recalled for repair, I'm quite mad that they didn't find the issue earlier. These are usually issues that shouldn't make it through QC.
I imagine this is what you think: "what a lovely company it is: I already paid them for the car, but they keep fixing all these fatal flaws!".
medi01: It needs quite a twisting to turn this into something positive.
Seriously, you'll lecture me on twisting?
Posted on Reply
#36
EarthDog
SIGSEGV: I hate their (NVIDIA) approach of reducing performance of older GPUs through driver updates.
That's beyond my comprehension.
I thought this was the part that was debunked... I don't recall where, but I do recall seeing this wasn't true after it was brought up years ago.

AMD has done a solid job with their drivers (though this release was quite awkward at best for some) and results do improve with time in some titles (as they do with nvidia)... not all. This is also in part due to the stagnant GCN architecture that prevailed for several generations.

Anyway, I'm happy the issues from day1 are mitigated....time will tell about performance improvements..
Posted on Reply
#37
medi01
notb: But is this important?
In the goddamn context of "is FineWine real", IT'S THE FREAKING POINT, not simply "important".
notb: Imagine the amount of "finewine" that has to happen until Radeon VII matches Turing on efficiency.
That doesn't make FineWine "a myth".
Last time I checked, on a rather AMD-unfriendly TP, the power gap was 25%, so, uh, well, there is that.

But you are missing the point: for FineWine(tm) to work, it just needs to get better over time; there are no particular milestones it needs to beat.

There is no price parity between the 2080 and the VII; the latter comes with 3 games and twice the RAM.
notb: And sure, since Nvidia optimizes for their newest architecture, an older product may become a little slower.
I'm glad 960 beating $699 780Ti is justifiable.
notb: They made it very good at launch.
That's a dead horse, why not just leave it there?
The 290x was a bit behind, but it was $549 vs $699; there was nothing "but better" about it. You paid 25%+ more for about 10% more perf at launch, and that advantage gradually disappeared.
Posted on Reply
#38
notb
SIGSEGV: but I hate their (NVIDIA) approach of reducing performance of older GPUs through driver updates.
That's beyond my comprehension.
This is an accusation you could take to court and become a millionaire. Can you prove it?
Posted on Reply
#39
ValenOne
INSTG8R: Many on that list are AMD games. FC5, AC, and a couple more I can't be arsed to install and confirm.
From www.techpowerup.com/reviews/AMD/Radeon_VII/

Assassin's Creed Origins (NVIDIA Gameworks, 2017)

Battlefield V RTX (NVIDIA Gameworks, 2018)

Civilization VI (2016)

Darksiders 3 (NVIDIA Gameworks, 2018), old game remaster, where's Titan Fall 2.

Deus Ex: Mankind Divided (AMD, 2016)

Divinity Original Sin II (NVIDIA Gameworks, 2017)

Dragon Quest XI (Unreal 4 DX11, large NVIDIA bias, 2018)

F1 2018 (2018), Why? Microsoft's Forza franchise is larger than this Codemaster game.

Far Cry 5 (AMD, 2018)

Ghost Recon Wildlands (NVIDIA Gameworks, 2017), missing Tom Clancy's The Division

Grand Theft Auto V (2013)

Hellblade: Senuas Sacrif (Unreal 4 DX11, NVIDIA Gameworks)

Hitman 2

Monster Hunter World (NVIDIA Gameworks, 2018)

Middle-earth: Shadow of War (NVIDIA Gameworks, 2017)

Prey (DX11, NVIDIA Bias, 2017 )

Rainbow Six: Siege (NVIDIA Gameworks, 2015)

Shadows of Tomb Raider (NVIDIA Gameworks, 2018)

SpellForce 3 (NVIDIA Gameworks, 2017)

Strange Brigade (AMD, 2018),

The Witcher 3 (NVIDIA Gameworks, 2015)

Wolfenstein II (2017, NVIDIA Gameworks), Results different from www.hardwarecanucks.com/forum/hardware-canucks-reviews/78296-nvidia-geforce-rtx-2080-ti-rtx-2080-review-17.html when certain Wolfenstein II map exceeded RTX 2080'
Posted on Reply
#40
medi01
notb: This is an accusation you could take to court and become a millionaire.
Nope:

Apple: Yes, we're slowing down older iPhones
rvalencia: From www.techpowerup.com/reviews/AMD/Radeon_VII/

Assassin's Creed Origins (NVIDIA Gameworks, 2017)

Battlefield V RTX (NVIDIA Gameworks, 2018)

Civilization VI (2016)

Darksiders 3 (NVIDIA Gameworks, 2018), old game remaster, where's Titan Fall 2.

Deus Ex: Mankind Divided (AMD, 2016)

Divinity Original Sin II (NVIDIA Gameworks, 2017)

Dragon Quest XI (Unreal 4 DX11, large NVIDIA bias, 2018)

F1 2018 (2018), Why? Microsoft's Forza franchise is larger than this Codemaster game.

Far Cry 5 (AMD, 2018)

Ghost Recon Wildlands (NVIDIA Gameworks, 2017), missing Tom Clancy's The Division

Grand Theft Auto V (2013)

Hellblade: Senuas Sacrif (Unreal 4 DX11, NVIDIA Gameworks)

Hitman 2

Monster Hunter World (NVIDIA Gameworks, 2018)

Middle-earth: Shadow of War (NVIDIA Gameworks, 2017)

Prey (DX11, NVIDIA Bias, 2017 )

Rainbow Six: Siege (NVIDIA Gameworks, 2015)

Shadows of Tomb Raider (NVIDIA Gameworks, 2018)

SpellForce 3 (NVIDIA Gameworks, 2017)

Strange Brigade (AMD, 2018),

The Witcher 3 (NVIDIA Gameworks, 2015)

Wolfenstein II (2017, NVIDIA Gameworks), Results different from www.hardwarecanucks.com/forum/hardware-canucks-reviews/78296-nvidia-geforce-rtx-2080-ti-rtx-2080-review-17.html when certain Wolfenstein II map exceeded RTX 2080'
I colored things.
And wow.

Isn't Hitman an AMD title though?
Posted on Reply
#41
cucker tarlson
medi01: In your green face, from page 1:

great. you've successfully proved that changing the testing suite changes the result. great sleuthing.

and 980ti is still a faster card

by 6% in pcgh's review (and this is stock)

www.pcgameshardware.de/Radeon-VII-Grafikkarte-268194/Tests/Benchmark-Review-1274185/2/

by even more in the computerbase rangliste - no 980ti here, but Fury X does very poorly

www.computerbase.de/thema/grafikkarte/rangliste/

also, calm down, you're having a fanboy tantrum all over this thread.
Posted on Reply
#42
notb
medi01: I'm glad 960 beating $699 780Ti is justifiable.
Well, after a product is launched, one company starts to develop a new, much faster successor, and the other company starts to think about how the already launched product works.
It's good we have a choice, right?

But I agree with you in this case: Maxwell was awesome.
medi01: The 290x was a bit behind, but it was $549 vs $699; there was nothing "but better" about it. You paid 25%+ more for about 10% more perf at launch, and that advantage gradually disappeared.
So what? I will replace the card at some point. I'll get a new one with a new "10% more performance at launch". I will keep having faster cards.

And you generally pay more (in %) than the performance difference. It has to be that way, because otherwise the market just wouldn't work (convergence).
Posted on Reply
#44
londiste
notb: Imagine the amount of "finewine" that has to happen until Radeon VII matches Turing on efficiency. They would have to make it run on happy thoughts
It kind of does, when undervolted. The problem is that Radeon VII is a full process node ahead. Also, there is probably a reason AMD overvolts things out-of-box.
I don't think I have ever seen an attempt to seriously undervolt a Turing GPU for comparison though.
medi01: Isn't Hitman an AMD title though?
The Glacier engine used to favor AMD a lot back in Absolution and initially in Hitman; it was one of AMD's DX12 showcases. Eventually, the results pretty much evened out in DX12. Not sure about DX11. When developing Hitman 2, they dropped the DX12 renderer and kept going with DX11.
Posted on Reply
#45
jabbadap
Heh that escalated quickly and I don't have any popcorn :cry:...

So drivers are stable now, good. How is the noise, @W1zzard, any improvements in that department? I.e. changes in the fan profile etc.
Posted on Reply
#46
las
Nothing escalated, everyone can see that 980 Ti beats Fury X with ease, hence no Fine Wine here.

Trying to prove that Fury X can perform on par with a 980 Ti reference was fun tho. Remember the 30-40% OC headroom next time.

I'm out xD
Posted on Reply
#47
trog100
it seems to me that nvidia makes better graphics cards than amd, which leaves those in the red camp having a difficult time justifying exactly why they are there..

either way all this toxicity is getting rather boring..

trog
Posted on Reply
#48
95Viper
No more changing what someone has posted to troll, insult, shame, create hate, or cause a toxic thread.
Also, none of the above, period.
No retaliatory comments, either.

Stay on topic.

Thank You
Posted on Reply
#49
notb
rvalencia: Prey (DX11, NVIDIA Bias, 2017)
Oh come on. This game was launched with Vega as a confirmation of AMD-Bethesda cooperation.
Counting Prey as favouring AMD (i.e. it could be even worse) it's:
AMD: 4
Nvidia: 14
undecided: 4

AMD: 4/18 ~= 22%
Nvidia: 14/18 ~= 78%
which is basically the market share these companies have in discrete GPU.

Do you think it should be 50:50? Or what? And why?
Posted on Reply
#50
efikkan
Flyordie: Really, all they should do is focus on optimizing for the major game engines.
No, not at all. They should start focusing on optimizing the driver in general, not do workarounds to "cheat" benchmarks.

Many have misconceptions about what optimizations really are. Games are rarely specifically optimized for targeted hardware, and likewise drivers are rarely optimized for specific games in their core. The few exceptions to this are cases to deal with major bugs or bottlenecks.

Games should not be written for specific hardware; they are written using vendor-neutral APIs. Game developers should focus on optimizing their engine for the API, and driver developers should focus on optimizing their driver for the API, because when they try to cross over, that's when things start to get messy. When driver developers "optimize" for games, they usually manipulate general driver parameters and replace some of the game's shader code, and in most cases it's not so much optimization as trying to remove stuff without you noticing the degradation in quality. Games have long development cycles and are developed and tested against API specs, so it's obviously problematic when a driver suddenly diverges from spec and manipulates the game; generally this causes more problems than it solves. If you have ever experienced a new bug or glitch in a game after a driver update, you now know why…

This game "optimization" stuff is really just about manipulating benchmarks, and have been going on since the early 2000s. If only the vendors spend this effort on actually improving their drivers instead, then we'll be far better off!
JB_Gamer: Isn't it the case - the problem for AMD - that ALL games are tested and optimized for nVidia GPUs?
Not at all. Many AAA titles are developed primarily for consoles and then ported to PC; if anything, there are many more games with a bias favoring AMD than Nvidia.

Most people don't understand what causes games to be biased. First of all, a game is not biased just because it scales better on vendor A than vendor B. Bias is when a game has severe bottlenecks or special design considerations, either intentional or "unintentional", that give one vendor a disadvantage it shouldn't have. That some games scale better on one vendor and other games scale better on another isn't a problem by itself; games are not identical, and different GPUs have various strengths and weaknesses, so we should use a wide selection to determine real-world performance. Significant bias happens when a game is designed around one specific feature and scales badly across different hardware configurations. A good example of this is games which are built for consoles but don't really scale well with much more powerful hardware. But in general, games are much less biased than most people think, and just because a benchmark doesn't confirm your presumptions doesn't mean the benchmark is biased.
medi01: AMD perf improves over time; nVidia falls behind not only AMD, but also its own newer cards.
Over the past 10+ years, every generation has improved ~5-10% within its lifecycle.
AMD is no better at driver improvements than Nvidia; this myth needs to die.
SIGSEGV: hahaha....
but I hate their (NVIDIA) approach of reducing performance of older GPUs through driver updates.
FUD which has been disproven several times. I don't believe Nvidia has ever intentionally sabotaged older GPUs.
Posted on Reply