# DOOM with Vulkan Renderer Significantly Faster on AMD GPUs



## btarunr (Jul 13, 2016)

Over the weekend, Bethesda shipped the much-awaited update to "DOOM" that lets the game take advantage of the Vulkan API. A performance investigation by ComputerBase.de comparing the game's Vulkan renderer to its default OpenGL renderer reveals that Vulkan benefits AMD GPUs far more than it does NVIDIA ones. At 2560 x 1440, an AMD Radeon R9 Fury X with Vulkan is 25 percent faster than a GeForce GTX 1070 with Vulkan; with the OpenGL renderer on both GPUs, the R9 Fury X is 15 percent slower than the GTX 1070. Vulkan increases the R9 Fury X's frame rates over OpenGL by a staggering 52 percent! Similar performance trends were noted at 1080p. Find the review in the link below.





*View at TechPowerUp Main Site*


----------



## Chaitanya (Jul 13, 2016)

It was expected.


----------



## laszlo (Jul 13, 2016)

No surprise, as it's their API, developed for the GCN arch.; the question is, will they also pay other developers to implement it?

As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...


----------



## ZoneDymo (Jul 13, 2016)

laszlo said:


> No surprise, as it's their API, developed for the GCN arch.; the question is, will they also pay other developers to implement it?
> 
> As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...



Well, what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
So why would a dev not just build the game in Vulkan to begin with? There is no negative there.


----------



## evernessince (Jul 13, 2016)

laszlo said:


> No surprise, as it's their API, developed for the GCN arch.; the question is, will they also pay other developers to implement it?
> 
> As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...



With those kinds of performance boosts I would be throwing money and engineers at developers.  Let's also not forget that multi-GPU support in Vulkan is so much better than in previous APIs.


----------



## qubit (Jul 13, 2016)

This is good competition for NVIDIA which is good for customers. We need more of this.


----------



## john_ (Jul 13, 2016)

So, GCN cards are faster in Mantle, DirectX 12 Mantle and also, Vulkan Mantle.


----------



## laszlo (Jul 13, 2016)

ZoneDymo said:


> Well, what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
> So why would a dev not just build the game in Vulkan to begin with? There is no negative there.



A dev "supported" by NV? Cutting off the branch you are sitting on?


----------



## Ubersonic (Jul 13, 2016)

laszlo said:


> question is will they pay other developers also to implement it?



I doubt they had to pay id; historically, id has always been an OpenGL developer, and Vulkan (previously called glNext) is the successor to OpenGL 4.


----------



## nienorgt (Jul 13, 2016)

I hope these poor results on Nvidia's GPUs are only because Pascal is still not optimised for Vulkan. It would be highly inappropriate for Khronos to favor AMD in a multiplatform API.


----------



## the54thvoid (Jul 13, 2016)

ZoneDymo said:


> Well, what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
> So *why would a dev not just build the game in Vulkan to begin with*? There is no negative there.



Because M$?


----------



## Dethroy (Jul 13, 2016)

I'll quote myself (thread):



Dethroy said:


> Both the Fury X and the Nano have 4,096 ALUs and a 4,096-bit bus. That architecture is literally begging for async compute. The gains are certainly impressive! I wonder what Nvidia is planning to do, since Asynchronous Compute and Asynchronous Shader Pipelines are AMD proprietary hardware IP...
> Right now, all Nvidia can do is emulate it on a software level. It'll be interesting to see if that software emulation leads to higher FPS once Nvidia supports "async compute" in Doom.
> 
> It's even more incredible how the Vulkan API handles CPU bottlenecks, though. PC Games Hardware tested an i7-5820K manually set to a lower power state @ 1.2GHz (in tandem with an overclocked Titan X @ 1500/4200). At 1,280 x 720 w/o AA/AF this setup pulled 89 FPS on OpenGL and 152 FPS (+71%) on Vulkan





laszlo said:


> As I understand from the original link, async compute didn't work on AMD cards when anti-aliasing or TSSAA was used/enabled; can someone confirm I read that correctly? My German is lacking...


_Bethesda weist darauf hin, dass Asynchronous Compute auch auf Grafikkarten von AMD nur dann funktioniert, wenn keine Kantenglättung oder zur Kantenglättung – wie in den Benchmarks von ComputerBase – TSSAA genutzt wird._

Simplified translation:
Bethesda points out that asynchronous compute only works on AMD's graphics cards *1) when no anti-aliasing is used*, or *2) when TSSAA is used* for anti-aliasing, as in ComputerBase's benchmarks.


----------



## john_ (Jul 13, 2016)

On another note, we see that 1000-series Nvidia cards behave like 900-series Nvidia cards.
Who bought a Pascal card because it now supports async? Raise your hands please. Don't be shy.

One more marketing lie from Nvidia. They were going to give Maxwell users async compute support through driver updates. Right? Instead, what they did, in my opinion, was keep that software emulation for Pascal and present it as a new feature. They also didn't use "async" as the name of that feature, probably so they don't get sued. Instead they used the term "dynamic load balancing" and let users and tech sites speculate that this is Nvidia's async implementation in Pascal. Finally Pascal was offering async. Well, even with Nvidia's perfect driver optimizations, async should be offering at least 5% more performance to Pascal cards. It doesn't seem to do anything like that.

Maxwell's biggest marketing disadvantage was the lack of async support, and they couldn't send Pascal, with a Founders Edition price tag, into the market without at least the illusion that it supports async. People would have been less willing to pay $700 for just a better Maxwell.
Not the first time Nvidia has dressed up its specs, knowing that something like this influences the potential buyer's psychology.

Just my opinion of course.


----------



## Parn (Jul 13, 2016)

If Vulkan is so biased towards AMD cards, I doubt any major developer will risk lower sales just to make games run faster on AMD by offering renderers based solely on Vulkan.


----------



## john_ (Jul 13, 2016)

Parn said:


> If Vulkan is so biased towards AMD cards, I doubt any major developer will risk lower sales just to make games run faster on AMD by offering renderers based solely on Vulkan.


It doesn't make games run slower or look worse on Nvidia cards. We are not talking about PhysX here. Nvidia users lose nothing in visuals or performance.



Dethroy said:


> It's even more incredible how the Vulkan API handles CPU bottlenecks, though. PC Games Hardware tested an i7-5820K manually set to a lower power state @ 1.2GHz (in tandem with an overclocked Titan X @ 1500/4200). At 1,280 x 720 w/o AA/AF this setup pulled 89 FPS on OpenGL and 152 FPS (+71%) on Vulkan


 In the first presentation of Mantle's advantages over DirectX 11 in AoTS, AMD was using a system with an FX 8350 clocked down to 2GHz.


----------



## ShurikN (Jul 13, 2016)

ZoneDymo said:


> Well, what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
> So why would a dev not just build the game in Vulkan to begin with? There is no negative there.


Because like everything else, Nvidia will pay devs not to use Vulkan, as AMD destroys them with those 50% gains.
It's standard NV practice.


----------



## RejZoR (Jul 13, 2016)

Lol, Vulkan isn't "biased". AMD GPUs are just more advanced when it comes to more direct GPU access (which Vulkan and DX12 allow); the fact they weren't shining before is because software wasn't taking advantage of all that yet. Till now. I mean, AMD has had partial async compute since the HD 7000 series and full support since the R9 290X. NVIDIA still doesn't have even partial support in the GTX 1080, from the looks of it. Async is when you can seamlessly blend graphics, audio and physics computation on a single GPU. Something AMD has been aiming for basically the whole time since they created GCN. They support graphics, they added audio on the R9 290X, and they've been working with physics for ages, some with Havok and some with Bullet.

R9 Fury X users don't feel that let down anymore.  In fact R9 Fury cards in general shine in DX12 and apparently also in Vulkan. While I love my GTX 980, I kinda regret not going with the R9 Fury/Fury X.

Also, for people saying "async emulation": there is no such thing; either you have the hardware implementation or you don't. You can't emulate a feature whose sole purpose is a massive performance boost through seamless interleaving of graphics and compute tasks. This is the same as emulation of pixel shaders when they became a thing with DirectX 8. Either you had them or you didn't. There were some software emulation techniques, but they were so horrendously slow it just wasn't feasible to use them in real-time rendering within games. Async is no different. And NVIDIA apparently doesn't have it. Which kinda sucks when you pay 700+ € for a brand new graphics card...
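The performance argument for hardware async can be sketched with a toy timing model (illustrative, made-up millisecond costs — not a claim about DOOM's actual workload): without async queues the compute pass waits for the graphics pass, while with them, independent compute work hides behind the graphics work.

```python
# Toy model: frame time without vs. with async compute queues.
# Numbers are hypothetical per-frame costs in milliseconds.

def frame_time_serial(gfx_ms, compute_ms):
    """Compute pass runs only after the graphics pass finishes."""
    return gfx_ms + compute_ms

def frame_time_async(gfx_ms, compute_ms):
    """Independent compute work fully overlaps the graphics pass."""
    return max(gfx_ms, compute_ms)

gfx, compute = 12.0, 4.0
serial = frame_time_serial(gfx, compute)      # 16.0 ms
overlapped = frame_time_async(gfx, compute)   # 12.0 ms
print(f"serial: {1000 / serial:.1f} FPS, async: {1000 / overlapped:.1f} FPS")
# → serial: 62.5 FPS, async: 83.3 FPS
```

Real gains are smaller than this ideal overlap, since not all compute work is independent of the frame being rendered.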



john_ said:


> It doesn't make games run slower or look worse on Nvidia cards. We are not talking about PhysX here. Nvidia users lose nothing in visuals or performance.
> 
> In the first presentation of Mantle's advantages over DirectX 11 in AoTS, AMD was using a system with an FX 8350 clocked down to 2GHz.



I guess that's how NVIDIA fanboys are comforting themselves after buying super expensive GTX 1000 series graphic card (or GTX 900) that sucks against last generation of AMD cards that weren't particularly awesome even back then. "uh oh it doesn't lose any performance". Well, you also gain none. What's the point then? The whole point of Vulkan/DX12 is to boost performance. When devs will cram more effects into games assuming all these gains, your performance will actually tank where AMD's will remain unchanged.  How will you defend NVIDIA then?


----------



## chaosmassive (Jul 13, 2016)

Damage Control, incoming...!!


----------



## fynxer (Jul 13, 2016)

Parn said:


> If Vulkan is so biased towards AMD cards, I doubt any major developer will risk lower sales just to make games run faster on AMD by offering renderers based solely on Vulkan.


----------



## john_ (Jul 13, 2016)

RejZoR said:


> I guess that's how NVIDIA fanboys are comforting themselves after buying super expensive GTX 1000 series graphic card (or GTX 900) that sucks against last generation of AMD cards that weren't particularly awesome even back then. "uh oh it doesn't lose any performance". Well, you also gain none. What's the point then? The whole point of Vulkan/DX12 is to boost performance. When devs will cram more effects into games assuming all these gains, your performance will actually tank where AMD's will remain unchanged.  How will you defend NVIDIA then?




*STOP THE PRESS.*

*FIRST PAGE MATERIAL.*

*john_ IS DEFENDING NVIDIA.*

Are you serious? Read again what I wrote. Damn...


----------



## RejZoR (Jul 13, 2016)

Also read again what I wrote. I haven't even directed it at you... XD


----------



## bug (Jul 13, 2016)

nienorgt said:


> I hope these poor results on Nvidia's GPUs are only because Pascal is still not optimised for Vulkan. It would be highly inappropriate for Khronos to favor AMD in a multiplatform API.


Nah, AMD's implementation of OpenGL has been subpar for years. That's what we see here: with the dirty work out of the drivers and in the hands of capable programmers (id), the cards finally work as they should.


----------



## john_ (Jul 13, 2016)

RejZoR said:


> Also read again what I wrote. I haven't even directed it at you... XD



You didn't? And what exactly is this?



RejZoR said:


> How will *you* defend NVIDIA then?



When you quote someone and are just using his post as an opportunity to make a general comment, don't ask questions that appear to be aimed at him.


----------



## Prima.Vera (Jul 13, 2016)

I'm an nGreedia user, but I love this kind of news. Keep it up, AMD. If more games use Vulkan, they will bitch-smack Nvidia's prices in the face.


----------



## deemon (Jul 13, 2016)

ZoneDymo said:


> Well, what I see is that Nvidia is getting the same results and AMD just does better vs OpenGL.
> So why would a dev not just build the game in Vulkan to begin with? There is no negative there.



They are, now. It's just that Doom didn't get developed overnight, and Vulkan only just came out. New games from this point forward will probably be developed in either Vulkan or DX12 from the ground up, but games that have already been in development for years didn't even have the API to start with, so those are done in DX11 or OpenGL.


----------



## Dethroy (Jul 13, 2016)

Aside from the obvious gains on AMD's side, one can observe something else as well...
According to this *test* done by PC Games Hardware (updated for the 4th time now), a GTX 980 Ti pulls ahead of a GTX 1070 by ~20% (averaged over the 4 resolutions tested) thanks to Vulkan.

Vulkan really does utilize architectural advantages far better than the OpenGL implementation does. I wonder what kind of performance gains one would see with Vega...

It will be interesting to see if Pascal's faster pre-emption and its dynamic load balancing (which is Nvidia's current answer to async compute) achieve similar results once id - with the help of Nvidia - is done implementing it.


----------



## john_ (Jul 13, 2016)

cryohellinc said:


> My bet goes on AMD giving some cash to Vulkan developers so that they can boost their GPU's performance first.


Maybe AMD is also using its huge bank account to help Google with its financial problems. That's why Google is adopting Vulkan as the main low-level API for Android.

Developers know that the future is low-level APIs. And they know that Vulkan could be crucial in the future, giving them the opportunity to make a game not just for consoles and desktops, but for every device out there, including smartphones and tablets. Also, Vulkan is the only option right now if you want to offer a game highly optimized for Windows 7 and Linux. Going DirectX 12 means that only users on Windows 10 will see any benefits, and going DirectX 11 only is like putting your head in the sand hoping that everyone else will do the same.


----------



## Landcross (Jul 13, 2016)

nienorgt said:


> I hope these poor results on Nvidia's GPUs are only because Pascal is still not optimised for Vulkan. It would be highly inappropriate for Khronos to favor AMD in a multiplatform API.



Khronos doesn't favor AMD (or any other manufacturer); it's just that AMD's architecture is better suited for Vulkan, the same way that Nvidia's architecture is often better suited for other APIs.


----------



## bug (Jul 13, 2016)

Dethroy said:


> Vulkan really does utilize architectural advantages way better than the OpenGL implementation does.



It doesn't, at least not inherently. What it does is it exposes the video card behind the drivers and lets the programmers use it to the fullest. id is top-notch, but other companies may very well come up with half-assed implementations that will make Vulkan slower than OpenGL. Vulkan is still an unknown quantity, imho, but this first step has been executed perfectly.


----------



## john_ (Jul 13, 2016)

cryohellinc said:


> Ah, AMD fanboy. About EXACTLY scrubs like you i was talking about earlier.
> 
> And compare dicks as much as you want, I wont buy new mobo or processor or ram to get 1% increased in game performance boost.
> 
> ...


As YOU predicted


cryohellinc said:


> and make a million and one excuse to argue that



Have a nice day.


----------



## laszlo (Jul 13, 2016)

chaosmassive said:


> Damage Control, incoming...!!



......better quote @Tatty_One before he comes: "children are misbehaving in the nursery yet again, reply bans for this thread will be issued if it continues, followed by free holiday passes"


----------



## Dethroy (Jul 13, 2016)

bug said:


> It doesn't, at least not inherently. What it does is it exposes the video card behind the drivers and lets the programmers use it to the fullest. id is top-notch, but other companies may very well come up with half-assed implementations that will make Vulkan slower than OpenGL. Vulkan is still an unknown quantity, imho, but this first step has been executed perfectly.


Funny you picked up on that, as I wanted to write _id's Vulkan implementation_ at first. But I finally decided to word it differently because I wanted to emphasize that id was able to do so because of Vulkan, and that it wouldn't have been possible with OpenGL. I know that id's programmers are to thank, but so is Vulkan 

*Edit:* My takeaway... I skip this round of GPUs and await the next generation.


----------



## R-T-B (Jul 13, 2016)

john_ said:


> So, GCN cards are faster in Mantle, DirectX 12 Mantle and also, Vulkan Mantle.



Vulkan is the only one that is proven to use any Mantle code.

If something went on behind the scenes, it can only be said as conjecture.  Let's not just spout conjecture as fact.  It makes me pissy...  and when I'm pissy, it makes me do stupid things, like break my computer.

You wouldn't cause an innocent frog to break his computer, right?


----------



## cryohellinc (Jul 13, 2016)

john_ said:


> As YOU predicted
> 
> 
> Have a nice day.


Thank you very much. Likewise and all the best!


----------



## bug (Jul 13, 2016)

Dethroy said:


> Funny you picked up on that, as I wanted to write _id's Vulkan implementation_ at first. But I finally decided to word it differently because I wanted to emphasize that id was able to do so because of Vulkan, and that it wouldn't have been possible with OpenGL. I know that id's programmers are to thank, but so is Vulkan
> 
> *Edit:* My takeaway... I skip this round of GPUs and await the next generation.


Eh, if you look at Nvidia's results, a proper driver can already use the card to its fullest.
However, yes, Vulkan exists for a reason, it does put more flexibility into the hands of the developer.
Exciting times for sure, after being stuck with 28nm cards since forever.

Edit: I may not skip this generation; my 660 Ti is starting to show its age. And I say "may" because the 480 doesn't cut it for me. The 1060, I think, will provide enough HP, but I won't buy it at FE+ prices.


----------



## LAN_deRf_HA (Jul 13, 2016)

I actually got a decent improvement on my 780 Ti with Vulkan on certain levels, but I blame that more on all the weird performance degradation issues Doom has; I think they're related to cache. It also got rid of those late-game crashes, which I'm pretty sure are somehow cache-related as well.


----------



## PP Mguire (Jul 13, 2016)

R-T-B said:


> Vulkan is the only one that is proven to use any Mantle code.
> 
> If something went on behind the scenes, it can only be said as conjecture.  Let's not just spout conjecture as fact.  It makes me pissy...  and when I'm pissy, it makes me do stupid things, like break my computer.
> 
> You wouldn't cause an innocent frog to break his computer, right?


The bottom line being they're all low level.


----------



## Litvan (Jul 13, 2016)

Am I the only one wondering why there's not a single 1080 card on that list and only a 1070?


----------



## RejZoR (Jul 13, 2016)

john_ said:


> You didn't? And what exactly is this?
> 
> 
> 
> When you quote someone and you are just using his post as an opportunity to make a general comment, don't make questions that appear to be aimed at him.



Ever heard of the generic use of "you" for those situations (I don't even know what the official term for it is)? Ever thought I was addressing NVIDIA fanboys with "you" and not you directly?


----------



## R-T-B (Jul 13, 2016)

PP Mguire said:


> The bottom line being they're all low level.



Yes, but low level simply means you have to target the hardware at a low level.  It does not specify it will perform better on AMD (or NVIDIA), unless the coder specifically targeted AMD.  That's most likely what is happening.


----------



## Xzibit (Jul 13, 2016)

Quoted and fixed.



Bethesda said:

> *Does DOOM support asynchronous compute when running on the Vulkan API?*
> 
> Asynchronous compute is a feature that provides additional performance gains on top of the baseline id Tech 6 Vulkan feature set.
> 
> Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run.  We are working with NVIDIA to enable asynchronous compute _preemption_ in Vulkan on NVIDIA GPUs. We hope to have an update soon.


----------



## ShurikN (Jul 13, 2016)

Dethroy said:


> This might give you an idea. Although results differ widely...
> View attachment 76847


Those are not the same results as the original pic. To me that looks like OGL rather than Vulkan


----------



## PP Mguire (Jul 13, 2016)

R-T-B said:


> Yes, but low level simply means you have to target the hardware at a low level.  It does not specify it will perform better on AMD (or NVIDIA), unless the coder specifically targeted AMD.  That's most likely what is happening.


Wasn't exactly the context of his post though. The point was, GCN was made for low level from the get-go.


----------



## the54thvoid (Jul 13, 2016)

Xzibit said:


> Quoted and fixed.



That's about the size of it. This is the flip happening now. Nvidia need to code more (for Pascal) while RTG can rely on Async hardware.

Unlike DX11 where Nvidia ruled and RTG needed to optimise drivers heavily (why after time they improve performance because they need the optimisations), RTG have a clear edge.

Is the async hardware really proprietary, though? If it is, Nvidia can't do anything about DX12. Also, if Mantle, Vulkan and DX12 are based heavily on proprietary tech, is that 'allowed' under FRAND patents?


----------



## ShurikN (Jul 13, 2016)

Dethroy said:


> It is Vulkan. But tests were done by PC Games Hardware, not Computer Base.


How is the Fury X then barely faster than a 980, yet in the other bench it's easily beating a 1070? In the article that you linked, the last update was performed 15 days ago.


----------



## dj-electric (Jul 13, 2016)

Since when did TPU turn into WCCF? What the hell is going on in this thread?!


----------



## R-T-B (Jul 13, 2016)

PP Mguire said:


> Wasn't exactly the context of his post though. The point was, GCN was made for low level from the get-go.



You can't "make something" for low level, at least not hardware wise.  It goes against the definition of low level.  You optimize low level to a platform, you target your hardware.  That is what low level means.


----------



## laszlo (Jul 13, 2016)

http://asawicki.info/news_1601_lower-level_graphics_api_-_what_does_it_mean.html


----------



## Dethroy (Jul 13, 2016)

ShurikN said:


> Those are not the same results as the original pic. To me that looks like OGL rather than Vulkan


&


ShurikN said:


> How is the Fury X then barely faster than a 980, yet in the other bench it's easily beating a 1070? In the article that you linked, the last update was performed 15 days ago.



You are right. I messed up! 
I was refreshing the site for further updates and clicked on a link which I thought was an update because of the word "final". My apologies!



the54thvoid said:


> That's about the size of it. This is the flip happening now. Nvidia need to code more (for Pascal) while RTG can rely on Async hardware.
> 
> Unlike DX11 where Nvidia ruled and RTG needed to optimise drivers heavily (why after time they improve performance because they need the optimisations), RTG have a clear edge.


Yup. I wonder if dynamic load balancing coupled with pre-emption will boast a similar performance gain as async compute does (unlikely imho).


----------



## R-T-B (Jul 13, 2016)

laszlo said:


> http://asawicki.info/news_1601_lower-level_graphics_api_-_what_does_it_mean.html



Supports my claim.  You target the hardware; you don't make a hardware product to be "low level from the get-go".



> So lower-level API means just that driver could be smaller and simpler, while upper layers will have more responsibility of manually managing stuff instead of automatic facilities provided by the driver (for example, there is no more DISCARD or NOOVERWRITE flag when mapping a resource in DirectX 12). It also means API is again closer to the actual hardware. Thanks to all that, the usage of GPU can be optimized better by knowing all higher-level details about specific application on the engine level.



You need to understand the hardware you are writing code to run on.  The driver no longer babysits you.  AMD is doing better now because most low-level efforts have been AMD-sponsored.


----------



## Tatty_One (Jul 13, 2016)

laszlo said:


> ......better quote @Tatty_One before he comes: "children are misbehaving in the nursery yet again, reply bans for this thread will be issued if it continues, followed by free holiday passes"


Happily, free holiday passes have been issued, I am happy to issue more if needed, thank you.


----------



## PP Mguire (Jul 13, 2016)

R-T-B said:


> You can't "make something" for low level, at least not hardware wise.  It goes against the definition of low level.  You optimize low level to a platform, you target your hardware.  That is what low level means.


You can gear the architecture for it. In this case, the boost in async queues in GCN gives AMD the clear advantage when utilized. You optimize your API usage during development toward the architectural strengths in play. Somebody else said earlier how AMD has been preparing for this movement for quite a while. For once the industry is going in a direction that helps AMD and its decisions rather than hurting it.


----------



## Aquinus (Jul 13, 2016)

Well, I played Doom for 5 minutes before going to work after enabling Vulkan, and everything seems to run nicely, just as it does in OpenGL 4.5. I might go so far as to say it even looks a little nicer, but that could just be the placebo effect. When I get home, I'm going to see if Eyefinity is playable now, because if the performance improvement is that big, it might be realistic for me to do 5760x1080, where before it kind of struggled.

It makes you wonder if this is a preview of things to come with DX12. I guess time will tell. Until then, I'm going to enjoy this.


----------



## laszlo (Jul 13, 2016)

PP Mguire said:


> You can gear the architecture for it. In this case, the boost in async queues in GCN gives AMD the clear advantage when utilized. You optimize your API usage during development toward the architectural strengths in play. Somebody else said earlier how AMD has been preparing for this movement for quite a while. For once the industry is going in a direction that helps AMD and its decisions rather than hurting it.



Reading your post brings to mind a thread from 2 months ago.... AMD - The Master Plan..... (https://www.techpowerup.com/forums/threads/amd-the-master-plan.222334/) and slowly the gossip turns into reality....

Wondering if older hardware (GCN 1st, 2nd gen) shows this improvement; if yes, it means those generations are still good to use for a while.


----------



## R-T-B (Jul 13, 2016)

PP Mguire said:


> You can gear the architecture for it. In this case, the boost in async queues in GCN gives AMD the clear advantage when utilized. You optimize your API usage during development toward the architectural strengths in play.



If someone wrote a low-level api game that utilized tons of nvidia friendly tessellation, nvidia would suddenly appear to be awesomely optimized for low level as well.

The fact is you don't gear an architecture to be low level.  You gear your game for an architecture.


----------



## ensabrenoir (Jul 13, 2016)

....why no 1080 test?


----------



## Captain_Tom (Jul 13, 2016)

I'm sorry but Maxwell/Pascal will be the greatest prank Nvidia has ever pulled.

This fall the old Fury X will come close to a 1080 in most games, and a $200 budget card from AMD will nearly match Nvidia's 1070.


----------



## Parn (Jul 13, 2016)

R-T-B said:


> If someone wrote a low-level api game that utilized tons of nvidia friendly tessellation, nvidia would suddenly appear to be awesomely optimized for low level as well.
> 
> The fact is you don't gear an architecture to be low level.  You gear your game for an architecture.



Exactly.


----------



## Captain_Tom (Jul 13, 2016)

R-T-B said:


> If someone wrote a low-level api game that utilized tons of nvidia friendly tessellation, nvidia would suddenly appear to be awesomely optimized for low level as well.
> 
> The fact is you don't gear an architecture to be low level.  You gear your game for an architecture.



That's just flat-out not true.  There is a difference between optimizing a game to be good at things like tessellation/texturing/lighting, and straight up not putting hardware like RAM/ACEs/SPs in a card.  The fact is AMD has always given their cards more compute than needed for gaming, so that they could have one unified arch and could beat out the competition once compute gaming was ready - and now it is, buddy!

Nvidia has had plenty of time to prepare their cards for the future, but just as they gave Fermi cards half as much VRAM as they needed, they then stripped Kepler/Maxwell/Pascal of any useful compute hardware.  This was done so that 1) they can operate more efficiently in today's inefficient games, and 2) people will be forced to upgrade to Volta.


----------



## FordGT90Concept (Jul 13, 2016)

Not sure why there is so much talk here about async shaders.  It's not clear that Doom even uses them.  This is mostly about Vulkan being faster than OpenGL (which we knew) and, in AMD's case, Vulkan is a lot faster than OpenGL because AMD never had the best support of OpenGL in the first place.

Remember that most developers still used DirectX 11 on Windows while Mac and Linux releases used OpenGL.  Why?  DirectX 11 was faster.


----------



## Dethroy (Jul 13, 2016)

FordGT90Concept said:


> Not sure why there is so much talk here about async shaders.  It's not clear that Doom even uses them.


https://community.bethesda.net/thread/54585?start=0&tstart=0



> *Does DOOM support asynchronous compute when running on the Vulkan API?*
> Asynchronous compute is a feature that provides additional performance gains on top of the baseline id Tech 6 Vulkan feature set.
> Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run.  We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.





FordGT90Concept said:


> This is mostly about Vulkan being faster than OpenGL (which we knew) and, in AMD's case, Vulkan is a lot faster than OpenGL because AMD never had the best support of OpenGL in the first place.


Some of the gains are surely because AMD's OpenGL support wasn't as good as Nvidia's. But how come a Fury X beats a GTX 1070 all of a sudden?


----------



## HD64G (Jul 13, 2016)

RejZoR said:


> Lol, Vulkan isn't "biased". AMD GPUs are just more advanced when it comes to more direct GPU access (which Vulkan and DX12 allow); the fact they weren't shining before is because software wasn't taking advantage of all that yet. Till now. I mean, AMD has had partial async compute since the HD 7000 series and full support since the R9 290X. NVIDIA still doesn't have even partial support in the GTX 1080, from the looks of it. Async is when you can seamlessly blend graphics, audio and physics computation on a single GPU. Something AMD has been aiming for basically the whole time since they created GCN. They support graphics, they added audio on the R9 290X, and they've been working with physics for ages, some with Havok and some with Bullet.
> 
> R9 Fury X users don't feel that let down anymore.  In fact R9 Fury cards in general shine in DX12 and apparently also in Vulkan. While I love my GTX 980, I kinda regret not going with the R9 Fury/Fury X.
> 
> ...



Couldn't have said it better mate!


----------



## FordGT90Concept (Jul 13, 2016)

Dethroy said:


> Some of the gains are surely because AMD's OpenGL support wasn't as good as Nvidia's. But how come a Fury X beats a GTX 1070 all of a sudden?


Most of it undoubtedly comes from Vulkan.  Judging by Ashes of the Singularity, 5-10% is coming from async compute.
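That rough split can be sanity-checked with a little arithmetic. A minimal sketch, assuming the API-overhead gain and the async-compute gain combine multiplicatively (the 52% total and the 5-10% async figures come from this thread; everything else here is derived, not measured):

```python
# Toy decomposition of the reported 52% Fury X uplift under Vulkan.
# Assumption: total speedup = (API/driver speedup) x (async-compute speedup),
# so the API-side contribution is whatever remains after dividing out async.

def residual_gain(total_speedup, async_speedup):
    """Return the API/driver speedup implied by the total and async figures."""
    return total_speedup / async_speedup

total = 1.52                      # Vulkan vs. OpenGL on the R9 Fury X
for async_gain in (1.05, 1.10):   # the 5-10% attributed to async compute
    api = residual_gain(total, async_gain)
    print(f"async +{async_gain - 1:.0%} implies an API/driver gain of +{api - 1:.1%}")
```

Either way, under that assumption most of the 52% would come from the API change rather than async compute.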


----------



## Dethroy (Jul 13, 2016)

RX 480 vs GTX 1070 and 970


----------



## yogurt_21 (Jul 13, 2016)

so even my R9 290 should see significant gains using Vulkan? I see the 390 does. The question then becomes how many games will start to utilize it? The performance is nice but changing drivers for each game doesn't sound fun at all.


----------



## Ferrum Master (Jul 13, 2016)

I guess I hear champagne pops coming from the red camp


----------



## GhostRyder (Jul 13, 2016)

Well I tried it out, and it definitely helps, that's for sure.  Kinda surprised actually...

Though I didn't have time to try it in full; I only saw the numbers for a couple of minutes.  I will have to give it a more in-depth try tonight, as I am still finishing the game.


----------



## Slizzo (Jul 13, 2016)

Here's my testing results. My system is in my specs to the left.

In the limited, and very quick peek I took, in one specific area I was getting about 60FPS with everything maxed out at 1440P, except motion blur turned to low (don't like motion blur). When I switched to Vulkan, I saw 100FPS at the lowest in the same area.

I'd say it improved for me quite a bit.


----------



## Dethroy (Jul 13, 2016)

Slizzo said:


> Here's my testing results. My system is in my specs to the left.
> 
> In the limited, and very quick peek I took, in one specific area I was getting about 60FPS with everything maxed out at 1440P, except motion blur turned to low (don't like motion blur). When I switched to Vulkan, I saw 100FPS at the lowest in the same area.
> 
> I'd say it improved for me quite a bit.


Those gains are probably due to Vulkan handling CPU bottlenecks much better.


----------



## FordGT90Concept (Jul 13, 2016)

The CPU usage in that video clearly shows it is well multithreaded--all of it.  That is impressive.  The future of gaming has finally arrived.
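A toy model (illustrative numbers only, not real engine or driver measurements) of why spreading command submission across cores helps a CPU-bound game, which is what Vulkan's multithreaded command buffers permit and OpenGL's single-threaded driver model does not:

```python
# Frame time when CPU-side submission work is serialized on one thread
# versus divided across N recording threads. The frame can finish no
# sooner than the GPU does, so frame time is the slower of the two sides.

def frame_time_ms(draw_calls, cost_per_call_us, threads, gpu_time_ms):
    cpu_ms = draw_calls * cost_per_call_us / 1000.0 / threads
    return max(cpu_ms, gpu_time_ms)

# Hypothetical scene: 5,000 draw calls at ~4 us of driver overhead each,
# with 8 ms of actual GPU work per frame.
single = frame_time_ms(5000, 4.0, threads=1, gpu_time_ms=8.0)  # CPU-bound
multi  = frame_time_ms(5000, 4.0, threads=4, gpu_time_ms=8.0)  # GPU-bound
print(f"1 thread: {single:.0f} ms/frame, 4 threads: {multi:.0f} ms/frame")
```

In the model, four recording threads take the frame from CPU-limited to GPU-limited, which is the behavior the CPU graphs in that video suggest.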


----------



## ShurikN (Jul 13, 2016)

Dethroy said:


> RX 480 vs GTX 1070 and 970


The same guy has a video comparing RX 480 OGL/Vulkan, and by the looks of it, Vulkan is a lot less stressful on the CPU, which is great for budget gamers.
If only more devs would pick Vulkan (or true DX12)


----------



## ShurikN (Jul 13, 2016)

https://www.reddit.com/r/Amd/comments/4sd75l/doom_123_vulcan_performance_increase_on_r9_280x/

Also this 
ofc, it's not that amount through the whole game, but wow.

Wish I had Doom so I can test my crappy gear as well.


----------



## ZoneDymo (Jul 13, 2016)

ShurikN said:


> The same guy has a video comparing RX 480 OGL/Vulkan, and by the looks of it, Vulkan is a lot less stressful on the CPU, which is great for budget gamers.
> If only more devs would pick Vulkan (or true DX12)



Well it's good for everyone, because now a lot of CPU power is available for future games


----------



## Frick (Jul 13, 2016)

Has anyone tried it on HD7XXX cards?


----------



## INSTG8R (Jul 13, 2016)

yogurt_21 said:


> so even my R9 290 should see significant gains using Vulkan? I see the 390 does. The question then becomes how many games will start to utilize it? The performance is nice but changing drivers for each game doesn't sound fun at all.



Changing drivers? Vulkan has been in the drivers for months.


----------



## xkm1948 (Jul 13, 2016)

I just wish more VR applications would employ the new Vulkan API. VR is the future.  

I am also looking into improvements using AS for scientific computing. It would be awesome if OpenCL could benefit from this as well.


----------



## R-T-B (Jul 13, 2016)

Captain_Tom said:


> That's just flat-out not true.  There is a difference between optimizing a game to be good at things like tesselation/texturing/lighting, and just straight up not putting hardware like RAM/ACE's/SP's in a card.



From the perspective of the API, no there is not.  AMD is severely constrained in tessellation.  You could counter your argument by saying NVIDIA hardware is ready for future games featuring massive tessellation, and AMD just lacks the hardware to compete, etc.

That AMD has better compute (they do) does not make them better in low-level APIs.  It makes them better at games that utilize compute heavily.  The API?  The API just facilitates access to the hardware.  It has no brand loyalties.


----------



## ur1manphantom (Jul 13, 2016)

Salty nvidia users


----------



## the54thvoid (Jul 13, 2016)

Ferrum Master said:


> I guess I hear champagne pops coming from the red camp



Because their top-range cards beat an overpriced, smaller-node-process midrange card? This is Nvidia's fault for pricing their cards where they do.

If the GTX 1080 was priced to replace the 980 it architecturally replaces, it wouldn't be such a surprise that a card with Fiji's raw power does so well.  The Fury X has a whole shedload of processing power, and finally we're seeing it being used.  On paper it's more than a match for what Nvidia has at its top end, for now.  The problem still remains: where is gaming going?  Is it DX12, is it Vulkan?  How long is DX11 staying around?

Undoubtedly the improvements shown in this Vulkan game show AMD to be fantastic.  The GTX 1080 does actually put in a very good show and that is the 980 replacement.  Also recall that Fury X is still (where available) a >£500 card.

What people should still realise (and so few do, due to their differing views) is that nothing has changed on the generic landscape yet.  If we all knew that Vulkan was the way forward and it was going to be used in everything, shit, we'd all go and buy AMD tomorrow.  But we don't know what's happening on the game-dev front.  I could go out tomorrow and try to buy a Fury X, then find out some of the new games have gone with Nvidia-sponsored DX12 and lose out to an optimised Maxwell/Pascal.
Ditto with staying Nvidia.  It's shit that instead of one playing field (DX11), there are now possibly 3 (or more).  This is not the best environment to purchase in.  We need some stability from devs to decide which is the likely way forward




ur1manphantom said:


> Salty nvidia users



Well that's okay because salt is not _bitter_.  Unlike others.


----------



## cdawall (Jul 13, 2016)

Well, this thread turned into a shit storm. Why are people upset that the API improved performance on AMD cards? Everyone knew this was coming ever since Mantle/DX12/Vulkan were known to exist. If you purchased NV and planned to have the card forever, plan on mid-range GCN products performing better.


----------



## ShurikN (Jul 13, 2016)

cdawall said:


> Well, this thread turned into a shit storm. Why are people upset that the API improved performance on AMD cards? Everyone knew this was coming ever since Mantle/DX12/Vulkan were known to exist. If you purchased NV and planned to have the card forever, plan on mid-range GCN products performing better.


If you pay close attention, it's only NV fanboys that are upset. But shhhh, don't tell them, they might get agitated


----------



## laszlo (Jul 13, 2016)

Frick said:


> Has anyone tried it on HD7XXX cards?



I asked as well... thought someone here would try it and let us know...

I found the answer, however, in the Steam forum:

"I'm using an older HD7970 and it's running like a dream so far with Vulkan at 1080p with a mixture of high to ultra settings (and medium shadows, but I haven't tweaked the graphics settings much from default to be honest). Well impressed that I'm able to run games like this at 60-80 fps consistently on a 5-year-old card.  "


Seems it's working like a charm, and older GCN cards are still good stuff (to a certain point). Too bad my actual card isn't GCN... waiting for a custom 480 from the AIBs...


----------



## Slizzo (Jul 13, 2016)

Dethroy said:


> Those gains are probably due to Vulkan handling CPU bottlenecks much better.



No doubt. People don't really understand that DX12 and Vulkan go a long way towards relieving the bottlenecks present in modest systems, far more than they would on a brand-new, top-of-the-line, very powerful system. There isn't much of a bottleneck in that case, and these new APIs won't return much performance on an already streamlined system.


----------



## FordGT90Concept (Jul 13, 2016)

The only problem with HD7XXX cards is that they have two ACEs instead of eight.  Not sure how much of a performance impact that has, but HD7XXX will no doubt benefit from Vulkan because it is still GCN.


----------



## efikkan (Jul 13, 2016)

A benchmark with temporal antialiasing? Who cares? Run proper AA and see how things actually scale.



FordGT90Concept said:


> Not sure why there is so much talk here about async shaders.  It's not clear that Doom even uses them.


Async shaders aren't implemented in Vulkan yet, so clearly they're not used.
But as usual, clueless people are the quickest to make assumptions.



FordGT90Concept said:


> This is mostly about Vulkan being faster than OpenGL (which we knew) and, in AMD's case, Vulkan is a lot faster than OpenGL because AMD never had the best support of OpenGL in the first place.


To this date AMD has never bothered to implement proper OpenGL support, so clearly they can get higher "gains" from Vulkan.


----------



## TheoneandonlyMrK (Jul 13, 2016)

efikkan said:


> A benchmark with temporal antialiasing? Who cares? Run proper AA and see how things actually scale.
> 
> 
> Async shaders aren't implemented in Vulkan yet, so clearly they're not used.
> ...


You sure about that , I copied this from Bethesda doom faq.


*Does DOOM support asynchronous compute when running on the Vulkan API?*

Asynchronous compute is a feature that provides additional performance gains on top of the baseline id Tech 6 Vulkan feature set.

Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run.  We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.

Soooo, yeah, but not if you're running the wrong setup for it.

It's also clear that if they weren't paying out enough love to OpenGL before, they are now.

And I personally think Vulkan has bite and backing, i.e. Apple, Google, PC, and I'd expect its implementations to work well on consoles.

Have to admit I've chatted some poop over the years re AMD on TPU, but it's nice to see some of it is as I said it would be.

Damn this pone


----------



## RejZoR (Jul 13, 2016)

We aren't even comparing Radeons to Radeons here, OGL vs Vulkan. We are comparing NVIDIA to AMD, and AMD takes a huge edge here, even with the R9 Fury, which as you know wasn't a real competitor against the GTX 980 Ti. But now it goes against the latest GTX 1070, which is around GTX 980 Ti level. Not bad, I'd say. And per your words, not even using async...


----------



## FordGT90Concept (Jul 13, 2016)

We need more titles of the DX12/Vulkan flavor before we draw any broad conclusions.  That said, I don't think I've ever seen benchmarks of The Talos Principle running Vulkan, and it was the first Vulkan title.



theoneandonlymrk said:


> Damn this pone


----------



## Dethroy (Jul 13, 2016)

FordGT90Concept said:


> We need more titles of the DX12/Vulkan flavor before we draw any broad conclusions.


And I'd also like to see if Pascal's dynamic load balancing and pre-emption translate into real-world performance gains, and whether those gains are anywhere close to what async compute and shaders do for GCN.

Having said that, I really like the direction we are heading in (eliminating bottlenecks and general performance gains thanks to better utilization of hardware architectures/features), and everyone, AMD and Nvidia fanboys alike, should be happy about it. Certainly, interesting and exciting times lie ahead of us!


----------



## $ReaPeR$ (Jul 13, 2016)

holy crap those gains!!!! finally the future is here.


----------



## the54thvoid (Jul 13, 2016)

Hmmm.

The weird thing is there are a host of YouTube vids with 1080s at 1440p, nightmare gfx settings, on Vulkan, running at around 150fps.  

We'll need to get the 1070/1080 owners thread to run the benchmark.  See what the overpriced mid-tier 980 replacement does.


----------



## Slizzo (Jul 13, 2016)

the54thvoid said:


> Hmmm.
> 
> The weird thing is there are a host of YouTube vids with 1080s at 1440p, nightmare gfx settings, on Vulkan, running at around 150fps.
> 
> We'll need to get the 1070/1080 owners thread to run the benchmark.  See what the overpriced mid-tier 980 replacement does.



In certain areas I definitely get around that framerate.


----------



## Dethroy (Jul 13, 2016)

nem.. said:


>


You are late to the party 

Interesting read &
Another interesting video...


----------



## nem.. (Jul 13, 2016)

Amd -> <- Nvidia


----------



## PP Mguire (Jul 13, 2016)

R-T-B said:


> If someone wrote a low-level api game that utilized tons of nvidia friendly tessellation, nvidia would suddenly appear to be awesomely optimized for low level as well.
> 
> The fact is you don't gear an architecture to be low level.  You gear your game for an architecture.


Still not really the context of the original post and quote, but who cares, really? I have shitty old Maxwell and it still plays Doom damn well at 4K. More FPS for cheaper cards is a win/win in my books. I would rather see this on other games that slam GPUs, like The Witcher 3, or the devs for ARK properly implementing Vulkan or DX12 so performance isn't so shit. A guy can dream though


----------



## Aquinus (Jul 13, 2016)

So, TSSAA (8x) plus Ultra and 16x AF: I'm seeing no lower than 90FPS at 1080p, with the average being over 100 but under 115. The GPU is constantly pegged and my 390 is running a lot hotter than it normally does at full load; normally I'll see ~70°C under load, but I'm seeing about 80°C with this. I would say Vulkan is doing a pretty good job at tapping those unused resources. My only complaint is that TSSAA, even at 8x, is kind of meh. Either way, I'm impressed with what I'm seeing. I think I'm going to see how Eyefinity runs. I'll report back.

Edit: I'm noticing some flickering at far distances. It could be the TSSAA, I guess.
Edit 2: Flickering went away when I turned off AA. Funny thing is that the frame rate didn't change much after I disabled it.
Edit 3: It appears that TSSAA (8x) is the only mode that causes flickering.
Edit 4: The same scene does about 39-60FPS at 5760x1080, which is far better than I was seeing before but isn't what I would consider acceptable. This is with the same settings as at 1080p, Ultra + 16x AF + 8x TSSAA, and with no overclock on my GPU at the stock 1040/1500(6000).


----------



## Jism (Jul 14, 2016)

There's a lot of 'compute' potential in AMD GPUs in general; this is why they are favoured for, for example, Bitcoin mining. They provide a much higher hashrate than Nvidia cards. What you are seeing here is the full potential of AMD GPUs being exploited, so if your AMD GPU is running 10 degrees hotter than usual, it means you are fully utilizing it.

Vulkan should be adopted in ANY game, since it offers so much more than standard DX11/12 and/or OpenGL. It means we get higher FPS even from cards two generations back, which share the GCN feature set.

AMD wasn't stupid when it developed the async stuff, and it wasn't stupid in betting big on a future with multiple cores in both the CPU and GPU camps. By pushing Mantle for what it was, it opened up a new feature set. Remember that almost every console generation, now and in the future, will probably be driven by AMD GPUs. The Vulkan thing is something we will see more often.


----------



## R-T-B (Jul 14, 2016)

PP Mguire said:


> Still not really the context of the original post and quote, but who cares really?



I don't honestly.  I just have a better understanding of what "low level" means being a programmer, and am trying to share my knowledge.  Async != low level.  I will leave it at that for now.


----------



## Jism (Jul 14, 2016)

Low level means no abstraction layer, such as DX, sitting in between. This offers extra performance and spares CPU cycles, and the gains are mostly seen on lower-end systems.
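A sketch of that difference in miniature, using made-up cycle counts: a high-level driver re-validates state on every draw call, while a Vulkan-style API front-loads validation into pipeline-object creation, so the per-draw cost shrinks:

```python
# Made-up cycle costs, purely to show the shape of the saving.

def cpu_cycles(draws, validate_cost, submit_cost, validate_every_draw):
    if validate_every_draw:
        # High-level API: the driver re-checks state on each draw.
        return draws * (validate_cost + submit_cost)
    # Low-level API: state is validated once, when the pipeline is built.
    return validate_cost + draws * submit_cost

per_draw = cpu_cycles(10_000, validate_cost=900, submit_cost=100,
                      validate_every_draw=True)
up_front = cpu_cycles(10_000, validate_cost=900, submit_cost=100,
                      validate_every_draw=False)
print(f"per-draw validation: {per_draw:,} cycles, up-front: {up_front:,} cycles")
```

The weaker the CPU, the more those saved cycles matter, which is the lower-end-system effect described above.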

If anyone remembers the Amiga 1200: that thing had less than 2MB of RAM, and not even a GPU but an AGA graphics chip capable of up to 256 colors, lol. But still, programmers managed to squeeze graphics and tech demos out of that piece of antique hardware like never before, because they understood coding to the metal. Same with the PS2, which is RISC-based: some games really shined and were way ahead of the others, with only 4MB of video RAM and 32MB of total system RAM.

Even NASA builds hardware based on 8086 processors and sends it into space, because they understand the logic and power of the chip. The PS3 contains a completely different CPU platform, but still managed to produce the very best graphics of its time. Just straight coding to the metal.


----------



## looncraz (Jul 14, 2016)

john_ said:


> So, GCN cards are faster in Mantle, DirectX 12 Mantle and also, Vulkan Mantle.



Also known as the future of PC gaming...


----------



## looncraz (Jul 14, 2016)

bug said:


> Edit: I may not skip this generation, my 660Ti starts to show its age. And I say "may" because the 480 doesn't cut it for me. 1060 I think will provide enough HP, but I won't buy it at FE+ prices.



That 1060 will age poorly compared to the RX 480.  All newer APIs favor it heavily and most new big games will be using said APIs already in the coming year.  Will you be happy next year watching your more expensive 1060 not playing games anywhere near as well as the cheaper RX 480?

AMD cards simply age better.


----------



## SetsunaFZero (Jul 14, 2016)

looncraz said:


> AMD cards simply age better.


 +1

Consoles should profit from this pretty well; no more downscaling


----------



## cdawall (Jul 14, 2016)

Jism said:


> There's a lot of 'compute' potential in AMD GPUs in general; this is why they are favoured for, for example, Bitcoin mining. They provide a much higher hashrate than Nvidia cards. What you are seeing here is the full potential of AMD GPUs being exploited, so if your AMD GPU is running 10 degrees hotter than usual, it means you are fully utilizing it.



I would be curious about the power consumption running in Vulkan.  Obviously they are pumping more wattage if they are making more heat. This probably means I'll need to drop my GPU clocks down.


----------



## the54thvoid (Jul 14, 2016)

The power usage thing can't be right. The cards are designed and sold with a power consumption figure, and in DX11 reviews they generally run at that power usage. If a game gets a 40-50% performance uplift, that can't equate to a linear power increase; it would go against the design.

Surely the API is just using the compute resources more efficiently instead. 

On that front, OcUK have Fury X cards on pre-order at £150 under normal pricing. I'm tempted to build a Skylake rig running a Fury X. But then I think: why sell so cheap unless there's a Fury X replacement incoming...


----------



## laszlo (Jul 14, 2016)

I was also interested in power consumption and found something interesting... Vulkan increases performance/watt within the same TDP:
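The arithmetic behind that claim is simple: efficiency is frames per second divided by watts, so if the power draw stays pinned at the same TDP while Vulkan raises the frame rate, performance/watt rises by exactly the FPS gain. A worked example with assumed numbers (the real figures are in the attachment, not reproduced here):

```python
# Assumed FPS and power numbers, for illustration only.

def perf_per_watt(fps, watts):
    return fps / watts

tdp = 150.0                            # power draw pinned at the card's TDP
opengl = perf_per_watt(60.0, tdp)      # 0.40 FPS per watt
vulkan = perf_per_watt(85.0, tdp)      # ~0.57 FPS per watt
gain = vulkan / opengl - 1.0           # equals the FPS gain: 85/60 - 1
print(f"efficiency gain at constant TDP: {gain:.0%}")
```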


----------



## Ubersonic (Jul 14, 2016)

Jism said:


> If anyone remembers the Amiga 1200: that thing had less than 2MB of RAM, and not even a GPU but an AGA graphics chip capable of up to 256 colors, lol. But still, programmers managed to squeeze graphics and tech demos out of that piece of antique hardware like never before.



You make it sound like the A1200 was weak, but in reality it had a pretty beefy spec that dwarfed most of the other PCs available at the time and laughed at the consoles.  Its 14MHz CPU was faster than pretty much anything bar Intel's flagship 486, and nobody really had those at the time.  By comparison, when it launched in 1992 I had an 8MHz IBM compatible with 640KB of RAM, a 27MB HDD and VGA graphics, and that was considered a pretty good system at the time.  Hell, I think it was '92 when a buddy of mine got a 286 with a whopping 1MB of RAM :O


----------



## bug (Jul 14, 2016)

looncraz said:


> That 1060 will age poorly compared to the RX 480.  All newer APIs favor it heavily and most new big games will be using said APIs already in the coming year.  Will you be happy next year watching your more expensive 1060 not playing games anywhere near as well as the cheaper RX 480?
> 
> AMD cards simply age better.


I have never seen a first-generation card provide adequate performance for any new DX generation.
The FX 5000 and Radeon 9000 series were not adequate for DX9 gaming, despite having support in place. Nvidia's 8 series and ATI's HD2000 series didn't have enough horsepower for DX10. And we have DX11 titles bringing cards to their knees today.
So my guess is, we'll need something better than Pascal/Polaris for proper DX12/Vulkan gaming.


----------



## PP Mguire (Jul 14, 2016)

R-T-B said:


> I don't honestly.  I just have a better understanding of what "low level" means being a programmer, and am trying to share my knowledge.  Async != low level.  I will leave it at that for now.


In the context it does. You're arguing with me assuming I have a point to argue when I don't. The other guy thanked my post and stopped posting on that subject because of that right there. I was merely extending that. As somebody working with the media group and ADP (at Lockheed) for a project in UE4 helping them try to understand DX12 better in regards to hardware I'd say I have a pretty firm grasp of the subject as well.


----------



## bug (Jul 14, 2016)

PP Mguire said:


> In the context it does. You're arguing with me assuming I have a point to argue when I don't. The other guy thanked my post and stopped posting on that subject because of that right there. I was merely extending that. As somebody working with the media group and ADP (at Lockheed) for a project in UE4 helping them try to understand DX12 better in regards to hardware I'd say I have a pretty firm grasp of the subject as well.


I think it would help if you guys used the proper terminology. There is no "low-level" per se; there's low-level programming and there are low-level programming languages, as defined here: https://en.wikipedia.org/wiki/Low-level_programming_language
So yes, Vulkan is lower level than OpenGL. Yet neither is a low-level programming language by any means ("low-level API" is actually an oxymoron). And no, hardware cannot be "low-level ready". Hardware is always accessed at a low level.


----------



## Nosada (Jul 14, 2016)

bug said:


> I have never seen a first-generation card provide adequate performance for any new DX generation.
> The FX 5000 and Radeon 9000 series were not adequate for DX9 gaming, despite having support in place. Nvidia's 8 series and ATI's HD2000 series didn't have enough horsepower for DX10. And we have DX11 titles bringing cards to their knees today.
> So my guess is, we'll need something better than Pascal/Polaris for proper DX12/Vulkan gaming.


This was true for previous DX gens because they were all about new features/effects. Both sides were designing cards for effects that were not entirely set in stone, and they had no reasonable way of predicting how they would be used. DX12 is different because it focuses on lowering overhead and increasing the efficiency of existing effects. Hardware DX12 support will actually increase the lifetime of a GPU, not decrease it like previous generations. The difference will show in a couple of years, when Pascal GPUs are insufficient to keep on gaming while their AMD counterparts are able to just nudge by.

This is where AMD and Nvidia differ most, IMO: AMD thinks that being as forward-thinking as possible is the way to go, while Nvidia designs cards for games that are already out, with little regard to how the card will perform on APIs that are yet to be released.

Both ideas have merit, and which one you prefer depends mostly on how frequently you change hardware.


----------



## PP Mguire (Jul 14, 2016)

bug said:


> I think it would help if you guys used the proper terminology. There is no "low-level" per se; there's low-level programming and there are low-level programming languages, as defined here: https://en.wikipedia.org/wiki/Low-level_programming_language
> So yes, Vulkan is lower level than OpenGL. Yet neither is a low-level programming language by any means ("low-level API" is actually an oxymoron). And no, hardware cannot be "low-level ready". Hardware is always accessed at a low level.


We can take it a step further and argue AMD geared their architectures for future advancements in API. Again, not really what that was all about. The guy just didn't get where I was going with it and that's understandable.


----------



## R-T-B (Jul 14, 2016)

PP Mguire said:


> In the context it does. You're arguing with me assuming I have a point to argue when I don't. The other guy thanked my post and stopped posting on that subject because of that right there. I was merely extending that. As somebody working with the media group and ADP (at Lockheed) for a project in UE4 helping them try to understand DX12 better in regards to hardware I'd say I have a pretty firm grasp of the subject as well.



I may have confused you with someone else (thought it was the same person this whole time, lol).  My apologies, I get lost in these threads.


----------



## bug (Jul 14, 2016)

Nosada said:


> This was true for previous DX gens because they were all about new features/effects. Both sides were designing cards for effects that were not entirely set in stone, and they had no reasonable way of predicting how they would be used. DX12 is different because it focuses on lowering overhead and increasing the efficiency of existing effects. Hardware DX12 support will actually increase the lifetime of a GPU, not decrease it like previous generations. The difference will show in a couple of years, when Pascal GPUs are insufficient to keep on gaming while their AMD counterparts are able to just nudge by.
> 
> This is where AMD and Nvidia differ most, IMO: AMD thinks that being as forward-thinking as possible is the way to go, while Nvidia designs cards for games that are already out, with little regard to how the card will perform on APIs that are yet to be released.
> 
> Both ideas have merit, and which one you prefer depends mostly on how frequently you change hardware.


Well, in a couple of years I hope I'll be gaming at 4k, so none of today's cards will be able to deliver (I don't SLI/Crossfire). So I'll need to upgrade anyway. I just don't play the "futureproofing" game.


----------



## Ubersonic (Jul 14, 2016)

Nosada said:


> This was true for previous DX gens because they were all about new features/effects. Both sides were designing cards for effects that were not entirely set in stone, and they had no reasonable way of predicting how they would be used. DX12 is different because it focuses on lowering overhead and increasing the efficiency of existing effects.



It will be true for this and future gens too. The increased efficiency we are now seeing from Vulkan/D3D12 is no different from the one seen from D3D11; it isn't here to stay, and once development leaves OpenGL/D3D11 behind, things will return to normal.

E.g. World of Warcraft launched with Direct3D 7/8/9 modes (and OpenGL) to ensure a wide range of GPU support. The higher the version used, the better the effects, but the bigger the performance hit (resulting in some users manually setting a lower version to boost their FPS). However, when Direct3D 11 support was added it didn't bring any added effects, just increased FPS over Direct3D 9 due to higher efficiency.

If a developer designs their game to run at 60 FPS and look as good as possible on high-end hardware using OpenGL/Direct3D11, then adds Vulkan/Direct3D12 support, that will cause a big FPS bump.  However, if they instead design their game to run at 60 FPS and look as good as possible on high-end hardware using Vulkan/Direct3D12, there will be no big FPS bump; there will just be a big effects/visual-quality bump.


----------



## Melvis (Jul 14, 2016)

looncraz said:


> AMD cards simply age better.



I would actually disagree with you here, sadly, and I'm not a fanboy of any GPU brand. I would agree with you if AMD kept supporting their older cards, but they have stopped that, which really annoyed me. Nvidia seem to continue supporting older cards for a lot longer than AMD do, and for me that's why Nvidia cards have a longer life than AMD cards: purely driver support.


----------



## Captain_Tom (Jul 14, 2016)

laszlo said:


> I was also interested in power consumption and found something interesting... Vulkan increases performance/watt within the same TDP:
> 
> View attachment 76869



Of course it does.  Unfortunately it is still a little behind the 1070, and especially the 1080.    

But I am guessing one of the main reasons the reference 480 is so cheap is that these are the low-binned yields GloFo can spit out quickly, and their process isn't quite as mature as TSMC's slightly larger 16nm process.


----------



## Captain_Tom (Jul 14, 2016)

Melvis said:


> I would actually disagree with you here, sadly, and I'm not a fanboy of any GPU brand. I would agree with you if AMD kept supporting their older cards, but they have stopped that, which really annoyed me. Nvidia seem to continue supporting older cards for a lot longer than AMD do, and for me that's why Nvidia cards have a longer life than AMD cards: purely driver support.



What card are you referring to? 

In my experience AMD cards age WAY WAY better than Nvidia's.  The only exception, imo, is the power-hungry Fermi cards, but even that is only if you ignore the paltry amounts of VRAM on the high-end offerings (which is a big deal).


----------



## Aquinus (Jul 14, 2016)

Melvis said:


> I would agree with you if AMD kept supporting their older cards, but they have stopped that, which really annoyed me. Nvidia seem to continue supporting older cards for a lot longer than AMD do, and for me that's why Nvidia cards have a longer life than AMD cards: purely driver support.


I got 6 years of driver support for my 6870s, I wouldn't call that bad. On the other hand you have products like the E-350 which they dropped support for pretty quickly. I suspect that support for most GCN GPUs will last as long as my 6870s did.


----------



## FordGT90Concept (Jul 14, 2016)

Navi might not be of GCN lineage (which means 7### will lose support sooner rather than later).  It's hard to tell at this point.


----------



## Captain_Tom (Jul 14, 2016)

R-T-B said:


> From the perspective of the API, no there is not.  AMD is severely constrained in tesselation.  You could counter your arguement by saying NVIDIA hardware is ready for future games featuring massive tesselation, and AMD just lacks the hardware to compete, etc
> 
> That AMD has better compute (they do) does not make them better in low level apis.  It makes them better at games that ufilize compute heavily.  The API?  *The API just facilitates access to the hardware. * It has no brand loyalties.



And with that statement you just poked a hole in your own argument.  "The API just facilitates access to the hardware".    Yeah hardware Nvidia just straight up doesn't have.

I know here you come to say "Oh but AMD doesn't have tess..." - let me cut you off right there.  AMD can run tessellation just fine (In fact better than most Nvidia cards at this point) because they don't have to emulate hardware.  

I will make another analogy - what you are saying is the equivalent of someone going "This API is AMD biased because it allows the game to use 3GB of VRAM instead of just 2GB."  A lot of people said this when BF4 came out about their 680's.   Again, having larger textures isn't biased - it just allows the use of more hardware.  If Nvidia users wanted Ultra textures they should have bought a card with more VRAM, but don't worry, because you can simply turn the setting down.  Nvidia cards have RAM, just not as much.   You can't "turn down Async"; you are better off just turning it off, because Nvidia doesn't have the hardware in any way.


----------



## Captain_Tom (Jul 14, 2016)

FordGT90Concept said:


> Navi might not be of GCN lineage (which means 7### will lose support sooner rather than later).  It's hard to tell at this point.



I am guessing Navi will be an entirely new arch as well, but that doesn't mean the old ones will lose support.   Also keep in mind the 7000 series is nearly 5 years old, and will be 7 years old by the time Navi launches.


----------



## bug (Jul 14, 2016)

Captain_Tom said:


> I am guessing Navi will be an entirely new arch as well, but that doesn't mean the old ones will lose support.   Also keep in mind the 7000 series is nearly 5 years old, and will be 7 years old by the time Navi launches.


Also keep in mind that Nvidia has recently (i.e. this year) released a driver for GeForce 8 series. And that's 10 years old.


----------



## HD64G (Jul 14, 2016)

laszlo said:


> i also was interested in power consumption and found something interesting... Vulkan increases performance/watt within the same TDP:
> 
> View attachment 76869



This single software update made the RX 480 20% more efficient than the 970. So, for DX12 and Vulkan games, Vega will have a walk in the park vs nVidia GPUs for the next 2 years at least.

I also hope W1z will include Vulkan DOOM in his reviews from now on.
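For what it's worth, the efficiency claim follows directly from the frame-rate numbers: at a fixed board power, perf/watt scales with fps. A quick back-of-envelope sketch with invented figures (not measured data):

```python
# Toy perf/watt check with made-up numbers: at the same power draw,
# a 52% fps gain is by definition a 52% perf/watt gain.
def perf_per_watt(fps, watts):
    return fps / watts

gl_fps, vk_fps, board_watts = 100.0, 152.0, 150.0  # hypothetical
gain = perf_per_watt(vk_fps, board_watts) / perf_per_watt(gl_fps, board_watts) - 1
print(f"{gain:.0%}")  # 52%
```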


----------



## Dbiggs9 (Jul 14, 2016)

Most future games will be built around the PS4.5, Xbox, and Nintendo NX, all running AMD GPUs. So more games will be built around AMD hardware and fewer around the green team.


----------



## the54thvoid (Jul 14, 2016)

HD64G said:


> for DX12 and Vulkan games Vega will have a walk in the park vs nVidia GPUs for the next 2 years at least.



Two points.
1) That park has green apples and red apples. Don't be so naive as to think a company as large as Nvidia is 'out'.
2) On Vega. The GTX 1080 smokes everything and it's only the 980 replacement.  The GP100/102 chip is 'rumoured' to be 50% faster. That is Vega's competition.

Even without compute, Pascal's highly optimised and efficient core can run DX12 and Vulkan just fine.  I'll wager with you, £10, through PayPal, that Vega won't beat the full Pascal chip.
If I lose, I'll be happy because it should start a price war. If I win, I'll be unhappy because Nvidia prices will reach the stratosphere.


----------



## R-T-B (Jul 14, 2016)

Captain_Tom said:


> And with that statement you just poked a hole in your own argument.  "The API just facilitates access to the hardware".    Yeah hardware Nvidia just straight up doesn't have.
> 
> I know here you come to say "Oh but AMD doesn't have tess..." - let me cut you off right there.  AMD can run tessellation just fine (In fact better than most Nvidia cards at this point) because they don't have to emulate hardware.
> 
> I will make another analogy - what you are saying is the equivalent of someone going "This API is AMD biased because it allows the game to use 3GB of VRAM instead of just 2GB."  A lot of people said this when BF4 came out about their 680's.   Again, having larger textures isn't biased - it just allows the use of more hardware.  If Nvidia users wanted Ultra textures they should have bought a card with more VRAM, but don't worry, because you can simply turn the setting down.  Nvidia cards have RAM, just not as much.   You can't "turn down Async"; you are better off just turning it off, because Nvidia doesn't have the hardware in any way.



None of this has anything to do with the point I was making, which is that the API doesn't care what the hardware has.  It just facilitates access to it.

Please support your tessellation claim.  Last I checked, AMD is far inferior in it due to a serial tessellator.  I could be wrong though.



> Nvidia cards gave RAM, just not as much. You can't "Turn down Async", you are better just turning it off because Nvidia doesn't have the hardware in any way.



You seem to mistake me for a fanboy.  I am not.  NVIDIA lacks async hardware.  There, I said it.  This has nothing to do with optimizing for a platform, which is the basis of DX12.  You again, do not "make" a platform for DX12.  You make your game for the platform.  That's what "low level" means.


----------



## Captain_Tom (Jul 14, 2016)

bug said:


> Also keep in mind that Nvidia has recently (i.e. this year) released a driver for GeForce 8 series. And that's 10 years old.



You do know that AMD still supports all the way back to HD 2000 series cards... Right?   Those launched 10 years ago.

Maybe you should make sure you know what you are talking about before you talk?



Also are you enjoying your 8800 GTX?   Not sure how considering it literally cannot run DX11 or newer....


----------



## R-T-B (Jul 14, 2016)

Captain_Tom said:


> You do know that AMD still supports all the way back to HD 2000 series cards... Right?   Those launched 10 years ago.
> 
> Maybe you should make sure you know what you are talking about before you talk?




No.

http://support.amd.com/en-us/kb-articles/Pages/Driver-Support-for-AMD-Radeon™-HD-4000,-HD-3000,-HD-2000-and-older-Series.aspx


----------



## bug (Jul 14, 2016)

Captain_Tom said:


> You do know that AMD still supports all the way back to HD 2000 series cards... Right?   Those launched 10 years ago.
> 
> Maybe you should make sure you know what you are talking about before you talk?
> 
> ...


Sure they do. The latest driver for Windows 7 is Catalyst 13.9. For Windows 8 and later the website says the driver is only available through Windows Update, so I can't verify the version (but it's probably the same 13.9). And if you go look for an Ubuntu driver, you get a page informing you that: "Since the AMD Radeon™ HD 4000 and older products were move to a legacy support model in 2013, they were not included in the list of supported products for these specific distributions." (For those who don't know, AMD dropped everything pre GCN before switching to their new driver.)
Other than that, yeah, it's the same level of support Nvidia provides.


----------



## Captain_Tom (Jul 14, 2016)

bug said:


> Sure they do. The latest driver for Windows 7 is Catalyst 13.9. For Windows 8 and later the website says the driver is only available through Windows Update, so I can't verify the version (but it's probably the same 13.9). And if you go look for an Ubuntu driver, you get a page informing you that: "Since the AMD Radeon™ HD 4000 and older products were move to a legacy support model in 2013, they were not included in the list of supported products for these specific distributions." (For those who don't know, AMD dropped everything pre GCN before switching to their new driver.)
> Other than that, yeah, it's the same level of support Nvidia provides.



Exactly.  After 7 years, there is no need for specific game-by-game optimization.  The driver is as efficient as it will ever get.

But R-T-B said "No" and posted the link I already checked myself.  So I guess I am just wrong because he can't read...


----------



## Aquinus (Jul 14, 2016)

Captain_Tom said:


> Exactly.  After 7 years, there is no need for specific game-by-game optimization.  The driver is as efficient as it will ever get.
> 
> But R-T-B said "No" and posted the link I already checked myself.  So I guess I am just wrong because he can't read...


No it's not. It's old and they stopped supporting it. Supporting it means that they're actively maintaining drivers, it doesn't mean they're perfect. Maybe you should take your own advice...


Captain_Tom said:


> Maybe you should make sure you know what you are talking about before you talk?


----------



## looncraz (Jul 14, 2016)

Aquinus said:


> No it's not. It's old and they stopped supporting it. Supporting it means that they're actively maintaining drivers, it doesn't mean they're perfect. Maybe you should take your own advice...



AMD is actually still supporting back to the HD2000 series through Microsoft's hardware compatibility programs.  You can use those cards in Windows 10, but you won't be able to use all the latest bells and whistles with any support from AMD... however, some people have managed to install CCC nonetheless.

http://www.wagnardmobile.com/forums/viewtopic.php?f=11&t=64

The R600 architecture is a decade old - and works with AMD drivers on Windows 10.  AMD simply doesn't offer a download on their website.


----------



## Aquinus (Jul 14, 2016)

looncraz said:


> AMD is actually still supporting back to the HD2000 series through Microsoft's hardware compatibility programs.  You can use those cards in Windows 10, but you won't be able to use all the latest bells and whistles with any support from AMD... however, some people have managed to install CCC nonetheless.
> 
> http://www.wagnardmobile.com/forums/viewtopic.php?f=11&t=64
> 
> The R600 architecture is a decade old - and works with AMD drivers on Windows 10.  AMD simply doesn't offer a download on their web-site.


Windows 10 doesn't require WDDM 2.0 drivers. It can run off of older Windows 7 drivers. I'm willing to bet that the 13.1 drivers are probably the ones being downloaded by Windows Update with the older cards. That's not what's being disputed though. Being able to install or use a driver versus one that's actively being supported are two very different things.


----------



## cdawall (Jul 15, 2016)

I question why a company should be wasting money on such a minute portion of the population. So the highest usage of those cards is the HD 5450, with a huge 0.61% of the machines on Steam.







http://store.steampowered.com/hwsurvey/videocard/


----------



## looncraz (Jul 15, 2016)

Aquinus said:


> Windows 10 doesn't require WDDM 2.0 drivers. It can run off of older Windows 7 drivers. I'm willing to bet that the 13.1 drivers are probably the ones being downloaded by Windows Update with the older cards. That's not what's being disputed though. Being able to install or use a driver versus one that's actively being supported are two very different things.



There's not much difference, really. AMD doesn't release drivers to the public except through Windows Update - but they ARE updated drivers.  They receive important fixes, but no new features.  Besides nVidia including the Control Panel and AMD not including CCC in their legacy drivers, there is really no difference.

For nVidia, having the Control Panel is vital as their defaults are horrendous (such as defaulting to limited color ranges), but AMD has not had that problem - their defaults tend to be to fully enable a capability.  And if you want CCC, you can still install it and use it without issues.


----------



## BiggieShady (Jul 15, 2016)

laszlo said:


> i also was interested in power consumption and found something interesting... Vulkan increases performance/watt within the same TDP:



Finally, Vulkan puts the RX 480 where it should be ... I'm amazed at how well Maxwell holds its own on 28 nm next to the efficient 14 nm FinFET Polaris, and also at how up to date Nvidia is with OpenGL support.


----------



## Melvis (Jul 15, 2016)

Captain_Tom said:


> What card are you referring to?
> 
> In my experience AMD cards age WAY WAY better than Nvidia.  The only exception imo is the power hungry Fermi cards, and even that is only if you ignore the paltry amounts of VRAM on the high-end offerings (which is a big deal).




I'm talking about as far back as the 8800 GT series of cards and onwards, where I find driver support for even these old cards (maybe not so much anymore) will work in modern OSes and games, whereas AMD cards stopped after the HD 4000 series. That made my (at the time) 4870X2 a lot harder to use, even in Windows 7 and games, considering the card still had a lot to offer at the time.



Aquinus said:


> I got 6 years of driver support for my 6870s, I wouldn't call that bad. On the other hand you have products like the E-350 which they dropped support for pretty quickly. I suspect that support for most GCN GPUs will last as long as my 6870s did.



I don't consider the HD 6XXX series old. I was more so referring to the HD 3000 and 4000 series of cards, which at the time were great cards and still had lots to offer, but AMD decided to stop support for them, whereas Nvidia still support their cards from that era. This is why I think you get a longer life out of an Nvidia card compared to an AMD card.


----------



## Captain_Tom (Jul 15, 2016)

Melvis said:


> I'm talking about as far back as the 8800 GT series of cards and onwards, where I find driver support for even these old cards (maybe not so much anymore) will work in modern OSes and games, whereas AMD cards stopped after the HD 4000 series. That made my (at the time) 4870X2 a lot harder to use, even in Windows 7 and games, considering the card still had a lot to offer at the time.
> 
> 
> 
> I don't consider the HD 6XXX series old. I was more so referring to the HD 3000 and 4000 series of cards, which at the time were great cards and still had lots to offer, but AMD decided to stop support for them, whereas Nvidia still support their cards from that era. This is why I think you get a longer life out of an Nvidia card compared to an AMD card.




Where are you people getting your information from?  They are still supported just fine lol.   Check their page, you can download drivers for your 2006 HD 2000 card!


----------



## Captain_Tom (Jul 15, 2016)

Aquinus said:


> Windows 10 doesn't require WDDM 2.0 drivers. It can run off of older Windows 7 drivers. I'm willing to bet that the 13.1 drivers are probably the ones being downloaded by Windows Update with the older cards. That's not what's being disputed though. Being able to install or use a driver versus one that's actively being supported are two very different things.



Let me get this straight.  You think Nvidia is working right now on optimizing 8800 GTX cards for.... What?  Borderlands:  TPS?!    No game in 2016 even uses below DX11.  And everything else runs fine with legacy drivers.


----------



## Melvis (Jul 15, 2016)

Captain_Tom said:


> Let me get this straight.  You think Nvidia is working right now on optimizing 8800 GTX cards for.... What?  Borderlands:  TPS?!    No game in 2016 even uses below DX11.  And everything else runs fine with legacy drivers.



And you think AMD do? Same boat, but at least Nvidia still has (or at least did have) support if you run an older card in new OSes and games.  Nvidia driver for the 8 series, release date: 2016.3.16


----------



## cdawall (Jul 15, 2016)

Melvis said:


> And you think AMD do? Same boat, but at least Nvidia still has (or at least did have) support if you run an older card in new OSes and games.  Nvidia driver for the 8 series, release date: 2016.3.16



Nvidia also rebranded the 8 series for many many more years.

Also, 341.95 was the last driver for that batch of cards; they are no longer supported.


----------



## Captain_Tom (Jul 15, 2016)

Melvis said:


> And you think AMD do? Same boat, but at least Nvidia still has (or at least did have) support if you run an older card in new OSes and games.  Nvidia driver for the 8 series, release date: 2016.3.16



I think neither of them do.  I believe they both finished perfecting the drivers of 2006's cards 6 years ago!  They can say their driver is "newer" all they want.  It doesn't do a damn thing.  


And then I have to play devil's advocate here:  What new game from this or last year does an HD 2000 series card need optimization for?   It can't even run games above DX9, so there is literally no point in releasing new drivers when no big DX9 game has come out for 2 years!!!   This is just PR for Nvidia, so they can make it look like their cards age well, when in reality a 7870 = 680 right now!


----------



## R-T-B (Jul 15, 2016)

looncraz said:


> There's not much difference, really. AMD doesn't release drivers to the public except through Windows Update - but they ARE updated drivers.  They receive important fixes, but no new features.  Besides nVidia including the Control Panel and AMD not including CCC in their legacy drivers, there is really no difference.



I'll bet a real $5.00 paypal bucks they are just repacks of the 13.1 drivers.  First come first serve with proof.  Note it may take me up to 24 hours to make payment as I don't live here.



Captain_Tom said:


> I think neither of them do.  I believe they both finished perfecting the drivers of 2006's cards 6 years ago!



Yes, that's basically what I was saying before someone (I wonder who?) threw out that HD2000 was still being actively supported.


----------



## bug (Jul 15, 2016)

Captain_Tom said:


> Where are you people getting your information from?  They are still supported just fine lol.   Check their page, you can download drivers for your 2006 HD 2000 card!


Not if you're on Win8, 8.1 or 10.



Captain_Tom said:


> Let me get this straight.  You think Nvidia is working right now on optimizing 8800 GTX cards for.... What?  Borderlands:  TPS?!    No game in 2016 even uses below DX11.  And everything else runs fine with legacy drivers.


They're probably not optimizing anything at this point, but they still fix the occasional issue. A game that runs just fine on Win7 may not run as such on Win10.


----------



## I No (Jul 15, 2016)

Who here thinks that, from a business point of view, it's profitable to actively support EOL tech? Or do people still think AMD is playing "good and fair" with their 6-year-old customers? Or nVidia, for that matter. As a customer you are only good when you actually buy the card, not 6 years after you bought it. As far as expecting support for EOL tech goes, you're barking up the wrong tree. Who in their right mind would put money into developing new software for hardware that was in use 5, 6, 20 years ago? The only reason GCN is getting its support is that it has been in use for a very long time now. I'm just wondering what's going to happen to it if Vega turns out to be different from GCN. Although I wouldn't put my money on it being changed, given that, compared to the competition, AMD's current R&D budget is less than the department store's across the road.


----------



## bug (Jul 15, 2016)

I No said:


> Who here thinks that, from a business point of view, it's profitable to actively support EOL tech? Or do people still think AMD is playing "good and fair" with their 6-year-old customers? Or nVidia, for that matter. As a customer you are only good when you actually buy the card, not 6 years after you bought it. As far as expecting support for EOL tech goes, you're barking up the wrong tree. Who in their right mind would put money into developing new software for hardware that was in use 5, 6, 20 years ago? The only reason GCN is getting its support is that it has been in use for a very long time now. I'm just wondering what's going to happen to it if Vega turns out to be different from GCN. Although I wouldn't put my money on it being changed, given that, compared to the competition, AMD's current R&D budget is less than the department store's across the road.


Well, AMD users are used to that line of thinking. And there's nothing wrong with that.
But the initial assertion was that AMD hardware ages better. Now, as a Nvidia user I'm used to knowing that I can use a 10 years old video card whether I'm on Windows or Linux (or Solaris, if I'm feeling particularly kinky). And this makes sense from a business PoV because support is the reason I stick to Nvidia in the first place. Plus, Nvidia uses a unified driver and keeping it up to date is more cost effective than it is for AMD. Of course, Nvidia doesn't support products forever, at some point architectures are relegated to a legacy branch, but even that continues to receive updates for a while, before being officially retired. For example, I can find a linux driver for GeForce 6 series (12 years old) updated in Nov 2015. And that gives me confidence when buying.


----------



## Captain_Tom (Jul 15, 2016)

R-T-B said:


> I'll bet a real $5.00 paypal bucks they are just repacks of the 13.1 drivers.  First come first serve with proof.  Note it may take me up to 24 hours to make payment as I don't live here.
> 
> 
> 
> Yes, that's basically what I was saying before someone (I wonder who?) threw out that HD2000 was still being actively supported.





Hahaha come on now.   Just because everyone is piling on you doesn't mean you can play the "that's what I was trying to say" card.  You made it sound like AMD doesn't have any drivers at all and Nvidia is still making sure 8800 GTX's can run BF1.   The fact is both companies stop doing active optimization after about 6 years, and it's because they don't need to.


----------



## Captain_Tom (Jul 15, 2016)

bug said:


> They're probably not optimizing anything at this point, but they still fix the occasional issue. A game that runs just fine on Win7 may not run as such on Win10.

You do realize that HD 2000 cards have a Windows 10 driver download. 


And why are you talking about Windows 7?   You have to go all the way back to Windows XP to check this, buddy, and I would like someone to check it, because all I am seeing is conjecture.   No one here actually had a problem.


----------



## Aquinus (Jul 15, 2016)

Captain_Tom said:


> Hahaha come on now.   Just because everyone is piling on you doesn't mean you can play the "that's what I was trying to say" card.  You made it sound like AMD doesn't have any drivers at all and Nvidia is still making sure 8800 GTX's can run BF1.   The fact is both companies stop doing active optimization after about 6 years, and it's because they don't need to.


Support indicates that they're still going to make improvements for the driver for particular hardware. We're not saying they don't have drivers available, what we're saying is that they're out of support and haven't been updated for almost 3 years.


Captain_Tom said:


> You do realize that HD 2000 cards have a Windows 10 driver download.


Yeah, it's the same driver from Windows 7. WDDM 1.1 drivers can run on Windows 10, just as WDDM 1.2 drivers can as well. I bet you can go on AMD's website, download the old driver for Windows 7, then use hardware manager to install the display driver directly. None of this means that the hardware is still being supported though, it just means that *old drivers for Windows 7 still work on Windows 10*.


----------



## Captain_Tom (Jul 15, 2016)

bug said:


> Well, AMD users are used to that line of thinking. And there's nothing wrong with that.
> But the initial assertion was that AMD hardware ages better. Now, as a Nvidia user I'm used to knowing that I can use a 10 years old video card whether I'm on Windows or Linux (or Solaris, if I'm feeling particularly kinky). And this makes sense from a business PoV because support is the reason I stick to Nvidia in the first place. Plus, Nvidia uses a unified driver and keeping it up to date is more cost effective than it is for AMD. Of course, Nvidia doesn't support products forever, at some point architectures are relegated to a legacy branch, but even that continues to receive updates for a while, before being officially retired. For example, I can find a linux driver for GeForce 6 series (12 years old) updated in Nov 2015. And that gives me confidence when buying.




The problem is you're trying to negate how terribly Nvidia's cards age by saying "They technically last longer".     In reality no one is out there complaining that their old 8800 or 3870 can't run BF4.  ZERO.  Not a single person.

However, there are a lot of people who paid $700 for a 780 Ti that is beaten by my overclocked 7950 in the latest games.   I paid $100 for this card to do some extra Darkcoin mining a couple of years ago (my temporary card until I upgrade to Pascal/Polaris/Vega).   Stop using this made-up driver issue to deflect from the truth that an Nvidia card only lasts about a year before AMD's last-gen cards start beating it.


----------



## Captain_Tom (Jul 15, 2016)

Aquinus said:


> Yeah, it's the same driver from Windows 7. WDDM 1.1 drivers can run on Windows 10, just as WDDM 1.2 drivers can as well. I bet you can go on AMD's website, download the old driver for Windows 7, then use hardware manager to install the display driver directly. None of this means that the hardware is still being supported though, it just means that *old drivers for Windows 7 still work on Windows 10*.

Ok so again - find me a game that isn't working.


----------



## Aquinus (Jul 15, 2016)

Captain_Tom said:


> Ok so again - find me a game that isn't working.


I'm not saying that it will or will not work okay with whatever game you throw at it; my only point is that no updates means no active support, period, end of story. I've said nothing beyond that.  I also don't have time to cobble together a machine with my old 2600 XT to figure that out because, like AMD, I don't really care anymore; I have several newer GPUs I can use first, such as one of my old 6870s. Also, there are some platforms where this isn't the case, such as Linux. For example, I can't install FGLRX on Ubuntu 14.04, 15.10, or 16.04 for my old 2600 XT, and I can't install FGLRX on my old laptop with a Mobility Radeon HD 3650 in it. Just because there is backwards compatibility in the places you care about, and it seemingly works when you use it, does not mean it's still actively supported.

Example, I'm using a Vista driver for the USB wi-fi card in my machine. They haven't released an update for a very long time but, I can still install the driver and run at a full 300Mbit without an issue despite the fact that it's an old driver. Same deal with the ASMedia and C-Media AHCI drivers for my motherboard's 3rd party SATA and eSATA controllers. It doesn't need to be supported to work but, if something does break, don't expect it to get fixed.


----------



## Captain_Tom (Jul 15, 2016)

Aquinus said:


> I'm not saying that it will or will not work okay with whatever game you throw at it; my only point is that no updates means no active support, period, end of story. I've said nothing beyond that.  I also don't have time to cobble together a machine with my old 2600 XT to figure that out because, like AMD, I don't really care anymore; I have several newer GPUs I can use first, such as one of my old 6870s. Also, there are some platforms where this isn't the case, such as Linux. For example, I can't install FGLRX on Ubuntu 14.04, 15.10, or 16.04 for my old 2600 XT, and I can't install FGLRX on my old laptop with a Mobility Radeon HD 3650 in it. Just because there is backwards compatibility in the places you care about, and it seemingly works when you use it, does not mean it's still actively supported.
> 
> Example, I'm using a Vista driver for the USB wi-fi card in my machine. They haven't released an update for a very long time but, I can still install the driver and run at a full 300Mbit without an issue despite the fact that it's an old driver. Same deal with the ASMedia and C-Media AHCI drivers for my motherboard's 3rd party SATA and eSATA controllers. It doesn't need to be supported to work but, if something does break, don't expect it to get fixed.



Well I think we are on the same page then.  My entire point though, is that anyone trying to say Nvidia's cards last longer due to "Driver support" is full of BS.  This is a non-point that derailed the conversation.

1) Things like newer OSes and DX/Vulkan/etc. become a far greater compatibility issue long before any of these drivers would be.

2) Like you said - you (and I as well) run plenty of things with old drivers that work perfectly fine.  Once a driver is "perfected" there is nothing left for these companies to do - it will work fine.  Furthermore, anyone who says "It could be an issue" should at least have anecdotal evidence before they even pretend it is a thing.


----------



## R-T-B (Jul 15, 2016)

Captain_Tom said:


> You made it sound like AMD doesn't have any drivers at all and Nvidia is still making sure 8800 GTX's can run BF1.




How you infer anything about nvidia from a simple "No" followed by a link from amd (stating support has ended and no future driver releases are planned) is beyond me.


----------



## cdawall (Jul 15, 2016)

So what does any of this have to do with Vulkan?


----------



## HD64G (Jul 15, 2016)

the54thvoid said:


> Two points.
> 1) That park has green apples and red apples. Don't be so naive as to think a company as large as Nvidia is 'out'.
> 2) On Vega. The GTX 1080 smokes everything and it's only the 980 replacement.  The GP100/102 chip is 'rumoured' to be 50% faster. That is Vega's competition.
> 
> ...



OK, let's make clear what I wrote above then. I mean that at the same price level, Vega will have a party against nVidia GPUs, which will probably be the 1080 imho. Navi might be the competition for the next Titan, so Vega will go against the 1080 (Vega 10) and 1080 Ti (Vega 11). Except for power consumption, Vega will be a monster for DX12 and Vulkan. nVidia will need another architecture to compete with that, imho.


----------



## GoldenX (Jul 15, 2016)

bug said:


> ... For example, I can find a linux driver for GeForce 6 series (12 years old) updated in Nov 2015. And that gives me confidence when buying.



Good luck using that driver on a current kernel and X.org. On DX10 and older hardware you're stuck with nouveau, unless you install an old and unsupported distro. Same thing with ATI/AMD Terascale hardware, you only have the free driver as an option.

How is the jump in performance for Kepler when using Vulkan?


----------



## HTC (Jul 16, 2016)

This dude's video explains where the extra performance comes from:










The relevant part starts @ around 8:46.

The reason AMD cards are gaining so much is not so much because Vulkan is so much better, but rather because AMD's OpenGL is so much worse than nVidia's: look at the CPU overhead on both camps with OpenGL. It explains why nVidia's gains are so much lower: there's much less room for improvement with nVidia.
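A crude way to see why lower CPU overhead helps one vendor far more than the other: model frame time as the slower of CPU submission and GPU rendering. All numbers below are invented for illustration, not benchmark data:

```python
# Toy model: a frame can't finish faster than the slower of the CPU
# submitting draw calls and the GPU rendering them. Numbers are made up.
def fps(cpu_us_per_draw, draws, gpu_frame_ms):
    cpu_frame_ms = cpu_us_per_draw * draws / 1000.0
    return 1000.0 / max(cpu_frame_ms, gpu_frame_ms)

draws, gpu_ms = 2000, 8.0
print(fps(6.0, draws, gpu_ms))  # heavy GL driver: CPU-bound, ~83 fps
print(fps(3.0, draws, gpu_ms))  # lean GL driver: GPU-bound, 125 fps
print(fps(1.0, draws, gpu_ms))  # low-overhead API: still 125 fps
```

With these hypothetical numbers, cutting per-draw CPU cost only lifts the vendor that was CPU-bound to begin with; the one already GPU-bound gains nothing, which matches the pattern described above.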


----------



## Fluffmeister (Jul 16, 2016)

Same situation as DX11: Nv's driver overhead was already low, so unsurprisingly they showed fewer gains.

People are essentially celebrating mediocrity.


----------



## Xzibit (Jul 16, 2016)

HTC said:


> This dude's video explains where the extra performance comes from:
> 
> 
> 
> ...



Doesn't bode well for Nvidia when saying that, looking at how a 480 can creep up to their 1070.

Both are still rendering the same thing, and AMD's lower price tier cards are jumping 1, maybe 2 spots, while Nvidia's are staying stagnant.


----------



## Fluffmeister (Jul 16, 2016)

Xzibit said:


> Doesn't bode well for Nvidia when saying that, looking at how a 480 can creep up to their 1070.



Yeah, based on one game that hammers along regardless of Vulkan (at least on Nv hardware).

We get it, Nvidia are DOOOOOOOOOOOOMED!


----------



## Aquinus (Jul 16, 2016)

HTC said:


> The reason AMD cards are gaining so much is not so much because Vulkan is so much better, but rather because AMD's OpenGL is so much worse than nVidia's: look at the CPU overhead on both camps with OpenGL. It explains why nVidia's gains are so much lower: there's much less room for improvement with nVidia.


Sure, but here's a bit of speculation. GPU vendors write their own OpenGL implementations, and I'm willing to bet that nVidia was already batching draw calls under the hood, putting them in a queue and flushing it when the right OpenGL command came along, whereas AMD was probably executing each draw call on the spot. The CPU number is higher, sure, but I'm willing to bet that the huge number next to "GPU" indicates the machine is going gung-ho on calls that impact the GPU. I wouldn't be surprised if nVidia was simply doing this in driver space to smooth out the actual calls to the API so the application can more quickly get on to the next thing. It's the kind of thing I would do if I needed to get latency down and the work could be deferred while staying in order. Queues are great for that.

You know, this entire thing could simply be the difference between nVidia following the specification loosely, well enough for everything to work while doing things under the hood to make it go faster, versus AMD implementing the specification strictly and suffering the consequences. It's an interesting thought, that's for sure.
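The queue-and-flush behaviour I'm describing can be sketched in a few lines (a toy model with made-up names, nothing like real driver code):

```python
from collections import deque

class BatchingDriver:
    """Toy driver that records draw calls cheaply and submits them in one
    batch at a flush point (e.g. a buffer swap), preserving their order."""

    def __init__(self):
        self._pending = deque()
        self.batches = []  # what actually reached the "GPU"

    def draw(self, call):
        # Cheap path: just remember the call, return to the app immediately.
        self._pending.append(call)

    def flush(self):
        # The expensive submission happens once per batch, still in order.
        batch = list(self._pending)
        self._pending.clear()
        self.batches.append(batch)
        return batch

drv = BatchingDriver()
for i in range(3):
    drv.draw(f"draw_{i}")
print(drv.flush())  # ['draw_0', 'draw_1', 'draw_2']
```

An immediate-mode driver would pay the submission cost inside every `draw` call instead; the queue keeps the application's critical path short while order is preserved.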


----------



## HTC (Jul 16, 2016)

Aquinus said:


> Sure, but here's a bit of speculation. GPU vendors write their own OpenGL implementations, and I'm willing to bet that nVidia was already batching draw calls under the hood, putting them in a queue and flushing it when the right OpenGL command came along, whereas AMD was probably executing each draw call on the spot. The CPU number is higher, sure, but *I'm willing to bet that the huge number next to "GPU" indicates the machine is going gung-ho on calls that impact the GPU*. I wouldn't be surprised if nVidia was simply doing this in driver space to smooth out the actual calls to the API so the application can more quickly get on to the next thing. It's the kind of thing I would do if I needed to get latency down and the work could be deferred while staying in order. Queues are great for that.
> 
> You know, this entire thing could simply be the difference between nVidia following the specification loosely, well enough for everything to work while doing things under the hood to make it go faster, versus AMD implementing the specification strictly and suffering the consequences. It's an interesting thought, that's for sure.



The GPU numbers on AMD's cards are bugged: even the video poster says so. The CPU numbers *seem accurate*, though.

There's a good portion of AMD's card resources that go unused, which are now being tapped by Vulkan. The resources have been there the whole time but AMD has been unable to put them to good use. What Vulkan shows us is how much better the cards CAN BE if their resources are properly managed. In this regard, nVidia has been WAY more efficient.


----------



## cdawall (Jul 16, 2016)

HTC said:


> The GPU numbers on AMD's cards are bugged: even the video poster says so. The CPU numbers *seem accurate*, though.
> 
> There's a good portion of AMD's card resources that go unused, which are now being tapped by Vulkan. The resources have been there the whole time but AMD has been unable to put them to good use. What Vulkan shows us is how much better the cards CAN BE if their resources are properly managed. In this regard, nVidia has been WAY more efficient.



This is what happens when game coders are lazy... There is a reason AMD cards dominate when used for real workloads... They are more powerful, period...


----------



## IRQ Conflict (Jul 16, 2016)

cdawall said:


> This is what happens when game coders are lazy... There is a reason AMD cards dominate when used for real workloads... They are more powerful, period...


This. If AMD had the money to throw at devs the way Nvidia has over the years, things would look MUCH different. But thanks to the time and effort AMD put into Mantle, we may well see a major revolution in games. Provided, of course, game devs use the tools GIVEN to them.


----------



## dalekdukesboy (Jul 16, 2016)

RejZoR said:


> Lol, Vulkan isn't "biased". AMD GPUs are just more advanced when it comes to the more direct GPU access that Vulkan and DX12 allow; the fact they weren't shining is that software wasn't taking advantage of all that yet. Till now. I mean, AMD has had partial async compute since the HD 7000 series and full async in the R9 290X. NVIDIA still doesn't have even partial support in the GTX 1080, from the looks of it. Async is when you can seamlessly blend graphics, audio and physics computation on a single GPU, something AMD has been aiming at basically the whole time since they created GCN. They support graphics, they added audio on the R9 290X, and they've been working with physics for ages, some with Havok and some with Bullet.
> 
> R9 Fury X users don't feel that let down anymore  In fact R9 Fury cards in general shine in DX12 and apparently also in Vulkan. While I love my GTX 980 I kinda regret I haven't gone with R9 Fury/Fury X.
> 
> ...





Ummmm... yeah. If that rant against even the previous-generation 900 series cards isn't you showing your AMD fanboyism, I don't know what is. You're simply wrong: the 980/Ti beat the competition hands down. The ONLY thing you got right is "against last generation of AMD cards that weren't particularly awesome". There you are absolutely right: AMD wasn't particularly awesome, the GTX 980 matched or beat their top card in everything up to 2K resolutions while being cheaper, and the Ti destroyed it at every resolution. How is that objectively nVidia "sucks" versus last-gen AMD? If I owned a Fury I'd say the same thing; you're just wrong, and ranting.


----------



## dalekdukesboy (Jul 16, 2016)

Litvan said:


> Am I the only one wondering why there's not a single 1080 card on that list and only a 1070?



Yep, I saw that straight away and wondered why they'd leave it off, except that it would have been right up there with the AMD cards, so they couldn't possibly show that.


----------



## PaNiC (Jul 16, 2016)

Parn said:


> If Vulkan is so biased towards AMD cards, I doubt any major developers will risk lower sales just to make games run faster on AMD by offering renders based solely on Vulkan.


Gratz!!! You won the clueless poster award.
Blizzard loves Mac support... so they'll use Vulkan. Valve helped make Vulkan... so they'll use it, and Bethesda is already using it. That just leaves EA and Ubisoft.
Edit: Vulkan will also be the graphics API for Android. What harms Vulkan harms the Nvidia Shield and their push into Android.


----------



## FordGT90Concept (Jul 16, 2016)

Doom was made by id Software, which has used OpenGL since forever. Id adopting Vulkan was kind of inevitable.


----------



## Flow (Jul 17, 2016)

All in all a healthy development.
And yes, my eldest son gained well over 30 fps on his AMD 290 card in Doom.
I would say Vulkan is here to stay and will replace OpenGL. Consoles will happily use it too, squeezing some more performance out of those boxes for some years to come.

But in any case, Nvidia was already highly optimized for many games, and now AMD is catching up. That's healthy competition Nvidia should be happy with,
because it would certainly not be in their interest to be the sole provider of graphics chips.
Game on...


----------



## TheGuruStud (Jul 17, 2016)

Flow said:


> All in all a healthy development.
> And yes, my eldest son gained well over 30 fps on his AMD 290 card in Doom.
> I would say Vulkan is here to stay and will replace OpenGL. Consoles will happily use it too, squeezing some more performance out of those boxes for some years to come.
> 
> ...



?

AMD only exists so intel can say "look, there's still a competitor". Nvidia works the same way. Both could undercut AMD so bad that they'd file for bankruptcy and a sale would be started in a few months.

Both are also probably scared that a juggernaut with deep pockets would acquire all the IP and put them under (like Samsung). Keeping AMD barely alive is highly profitable for these schmucks.


----------



## cdawall (Jul 17, 2016)

TheGuruStud said:


> ?
> 
> AMD only exists so intel can say "look, there's still a competitor". Nvidia works the same way. Both could undercut AMD so bad that they'd file for bankruptcy and a sale would be started in a few months.
> 
> Both are also probably scared that a juggernaut with deep pockets would acquire all the IP and put them under (like Samsung). Keeping AMD barely alive is highly profitable for these schmucks.



AMD is at 30% for GPUs and gaining market share; I wouldn't count them out of that market. Vulkan and DX12, as well as the bitcoin-mining craze, have painted them in a good light.


----------



## FordGT90Concept (Jul 17, 2016)

AMD's future is riding on Zen more so than anything else.


----------



## Aquinus (Jul 17, 2016)

FordGT90Concept said:


> AMD's future is riding on Zen more so than anything else.


It depends on AMD not making another catastrophe, but I fear that no amount of progress will change the direction AMD is going. Most of what they've done lately is adequate, but it will take a huge push for them to actually make progress. Intel is a juggernaut, much more so than nVidia. I'm hoping Zen will be good, but we can't kid ourselves when you consider the likely R&D budgets of the two companies: you're talking about AMD, which takes in hundreds of millions of USD in revenue a year, versus Intel, which takes in tens of billions. Granted, Intel's breadth in the market is much wider than AMD's, but it only shows the vast difference in capability between the two companies. I love AMD, but I think the sad reality is that AMD lost this war a long time ago, and it's only a matter of time until they're relegated to the likes of VIA... what a sad day that will be.


----------



## nem.. (Jul 18, 2016)




----------



## IRQ Conflict (Jul 18, 2016)

Isn't that just texture streaming? That happens on both brands and has been a "feature" of modern titles for quite some time now.

Edit: Just downloaded the demo. Ran fine on my 290X 4GB. Didn't notice any streaming issues. Probably just a game patch or driver update needed for NV cards.


----------



## kifi (Jul 19, 2016)

HTC said:


> This dude's video explains where the extra performance comes from:
> 
> 
> 
> ...


That is because Nvidia's OpenGL method is non-standard, and developers foolishly used the hacked method.


----------



## ShurikN (Jul 19, 2016)

Has async been implemented in Doom Vulkan yet?


----------



## kifi (Jul 19, 2016)

ShurikN said:


> Has async been implemented in Doom Vulkan yet?


Already available when turning TSSAA on, as pointed out by some readers who own AMD graphics cards (R9 380, Fury, RX 480...).
Nvidia cards use preemption to compensate for the lack of asynchronous compute in their hardware.


----------



## TheGuruStud (Jul 20, 2016)

kifi said:


> Already available when turning TSSAA on, as pointed out by some readers who own AMD graphics cards (R9 380, Fury, RX 480...).
> Nvidia cards use preemption to compensate for the lack of asynchronous compute in their hardware.



Compensate is a strong word. More like marketing speak.

I highly doubt that compute is being properly leveraged, too, being a pretty new API and Async being new for that matter.


----------



## Xzibit (Jul 20, 2016)

TheGuruStud said:


> Compensate is a strong word. More like marketing speak.
> 
> I highly doubt that compute is being properly leveraged, too, being a pretty new API and Async being new for that matter.



Also, developers that offload the PC version to a third party are less likely to implement those. With the first wave we are seeing stripped versions and DX11 regressions. Ports being ports.


----------



## bug (Jul 21, 2016)

TheGuruStud said:


> Compensate is a strong word. More like marketing speak.
> 
> I highly doubt that compute is being properly leveraged, too, being a pretty new API and Async being new for that matter.


Read Anandtech's GTX 1080 review. It explains in depth the state of async compute and why it currently makes more sense for AMD's hardware.


----------



## Ungari (Jul 21, 2016)

ShurikN said:


> Has async been implemented in Doom Vulkan yet?



Not by TPU in its benchmarks for the Pascal vs. Polaris reviews.


----------



## TheGuruStud (Jul 21, 2016)

bug said:


> Read Anandtech's GTX 1080 review. It explains in depth the state of async compute and why it currently makes more sense for AMD's hardware.



I can only handle so much shill shrimpi (the intel paychecks are just too much).
Nvidia would benefit, but they wanted to cut power usage for today's perf/watt.


----------



## bug (Jul 22, 2016)

TheGuruStud said:


> I can only handle so much shill shrimpi (the intel paychecks are just too much).
> Nvidia would benefit, but they wanted to cut power usage for today's perf/watt.


Well, you couldn't be more wrong, but if you won't be bothered reading, there's nothing else I can add.


----------



## FordGT90Concept (Jul 31, 2016)

I just ran Talos Principle benchmark on my R9 390 and 16.7.3 drivers (ultra, medium AA, 1920x1200):
Vulkan: 80.1 fps
DX11: 96.6 fps

It doesn't have the massive boost on GCN cards that Doom got--at least not yet. It is still beta.


----------



## BiggieShady (Aug 1, 2016)

FordGT90Concept said:


> It doesn't have the massive boost on GCN cards that Doom got--at least not yet. It is still beta.


Hm, getting closer ... too bad they can't profit from cpu bound scenarios since there are none


----------



## arbiter (Aug 8, 2016)

Since this isn't in this thread yet, I'll just add it:


----------



## bug (Aug 8, 2016)

arbiter said:


> Since this isn't in this thread yet, I'll just add it:


Yeah, AMD's OpenGL implementation has sucked for ages. It's one of the reasons I've stuck with Nvidia.


----------



## BiggieShady (Aug 8, 2016)

arbiter said:


> Since this isn't in this thread yet, I'll just add it:


Wait a minute ... is this just Doom-specific, or is it like this in general? ... Are you telling me AMD's Vulkan implementation requires a beefy CPU to really shine? ... Talk about negating all the benefits of Vulkan's low CPU overhead.


----------



## arbiter (Aug 8, 2016)

BiggieShady said:


> Wait a minute ... is this just Doom-specific, or is it like this in general? ... Are you telling me AMD's Vulkan implementation requires a beefy CPU to really shine? ... Talk about negating all the benefits of Vulkan's low CPU overhead.


It's something to remember, and it definitely needs more testing in the future with more games, to see whether it's isolated to Doom or could be a recurring issue in other titles. I can't remember where I read it since it was years ago, but there was a story about AMD GPUs/drivers running a higher CPU load in the same game than an Nvidia card did, which could explain it if that issue turns out to be real.

links to couple articles they had:
http://www.hardwareunboxed.com/gtx-1060-vs-rx-480-in-6-year-old-amd-and-intel-computers/
http://www.hardwareunboxed.com/amd-vs-nvidia-low-level-api-performance-what-exactly-is-going-on/


----------



## evernessince (Aug 9, 2016)

BiggieShady said:


> Wait a minute ... is this just Doom-specific, or is it like this in general? ... Are you telling me AMD's Vulkan implementation requires a beefy CPU to really shine? ... Talk about negating all the benefits of Vulkan's low CPU overhead.



The reason you are seeing the delta between the brand-new Intel CPU and the ancient one is that the more frames your GPU outputs, the more CPU power is required. This is why, if you haven't noticed, benchmarkers use the best CPU possible; if they didn't, the frame rate would cap.

All we can draw from that benchmark is that it's pretty amazing that an ancient CPU can even run the game over 60 FPS. You could drop $50 on a G3258 and it would easily double the performance. If you don't have at least that and are buying an RX 480, something is wrong.


----------



## FordGT90Concept (Aug 9, 2016)

I think the clock speed matters more than the processor: 2.67/3.2 GHz versus 4.5 GHz. Not only did they use the highest-clocked Intel processor available, it's overclocked too (4.0 GHz stock). The two combined make the old processors look worse on GCN than they really are. The fact that every test was over 60 FPS says it all.


----------



## bug (Aug 9, 2016)

evernessince said:


> The reason you are seeing the delta between the brand-new Intel CPU and the ancient one is that the more frames your GPU outputs, the more CPU power is required. This is why, if you haven't noticed, benchmarkers use the best CPU possible; if they didn't, the frame rate would cap.



That is exactly NOT the case.
Lowering CPU overhead is meant to let weaker CPUs push more FPS. Too early to tell just by looking at one title, though.


----------



## BiggieShady (Aug 9, 2016)

evernessince said:


> ... This is why ...


Dude, look at the graph ... it also shows the same game and the same CPU setups with a GTX 1060, without the FPS drop. 


arbiter said:


> links to couple articles they had ...


As far as I can see, this is specific to a couple of games ... and also specific to the new APIs ... and the new APIs are much more low-level, so it's all understandable. But Bethesda/id did both the AMD and nVidia Vulkan renderer paths for the idTech engine, so I'd expect them to want a smaller FPS drop on older CPUs with an AMD GPU. Also, GCN is in all the consoles, so I'd expect them to optimize for it. It almost looks like whatever they did in the code that benefited consoles (compiling with optimizations for Jaguar cores and a more heterogeneous environment) hurts performance on CPUs with older cores and older PCIe controllers, but I'm just guessing. It may all be in the driver, too. With APIs this low-level, you never know.


----------



## bug (Aug 9, 2016)

BiggieShady said:


> ... It may all be in the driver, also. With apis this low, you never know.



Unlikely. Lower-level APIs mean drivers become thinner: the driver does less work, and the application has to do the heavy lifting now.
This is why I will not draw any conclusions based on one title. Where we had AMD and Nvidia doing the optimizations until now, we now (potentially) have every single developer to account for. In theory, everyone now only has to optimize for the API (kind of like coding for HTML5 rather than for a specific browser), but sadly there will always be calls that work better on one piece of hardware than on the next. To be honest, it's not even clear that smaller developers are moving to Vulkan/DX12 at all. Interesting times ahead, though.
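One way to picture "thinner drivers" is a toy cost model in which per-call validation moves out of the driver and into a command buffer the application records once (all cost numbers below are invented for illustration):

```python
VALIDATE = 5  # hypothetical CPU cost of validating pipeline state
SUBMIT = 1    # hypothetical CPU cost of handing work to the GPU

def thick_driver_cpu(n_draws: int) -> int:
    # High-level API: the driver validates state on every single call.
    return n_draws * (VALIDATE + SUBMIT)

def thin_driver_cpu(n_draws: int) -> int:
    # Low-level API: the app pre-records and validates a command buffer
    # once; the per-draw work at submit time is cheap.
    return VALIDATE + n_draws * SUBMIT

print(thick_driver_cpu(1000))  # 6000
print(thin_driver_cpu(1000))   # 1005
```

The flip side is exactly the point above: the validation and scheduling work doesn't vanish, it moves into the application, so a sloppy engine can squander what the API saved.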


----------



## jabbadap (Aug 9, 2016)

arbiter said:


> It's something to remember, and it definitely needs more testing in the future with more games, to see whether it's isolated to Doom or could be a recurring issue in other titles. I can't remember where I read it since it was years ago, but there was a story about AMD GPUs/drivers running a higher CPU load in the same game than an Nvidia card did, which could explain it if that issue turns out to be real.
> 
> links to couple articles they had:
> http://www.hardwareunboxed.com/gtx-1060-vs-rx-480-in-6-year-old-amd-and-intel-computers/
> http://www.hardwareunboxed.com/amd-vs-nvidia-low-level-api-performance-what-exactly-is-going-on/



Well, read this too:
http://www.hardwareunboxed.com/gtx-1060-vs-rx-480-fx-showdown/


Spoiler: Doom on FX 8350 (benchmark chart)
The results are quite schizophrenic: at 1080p with the FX 8350, the RX 480 wins on OGL but loses to the GTX 1060 on Vulkan, while at 1440p it's vice versa. I would say the RX 480 is CPU-limited at 1080p and GPU-limited at 1440p on Vulkan, while the GTX 1060 is GPU-limited at both resolutions. On OGL the GTX 1060 is CPU-limited at 1080p but GPU-limited at 1440p, and AMD is GPU-limited at both resolutions.
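That CPU-limited/GPU-limited reading maps onto a simple bottleneck model: if the CPU and GPU work on successive frames in parallel, the slower stage sets the frame rate (the millisecond figures below are hypothetical, not the FX 8350's real numbers):

```python
def fps(cpu_ms: float, gpu_ms: float) -> float:
    # With frames pipelined across CPU and GPU, throughput is bounded
    # by whichever stage takes longer per frame.
    return 1000.0 / max(cpu_ms, gpu_ms)

# 1080p: the GPU finishes frames faster than the CPU can feed it.
print(fps(cpu_ms=12.0, gpu_ms=9.0))   # ~83.3 -> CPU-limited
# 1440p: GPU work grows with resolution and becomes the bottleneck.
print(fps(cpu_ms=12.0, gpu_ms=16.0))  # 62.5 -> GPU-limited
```

Raising the resolution only increases `gpu_ms`, which is why a card can flip from CPU-limited at 1080p to GPU-limited at 1440p on the same CPU.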


----------



## BiggieShady (Aug 9, 2016)

bug said:


> Unlikely. Lower level APIs mean drivers become thinner. The driver does less work, the application has to do the heavy lifting now.


The GTX 1060 loses 4 fps moving from Skylake to Bulldozer ... and the RX 480 loses 30 fps. Relatively speaking, it's obvious something CPU-heavy is happening on the Radeon code path, be it in the game or in the driver ... you shouldn't rule out the drivers yet, because it's still early enough for non-optimal critical code segments to exist. I mean, drivers are thinner and the game code has more control over the GPU, but it's not as if drivers do absolutely nothing; in fact, since less time per frame is spent running driver code overall, any sub-optimal work going on there has a bigger relative effect in CPU-bound scenarios once the game code is properly optimized. The 480 chews through a single 1080p frame incredibly fast, and Doom is well optimized ...


----------



## EarthDog (Aug 9, 2016)

Or the CPU is just so slow comparatively...


----------



## arbiter (Aug 9, 2016)

evernessince said:


> The reason you are seeing the delta between the brand-new Intel CPU and the ancient one is that the more frames your GPU outputs, the more CPU power is required. This is why, if you haven't noticed, benchmarkers use the best CPU possible; if they didn't, the frame rate would cap.
> 
> All we can draw from that benchmark is that it's pretty amazing that an ancient CPU can even run the game over 60 FPS. You could drop $50 on a G3258 and it would easily double the performance. If you don't have at least that and are buying an RX 480, something is wrong.


If, as you say, more frames from the GPU required more CPU power, the FPS of the 1060 would have dropped like the 480's, which wasn't the case. I think as we move further into DX12 we'll have to start testing more realistic rig options based on the card: most people buying a card like the 480/1060 don't have a 6/8-core high-end Intel chip, it's more likely a mid- to low-range CPU where things are a bit more limited. A four-year-old CPU isn't really useless yet.


----------



## BiggieShady (Aug 10, 2016)

EarthDog said:


> Or the CPU is just so slow comparatively...


What's indicative is the i5 750 versus the Phenom X4 955: at those clocks they have similar single-threaded performance, and the main differences are that the i5 has an integrated PCIe controller and a better memory controller. The Radeon loses 10 frames there.
Maybe the devs have some leftover code from the consoles' heterogeneous-memory optimizations in the GCN code path; that would be ironic ... we have x86 everywhere, and instead of maintaining only x86+GCN and x86+CUDA code paths, you still have to maintain every code path, even the XB1's separately, to benefit from that weird extra on-chip cache.


----------

