
The secret of Doom (2016) performance on AMD GPUs

Artem S. Tashkinov (joined Apr 18, 2013):
It's simple: the game was developed specifically for GCN as reported by a leading developer.

[Attached image: doom_vs_nvidia.png]

Watch the presentation from SIGGRAPH2016.
 
It's simple: the game was developed specifically for GCN as reported by a leading developer.


Watch the presentation from SIGGRAPH2016.
In OpenGL, where it was played from launch until recently, nVidia was much better. So, no matter how the game was developed, awful AMD OpenGL drivers couldn't put their GPUs to work properly. When Vulkan came around, AMD GPUs were fully used and took over, since Vulkan is Mantle's successor. To sum up, Vulkan is made to use AMD GPUs perfectly and use nVidia ones a bit better than OpenGL does. Where is the news there?
 
Just a hint: with OpenGL, NVIDIA also has its own extensions and no one seems to have a problem with that. But oh noes, Vulkan uses AMD-specific goodies. Uh?
 

Not that I'm trying to stab at nVidia, but I find it very interesting that the engine starts rendering the next frame before the current frame has been post-processed by any filters when utilizing async compute. Doing that will always favor GPUs with more CUs/SMs when the support is there.
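The scheduling idea being described can be sketched roughly like this. This is a hypothetical Python illustration of the overlap, not real GPU code; the function names and timings are made up, and a thread pool stands in for a separate compute queue:

```python
# Sketch: overlap the next frame's main rendering with the previous
# frame's post-processing, the way an async compute queue lets
# independent work run in parallel instead of serializing per frame.
from concurrent.futures import ThreadPoolExecutor
import time

def render_geometry(frame):
    time.sleep(0.02)              # stand-in for the graphics-queue work
    return f"frame{frame}:geometry"

def post_process(image):
    time.sleep(0.01)              # stand-in for compute-queue filters
    return image + "+postfx"

def run_frames(n):
    results = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending_postfx = None
        for frame in range(n):
            geo = render_geometry(frame)          # "graphics queue"
            if pending_postfx is not None:
                results.append(pending_postfx.result())
            # Kick off post-processing asynchronously; the loop then
            # immediately starts the next frame's geometry pass, so the
            # two overlap instead of running back to back.
            pending_postfx = pool.submit(post_process, geo)
        results.append(pending_postfx.result())
    return results

print(run_frames(3))
```

The more spare compute units a GPU has while the geometry pass runs, the more of that post-processing it can absorb "for free", which is the point being made about CU/SM counts.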
 
Current and future consoles are GCN based.
 
It's simple: the game was developed specifically for GCN as reported by a leading developer.


Watch the presentation from SIGGRAPH2016.

I watched the whole damn thing. It absolutely was not written for AMD GCN. It was written for all vendors, but it can use extensions much more effectively to deliver better performance from AMD's GCN cards. It's very misleading to claim that Vulkan (in Doom) was written for AMD cards. It simply uses their hardware better for what they wanted to do with the graphics engine.

No, the simple truth is that the Vulkan path in Doom is better used by GCN. The coding id did for the game means they can use GCN in a far more effective way than OpenGL can manage on its own. It's as @HD64G says: AMD is piss-poor at OpenGL compared to Nvidia, but the GCN hardware just runs like butter on Vulkan. And I mean butter - as in smooth.

Current and future consoles are GCN based.

The new Nintendo thingy-ma-jig is Tegra.
 
....aaannd Nvidia PR be like "this game is not reliable benchmark in DX12 mode, please do not include this game in your GPU review"
 
Heh, NVIDIA again "working" on driver support. Just like for async on GTX 900 series, right? :/
 
Just a hint: with OpenGL, NVIDIA also has its own extensions and no one seems to have a problem with that. But oh noes, Vulkan uses AMD-specific goodies. Uh?

What OpenGL games? Aside from id Software, only indie devs use OpenGL. And indie games run so fast that no one cares.

It was written for all vendors, but it can use extensions much more effectively to deliver better performance from AMD's GCN cards.

No, the simple truth is that the Vulkan path in Doom is better used by GCN.

You contradict my statement but then prove the opposite. Twice. Great.

I guess you're the kind of person who hates every PhysX/GameWorks-based game. When something is optimized for NVIDIA, NVIDIA are bloody cheaters.

When something is optimized from the get-go for GCN, then 1) NVIDIA cannot run it at all, 2) NVIDIA cannot develop good GPUs, 3) NVIDIA sucks this and that. Double standards all the f*cking time.

Meanwhile no AMD fanatic is concerned that AMD's OpenGL drivers generally suck.
 
What OpenGL games? Aside from id Software, only indie devs use OpenGL. And indie games run so fast that no one cares.
Any game that runs on OS X has OpenGL support. That includes Blizzard, which I would hardly call indie.
 
Is Vulkan a closed system like PhysX and HairWorks? What are the barriers to NVIDIA using this game?

Is the code badly optimized? That is, does it force a card to render things that barely improve the game visually, yet are a severe detriment to performance on NVIDIA cards?

I guess you're the kind of person who hates every PhysX/GameWorks-based game. When something is optimized for NVIDIA, NVIDIA are bloody cheaters.

When something is optimized from the get-go for GCN, then 1) NVIDIA cannot run it at all, 2) NVIDIA cannot develop good GPUs, 3) NVIDIA sucks this and that. Double standards all the f*cking time.

Meanwhile no AMD fanatic is concerned that AMD's OpenGL drivers generally suck.

Is that a general rant or did someone say that in this thread?
 
Any game that runs on OS X has OpenGL support. That does include Blizzard which I would hardly call Indie.

Most, if not all, macOS games that were originally released for Windows run via a D3D-to-OpenGL translator of some sort.

HairWorks

HairWorks uses standard D3D APIs - nothing NVIDIA-specific. This argument is invalid. Now that AMD has reimplemented tessellation in Polaris and The Witcher runs fine on AMD even with HairWorks on, everyone has magically forgotten this fact.

And if I'm not mistaken, W1zzard doesn't use PhysX-enabled games in his test bench (or disables PhysX completely). So the PhysX argument is equally invalid.
 
Most, if not all, macOS games that were originally released for Windows run via a D3D-to-OpenGL translator of some sort.



HairWorks uses standard D3D APIs - nothing NVIDIA-specific. This argument is invalid. Now that AMD has reimplemented tessellation in Polaris and The Witcher runs fine on AMD even with HairWorks on, everyone has magically forgotten this fact.

And if I'm not mistaken, W1zzard doesn't use PhysX-enabled games in his test bench (or disables PhysX completely). So the PhysX argument is equally invalid.

I wasn't really making an argument. I'm not that well versed in the NVIDIA vs. AMD battle, but isn't it generally about one side doing something to negatively affect performance on their competitor's cards?
 
I wasn't really making an argument. I'm not that well versed in the NVIDIA vs. AMD battle, but isn't it generally about one side doing something to negatively affect performance on their competitor's cards?

This thread was created to calm the WCCFtech fanatics who have found their way to TPU. Every Pascal/Polaris review on TPU has at least three very vocal people demanding that Doom be included in the test suite, or else "this review is completely flawed and I don't trust TPU any longer".
 
There is no such thing as "written for GCN". Vulkan and DX12 are standardized APIs, and AMD is simply better at them. It probably has something to do with the fact that this is their fourth generation of async-capable GPUs. It's funny how everyone is quick to blame AMD for playing dirty when it dominates NVIDIA in Vulkan, but when NVIDIA dominated AMD in OpenGL, it was just a "fact" and no one ever argued it...
 
Is Vulkan a closed system like PhysX and HairWorks? What are the barriers to NVIDIA using this game?

It's not closed to registered hardware vendors and contributors. Vulkan is like OpenGL: any registered IHV (Independent Hardware Vendor) can provide specific extensions for its hardware.

The problem for NVIDIA is that, because Vulkan is based on Mantle (developed by AMD), AMD was much better prepared with its hardware (particularly async compute) and vendor-specific extensions, so developers can use its features from the get-go without waiting for a driver or extension update.
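A hedged sketch of how an engine might act on this at runtime: the extension strings below are real Vulkan device extensions (queried via vkEnumerateDeviceExtensionProperties in a real engine), but the selection logic and function name here are purely illustrative:

```python
# Sketch (not real engine code): pick the most specialized render path
# based on which vendor-specific extensions the device advertises.
# The extension strings are real Vulkan extensions; everything else
# here is a made-up illustration of the decision.

def pick_render_path(device_extensions):
    """Return the most specialized shader path the device supports."""
    exts = set(device_extensions)
    wanted_amd = {"VK_AMD_gcn_shader", "VK_AMD_rasterization_order"}
    if wanted_amd <= exts:
        return "gcn-intrinsics"      # use AMD GCN shader intrinsics
    if "VK_NV_glsl_shader" in exts:
        return "nv-glsl"             # NVIDIA's GLSL pass-through path
    return "generic-spirv"           # portable fallback for everyone

print(pick_render_path(["VK_AMD_gcn_shader", "VK_AMD_rasterization_order"]))
```

The point is that an extension-based path only helps a vendor whose hardware and driver already expose the extension; everyone else falls through to the portable path.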
 
You contradict my statement but then prove the opposite. Twice. Great.

I guess you're the kind of person who hates every PhysX/GameWorks-based game. When something is optimized for NVIDIA, NVIDIA are bloody cheaters.

When something is optimized from the get-go for GCN, then 1) NVIDIA cannot run it at all, 2) NVIDIA cannot develop good GPUs, 3) NVIDIA sucks this and that. Double standards all the f*cking time.

Meanwhile no AMD fanatic is concerned that AMD's OpenGL drivers generally suck.

Lol. Calm it, antsy pants.

My recent run is a Titan (original), two 780 Ti Classifieds, and now a Kingpin 980 Ti with a modded Bitspower water block. I'm a goddamned Nvidia funder (that assemblage is worth over £3000).

You're still incorrect about Vulkan in Doom being written for AMD. The code was created, and when implementing it for AMD it was easy to apply the extensions that utilise the GCN hardware. Nvidia doesn't have the same hardware to better utilise low-level APIs. The id Software guy said they like to use asynchronous compute and will use it more in future - not because it favours AMD, but because it's easier and better for them.
Nvidia handles everything equally now; AMD doesn't. All low-level APIs have an inherent bias towards specialist hardware, currently found in GCN. Well done AMD for being there.
Still, the sheer power of Pascal means the GTX 1080, with its lower transistor count, still reigns supreme in OpenGL.

No, I'm no fanboy; I attack and defend both parties. Ironically, both sides call me a fanboy, so I must be neutral by matter of conflict resolution.
 
That's like saying DX12 is more prepared for AMD because the core idea behind it was also Mantle. That's nonsense. AMD is simply better because they've been preparing for this moment for years, and they are harvesting the fruits of their long and, back then, greatly underappreciated hard work.

@the54thvoid
When you piss both sides off, you're doing something right.
 
A little piece of trivia: the first vendor-specific extension for Vulkan was released for Nvidia-only hardware: http://www.phoronix.com/scan.php?page=news_item&px=Vulkan-1.0.5-Released It's only there to make porting easier.

AMD just took the technological lead, like with AMD64 (x64). There is no favoritism from the developers of Doom.
And yes, the OpenGL drivers for AMD cards suck. Just try Dolphin Emulator, for example, and compare the performance of AMD and Nvidia cards in OpenGL and Direct3D 11; it's a good test because the Dolphin developers used almost all vendor-specific extensions.
 
Most, if not all, macOS games that were originally released for Windows run via a D3D-to-OpenGL translator of some sort.
That's quite a generalization, assuming they all do some form of translation. :confused:
 
A little piece of trivia: the first vendor-specific extension for Vulkan was released for Nvidia-only hardware: http://www.phoronix.com/scan.php?page=news_item&px=Vulkan-1.0.5-Released It's only there to make porting easier.

AMD just took the technological lead, like with AMD64 (x64). There is no favoritism from the developers of Doom.
And yes, the OpenGL drivers for AMD cards suck. Just try Dolphin Emulator, for example, and compare the performance of AMD and Nvidia cards in OpenGL and Direct3D 11; it's a good test because the Dolphin developers used almost all vendor-specific extensions.

At this point no one even argues about OpenGL on Radeons. But to be realistic, OpenGL was never actually a real issue for Radeon either. I've played UT99, Deus Ex (both with a D3D10-equivalent custom OpenGL renderer), Q3A, Doom 3, Quake 4 and lastly Rage on Radeon graphics cards. Sure, old games by today's standards, but back then I was playing them on a Radeon 9600 Pro, later an X1950 Pro, and several variants of the HD 4000, HD 5000, HD 6000 and HD 7000 series. Have I ever felt like "omg, this OpenGL really sucks on Radeon"? Nope, never. Maybe it was worse if you look at the fancy graphs, but in real-world conditions there was no difference worth mentioning.
 
At this point no one even argues about OpenGL on Radeons. But to be realistic, OpenGL was never actually a real issue for Radeon either. I've played UT99, Deus Ex (both with a D3D10-equivalent custom OpenGL renderer), Q3A, Doom 3, Quake 4 and lastly Rage on Radeon graphics cards. Sure, old games by today's standards, but back then I was playing them on a Radeon 9600 Pro, later an X1950 Pro, and several variants of the HD 4000, HD 5000, HD 6000 and HD 7000 series. Have I ever felt like "omg, this OpenGL really sucks on Radeon"? Nope, never. Maybe it was worse if you look at the fancy graphs, but in real-world conditions there was no difference worth mentioning.

In well-known games it just works, but if you test specific advertised extensions (ones the driver reports the hardware supports), they just don't work, crash, etc.
Here are some examples: https://es.dolphin-emu.org/blog/2013/09/26/dolphin-emulator-and-opengl-drivers-hall-fameshame/
 
I wonder what the fanboys will say when Nvidia shifts to an ALU-heavy architecture in Volta. Welp, they'll probably switch their stance immediately and praise DX12/Vulkan.

In a fanboy's world, if Nvidia is doing badly, it's somehow AMD's fault.
 
It's an indie emulator. Do you seriously believe AMD will invest driver-team time in it, compared to games that sell millions of copies in a market that heavily depends on a functional ecosystem? I highly doubt it. So if it's broken, that's just how it is. I'm not making excuses for AMD; I'm just being realistic. It's the same reason AMD never bothered to invest a lot of time and resources into fixing OpenGL: they knew what they were preparing and working on - Mantle, the predecessor of Vulkan and DX12. Considering they are a for-profit company, they invest where the most profit is expected. OpenGL apparently wasn't a big enough issue to be worth bothering with. If we're honest, the whole thing has been greatly overblown. Sure, it might perform worse than NVIDIA, but there is always something where one is a bit better than the other.
 