# Why AMD will perform better than NVIDIA in DirectX 12



## nem (Aug 24, 2015)

OK people, what do you think about this great explanation of why AMD should be better than NVIDIA under DirectX 12 thanks to its better support for *asynchronous shaders*? To be clear, this is not my argument, but it seems well argued.

First, the source: http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/400#post_24321843

Well, I figured I'd create an account in order to explain what you're all seeing in the Ashes of the Singularity DX12 benchmarks. I won't divulge too much of my background, but suffice it to say that I'm an old veteran who used to go by the handle ElMoIsEviL.


First off, nVIDIA is posting its true DirectX 12 performance figures in these tests. Ashes of the Singularity is all about parallelism, and although Maxwell 2 does better in that area than previous nVIDIA architectures, it is still inferior in this department compared to AMD's GCN 1.1/1.2 architectures. Here's why...


Maxwell's Asynchronous Thread Warp can queue up 31 compute tasks and 1 graphics task. Now compare this with AMD's GCN 1.1/1.2, which is composed of 8 Asynchronous Compute Engines, each able to queue 8 compute tasks, for a total of 64, coupled with 1 graphics task handled by the Graphics Command Processor. See below:
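For what it's worth, the queue arithmetic in the paragraph above works out like this (a trivial sketch; the counts are the post's own claims, not verified hardware specs):

```python
# Queue depths as claimed in the post above -- illustrative only,
# not taken from vendor documentation.
maxwell2_compute_queues = 31   # claimed Maxwell 2 compute queue slots
maxwell2_graphics_queues = 1   # single graphics queue

gcn_ace_count = 8              # claimed ACE count on GCN 1.1/1.2
gcn_queues_per_ace = 8         # claimed queue slots per ACE
gcn_graphics_queues = 1        # Graphics Command Processor
gcn_compute_queues = gcn_ace_count * gcn_queues_per_ace

print(f"Maxwell 2:   {maxwell2_compute_queues} compute + {maxwell2_graphics_queues} graphics")
print(f"GCN 1.1/1.2: {gcn_compute_queues} compute + {gcn_graphics_queues} graphics")
```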








Each ACE can also apply certain post-processing effects without incurring much of a performance penalty. This feature is heavily used for lighting in Ashes of the Singularity. Think of all the simultaneous light sources firing off as each unit in the game shoots, or the various explosions which ensue.







This means that AMD's GCN 1.1/1.2 is better suited to handling the increase in draw calls now being made by multi-core CPUs under DirectX 12.
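To see why multi-core submission matters, here is a minimal, made-up cost model (it assumes a fixed CPU cost per draw call and that DX12-style command recording scales across threads; both the per-call cost and the thread count are invented for illustration):

```python
# Toy model of draw-call submission throughput. DX11-style: one thread
# submits every call; DX12-style: calls are recorded on several threads
# in parallel. All numbers are made up.

def calls_per_second(cost_us_per_call: float, threads: int) -> float:
    """Draw calls/s if each call costs `cost_us_per_call` microseconds of
    CPU time and recording scales linearly across `threads` threads."""
    return threads * 1_000_000 / cost_us_per_call

serial = calls_per_second(cost_us_per_call=2.0, threads=1)    # DX11-like
parallel = calls_per_second(cost_us_per_call=2.0, threads=6)  # DX12-like
print(f"serial:   {serial:,.0f} calls/s")
print(f"parallel: {parallel:,.0f} calls/s")
```

Under this toy model, going from one submission thread to six raises the draw-call ceiling six-fold, which is the kind of headroom the post attributes to DX12.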


Therefore, in titles which rely heavily on parallelism, likely most DirectX 12 titles, AMD's GCN 1.1/1.2 should do very well, provided they do not hit a geometry or rasterizer operator bottleneck before nVIDIA hits its draw call/parallelism bottleneck. The picture below highlights the draw call/parallelism superiority of GCN 1.1/1.2 over Maxwell 2:







More efficient queueing of workloads, through better thread parallelism, also enables the R9 290X to come closer to its theoretical compute figures, which happen to be just shy of those of the GTX 980 Ti (5.8 TFLOPS vs 6.1 TFLOPS respectively), as seen below:
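Those theoretical figures follow from the usual formula: shader count × 2 FLOPs per cycle (one fused multiply-add) × clock speed. A quick sketch using commonly cited shader counts and clocks (treat the exact clocks as assumptions; actual boost behaviour varies by card):

```python
# Theoretical FP32 throughput = shaders * 2 FLOPs/cycle (FMA) * clock.
# Shader counts and clocks are commonly cited specs, used here as
# assumptions for illustration.

def tflops(shaders: int, clock_ghz: float) -> float:
    """Peak single-precision TFLOPS for a given shader count and clock."""
    return shaders * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

r9_290x = tflops(2816, 1.030)    # R9 290X at its "uber" clock
gtx_980ti = tflops(2816, 1.075)  # GTX 980 Ti at a typical boost clock
print(f"R9 290X:    {r9_290x:.2f} TFLOPS")
print(f"GTX 980 Ti: {gtx_980ti:.2f} TFLOPS")
```

Notably, both cards happen to carry 2816 shaders, so the small theoretical gap is almost entirely clock speed.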







What you will also notice is that Ashes of the Singularity is quite hard on the rasterizer operators, highlighting rather peculiar behavior: an R9 290X, with its 64 ROPs, ends up performing nearly the same as a Fury X, also with 64 ROPs. A great way of picturing this in action is the graph below (courtesy of Beyond3D):







As for the folks claiming a conspiracy theory: not in the least. The reason AMD's DX11 performance is so poor under Ashes of the Singularity is that AMD did literally zero optimization for that path. AMD is clearly looking to sell asynchronous shading as a feature to developers because its architecture is well suited for the task. It doesn't hurt that it also costs less in terms of driver research and development: asynchronous shading allows GCN to hit near full efficiency without requiring any driver work whatsoever.


nVIDIA, on the other hand, does much better at serial scheduling of workloads (consider that anything prior to Maxwell 2 is limited to serial rather than parallel scheduling). DirectX 11 is suited to serial scheduling, so nVIDIA naturally has an advantage under DirectX 11. In this graph, provided by AnandTech, you have the correct figures for nVIDIA's architectures (from Kepler to Maxwell 2), though the figures for GCN are incorrect (they did not multiply the number of Asynchronous Compute Engines by 8):







People are wondering why Nvidia is doing a bit better in DX11 than DX12. That's because Nvidia optimized their DX11 path in their drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations, because the game engine speaks almost directly to the graphics hardware, so none were made. Nvidia is at the mercy of the programmers' talents as well as their own Maxwell architecture's thread parallelism performance under DX12. The developers programmed for thread parallelism in Ashes of the Singularity in order to better draw all those objects on the screen. Therefore what we're seeing in the Nvidia numbers is the Nvidia draw call bottleneck showing up under DX12.

Nvidia works around this in DX11 with its own optimizations, by prioritizing workloads and replacing shaders. Yes, the nVIDIA driver contains a compiler which re-compiles and replaces shaders that are not fine-tuned to their architecture, on a per-game basis. nVIDIA's driver is also multi-threaded, making use of idling CPU cores to recompile/replace shaders. The work nVIDIA does in software under DX11 is the work AMD does in hardware under DX12 with their Asynchronous Compute Engines.
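One way to picture the serial-versus-asynchronous distinction described above is a toy frame timeline: graphics work with idle gaps (e.g. waiting on fixed-function stages), plus independent compute jobs that can either run after the graphics work (serial) or inside the gaps (async). All durations here are arbitrary, invented time units:

```python
# Toy timeline: a frame has graphics work with idle gaps, plus
# independent compute jobs (lighting, post-processing). A serial
# scheduler runs compute after graphics; an async scheduler fills
# the gaps first. Durations are arbitrary made-up units.

graphics = [("draw", 4), ("gap", 2), ("draw", 3), ("gap", 1), ("draw", 2)]
compute_jobs = [2, 1, 2]  # independent compute work

def serial_frame_time() -> int:
    # Compute runs only after all graphics work has finished.
    return sum(t for _, t in graphics) + sum(compute_jobs)

def async_frame_time() -> int:
    # Compute fills the idle gaps; only the overflow extends the frame.
    gap_budget = sum(t for kind, t in graphics if kind == "gap")
    overflow = max(0, sum(compute_jobs) - gap_budget)
    return sum(t for _, t in graphics) + overflow

print("serial:", serial_frame_time())  # 17 units
print("async: ", async_frame_time())   # 14 units
```

In this sketch the async schedule finishes the frame sooner purely by filling idle time; nothing runs faster, the work just overlaps.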


But what about the poor AMD DX11 performance? Simple. AMD's GCN 1.1/1.2 architecture is geared towards parallelism. It requires the CPU to feed the graphics card work. This creates a CPU bottleneck on AMD hardware under DX11 at low resolutions (say 1080p, and even 1600p for the Fury X), as DX11 is limited to 1-2 cores for the graphics pipeline (which also needs to take care of AI, physics, etc.). Replacing or re-compiling shaders is not a solution for GCN 1.1/1.2, because AMD's Asynchronous Compute Engines are built to break down complex workloads into smaller, easier-to-execute workloads. The only way around this issue, if you want to maximize the use of all available compute resources under GCN 1.1/1.2, is to feed the GPU in parallel... in come Mantle, Vulkan, and DirectX 12.
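The CPU-bottleneck argument above boils down to a min() of two rates: the GPU's achievable frame rate and the rate at which the CPU can feed it draw calls. A sketch with invented numbers (neither the call costs nor the scene size come from any real benchmark):

```python
# Frame rate is capped by whichever is slower: the GPU itself, or the
# CPU's ability to submit draw calls. All numbers are invented.

def fps(gpu_fps: float, cpu_calls_per_s: float, calls_per_frame: int) -> float:
    """Effective frame rate under a simple CPU-vs-GPU bound."""
    cpu_fps = cpu_calls_per_s / calls_per_frame
    return min(gpu_fps, cpu_fps)

# 1080p scene: GPU is fast, but single-threaded DX11-style submission caps it.
print(fps(gpu_fps=140, cpu_calls_per_s=500_000, calls_per_frame=10_000))
# Same scene with multi-threaded DX12-style submission: the CPU cap lifts.
print(fps(gpu_fps=140, cpu_calls_per_s=3_000_000, calls_per_frame=10_000))
```

At higher resolutions the GPU-side number drops and the CPU cap stops mattering, which is consistent with the 1080p-specific complaint below.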


People wondering why the Fury X did so poorly at 1080p in DirectX 11 titles? That's your answer.



A video which talks about Ashes of the Singularity in depth:









PS: Don't count on better DirectX 12 drivers from nVIDIA. DirectX 12 is closer to the metal, and it's all on the developer to make efficient use of both nVIDIA's and AMD's architectures.


----------



## phanbuey (Aug 24, 2015)

interesting read... thx!


----------



## darkangel0504 (Aug 24, 2015)

How about GCN 1.0?


----------



## the54thvoid (Aug 24, 2015)

The title should perhaps be, "Why DX12 favours GCN 1.1 more than DX11 did".

It would be 'misguided' to assume DX12 will hand AMD the lead. Given Maxwell 2 is supposed to perform better under DX12 than DX11 (just not by as much as AMD), the game code can swing it, bare metal or not. The games in development for DX12 still require coding (as the reviews mention). Now, a game that pushes the limit on ROPs will hamstring AMD, pushing 290X cards to the same level as Fury cards. Similarly, if the ROPs aren't flooded, AMD will get the headroom over Nvidia.
I'm not a fan of 'Gameworks', but it should be considered that a game's code can easily be written to hamper a competitor's performance (intentionally or not).
Any game Nvidia gets involved with for DX12 simply needs to focus on Maxwell's DX12 hardware strengths.

As has been said before, this single benchmark is not a good indicator of what will come.
And again, DX12 needs some time to come to fruition in a gaming sense. AMD on an even playing field have the edge, mostly, but let's not forget, it's not an even playing field.
I know I'll get called a fanboy by some folks, but I'm just shining a light into the dark here. AoS isn't brand agnostic. The company was the first to use Mantle, so it's naive to think this is an unbiased benchmarking opportunity. Again, if a game leans hard on those ROPs, AMD doesn't do so well.
When UE4, Unity and other DX12-relevant software appears, we can revisit the argument, but until then, one bench isn't enough.
AMD has excellent DX12 hardware but the game code may not always favour it.  

FTR, I have no problem buying AMD cards.  I waited for Fury X to release but decided against it. Maybe it's too ahead of its time.  I'll buy one next year maybe!


----------



## Octopuss (Aug 24, 2015)

Powerpoint slides don't mean jack shit. Rumours and hype, on the other hand...


----------



## the54thvoid (Aug 24, 2015)

Octopuss said:


> Powerpoint slides don't mean jack shit. Rumours and hype, on the other hand...



That's not the point of the OP. Numerous independent sites show AoS doing very well on AMD GCN 1.1 hardware. It's clear to see.
The issue to me is that a game will use various DX12 elements, and the engine merely has to tip the balance one way or the other: ROPs versus pure draw calls, for example.
These slides are genuine, not rumour, but the expectation is perhaps being hyped too much, with (I believe) AoS being the absolute best-case scenario for AMD (not because of its Mantle pedigree, but because it was created by Stardock, who did the Mantle demo for AMD).


----------



## Dieinafire (Aug 24, 2015)

AMD fans really love to dream the pipe dream. How's DX12_1 on AMD cards coming along?


----------



## R-T-B (Aug 24, 2015)

Dieinafire said:


> Amd fans really like to dream the pipe dream. How's dx 12_1 on amd cards coming along?



How many DX12_1 games are there?

None?  Oh, ok there's at least a tech demo right?

No?  Ouch.


----------



## the54thvoid (Aug 24, 2015)

How about some moron (not meant for you, RTB!) not posting another reply with no substance? It would be nice if a constructive thread developed, with views about upcoming limitations or possibilities of the hardware.
I'd like to know, for example, which upcoming titles will be using DX12, which company 'helps out the developers', and what the release dates are.
BF SW, Fallout 4, The Division etc. are all AAA DX11 games. What's the first major title (I know about Fable) that will use DX12?


----------



## DinaAngel (Aug 24, 2015)

nem said:


> OK people, what do you think about this great explanation of why AMD should be better than NVIDIA under DirectX 12 thanks to its better support for *asynchronous shaders*? To be clear, this is not my argument, but it seems well argued.
> 
> First, the source: http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/400#post_24321843
> 
> ...


http://www.3dmark.com/3dm/8070955?
I got 17,501,040 draw calls with a Titan X.


----------



## Dieinafire (Aug 24, 2015)

R-T-B said:


> How many DX12_1 games are there?
> 
> None?  Oh, ok there's at least a tech demo right?
> 
> No?  Ouch.



You can say that about this entire subject, because the DX12 games aren't out yet.


----------



## arbiter (Aug 24, 2015)

DinaAngel said:


> http://www.3dmark.com/3dm/8070955?
> i got 17 501 040 drawcalls with titan x


http://www.3dmark.com/aot/50003
Almost 18 million on a GTX 980, using factory GPU clocks and +300 MHz on memory.

That graph he posted has been debunked as incorrect.
I will say this one time: just because that one game is better on AMD right now doesn't mean it will stay that way. It is one game, in an alpha build. It's also a game that has had Mantle since the start, so AMD did have a bit of a head start with the higher draw calls. So it's very possible that performance on Nvidia cards could improve over the coming months.


----------



## RejZoR (Aug 24, 2015)

I wonder how much NVIDIA has actually invested in DX12 on the software end (drivers), considering there aren't any DX12 games out and AMD has been bragging about a single DX12 game that has basically been developed with AMD on board since day one...

I mean, does anyone remember the massive DX11 jump NVIDIA made some time ago? It was quite significant, and it made them the leader in DX11. I'm not defending NVIDIA just because I own one now; I simply can't judge something based on one game and synthetic tests. We know how that went with the GeForce FX: great on paper and in a lot of synthetic tests, but rubbish in actual games... Or AMD, which looked great but then had huge problems with tessellation (which also makes me wonder why they used factor 32 instead of 64)...


----------



## R-T-B (Aug 24, 2015)

Dieinafire said:


> You can say that about this entire subject because the dx 12 games aren't out yet



Ashes of the Singularity, and a fair number of tech demos.


----------



## Hawkstream (Aug 24, 2015)

I don't see what the big deal is. Most of us are not going to want to use our current cards in a year anyway, regardless of whether it's NVIDIA or AMD. So in a year, when there are a lot more DX12 games out, if in fact AMD is faster, we will all just buy AMD then. I don't understand this brand loyalty stuff unless you are employed by one of the companies. NVIDIA is better NOW.


----------



## Dethroy (Aug 25, 2015)

Nvidia did indeed optimize for DX11, both software- and hardware-wise. But is that a bad thing? That optimization let Nvidia sell their units at a much higher margin than AMD while providing better perf/watt at the same time. And by the time DX12 sees wide adoption, Pascal will be out. So things have worked out rather nicely for Nvidia, I'd say.

*But* it can only be healthy for us gamers that AMD is able to regain lost ground thanks to DX12.


----------



## terroralpha (Sep 1, 2015)

oh yay. more charts, slides and graphs predicting nvidia's doom.

Even if all this were true, it wouldn't mean much to me. I game on a 4K TV where HDMI 2.0 is a must, which AMD for some reason decided to skip. The fabled DP to 4K 60 Hz adapters haven't materialized yet.

If this turns out to be true, my next cards will be Arctic Islands GPUs. But I've seen AMD over-promise and under-deliver enough times to be skeptical of their PowerPoints.


----------



## R-T-B (Sep 1, 2015)

> oh yay. more charts, slides and graphs predicting nvidia's doom.



More like calling them out on their typical crap.

I wouldn't really call it "predicting their doom."  No one said this is going to ground the company, no one here is that stupid (hopefully).


----------



## MxPhenom 216 (Sep 1, 2015)

Once we have games people actually want to play that are 100% DirectX 12, we will be onto completely new GPU generations from both AMD and Nvidia. All this garbage over a single game that has effectively blown up every geek forum into AMD vs NVIDIA jargon will be moot.



R-T-B said:


> More like calling them out on their typical crap.
> 
> I wouldn't really call it "predicting their doom."  No one said this is going to ground the company, no one here is that stupid (hopefully).



Well, there is one guy. I won't name names, but I'll give you a hint: starts with Sony.


----------



## Mussels (Sep 1, 2015)

Nvidia's current hardware focused on DX11.
AMD planned ahead (even with the console hardware) and aimed for what they wanted to become Mantle/DX12.
With *CURRENT HARDWARE*, AMD gets an advantage in DX12.

The technical reasons vary, with async shaders being a large part of it, but this weird black-and-white view of a complex issue is just bizarre... does AMD having a temporary lead make Nvidia cards' e-peens shrink or something?


----------



## the54thvoid (Sep 1, 2015)

Mussels said:


> Nvidias current hardware focused on DX11.
> AMD planned ahead (even with the console hardware) and aimed for what they wanted to become mantle/DX12.
> With *CURRENT HARDWARE* AMD gets an advantage in DX12
> 
> the technical reasons vary with asynch shaders being a large part of it, but this weird black and white view of a complex issue is just bizarre... does AMD having a temporary lead make Nvidia cards E-peens shrink or something?



That planning ahead (Hawaii) meant the cards that were firmly bound to DX11 failed to overturn Nvidia's GPU lead. That planning ahead has brought the company to its knees. I have no doubt at all that AoS is far superior on AMD hardware, but it almost seems like it's become a massive PR push for things that have no current relevance.
It does seem Nvidia have a significant potential disadvantage, and perhaps they are bluffing their way through on this gen, but I don't for a second think AMD have played the game well over the past two years.
Even now there is no reason why they can't allow AIBs to sell air-cooled Fury X cards, with more appeal than an AIO water cooler. AMD are still being arses. They may have planned ahead, but very little they have done is bringing financial success.
New management, a funding injection and broader R&D would benefit the company and the consumer far more.


----------



## Mussels (Sep 1, 2015)

the54thvoid said:


> That planning ahead (Hawaii) meant the cards that were firmly bound in DX11 failed to overturn Nvidia's GPU lead.  That planning ahead has brought the company to its knees.  I have no doubt at all AoS is far superior on AMD hardware but it almost seems like its become a massive PR push for things that have no current relevance.
> It does seem Nvidia have a significant potential disadvantage and perhaps they are bluffing their way through on this gen but I don't for a second think AMD have played the game well over the past two years.
> Even now there is no reason why they can't allow AIB's to sell air cooled Fury X cards, with more appeal than an AIO water cooler.  AMD are still being arses.  They may have planned ahead but very little they have done is bringing financial success.
> New management, a funding injection and broader R&D would benefit the company and the consumer far more.



Time will tell on that one, because they can sell old stock of older, tried-and-tested products for DX12 gaming, whereas Nvidia needs that R&D budget to catch up. AMD's profits could jump ahead because of this.


----------



## R-T-B (Sep 1, 2015)

MxPhenom 216 said:


> Well there is one guy, I won't name names, but ill give you a hint. Starts with Sony.



Ah, come on.  I'm not that mean.

...not like I disagree, but... 

... ok maybe I am.


----------



## the54thvoid (Sep 1, 2015)

Mussels said:


> time will tell on that one, because they can sell old stock of older, tried and tested products for DX12 gaming where Nvidia need that R&D budget to catch up. AMD's profits could jump ahead because of this.



In that position they'll have a harder time selling Fiji. If Hawaii performs as well as a 980 Ti in DX12, Fiji becomes a white elephant, bearing in mind that on some of the AoS benches, Hawaii matches Fiji.

But given Nvidia's history of product development deviations, if DX12 does hurt them, expect an earlier version of Pascal on HBM1 if HBM2 looks too distant.
Nvidia rarely manage to stick to any projected plan (and usually it's because of problems).
But hey, if my card falls behind in the next year, I'll happily trade out for a 390X, but then I'd be humped on DX11 games. It is true that time will tell, but frankly it is too early.
Nvidia zealots come out with conspiracy nonsense and AMD flag-wavers think the battle's won.
Fact is, the battle was lost by AMD a long time ago; the next battle isn't until 2016 and the MASS adoption of DX12.
Nobody can afford to be smug.


----------



## ne6togadno (Sep 1, 2015)

the54thvoid said:


> ...Even now there is no reason why they can't allow AIB's to sell air cooled Fury X cards...


Limited core and/or HBM supplies.
The foundry struggling to catch up is probably the reason for the six-month delay on Fury and all those paper launches.


----------



## the54thvoid (Sep 1, 2015)

ne6togadno said:


> limited core and/or hbm supplys.
> foundry struggling to catch up is probably the reason for 6m delay on fury and all those paper launches.



All the more reason to drop the expensive AIO and ship PCBs directly to the partner vendors. They already have concrete PCB parameters.
Although I know nothing about how AMD and Nvidia do these things!


----------



## tabascosauz (Sep 1, 2015)

Eh, what's the point? Every time something like this happens, I rejoice for my 2 GCN 1.0 cards, realize that the real performance is happening at GCN 1.1, then I visit the actual comments thread and it's the same thing over...
                   ...and over
                                    ...and over
                                                     ...and over again.

Nvidia fanboys: "Whatever, AMD's trash, and although I have no reason to hate (dur, I have a GTX 750, dur), AMD is trash."
AMD fanboys: "AMD wins!", even though AMD's actually poor as trash right now and could use a *real* boost instead of some news that about 1% of users know about.

Rinse, repeat. Rinse, repeat. Every time, I hope that people start listening to each other. Every time, it doesn't happen.

@MxPhenom 216 I for some reason don't feel like naming names today. Not he-who-must-not-be-named-and-doesn't-actually-have-ears-or-eyes, not the one guy who shot back at me today with what he probably thought was a witty comeback, not even that other little monkey who signed up just to spew BS all over a thread. Too many monkeys throwing their s*** around, to quote Pagan Min, and I don't want to contribute to the s***storm (like, seriously, in about 3 hours he-who-must-not-be-named is going to come in here and try to make your head hurt again).

I'm curious as to why AMD seems to be shunning the majority of its AIBs by only having the Asus and Sapphire Furys.


----------



## ne6togadno (Sep 1, 2015)

the54thvoid said:


> All the more reason to drop the expensive AIO and ship PCB's directly to the partner vendors. They already have concrete PCB parameters.
> Although I know nothing about how AMD and Nvidia do these things!


The problem isn't the PCB but the lack of GPU chips (as I see it). PCB manufacturing and assembling the card itself are trivial compared to actual GPU (or CPU) chip manufacturing, and the HBM+interposer arrangement makes it even harder.
When you don't have chips to ship, it doesn't matter who manufactures the PCB and solders everything onto it.


----------



## RejZoR (Sep 1, 2015)

Fury X PCBs are still made by partners; they are just restricted to the reference design. Meaning they build the whole thing, but they have to stick to blueprints provided by AMD.


----------



## ne6togadno (Sep 1, 2015)

RejZoR said:


> Fury X PCB's are still designed by partners, they are just restricted to reference design. Meaning they make the whole thing, but they have to stick to blueprints provided by AMD.


Not sure if all partners do it, or if only Sapphire and PowerColor build them and resell to the other AIBs, but yeah, they're made to AMD's documentation.


----------



## Ebo (Sep 1, 2015)

When DX12 becomes mainstream for PC games, things as they are now won't really matter. Most of us will have a new graphics card anyway, one which supports DX12 much better than the ones we have today.

I don't really care which firm I support; I simply don't have any loyalty in that direction. I just buy the stuff I want, at the time I want it. I never think too far ahead, since new graphics cards come out almost twice a year.
I buy for my own reasons, and I don't care about 1-5 frames a second, as long as my card can handle the things I ask of it.

I know a lot of Nvidia guys, and some of my friends argue they are the best, especially Maxwell, due to performance, and they bash AMD about poor driver support. The truth is that Nvidia has had a LOT of driver problems this summer whereas AMD hasn't, but that's just my opinion. Just count how many hotfix drivers Nvidia has released since May 1, 2015.

They also say AMD's cards use sooooo much more power. Well, to me that doesn't really matter, since the difference over a year is about 50 dollars in power use. Do I care about 50 dollars on a yearly basis? Nah. If you have a relatively high-end system like mine, 50 dollars is like farting to keep warm: it only works for a second or maybe two.

I would never put much stock in a single game or test; for me, you need enough of them that a pattern emerges, and then you can judge.

In 2016 Nvidia has Pascal out and AMD has Greenland, both of which will be using HBM2 (if we're lucky). *Then* we'll have something to compare, since hopefully there will be some games out that only use DX12.


----------



## Vayra86 (Sep 1, 2015)

I am having a deja vu.

*Headline:*
AMD 'will'.... (insert random prediction)

*Daily reality:*
Nvidia reigns supreme.

I'm out, cya


----------



## ne6togadno (Sep 1, 2015)

Last weekend I was exploring Steam headlines and promotions, and I was surprised that of the 10-12 titles I looked at, about half were announced with DX12 support, most of them due to be delivered by the end of this year.
None of them was a major AAA title, but that's still a lot of games with DX12 scheduled for this year (and I only checked 10-12 new titles). Also, very soon we could have a big AAA game with DX12: with the single-player campaign of Star Citizen scheduled for the end of this year and Mantle/DX12 support promised, if everything is on track, we could soon have something to heavily smash our graphics hardware with.


----------



## RejZoR (Sep 1, 2015)

Well, most developers will quickly jump on DX12 because of the obvious performance benefits, for which you don't need async shaders or any other fancy crap.


----------



## Naito (Sep 1, 2015)

RejZoR said:


> Well, most developers will quickly jump on DX12 because of obvious performance benefits. For which you don't need async shaders or any other fancy crap.



I'd imagine it'd be easier to port from console to PC going via the DX12 path too.


----------



## RejZoR (Sep 1, 2015)

Dunno about that, but when such generic gains can be obtained, devs jump on those features quickly. If they require a lot of work for very specific, hardly visible things, they tend to avoid them.


----------



## AsRock (Sep 1, 2015)

I just see a performance boost for AMD's GCN-based cards. What actually worries me is whether I should buy an nVidia-sponsored game at all; we all know what nVidia is like.

Game companies should support both nVidia and AMD, but the chances of that happening are very unlikely. I know that when a DX12 game is released with nVidia backing, I'll be holding back a while to make sure I get what I pay for.

And to me this is not nVidia vs. AMD; it's about AMD making GCN work better for their hardware, and the only real comparison they can use is nVidia, as they are not going to use Intel's IGP, lmao.


----------



## TheMailMan78 (Sep 1, 2015)

I'll wait for the next gen of NVIDIA and a W1zz review before I buy into some AMD marketing slides. With the track record AMD has with PowerPoint, I don't know how anyone takes them seriously.


----------



## Steevo (Sep 1, 2015)

TheMailMan78 said:


> Ill wait for the next gen of NVIDIA and a W1zz review before I buy into some AMD marketing slides. With the track record AMD has with powerpoint I don't know how anyone takes them seriously.




Breathe in the red smoke, hold it, and remember: if you don't cough, you can't get off.

Their vapor makes Snoop Dogg jealous.


----------



## Dethroy (Sep 1, 2015)

I'm glad I didn't upgrade this generation (still holding onto my GTX 670).
I'm sure that with the new architectures of both Pascal and Arctic Islands, and the new manufacturing processes they are built on, we will see quite an improvement, DX12 or not.


----------



## arbiter (Sep 1, 2015)

the54thvoid said:


> That planning ahead (Hawaii) meant the cards that were firmly bound in DX11 failed to overturn Nvidia's GPU lead.  That planning ahead has brought the company to its knees.  I have no doubt at all AoS is far superior on AMD hardware but it almost seems like its become a massive PR push for things that have no current relevance.
> It does seem Nvidia have a significant potential disadvantage and perhaps they are bluffing their way through on this gen but I don't for a second think AMD have played the game well over the past two years.
> Even now there is no reason why they can't allow AIB's to sell air cooled Fury X cards, with more appeal than an AIO water cooler.  AMD are still being arses.  They may have planned ahead but very little they have done is bringing financial success.
> New management, a funding injection and broader R&D would benefit the company and the consumer far more.


I don't think AMD planned 3-4 years in advance that async would be in DX12. They got very lucky it was added, which I believe happened near the last minute, when Maxwell's design was pretty much finalized, so it couldn't be added there. It's still to be determined how many games will even use it and how hard it is to code for. By the time it could even matter, 2-3 years from now, Maxwell GPUs will likely have been replaced. DX11 will still be dominant for at least another year.


ne6togadno said:


> limited core and/or hbm supplys.


It was reported before Fury even launched that supplies of the card would be limited.


----------



## nem (Sep 2, 2015)




----------



## lilhasselhoffer (Sep 2, 2015)

For the love of Christmas.  Heads out of backsides people.


AMD releases Mantle.
AMD gets together with Khronos and MS.
DX12 and Vulkan basically take Mantle and make it their standards.
AMD says Mantle is dead, and unsurprisingly suggest its features live on in DX12 and Vulkan.

http://www.kitguru.net/components/g...-absorbed-best-and-brightest-parts-of-mantle/



If you somehow believe AMD was "lucky" here, you're an idiot. AMD made Mantle and their hardware symbiotic. Instead of the Nvidia douche-baggery, they allowed Khronos and MS to adopt their tech and make it an open standard. AMD may have flaws, but they know that the Nvidia specialty stuff (HairWorks, etc.) is poison for the industry. As both MS and Khronos can jump all over the benefits AMD touts, their current hardware is going to perform better than Nvidia's on the new standard. THE NEW STANDARD THEY HELPED WRITE. Derp!

Once Nvidia takes a crack at DX12, and we get to see a new process node in action, this will be worth talking about. Right now, it's like a 10-year-old planning what they want to eat in 4320 days. There's no competition, no metric with which to gauge any competition, but plenty of bluster around numbers that don't connect to real-world performance.


Seriously, look at the slide in the linked article.
1) New rendering technique - Async shaders anyone?
2) Increased draw call count - This is the big stick that supposedly will make everything better in DX12.
3) Direct access to GPU features - Kinda fluff in my book, but combined with the consoles comments this basically is an admission that lower end hardware will see huge benefits by decreasing overhead.

Anyone else want to state the obvious?  Maybe that water is wet, or perhaps the pope is catholic?  These things border on tautology.


----------



## anubis44 (Sep 2, 2015)

Naito said:


> I'd imagine it'd be easier to port from console to PC going via the DX12 path too.



The code for which uses... asynchronous shaders. Fact is, if a game really was optimized for the consoles, which are all AMD-powered, it will run better on Radeon cards in DX12, since DX12 is virtually identical to the API on the consoles. nVidia has been check-mated here. They probably haven't baked hardware asynchronous shaders into Pascal either, and now they've got an obsolete GPU design in the stores right now (Maxwell) and an already-obsolete GPU in their as-yet-unreleased Pascal design. It looks pretty bad for nVidia no matter how you look at it under DX12.


----------



## Xzibit (Sep 2, 2015)

anubis44 said:


> The code from which uses... asynchronous shaders. Fact is, if a game really was optimized for the consoles, which are all AMD-powered, it will run better on Radeon cards in DX12, since DX12 is virtually identical to the API on the consoles. nVidia has actually been check-mated here. They probably haven't baked hardware asynchronous shaders into Pascal, either, and now they've got an obsolete GPU design in the stores right now (Maxwell), and an already obsolete GPU in their as-yet unreleased Pascal design. Looks pretty bad for nVidia no matter how you look at it under DX12.



Nvidia has money to sponsor console ports and add GameWorks. If game devs do in fact take advantage of async compute, it would be logical for Nvidia to spend money to sponsor "support" titles and add GameWorks to as many as they can.


----------



## tabascosauz (Sep 2, 2015)

Xzibit said:


> Nvidia has money to sponsor console ports and add GameWorks. If game devs do in fact take advantage of async compute, it would be logical for Nvidia to spend money to sponsor "support" titles and add GameWorks to as many as they can.



Especially when hardly anyone is even beginning to label the cleverly-marketed GameWorks as a blatantly anti-competitive affair. I guess a lot of gamers and reviewers out there really do think GameWorks makes the game work better. Crippling your Radeon card so that you can play the game the way it's meant to be played.

The AMD celebration is a little premature.


----------



## the54thvoid (Sep 2, 2015)

anubis44 said:


> The code for which uses... asynchronous shaders. Fact is, if a game really was optimized for the consoles, which are all AMD-powered, it will run better on Radeon cards in DX12, since DX12 is virtually identical to the API on the consoles. nVidia has been broadsided here. They probably haven't baked hardware asynchronous shaders into Pascal either, and now they've got an obsolete GPU design in stores right now (Maxwell) and an already-obsolete GPU in their as-yet-unreleased Pascal design. Looks pretty bad for nVidia no matter how you look at it under DX12.



Keep drinking the crazy juice. Maxwell is certainly not obsolete now, nor will it be when DX12 is everywhere.  While it may not perform as well as Fiji, it'll still do the required work, perhaps only performing as well as AMD's Hawaii rebrands (if AoS is the benchmark for DX12).
To think Nvidia can't address Maxwell's shortcomings in DX12 with Pascal is also quite naive.  Trust in the big nasty team green and they'll get their act together.
Certainly AMD have a very bright DX12 future and they definitely have a great theoretical advantage over Nvidia now, but again, time will tell what's really going to happen.


----------



## arbiter (Sep 2, 2015)

the54thvoid said:


> Keep drinking the crazy juice. Maxwell is certainly not obsolete now, nor will it be when DX12 is everywhere.  While it may not perform as well as Fiji, it'll still do the required work, perhaps only performing as well as AMD's Hawaii rebrands (if AoS is the benchmark for DX12).
> To think Nvidia can't address Maxwell shortcomings in DX12 with Pascal is also quite naive.  Trust in the big nasty team green and they'll get their act together.
> Certainly AMD have a very bright DX12 future and they definitely have a great theoretical advantage over Nvidia now but again, time will tell what's really going to happen.


What did you expect from someone who uses the AMD logo for their avatar? Pascal is far from done and will likely have it. AMD's future on DX12 looks good, but I wouldn't bank on it given AMD's track record over the last few years of taking things that look good and turning them into a turd (cough, Hawaii and Fiji launches, cough). I would bet money on Nvidia way before AMD.


----------



## HumanSmoke (Sep 2, 2015)

the54thvoid said:


> I'd like to know for example, what upcoming titles will be utilising DX12, what company 'helps out the developers' and what the release date is.


Getting a list together might be a more worthwhile exercise than what is presently being argued. As far as I'm aware:

Fable Legends (Unreal Engine 4)
Gears of War Ultimate (Unreal Engine 3)
Ashes of the Singularity (Nitrous engine)
Deus Ex: Mankind Divided (Dawn Engine)
Ark: Survival Evolved (Unreal Engine 4)
DayZ  (Real Virtuality 3 engine)
ArmA 3 (Real Virtuality 3 engine)
Star Citizen (CryEngine 4)
Doom (id Tech 6 engine)

You would think that DICE would patch DX11 Frostbite games to DX12 in addition to shipping new DX12 titles, given their Mantle association, but I haven't heard anything concrete - just PR-speak from the developer.


----------



## rvalencia (Sep 2, 2015)

HumanSmoke said:


> Getting a list together might be a more worthwhile exercise than what is presently being argued. As far as I'm aware:
> 
> Fable Legends (Unreal Engine 4)
> Gears of War Ultimate (Unreal Engine 3)
> ...


Project Cars DX12 patch

Fable Legends uses Async
Deus Ex: Mankind Divided uses Async
Rise of the Tomb Raider uses Async (XBO 2015, PC version 2016)


EA DICE, DX12 news in spring.
https://twitter.com/gustavhalling/status/588730307232301057


----------



## AsRock (Sep 2, 2015)

Do you have hard evidence that Bohemia is bringing DX12 to ArmA 3? I've heard the gossip, but I haven't noticed any facts that it's actually going to happen with ArmA 3, just the possibility with the new map they're doing.


----------



## HumanSmoke (Sep 2, 2015)

AsRock said:


> Do you have hard evidence that Bohemia is bringing DX12 to ArmA 3? I've heard the gossip, but I haven't noticed any facts that it's actually going to happen with ArmA 3, just the possibility with the new map they're doing.


All you can really go on is the developers word, and as far as I know, the word is that ArmA 3 will get DX12 support.

EDIT: The *4:52 mark of this video* mentions the DX12 update


----------



## AsRock (Sep 2, 2015)

HumanSmoke said:


> All you can really go on is the developers word, and as far as I know, the word is that ArmA 3 will get DX12 support.



Well, that would explain why I never noticed it; I keep away from such sites.  The point that bothers me is that it's there and not on their website/forum.  Maybe they are trying to get it added but need to see how it goes, so I don't believe they know at this time.


----------



## lilhasselhoffer (Sep 2, 2015)

HumanSmoke said:


> Getting a list together might be a more worthwhile exercise than what is presently being argued. As far as I'm aware:
> 
> Fable Legends (Unreal Engine 4)
> Gears of War Ultimate (Unreal Engine 3)
> ...



Much obliged!


I'm sorry to say this, and it kinda makes me feel dirty.  The developer streams from Digital Extremes indicated that DX12 support is on their long list of upgrades for Warframe in the future.  As PBR has been on the list for the better part of a year now, they may well have it in place by the time Pascal and Arctic Islands GPUs are available... maybe... half implemented... sigh... failure...


----------



## Solaris17 (Sep 2, 2015)

the54thvoid said:


> In that position they'll have a harder time selling Fiji. If Hawaii performs as well as the 980 Ti in DX12, Fiji becomes a white elephant, bearing in mind that on some of the AoS benches, Hawaii matches Fiji.



To be fair, I don't think AMD has relied on their high-end cards making them money for some time. Their mid-range price cuts have been crazy in the past 5 years. I honestly don't think AMD's business model targets the people in this forum.


----------



## Makaveli (Sep 2, 2015)

Dieinafire said:


> You can say that about this entire subject, because the DX12 games aren't out yet



Go troll somewhere else.


----------



## yogurt_21 (Sep 2, 2015)

Solaris17 said:


> To be fair, I don't think AMD has relied on their high-end cards making them money for some time. Their mid-range price cuts have been crazy in the past 5 years. I honestly don't think AMD's business model targets the people in this forum.


While that's true, the old saying goes: "if you want to sell station wagons, you need to have a sports car in the window." So while we're not the consumer base, their consumer base is affected when they don't compete well at the enthusiast level.

Besides, AMD shot themselves in the foot with their code names. Everyone knows Hawaii is better than Fiji.


----------



## terroralpha (Sep 4, 2015)

RejZoR said:


> Fury X PCB's are still designed by partners, they are just restricted to reference design. Meaning they make the whole thing, but they have to stick to blueprints provided by AMD.



That's like saying "you can choose any color you want, but it has to be blue." The PCBs aren't designed by partners; they are built by partners. Everything is identical, from the worthless Cooler Master pump they all use to the capacitors, VRMs, etc.



yogurt_21 said:


> besides, amd shot themselves in the foot with their code names. everyone knows Hawaii is better than Fiji



Tahiti >>>>>>>>>>>>>> Hawaii and Fiji


----------



## Pill Monster (Sep 4, 2015)

Wow, bad case of déjà vu... 


Halfway down the page:
http://www.extremetech.com/gaming/2...he-singularity-amd-and-nvidia-go-head-to-head




> nem  Joel Hruska  • 11 days ago
> 
> Ok people what think about this great great explanation about why AMD should be better than NVIDIA over DirectX12 for have best supports the *Shaders asynchronous*check this is not my argument but It seems well argued.
> 
> ...





And....
http://wccftech.com/amd-major-driver-update-catalyst-157-dx12/




And....

http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/400#post_24321843


All credit for perseverance though.


----------



## RejZoR (Sep 4, 2015)

terroralpha said:


> that's like saying "you can choose any color you want but it has to be blue." the PCBs aren't designed by partners. they are built by partners. everything is identical from the worthless cooler master pump they all use to the capacitors, VRMs, etc.
> 
> 
> 
> tahiti >>>>>>>>>>>>>> hawaii and fiji



You clearly don't understand what "reference design" means... It actually means "you can use whatever color of stickers you want, as long as the hardware is designed per the designer's (AMD in this case) reference specs".


----------



## terroralpha (Sep 4, 2015)

RejZoR said:


> You clearly don't understand what "reference design" means... And it actually means "you can use whatever color of stickers you want, for as long as hardware is designed per designer (AMD in this case) reference specs".



I think the analogy confused you. Forget the colors and stickers. You said that the Fury X boards are "still designed by partners." Then you said "they are just restricted to reference design." Clearly, both things cannot be true. You either design it, or you follow the blueprints. All the Fury X cards use the same exact components, down to the worthless Cooler Master pump and the VRMs, which is why you don't see a Fury X with "military grade" components or some stupid shit like that. At best, the different cards may have different fans on the radiators. Since this is the case, no one is designing anything.


----------



## RejZoR (Sep 4, 2015)

I may have used the incorrect word for that (which is why you're confused). I meant manufacture. Design would be making their own board layout...


----------



## johnspack (Sep 4, 2015)

I hope it does...  I can't afford a $450 CAD 970, so what do I care about $1500 CAD cards?   http://www.ncix.com/category/amd-nvidia-gpu-geforce-gtx-titan-x-25-108-2059.htm
Even gamers can't care anymore; it's just so bloody expensive.


----------



## terroralpha (Sep 4, 2015)

RejZoR said:


> I may have used the incorrect word for that (which is why you're confused). I meant manufacture. Design would be making their own board layout...



Ehh, whatever. I guess we are on the same page after all.

In any case, back to the original conversation. I read all this crap, and after all of that I assumed the 980 Ti would get stomped into the ground. But that's not what I'm seeing in actual comparisons...







Those benches are from Ashes of the Singularity, and these are stock cards. The 980 Ti (and I have 3 of them) OCs like a bat out of hell. All 3 of my 980 Tis can do at least 1475 MHz without ever throttling; one of them can do 1520 MHz.

Am I missing something? Am I looking at the wrong thing?


----------



## anubis44 (Sep 4, 2015)

arbiter said:


> What did you expect from someone who uses the AMD logo for their avatar? Pascal is far from done and will likely have it. AMD's future on DX12 looks good, but I wouldn't bank on it given AMD's track record over the last few years of taking things that look good and turning them into a turd (cough, Hawaii and Fiji launches, cough). I would bet money on Nvidia way before AMD.



I use an AMD logo as an avatar because I respect the company. They have always been forward-looking. The Athlon 64 made 64 bit computing a reality for the masses, and forced Intel to make 64 bit x86 CPUs at a time when they were going to move to Itanium and cut off competition. AMD saw that multi-threaded software was the future, and built the Bulldozer series of chips for that multi-threaded future, along with baking Asynchronous compute into their GCN GPUs. nVidia and Intel, on the other hand, only ever seem to cheat, throw their weight around or pay off companies NOT to innovate. It disgusts me. If that makes me some kind of 'fan' of AMD, then I guess that's what I am.

Back on to the topic at hand, what's going on here with DX12 supporting Mantle's features is all part of AMD's long-term game plan. AMD has not turned anything into a 'turd'; instead, Microsoft has been dragging its feet for years by holding on to a single-threaded API (DX11 and older) that can't take advantage of multi-core CPUs (hence the familiar 'only one core is heavily loaded' syndrome in all DX11 and older games), and AMD, by porting the game console API over to the PC in the form of Mantle, has once again pushed the industry forward, kicking and screaming. nVidia simply didn't expect a Mantle-like API to become the standard so quickly, and they've been caught with their pants down, plain and simple. This isn't fanboyism; it's a simple, empirical observation supported by overwhelming evidence. nVidia can try to cheat and pay off game companies to use Gimp-works, but the game companies will have to ask themselves whether it's worth it to ignore the consoles and only make a game run OK on an nVidia card on PC, instead of making it run well on all the consoles and Radeon GPUs. AMD can and will also pay companies to optimize for Radeon (Battlefield 4, Civilization, etc.).
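The single-threaded vs. multi-threaded submission point is the crux of the argument. A rough sketch of the model difference, as a Python stand-in only (real D3D12 records into per-thread command list objects; `record_commands` here just builds a list of fake commands):

```python
# Sketch of DX11-style vs DX12-style command submission. Pure-Python stand-in:
# real Direct3D 12 records into one command list per thread, then submits all
# of them to a command queue together.
from concurrent.futures import ThreadPoolExecutor

DRAWS = 8_000
CORES = 4

def record_commands(draw_ids):
    # Each worker "records" its own command list of fake draw commands.
    return [("draw", i) for i in draw_ids]

# DX11-style: one thread records every draw on the immediate context.
serial_list = record_commands(range(DRAWS))

# DX12-style: recording is split across CPU cores, then submitted together.
chunks = [range(core, DRAWS, CORES) for core in range(CORES)]
with ThreadPoolExecutor(max_workers=CORES) as pool:
    command_lists = list(pool.map(record_commands, chunks))
submitted = [cmd for lst in command_lists for cmd in lst]

# Same total work reaches the GPU either way; only the CPU-side recording
# cost is spread across cores in the second model.
assert len(submitted) == len(serial_list) == DRAWS
```

The GPU sees the same draws in both cases; what changes is that the CPU-side recording cost no longer piles up on one core, which is exactly the "only one core is heavily loaded" syndrome described above.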



AsRock said:


> I just see a performance boost for AMD GCN based cards, what actually worries me  is if i buy a nVidia supported game if i should buy it. As we all know what nVidia is like.
> 
> Game company's should support both nVidia and AMD but chances of this happening is very unlikely.  I know when a DX12 game is released and it's nVidia supported i be holding back a while to make sure i get what i pay for.
> 
> And to me this is not a nVidia V's AMD it's about AMD making GCN work better for their hardware and the only real example they can use is nVidia as they are not going to use Intels IGP lamo.



Yes, exactly. It's all fine and well to 'optimize' a game to run especially well on a given architecture, but when you deliberately sabotage a game so it'll run badly on the other GPU company's cards, it means owners of the targeted cards simply won't buy the game, and no game developer wants to limit their sales. nVidia's gimp-works is likely only going to alienate game developers even further from nVidia. It's a short-term, desperate strategy to try to drive sales of already-obsolete Maxwells, because nVidia knows full well they've been caught with their pants down by the unexpected release of DX12 supporting multi-threaded access to the GPU (remember, it takes a couple of years to design and manufacture a GPU), and without the hardware-level context switching and asynchronous shader support to take advantage of it.



lilhasselhoffer said:


> If you somehow believe AMD was "lucky" here you're an idiot. AMD made Mantle and their hardware symbiotic. Instead of the Nvidia douche-baggery, they allowed Khronos and MS to adopt their tech, and make it open sourced. AMD may have flaws, but they know that the Nvidia specialty stuff (Hairworks, etc..) is poison for the industry.



Yes, exactly. Trouble is, many nVidia card owners WANT to believe that AMD is 'weak', and couldn't possibly have made such a deft, strategic manoeuvre. They want the world to be simple, with 'winners' and 'losers', and the world isn't always simple. AMD was playing the long game, betting on multi-threaded software and hardware, and is now the big fish; nVidia is the company on the ropes here. I would imagine that with Zen-based APUs sporting HBM and GCN 2.0 cores next year, many gamers will have less need to buy an add-in board at all. If Intel steps up its game with faster integrated graphics, nVidia will slowly be crushed between high-performance Intel and AMD APUs.



the54thvoid said:


> Keep drinking the crazy juice. Maxwell is certainly not obsolete now, nor will it be when DX12 is everywhere.  While it may not perform as well as Fiji, it'll still do the required work, perhaps only performing as well as AMD's Hawaii rebrands (if AoS is the benchmark for DX12).
> To think Nvidia can't address Maxwell shortcomings in DX12 with Pascal is also quite naive.  Trust in the big nasty team green and they'll get their act together.
> Certainly AMD have a very bright DX12 future and they definitely have a great theoretical advantage over Nvidia now but again, time will tell what's really going to happen.



Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them. As for DX12 being everywhere, Windows 10 is seeing an adoption rate that's unprecedented in Windows history. Steam is reporting that 17% of users are already using Windows 10 after only 1 month! That's higher than Windows 7's adoption rate. If it keeps going like this, nearly every gamer will have upgraded to Windows 10 by Christmas, just in time for the release of a bunch of DX12 titles that will run better on Radeons than nVidia cards.


----------



## ne6togadno (Sep 4, 2015)

anubis44 said:


> Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them. As for DX12 being everywhere, Windows 10 is seeing an adoption rate that's unprecedented in Windows history. Steam is reporting that 17% of users are already using Windows 10 after only 1 month! That's higher than Windows 7's adoption rate. If it keeps going like this, nearly every gamer will have upgraded to Windows 10 by Christmas, just in time for the release of a bunch of DX12 titles that will run better on Radeons than nVidia cards.


Please use the edit and multi-quote options. Double posting isn't tolerated, triple even less.


----------



## Pill Monster (Sep 4, 2015)

terroralpha said:


> ehh, whatever. i guess we are on the same page after all.
> 
> in any case, back to the original conversation. i read all this crap and after all of that i assumed the 980 Ti would get stomped into the ground. but that's not what i'm seeing in actual comparisons...
> 
> ...


It's a synthetic benchmark.  Nobody cares.....


----------



## EarthDog (Sep 4, 2015)

anubis44 said:


> Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them. As for DX12 being everywhere, Windows 10 is seeing an adoption rate that's unprecedented in Windows history. Steam is reporting that 17% of users are already using Windows 10 after only 1 month! That's higher than Windows 7's adoption rate. If it keeps going like this, nearly every gamer will have upgraded to Windows 10 by Christmas, just in time for the release of a bunch of DX12 titles that will run better on Radeons than nVidia cards.


That's half the battle... now what about DX12 games being released before Pascal? A handful?


----------



## HumanSmoke (Sep 4, 2015)

EarthDog said:


> that's half the battle... now what about DX12 games being released before pascal? A handful?


Probably.
Just to put this whole "AMD is going to rule the world" jag that anubis44 seems to be on into perspective (and probably send him into an apoplectic triple-post frenzy into the bargain): the first DX12 (patched) game will be ARK: Survival Evolved, I believe - that's the one with GameWorks baked in. Now, I'm reasonably sure that AMD's Gaming Evolved program won't go overboard on async compute for its own sake - not just because it hobbles some performance for owners of its competitor's cards, but because independent devs probably aren't going down that road either. If it turns into a "shots fired" scenario, no one wins - because sure as hell, UE4 (which is already due to power over a hundred games, some of which will be AAA), with its close association with Nvidia and baked-in support for conservative rasterization, rasterizer ordered views, and hybrid ray tracing, could very well return the favour in spades. So, bearing that in mind, I kind of doubt that either side has much to gain by exposing the architectural shortcomings of the other.

By the time DX12 gains momentum (a lot of announced AAA titles are still DX11 going forward), we will be looking at new architectures from both vendors. It isn't much different from the move from DX9. ATI and Nvidia both had unified shader architectures (R600/G80) on the drawing board years before D3D could take advantage of them, and they didn't eventuate until they were required. Under DX9, pixel and vertex pipelines gave adequate performance with a low power-use penalty.


----------



## 64K (Sep 4, 2015)

My opinion right now is that the AMD cards out there are going to run a little better on DX12 than the Nvidia cards, but that is quite a stretch from saying that DX12 games are going to run like shit on Nvidia cards. I say that for two reasons, and they both are rooted in real-world business. Publishers aren't going to give up billions of dollars in sales by allowing 75% of their PC gamer customers to be screwed out of buying their games. The second is more ugly. While AMD went $400 million into the red last year and owes more than it's worth, Nvidia made a profit of $630 million. If it came down to it, Nvidia would pay some publishers to make games run better on their cards. Yes, that's shitty, but that's business, and God help AMD if they take too much food off of Intel's plate with Zen. Intel made a profit of $11.7 billion last year - over 5 times what AMD is even worth.


----------



## lilhasselhoffer (Sep 4, 2015)

anubis44 said:


> Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them. As for DX12 being everywhere, Windows 10 is seeing an adoption rate that's unprecedented in Windows history. Steam is reporting that 17% of users are already using Windows 10 after only 1 month! That's higher than Windows 7's adoption rate. If it keeps going like this, nearly every gamer will have upgraded to Windows 10 by Christmas, just in time for the release of a bunch of DX12 titles that will run better on Radeons than nVidia cards.



We're on the same philosophical side, yet it's time for a reality check.

Windows 10 has high adoption numbers because it's a free upgrade for anyone who has a recent (last 2 major releases, 7 and 8) version of Windows.  The adoption rate is not, I repeat NOT, a factor of anything else.  As such, adoption rates will be artificially inflated as people either not inclined to technology, or those who love being early adopters, get their "free" upgrade.  Even now, the adoption rate has slowed down to the point where servers don't have a huge line to download W10.  If you want it, you just have to click that stupid little flag and initiate the download.  This means that 17% figure is likely only going to creep to 18% going forward.  10.1, or whatever the first service pack will be, is going to buoy the numbers again, but we're past the point of early adoption and "free" upgrading making adoption rates soar.

As far as ready in time for the holidays (December 2015), that's laughable.  DX12 is currently being tested using software that exercises a handful of DX12 features.  The cited async shader testing is only looking at shaders.  The tests showing AMD is advantaged in draw calls only demonstrated that.  The simple truth is that writing code to test for a metric is relatively easy, but making that code a fundamental part of a game engine in less than a year (DX12 was "finalized" this year) is pretty insane.  Assuming it's somehow implemented, you've still got to be able to use it.  As little as I like to say it, this is AMD-based drivers we're talking about here.  They've gotten better, but realistically tackling a whole new revision of DX with your drivers is a challenge.  Even with the experience from developing Mantle, I'll wait until real-world testing bears out AMD being superior before I swallow the red pill.  Heck, I'm saying this from a 7970, if you can believe that.
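For what the async-shader tests are actually measuring, the intuition in toy form (Python threads standing in for GPU queues; the sleeps are fake workloads, and real hardware overlaps work at a much finer grain than this):

```python
# Toy illustration of async compute: if the graphics and compute work are
# independent, running them on separate queues overlaps their execution.
# Python threads are a stand-in for GPU hardware queues here.
import threading
import time

def gpu_work(seconds):
    time.sleep(seconds)  # stand-in for a GPU workload of this duration

GFX, COMPUTE = 0.05, 0.05  # fake per-frame workloads, in seconds

# Serial: the compute pass waits for the graphics pass (single queue).
start = time.perf_counter()
gpu_work(GFX)
gpu_work(COMPUTE)
serial = time.perf_counter() - start

# Async: the graphics and compute "queues" run concurrently.
start = time.perf_counter()
queues = [threading.Thread(target=gpu_work, args=(w,)) for w in (GFX, COMPUTE)]
for q in queues:
    q.start()
for q in queues:
    q.join()
overlapped = time.perf_counter() - start

print(f"serial {serial:.3f}s, overlapped {overlapped:.3f}s")
```

The overlapped time approaches the longer of the two workloads rather than their sum, which is the entire benefit being benchmarked; whether a real engine's passes are independent enough to overlap like this is exactly what such a synthetic test can't tell you.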

Moving on, Maxwell isn't a slouch.  When TSMC basically gave up on a die shrink, and doomed us to a third generation of 28 nm lithography, there was a decision made by both Nvidia and AMD.  Nvidia said we're going to invest time and resources, and we'll improve Maxwell by making it the ultimate architecture for DX11.  They invested in finding all sorts of optimization for DX11, and the performance of Maxwell bears that out.  It's an excellent demonstration of getting the last bit of use out of the 28 nm node, barring the obvious controversy with the memory structure.  AMD went the other way.  They basically rebranded cards, focused on new memory technology (HBM1), and aimed at new standards (DX12, which was Mantle at the time).  If you remove Nano and Fury from the 3xx stack of AMD GPUs you'll find that there's nothing new.  The minor increases in clocks, combined with increases in power consumption, basically highlight exactly how much AMD's current cards aren't really meant for DX11.




In short, AMD has a minor lead on DX12 right now, but that's because they've given up on DX11.  Their current product stack bears that out, and it isn't an advantage that will last very long.  Between DX12 being a new standard, Pascal and Arctic Islands dropping in 2016, and completely new memory available with HBM2, the AMD vs. Nvidia debate is pointless.  Both companies' current offerings are good, but not good enough to warrant upgrading from any card you've bought in the last three years.  2016 will change that argument, but DX12 early-adoption advantages aren't really going to sell AMD above Nvidia today.


----------



## the54thvoid (Sep 4, 2015)

anubis44 said:


> Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them.



Who is in denial exactly when I said this as part of my post?



> Certainly AMD have a very bright DX12 future and they definitely have a great theoretical advantage over Nvidia



@lilhasselhoffer and @HumanSmoke have explained in great detail the very reasons why Async is not super dooper relevant RIGHT NOW.  And when it is relevant - both red and green will have new architectures to sell to the masses.


----------



## 64K (Sep 4, 2015)

Assuming Pascal will be better at Async. I've heard all along that Pascal will be better at compute because their professional cards need it. I have no idea if that translates to being better at Async. The rumors I've been seeing are that Big Pascal is already taped out and will come out first, before the mid-range and entry-level Pascal GPUs, but those are just rumors. Hopefully it's better at Async, because the design is a done deal now.

I think what some people are focusing on is that a lot of people aren't going to be buying a Pascal or Arctic Islands if they own a Maxwell right now. A lot of people only upgrade every other generation. They will be stuck with Maxwell for a couple more years at least. There will undoubtedly be several DX12 games over that lifespan that they will want to play. I still don't believe that they will be drastically affected by a lack of Async though. It will probably get worked through somehow.


----------



## Mussels (Sep 4, 2015)

64K said:


> I think what some people are focusing on is that a lot of people aren't going to be buying a Pascal or Arctic Islands if they own a Maxwell right now. A lot of people only upgrade every other generation. They will be stuck with Maxwell for a couple more years at least. There will undoubtedly be several DX12 games over that lifespan that they will want to play. I still don't believe that they will be drastically affected by a lack of Async though. It will probably get worked through somehow.



I think everyone's just losing sight of the fact that they're talking about current-gen cards, right now - the only people that benefit are the ones who got lucky with a high-end AMD card, since their investment drags out a little longer before needing an upgrade.


----------



## Vayra86 (Sep 4, 2015)

Solaris17 said:


> To be fair, I don't think AMD has relied on their high-end cards making them money for some time. Their mid-range price cuts have been crazy in the past 5 years. I honestly don't think AMD's business model targets the people in this forum.



That's an apologist's remark of no value. I remember the same being said about CPUs when FX turned out to suck from here to infinity, but there is no truth in it; case in point, Zen, which AMD says is a return to the high end... Every company that wants to play ball has flagships that carry the lower tiers of products. If your flagships suck, the rest won't sell - very simple. If BMW only made 3-door mom's grocery carts, they would never sell any cars. Hell, even ultra-budget Dacia has a flagship car. Go figure.


----------



## RejZoR (Sep 4, 2015)

Remember how Pirelli tire sales dropped when people heard they suck in F1? Even though high-end F1 tires have no connection whatsoever to road tires...


----------



## the54thvoid (Sep 4, 2015)

Mussels said:


> I think everyone's just losing sight of the fact that they're talking about current-gen cards, right now - the only people that benefit are the ones who got lucky with a high-end AMD card, since their investment drags out a little longer before needing an upgrade.



Yup.  Hawaii and Fiji will have great longevity in a DX12 environment.

But what people are really missing is this (and I'll repost the graph that has been reposted already).







If Nvidia's Maxwell architecture can't handle Async (and this is an Async-utilising bench), why is it on par with an Async monster?  The Fury X and 980 Ti are equal cost (or close enough - in the UK, 980 Tis are cheaper or more expensive, custom card dependent).  Fiji has a shitload of shading power; in DX12 it's showing its true legs.  But, and this is a huge freaking but - Maxwell (the most awful Async hardware ever, apparently) is still on par with Fiji.

So - Fiji is more awesome than Maxwell at DX12 because of Async shaders, but in an Async-shader-using benchmark (don't even know how many queues AoS uses???) the two cards are pretty much equal (and in fact, Fiji's lead drops at 'heavy' - maybe clock speed there helping NV?).

So what AMD folksies should be very worried about is that a crippled, DX11-focused architecture is matching a DX12-designed architecture on a benchmark from the guys that leveraged Mantle for AMD.

Think about it.......  If this is AMD's best shot (and they've been very vocal about how bad Async on Maxwell is), why is it only achieving a tiny lead (if any)?

This isn't me banging NV's drum, but the penny just dropped when I started writing: *crippled card* *performs near to or equal to a card designed to run it*.........


----------



## RCoon (Sep 4, 2015)

So is everyone just assuming that ACEs are going to be the staple standard "moar horses" for DX12 performance?

Seems odd, considering nobody gave a damn up until AMD started shouting about how they have more.


----------



## Aquinus (Sep 4, 2015)

RCoon said:


> Seems odd, considering nobody gave a damn up until AMD started shouting about how they have more.


Ahhh, the AMD PR machine at work.


----------



## Vayra86 (Sep 4, 2015)

AMD always has more.

Mostly more units sitting on shelves. Let's leave it at that and wait it out. This is pointless.


----------



## vega22 (Sep 4, 2015)

I think Mussels hit the nail on the head, but some people refuse to think AMD are able to have any kind of plan.

They were in a place to make sure they had the best hardware to make use of what has become DX12. Nvidia could have done this, but the idea of giving software away is in their past.

Will this async stuff damage Nvidia for DX12? Do they already have a plan to sell CUDA 4 to devs which will do most of the stuff and "more"?

Time will tell.

I can say this: after nearly two years with a 290X that keeps gaining FPS with driver updates, I think it was about the best £330 I ever spent on a GPU.


----------



## the54thvoid (Sep 4, 2015)

marsey99 said:


> I think Mussels hit the nail on the head, but some people refuse to think AMD are able to have any kind of plan.
> 
> They were in a place to make sure they had the best hardware to make use of what has become DX12.



Agreed that it's only turds that think AMD's engineers are useless (just the management).  I imagine AMD winning the consoles ('winning' here is subjective) gave them the impetus to rally against the DX11 shortcomings and develop an architecture that was more driver-agnostic.  In doing so, with their limited pool of R&D resources, they worked on Mantle to deliver that driver-minimising code path, allowing the bare metal to be used without complex, game-specific optimisation (what NV do well, until they break it).  Mantle would be seen as a favourable code path because it relied less on gfx vendors writing their own code and let devs create a nice game (yes, I'm simplifying things a bit here).
Reality hits, though, and with NV's market share Mantle doesn't look as though it's going to gain traction.  All the while MS is working on a bare-metal API, and slowly DX12 is born.  Blah blah Vulkan this, Khronos that, et voila: open, non-proprietary wholesome goodness.

Of course AMD has a plan - they've had it for years.  And it's working to massively level the playing field by taking away NV's software (driver) optimisation edge.

Still, it's only levelled it for now, with one benchmark.  What if queue depths are kept under 31 and NV uses rasterisation gimmicks or whatever?  Do we seriously expect NV to play fair?  Of course not.  I might buy Nvidia gfx cards (and kill baby seals and eat kittens, cos we're all evil) but I'm also aware how quickly NV will work with devs to ensure the AAA titles they're involved with absolutely take that edge away from AMD.  Is that fair? No.  But it's business.

It will be very interesting to watch this play out, but as is repeatedly being pointed out (and people seem to ignore), AoS (from the guys that gave you the first Mantle showpiece) using async only makes it level.  Is this AMD's (Fiji's) best scenario?  If so, Nvidia will be licking their lips waiting for their turn.  And we should all be scared, because it will be a Gameworks apocalypse.


----------



## Moofachuka (Sep 4, 2015)

Pretty sure by the time Pascal comes out, they will beat AMD in DX12.  And there should be some DX12 games to compare by then.


----------



## rtwjunkie (Sep 4, 2015)

Moofachuka said:


> Pretty sure by the time Pascal comes out, they will beat AMD in DX12.  And there should be some DX12 games to compare by then.


 
Even if they do, and that's a big if, since AMD has had a good head start on designing hardware for DX12 as opposed to DX11, it's not a big deal either way.  You see the graph, based on the ONE benchmark/game out now, that shows the Nvidia top dog, not designed for DX12, and the AMD top dog, designed for DX12, are not very far apart. 

I foresee AMD improving, and Nvidia having Pascal designed for DX12, and that there won't be a heck of a lot of difference between the competitors at most price points.  Personally, I think it's a hell of a lot of noise being made on both sides about the issue right now, when there is one DX12 game out and there will be only a handful of DX12 games by the time Pascal drops. 

I don't think either side's strident and zealous believers can really claim anything at this point.


----------



## the54thvoid (Sep 4, 2015)

rtwjunkie said:


> I don't think either side's strident and zealous believers can really claim anything at this point.



I claim Antarctica


----------



## LightningJR (Sep 4, 2015)

I don't like it when AMD makes their products for the future; they did it with Bulldozer... I don't buy my PC parts to get better gains a year or two in the future, I want them now. Intel and Nvidia seem to understand this, or AMD simply has no other choice but to do business this way with their limited budget.


----------



## arbiter (Sep 4, 2015)

anubis44 said:


> I use an AMD logo as an avatar because I respect the company.


Sorry, I kinda find it hard to respect a company that makes claims which turn out to be lies, or not as good as their boasts. 



anubis44 said:


> Crazy juice? I'm not the one in denial about the importance of asynchronous shaders and nVidia's lack of them.


What do you expect? AMD created async shaders, so of course they're going to be better at them than Nvidia to start with. If you want to make that kind of comparison, you should start turning on PhysX and HairWorks in game testing, as that would make the testing just as "fair". I doubt async is allowed to be turned off to remove what was originally an AMD design. There likely won't be many games that really use it to start with, except for a few AMD-backed games, but with it off you would have a true apples-to-apples test instead of apples-to-oranges, since it involves AMD tech.


----------



## anubis44 (Sep 4, 2015)

ne6togadno said:


> pls use edit and multi-quote options. dobule posting isnt tolerated, triple even less.



I'm afraid I don't understand. I was replying to three different people, not the same person three times. That's my understanding of a double or triple post. Are you saying I can't respond to more than one person at a time? If so, that's quite the limitation, and will invariably lead to several people ganging up on one person.


----------



## rtwjunkie (Sep 4, 2015)

anubis44 said:


> I'm afraid I don't understand. I was replying to three different people, not the same person three times. That's my understanding of a double or triple post. Are you saying I can't respond to more than one person at a time? If so, that's quite the limitation, and will invariably lead to several people ganging up on one person.


 
No, simply use the multi-quote button, or you can simply respond to multiple people in turn using the @user name.


----------



## BiggieShady (Sep 4, 2015)

rtwjunkie said:


> No, simply use the multi-quote button, or you can simply respond to multiple people in turn using the @user name.


Kudos on finding the user who is actually named "user". He is apparently from Moscow.


----------



## ne6togadno (Sep 4, 2015)

anubis44 said:


> I'm afraid I don't understand. I was replying to three different people, not the same person three times. That's my understanding of a double or triple post. Are you saying I can't respond to more than one person at a time? If so, that's quite the limitation, and will invariably lead to several people ganging up on one person.


Those are your 3 posts after a mod fixed them.
If you had used the multi-quote option, or quoted the next person after you had answered the previous one in the same post, it would have looked like this: everyone you quoted would receive an alert that they had been quoted, and none of them would have missed that you responded to them personally.


----------



## rtwjunkie (Sep 4, 2015)

BiggieShady said:


> Kudos on finding the user who is user. He is apparently from Moscow.


 
WOW!  I didn't even realize that when I did it!!    Hopefully that person doesn't get a lot of callouts in threads from now on.


----------



## HumanSmoke (Sep 4, 2015)

the54thvoid said:


> It will be very interesting to watch this play out but as is repeatedly being pointed out and people seem to ignore, AoS (from the guys that gave you the first Mantle showpiece) using Async only makes it level.  Is this AMD's (Fiji's) best scenario?  If so, Nvidia will be licking their lips for their turn.  And we should all be scared because it will be a Gameworks apocalypse


I'm a-scared....but the division of games along vendor-prioritization lines is only part of it. We are told that DX12 places more power in the hands of the developer, and the onus is on the developer to get it right - to get the coding working first time, since the graphics driver has a more limited ability to influence the final product.

Bearing that in mind, and given the rise of game studios/publishers rushing out titles that are less than polished, does _anyone_ have _any_ faith that the EAs and Ubisofts of the industry are up to the extra workload?


----------



## Vayra86 (Sep 8, 2015)

HumanSmoke said:


> I'm a-scared....but the division of games along vendor-prioritization lines is only part of it. We are told that DX12 places more power in the hands of the developer, and the onus is on the developer to get it right - to get the coding working first time, since the graphics driver has a more limited ability to influence the final product.
> 
> Bearing that in mind, and given the rise of game studios/publishers rushing out titles that are less than polished, does _anyone_ have _any_ faith that the EAs and Ubisofts of the industry are up to the extra workload?



Not only them, but how about the hundreds of indie devs. Games are going to be more costly to make. And this is yet another reason for slower DX12 adoption rates. I keep saying this, because logic seems lost on way too many people over here.


----------



## rtwjunkie (Sep 8, 2015)

Vayra86 said:


> Not only them, but how about the hundreds of indie devs. Games are going to be more costly to make. And this is yet another reason for slower DX12 adoption rates. I keep saying this, because logic seems lost on way too many people over here.


 
I'm with you, and I've been saying it as well.  The number of DirectX 12 games is not going to grow at an exponential rate this first year as some keep predicting.  W10 is only hovering around 20%.  lilhasselhoffer remarked, correctly I think, that the initial 16% in the first month was the quick rush to partake in new hardware; despite being free, the adoption rate has since slowed.

Game manufacturers are going to concentrate on the majority for at least the next year, which right now is using DirectX 11.  Hell, look at all the huge games released this year, and even post W10: DirectX 11.  Upcoming AAA titles also, DirectX 11.

So while AMD appears to perform better in DirectX 12 now, Nvidia will be following suit in the spring with DirectX 12 GPUs, and will presumably be performing at the same level by the time any meaningful number of DirectX 12 games are released.


----------



## Mussels (Sep 8, 2015)

rtwjunkie said:


> I'm with you, and I've been saying it as well.  The number of DirectX 12 games is not going to grow at an exponential rate this first year as some keep predicting.  W10 is only hovering around 20%.  lilhasselhoffer remarked, correctly I think, that the initial 16% in the first month was the quick rush to partake in new hardware; despite being free, the adoption rate has since slowed.
> 
> Game manufacturers are going to concentrate on the majority for at least the next year, which right now is using DirectX 11.  Hell, look at all the huge games released this year, and even post W10: DirectX 11.  Upcoming AAA titles also, DirectX 11.
> 
> So while AMD appears to perform better in DirectX 12 now, Nvidia will be following suit in the spring with DirectX 12 GPUs, and will presumably be performing at the same level by the time any meaningful number of DirectX 12 games are released.



Don't forget that all new Xbox One games will be DX12, so they may not hit PC till next year, but the numbers could climb fast because of them. Part of the idea there was that all they need is an alternate control scheme and they're halfway to a PC port.


----------



## rtwjunkie (Sep 8, 2015)

Mussels said:


> Don't forget that all new Xbox One games will be DX12, so they may not hit PC till next year, but the numbers could climb fast because of them. Part of the idea there was that all they need is an alternate control scheme and they're halfway to a PC port.


 
I agree with you!  As you say though, "next year" which is why I assert it is not a huge deal right now that AMD performs better in DX 12.


----------



## RealNeil (Sep 8, 2015)

I can't see either AMD or NVIDIA ~not~ taking an active role in encouraging game developers to utilize their own DX-12 technologies. 
Both of them have already helped with proper coding of games in the past because it's in their best interests to do just that. 
I would love to see them approach _like_ solutions to solve in-game challenges so that there will be standards that everyone ( GPU ~and~ Monitor makers) can cleave to.

This might lead to simpler choices for those of us who want to upgrade our LED monitors without having to buy into one company's technology and not be able to use GPUs from the other company.
The G-Sync premium is an unnecessary expense being tacked onto monitors.


----------



## Vayra86 (Sep 8, 2015)

RealNeil said:


> I can't see either AMD or NVIDIA ~not~ taking an active role in encouraging game developers to utilize their own DX-12 technologies.
> Both of them have already helped with proper coding of games in the past because it's in their best interests to do just that.
> I would love to see them approach _like_ solutions to solve in-game challenges so that there will be standards that everyone ( GPU ~and~ Monitor makers) can cleave to.
> 
> ...



No it won't. Either company just wants to make money. If their shit works, it works, proprietary or not. If customers buy that shit, they have already won the game. There is no need to keep investing in universal standards. Look at Apple and mini-USB, and you see why this is true.

People need to get those AMD fairy tales out of their heads. This perfect coalition is not going to happen; it is utopia. Companies WANT to differentiate their products - they call them USPs.

For this reason I just avoid 'sync' screens altogether; it's a price premium for a minimal advantage. Vsync still exists, and 90% of all games can handle a little input lag just fine. I put that money towards more GPU and lock stuff at 120 fps/Hz instead. Done deal: no vendor lock-in, more performance at equal cost.


----------



## arbiter (Sep 8, 2015)

rtwjunkie said:


> I'm with you, and I've been saying it as well. The number of DirectX 12 games is not going to grow at an exponential rate this first year as some keep predicting. W10 is only hovering around 20%.





rtwjunkie said:


> I agree with you! As you say though, "next year" which is why I assert it is not a huge deal right now that AMD performs better in DX 12.


The thing is, I would expect that games will have a DX12 low-level path in them, but they will also have a DX11 fallback for people still on 7/8. That will limit performance gains in games, since draw calls will likely be limited to DX11 speeds. It will help AMD, yes, but likely won't give them much of a lead, if any. It'll probably be like BF4 with Mantle, which only gave AMD about 10% over Nvidia on higher-end machines.


----------



## BiggieShady (Sep 8, 2015)

arbiter said:


> That will limit performance gains in games, since draw calls will likely be limited to DX11 speeds.


Not necessarily. When you increase draw distance in a game, or vehicle count, or similar settings, you automatically increase the number of draw calls... so it's certain that when switching to DX12 mode in next year's games, there will be an opportunity to increase settings for more eye candy at the same DX11 frame rate. Granted, that's also an opportunity for devs to do more advanced (and computationally expensive) effects in DX12 mode that you can't turn off... after all, we have seen both examples with the DX9/DX10/DX11 transitions.
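The draw-distance point can be sketched with back-of-the-envelope numbers (all hypothetical - `units_per_sq_km` and the one-call-per-unit assumption are mine, not from any engine): with units spread roughly uniformly over a map, the visible set, and therefore the draw-call count, grows with the square of the draw distance.

```python
import math

def draw_calls(draw_distance_km, units_per_sq_km, calls_per_unit=1):
    """Estimate draw calls as (visible circle area) x (unit density)."""
    visible_area = math.pi * draw_distance_km ** 2
    return int(visible_area * units_per_sq_km * calls_per_unit)

print(draw_calls(1.0, 500))   # baseline draw distance: 1570 calls
print(draw_calls(2.0, 500))   # doubled draw distance: 6283 calls (~4x)
```

So a setting slider that merely doubles draw distance roughly quadruples draw-call pressure, which is exactly the headroom a lower-overhead API can spend on eye candy.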


----------



## TheGuruStud (Sep 8, 2015)

At least I can watch videos on my 290x. 980ti will randomly lock (but can game all day OCed). Thanks Nvidia.

Tell me again how superior nvidia is. What a joke.

Nvidia cheaps out on stuff they don't need at present (and to lower power draw), like compute and hardware bits for dx12. That's not a surprise. AMD packs in tons of tech even though it won't be used in the card's lifespan.


----------



## TheoneandonlyMrK (Sep 8, 2015)

arbiter said:


> The thing is, I would expect that games will have a DX12 low-level path in them, but they will also have a DX11 fallback for people still on 7/8. That will limit performance gains in games, since draw calls will likely be limited to DX11 speeds. It will help AMD, yes, but likely won't give them much of a lead, if any. It'll probably be like BF4 with Mantle, which only gave AMD about 10% over Nvidia on higher-end machines.


10% more in Battlefield Next or Star Wars Battlefront or Star Citizen or AoS could well sell a few cards though.
So AMD invented and patented the ACE engine and has been doubling up on them since day one; it's not going to be easy for Nvidia to make different tech that works better, and I'd say it could take a while. They did overtake AMD's early lead with tessellation, though, so hopefully good times are ahead, no doubt, because they both really need to start upping their game, since needing two cards for high-FPS 4K is still a thing, and not one I like.


----------



## arbiter (Sep 8, 2015)

TheGuruStud said:


> At least I can watch videos on my 290x. 980ti will randomly lock (but can game all day OCed). Thanks Nvidia.
> Tell me again how superior nvidia is. What a joke.


Sounds like something is wrong with your machine.


TheGuruStud said:


> Nvidia cheaps out on stuff they don't need at present (and to lower power draw), like compute and hardware bits for dx12. That's not a surprise. AMD packs in tons of tech even though it won't be used in the card's lifespan.


Kinda hard to support something that was another company's locked tech. AMD fans love to point that out when talking about PhysX. Ironic that AMD pulled the same thing.


theoneandonlymrk said:


> 10% more in Battlefield Next or Star Wars Battlefront or Star Citizen or AoS could well sell a few cards though.


The thing is, with all the questionable marketing and lies AMD has told over the last 4-5 years, people will be gun-shy about touching AMD hardware, since you don't know if what they claim about the card is the truth. For example, the claimed performance of Fury X vs 980 Ti, and most likely Nano vs 970 (it's easy to see the claims won't hold up to what AMD wants people to think).


----------



## Aquinus (Sep 8, 2015)

I think this needs to be inserted here:

AMD and Nvidia both make good products. They each have their advantages and their flaws. Nvidia undoubtedly makes the best-performing GPU out right now. That is not to say that it's the most feature-filled or the best value.

It's true, though: Nvidia trimmed the edges in order to squeeze out a very performant core for what it can do. AMD isn't bad; however, there are a lot of things you probably won't need... at least, not yet.

When push comes to shove, I think my 390 handles games pretty well. If I had a 390X, Fury, 970, 980, or 980 Ti, I bet I would still be happy.

Without knowing what future games are going to demand, it's kind of hard to know whether the existence or lack of features will make a big difference or not. In the case of async processing, AMD already had something (ACEs). It's nice that AMD has it but, for most modern games, it doesn't mean a whole lot.

AMD's only advantage is that they've been thinking about GCN for a while, and they intended it to be more than simply a GPU. There are a lot of features that tout GCN's compute ability.

So enough with the bashing. New technology is new. Before we know it, Nvidia will have something like ACEs on their next lineup of cards. The question will be whether AMD will have made any substantial change to combat it. Given their budget, I doubt it.


----------



## the54thvoid (Sep 8, 2015)

What a few people probably want to know is how does Kepler do with DX12 Async?

I found this PowerPoint about CUDA and it's so far over my head I may as well go to bed and dream of ice cream. 

http://its.unc.edu/files/2014/11/UNC02_Fundamental_CUDA_Optimization.pdf


----------



## TheoneandonlyMrK (Sep 9, 2015)

arbiter said:


> Sounds like something is wrong with your machine.
> 
> Kinda hard to support something that was another company's locked tech. AMD fans love to point that out when talking about PhysX. Ironic that AMD pulled the same thing.
> 
> The thing is, with all the questionable marketing and lies AMD has told over the last 4-5 years, people will be gun-shy about touching AMD hardware, since you don't know if what they claim about the card is the truth. For example, the claimed performance of Fury X vs 980 Ti, and most likely Nano vs 970 (it's easy to see the claims won't hold up to what AMD wants people to think).


The sun does not shine out of Nvidia's ass. Nvidia, Intel, AMD, and Qualcomm all make statements about what they expect to be selling before it's in their hands to sell, and they could all legitimately be called liars by zealous fanboys for what they have sometimes delivered.


----------



## arbiter (Sep 9, 2015)

I am seriously at a loss for words at how people expect Nvidia to handle AMD tech that has been locked up since the 7000 series. How do people expect Nvidia to handle it perfectly when it was likely added at the last minute, so Maxwell couldn't be updated for it?
It would be like PhysX being made a DX12 standard and expecting AMD cards to do it perfectly.


----------



## Mussels (Sep 10, 2015)

arbiter said:


> I am seriously at a loss for words at how people expect Nvidia to handle AMD tech that has been locked up since the 7000 series. How do people expect Nvidia to handle it perfectly when it was likely added at the last minute, so Maxwell couldn't be updated for it?
> It would be like PhysX being made a DX12 standard and expecting AMD cards to do it perfectly.



It's not AMD-locked at all - it was designed around the DX11 specs with extras on top. AMD have a feature advantage this generation, which will even out as soon as Nvidia release new cards. That's it: a one-generation advantage. No different from any previous DirectX release where one company got out first.


----------



## arbiter (Sep 10, 2015)

Mussels said:


> It's not AMD-locked at all - it was designed around the DX11 specs with extras on top. AMD have a feature advantage this generation, which will even out as soon as Nvidia release new cards. That's it: a one-generation advantage. No different from any previous DirectX release where one company got out first.


It was locked till DX12 adopted it; now it's not. But that would be like PhysX being added in - I bet AMD fans would be screaming, since AMD cards wouldn't be able to run it. You can't expect an Nvidia card to support something that, by the looks of it, wasn't added until after Maxwell's design was finalized and put into production.

I bet AMD lobbied like heck to get it added in after Maxwell was final, knowing Maxwell wasn't going to be able to do it as well as their cards.


----------



## Steevo (Sep 10, 2015)

Is it time to drag out the Nvidia has feature level 12.3 bullshit threads yet?


----------



## Moofachuka (Sep 15, 2015)

rtwjunkie said:


> Even if they do, and that's a big if, since AMD has had a good head start on designing hardware for DX12 as opposed to DX11, it's not a big deal either way.  You see the graph, based on the ONE benchmark/game out now, that shows the Nvidia top dog, not designed for DX12, and the AMD top dog, designed for DX12, are not very far apart.
> 
> I foresee AMD improving, and Nvidia having Pascal designed for DX12, and that there won't be a heck of a lot of difference between the competitors at most price points.  Personally, I think it's a hell of a lot of noise being made on both sides about the issue right now, when there is one DX12 game out and there will be only a handful of DX12 games by the time Pascal drops.
> 
> I don't think either side's strident and zealous believers can really claim anything at this point.



Kinda late reply, but even if Pascal's DX12 performance isn't much different from AMD's, their DX11 games overall beat AMD. So if Pascal beats AMD in DX11 and is near par in DX12, I would choose Pascal in that case.


----------



## arbiter (Sep 15, 2015)

Moofachuka said:


> Kinda late reply, but even if Pascal's DX12 performance isn't much different from AMD's, their DX11 games overall beat AMD. So if Pascal beats AMD in DX11 and is near par in DX12, I would choose Pascal in that case.


Rumors on Pascal were 5,000-6,000 CUDA cores, so it would be a monster on top of HBM2.


----------



## Makaveli (Sep 20, 2015)

arbiter said:


> It was locked till DX12 adopted it; now it's not. But that would be like PhysX being added in - I bet AMD fans would be screaming, since AMD cards wouldn't be able to run it. You can't expect an Nvidia card to support something that, by the looks of it, wasn't added until after Maxwell's design was finalized and put into production.
> 
> I bet AMD lobbied like heck to get it added in after Maxwell was final, knowing Maxwell wasn't going to be able to do it as well as their cards.



What are you talking about???


----------



## Aquinus (Sep 20, 2015)

You know, for all of you who bash AMD, I must say I'm fairly happy with what my 390 has delivered to me so far. For the price, it's certainly not a bad card. While AMD's products aren't perfect, I think people are being a little too critical of them lately. It's always been true that GCN is a compute-oriented architecture. Nvidia made it very clear what it was going for when it slashed double-precision math performance and started going gung-ho on the parts of the GPU games were using the most. The gains to be had from async compute make a lot of sense considering how GCN was designed, and I think that as these kinds of uses are realized, we'll see a shift back to some more compute-oriented hardware out of Nvidia. The only requirement is that the market demands it, which means these improvements have to show some significantly tangible gain. If AMD's GPUs could actually handle 8 GT/s GDDR5 ICs, I think we would see a much different card, as my 390, at least, seems to scale almost linearly with memory clock speed. HBM might have been an attempt to improve the situation, but GCN might very well be hungry for lower latency rather than higher bandwidth.
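The memory-clock point can be put in numbers. This is just the standard GDDR5 bandwidth arithmetic; the 1500 MHz clock and 512-bit bus are example figures for a 390-class card, and the 2000 MHz case is the hypothetical 8 GT/s ICs mentioned above:

```python
def gddr5_bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    """GDDR5 moves 4 bits per pin per clock (quad-pumped), so bandwidth
    scales linearly with memory clock for a fixed bus width."""
    effective_rate_mts = mem_clock_mhz * 4
    return effective_rate_mts * bus_width_bits / 8 / 1000  # MB/s -> GB/s

print(gddr5_bandwidth_gbs(1500, 512))  # 390-class card: 384.0 GB/s
print(gddr5_bandwidth_gbs(2000, 512))  # hypothetical 8 GT/s ICs: 512.0 GB/s
```

The linear relationship is why near-linear FPS scaling with memory clock suggests the card is bandwidth- (or latency-) limited rather than shader-limited.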

My only complaint would be the amount of power the GPU sucks down, but idle consumption is a whole lot better than on the 6870s, so I can't complain too much, since my GPU spends a lot of time idling, such as while I write this post.


----------



## cadaveca (Sep 20, 2015)

arbiter said:


> It was locked till DX12 adopted it; now it's not. But that would be like PhysX being added in - I bet AMD fans would be screaming, since AMD cards wouldn't be able to run it. You can't expect an Nvidia card to support something that, by the looks of it, wasn't added until after Maxwell's design was finalized and put into production.
> 
> I bet AMD lobbied like heck to get it added in after Maxwell was final, knowing Maxwell wasn't going to be able to do it as well as their cards.


Asynchronous compute has been part of AMD's design since the X1900 days. It was merely an oversight that Nvidia didn't optimize for it, but given that the tech itself is actually pretty old, it is not surprising that Nvidia has designs that don't work well doing an updated version of old tech.

It is also why AMD performs better than Nvidia in such workloads... it has been part of AMD GPU design for the last decade. The whole idea being presented by this topic is rather humorous to me, actually, since these design differences are really what make AMD and Nvidia GPUs different from each other. A lot of this "knowledge" about GPU design has been forgotten, it seems.



> The R520 architecture is referred to by ATI as an "Ultra Threaded Dispatch Processor".


----------



## terroralpha (Sep 20, 2015)

Pill Monster said:


> It's a synthetic benchmark.  Nobody cares.....



that's a game benchmark, not a synthetic benchmark. like the post stated. jack ass.


----------



## arbiter (Sep 20, 2015)

Aquinus said:


> You know, for all of you who bash AMD, I must say, I'm fairly happy with what my 390 has delivered to me so far. For the price, it's certainly not a bad card. While AMD's products aren't perfect, I think people are being a little too critical of them lately. It's always been true that GCN has been a compute-oriented architecture.


Sadly, AMD has given end users and reviewers a LOT of ammo to be critical of them. Look at all the PR claims over the last 4 years, and the recent Twitter rant of AMD's Roy. AMD's PR staff dig the company a hole and have been digging for years.



terroralpha said:


> that's a game benchmark, not a synthetic benchmark. like the post stated. jack ass.


It's an alpha, AMD-sponsored game, so in reality its results ATM are no better than a synthetic benchmark.


----------



## cadaveca (Sep 21, 2015)

arbiter said:


> It's an alpha, AMD-sponsored game, so in reality its results ATM are no better than a synthetic benchmark.



That's a rather moot point, though. You'd have a better stance simply stating that it is just a single title and doesn't represent all titles, and ergo does not say that AMD will be better in DX12.


----------



## Arjai (Sep 21, 2015)

lilhasselhoffer said:


> tautology



Nice. This whole post is tautologous.


----------



## arbiter (Sep 21, 2015)

cadaveca said:


> That's a rather moot point, though. You'd have a better stance simply stating that it is just a single title and doesn't represent all titles, and ergo does not say that AMD will be better in DX12.


I have said exactly that like 1,000 times, in fact, but it seems to mean nothing and gets the same response you gave.


----------



## cadaveca (Sep 21, 2015)

arbiter said:


> I have said exactly that like 1,000 times, in fact, but it seems to mean nothing and gets the same response you gave.


Meh. Sorry, but I did not read those posts, or if I did, I did not recall them. But it's nice to know that we agree.


----------



## darkangel0504 (Sep 22, 2015)

Hey, I have a question: "Will GCN 1.0 benefit from DX12?"


----------



## R-T-B (Sep 22, 2015)

Wow did I just walk into the flame-each-other festival or something?  Lotsa insults being thrown around.  Keep it clean guys.


----------



## RCoon (Sep 22, 2015)

R-T-B said:


> Wow did I just walk into the flame-each-other festival or something?  Lotsa insults being thrown around.  Keep it clean guys.



The title says "AMD" "NVidia" and "better". This was always going to turn into an ugly mess of nonsense. I've begun to just assume very little of worth ends up coming out of these "discussions"


----------



## Vayra86 (Sep 22, 2015)

I would not be opposed to instantly locking any thread with this subject.

It's pointless.


----------



## Mussels (Sep 22, 2015)

Vayra86 said:


> I would not be opposed to instantly locking any thread with this subject.
> 
> It's pointless.



then new threads get made about it. think of it like an idiot quarantine, with people wandering by with sticks to poke them occasionally.


----------



## dorsetknob (Sep 22, 2015)

Mussels said:


> then new threads get made about it. think of it like an idiot quarantine, with people wandering by with sticks to poke them occasionally.


Electrified Cattle prods or Tazers would be a better idea


----------



## Mussels (Sep 22, 2015)

dorsetknob said:


> Electrified Cattle prods or Tazers would be a better idea



we tried that. mailman took them all... it was a rough time.


----------



## HumanSmoke (Sep 22, 2015)

dorsetknob said:


> Electrified cattle prods or Tasers would be a better idea


Ghetto ECT for the win.


----------



## dorsetknob (Sep 22, 2015)

Mussels said:


> we tried that. mailman took them all... it was a rough time.


Explains the avatar hair then, he must be self-tasering


----------



## TheoneandonlyMrK (Sep 22, 2015)

That's 6 very helpful posts, all bang on topic, now 7.
If you're going to join the thread you could at least add to it, I got notified for that.
Plus it's like me talking politics, I don't vote so it's pointless, just as pissing in a thread you don't like is pointless, just don't view it, moronic.

And as for on topic, I voted with cash, an AMD R9 390 will keep me sweet for cheap till 14nm.


----------



## Aquinus (Sep 22, 2015)

theoneandonlymrk said:


> That's 6 very helpful posts, all bang on topic, now 7.
> If you're going to join the thread you could at least add to it, I got notified for that.
> Plus it's like me talking politics, I don't vote so it's pointless, just as pissing in a thread you don't like is pointless, just don't view it, moronic.
> 
> And as for on topic, I voted with cash, an AMD R9 390 will keep me sweet for cheap till 14nm.


That's how I felt about the matter: if I have to wait longer, 8GB is a nice thing to have, especially if you run CFX long term. I keep saying, you might not want that 8GB now, but you very well might want it in a couple of years if 14nm keeps getting delayed, has issues, or is simply too expensive. The 390 seemed like the best middle ground, at least to me.


----------



## R-T-B (Sep 23, 2015)

RCoon said:


> The title says "AMD", "NVidia", and "better". This was always going to turn into an ugly mess of nonsense. I've begun to assume very little of worth ends up coming out of these "discussions".



It's sad that's what tech forums have by and large come to.  What ever happened to having an educated point?  What about respectful debate?

I must be old.


----------



## Kanan (Sep 23, 2015)

R-T-B said:


> It's sad that's what tech forums have by and large come to.  What ever happened to having an educated point?  What about respectful debate?
> 
> I must be old.


Yes (not to the "you are old" part, the other thing), it should be "why Radeon will perform better than GeForce", that would be less fishy and fanboy-ish. It's too obvious the thread starter is a fan of AMD. Whatever, it is what you make of it; I didn't read the entire thread, but it can possibly be a good discussion.


----------



## R-T-B (Sep 23, 2015)

Kanan said:


> Yes (not to the "you are old" part, the other thing), it should be "why Radeon will perform better than GeForce", that would be less fishy and fanboy-ish. It's too obvious the thread starter is a fan of AMD. Whatever, it is what you make of it; I didn't read the entire thread, but it can possibly be a good discussion.



Yeah, kinda came into it hoping for good discussion, but was disappointed.  You are possibly right that the OP wasn't helpful in how he phrased his post.


----------



## Kanan (Sep 23, 2015)

R-T-B said:


> Yeah, kinda came into it hoping for good discussion, but was disappointed.  You are possibly right that the OP wasn't helpful in how he phrased his post.


Make it better! Repair the thread! xD

Btw, I think AMD has a good shot with DX12; ACE is a plus if a lot of games use it, and if not, well, then AMD is at least no worse off. The good thing for Radeon cards, though, is that their "power limiter" from DX11 is essentially removed in DX12, meaning free performance that was lying on the ground before can now be used. GeForce cards, on the other hand, are running smooth right now, so I don't see any "plus" on their side, they are already capped to the max.

Btw: the title of the thread really should say "at the beginning", because in the long run, simply nobody can know. I'd bet on a tie though.


----------



## Mussels (Sep 28, 2015)

shopwit kumar said:


> DirectX is a programming interface (API) that allows developers to tap into a device's hardware for producing complex 3D graphics. DX12 is the latest version and promises to bring more efficient CPU utilization, reduced driver overheads, and various other benefits. Lead developer Max McMullen said one of DX12's goals was to bring "console-level efficiency" to PCs. Console hardware is focussed entirely on games, where PCs are often doing other tasks even while running games.



why does that feel like a direct quote from google?


----------



## RCoon (Sep 28, 2015)

Mussels said:


> why does that feel like a direct quote from google?



Because it is

http://www.windowscentral.com/amd-graphics-cards-huge-gains-dx12


----------



## Mussels (Sep 28, 2015)

Ooh, a spambot, let's give it time to mature before squashing it.


----------



## Tatty_One (Sep 28, 2015)

I think he already did. Funny thing is, the OP mentions that he is a veteran, as he has 2 accounts; I checked his old one from 2013 and there are only 4 or 5 posts (veteran?), all pretty much about AMD, so I would guess that his intentions with the thread title were pretty deliberate. With that in mind, I don't feel guilty about closing this, as it appears to be pretty much done now anyway.


----------

