Thursday, June 4th 2015
NVIDIA Could Capitalize on AMD Graphics CoreNext Not Supporting Direct3D 12_1
AMD's Graphics CoreNext (GCN) architecture does not support Direct3D feature-level 12_1 (DirectX 12.1), according to a ComputerBase.de report. The architecture only supports Direct3D up to feature-level 12_0. Feature-level 12_1 adds three features over 12_0, namely Volume Tiled Resources, Conservative Rasterization, and Rasterizer Ordered Views.
Volume Tiled Resources is an evolution of Tiled Resources (analogous to OpenGL's MegaTexture), in which the GPU seeks and loads only those portions of a large texture that are relevant to the scene it's rendering, rather than loading the entire texture into memory. Think of it as a virtual memory system for textures, one that greatly reduces video memory usage and bandwidth consumption. Volume Tiled Resources extends this seeking of texture portions beyond the X and Y axes to a third dimension, covering 3D (volume) textures. Conservative Rasterization is a means of rasterizing polygons with every pixel they even partially cover, which makes it easier for two polygons to interact with each other in dynamic objects. Rasterizer Ordered Views give pixel shaders ordered access to a shared resource, so overlapping primitives are processed in a predictable order; practical applications include improved shadows.
Given that GCN doesn't feature bare-metal support for D3D feature-level 12_1, its adoption will likely be as limited as feature-level 11_1 was when NVIDIA's Kepler didn't support it. This is compounded by the fact that GCN is a more popular GPU architecture than Maxwell (which supports 12_1), thanks to the new generation of game consoles. It could explain why NVIDIA dedicated three-fourths of its GeForce GTX 980 Ti press deck to talking about the features of D3D 12_1 at length. The company probably wants to make a few new effects that rely on D3D 12_1 part of GameWorks, and deflect accusations of exclusivity by pointing out that the competition (AMD) doesn't support certain API features that are open to them. Granted, AMD GPUs and modern game consoles such as the Xbox One and PlayStation 4 don't support GameWorks, but that didn't stop big game devs from implementing it.
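For developers, these capabilities surface as individual cap bits rather than a single version number. A minimal sketch (untested, error handling elided) of how an application would query a D3D12 device for the three 12_1 features discussed above:

#include <d3d12.h>
#include <cstdio>

// Queries the three feature-level 12_1 capabilities on an existing device.
void Report12_1Caps(ID3D12Device *device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts))))
    {
        // Tiled resources tier 3 is what adds the third (volume) dimension;
        // tiers 1 and 2 cover 2D tiled resources only.
        printf("Volume tiled resources:     %s\n",
               opts.TiledResourcesTier >= D3D12_TILED_RESOURCES_TIER_3 ? "yes" : "no");
        printf("Conservative rasterization: %s\n",
               opts.ConservativeRasterizationTier !=
                   D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED ? "yes" : "no");
        printf("Rasterizer ordered views:   %s\n",
               opts.ROVsSupported ? "yes" : "no");
    }
}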
Source:
ComputerBase.de
79 Comments on NVIDIA Could Capitalize on AMD Graphics CoreNext Not Supporting Direct3D 12_1
If the card is saving GPU horsepower through better use of rasterization resources, then the amount of gain depends upon the scenes being rendered. Not all games or game engines are created equal, and that doesn't take into account the myriad of other graphical computation loads that also need to be taken into consideration (e.g., tessellation). Even if you could quantify the gains/deficits, they are then affected by how different architectures handle post-rasterization image quality features (post-process depth of field, motion blur, global illumination, etc.).
Basically, what you want is a set figure, when the actuality is that won't ever be the case unless every other variable becomes a constant; every architecture, and every part within every architecture, handles every facet of a game to a varying degree.
If only one brand supports it, as is suggested, no one in their right mind will code for it.
So short answer is no, it won't make much difference at all.
Aren't they making a new Doom? (I bought Wolfenstein: The New Order the day of release and STILL have an unused Doom beta key sitting on my desk...)
What if they came out swinging with Doom on their OpenGL-based id Tech engine when DX12 launched? They have had a good bit of time to optimize it since Rage came out.
But wait for Windows 10, DX12, and DX12 games to find out...
The only caveats are game engines developed primarily for, or in tandem with, the PC, where the features could go unused in the console version, and OpenGL game engines of course.
Not really. The tessellator in the R600 was known about from the moment it was launched; it wasn't some kind of secret-squirrel hidden capability.
The whole point of this article and thread is that the GCN 1.x architecture cannot undertake conservative rasterization in hardware. It can, however, emulate it in software using the compute ability of the architecture's built-in asynchronous compute engines.
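For hardware that does report a conservative rasterization tier, turning the feature on in D3D12 is a one-field change in the pipeline state's rasterizer desc. A minimal sketch (untested; all other PSO setup elided):

// Enabling conservative rasterization in a graphics pipeline state.
// CD3DX12_RASTERIZER_DESC is the helper struct from d3dx12.h.
D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
// ... root signature, shaders, blend/depth state set up as usual ...
psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
psoDesc.RasterizerState.ConservativeRaster =
    D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON; // PSO creation fails if unsupported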
Same old ATI/AMD tune, isn't it?
At least ATI worked to get TruForm integrated as a game feature. AMD got involved and immediately turned an R600 feature into a footnote in history by tossing ATI's game development program into the nearest dumpster.
P.S. Didn't AMD have a hand in developing GPU tessellation, and have fully supporting DX11 GPUs before NVIDIA? Would that not also be around the same time GCN was proving to be more powerful than Kepler in DirectCompute and GPU acceleration?
Well, I know for sure tessellation works fine on my AMD GPUs, and AMD-optimized tessellation looks great, especially in Gaming Evolved games.
No. ATI's TruForm was the first GPU tessellation hardware followed by Matrox's Parhelia line (N-Patch support), then ATI's Xenos (R500 / C1) graphics chip for the Xbox 360. All of these pre-date AMD's involvement with ATI.
Yes. AMD's Evergreen series were the first DirectX 11-compliant GPUs. They arrived just over six months before Nvidia's own DX11 cards.
Do tell? You're starting to sound like AMD's Roy Taylor.
DirectCompute, like most computation, depends upon the workload, the software, and, just as importantly, software support (drivers). It also depends heavily upon the emphasis placed upon the designs by the architects. A case in point is the tessellation you seem very keen to explore: ATI pioneered it, but it went largely unused. Under AMD's regime tessellation wasn't prioritized, where Nvidia made Maxwell a tessellation monster. Neither DC nor tessellation on their own define an architecture, or are indicators of much besides that facet.
Well, if Vulkan and the new OpenGL extensions take off like people are expecting, the DirectX coding arena may have their hand forced. If the new OGL turns into the old OGL, Microsoft can probably wait ten years before updating DirectX.
Ten years, haha. Yeah, DX12 should be easier to work with, and I mean, what are they even going to add to make it more complex that would be a real game-changer like tessellation was?
1990 REHASHED HAHA
DX13? :)
Read http://devgurus.amd.com/message/1308511
Question:
I need for my application that every draw produces at least one pixel of output (even if this is an empty triangle = 3 identical points). NVIDIA has an extension (GL_NV_conservative_raster) to enable such a mode (on Maxwell+ cards). Is there a similar extension on AMD cards?
Answer (from AMD):
Some of our hardware can support functionality similar to that in the NVIDIA extension you mention, but we are currently not shipping an extension of our own. We will likely hold off until we can come to a consensus with other hardware vendors on a common extension before exposing the feature, but it will come in time.
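As a point of reference, here is roughly how the NVIDIA path in question is used from an application. A sketch (untested), assuming a GL 3.0+ context and a loader that exposes the NV tokens:

#include <GL/glew.h> // any loader exposing GL_NV_conservative_raster tokens
#include <cstring>

// Returns true if the named extension is in the driver's extension list.
static bool HasExtension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
        if (strcmp((const char *)glGetStringi(GL_EXTENSIONS, i), name) == 0)
            return true;
    return false;
}

// ... after context creation:
if (HasExtension("GL_NV_conservative_raster"))
    glEnable(GL_CONSERVATIVE_RASTERIZATION_NV); // vendor-specific, Maxwell 2.0+
// else: no cross-vendor extension existed at the time, per AMD's answer above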
Even NVIDIA's first-gen Maxwell card, the GTX 750 Ti, which launched in early 2014, doesn't have 12_1.
From Christophe Riccio
twitter.com/g_truc/status/581224843556843521
It seems that shader invocation ordering is proportionally a lot more expensive on GM204 than S.I. or HSW.
AMD is aware of the extreme tessellation issue, hence the improvements with the R9 285 (28 CUs).
Qualifications of the writer: written by Alessio Tommasini, DirectX 12 early access program member.
In seriousness, I may browse through it when I get a chance...
What it seems to be is that no GPU out now, or coming soon, fully supports DX12. NVIDIA's 12_1 is not better than AMD's 12_0 (or vice versa); it just seems to be a different feature sub-subset.
Edit: Even these sub-subset numbers (12_0 or 12_1) do not cover all of DX12's features. Also, it would seem that DX11 is a subset of DX12. Therefore, while GCN 1.0 may not support some of the new features in DX12, it still "supports" DX12. Even Fermi appears to "support" DX12, but only at the 11_0 feature level. The tier levels below are the resource binding tiers (see the GDC lecture linked at the end of the thread):
•Tier 1: INTEL Haswell and Broadwell, NVIDIA Fermi
•Tier 2: NVIDIA Kepler, Maxwell 1.0 and Maxwell 2.0
•Tier 3: AMD GCN 1.0, GCN 1.1 and GCN 1.2
•Feature level 11.0: NVIDIA Fermi, Kepler, Maxwell 1.0
•Feature level 11.1: AMD GCN 1.0, INTEL Haswell and Broadwell
•Feature level 12.0: AMD GCN 1.1 and GCN 1.2
•Feature level 12.1: NVIDIA Maxwell 2.0
The max DX12 support would be Tier 3 and Feature Level 12.1.
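These two axes are queried separately at runtime. A minimal sketch (untested) of asking a D3D12 device for its highest feature level and its resource binding tier:

// Highest supported feature level, from a list the app cares about.
D3D_FEATURE_LEVEL requested[] = {
    D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
    D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
};
D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
levels.NumFeatureLevels        = _countof(requested);
levels.pFeatureLevelsRequested = requested;
device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));
// e.g. levels.MaxSupportedFeatureLevel == D3D_FEATURE_LEVEL_12_0 on GCN 1.1/1.2

// Resource binding tier, from the general options struct.
D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));
// e.g. opts.ResourceBindingTier == D3D12_RESOURCE_BINDING_TIER_3 on GCN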
Tier level and feature level support would be useless if the said features are slow, i.e., if the hardware acts as a decelerator.
The feature level is a classification of what DX12 features said GPU supports. So, Fermi, Kepler, Maxwell 1.0 all support feature level 11.0 of DX12. GCN 1.0, Haswell, and Broadwell support feature level 11.1 and so on.
With all that said, "Despite being pleonastic, it is worth to restate that feature level 12.1 does not coincide with an imaginary “full/complete DirectX 12 support” since it does not cover many important or secondary features exposed by Direct3D 12."
The basic features of DX12 will be available to most of these GPUs (as well as DirectX 11.3). It will be up to game developers as to which additional features they might include, but I would say that if there is no broad-based support among existing cards, the additional features will be options within the game code rather than mandatory; a quick sketch of that kind of runtime gating follows below.
But of course, we don't know about the new card that's about to be released.
Anyway, the German site has a nice table: www.computerbase.de/2015-06/directx-12-amd-radeon-feature-level-12-0-gcn/
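On the point above about optional rather than mandatory features: the usual pattern is a runtime branch on the cap bits queried earlier. A hypothetical sketch (EnableRovOIT and EnableDepthPeelingOIT are invented names for illustration):

// Gate a 12_1-only effect on its cap bit instead of requiring FL 12_1 outright.
if (opts.ROVsSupported)
    renderer.EnableRovOIT();          // ROV path (e.g., Maxwell 2.0)
else
    renderer.EnableDepthPeelingOIT(); // conventional fallback for FL 11_0+ parts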
Microsoft dedicated a lecture to Resource Binding tier levels during GDC 2015.
channel9.msdn.com/Events/GDC/GDC-2015/Advanced-DirectX12-Graphics-and-Performance
Time stamp: 8:09