Friday, September 16th 2016
AMD Actively Promoting Vulkan Beyond GPUOpen
Vulkan, the new-generation cross-platform 3D graphics API governed by the Khronos Group, the consortium behind OpenGL, is gaining in relevance, with Google making it the primary 3D graphics API for Android. AMD says it is actively promoting the API. Responding to a question by TechPowerUp at the recent Radeon Technologies Group (RTG) first-anniversary presser, RTG chief Raja Koduri confirmed that the company is actively working with developers to add Vulkan support to their titles, and to optimize them for Radeon GPUs. This, we believe, could be due to one of several strategic reasons.
First, Vulkan inherently works better on AMD's Graphics Core Next (GCN) GPU architecture, because it is largely derived from Mantle, AMD's now-defunct 3D graphics API that brought many of the "close-to-metal" features that make game consoles so performance-efficient over to the PC ecosystem. The proof of this pudding is the 2016 reboot of the iconic first-person shooter "Doom," in which Radeon GPUs get significant performance boosts when switching from the default OpenGL renderer to Vulkan; the boosts aren't as pronounced on NVIDIA GPUs.

Second, and this could be a long shot, the growing popularity of Vulkan could give AMD leverage over Microsoft to steer Direct3D development toward areas in which AMD GPUs are inherently strong, such as asynchronous compute and tiled resources (where AMD GPUs benefit from higher memory bandwidth). AMD has been engaging aggressively with game studios working on AAA titles that use DirectX 12, and so far AMD GPUs have gained or sustained performance better than NVIDIA GPUs when switching from DirectX 11 fallbacks to DirectX 12 renderers.
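For readers wanting to see what "asynchronous compute" looks like at the API level: Vulkan exposes it through separate queue families, and an application can check whether a GPU offers a compute-only queue alongside its graphics queue. Below is a minimal C sketch of that query, assuming the standard Vulkan loader and headers are installed (error handling trimmed); it is illustrative only, not production code.

```c
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

int main(void)
{
    /* Create a bare-bones Vulkan instance. */
    VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_0 };
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                 .pApplicationInfo = &app };
    VkInstance inst;
    if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) return 1;

    /* Grab the first physical device. */
    uint32_t ndev = 0;
    vkEnumeratePhysicalDevices(inst, &ndev, NULL);
    if (ndev == 0) return 1;
    VkPhysicalDevice dev;
    ndev = 1;
    vkEnumeratePhysicalDevices(inst, &ndev, &dev);

    /* Inspect its queue families for a compute-only queue, which is
       what lets compute work run alongside graphics ("async compute"). */
    uint32_t nfam = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(dev, &nfam, NULL);
    VkQueueFamilyProperties *fams = malloc(nfam * sizeof *fams);
    vkGetPhysicalDeviceQueueFamilyProperties(dev, &nfam, fams);

    for (uint32_t i = 0; i < nfam; ++i) {
        int compute  = (fams[i].queueFlags & VK_QUEUE_COMPUTE_BIT)  != 0;
        int graphics = (fams[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
        if (compute && !graphics)
            printf("queue family %u: dedicated compute (async-friendly)\n", i);
        else if (compute && graphics)
            printf("queue family %u: graphics + compute\n", i);
    }

    free(fams);
    vkDestroyInstance(inst, NULL);
    return 0;
}
```

On GCN parts this query typically turns up compute-only queue families backed by the hardware's asynchronous compute engines, which is where the Vulkan/DirectX 12 async gains tend to come from.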
AMD has already "opened up" much of its GPU IP to game developers through its GPUOpen initiative, where developers will find detailed technical resources on how to take advantage of not just AMD-specific GPU IP, but also some industry standards. Vulkan figures prominently among the resources AMD is giving away through the initiative.
Vulkan still has a long way to go before it becomes the primary API in AAA releases. For most gamers who don't tinker with advanced graphics settings, "Doom" still runs on OpenGL and "The Talos Principle" on Direct3D 11 by default, for example. It could be a while before a game runs on Vulkan out of the box, and much will depend on how Khronos, and more importantly AMD, promote its use, not just during game development, but also in long-term support. A lot will also depend on NVIDIA, which holds about 70% of the PC discrete GPU market, supporting the API. Over-customizing Vulkan with vendor-specific extensions would send it the way of OpenGL; the burden of keeping up with too many vendor-specific extensions is what drove game developers to Direct3D in the first place.
111 Comments on AMD Actively Promoting Vulkan Beyond GPUOpen
It seems like Nvidia is currently making a big cash grab at the high end, since they know Paxwell will lose its massive performance advantage once Vulkan/DX12 are standard. Thus they will only sell their overpriced 10xx series as long as it isn't a dominating issue. If most releases are using async compute by March, and AMD's Vega is indeed as strong as the Titan X, Nvidia will launch the 1180 by the end of spring.
AMD's efficiency is totally fine; 14 nm just isn't mature for big chips yet. Furthermore, you should look at AMD's efficiency in Vulkan. Their far cheaper-to-produce 480 is roughly as efficient as the 1070 (like a 10% difference).
But don't get it twisted... this doesn't make AMD king of the hill all of a sudden. Not by a long shot.
First, there is no evidence supporting the claim that it works inherently better on AMD hardware. In fact, the only "evidence" is games specifically targeting AMD hardware which were later ported.
Secondly, Vulkan is not based on Mantle. As you can read in the specs, Vulkan is built on SPIR-V. SPIR-V is the compiler infrastructure and intermediate representation of a shader language, and it is the basis for both OpenCL (2.1) and Vulkan. The features of Vulkan are built on top of this, and this architecture has nothing in common with either Mantle or Direct3D*. What Vulkan has inherited from Mantle is not the underlying architecture, but some aspects of the front end. To claim that one piece of software is based on another for implementing similar features is obviously gibberish, just like no one claims Chrome is based on IE for implementing similar features. Any coder will understand this.
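To make the "built on SPIR-V" point concrete: a SPIR-V module is just a stream of 32-bit words with a fixed five-word header, independent of any vendor's API. Here is a minimal C sketch that sanity-checks a compiled module; the shader.spv path is only an example (such a file can be produced with e.g. glslangValidator -V shader.frag -o shader.spv).

```c
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "shader.spv"; /* example path */
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }

    uint32_t header[5]; /* magic, version, generator, id bound, schema */
    if (fread(header, sizeof(uint32_t), 5, f) != 5) { fclose(f); return 1; }
    fclose(f);

    /* The SPIR-V magic number; a byte-swapped value would indicate the
       opposite endianness, which this sketch does not bother handling. */
    if (header[0] != 0x07230203u) {
        fprintf(stderr, "%s is not a SPIR-V module\n", path);
        return 1;
    }
    printf("SPIR-V %u.%u, generator 0x%08x, id bound %u\n",
           (header[1] >> 16) & 0xff, (header[1] >> 8) & 0xff,
           header[2], header[3]);
    return 0;
}
```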
AMD has no real advantage on Vulkan compared to its rivals. Nvidia was in fact the first vendor to demonstrate a working Vulkan driver, and the first to release one (on both PC and Android). AMD was the last to get certification, and had to write a driver from scratch like everyone else.
*) In fact, the next Shader Model of Direct3D will adopt a similar architecture. I would expect you to know this, since you actually covered it on this news site. Nvidia has also done the same for more than a decade. Contrary to popular belief, most of GameWorks is actually open, and it's the most extensive collection of examples, tutorials and best practices for graphics development.
Do not believe everything a PR spokesman says. Nvidia is already offering excellent Vulkan support on all platforms.
Extensions have never been a problem for OpenGL; the problem has been the slow standardization process.
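For what it's worth, Vulkan at least makes extension sprawl easy to audit at runtime, since vendor-specific extensions carry VK_AMD_*/VK_NV_* prefixes right in their names. A minimal C sketch, assuming only that the Vulkan loader is installed:

```c
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

int main(void)
{
    /* First call gets the count, second fills the array; no instance
       is needed to enumerate instance-level extensions. */
    uint32_t count = 0;
    vkEnumerateInstanceExtensionProperties(NULL, &count, NULL);

    VkExtensionProperties *props = malloc(count * sizeof *props);
    vkEnumerateInstanceExtensionProperties(NULL, &count, props);

    for (uint32_t i = 0; i < count; ++i)
        printf("%s (spec version %u)\n",
               props[i].extensionName, props[i].specVersion);

    free(props);
    return 0;
}
```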
----- Nvidia has offered superb OpenGL support for Linux for more than a decade, but they've been the only one. You are talking about the "hippie" drivers; nobody who cares about stability, features or performance cares about those. The new "open" drivers are based on Gallium, which is a generic GPU abstraction layer, so just forget about optimized support for any API on those.
----- I don't know which fantasy world you live in, but since AMD released their last major architecture, Nvidia has released Maxwell and Pascal.
Pascal was introduced because Nvidia was unable to complete Volta by 2016; it brings forward some of Volta's features. This was done primarily for the compute-oriented customers (Tesla).
There is no major disadvantage with Nvidia's architectures vs. GCN in terms of modern APIs.
----- Yes, the good Direct3D 12 titles will come in a while, perhaps early next year. It always takes 2-3 years before the "good" games arrive.
Does anyone remember Crysis? The under-performing GCN cards have nothing to do with the APIs.
We all know Nvidia's architectures are much more advanced, and one of their advantages is more flexible compute cores and a very powerful scheduler. AMD has a simpler approach: simpler cores and a simple scheduler. When you compare the GTX 980 Ti to the Fury X you'll see that Nvidia is able to saturate its GPU while Fury is more than one-third unutilized. So AMD typically has ~50% more resources for comparable performance. But are there workloads which benefit from AMD's simpler brute-force approach? Yes, of course. A number of compute workloads actually perform very well on GCN, namely workloads that are essentially streams of independent data. AMD clearly has more computational power, so when their GPUs are saturated they can perform very well.

The problem is that rendering typically has a lot of internal dependencies. E.g. resources (textures, meshes) are reused several times in a single frame, and if five cores request the same data they have to wait in turn. That's why scheduling is essential to saturating a GPU during rendering. I would actually draw a parallel with AMD Bulldozer vs. Intel Sandy Bridge and newer: AMD clearly has more computational power in competing products, but is only able to utilize it in certain (uncommon) workloads. AMD is finally bringing the necessary improvements with Zen, and they need to do a similar thing with GCN.
In addition, Nvidia makes a number of smart implementation choices in rendering. For example, Maxwell and Pascal rasterize and process fragments in tiles, while AMD processes in screen space. This allows Nvidia to use less memory bandwidth and keep all the important data in L2 cache, ensuring the GPU stays saturated. With AMD, on the other hand, the data has to travel back and forth between GPU memory and L2 cache, causing bottlenecks and cache misses. For those who are not familiar with programming GPUs: fragment shading easily takes up 60-80% or more of rendering time, so a bottleneck here makes a huge impact. This is one of the primary reasons why Nvidia can perform better with much lower memory bandwidth.
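Nvidia has not documented how Maxwell's tiling actually works, so purely as a conceptual CPU-side sketch with made-up tile and screen sizes: the idea is to bin primitives into screen tiles first, then shade tile by tile, so each framebuffer region is touched while it sits in cache instead of streaming through memory repeatedly.

```c
#include <stdio.h>
#include <stdlib.h>

#define SCREEN_W 1920
#define SCREEN_H 1080
#define TILE     64   /* illustrative tile size, not what any GPU uses */
#define TILES_X  ((SCREEN_W + TILE - 1) / TILE)
#define TILES_Y  ((SCREEN_H + TILE - 1) / TILE)

typedef struct { float x0, y0, x1, y1; } Box; /* triangle bounding box */

int main(void)
{
    Box tris[] = { {100, 100, 300, 250},
                   {1800, 900, 1919, 1079},
                   {0, 0, 1920, 64} };
    int ntris = (int)(sizeof tris / sizeof tris[0]);

    /* Binning pass: record how many triangles touch each tile. */
    int *counts = calloc(TILES_X * TILES_Y, sizeof(int));
    for (int t = 0; t < ntris; ++t) {
        int tx0 = (int)tris[t].x0 / TILE, tx1 = (int)tris[t].x1 / TILE;
        int ty0 = (int)tris[t].y0 / TILE, ty1 = (int)tris[t].y1 / TILE;
        for (int ty = ty0; ty <= ty1 && ty < TILES_Y; ++ty)
            for (int tx = tx0; tx <= tx1 && tx < TILES_X; ++tx)
                counts[ty * TILES_X + tx]++;
    }

    /* Shading pass: walk tile by tile, so each framebuffer region is
       visited once while cache-resident, rather than per-triangle. */
    for (int ty = 0; ty < TILES_Y; ++ty)
        for (int tx = 0; tx < TILES_X; ++tx)
            if (counts[ty * TILES_X + tx])
                printf("tile (%d,%d): %d triangle(s)\n",
                       tx, ty, counts[ty * TILES_X + tx]);

    free(counts);
    return 0;
}
```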
We also know Nvidia has a much more powerful tessellation engine, etc.
----- More games are optimized for AMD this time around because of the major gaming consoles.
Async compute is fully supported by Nvidia, but the advantage depends on unutilized GPU resources. In many cases games try to utilize the same resources for two queues, and since Nvidia is already better at saturating their GPUs, they will get "smaller improvements".
But what's your point? It's still a bigger progression than GCN 1.4.
Though it is great news that Android will use it, that doesn't mean the PC market will adopt it. I certainly hope it does; more competition never hurts. I won't hold my breath though.
It was only when Maxwell and the Fury X came out that I finally felt like I no longer had an Enthusiast card. Meanwhile I had been maxing out prettier and prettier games while my friend with the 680 had to continually turn more and more settings down because he didn't have enough VRAM or shaders.
The same situation is happening now. The Fury is selling for $300 and competing with the 1070 and 1080. That means people who bought it over the 980 Ti are laughing all the way to the bank as they watch a more expensive Nvidia card start losing to the 390X! Furthermore, people building NEW PCs are buying up a lot of one-year-old Furys, because apparently they are beating 1070s in the latest games for two-thirds the cost.
The 680 ended up running out of VRAM. The 680 4GB is still generally faster than the 7970 (but yes, the 7970 ended up being faster once the framebuffer ran out), over the course of 4 years. There have been 2-3 generations of enthusiast cards since then.
Couple of things:
- "Soon enough I had a card playing as well as my friends 780 and 780ti" - sorry, no matter what you did to that 7970 it did not play as well as a 780(ti) as it was/is 15-20% slower at stock. If you consider overclocking then the 780's really pull away.
- There is no card competing with the 1080 (I wish there were); even with all the boosts it's still 15-20% faster than the Fury X. Is this the same friend that bought the 2GB 680? Because he didn't learn his lesson... that 4GB framebuffer isn't going to be enough if he wants to hold onto it for 4 years...
Bottom line:
Hardware will go obsolete. Buy a card based on overall performance. The whole idea that "ALL THE GAMES ARE GOING TO COME OUT AT DX<whatever> AND YOU NEED TO FUTURE PROOF" is trash. No, your Fury X isn't going to beat a 1080; yes, you will need to sell a kidney to afford a high-end card; and yes, it will drop in value faster than that brand-new yellow Corvette you just drove off the lot.
Trolling aside though, all but the most strident knew this was going to happen. Kepler and up have been heavily optimised for DX11/OpenGL pathways and are missing a good chunk of compute (which simply wasn't needed for gaming back in 2011). Pascal is simply on the cusp of the change back to compute-oriented architectures being preferential. The more interesting card will be Volta, to see if Nvidia can realign itself to the new paradigm.
That being said, the wind is certainly blowing against Nvidia. If it doesn't get a design win soon and regain market share (remembering consoles here), it will see its influence on pathways continue to decrease as developers target the GCN architecture as a commonality between platforms - a sort of reversal of how AMD got fucked over with GameWorks.
(For the record, I own a 1080. Can't argue with it being the fastest card out with no competition from AMD - but IMO it'll date pretty quickly. And with most AAA titles this season coming out with Vulkan/DX12, Nvidia had better hope AMD keeps dragging its feet on its high-end cards, as going off the RX 480, it continues to look like the better buy than the 1060 every time a new AAA title comes out.)
Ok you back? Good.
1) The 7970 overclocks better than anything that has been released since then. My 7970 ran at 1220/1840. My brother's 7950 ran at 1120/1800, and all of my crypto-mining 7950s ran at 1100+/1800+. Those are 40% overclocks lmao! My 7970 benches as well as a 980 in Deus Ex: MD and BF1. So drop that argument here.
2) 2-3 generations? You completely missed what I was saying. I said that within a year of the 7970's launch it was ALREADY beating the 680 by 10-20% on average. Most people keep their cards for 2-3 years in my experience.
Furthermore, just because it is 1-2 generations newer doesn't make a difference. Everyone CONSTANTLY complains about AMD's recent trend of re-branding old GPUs. I will admit that I think it is stupid too, but can you blame them? Radeon is a fraction of Nvidia's size. If they can sell the 7970 two years later and have it compete with the 970, they will lmao. Hence why I just bought a Fury for $310 - it beats the 1070 in TODAY's games. That's just stupid.