I already posted why, posts #80 and #92.
Thanks for partially quoting me and removing all the context, I guess? My point was that Nvidia can make much more with Tegra in automotive than they can with consoles, not that they consider $20 too little (else they wouldn't have bothered with things like the GT 720).
Now, onwards to your posts:
3. Most important. It keeps Nvidia out of the consoles, and considering that many top games are ports from consoles, it keeps AMD's GPUs alive.
That's also the reason why Nvidia would kill to be able to supply an x86 APU-like chip, even with zero margins. But they can't.
We can all see the effects of PhysX and GameWorks on AMD GPUs. We can even see the effects of GameWorks on older Nvidia GPUs. If Nvidia were controlling the GPUs in consoles, its proprietary techs would already be a de facto standard. Every game programmed on consoles would have PhysX and GameWorks at its core. It would have been close to impossible for AMD to write drivers that could run even the simplest console game ports without problems and bad performance. Every game would have been a Project Cars AT BEST.
Keeping Nvidia out of consoles is an incredibly minor win for AMD compared to Nvidia completely and utterly dominating AMD in the HPC and server space: $20 per chip shipped vs thousands of dollars per Tesla card, more in support contracts, and even more in fully built solutions from Nvidia like GRID.
PhysX works partially even on AMD systems (the GPU-accelerated path needs Nvidia hardware, while the CPU solver runs anywhere), and of the two it is the bigger vendor lock-in risk.
GameWorks, on the other hand, is a much more traditional closed-source code-licensing affair, with no restrictions on running it on non-Nvidia hardware. It runs slowly on everything because it's heavy (you know, a bit like Crysis back in 2007... except now it has a pretty marketing name instead of being nothing more than a meme). Why does it run particularly slowly on AMD GPUs? Quite simply because AMD GPUs are designed quite differently from Nvidia's. If most games had GameWorks, AMD would simply respond by designing a new GPU that looks a lot more like the Fermi/Kepler/Maxwell evolutionary family than like GCN. No more, no less.
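For illustration only, here is a minimal sketch of how middleware of that kind sits on top of a vendor-agnostic graphics API. None of these names (GraphicsDevice, HairEffect, drawTessellatedPatch) come from the actual GameWorks SDK; they are hypothetical stand-ins. The point is simply that there is no vendor check anywhere, just a heavy, fixed workload issued through the same API calls on every GPU:

```cpp
// Hypothetical sketch, not GameWorks code: a closed-source effects library
// that only ever talks to the platform graphics API and never asks which
// vendor's driver is underneath.
#include <cstdio>
#include <string>

// Stand-in for the platform API (think D3D11): identical entry points no
// matter whose driver implements them.
struct GraphicsDevice {
    std::string vendor;  // informational only, never branched on below
    void drawTessellatedPatch(int tessFactor) {
        std::printf("[%s driver] draw patch, tessellation factor %d\n",
                    vendor.c_str(), tessFactor);
    }
};

// "HairEffect" middleware: shipped as a compiled library, configured by the
// game, and entirely unaware of the GPU vendor.
class HairEffect {
public:
    explicit HairEffect(int quality) : tessFactor_(quality * 16) {}
    void render(GraphicsDevice& dev) const {
        // No vendor check, just a heavy workload that some architectures
        // (strong geometry/tessellation throughput) digest better than others.
        dev.drawTessellatedPatch(tessFactor_);
    }
private:
    int tessFactor_;
};

int main() {
    GraphicsDevice nv{"NVIDIA"};
    GraphicsDevice amd{"AMD"};
    HairEffect hair(/*quality=*/4);  // the game picks "Ultra"
    hair.render(nv);                 // exact same calls either way;
    hair.render(amd);                // only the hardware differs
}
```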
Much the same happened with the GeForce 8 series and the Radeon HD 2000 series when Direct3D 10 changed the rendering pipeline completely: the industry as a whole moved from dedicated pixel and vertex pipelines to much more general-purpose unified shader processors instead.
Much the same also happens on the CPU side of things, where Intel and AMD have vastly different CPU designs that perform vastly differently depending on the workload: currently Bulldozer vs Haswell/Broadwell, before that NetBurst vs K8, and even further back, K6 vs Pentium II/P6.
Nothing to see here with GameWorks/PhysX, so move along and stop bringing it up, unless you're ripping apart AMD's driver teams' excuses, in which case, do bring it up as much as possible.
Now, if you say that GameWorks is bad from a conflict-of-interest point of view, then remember that TressFX is also around, as well as various other AMD-centric stuff under AMD's Gaming Evolved program. Besides, GameWorks has always existed, albeit less well marketed, under the old "The Way It's Meant to Be Played" program, yet you didn't see people whining about that after the first 3-4 months of suspicion, and even then, much less loudly than now.
Consoles were not x86 PCs before this generation. Also, Nvidia didn't have almost 80% of the discrete graphics card market on PCs. Not to mention that it was not aggressively pushing proprietary techs like GameWorks, PhysX, G-Sync etc. as they do today. Games for PCs also were not ports from consoles. All these, combined with the deeper pockets of Nvidia and the stronger relations they have with game developers, would give them an absolute advantage over AMD. And Mantle couldn't become the de facto standard for many reasons: no money, no market share, the competition had much bigger influence on game developers, and I also think consoles don't use Mantle anyway.
You have to realize something first. Mantle was not meant to give a big advantage to AMD's GPUs. It was made to make that awful Bulldozer architecture look better at games. It was meant to close the gap between Intel CPUs and FX CPUs, and to give an extra push to APU performance. That's why AMD gave Mantle to Khronos, and that's why they stopped developing it when Microsoft announced DX12. The day Microsoft announced DX12, AMD's plan succeeded. Windows 10 could have come without DX12, like Windows 8. You don't know that Microsoft was going to come out with DX12. I don't know that. The only company that needed DX12 yesterday was AMD, with its mediocre DX11 drivers and that useless Bulldozer architecture (Thuban at 32nm, you morons. Thuban at 32nm). Intel, Nvidia, even Microsoft were happy with the situation. None of those three cared if DX12 would come out or not. On the other hand, AMD was desperate for a low-level API. Mantle was the best wasted money AMD ever spent.
Those benchmarks did show AMD's problem with its DX11 drivers. But I guess the core of their drivers couldn't change. They should have fixed that problem the day they decided to follow the "more cores" route on the CPU front.
I am not going to repeat myself here. We just see a few things completely differently.
As I said before, the CPU architecture is an irrelevant argument. Console makers would have been just as happy with ARM or POWER or even MIPS. Obviously nobody besides AMD found it profitable enough to bother custom engineering the silicon for the console makers.
Mantle was a push mostly from DICE (Johan Andersson, specifically, which is probably also why he/DICE got the first Fury X, ahead of reviewers), not from AMD, though AMD was by far the more responsive company, likely because it would make CPUs less of an argument in games. And sure, while Microsoft was happy with D3D11 as it was, with no real plans for major re-architecting in the works, Nvidia, AMD and game devs would keep pushing new features, and MS and Khronos (OpenGL/Vulkan) would oblige by extending the APIs as needed, as they largely have since D3D10/OGL4. Before Johan pushed and AMD noticed, AMD was happy to coast along and extend D3D10-11/OGL4.x just like Nvidia.
Oh, and no, given where it came from, it's blindingly obvious that Mantle was all about getting better 3D performance. Propping up Bulldozer and Jaguar was just an excellent side benefit as far as Johan (DICE) was concerned, but excellent marketing material for AMD if they could make it stick. And try they did, getting a decent amount of support from all corners, and, whether intentionally or not, spending a fair bit of time keeping it closed-source despite their open-source claims.
AMD then gave Mantle to Khronos because not only was Mantle's job as a proof of concept done, but they had also finally sanitized the code and manuals enough that it could be handed over and made open-source. Besides, D3D12 was on the way and Khronos had started the Next Generation OpenGL initiative (NGOGL/glNext) to look into a lower-level API for OpenGL users, so suddenly Mantle was not something AMD could use as a competitive advantage anymore, and they handed it over.
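To put the low-level-API argument in concrete terms, here is a deliberately naive toy model (no real graphics API, and the costs are invented numbers, not measurements) of why cutting per-draw driver work matters so much to a slow Jaguar or Bulldozer core:

```cpp
// Toy cost model: in a D3D11-style immediate context the driver re-validates
// and translates state for every draw, every frame, on the CPU; in a
// Mantle/D3D12/Vulkan-style API the game records command buffers once and
// replays them each frame for a fraction of that cost.
#include <cstdio>

int main() {
    const long long drawsPerFrame  = 5000;  // assumed draw-call count per frame
    const long long frames         = 60;    // one second of gameplay
    const long long validationCost = 20;    // CPU "work units" per validated draw (made up)
    const long long replayCost     = 1;     // CPU "work units" per replayed draw (made up)

    const long long immediateStyle = drawsPerFrame * validationCost * frames;
    const long long recordedStyle  = drawsPerFrame * replayCost * frames;  // one-time recording cost ignored

    std::printf("D3D11-style CPU work per second:  %lld units\n", immediateStyle);
    std::printf("Mantle-style CPU work per second: %lld units\n", recordedStyle);
    // The absolute numbers mean nothing; the point is that the per-draw CPU
    // overhead, which is what chokes a weak core, shrinks by an order of
    // magnitude or more, regardless of which GPU is plugged in.
}
```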
However, it is irrelevant in the grand scheme of things: AMD failed to scale Bulldozer and GCN, partly because basically everyone besides Intel failed to deliver 22/20nm, but mostly because they made foolish bets. On the CPU side, they tried to build a better, lower-clocked, wider, higher-core-count NetBurst and smacked right into the same problems Intel failed to solve over 4 (FOUR!) whole manufacturing nodes, and on the GPU side... GCN is basically AMD's Fermi, their first truly compute-oriented architecture, and it runs similarly hot, rather amusingly.
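Those "same problems" boil down to the power wall. As a rough rule of thumb (illustrative, not a measured figure for any specific chip), dynamic power scales as

$$P_{dyn} \approx \alpha \, C \, V^2 \, f, \qquad V \text{ rising roughly with } f \;\Rightarrow\; P_{dyn} \sim f^3,$$

so you cannot simply clock your way out of a per-clock performance deficit: power and heat blow up long before the clocks do, which is exactly the wall NetBurst kept hitting node after node.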
Still largely irrelevant in the scheme of consoles though: all three consoles run modified versions of various existing APIs, all with very low-level access to the hardware; effectively they already had Mantle, or whatever Nvidia would have cooked up if they had accepted.