AMD at its Computex event confirmed that "Vega 20" will power Radeon Instinct and Radeon Pro graphics cards, and that it has no plans to bring the chip to the client segment.
Enthusiast gamers are at the very bottom of AMD's priority list; don't expect them to prioritize or innovate in that segment.
While it's (obviously) disappointing that AMD has yet to respond to Nvidia's performance/power gains since Pascal, and competition is desperately needed in the consumer GPU space, what they're doing makes sense in terms of a) keeping AMD alive, and b) letting them bring a truly competitive product to market in time.
Now, to emphasize: this sucks for us end-users. It really sucks. I would much rather live in a world where this wasn't the situation. But it's been pretty clear since the launch of Vega that this is the planned way forward, which makes sense given that AMD only recently returned to profitability and thus has to heavily prioritize where it spends its R&D money.
But here's how I see this: AMD has a compute-centric GPU architecture, which still beats Nvidia (at least Pascal) in certain perf/W and perf/$ metrics when it comes to compute. At the very least, they're far more competitive there than they are in perf/W for gaming (which again limits their ability to compete in the high end, where cards are either power or silicon-area limited). They've decided to play to their strengths with the current arch, and pitch it as an alternative to the Quadros and Teslas of the world. Which, as it looks right now, they're having reasonable success with, even with the added challenge that the vast majority of enterprise compute software is written for CUDA. Their consistent focus on promoting open-source software and open standards has obviously helped here. The key, though, is that Vega - as it stands today - is a decently compelling product for this type of workload.
Then there's the question of what they could have done to improve gaming performance, as this is obviously where Vega lags behind Nvidia the most. This is an extremely complicated question. According to AMD around launch time, the majority of the increase in transistor count between Polaris and Vega was spent on increasing clock speeds, which ... well, didn't really do all that much. Around 200 MHz (1400-ish to 1600-ish), or ~14%. It's pretty clear they'd struggle to go further here. Now, I've also seen postings about 4096 SPs being a "hard" limit of the GCN architecture for whatever reason. I can't back that up, but it would at least seem to make sense in light of there being no increase between the Fury X and Vega 64. So the architecture might need significant reworking to accommodate a wider layout (though I can't find any confirmation that this is actually the case). They're not starved for memory bandwidth either (the Vega 56 and 64 match or exceed Nvidia's best there). So what can they improve without investing a massive amount of money into R&D? We know multi-chip GPUs aren't ready yet, so ... there doesn't seem to be much. They'll improve power draw and possibly clock speeds by moving to new process nodes, but that's about it.
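To spell out that math (using rough launch-era reference clocks, so treat the exact figures as ballpark): (1600 MHz - 1400 MHz) / 1400 MHz ≈ 0.14, meaning the bulk of Vega's added transistors bought only about a 14% clock bump over Polaris. That's a lot of silicon for a modest gain.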
In other words: it seems very likely that AMD needs an architecture update far bigger than anything we've seen since the launch of GCN. This is a wildly expensive and massively time-consuming endeavor. Also note that AMD has about 1/10 the resources of Nvidia (if that), and has until recently been preoccupied with reducing debt and returning to profitability, all while designing a from-scratch x86 architecture. The bottom line: they haven't had the resources to do this. Yet.
But things are starting to come together. Zen is extremely successful with consumers, and looks set to repeat that in the enterprise market, which will give AMD much-needed funds to increase R&D spending. Vega was highly competitive in compute when it launched and still seems to do quite well, even if their market share is a fraction of Nvidia's - it's still bringing in some money. All the while, this situation has essentially forced AMD to abandon the high-end gaming market. Is this a nice decision? No; as a gamer, right now, I don't like it at all. But for the future of both gaming and competition in the GPU market in general, I think they're doing the right thing: hold off today, so that they can compete tomorrow. Investing what little R&D money they had into putting some proverbial lipstick on Vega to sell to gamers (which likely still wouldn't have let them compete in the high end) would have been crazy expensive and gained them nothing in the long run. Yet another "it can keep up with 2nd-tier Nvidia, but at 50 W more and $100 less" card wouldn't have grown AMD's user base much, given Nvidia's mindshare advantage. But if prioritizing high-margin compute markets for Vega for now, and leaning on Zen for the real money, allows them to produce a properly improved architecture in a year? That's the right way to go, even if it leaves me using my Fury X for a while longer than I had originally planned.
Of course, it's entirely possible that the new arch will fall flat on its face. I don't think it will, but it's possible. What's far more certain is that yet another limited-resource, short-term GCN refresh would be even worse.