Thursday, May 31st 2018
NVIDIA to Detail New Mainstream GPU at Hot Chips Symposium in August
Even as NVIDIA's next-generation graphics architecture for mainstream users remains an elusive unicorn, speculation and tendrils of smoke have kept the community on edge when it comes to the how and when of its features and introduction. NVIDIA may have launched another architecture since its current consumer-level Pascal, in Volta, but that one has been reserved for professional, compute-intensive scenarios. Speculation is rife about NVIDIA's next-generation architecture, and the posted program for the Hot Chips Symposium could be the light at the end of the tunnel, bringing a new breath of life to the graphics card market.
Looking at the Hot Chips Symposium program, the detailed section for the first day of the conference, on August 20th, lists a talk by NVIDIA's Stuart Oberman, titled "NVIDIA's Next Generation Mainstream GPU". This likely means exactly what it says: an introduction to NVIDIA's next-generation computing solution under its gaming GeForce brand - or it could be an announcement, though a Hot Chips Symposium venue for that seems slightly off the mark. You can check the symposium's schedule at the source link - there are some interesting subjects there, such as Intel's "High Performance Graphics solutions in thin and light mobile form factors", which could see talk of the Intel-AMD collaboration in Kaby Lake G, and possibly of the work being done on Intel's in-house high-performance graphics technologies (with many of AMD's own RTG veterans on board, of course).
Source:
Hot Chips Program
30 Comments on NVIDIA to Detail New Mainstream GPU at Hot Chips Symposium in August
That said, this new 1180 should be just a tiny bit faster and the cycle is complete.
Best day of my life. We'll see... not holding my breath for anything that puts Vega and high-end GPUs in one sentence, though. I'll see it when it's released, if ever. It's clear as day that Vega is subpar as a gaming GPU and primarily aimed at other markets. It'll do low power well. High performance? Not so much. Also, Navi is still on the roadmap and has far more potential than anything Vega in terms of gaming.
In any case, the first announcement is nearly always of the Gx-104 chip, x being whatever this family is called. Since this is their gaming card announcement, and it is a mainstream card, I contend that it will be on the Gx-104 chip, and thus the 1180 and 1170.
Going for the fastest GPU was never a good idea and it never will be :) (based on price/perf)
www.anandtech.com/show/10588/hot-chips-2016-nvidia-gp100-die-shot-released
Of course, this was after the release of products based on the GP100, GP104 and GP106.
RX 580 needs 2304 cores, 6175 GFlop/s and 256 GB/s of bandwidth to keep up with GTX 1060's 1280 cores, 3855 GFlop/s and 192.1 GB/s.
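For what it's worth, the figures above can be turned into a rough back-of-the-envelope ratio. This is just a sketch using the card specs quoted in the comment; "efficiency" here simply means delivered (equal) gaming performance per unit of raw throughput, not any official metric:

```python
# Back-of-the-envelope comparison using the RX 580 and GTX 1060 figures
# quoted above. Both cards land at roughly the same gaming performance,
# so the ratio of raw throughput shows how much extra the RX 580 burns.

rx580 = {"cores": 2304, "gflops": 6175, "bandwidth_gbs": 256.0}
gtx1060 = {"cores": 1280, "gflops": 3855, "bandwidth_gbs": 192.1}

# Per-FLOP "efficiency" of the RX 580 relative to the GTX 1060:
flops_ratio = gtx1060["gflops"] / rx580["gflops"]       # ~0.62
bw_ratio = gtx1060["bandwidth_gbs"] / rx580["bandwidth_gbs"]  # ~0.75

print(f"Per-FLOP efficiency: {flops_ratio:.0%}")
print(f"Per-GB/s efficiency: {bw_ratio:.0%}")
```

By these raw numbers the RX 580 extracts only around 62% as much gaming performance per FLOP, which is in the same ballpark as the ~67% figure mentioned below.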
Both AMD and Nvidia will probably eventually move to an MCM-based design, but the problem for AMD is that MCM will not solve their underlying problem; it will actually just make it worse. AMD's problem with Fiji, Polaris and Vega is scheduling. All of these designs have plenty of resources compared to their counterparts from Nvidia, but they struggle because AMD can't use their resources efficiently while Nvidia can. AMD sits at about ~67% of the efficiency of Nvidia in gaming workloads, but scales nearly perfectly in simple compute workloads. This clearly has to do with scheduling of resources. The parallel nature of rendering might mislead some into thinking it's easily scalable, but most don't know it's actually a pipeline of small parallel blocks, full of resource dependencies. If it's not managed well, parts of the GPU will keep having idle cycles, leading to the problem AMD currently has. Just throwing more resources at it wouldn't help either, as managing more resources well is even harder.
MCM will help with cost and yields, but it will make scaling harder too. In an MCM configuration, the GPU would have a scheduler and several separate GPU modules. The cost of transferring data between these will increase, so scheduling has to be drastically improved to keep up with the efficiency of a monolithic GPU design. Nvidia will also have to step up their game to do this well, but they are currently much better at it already. What AMD needs is a complete redesign built for efficiency rather than brute force, and of course to abandon GCN.
It's only a matter of time until GCN is also unable to play well in the midrange. AMD is literally just revamping 2012 technology, adjusting their target one step down the ladder every generation. That's not good. That's a sign of falling further behind every year. They're not moving forward, and when they do, the product fails in one way or another. This is the real trend we've seen since Fury X, and they cannot really turn it around. Neither Polaris nor Vega is sufficient for that. There is a reason GCN shines in the lower performance segments, at lower clocks and below 100% power targets: under the hood it's essentially still the same tech, optimized for 1 GHz clocks.
But the problem is that Polaris was never a good alternative to Pascal. Polaris even struggles to compete with Nvidia's lowest member in the mid range, the GTX 1060, and the GTX 1060 will be replaced this fall. It's fine that AMD can't compete with the GTX 1080 Ti, but they need strong contenders at the price points of the GTX 1060/1070/1080, because that's where they can make some money. If the gap between them keeps increasing, they will struggle to perform close to the next 1160/1170/1180 models. And if we're honest, AMD isn't really competing with the GTX 1060 or above today. Their strategy of using brute force instead of designing a new architecture is just going to make them trail farther and farther behind, until the point where their large, inefficient chips become too expensive.
Nvidia had the same problem back in the FX5000 days, but they could still stretch it enough to give us the horrible FX5900 and the FX5950(?) Ultra.