Tuesday, May 21st 2019
Sapphire Reps Leak Juicy Details on AMD Radeon Navi
A Sapphire product manager and PR director, speaking to the Chinese press, spilled the beans on AMD's upcoming Radeon Navi graphics card lineup. It looks like with Navi, AMD is targeting the heart of the serious-gamer market at two specific price points: USD $399 for a "Pro" (cut-down) product, and $499 for an "XT" (fully-fledged) product. AMD has two NVIDIA products in its crosshairs: the GeForce RTX 2070 and the RTX 2060. In the interview, the Sapphire rep mentioned "stronger than 2070" when talking about performance numbers, which we assume refers to the Navi XT variant - definitely promising. The $399 Navi "Pro" is probably being designed with a performance target somewhere between the RTX 2060 and RTX 2070, so you would pay roughly $50 more than for an RTX 2060 in exchange for noticeably higher performance.
Sapphire also confirmed that AMD's Navi does not have specialized ray-tracing hardware on the silicon, but that such technology will debut with "next year's new architecture". They also suggested that AMD is unlikely to scale Navi up to the enthusiast segment, and that the Vega-based Radeon VII will continue to be the company's flagship product. On the topic of Radeon VII custom designs, Sapphire commented that "there is no plans for that". On the other hand, Sapphire is actively working on custom designs based on the Navi architecture, and mentioned that work on a "Toxic" version of Navi is complete, and that it is watercooled. Many people have speculated that AMD will unveil Navi at its Computex keynote address on May 27. Sapphire confirmed that date, and added that the launch will follow on July 7, 2019.
Source:
Zhihu (Blog)
Comments on "Sapphire Reps Leak Juicy Details on AMD Radeon Navi"
And as @Vayra86 pointed out above: low-end cards don't sell if they're too expensive. Sticking a $70 AIO on a $200 RX 580 doesn't make sense, but it does on a $500 RTX 2070, even though they're roughly the same wattage, because the cost of the cooler then represents a much smaller percentage of the total price, and that market segment is generally more open to "premium cooling".
No, but it does make sense to not change the fundamentals of a chip architecture and keep the same name - that would be very confusing for everyone involved, particularly the people writing drivers for the hardware. And as pointed out above, GCN has not been fundamentally changed since its inception; it has been iterated upon, tweaked, upgraded, expanded, and had features added - but the base architecture is still roughly the same and works within the same framework - unlike, say, Nvidia's transition from Kepler to Maxwell, where driver compatibility fell off a cliff due to major architectural differences.
What I know right now is:
1. Cards with water cooling sport above-average power draw.
2. Until now, GCN didn't do TBR (tile-based rasterization), so its power draw was well above Nvidia's.
3. The first glimpse we have of Navi is apparently water cooled.
People keep hoping for a GPU Zen; I keep seeing Bulldozer iterations...
Vega and Turing both have new small-geometry shaders that replace certain tessellation stages. In Vega they're called primitive shaders, and in Turing, simply, mesh shaders. AMD is waiting for standardization in the major APIs, while Nvidia seems fine with exposing them through a proprietary API extension. Both types further speed up the creation of small geometry to enhance game realism, and AMD can also use them to accelerate geometry culling on its shader arrays to help its geometry front-ends.
Nvidia's basic GPC design (mini-GPUs within an interconnect fabric) dates back to Fermi; although Kepler fixed many of Fermi's shortcomings, Maxwell was the one to really propel it forward in perf/watt, and not just from moving to immediate-mode tiling rasterizers. Nvidia has also iterated on its GPC architecture, but in a much more aggressive manner (it helps to have a large R&D budget). Turing is still a VLIW2 GPC design*, using up to 6 GPCs in TU102. 7 nm could extend that to 8 GPCs when Nvidia moves to Ampere, but with RT taking priority now, Nvidia may instead dedicate more die space to accelerating BVH traversal and intersection, reducing ray tracing's very random hits to VRAM, and, as a whole, making hybrid rendering more efficient and performant.
But, both AMD's GCN (2011) and Nvidia's GPC (2010) designs have been around for quite some time.
* Turing has to execute 2 SMs concurrently because INT32 takes up 64 of the 128 cores within an SM. So, using 2 SMs, 128 FP32/CUDA cores are tasked (a warp is still 32 threads), similarly to Pascal and prior, thereby retaining compatibility.
www.techpowerup.com/reviews/EVGA/GTX_1650_SC_Ultra_Black/28.html
Vega is more efficient than Polaris.
The original RX 400 series was okay on performance per watt, but after the GTX 1060 released, AMD tried to get that little bit of extra performance for a rather large TDP increase with its RX 580 refresh.
1. Power-hungry
2. Sound-hungry
3. Perf.-hungry?!
Therefore it (at least for Vega 56) wasn't as far off the efficiency curve as Polaris ended up.
But you do see the same crazy power draw happening with the AIO version of the Vega 64, where performance per watt dropped off a cliff.
Just so you know, informed, to the point posts are not the norm here. But this being your first, I won't report it ;)
BTW, comparing Polaris to Vega is unfair to begin with. Vega has more efficient HBM2 memory, but it is also a more powerful GPU. Vega 64 (4096 SPs, 256 TMUs, 64 ROPs) has 10,215-12,665 GFLOPS vs. the RX 570 (2048 SPs, 128 TMUs, 32 ROPs), which has 4,784-5,095 GFLOPS. Vega 64 is 114-149% more powerful on paper, but in reality it's only 97.5% faster than the RX 570 at 4K resolution.
If we really wanted to compare which one is more efficient, we would need a 32-36 CU version of Vega without HBM2.
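For anyone who wants to check the paper math, here's a minimal Python sketch of the usual theoretical-FP32 calculation (shaders × 2 ops per clock for FMA × clock); the clock figures used are the reference base/boost specs behind the GFLOPS ranges quoted above.

```python
# Theoretical FP32 throughput: shaders * 2 ops/clock (FMA) * clock (MHz) / 1000 = GFLOPS
def gflops(shaders: int, clock_mhz: int) -> float:
    return shaders * 2 * clock_mhz / 1000.0

cards = {
    # name: (shader count, base clock MHz, boost clock MHz) -- reference specs as quoted above
    "Vega 64": (4096, 1247, 1546),
    "RX 570":  (2048, 1168, 1244),
}

for name, (sp, base, boost) in cards.items():
    print(f"{name}: {gflops(sp, base):.0f} - {gflops(sp, boost):.0f} GFLOPS")

# On-paper advantage of Vega 64 over RX 570 at base and at boost clocks
base_gain  = gflops(4096, 1247) / gflops(2048, 1168) - 1   # ~1.14 -> "114% more"
boost_gain = gflops(4096, 1546) / gflops(2048, 1244) - 1   # ~1.49 -> "149% more"
print(f"Vega 64 on paper: {base_gain:.0%} to {boost_gain:.0%} more FP32 throughput")
```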
Updated May 19:
www.pcgameshardware.de/Grafikkarten-Grafikkarte-97980/Specials/Rangliste-GPU-Grafikchip-Benchmark-1174201/2/
This has GDDR6, fewer CUs, lower clocks, and worse performance per CU than the Radeon VII, which beats the 2070 by 6%.
"Stronger than 2070", yeah, right. Come on, let's not pretend that 90% of such channels cater to anything more than one fanbase or the other exclusively. "This video is nothing new from what you've already seen a hundred times" doesn't earn clicks. Look at the PCGH test above, or the one that computerbase.de recently updated too.
Worthless videos. But you go ahead and believe what they tell you. And don't forget to like and subscribe :)
They're gonna have to throw in one hell of a game bundle for people to defend this.
The four Shader Engine / Geometry Engine limit has become the Achilles' heel of GCN thus far.
To my knowledge there is no architectural reason why it wouldn't be possible, but there is a very good reason why they don't do it: adding, e.g., 50% more SPs would increase energy consumption by ~50% but only increase performance by ~20-30% at best, because a GPU with more clusters needs more powerful scheduling, and to maintain higher efficiency than its predecessor it would require more than 50% better scheduling. The problem for GCN has always been management of resources, and this is the reason GCN has fallen behind Nvidia. GCN has plenty of computational power, just not the means to harness it.
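To put rough numbers on that efficiency argument: if power scales roughly with SP count but performance doesn't, perf/watt goes backwards. A back-of-envelope sketch, using the same assumed scaling figures as above (not measured data):

```python
# Back-of-envelope perf/watt for a hypothetical GCN part with 50% more SPs.
# Assumption (from the post above): power scales ~linearly with SP count,
# while real-world performance only gains 20-30% due to scheduling limits.
power_scale = 1.50            # +50% SPs -> ~+50% power draw
perf_scales = [1.20, 1.30]    # assumed +20% to +30% performance

for perf_scale in perf_scales:
    efficiency = perf_scale / power_scale   # perf/watt relative to the original part
    print(f"+{perf_scale - 1:.0%} perf at +{power_scale - 1:.0%} power "
          f"-> {efficiency:.2f}x perf/watt ({1 - efficiency:.0%} worse)")
```

In other words, such a scaled-up part would land around 13-20% worse perf/watt than the chip it is based on, which is exactly why simply adding SPs isn't attractive without a better front-end and scheduler.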
With all this being said, my Fury X is getting long enough in the tooth that I might still get one of these if they match or beat the 2070. But I'd really like for Arcturus(?) to arrive sooner rather than later.