To say that AMD is at the forefront of new technology in the PC-graphics field is an understatement. The company vigorously pursues, and in many cases introduces, new technology in the PC consumer-graphics space. AMD's two most memorable technological breakthroughs here were Graphics Core Next, powerful new number-crunching machinery for the GPU that made not just AMD but also a lot of crypto-currency enthusiasts a lot of money, and the first GPU with GDDR5 memory, in its giant-killing Radeon HD 4870. The past year hasn't been kind to AMD in terms of GPU market share, partly because the company hasn't introduced anything major since 2013, partly because of competition from NVIDIA and its "Maxwell" architecture, and probably also because AMD has been focusing on high-volume ISV deals, such as the new-generation game consoles, and on developing the chip we're reviewing today, the Radeon R9 Fury X.
The R9 Fury X is not a case of AMD taking its existing tech and scaling it up. The company probably can't do that anymore. AMD and NVIDIA's common foundry partner for GPUs, TSMC, has seen major setbacks in implementing its next silicon-fabrication node, which threw both companies' product-development cycles off track. That hit NVIDIA hard, forcing it to rework its "Maxwell" architecture for the existing 28 nm process, but it hit AMD harder. The company was already at 275 W TBP (typical board power) with its previous-generation high-end chip, the R9 290X, and the kind of SIMD increase it made with its new chip could have pushed TBP even higher. Instead, the card has the exact same 275 W TBP as the R9 290X, with a whopping 45% increase in its number-crunching machinery. So AMD has obviously done something very big with the physically very small R9 Fury X: High Bandwidth Memory (HBM).
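To put that 45% figure in perspective, here is a quick back-of-the-envelope check using the publicly listed stream-processor counts of the two chips (4,096 for "Fiji" versus 2,816 for "Hawaii"). This is just an illustrative sketch based on those public specs, not anything sourced from AMD.

```python
# Rough sketch: where the ~45% compute increase over the R9 290X comes from.
# Shader counts and TBP below are publicly listed specs, used purely for illustration.

HAWAII_SHADERS = 2816   # Radeon R9 290X ("Hawaii")
FIJI_SHADERS   = 4096   # Radeon R9 Fury X ("Fiji")
TBP_WATTS      = 275    # typical board power, unchanged between the two cards

increase = (FIJI_SHADERS / HAWAII_SHADERS - 1) * 100
print(f"Shader count increase: {increase:.1f}%")   # ~45.5%
print(f"Shaders per watt: {HAWAII_SHADERS / TBP_WATTS:.1f} -> {FIJI_SHADERS / TBP_WATTS:.1f}")
```

In other words, the same power envelope now has to feed roughly half again as many shaders, which is why something on the board had to give.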
Stacked High Bandwidth Memory (HBM) is the biggest innovation in video-card memory since GDDR5 (circa summer 2008). It involves moving the video memory from physically discrete chips surrounding the GPU package, each talking to the GPU over its own 32-bit-wide data path, onto stacked DRAM dies sitting on the GPU package itself. More on this later.
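As a rough illustration of why that move matters, the sketch below compares peak memory bandwidth the usual way (bus width × effective data rate), using the commonly cited figures for the R9 290X's GDDR5 configuration and the Fury X's four HBM stacks. The numbers are assumptions drawn from public spec sheets, not measurements.

```python
# Back-of-the-envelope peak bandwidth: bus width (bits) x effective data rate (Gbps) / 8.
# Figures below are commonly cited specs, used here purely for illustration.

def peak_bandwidth_gbps(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

# R9 290X: sixteen GDDR5 chips, 32-bit path each, 5 Gbps effective
gddr5 = peak_bandwidth_gbps(16 * 32, 5.0)    # 512-bit bus -> 320 GB/s

# R9 Fury X: four HBM stacks, 1024-bit path each, 1 Gbps effective
hbm = peak_bandwidth_gbps(4 * 1024, 1.0)     # 4096-bit bus -> 512 GB/s

print(f"GDDR5 (R9 290X): {gddr5:.0f} GB/s")
print(f"HBM   (Fury X) : {hbm:.0f} GB/s")
```

The point of the comparison: HBM trades per-pin speed for an enormously wider interface, which is what lets bandwidth climb without the power and board-area cost of ever-faster GDDR5.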
The Radeon R9 290 and R9 290X took on NVIDIA's high-end GPU lineup back in fall 2013. The $399 R9 290 was faster than the $999 GTX TITAN and the $650 GTX 780, while the R9 290X held its own until NVIDIA intervened with the GTX 780 Ti and GTX TITAN Black to reclaim those two price points. Even that didn't really dent AMD's competitiveness until NVIDIA launched the "Maxwell" based GTX 970 and GTX 980. One of the major complaints with the R9 290/X was that the cards ran very hot and noisy. We dreaded the prospect of a dual-GPU card based on the "Hawaii" silicon, but were pleasantly surprised when the dual-GPU R9 295X2 turned out to be cooler and quieter than NVIDIA's single-GPU cards. It could do so because AMD tapped into liquid cooling: the card shipped with a factory-fitted, closed-loop liquid cooler. Closed-loop liquid coolers for CPUs were taking off around the same time, so the two-piece contraption strung together by coolant tubes didn't really hurt the product's standing with buyers. NVIDIA launching the prohibitively expensive GTX TITAN Z helped, too. Given that the Radeon R9 Fury X has the same board-power rating as the R9 290X, AMD gave it a liquid-cooling solution as well, in an effort to clinch the heat-and-noise game against NVIDIA's high-end "Maxwell" GPUs, including the GTX TITAN X and the more recently launched GTX 980 Ti.
AMD is pricing the Radeon R9 Fury X at US $650. This and $550 have traditionally been the price points at which AMD launches its high-end, single-GPU products. NVIDIA prepared for this launch by introducing the GTX 980 Ti at $650, based on the same silicon as the GTX TITAN X, and by cutting the price of the GTX 980 from $550 down to $500. The Radeon R9 Fury X has its work cut out for it in repeating the R9 290 series launch, in which the biggest winner was the consumer.