Thursday, November 23rd 2023
AMD Radeon "GFX12" RX 8000 Series GPUs Based on RDNA4 Appear
AMD is hard at work on its next-generation products, and its Linux team has quietly submitted a few interesting patches for new "GFX12" targets through GitHub, as reported by Phoronix. These patches introduce two new discrete GPUs into the LLVM compiler for Linux, fueling speculation that these will be the first iterations of the RDNA4 graphics architecture, potentially being a part of the Radeon RX 8000 series of desktop graphics cards. The naming scheme for these new targets, GFX1200 and GFX1201, suggests a continuation of AMD's logical progression through graphics architectures: the company has historically associated RDNA1 with GFX10 and followed suit with subsequent generations, with RDNA2 as GFX10.3 and RDNA3 as GFX11.
Development of these new GPUs is still in its early stages, as indicated by the lack of detailed information about the upcoming graphics ISA or its features within the patches. For now, the new GFX12 targets are treated like GFX11; the patch notes that "For now they behave identically to GFX11," implying that AMD is keeping the specifics under wraps until closer to release. The patch defining target names and ELF numbers for the new GFX12 targets GFX1200 and GFX1201 is needed to enable timely support across the AMD ROCm compute stack, the AMDVLK Vulkan driver, and the RadeonSI Gallium3D driver.
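For context on what such target definitions enable: once a gfx target is known to the toolchain, ROCm applications can identify it at runtime. Below is a minimal sketch using the HIP runtime's device-properties query (this is an illustration, not code from the patch; it assumes a working ROCm/HIP installation, and the gfx1200/gfx1201 strings would only ever appear on real, supported RDNA4 hardware):

```cpp
// Minimal HIP sketch: print the GFX ISA name each GPU reports.
// Assumes ROCm/HIP is installed; compile with: hipcc query_arch.cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess) return 1;
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        if (hipGetDeviceProperties(&prop, i) != hipSuccess) continue;
        // gcnArchName reports strings such as "gfx1100" today; an RDNA4
        // part would presumably report "gfx1200" or "gfx1201" once the
        // LLVM target definitions land and the stack supports it.
        std::printf("Device %d: %s\n", i, prop.gcnArchName);
    }
    return 0;
}
```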
Sources:
Phoronix, via Tom's Hardware
15 Comments on AMD Radeon "GFX12" RX 8000 Series GPUs Based on RDNA4 Appear
First-gen chiplets were mediocre, as they focused only on cost reduction by moving non-scaling logic off expensive premium TSMC nodes.
Hopefully they will have the time and experience to start splitting the compute units out into chiplets, which will eventually give us multi-chiplet scalability the same way we have it with Epyc and Threadripper. As much as the 4090 is an impressive piece of kit, it's insanely expensive to make a single die that big on the most expensive process node available. A "midrange" compute chiplet significantly smaller than even Navi 32 with, say, 40 compute units (2,560 cores) would be excellent for a mainstream product, and would scale nicely to 2 GCDs, 3 GCDs, 4 GCDs, etc., like the Ryzens do. That economy of scale would work wonders, too, since AMD would only have to make one GCD instead of the three dies they make right now, so more effort could be spent on tuning and optimising that one GCD.
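To put the commenter's numbers in perspective, here is a trivial sketch of how such a lineup could scale. The 40-CU die and RDNA's 64 stream processors per CU are the comment's own assumptions, not announced specifications:

```cpp
// Hypothetical GCD scaling math from the comment above: one 40-CU
// compute die reused across the whole stack, Ryzen-style.
#include <cstdio>

int main() {
    const int cusPerGCD = 40;     // assumed midrange compute die
    const int shadersPerCU = 64;  // RDNA stream processors per CU
    for (int gcds = 1; gcds <= 4; ++gcds) {
        std::printf("%d GCD(s): %3d CUs, %5d shaders\n",
                    gcds, gcds * cusPerGCD,
                    gcds * cusPerGCD * shadersPerCU);
    }
    return 0;
}
```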
Essentially, treat MLID as a spokesperson for hardware vendors who don't want to be on the official record. If he's citing discussions with AMD or Nvidia reps, the info he's stating is little more than official marketing anyway, and it rarely arrives more than a week ahead of the official line.
When he's speculating on closely guarded rumours/leaks for products still in development, he's useless. For anything more than about six months out, his 50:50 historical accuracy is no better than a completely blind guess.
Looking at AMD's Instinct 'APUs' and news of exponential improvements in HBM yields/costs/bonding has set an expectation for me.
Unless UE5 STALKER 2 and MW5: Clans massacre my current Vega(s) (at 1080p), I'm not looking to buy a 'new' GPU until HBM returns.
Basically, I'm more interested and enthused by 'the tech' than by 'raw performance' (in games I'm not even interested in).
The games I do play the most are older (or well on their way to being 'optimized'). He comes off as an arse, and I can certainly see his detractors' points. However, I can't disagree here at all.
He's been accurate enough not to be written off at face value.
And when he's wrong it never gets mentioned, of course - unless he explicitly said he made a guess and that was always his original stance.
You don't have to like someone to judge whether their info is valid, though. He has proven time and time again that he has a sizeable pool of industry insiders who are willing to leak information. You just need to be careful when citing him to distinguish when he's quoting his sources from when he's doing his own speculation. I'm not even going to say his speculation is bad; many of his educated guesses are insightful, but that still doesn't make them anything more than guesses. HBM is always going to be expensive compared to packaged GDDR in the same way that RAM will always have a higher cost than NAND.
IMO profit/performance is always the most important metric for GPU manufacturers and as long as GDDR VRAM is good enough for consumer solutions, they will pick that first. It's not like the 4090 or 7900XTX are short of bandwidth - yes, if they had more they might be situationally faster, but overclocking the VRAM alone on a flagship gaming GPU gives minimal gains. HBM exists in the enterprise market because there's a non-gaming customer-base that is willing to pay double to get the memory bandwidth because compute applications are bandwidth-limited and scale almost linearly in some cases.
RDNA5 is expected in H1 2025.
It doesn't entirely make sense to me that AMD would bet the farm on an unproven chiplet design before overcoming the inherent parallelism issues of modern APIs, but hey, I don't pretend to have inside sources or whatever either.