Thursday, November 23rd 2023

AMD Radeon "GFX12" RX 8000 Series GPUs Based on RDNA4 Appear

AMD is working hard on delivering next-generation products, and today its Linux team submitted a few interesting patches for new GFX12 targets, which made a subtle appearance on GitHub, as reported by Phoronix. These patches introduce two new discrete GPUs into the LLVM compiler, fueling speculation that they are the first iterations of the RDNA4 graphics architecture and potentially part of the Radeon RX 8000 series of desktop graphics cards. The naming scheme of the new targets, GFX1200 and GFX1201, suggests a continuation of AMD's logical progression through graphics architectures: the company has historically associated RDNA1 with GFX10 and followed suit with subsequent generations, with RDNA2 as GFX10.3 and RDNA3 as GFX11.

Development of these new GPUs is still in the early stages, as indicated by the lack of detailed information about the upcoming graphics ISA or its features within the patches. For now, the new GFX12 targets are treated much like GFX11; the patch notes that "For now they behave identically to GFX11," implying that AMD is keeping the specifics under wraps until closer to release. The patch defines target names and ELF numbers for the new GFX12 targets GFX1200 and GFX1201, which is needed to enable timely support in the AMD ROCm compute stack, the AMDVLK Vulkan driver, and the RadeonSI Gallium3D driver.
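To picture what such a patch amounts to: the compiler identifies each AMDGPU subtarget by a name string (the one developers pass via -mcpu= or --offload-arch=) and by a machine ID recorded in the header of the compiled ELF code objects, so every tool in the stack can agree on what a given binary targets. The C++ sketch below is a hypothetical illustration of that name-to-ELF-ID mapping; the identifiers and numeric values are placeholders for this example, not the actual definitions from the LLVM patch.

#include <cstdint>
#include <optional>
#include <string_view>

// Hypothetical sketch of how a compiler backend maps GPU target names to
// the machine IDs stored in ELF code objects. All values below are
// placeholders, not the real constants from the LLVM patch.
enum AMDGPUMach : uint32_t {
  MACH_GFX1100 = 0x041, // existing RDNA3-era target (placeholder value)
  MACH_GFX1200 = 0x048, // new GFX12 target (placeholder value)
  MACH_GFX1201 = 0x04E, // new GFX12 target (placeholder value)
};

struct TargetInfo {
  std::string_view Name; // name used with -mcpu= / --offload-arch=
  AMDGPUMach Mach;       // ID written into the ELF header flags
};

constexpr TargetInfo Targets[] = {
    {"gfx1100", MACH_GFX1100},
    {"gfx1200", MACH_GFX1200},
    {"gfx1201", MACH_GFX1201},
};

// Look up the ELF machine ID for a target name, e.g. "gfx1200".
std::optional<AMDGPUMach> lookupMach(std::string_view Name) {
  for (const TargetInfo &T : Targets)
    if (T.Name == Name)
      return T.Mach;
  return std::nullopt;
}

Once entries like these exist, clang, the ROCm stack, and the Vulkan/Gallium3D drivers can all recognize and emit code objects for gfx1200 and gfx1201, even while the new targets are wired to behave identically to GFX11 underneath.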
Sources: Phoronix, via Tom's Hardware

15 Comments on AMD Radeon "GFX12" RX 8000 Series GPUs Based on RDNA4 Appear

#1
jesdals
Bring it on - I have money burning a hole in my pocket and a strange urge to get burned as a beta tester once again (some kind of short-term memory loss)
#2
Chrispy_
I hope they stick to the chiplet design.

1st-gen chiplets were mediocre as they only focused on cost-reduction by moving non-scaling logic off the expensive TSMC premium nodes.

Hopefully they will have the time and experience to start splitting the compute units out into chiplets, which will eventually give us multi-chiplet scalability the same way we have it with Epyc and Threadripper. As much as the 4090 is an impressive piece of kit, it's insanely expensive to make a single die that big on the most expensive process node available. A "midrange" compute chiplet significantly smaller than even Navi 32 with, say, 40 compute units (2,560 cores) would be excellent for a mainstream product, and would scale nicely to 2, 3, or 4 GCDs like the Ryzens do. That economy of scale would work wonders, too, since AMD would only have to make one GCD instead of the three they make right now, so more effort could be spent on tuning and optimising that one GCD.
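(For the arithmetic, assuming RDNA3's 64 stream processors per CU: 40 CUs is 40 × 64 = 2,560 shaders, so 2, 3, and 4 GCDs would scale to 5,120, 7,680, and 10,240 shaders; for comparison, the 7900 XTX's single GCD has 6,144.)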
#3
ecomorph
Chrispy_ said: "I hope they stick to the chiplet design. […]"
If MLID's leaks are true, the 8800 XT will be monolithic and the cancelled high-end part was supposed to be chiplet-based, but they couldn't make it work in time (if we're lucky, an RDNA5 9900 XT will deliver that).

#4
KLMR
Congratulations on making China burn more coal!
#5
Assimilator
ecomorph said: "If MLID's leaks are true, the 8800 XT will be monolithic and the cancelled high-end part was supposed to be chiplet-based, but they couldn't make it work in time (if we're lucky, an RDNA5 9900 XT will deliver that)."

Standard MLID BS; those "leaks" don't make any sense whatsoever. Why would AMD regress from a full chiplet design on all cards in RDNA3 to partial chiplets in RDNA4?
#6
Kyan
Assimilator said: "Standard MLID BS; those 'leaks' don't make any sense whatsoever. Why would AMD regress from a full chiplet design on all cards in RDNA3 to partial chiplets in RDNA4?"
The RX 7600 is monolithic, but yeah, it would be strange not to commit to a chiplet design for the whole RDNA4 lineup.
#7
Chrispy_
Kyan said: "The RX 7600 is monolithic, but yeah, it would be strange not to commit to a chiplet design for the whole RDNA4 lineup."
Yeah, MLID info is hit or miss depending on how far into the future his "leaks" actually are. When he quotes multiple sources in the industry about upcoming products and pricing, he's almost always spot-on because his sources at AMD, Nvidia, and board partners are genuine and reliable. For stuff that is on-schedule and already in the hands of board vendors, he's rarely - if ever - wrong, and he's always right about delays for things that are supposed to be with board partners and haven't yet reached them.

Essentially, treat MLID as a spokesperson for hardware vendors who don't want to be on the official record. If he's citing AMD or Nvidia rep discussions, the info he's stating is little more than official marketing anyway and his info is rarely more than a week earlier than the official line.

When he's speculating on closely guarded rumours/leaks for products still in development, he's useless. For anything more than about 6 months out, his 50:50 historical accuracy is little better than a completely blind guess.
#8
LabRat 891
ecomorph said: "If MLID's leaks are true, the 8800 XT will be monolithic and the cancelled high-end part was supposed to be chiplet-based, but they couldn't make it work in time (if we're lucky, an RDNA5 9900 XT will deliver that)."

I wonder how this affects my personal expectations of HBM's return to desktop GPUs.
Looking @ AMD's Instinct 'APUs' and news of exponential improvements in HBM yields, costs, and bonding has set an expectation for me.

Unless UE5 STALKER 2 and MW5: Clans massacre my current Vega(s) (@ 1080p), I'm not looking to buy a 'new' GPU until HBM returns.
Basically, I'm more interested and enthused by 'the tech' than by 'raw performance' (in games I'm not even interested in).
The games I play the most are older (or well on their way to being 'optimized').
Chrispy_ said: "Yeah, MLID info is hit or miss depending on how far into the future his 'leaks' actually are. […] For anything more than about 6 months out, his 50:50 historical accuracy is little better than a completely blind guess."
He comes off as an arse, and I can certainly see his detractors' points. However, I can't disagree here at all.
He's been accurate enough not to write off at face value.
#9
Chrispy_
LabRat 891 said: "{MLID} comes off as an arse, and I can certainly see his detractors' points. However, I can't disagree here at all. He's been accurate enough not to write off at face value."
He comes off as an arse because when he's right, he repeats how he was right at every opportunity he gets, and there's a lot of "I told you so" arrogance. And when he's wrong it never gets mentioned, of course - unless he explicitly said he made a guess and that was always his original stance.

You don't have to like someone to judge whether their info is valid, though. He has proven time and time again that he has a sizeable pool of industry insiders who are willing to leak information. You just need to be careful, when citing him, that he's quoting his sources and not doing his own speculation. I'm not even going to say his speculation is bad - many of his educated guesses are insightful - but that still doesn't make them anything more than guesses.
LabRat 891 said: "I wonder how this affects my personal expectations of HBM's return to desktop GPUs. Looking @ AMD's Instinct 'APUs' and news of exponential improvements in HBM yields, costs, and bonding has set an expectation for me."
HBM is always going to be expensive compared to packaged GDDR in the same way that RAM will always have a higher cost than NAND.

IMO profit/performance is always the most important metric for GPU manufacturers, and as long as GDDR VRAM is good enough for consumer solutions, they will pick that first. It's not like the 4090 or 7900 XTX are short of bandwidth - yes, if they had more they might be situationally faster, but overclocking the VRAM alone on a flagship gaming GPU gives minimal gains. HBM exists in the enterprise market because there's a non-gaming customer base that is willing to pay double to get the memory bandwidth, because compute applications are bandwidth-limited and scale almost linearly in some cases.
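(For rough scale, using publicly quoted specs: peak bandwidth is roughly bus width × per-pin data rate ÷ 8, so the 7900 XTX's 384-bit, 20 Gbps GDDR6 works out to 384 × 20 / 8 = 960 GB/s, while an HBM3 part like the MI300X with an 8,192-bit interface at ~5.2 Gbps per pin lands around 5.3 TB/s - roughly a 5.5× gap, which is what that bandwidth-hungry customer base is paying for.)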
#10
Minus Infinity
Kyan said: "The RX 7600 is monolithic, but yeah, it would be strange not to commit to a chiplet design for the whole RDNA4 lineup."
Why? The 7600 is a small chip, and it's entry-level. MCM costs a lot more and uses smaller nodes. Its purpose is so you don't end up like Nvidia with a >600 mm² monster chip that means lower yields and higher prices. The whole reason high-end RDNA4 is cancelled is that its MCM design was a lot more complex than RDNA3's and was not working as expected, and they didn't want to spend resources and money getting it right and push back RDNA5. N43/N44 are on 4 nm, and AMD could make a relatively powerful low-end card and still be under 200 mm². Given the latency issues with MCM so far, and the fact that the low end is the money-making segment, why fuck up with unneeded complexity? Being non-MCM means it's on track and will be out the middle of next year. AMD also doesn't need to rush MCM for the high end, as Blackwell isn't coming out in 2024 either.
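(Back-of-envelope on the yield point, assuming a simple Poisson yield model and an illustrative defect density of 0.1 defects/cm²: a 600 mm² die yields roughly e^(-6 × 0.1) ≈ 55%, while a 200 mm² die yields about e^(-2 × 0.1) ≈ 82%, before any binning or partial-die salvage.)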
#11
TheinsanegamerN
Minus Infinity said: "Why? The 7600 is a small chip, and it's entry-level. […] AMD also doesn't need to rush MCM for the high end, as Blackwell isn't coming out in 2024 either."
There's no official word on high-end RDNA4 being cancelled; that is just speculation.
#12
GodisanAtheist
The current rumor mill says high-end RDNA4 (Navi 4c) was a full-blown chiplet design (not GCD/MCD like RDNA3), but AMD wasn't able to make the arch play nice with current APIs, and thanks to missed milestones/deadlines, they scrapped high-end RDNA4, focused on the low-cost, high-volume/margin products, and instead moved on to RDNA5, which ofc is still aiming for a proper chiplet-based high-end config.

RDNA5 is expected in H1 25.

It doesn't entirely make sense to me that AMD would bet the farm on an unproven chiplet design before they overcame the inherent parallelism issue on modern APIs, but hey I don't pretend to have inside sources or whatever either.
#13
Dr. Dro
Another good GPU hamstrung by unstable drivers programmed with zero regard for stability or code cleanliness? Can't wait to beta test it. Nah.
#14
Minus Infinity
TheinsanegamerN said: "There's no official word on high-end RDNA4 being cancelled; that is just speculation."
Well, no one is speculating that it's still coming.
#15
rambo420
Assimilator said: "Standard MLID BS; those 'leaks' don't make any sense whatsoever. Why would AMD regress from a full chiplet design on all cards in RDNA3 to partial chiplets in RDNA4?"
MLID has always been right; what are you on about? He has been correct 80%+ of the time.