Monday, September 9th 2024
AMD to Unify Gaming "RDNA" and Data Center "CDNA" into "UDNA": Singular GPU Architecture Similar to NVIDIA's CUDA
According to a report from Tom's Hardware, AMD has announced plans to unify its consumer-focused gaming "RDNA" and data center "CDNA" graphics architectures into a single, unified design called "UDNA." The announcement was made by Jack Huynh, AMD's Senior Vice President and General Manager of the Computing and Graphics Business Group, at IFA 2024 in Berlin. The goal of the new UDNA architecture is to give developers a single optimization target, so that an application optimized once can run on consumer-grade GPUs like the Radeon RX 7900 XTX as well as high-end data center GPUs like the Instinct MI300. This mirrors NVIDIA's CUDA, which lets CUDA-focused developers run their applications on everything from laptops to data centers.
When AMD originally split CDNA from RDNA, the company expected two separate product lines to be easier to manage. In practice, the opposite proved true: maintaining two separate optimization teams is a nightmare both logistically and engineering-wise. Shifting back to a single GPU architecture should therefore benefit the company in the long term and ease the development of new products, with both gaming-focused and compute-focused teams working on the same foundation. The strategy resembles NVIDIA's approach with CUDA, which has kept its architecture line unified while adding dedicated accelerators for AI and ray tracing, something AMD also plans to do.
Source:
Tom's Hardware
Jack Huynh: "So, part of a big change at AMD is today we have a CDNA architecture for our Instinct data center GPUs and RDNA for the consumer stuff. It's forked. Going forward, we will call it UDNA. There'll be one unified architecture, both Instinct and client [consumer]. We'll unify it so that it will be so much easier for developers versus today, where they have to choose and value is not improving."

According to Huynh, AMD "made some mistakes with the RDNA side; each time we change the memory hierarchy, the subsystem, it has to reset the matrix on the optimizations. I don't want to do that. So, going forward, we're thinking about not just RDNA 5, RDNA 6, RDNA 7, but UDNA 6 and UDNA 7. We plan the next three generations because once we get the optimizations, I don't want to have to change the memory hierarchy, and then we lose a lot of optimizations. So, we're kind of forcing that issue about full forward and backward compatibility. We do that on Xbox today; it's very doable but requires advanced planning. It's a lot more work to do, but that's the direction we're going."
56 Comments
A major advantage of CUDA is its hardware and software ecosystem.
They will likely just go down the path of making the CUs more flexible in what they offer. For example, Hopper has 80B transistors to the RTX 4090's 76B, yet Hopper has only 24 ROPs compared to the 4090's 176. If a data center GPU doesn't need RT or ROP capability, make the design flexible enough that you can remove those components and add more of what is wanted, while the design of the components themselves is shared across the entire stack: from data centers, to embedded, to gaming, down to a little 2-CU iGPU.
So in other words, RDNA5 will really be UDNA1.
Also as mentioned, they concentrated on Ryzen first and that bet paid off, now they are concentrating on Radeon/Instinct.
Let's see how it goes, but it does look promising.
AMD shouldn't have made two different architectures in the first place. Vega was good for both workloads; they just needed to tweak the software side and the architecture.
As for not changing the memory architecture and using the Xbox as an example, I wonder if they are going towards a unified system wide memory architecture, where everything shares a single high bandwidth pool similar to data center GPUs and APUs as well as SoCs like Apple's Mx series. Statements like TheDeeGee's usually are made when you don't really know enough about something but want to sound like you do anyway. :)
I'm genuinely interested in what you're basing your optimism on; is there a source?
So to me, the last two announcements (abandoning the high end and now this one) kind of align with that rumor.
Edit: maybe it was this article:
www.techradar.com/computing/gpu/latest-performance-rumor-around-amds-rdna-4-gpus-could-worry-you-but-we-think-theres-no-need-to-panic
Advanced MicroDosing?
The thing is, NVIDIA's Kepler/Maxwell and Pascal/Volta split worked for them because NVIDIA still properly supports CUDA on their cards regardless of a card's architecture focus. With AMD, when they went the RDNA/CDNA route, we saw AMD support ROCm mostly on CDNA only. I still remember Polaris and Vega had some sort of ROCm support, but the effort was obviously lacking on RDNA and RDNA 2.
From Tom's: interesting how he mentions RDNA 5 but then says UDNA 6, so that might be the case.
But if that's the case, then RDNA 5 will simply be more of the same.
AMD surely wants to unify architectures so it can send one of the teams to work on AI. I really doubt that UDNA will get manpower from both the RDNA and CDNA teams.
As much as I hate this, it's a step taken toward making more money, on the dumb, chatty AI that you'd rather ask the same thing twice just to be sure.
AMD goes the way where the most money is. Can't blame them, it's all about the money, after all.
What does this mean for gamers? Will we get HBM memory as with Vega?
If not, they will still need to adjust (optimize) for different memory technologies (GDDR vs. HBM), and that's still like dealing with two different architectures ...
I'd welcome having HBM memory instead of GDDR in gaming GPUs, even though I'd need to pay extra money for it.