Wednesday, January 29th 2025
AMD Details DeepSeek R1 Performance on Radeon RX 7900 XTX, Confirms Ryzen AI Max Memory Sizes
AMD today put out detailed guides on how to get DeepSeek R1 distilled reasoning models running on Radeon RX graphics cards and Ryzen AI processors. The guide confirms that the new Ryzen AI Max "Strix Halo" processors come hardwired to LPCAMM2 memory configurations of 32 GB, 64 GB, and 128 GB, so there won't be a 16 GB memory option for notebook manufacturers to cheap out with. The guide goes on to explain that "Strix Halo" will be able to locally accelerate DeepSeek-R1-Distill-Llama-70B on the 64 GB and 128 GB memory configurations of "Strix Halo" powered notebooks, while the 32 GB model should be able to run DeepSeek-R1-Distill-Qwen-32B. Ryzen AI "Strix Point" mobile processors should be capable of running DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Llama-14B on their RDNA 3.5 iGPUs and NPUs, while older-generation "Phoenix Point" and "Hawk Point" processors should be capable of DeepSeek-R1-Distill-Llama-14B. The company recommends running all of the above distills in Q4_K_M quantization.
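For a rough sense of why those memory tiers line up with those model sizes, the footprint of a Q4_K_M model can be estimated from its parameter count. The ~4.85 bits-per-weight figure below is an assumption based on typical llama.cpp Q4_K_M averages, not a number from AMD's guide, and it ignores KV-cache and runtime overhead:

```python
# Rough footprint of a model quantized to Q4_K_M.
# Assumption (not from the article): Q4_K_M averages ~4.85 bits per weight,
# a typical llama.cpp figure; KV cache and runtime overhead are ignored.
BITS_PER_WEIGHT_Q4_K_M = 4.85

def model_size_gib(params_billion: float) -> float:
    """Approximate quantized model size in GiB."""
    bytes_total = params_billion * 1e9 * BITS_PER_WEIGHT_Q4_K_M / 8
    return bytes_total / 1024**3

for name, params in [("Qwen-7B", 7), ("Llama-8B", 8), ("Qwen-14B", 14),
                     ("Qwen-32B", 32), ("Llama-70B", 70)]:
    print(f"DeepSeek-R1-Distill-{name}: ~{model_size_gib(params):.1f} GiB")
```

By this estimate the 70B distill lands around 40 GiB, which fits the 64 GB tier but not 32 GB, while the 32B distill at roughly 18 GiB fits the 32 GB tier — consistent with AMD's pairings.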
Switching gears to discrete graphics cards, AMD is only recommending its Radeon RX 7000 series for now, since the RDNA 3 graphics architecture introduces AI accelerators. The flagship Radeon RX 7900 XTX is recommended for the DeepSeek-R1-Distill-Qwen-32B distill, while all SKUs with 12 GB to 20 GB of memory (the RX 7600 XT, RX 7700 XT, RX 7800 XT, RX 7900 GRE, and RX 7900 XT) are recommended for models up to DeepSeek-R1-Distill-Qwen-14B. The mainstream RX 7600 with its 8 GB of memory is only recommended for models up to DeepSeek-R1-Distill-Llama-8B. You will need LM Studio 0.3.8 or later and Radeon Software Adrenalin 25.1.1 beta or later drivers. AMD put out first-party LM Studio 0.3.8 tokens-per-second performance numbers for the RX 7900 XTX, comparing it with the NVIDIA GeForce RTX 4080 SUPER and the RTX 4090. When compared to the RTX 4080 SUPER, the RX 7900 XTX posts up to 34% higher performance with DeepSeek-R1-Distill-Qwen-7B, up to 27% higher performance with DeepSeek-R1-Distill-Llama-8B, and up to 22% higher performance with DeepSeek-R1-Distill-Qwen-14B. Next up is the big face-off between the RX 7900 XTX and the GeForce RTX 4090 with its 24 GB of memory. The RX 7900 XTX is shown to prevail in three out of four tests, posting up to 13% higher performance with DeepSeek-R1-Distill-Qwen-7B, up to 11% higher performance with DeepSeek-R1-Distill-Llama-8B, and up to 2% higher performance with DeepSeek-R1-Distill-Qwen-14B. It only falls behind the RTX 4090, by 4%, with the larger DeepSeek-R1-Distill-Qwen-32B model.
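The card-to-model pairings above track VRAM capacity fairly closely. As a sketch (assuming full GPU offload, the same ~4.85 bits/weight average that llama.cpp-based tools typically report for Q4_K_M, and no headroom reserved for KV cache), one can check which distills fit each card:

```python
# Hypothetical fit check: does a Q4_K_M model file fit a card's VRAM?
# Assumptions (not from AMD's guide): ~4.85 bits/weight, full GPU offload,
# and no allowance for KV cache or runtime overhead.
BPW = 4.85

def fits(params_billion: float, vram_gb: float) -> bool:
    size_gb = params_billion * 1e9 * BPW / 8 / 1e9  # model file size, decimal GB
    return size_gb <= vram_gb

cards = {"RX 7600": 8, "RX 7800 XT": 16, "RX 7900 XT": 20, "RX 7900 XTX": 24}
models = {"Llama-8B": 8, "Qwen-14B": 14, "Qwen-32B": 32}

for card, vram in cards.items():
    ok = [m for m, p in models.items() if fits(p, vram)]
    print(f"{card} ({vram} GB): {', '.join(ok)}")
```

Note that by raw file size the 32B distill squeaks into 20 GB on paper; AMD reserving it for the 24 GB RX 7900 XTX suggests its recommendations leave headroom for context.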
Catch the step-by-step guide on getting DeepSeek R1 distilled reasoning models running on AMD hardware at the source link below.
Source:
AMD Community
27 Comments on AMD Details DeepSeek R1 Performance on Radeon RX 7900 XTX, Confirms Ryzen AI Max Memory Sizes
Thanks for the link.
Either way, the 32 GB mandatory minimum is a welcome sight. I'm a bit surprised (hence the confusion above) that 48 GB and 96 GB capacities weren't also mentioned, as those capacities should be possible via LPCAMM2.
However, Crucial only lists 32 GB and 64 GB modules on their page:
www.crucial.com/memory/ddr5/CT64G75C2LP5XG
So that would mean either 64 GB or 128 GB for Strix Halo. I'm too lazy to look into other manufacturers.
Thanks AMD I guess?
But there is work underway to produce 3D DRAM that wouldn't necessarily be HBM in order to increase capacities. From what I see, though, it's still a few years in the making:
semiengineering.com/baby-steps-towards-3d-dram/
Note that it looks like they are also working on stacked DRAM that would use the same bus width as GDDR* and would probably be a drop-in solution while we wait.
DRAM dies are also stacked in large capacity server DIMMs. That used to be the case for really, really expensive 128 GB DIMMs and up, but now as larger capacity dies exist, it's probably 256 GB and up. Going by the price, I assume it's TSV stacking.
LPDDR dies are also stacked in some designs, for example Apple's M chips. Probably TSV again because speed matters and cost doesn't.
A case of non-TSV stacked dies (with old-style wire bonding instead) would be NAND, for several reasons: lower speed, a small number of wires due to the 8-bit bus, and the requirement for low cost.

Thanks for the link. Semiengineering posted this nice overview of current tech in 2021... and later I occasionally checked and found nothing. Yes, we'll wait some more for 3D. Someone will eventually modify the NAND manufacturing tech so that those capacitors charge and discharge quickly. And when they succeed, they will try everything to compress four bits into one cell.

What sort of stacked DRAM do you mean here? Again, due to high speed, it would have to be TSV stacked, so in a different price category.
Fast forward a few years and a 16-core with V-Cache + UDNA + CAMM2 should be awesome. HBM remains a pipe dream because its prices rose quite a bit, and TSV stacking remains prohibitively expensive.
Also, DLSS hasn't changed much in its base operation, so it can run on anything with tensor cores. FSR hasn't needed AI cores so far, but FSR 4 does.
My other theory is that Nvidia hasn't touched the RT and tensor cores much since RTX 2000 (judging by performance data). We know very little about what an AI/tensor core actually is and how it works.
I think N41 (partially) got canned because they know once people have >80 TF and 24 GB (essentially a 4090), most ain't upgrading for a long, long time. Those that wanted that at >$1000 bought a 4090.
Cutting the price of 4080 from $1200 to $1000 probably also had something to do with it, as I think that's where AMD wanted to compete.
Similar reason for the gap in nV products. Why is GB203 limited to <80 TF (one less cluster than half of GB202, plus power-limit locks) and without a 24 GB option? Gotta milk the need for those upgrades as long as possible...
Hence both wanted to get one more cycle in before that happened... or maybe they're just able to make it for a larger margin given the move to 3 nm and 3 GB GDDR7 (256-bit instead of 384-bit for a 24 GB spec).
Something like a $500 BOM (~GB203/N48 size; 100+ known-good dies (KGD) per ~$20k wafer, plus ~$300 of 3 GB GDDR7) makes a lot more sense than making a slightly slower 4090 for ~$1200 MSRP.
They would've needed 12288 SP @ 3640 MHz to match a 4090... That's probably close to impossible, if not outright impossible, to yield on 4/5 nm for a GPU.
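That "12288 SP @ 3640 MHz" figure follows from the usual FP32 throughput formula: shaders x 2 FLOPs (one FMA) per clock x frequency. A quick sketch, using the RTX 4090's rated 2.52 GHz boost as the comparison point (the commenter's ~90 TF target implies a higher effective clock than the 4090's rated boost):

```python
def tflops(shaders: int, clock_ghz: float) -> float:
    # FP32 peak throughput: each shader retires one FMA (2 FLOPs) per clock
    return shaders * 2 * clock_ghz / 1000

print(tflops(12288, 3.64))   # commenter's hypothetical config: ~89.5 TF
print(tflops(16384, 2.52))   # RTX 4090 (16384 shaders) at rated boost: ~82.6 TF
```

In practice, delivered performance doesn't scale purely with peak TF, so matching the 4090 on paper wouldn't guarantee parity.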
We may see with N48; 3.4 GHz is probably difficult enough to yield within a decent power budget. I say that because if all N48 products can't hit 3.3 GHz+, they've kind of failed; you might as well buy a 6800 XT/7800 XT.
I'll be very curious whether (binned) 3x 8-pin designs will be able to hit anywhere around ~3.6 GHz (+/-?), as that may have been the N4 goal for both the (cancelled) large and (non-cancelled) small parts, with 24 Gbps RAM.
Still think something like an 11264 SP+ 3 nm design is going to be a lot of people's last stop in this market, for the most part. People with a 4090 (unless they have to have the best) probably already don't care.
Making a ~1920 SP x 6 / 96 ROPs part is just so much cheaper. It would only require 3900 MHz to match a 4090, which I think is very doable given how current 5 nm GPU designs yield against the 2.93/3.24 GHz Apple products.
We don't know how N48 yielded against the 3460-3700mhz Apple products yet, or how much power it uses, but it should be interesting. Both clock yields and the power usage for those clocks on the curve.
This could be telling who has the better idea on 3nm.
NVIDIA is probably shooting for 12288 SP @ 3780 MHz with 36000 MT/s memory, like Apple's efficient clock on N3B, while AMD could perhaps be shooting for 11520 SP @ ~3.87 GHz with 40000 MT/s+, more similar to Apple's 4050 MHz N3P.
Whatever they do, it'll be a lot cheaper to make than a 4090 or whatever AMD wanted to do with N41...chiplet or monolithic.
At any rate, it's fascinating to see what's possible with this deepseek model; it's almost like pure hardware always wins out in the end versus software/marketing bullshit and artificial limitations!
It's amusing to see the hardware limitations exposed when not locked to their ecosystem.
Long-live the Fine Wine of actually well-matched hardware/vram that always prevails in the end.