Wednesday, February 12th 2025

AMD Radeon RX 9070 XT Could Get a 32 GB GDDR6 Upgrade

AMD's Radeon RX 9000 series GPUs are expected to come with up to 16 GB of GDDR6 memory. However, AMD is reportedly expanding its RX 9070 lineup with a new 32 GB variant, according to sources on Chiphell. The card, speculatively called the RX 9070 XT 32 GB, is slated for release at the end of Q2 2025. Current GDDR6 memory modules top out at 2 GB per module, meaning that a design with 32 GB of VRAM would require as many as 16 memory modules on a single card. With no higher-capacity GDDR6 modules available, the design would require mounting memory modules on both the front and back of the PCB. Consumer GPUs are not known for this, but it is a possibility, with workstation/prosumer-grade GPUs employing this engineering tactic to boost capacity.
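The module arithmetic above can be sketched in a few lines. This is a quick sanity check, assuming 2 GB GDDR6 modules (per the report) and Navi 48's commonly cited 256-bit bus, which is not confirmed in the report itself:

```python
# Sanity check of the memory-module arithmetic described above.
# Assumes 2 GB GDDR6 modules, currently the highest density available.
total_vram_gb = 32
module_capacity_gb = 2

modules_needed = total_vram_gb // module_capacity_gb

# A 256-bit bus offers eight 32-bit module sites, so fitting 16 modules
# requires a clamshell layout: half on the front of the PCB, half on the back.
modules_per_side = modules_needed // 2

print(modules_needed, modules_per_side)  # 16 modules, 8 per side
```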

While we don't have information on the GPU architecture, discussions point to potential modifications of the existing Navi 48 silicon. This release is positioned as a gaming card rather than a workstation-class Radeon PRO 9000 series product. AMD appears to be targeting gamers interested in running AI workloads, which typically require massive amounts of VRAM to run locally. Additionally, investing in a GPU with a large VRAM capacity is essentially "future-proofing" for gamers who plan to keep their cards longer, as recent games have been pushing VRAM usage up by a large margin. The combination of gaming and AI workloads may have made AMD reconsider some of its product offerings, potentially giving us the Radeon RX 9070 XT 32 GB SKU. We will have to wait for Q2 to start, and we can expect more details by then.
Sources: Chiphell, via VideoCardz

45 Comments on AMD Radeon RX 9070 XT Could Get a 32 GB GDDR6 Upgrade

#26
FreedomEclipse
~Technological Technocrat~
:love: :love: :love: :love: :love: :love:

#27
wNotyarD
Legacy-ZA: Not necessarily; remember, nGreedia moved to GDDR7, there are going to be a lot of GDDR6/X chips that still need to be sold.
Wouldn't bet on GDDR6X, that was developed by Micron together with NVIDIA, so I guess that stays exclusive.

I am yet to see, however, Samsung's GDDR6W. Has anything ever used it? It's been 2 years since Sammy announced it and still nothing?
#28
kondamin
wNotyarD: Wouldn't bet on GDDR6X, that was developed by Micron together with NVIDIA, so I guess that stays exclusive.

I am yet to see, however, Samsung's GDDR6W. Has anything ever used it? It's been 2 years since Sammy announced it and still nothing?
Samsung hasn't launched anything the last couple of years sooooo....
#29
Ultron1337
YESSSS!!! I've been running a bunch of baby skynets on my 7800XT and it has been AWESOME! AMD has gotten much better with LLMs, ROCm support out of the box, and I can't complain about performance.
I specifically suggested a 32GB 9070XT to AMD on LinkedIn, super glad this is closer to reality! A 32GB GPU can outperform 24GB models like the 7900XTX and 4090 simply because bigger models will not fit in 24GB. Smaller model speed is determined by VRAM bandwidth, and here the 4090 and 7900XTX still have an advantage over the 9070XT. Since NVIDIA is using GDDR7 on all new GPUs, it is very difficult for them to make a relatively affordable 32GB GPU.
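The bandwidth point above can be roughed out numerically. A sketch using approximate public bandwidth figures; the 9070 XT number is derived from rumored specs (20 Gbps GDDR6 on a 256-bit bus) and the model size is a hypothetical example, so treat all values as assumptions:

```python
# Rough rule of thumb: once a model fits in VRAM, decode speed is roughly
# bound by how fast the weights can be streamed from memory, i.e.
# tokens/sec <= bandwidth / model size.
bandwidth_gbs = {
    "RX 7900 XTX": 960,           # approximate public spec
    "RTX 4090": 1008,             # approximate public spec
    "RX 9070 XT (rumored)": 640,  # 20 Gbps GDDR6 x 256-bit, unconfirmed
}
model_size_gb = 8  # hypothetical: a mid-size model at 4-bit quantization

upper_bound_tok_s = {
    card: bw / model_size_gb  # weights read once per generated token
    for card, bw in bandwidth_gbs.items()
}
for card, tok_s in upper_bound_tok_s.items():
    print(f"{card}: ~{tok_s:.0f} tok/s upper bound")
```

Real-world throughput lands well below these ceilings, but the relative ordering is what the comment is getting at: for models that fit everywhere, the older high-bandwidth cards stay ahead.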
#30
geniekid
A 32GB 9070XT doesn't excite me as much as something like an RDNA4 version of a 7900XTX.
#31
TheinsanegamerN
geniekid: A 32GB 9070XT doesn't excite me as much as something like an RDNA4 version of a 7900XTX.
Same. If the 9070xt is, in fact, competitive with the 5070ti, it will fuel speculation on what a true rdna4 4090 killer would have been.
#32
W1zzard
csendesmark: Try to run a 30GB model on a fancy 24 or 16GB RTX with CUDA and compare the experience against a Radeon with 32GB VRAM.
VRAM is valuable real-estate!
At $500-1k a day for developer time your idea probably won't work out
#33
csendesmark
W1zzard: At $500-1k a day for developer time your idea probably won't work out
If money and fire hazard are not an issue, then the 5090 is a fantastic card :D
#34
The Norwegian Drone Pilot
Quicks: Well at least AMD is not stingy with their VRAM.
But they are, however, stingy with their features compared to NVIDIA. And that's where AMD's problem is, or most of it.
#35
TonyM
I need a 9080XT 24gb with 4090 power
#36
Neo_Morpheus
As usual, the GPU is just rumored to exist and people are already calling it trash just because it's from AMD.

And as usual, Ngreedia's "features" are not called what they really are: proprietary lock-in crap which exists to limit your options and keep you locked into their hardware.

It's sad how today's consumers don't demand platform-agnostic tech.
#37
Random_User
These are obviously aimed at the AI target audience. No games will utilize these gigabytes, due to the comparatively weak GPU. Only LLM/compute workloads will.
Both NVIDIA and AMD keep falsely advertising compute SKUs to compute folks as "gaming" products. This washes away the boundaries between gaming and enterprise products. This is surely welcomed by both the GPU makers and the AI gang, but it is horrible for the regular buyer and user, at whom these "gaming" cards should be aimed.
Those who use GPUs for work should buy the corresponding workstation/enterprise products. This inflates the bottom end of the GPU stack, which gaming GPUs indeed are, thus pushing them even further beyond the reach not only of "regular" folks, but of prosumers themselves. People really like to shoot themselves in the foot.
#38
FeelinFroggy
Utter waste of VRAM. No game uses that much VRAM; any VRAM not being used just sits idle doing nothing. All this does is artificially inflate the cost of the card with zero performance improvement. Maybe do better with faster memory rather than more of it.
#39
TPUnique
FeelinFroggy: Utter waste of VRAM. No game uses that much VRAM, any VRAM not being used just sits idle doing nothing.
Ok. And what about non-game applications? Is that also an utter waste for them?
#40
Tigerfox
londiste: Clamshell usually. Pro and datacenter cards have this often enough. Rarer in the consumer space. The 3090 and 4060 Ti 16GB come to mind from recent times.
Either way, a 32GB 9070XT would be immediately bought up for AI. Given the relative lack of performance improvement for gaming, the SKU probably does not make much sense.
Exactly what I thought. The reason probably is that there will be no more development on GDDR6, so there will be no 3GB or even 4GB modules. The only way to go over 16GB on a 256-bit bus is clamshell with eight 2GB modules on each side.
AusWolf: I don't see the point. Sure, Nvidia's 12 GB lineup is anaemic as f*ck, but this is unnecessary in this class of GPU (unless it's targeted at purely AI purposes, I guess).
The point is to compete with the 5070 Ti(S)/5080(S) with 24GB via the 3GB modules that will be offered later this year, or to offer something that meets the VRAM demands of AI.
csendesmark: Gaming-wise, there is little point moving above 12~16GB VRAM in 2025Q1.
See above, even 16GB can't be called future-proof anymore, but doubling down is the only way for GDDR6.
Octopuss: What's the bloody point?
See above.
PaddieMayne: Fantastic news, I hope they do a 24gb model.
See above, that would only be possible with 3GB modules, which are not likely to come to GDDR6 anymore.
AusWolf: If they did that with 3 GB memory modules while keeping the Navi 48 core intact, that would be awesome (still not necessary for gaming, but makes a bit more sense than 32 GB).
See above, I very much doubt that there will be any future development on GDDR6, so 2GB modules at 20-24Gbps are all we'll get for RDNA4.

Now I'm curious about the 9070XT again. If it beats the 5070 Ti in raster and isn't much slower in RT, I would consider buying team red instead of green. I wanted to wait for a 5070 Ti 24GB, which would now compete with the 9070XT 32GB.
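The bus-width arithmetic in the post above can be laid out explicitly. A sketch assuming the standard 32-bit per-chip GDDR6 interface and a 256-bit bus; module densities shown are the existing 2GB parts and the hypothetical 3GB parts discussed:

```python
# Sketch of the bus-width arithmetic: each GDDR6 chip has a 32-bit
# interface, so a 256-bit bus hosts eight chips per side of the PCB.
BUS_WIDTH_BITS = 256
CHIP_INTERFACE_BITS = 32

sites = BUS_WIDTH_BITS // CHIP_INTERFACE_BITS  # 8 chip sites

configs = {}
for density_gb in (2, 3):  # 2GB exists today; 3GB GDDR6 is hypothetical
    configs[density_gb] = {
        "single-sided": sites * density_gb,
        "clamshell": sites * 2 * density_gb,  # chips on both PCB sides
    }
    print(f"{density_gb}GB modules:", configs[density_gb])
```

This reproduces the options in the thread: 16 GB single-sided or 32 GB clamshell with today's 2GB chips, and 24 GB single-sided only if 3GB GDDR6 chips ever materialize.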
#41
Makaveli
Octopuss: What's the bloody point?
Useful for LLMs.

2 x 9070 XT 32GB gives me 64GB of usable VRAM.

Which means I can run 70B or larger models without having to hit system RAM.

If the price is $750, then two of them will be cheaper than two 3090s/4090s or two 7900XT/7900XTXs for the same use case, with more VRAM.
W1zzard: At $500-1k a day for developer time your idea probably won't work out
Valid point, but I also think people are just throwing around the AI term without talking about the details and use case, which is very important.

Are we talking AI image generation? LLMs and inference speed? For some things, even with a ton of VRAM, you may not want to leave the CUDA ecosystem, and for other cases it's fine.

The details matter.
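The 70B claim a few posts up can be checked with back-of-the-envelope math. This counts weights only (KV cache and activations add more, so the numbers are optimistic), and the quantization levels are illustrative assumptions:

```python
# Back-of-the-envelope check: does a 70B-parameter model fit in the
# 64 GB that two hypothetical 32 GB cards would provide? Weights only.
params_billion = 70
budget_gb = 64  # 2 x 32 GB cards

fits = {}
for name, bytes_per_param in [("FP16", 2.0), ("Q8", 1.0), ("Q4", 0.5)]:
    weights_gb = params_billion * bytes_per_param
    fits[name] = weights_gb <= budget_gb
    print(f"{name}: ~{weights_gb:.0f} GB of weights, fits in {budget_gb} GB: {fits[name]}")
```

So the claim holds at 4-bit quantization (~35 GB of weights, with headroom for context), while FP16 and even 8-bit would still spill into system RAM.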
#43
Yashyyyk
Modded Skyrim (esp. VR) takes >20GB VRAM; with Nvidia still limiting VRAM, I'd welcome it.
#44
RAA!
If this comes to fruition, then I know what my next card will be. As a heavy VRChat user, VRAM is very important when you're in a lobby of 80 people.

My old 12GB 3060 would at times outperform 3070s, and in one case a friend's (10GB) 3080, because they would run out of VRAM before I would.
#45
freeagent
Lots of VRAM, not enough horsepower.. cool!