Thursday, July 17th 2025

AMD Radeon AI PRO R9700 GPU Arrives on July 23rd

AMD confirmed today that its RDNA 4‑based Radeon AI PRO R9700 GPU will reach retail on Wednesday, July 23. Built on the Navi 48 die with a full 32 GB of GDDR6 memory and supporting PCIe 5.0, the R9700 is specifically tuned for lower‑precision calculations and demanding AI workloads. According to AMD, it delivers up to 496% faster inference on large transformer models compared to NVIDIA's desktop RTX 5080, while carrying roughly half the price of the forthcoming RTX PRO "Blackwell" series. At launch, the Radeon AI PRO R9700 will only be offered inside turnkey workstations from OEM partners such as Boxx and Velocity Micro. Enthusiasts who wish to install the card themselves can expect standalone boards from ASRock, PowerColor, and other add‑in‑board vendors later in Q3. Early listings suggest a price of around $1,250, placing it above the $599 RX 9070 XT yet considerably below competing NVIDIA workstation GPUs. Retailers are already accepting pre-orders.

Designed for AI professionals who require more than what consumer‑grade hardware can provide, the Radeon AI PRO R9700 excels at natural language processing, text‑to‑image generation, generative design, and other high‑complexity tasks that rely on large models or memory‑intensive pipelines. Its 32 GB of VRAM allows production-scale inference, local fine-tuning, and multi-modal workflows to run entirely on-premises, improving performance, reducing latency, and enhancing data security compared to cloud-based solutions. Full compatibility with AMD's open ROCm 6.3 platform provides developers with access to leading frameworks, including PyTorch, ONNX Runtime, and TensorFlow. This enables AI models to be built, tested, and deployed efficiently on local workstations. The card's compact dual-slot design and blower-style cooler ensure reliable front-to-back airflow in dense multi-GPU configurations, making it simple to expand memory capacity, deploy parallel inference pipelines, and sustain high-throughput AI infrastructure in enterprise environments.
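For developers, the practical upshot is that existing CUDA-style PyTorch code should run unchanged on ROCm; the snippet below is a minimal sketch, assuming the ROCm build of PyTorch is installed, with the device index, matrix sizes, and FP16 dtype chosen purely for illustration.

import torch

# On ROCm builds of PyTorch, the familiar torch.cuda API is backed by HIP,
# so the same code path used on NVIDIA hardware drives the Radeon GPU.
print(torch.cuda.is_available())        # True if a supported Radeon GPU is visible
print(torch.cuda.get_device_name(0))    # reports the detected device

# A small FP16 matrix multiply, the kind of lower-precision math the card targets.
a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
print((a @ b).shape)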
Sources: AMD Blog, via VideoCardz

90 Comments on AMD Radeon AI PRO R9700 GPU Arrives on July 23rd

#1
LabRat 891
At launch, the Radeon AI PRO R9700 will only be offered inside turnkey workstations from OEM partners such as Boxx and Velocity Micro
Peculiar.
The initial announcement more than implied that many AIBs (pretty much all of AMD's partners) will be making R9700s. Plus, Gigabyte has an Aorus (gaming) branded R9700 listing up.

BTW: Tech-America has since pulled the listings.
Posted on Reply
#2
AleksandarK
News Editor
LabRat 891Peculiar.
The initial announcement more than implied that many AIBs (pretty much all of AMD's partners) will be making R9700s. Plus, Gigabyte has an Aorus (gaming) branded R9700 listing up.
In Q3 yes, workstation partners have to sell some first ;)
Posted on Reply
#3
overclockedamd
LastDudeALiveAMD, come on buddy. Everyone knows your GPUs suck, you don't have to remind us how you can't compete unless you've rigged the game.


Sure the competing RTX 4500 Blackwell is going to be 2x the price, but it'll have 5x the performance and 2/3 the power consumption.
And how exactly do they suck? Not sure if this is bait or you are serious.
Posted on Reply
#4
LastDudeALive
overclockedamdAnd how exactly do they suck? Not sure if this is bait or you are serious.
I think advertising a GPU by saying it gets better performance when the "competition" does not have the physical ability to run the workload just makes AMD look desperate and silly. That's like if Intel advertised the Arc A310 as ∞% better than the GTX 1080-Ti (when ray tracing is on).

Why not pick a model that's optimized for RDNA 4 and claim ~50% better performance? Everyone knows that first-party benchmarks and advertising claims are just marketing. We're willing to accept a certain amount of BS. But there's a difference between cherry-picking benchmarks where you perform better, and being so desperate to appear competitive you lose all credibility.
Posted on Reply
#5
dgianstefani
TPU Proofreader
overclockedamdAnd how exactly do they suck? Not sure if this is bait or you are serious.
The lack of CUDA makes AMD DGPUs DOA for most things workstation/scientific. Occasionally there's a metric where AMD is competitive for workstations, but it's the exception, not the rule.

This is regarding productivity OFC, not gaming, where RDNA 4 is reasonably competitive vs Blackwell.

Beyond just the hardware, developer and software support for NVIDIA's CUDA architecture is so many orders of magnitude ahead it's not even funny. AMD has made some steps in the right direction recently, but they have a lot of catching up to do and NVIDIA has insane momentum.

LastDudeALiveI think advertising a GPU by saying it gets better performance when the "competition" does not have the physical ability to run the workload just makes AMD look desperate and silly. That's like if Intel advertised the Arc A310 as ∞% better than the GTX 1080-Ti (when ray tracing is on).

Why not pick a model that's optimized for RDNA 4 and claim ~50% better performance? Everyone knows that first-party benchmarks and advertising claims are just marketing. We're willing to accept a certain amount of BS. But there's a difference between cherry-picking benchmarks where you perform better, and being so desperate to appear competitive you lose all credibility.
Or how AMD are currently comparing the $11700 96 core 9995WX against the $5890 60 core Xeon-3595X, instead of the $7999 64 core 9985WX.

Anything to show a bigger bar chart.
Posted on Reply
#6
overclockedamd
So a 4 trillion dollar company that has rigged the game for decades is ahead so that's why AMD gpus suck. Got it.
Posted on Reply
#7
igormp
Early listings suggest a price of around $1,250
That's actually not that bad, all things considered.

If it actually retails for that price, it will be a great option for those that have the knowledge and bandwidth to sort out some minor ROCm quirks, or that use stacks that support it already.
Posted on Reply
#8
LastDudeALive
dgianstefaniThe lack of CUDA makes AMD DGPUs DOA for most things workstation/scientific. Occasionally there's a metric where AMD is competitive for workstations, but it's the exception, not the rule.

This is regarding productivity OFC, not gaming, where RDNA 4 is reasonably competitive vs Blackwell.

Beyond just the hardware, developer and software support for NVIDIA's CUDA architecture is so many orders of magnitude ahead it's not even funny. AMD has made some steps in the right direction recently, but they have a lot of catching up to do and NVIDIA has insane momentum.

Or how AMD are currently comparing the $11700 96 core 9995WX against the $5890 60 core Xeon-3595X, instead of the $7999 64 core 9985WX.

Anything to show a bigger bar chart.
Many system builders like Puget don't even offer the professional AMD GPUs in their workstations. Even at half the price of the alternative Nvidia GPU, it's just never worth it.
Posted on Reply
#9
lexluthermiester
dgianstefaniThe lack of CUDA makes AMD DGPUs DOA for most things workstation/scientific. Occasionally there's a metric where AMD is competitive for workstations, but it's the exception, not the rule.
For that reason alone, AMD needs to come up with its own version of CUDA.
Posted on Reply
#10
dgianstefani
TPU Proofreader
LastDudeALiveMany system builders like Puget don't even offer the professional AMD GPUs in their workstations. Even at half the price of the alternative Nvidia GPU, it's just never worth it.
Correct, and many software developers don't even certify for non-NVIDIA cards; the install base is just too low to make financial sense. It doesn't help that AMD's support of their architectures seems to depend on mood; sometimes architectures aren't given ROCm support for months after release, or only certain models get it.

Even a $300 RTX 5060 or a used RTX 2060 will run CUDA software, though you might end up desiring more VRAM.

AMD needs to prove to software companies, developers, and users that they will continue to support cards many years after release, and code will function on a 10 year old CUDA card or a brand new one, like NVIDIA does.
lexluthermiesterFor that reason alone, AMD needs to come up with its own version of CUDA.
What they're doing instead is trying to write translation layers like SCALE or ZLUDA so that CUDA will work on AMD. It's not working very well. The problem is NVIDIA released CUDA in 2007 and has been steadily improving it since then, while fostering developer and partner support by providing free resources, libraries, and education, at great cost. That investment is what has paid off and continues to pay off, as now everyone is pulling on the CUDA rope, whereas AMD skimping on support and development has meant they are irrelevant in the prosumer/workstation domain. Support is rarely dropped; even the recent fiasco over 32-bit PhysX being deprecated (due to 32-bit CUDA being dropped) with Blackwell only goes to show that the concept of an NVIDIA card not natively running everything is incredibly shocking to the community.
Posted on Reply
#11
lexluthermiester
dgianstefaniWhat they're doing instead is trying to write translation layers so that CUDA will work on AMD. It's not working very well.
Yeah, that's why I suggested making their own.
Posted on Reply
#12
dgianstefani
TPU Proofreader
lexluthermiesterYeah, that's why I suggested making their own.
Why would anyone use it? Why go to the effort of learning as a dev, or funding/risking as a business, for what will be a fundamentally worse product, for at least five to ten years? It's a difficult situation, they needed to "make their own CUDA" 15 years ago. The translation layers and hoping for opensource support that a percentage of the market use is the best they can hope for at this point.
Posted on Reply
#13
LastDudeALive
igormpIf it actually retails for that price, it will be a great option for those that have the knowledge and bandwidth to sort out some minor ROCm quirks, or that use stacks that support it already.
That's true; it would be the most affordable 32GB GPU if it's available to consumers, and not just OEMs. Certainly attractive to enthusiasts and DIY types. But that tiny market isn't anywhere near enough to sustain AMD. Just look at how the Arc cards are doing. The A310 is a very good HTPC or media server GPU. But niche use cases like that can't be the business model. Many people would probably still pay the premium for a 5090.
Posted on Reply
#14
Hecate91
overclockedamdSo a 4 trillion dollar company that has rigged the game for decades is ahead so that's why they suck. Got it.
CUDA being open source would've been better for everyone, but it's Nvidia; they've gotten 90% of the market from rigging the game.
Posted on Reply
#15
lexluthermiester
dgianstefaniWhy would anyone use it?
Why would anyone use an alternative to something? If AMD can offer one and make a compelling reason to use it, people will. Hell, I know more than a few people who would love an AMD answer to CUDA just to switch away from NVidia. Personally, I love NVidia's compute, it's served me VERY well. But IF AMD had a compelling alternative, one that could or would serve me better, hell yes I'd switch.
Posted on Reply
#16
dgianstefani
TPU Proofreader
Hecate91CUDA being open source would've been better for everyone, but it's Nvidia; they've gotten 90% of the market from rigging the game.
Yes, NVIDIA should shoot themselves in the foot by doing all the work and then giving it to everyone for free, and them not doing so is "rigging the game".
:laugh:
Making a forward-thinking product before anyone else realised it was a good idea, then working on it continuously for ~20 years while retaining support for older architectures and promoting adoption, is "rigging the game".
Mhm.
I suppose AMD should have given x86-64 to Intel instead of cross-licensing it? I mean, why patent or protect their IP? AMD should just give away everything for free. That's what they've been doing all this time, right?
Businesses operate as businesses, more news at 7.
lexluthermiesterWhy would anyone use an alternative to something? If AMD can offer one and make a compelling reason to use it, people will. Hell, I know more than a few people who would love an AMD answer to CUDA just to switch away from NVidia. Personally, I love NVidia's compute, it's served me VERY well. But IF AMD had a compelling alternative, one that could or would serve me better, hell yes I'd switch.
The key words there are "compelling" and "better". AMD and its partners would have to do how many millions/billions of man-hours of development to get to the point where those words were accurate descriptions? I too enjoy some wishful thinking sometimes, but doing so doesn't make me an optimist that these things will or even can happen.

Thankfully the GPU division isn't going to just die, because it's pulled along by AI/CDNA at hyperscale, hence why RDNA is being dropped next gen.
Posted on Reply
#17
Onasi
dgianstefaniI too enjoy some wishful thinking sometimes but doing so doesn't make me an optimist that these things will or even can happen.
Anyone going in 20-fucking-25 “Why doesn’t AMD/Intel/MooreThreads/McDonalds/whatever just create their own CUDA alternative? Are they stupid?” is late by a decade at a minimum. Trying to move the entirety of, uh, GPGPU world to a new unproven solution when they spent years building everything around the incumbent standard is almost impossible. Like, I doubt people understand the sheer magnitude of the task.
Posted on Reply
#18
igormp
lexluthermiesterFor that reason alone, AMD needs to come up with its own version of CUDA.
That's exactly what ROCm is. It's not great, but it has certainly been improving and getting more traction.
dgianstefaniWhat they're doing instead is trying to write translation layers like SCALE or ZLUDA so that CUDA will work on AMD. It's not working very well.
ZLUDA has been "abandoned" by AMD due to licensing reasons; it's now an independent project. SCALE (from Spectral Compute) is also independent from AMD, afaik.
dgianstefaniWhy would anyone use it? Why go to the effort of learning as a dev, or funding/risking as a business, for what will be a fundamentally worse product, for at least five to ten years? It's a difficult situation, they needed to "make their own CUDA" 15 years ago. The translation layers and hoping for opensource support that a percentage of the market use is the best they can hope for at this point.
People are already using it, fwiw. For small-scale companies, having someone do the grunt work to get it to work can save them money on actual hardware acquisition costs. For large-scale companies, it's simply a matter of Nvidia not being able to fulfill their requirements, so they have no option other than buying other products and making them work, which is not exclusive to AMD.
In most cases Nvidia is still the option that makes the most sense, but AMD products are gaining traction, and I do recommend you get updated on the ongoing progress there, since your opinion seems to be quite outdated.
LastDudeALiveBut that tiny market isn't anywhere near enough to sustain AMD.
That's not meant to sustain AMD, but rather to enable folks to use an AMD product with their software stack. There's no point in having ROCm working flawlessly on really amazing and effective rack-scale hardware solutions if you aren't able to hire anyone who can work on them.
Nvidia has done the same with GeForce and CUDA, enabling students and researchers to dip into the stack with consumer-grade hardware, and then move up the chain and make use of their bigger offerings as time goes, with Pro offerings for their workstations so they can develop stuff, and deploying such things on the big x100 chips, all with the same code.
AMD is now trying to do a similar thing, and UDNA is a clear example of this path. Get consumer-grade hardware that enables the common person to dip their toes into ROCm (this is the part they are mostly lacking), allow them to have a better workstation option paid by their employer (product in the OP), and then deploy this on a big instinct platform.
Hecate91CUDA being open source would've been better for everyone, but it's Nvidia; they've gotten 90% of the market from rigging the game.
CUDA is not a single piece of software, and MANY parts of it are open sourced already. Your complaint is clearly from someone who just wants to bash a company without any experience with the actual stack.
Posted on Reply
#19
LastDudeALive
RedwoodzSaying Nvidia has earned the right is like saying a bank robber earned his loot by tunneling through the floor.
Ahh yes, because a unified programming language that works on everything from 2 GB mobile chips to petabyte-scale datacenters was just lying around for everyone to use until Nvidia paywalled it. 20 years ago, we all had our personal 500-billion-parameter LLMs, until Nvidia stole them from us and forced us to pay extra for their GPUs.
Posted on Reply
#20
dgianstefani
TPU Proofreader
igormpIn most cases Nvidia is still the option that makes the most sense, but AMD products are gaining traction, and I do recommend you get updated on the ongoing progress there, since your opinion seems to be quite outdated.
My opinion will be outdated when these ROCm alternatives start being viable outside of edge cases, or due to NVIDIA chips/platforms being literally backordered so much that it's a choice of use AMD for now or wait six months to even start because the CTO didn't think ahead six months ago.

Technically having a similar capability ≠ being equivalent.

The market speaks for itself.

RTX 3090s/4090s are still snapped up for close to what people paid for them new, because they're just so useful.

Same goes for two to three generation old RTX Pro cards, which are also valuable on the second-hand market because they, too, are just so useful.

What's a Vega 64/AMD workstation card from a few generations ago worth now when support has already dropped?
Posted on Reply
#21
igormp
dgianstefaniMy opinion will be outdated when these ROCm alternatives start being viable outside of edge cases
Which ROCm alternatives? ROCm itself is viable in many scenarios. As I said, far from being close to Nvidia, but it's not the shitshow that it used to be anymore.
Heck, even upstream pytorch support is in place and you can make use of it as simple as if you were running CUDA.
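Something like this runs as-is on the ROCm wheel of PyTorch, with nothing AMD-specific in the code (rough sketch, layer sizes made up):

import torch
import torch.nn as nn

# Same code you'd write for CUDA; on a ROCm build the "cuda" device is HIP-backed.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 1024, device=device)
y = torch.randint(0, 10, (32,), device=device)

loss = nn.functional.cross_entropy(model(x), y)  # one tiny training step
loss.backward()
opt.step()
print(device, loss.item())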
dgianstefanior due to NVIDIA chips/platforms being literally backordered so much that it's a choice of use AMD for now or wait six months to even start because the CTO didn't think ahead six months ago.
That's unrelated to Nvidia itself. AMD's Instinct GPUs are also backordered by a LOT. Ofc their capacity planning was a fraction of Nvidia's, but still.
People are looking for accelerators right and left.
dgianstefaniWhat's a Vega 64/AMD workstation card from a few generations ago worth now when support has already dropped?
That's the wrong comparison. The Vega 64 is worth as much as Pascal; no one cares about those, and newer gens made them look like crap.
AMD only started having worthwhile hardware for compute with the 7000 generation/RDNA3; before that it was moot talking about it, given that it lacked both hardware and software.
Back to your point:
dgianstefaniRTX 3090s/4090s are still snapped up for close to what people paid for them new, because they're just so useful.
So are the 7900xtx'es (which had a lower MSRP than the nvidia equivalent to begin with), and hence why I'm still considering another used 3090 instead of a 4090/7900xtx. But I'm in no rush and will wait for the 24GB battlemage and that r9700 before deciding.

If I were able to snatch a 7900xtx for cheap, I would have already bought a couple of those :laugh:
Posted on Reply
#22
dgianstefani
TPU Proofreader
igormpWhich ROCm alternatives? ROCm itself is viable in many scenarios. As I said, far from being close to Nvidia, but it's not the shitshow that it used to be anymore.
Heck, even upstream pytorch support is in place and you can make use of it as simple as if you were running CUDA.
ROCm is the alternative, the default (no need to even say it) is CUDA.

Viable (if that's true, which isn't clear) ≠ competitive.
igormpThat's the wrong comparison. The Vega 64 is worth as much as Pascal; no one cares about those, and newer gens made them look like crap.
AMD only started having worthwhile hardware for compute with the 7000 generation/RDNA3; before that it was moot talking about it, given that it lacked both hardware and software.
Back to your point:
Ask a dev whether they'd prefer a Vega 64 or a 1080 Ti. Are you really going to keep pretending they're even close to being equivalent for productivity software?

The 1080 Ti supports CUDA. The Vega 64 supports what?
igormpSo are the 7900xtx'es (which had a lower MSRP than the nvidia equivalent to begin with), and hence why I'm still considering another used 3090 instead of a 4090/7900xtx. But I'm in no rush and will wait for the 24GB battlemage and that r9700 before deciding.

If I were able to snatch a 7900xtx for cheap, I would have already bought a couple of those :laugh:
Another false equivalence. The 7900XTXs are nowhere near as sought after as the 24/32 GB NVIDIA consumer cards for prosumer/workstation use, beyond people who don't actually make money with their GPU and just like big VRAM numbers. The release of the 9070XT has made them essentially irrelevant for gaming too, though there are still hangers-on due to what, 5% better raster?

Again, CUDA still works on these cards, they're just not getting further developments.

Posted on Reply
#23
igormp
dgianstefaniROCm is the alternative, the default (no need to even say it) is CUDA.
Well, your phrasing made it sound weird:
dgianstefaniMy opinion will be outdated when these ROCm alternatives start being viable outside of edge cases
Makes more sense if you typo'ed and meant "CUDA alternatives" instead.
dgianstefaniViable (if that's true, which isn't clear) ≠ competitive.
Please, do point to whenever I said it's competitive.
It is starting to be viable. Just to be clear, so far we have been talking about AI and whatnot, for which Pytorch already has upstream ROCm support, and even vLLM is getting first-class support after AMD got their shit straight and is helping the devs.
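The ROCm build of vLLM exposes the same Python API as the CUDA build, so a quick sketch looks like this (the model name is just a placeholder):

from vllm import LLM, SamplingParams

# Identical code on ROCm and CUDA builds of vLLM; the model is only an example.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)
out = llm.generate(["The Radeon AI PRO R9700 is"], params)
print(out[0].outputs[0].text)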

If you were to say something like rendering (which is not my area), I'd have no opinion whatsoever, other than agreeing that Optix simply mops the floor with AMD when it comes to blender.
dgianstefaniAsk a dev whether they'd prefer a Vega 64 or a 1080 Ti. Are you really going to keep pretending they're even close to being equivalent for productivity software?

The 1080 Ti supports CUDA. The Vega 64 supports what?
The 1080 Ti has no cooperative matrix support, and its performance is subpar; a 2060 manages to be over 2x faster for anything involving matrices.
Both are shit.

Do you know what folks on a really tight budget prefer? Cheap and crappy GPUs with tons of VRAM, be it a P40 or an MI25. If not that, GeForce Pascal makes no sense whatsoever and you'd be better off with a 2060, so the Vega 64 and 1080 Ti are equally irrelevant.
dgianstefaniAnother false equivalence. The 7900XTXs are nowhere near as sought after as the 24/32 GB NVIDIA consumer cards for prosumer/workstation
7900XTXs are simply almost impossible to find used in my market, and go for way more than 3090s, and priced way too close to 4090s (which doesn't make sense whatsoever).
Looking over on ebay US, 3090s are a bit cheaper than 7900XTXs, with the 4090s being way more expensive than both.
dgianstefanibeyond people who don't actually make money with their GPU and just like big VRAM numbers.
Now I'm confused. We are talking about used products. People making proper money as a business won't even be looking for those so the discussion of used products would be moot.
Hobbyists and small scale stuff would be the ones looking into those, no matter if nvidia or AMD, and that's where the argument makes sense.
dgianstefaniThe release of the 9070XT has made them essentially irrelevant for gaming too, though there are still hangers-on due to what, 5% better raster?
What does gaming have to do with anything we've discussed so far?
dgianstefaniAgain, CUDA still works on these cards, they're just not getting further developments.
So? How's that any relevant? I'm not talking about those lacking any sort of software support, rather that the hardware itself is useless, which applies for both vega64 and the 1080ti.


Again, all your points seem to be from someone that has no industry experience in that specific field. I'm not sure what you're even trying to argue for anymore lol
Posted on Reply
#24
dgianstefani
TPU Proofreader
igormpWell, your phrasing made it sound weird:

Makes more sense if you typo'ed and meant "CUDA alternatives" instead.
The way I wrote it made sense enough. ROCm, ZLUDA, SCALE, these alternatives to CUDA (doesn't need to be said, it's the default, as I've already stated).
igormpPlease, do point to whenever I said it's competitive.
If it's not competitive, why bother?
igormpIf you were to say something like rendering (which is not my area), I'd have no opinion whatsoever, other than agreeing that Optix simply mops the floor with AMD when it comes to blender.
CUDA cards mop the floor with every AMD prosumer/productivity/workstation alternative, viable or not. Until that changes, there is essentially no competition. Proving something like ROCm works in a lab or when on a budget does not = the competitive solution the entire rest of the market goes for. Move the needle or be ignored.
igormpThe 1080 Ti has no cooperative matrix support, and its performance is subpar; a 2060 manages to be over 2x faster for anything involving matrices.
Both are shit.
It has CUDA, so it is useful; pretty much that simple. 11 GB of VRAM and enough cores to do most of the work you'd do on a 4080, just slower. The point is it's viable, a word you use. It was competitive on release eight years ago, and today it's usable and does the job. Obviously, if building new on a $300-400 budget you would pick something like a 4060/5060 Ti 16 GB, not a used 1080 Ti. The point I'm making is that in a choice between Vega and Pascal, the Pascal developer/professional is still making money today, and has got eight years of usefulness out of their card. The later RTX cards didn't exist when the 1080 Ti released; the Vega 64 did, hence my comparison, so arguing they are better is irrelevant.
igormp7900XTXs are simply almost impossible to find used in my market, and go for way more than 3090s, and priced way too close to 4090s (which doesn't make sense whatsoever).
Looking over on ebay US, 3090s are a bit cheaper than 7900XTXs, with the 4090s being way more expensive than both.
Yes, I know, the 7900XTX doesn't make sense.
igormpNow I'm confused. We are talking about used products. People making proper money as a business won't even be looking for those so the discussion of used products would be moot.
Hobbyists and small scale stuff would be the ones looking into those, no matter if nvidia or AMD, and that's where the argument makes sense.
People making proper money as a business buy RTX Pro cards. The rest buy xx90 cards or whatever they can afford/find.
igormpWhat does gaming have to do with anything we've discussed so far?
They're irrelevant for both gaming/productivity in 2025 when picking a card, for gaming you pick 9070XT, for productivity you pick NVIDIA, that's what it has to do with this discussion.
igormpSo? How's that any relevant? I'm not talking about those lacking any sort of software support, rather that the hardware itself is useless, which applies for both vega64 and the 1080ti.
"Useless". You're bundling both into the same camp because that way you can claim a false equivalence, instead of admitting the 1080 Ti was way more useful as a productivity card over the past eight years, and indeed today.
igormpAgain, all your points seem to be from someone that has no industry experience in that specific field. I'm not sure what you're even trying to argue for anymore lol
Appeal to authority huh?
Posted on Reply
#25
Vya Domus
dgianstefaniOr how AMD are currently comparing the $11700 96 core 9995WX against the $5890 60 core Xeon-3595X, instead of the $7999 64 core 9985WX.

Anything to show a bigger bar chart.
"Anything to show a bigger bar chart"

lol

They are comparing their best vs their best.
Posted on Reply