Wednesday, January 8th 2025

ASUS TUF Gaming Radeon RX 9070 XT Comes with Three 8-pin Power Connectors

At the 2025 International CES, ASUS showed off its Radeon RX 9070 XT TUF Gaming graphics card as part of a multi-brand showcase AMD set up in its booth. The card features the latest generation of the TUF Gaming board design that the company is debuting with the GeForce RTX 50-series and Radeon RX 90-series. It uses a triple-slot cooling solution, with its Axial-Tech fans taking up an entire slot (thicker fans mean lower RPM). The PCB is 2/3 the length of the card, so all airflow from the third fan is vented through the heatsink and out a large cutout in the backplate.

Perhaps the most striking feature of the ASUS TUF Gaming RX 9070 XT is its power inputs: the card calls for three 8-pin PCIe power connectors. We've only seen one other custom RX 9070 XT come with three connectors, and that is the XFX RX 9070 XT Merc 319 Black. The question then arises: what is a small performance-segment GPU going to do with 525 W of power on tap? Most other cards, including the PowerColor Red Devil, come with just two 8-pin connectors (375 W). Does the presence of three connectors mean that the board power of overclocked RX 9070 XT cards exceeds 300 W, and that board partners are trying to reduce the load on the 75 W supplied by the PCIe slot by sneaking in a third 8-pin input? This isn't the only oddball power connector configuration we've seen at CES for the RX 9070 series. The ASRock RX 9070 XT Taichi comes with a 16-pin 12V2x6 power connector, although there's no way of telling yet whether it is configured for 600 W; it could even be keyed for 300 W.
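For reference, those figures follow directly from the PCIe power ratings of 150 W per 8-pin connector plus 75 W from the slot; a quick back-of-the-envelope sketch in Python (the helper name here is purely illustrative):

    # Rated PCIe power budget: 150 W per 8-pin connector plus 75 W from the slot.
    # These are spec ceilings, not measured draw.
    EIGHT_PIN_W = 150
    SLOT_W = 75

    def rated_board_power(num_8pin: int) -> int:
        """Maximum rated board power for a card with the given number of 8-pin inputs."""
        return num_8pin * EIGHT_PIN_W + SLOT_W

    print(rated_board_power(2))  # 375 W, e.g. the PowerColor Red Devil
    print(rated_board_power(3))  # 525 W, e.g. this ASUS TUF Gaming card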

77 Comments on ASUS TUF Gaming Radeon RX 9070 XT Comes with Three 8-pin Power Connectors

#51
Vya Domus
PATHGALOREIt's not the traces, but broken solder balls due to GPU sag. It's going to be a core reballing festival.
You're still wrong about this: the closer the GPU core is to the PCIe slot, the smaller the forces exerted on the solder joints. It's like a lever: the further away you go from the pivoting point (the slot), the more the PCB will flex.
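A toy end-loaded cantilever model illustrates the point. All numbers below are illustrative assumptions, not measurements of any card:

    # Treat the PCB as a beam clamped at the PCIe slot with the cooler's weight
    # acting at the far end. Deflection grows with distance from the slot, so a
    # die mounted close to the slot sees less PCB movement. Values are made up.
    def cantilever_deflection_mm(x_mm: float, length_mm: float = 300.0,
                                 load_n: float = 15.0, ei_n_mm2: float = 2.0e7) -> float:
        """Deflection at distance x from the clamped end of an end-loaded beam."""
        return load_n * x_mm**2 * (3 * length_mm - x_mm) / (6 * ei_n_mm2)

    for x in (50, 150, 300):  # near the slot, mid-board, far end
        print(f"{x:>3} mm from the slot: {cantilever_deflection_mm(x):.2f} mm of sag")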
Posted on Reply
#52
Tomorrow
Dr. DroBasic models have the 2 connectors and will likely suffer from power limit issues on such base models when overclocked (since I apparently forgot to mention this),
Overclocked to where? All cards produced these days are already running at their limits, and AIB models with two 8-pins will in no way be limited by nonexistent OC potential.
Dr. Drosame situation the RTX 3090 had. That's why NV adopted the 2x6 connector, it eliminates that problem and ensures an ample power supply regardless of model.
Nvidia mainly adopted the 12-pin (back then, not the 16-pin or 2x6) because their PCB was so small that it could not properly fit three 8-pins without either artificially extending the PCB or using a soldered-on add-on board on the cooler. It had nothing to do with two 8-pins being unable to provide 450 W.
hsewNever gonna complain about spreading the power load among more, lower gauge connectors after seeing the 12V debacle of the 40 series…
The 8-pin has never had this problem, hence load balancing is unnecessary.
oxrufiioxoI like how most of the 9070s are bigger than the 5090 FE lol... Although people love them some BFGs...
As if 50-series AIB cards are all two-slot models?
JustBenchingDLSS 4 is good; it's people's perception (and I guess Nvidia's push) of how it should be used that's problematic. Sure, you might argue "who needs 240 fps", but I'd say, why not? We have the monitors capable of doing that, we don't have the CPUs or GPUs, and MFG fixes that.
It's not 240 fps. It is the perception of 240 fps smoothness, but with the latency of the original framerate.
FG always requires a latency-reducing option to be enabled, such as Reflex and now Reflex 2.
Without this enabled you get 240 fps, but it doesn't feel like 240 because of the input delay.
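A rough sketch of that arithmetic, with illustrative numbers only; real latency also depends on render queue depth, Reflex, and frame-generation overhead:

    # Multi-frame generation multiplies the displayed frame rate, but input is
    # only sampled on rendered frames, so responsiveness still tracks the base
    # frame time. Illustrative numbers only.
    def mfg_summary(base_fps: float, factor: int) -> tuple[float, float]:
        displayed_fps = base_fps * factor
        base_frametime_ms = 1000.0 / base_fps  # input cadence roughly follows this
        return displayed_fps, base_frametime_ms

    fps, cadence_ms = mfg_summary(base_fps=60, factor=4)
    print(f"Displayed: {fps:.0f} fps, input cadence still ~{cadence_ms:.1f} ms")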
Dr. DroSure, until you realize the clock speeds here are exceeding 3 GHz out of the box and it may perhaps overclock to near 4-ish? At least 3.5.
Show me a card in the last ten years that did a +1 GHz OC on air. I think the closest might have been the 980 Ti in 2015, as it OC'ed well, but even that was not able to do a full GHz on top of its boost clocks. At least not on air or without hardmods.
wolfThey'd rather stack on 3x8 pins than do a single 12V-2X6 lol. I wonder if pleasing a vocal minority is really the better option over space efficiency.
"Vocal minority" are people like you asking for 16pin. Most AMD users i know dont want that.
Also this is just typical AIB flexing. Reference designs will have 2x8pin.
And im sure "space efficiency" is paramount on a 3,5 slot behemoth of a card...
PumperWell, that kills any intent I had to even consider switching to AMD for my GPU.
That some AIB models include a third connector (one you don't even need to use, by the way)?
What an odd reason to write off an entire brand.
Chrispy_9070XT is supposed to be a 7900XT-class GPU on a much more power-efficient node.

If it consumes more than 300W then AMD have really f***ed up.
Leaks say stock is ~260W. These are all AIB models that are supposedly up to 330W.
By your logic Nvidia also f***ed up with 5090 going to 600W from 450W on 4090 despite using a more power efficient node with a slightly larger chip.
PATHGALOREIt's not the traces, but broken solder balls due to GPU sag. It's going to be a core reballing festival.
Vya DomusYou're still wrong about this: the closer the GPU core is to the PCIe slot, the smaller the forces exerted on the solder joints. It's like a lever: the further away you go from the pivoting point (the slot), the more the PCB will flex.
Exactly. Near the PCIe connector there should be the least PCB warping.
Posted on Reply
#53
Vayra86
wolfThey'd rather stack on 3x8 pins than do a single 12V-2X6 lol. I wonder if pleasing a vocal minority is really the better option over space efficiency.

And does it even really need three? 525 W on a 9070 XT... talk about pushing far into inefficiency for a hundred MHz or two.
I rather think they are just re-using 7900 XT(X) coolers and boards. It's not like those sell. It wouldn't surprise me one bit if they max out at 300 W TGP.
Posted on Reply
#54
JustBenching
TomorrowIt's not 240 fps. It is the perception of 240 fps smoothness, but with the latency of the original framerate.
FG always requires a latency-reducing option to be enabled, such as Reflex and now Reflex 2.
Without this enabled you get 240 fps, but it doesn't feel like 240 because of the input delay.
And Reflex being open is a problem, why?
Posted on Reply
#55
Tomorrow
JustBenchingAnd Reflex being open is a problem, why?
Did I say it was a problem?

I merely explained that it's not a true tripling of FPS, as some people who just watch Nvidia marketing mistakenly believe.
And that Reflex or AMD's analogue always needs to be used with FG to get the best experience.
It's very good that it's open and not limited to only (select) Nvidia series or cards.

Also, I'm far more impressed with Reflex 2 than I am with DLSS 4 MFG. The Lossless Scaling program already proved that even older cards can generate more than one extra frame, so MFG was no surprise to me and more of a natural evolution.
Posted on Reply
#56
Chrispy_
TomorrowBy your logic Nvidia also f***ed up with 5090 going to 600W from 450W on 4090 despite using a more power efficient node with a slightly larger chip.
The 9070XT's Navi48 is a much smaller monolithic chip than Navi31, which is an inefficient MCM design on an older process node than even Nvidia's 40-series.
AMD are gaining the benefits of jumping forwards two process nodes AND reverting back to more efficient monolithic silicon AND dropping the memory bus from 320-bit to 256-bit, which means fewer memory controllers and GDDR6 packages to power.

I have no idea WTF point you're trying to make with the 4090 > 5090 example. The 5090 has 32% more cores, is clocked higher, has a 33% wider bus and 33% more memory modules to drive, and those GDDR7 modules, despite being 20% more energy efficient than previous-gen VRAM, are actually running 80% faster, so they're a net power drain compared to the 4090's GDDR6X. It seems like you understand neither my original point, nor what causes energy consumption in modern GPU designs.
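Taking the percentages in that post at face value (and reading "20% more efficient" as 20% less energy per transferred bit), the memory-power arithmetic works out roughly as follows; it ignores idle power, voltage changes, and everything else:

    # First-order reading of the figures quoted above; not official power data.
    speedup = 1.80            # GDDR7 running 80% faster than the 4090's GDDR6X
    energy_per_bit = 0.80     # assumed: "20% more efficient" = 20% less energy/bit
    extra_modules = 1.33      # "33% more memory modules to drive"

    per_module = speedup * energy_per_bit   # ~1.44x power per module
    total = per_module * extra_modules      # ~1.92x memory subsystem power overall
    print(f"~{per_module:.2f}x per module, ~{total:.2f}x overall vs the 4090")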
Posted on Reply
#57
Tomorrow
Chrispy_The 9070XT's Navi48 is a much smaller monolithic chip than Navi31, which is an inefficient MCM design on an older process node than even Nvidia's 40-series.
Reported as 390 mm². I would not call it small. Also, comparing to N31 is not ideal, as the MCDs added a lot more die area than was really necessary to that design. It's best to compare to another monolithic design.
Chrispy_AMD are gaining the benefits of jumping forwards two process nodes
One. 5nm to 4nm. The N31 GCD was 5nm; the MCDs were 6nm.
Posted on Reply
#58
Gasaraki
Vya DomusSame way Nvidia does? There are also custom 50-series cards which are giant, and I'll remind everyone it was AMD who first made a point about releasing more compact reference cards with the 7000 series.
Nvidia has an SFF spec for the 3000, 4000, and now 5000 series cards. It's up to manufacturers whether they want to follow it.
Posted on Reply
#59
roki4
oxrufiioxoI like how most of the 9070s are bigger than the 5090 FE lol... Although people love them some BFGs...
5090 AIB models are still big bricks, and this 9070 is the ASUS OC model, so...
Posted on Reply
#60
Chrispy_
TomorrowOne. 5nm to 4nm. The N31 GCD was 5nm; the MCDs were 6nm.
5nm to 4nm is one, two, or three nodes, depending on how you name, categorise, or measure the damn things.

N5/N5P > 4N > N4/N4P

Regardless of naming or semantics, AMD have a node shrink this generation, Nvidia does not - because Nvidia Blackwell is on the exact same N4 node they were on with Ada.
Posted on Reply
#61
3valatzy
PATHGALOREThe core is too low on the PCB; in a year there's going to be a shit ton of faulty cards.
Vya DomusShorter traces to the PCIe slot mean better reliability, not worse.
PATHGALOREIt's not the traces, but broken solder balls due to GPU sag. It's going to be a core reballing festival.
Vya DomusYou're still wrong about this: the closer the GPU core is to the PCIe slot, the smaller the forces exerted on the solder joints. It's like a lever: the further away you go from the pivoting point (the slot), the more the PCB will flex.
TomorrowExactly. Near the PCIe connector there should be the least PCB warping.
This is a better location, but it will threaten the motherboard by dumping heat onto it.
Chrispy_5nm to 4nm is one, two, or three nodes, depending on how you name, categorise, or measure the damn things.

N5/N5P > 4N > N4/N4P

Regardless of naming or semantics, AMD have a node shrink this generation, Nvidia does not - because Nvidia Blackwell is on the exact same N4 node they were on with Ada.
4nm is too little, too late in 2025. TSMC and Apple have been working on 3nm for two years already, and on its minor improvement 2nm for several months already!
Posted on Reply
#62
wNotyarD
3valatzy4nm is too little, too late in 2025. TSMC and Apple have been working on 3nm for two years already, and on its minor improvement 2nm for several months already!
Yes, but AMD doesn't want to spend through the roof on the filler generation that RDNA4 is. That, and they have a foundry node vastly available for mass production without that much competition from anyone.
Posted on Reply
#63
3valatzy
wNotyarDYes, but AMD doesn't want to spend through the roof on the filler generation that RDNA4 is. That, and they have a foundry node vastly available for mass production without that much competition from anyone.
Wow, all Snapdragons are on this node. I wouldn't call it "vastly available". It is a mistake by the management, which should go.
Posted on Reply
#64
wNotyarD
3valatzyWow, all Snapdragons are on this node. I wouldn't call it "vastly available". It is a mistake by the management, which should go.
Again, how much allocation can AMD secure on a more mature node? If even with Qualcomm using the same node (and we're talking about lithography here, not what actual process is in use) there's much more available for AMD to get their products made, that's a win.
Posted on Reply
#65
3valatzy
wNotyarDAgain, how much allocation can AMD secure on a more mature node? If even with Qualcomm using the same node (and we're talking about lithography here, not what actual process is in use) there's much more available for AMD to get their products made, that's a win.
Even if Nvidia also uses the same node? And has many more wafers reserved for themselves?
Posted on Reply
#66
wNotyarD
3valatzyEven if Nvidia also uses the same node? And has many more wafers reserved for themselves?
Even with everyone else getting their allocation, will staying with this node net AMD more or less wafers (and savings at scale) than if they moved to a more advanced node?
Posted on Reply
#67
oxrufiioxo
wNotyarDEven with everyone else getting their allocation, will staying with this node net AMD more or less wafers (and savings at scale) than if they moved to a more advanced node?
20k per wafer with N3/N4 but over 25k for N2, and apparently they are raising prices again in 2025... My guess is N3 just isn't suitable for large GPUs.
Posted on Reply
#68
AnotherReader
oxrufiioxo20k per wafer with N3/N4 but over 25k for N2, and apparently they are raising prices again in 2025... My guess is N3 just isn't suitable for large GPUs.
N4 isn't the same node as N3; it's a tweaked N5. N3 is now estimated to cost $18,000 per wafer, so the earlier estimates are likely inflated.
"So, we are not done with our 5nm and 4nm [technologies]," said Kevin Zhang, Vice President of Business Development at TSMC. "From N5 to N4, we have achieved 4% density improvement optical shrink..."
TomorrowReported as 390 mm². I would not call it small. Also, comparing to N31 is not ideal, as the MCDs added a lot more die area than was really necessary to that design. It's best to compare to another monolithic design.

One. 5nm to 4nm. The N31 GCD was 5nm; the MCDs were 6nm.
While it doesn't seem to be as small as rumoured earlier, other estimates range from 300 mm² to 330 mm².
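For scale, the standard dies-per-wafer approximation shows how much that size spread matters on a 300 mm wafer; yield and scribe lines are ignored, so this is purely a sketch:

    # Rough candidate-die count per 300 mm wafer for the die sizes discussed here.
    import math

    def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
        d = wafer_diameter_mm
        return int(math.pi * (d / 2) ** 2 / die_area_mm2
                   - math.pi * d / math.sqrt(2 * die_area_mm2))

    for area in (300, 330, 390):  # estimates mentioned in this thread, in mm^2
        print(f"{area} mm^2 -> ~{dies_per_wafer(area)} candidate dies per wafer")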
Posted on Reply
#69
3valatzy
wNotyarDEven with everyone else getting their allocation, will staying with this node net AMD more or less wafers (and savings at scale) than if they moved to a more advanced node?
Wafers which no one needs?
I mean, alongside absolute performance, it is very important for AMD to fix the idle power consumption and to fix the drivers from day 0.

I think AMD needs a halo. The halo sells all the rest. Design a monster GPU on 2nm, even if it's a paper launch for a year, but claim that performance crown.
oxrufiioxo20k per wafer with N3/N4 but over 25k for N2, and apparently they are raising prices again in 2025... My guess is N3 just isn't suitable for large GPUs.
This is quite obvious with the backported Nvidia Blackwell. Something has definitely happened that forced them to return to the earlier process.
Posted on Reply
#70
oxrufiioxo
AnotherReaderN4 isn't the same node as N3; it's a tweaked N5. N3 is now estimated to cost $18,000 per wafer, so the earlier estimates are likely inflated.

While it doesn't seem to be as small as rumoured earlier, other estimates range from 300 mm² to 330 mm².
I know it isn't the same, but from everything I could find, TSMC charges about the same for them.
Posted on Reply
#71
AnotherReader
oxrufiioxoI know it isn't the same but everything I could find TSMC charges about the same for them.
The early calculations were erroneous; the true figures are hidden by NDAs, but N3 was estimated to cost 35% more than N5 per wafer. With the latest estimate of $18,000 per N3 wafer, N5 and N4 prices should be around $13,000.
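The back-calculation behind that figure is simple enough, using only the numbers quoted in this thread:

    # If N3 carries a ~35% premium over N5-class wafers and is now pegged at
    # ~$18,000, the implied N5/N4 price follows directly.
    n3_wafer_usd = 18_000
    n3_premium = 1.35
    print(f"~${n3_wafer_usd / n3_premium:,.0f} per N5/N4-class wafer")  # ~$13,300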
Posted on Reply
#72
oxrufiioxo
AnotherReaderThe early calculations were erroneous; the true figures are hidden by NDAs, but N3 was estimated to cost 35% more than N5 per wafer. With the latest estimate of $18,000 per N3 wafer, N5 and N4 prices should be around $13,000.
From my understanding, N4 started at $13k but they jacked up its price. I agree though, it's all just guessing.
Posted on Reply
#73
3valatzy
AnotherReaderThe early calculations were erroneous; the true figures are hidden by NDAs, but N3 was estimated to cost 35% more than N5 per wafer. With the latest estimate of $18,000 per N3 wafer, N5 and N4 prices should be around $13,000.
oxrufiioxoFrom my understanding, N4 started at $13k but they jacked up its price. I agree though, it's all just guessing.
Is this price the same for everyone - Apple, AMD, Nvidia, Qualcomm, Mediatek, etc?
Maybe someone gets deep discounts, maybe someone else gets hefty price rises?
Posted on Reply
#74
oxrufiioxo
3valatzyIs this price the same for everyone - Apple, AMD, Nvidia, Qualcomm, Mediatek, etc?
Maybe someone gets deep discounts, maybe someone else gets hefty price rises?
They likely get bulk or repeat-customer discounts, but that's info we will never see. Apparently they are raising prices again this month.
Posted on Reply
#75
wolf
Better Than Native
Tomorrow"Vocal minority" are people like you asking for 16pin. Most AMD users i know dont want that.
Thanks for proving my point lol.
Posted on Reply