
ASUS TUF Gaming Radeon RX 9070 XT Comes with Three 8-pin Power Connectors

This. It's really all I can see here. AMD firing on all the wrong cylinders to keep hold of their 10% share... It's utterly ridiculous incompetence and a strategic blunder.
Why is it so hard for people to understand that a smaller node design has decreased thermal dissipation? This is not a new thing; it gets worse every gen and will get even more so in the future.
After your 6th comment in this thread talking trash, I wonder why you even care?
 
It's not the traces, but broken solder balls due to GPU sag. It's going to be a core reballing festival.
You're still wrong about this: the closer the GPU core is to the PCIe slot, the smaller the forces exerted on the solder joints. It's like a lever: the further away you go from the pivoting point (the slot), the more the PCB will flex.
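To put a rough number on it, here's a toy cantilever model (treating the slot as a fixed support and the cooler's weight as a single point load; the mass and distances are made up purely for illustration):

```python
# Toy cantilever model: treat the PCIe slot as the fixed support and the
# cooler's weight as a point load. The bending moment at the support grows
# linearly with the load's distance from the slot, so a package (and the
# heavy cooler mass around it) sitting closer to the slot sees less flex
# at its solder joints.
def bending_moment_nm(load_kg: float, distance_from_slot_m: float) -> float:
    g = 9.81  # gravitational acceleration, m/s^2
    return load_kg * g * distance_from_slot_m

cooler_kg = 2.0  # illustrative mass of a big triple-slot cooler
for d_cm in (5, 10, 20):
    m = bending_moment_nm(cooler_kg, d_cm / 100)
    print(f"load {d_cm} cm from the slot -> bending moment ~{m:.1f} N*m")
# ~1.0 N*m at 5 cm vs ~3.9 N*m at 20 cm: double the distance, double the moment.
```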
 
Basic models have the two connectors and will likely suffer from power limit issues when overclocked (since I apparently forgot to mention this),
Overclocked to where? All cards produced these days are already running at their limits. And AIB models with two 8-pins will in no way be limited by nonexistent OC potential.
same situation the RTX 3090 had. That's why NV adopted the 2x6 connector: it eliminates that problem and ensures an ample power supply regardless of model.
Nvidia mainly adopted the 12-pin (back then, not the 16-pin or 2x6) because their PCB was so small that it could not properly fit 3x 8-pin without either artificially extending the PCB or using a soldered-on add-on board on the cooler. It had nothing to do with 2x 8-pin being unable to provide 450W.
Never gonna complain about spreading the power load among more, lower gauge connectors after seeing the 12V debacle of the 40 series…
The 8-pin has never had this problem, hence load balancing is unnecessary.
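For anyone who wants the raw numbers behind this exchange, here's the spec-level budget (150W per 8-pin and 75W from the slot are the conservative PCIe spec ratings; the physical connectors can carry more, which is what the 450W point above leans on):

```python
# Spec-level board power budget: 150 W per 8-pin PCIe plug plus 75 W from the
# x16 slot. These are conservative spec ratings, not the physical limit of the
# connectors, but they are what AIBs design their default power limits around.
PIN8_W = 150
SLOT_W = 75

def board_power_budget(num_8pin: int) -> int:
    """Maximum sustained board power, per spec, for a given connector count."""
    return num_8pin * PIN8_W + SLOT_W

for n in (2, 3):
    print(f"{n}x 8-pin + slot = {board_power_budget(n)} W")
# 2x 8-pin + slot = 375 W
# 3x 8-pin + slot = 525 W  (the same 525W ceiling mentioned later in the thread)
```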
I like how most of the 9070s are bigger than the 5090 FE lol... Although people love them some BFGs...
As if 50-series AIB cards are all two-slot models?
DLSS 4 is good; it's people's perception (and I guess Nvidia's push) of how it should be used that's problematic. Sure, you might argue "who needs 240 fps", but I'd say, why not? We have monitors capable of doing that; we don't have CPUs or GPUs that can, and MFG fixes that.
It's not 240 fps. It is the perception of 240 fps smoothness, but with the latency of the original framerate.
FG always requires a latency-reducing option such as Reflex, and now Reflex 2, to be enabled.
Without it enabled you get 240 fps, but it doesn't feel like 240 because of the input delay.
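A crude way to see why (illustrative numbers only, ignoring Reflex's gains and the frame-generation overhead):

```python
# Frame generation multiplies the *displayed* framerate, but input latency
# still tracks the rendered framerate (at best one rendered frame of delay;
# real pipelines add more). Purely illustrative numbers, not measurements.
def displayed_fps(rendered_fps: float, mfg_factor: int) -> float:
    return rendered_fps * mfg_factor

def best_case_input_latency_ms(rendered_fps: float) -> float:
    return 1000.0 / rendered_fps

base = 60
for factor in (1, 2, 4):
    print(f"{factor}x FG: {displayed_fps(base, factor):.0f} fps shown, "
          f"input latency still >= {best_case_input_latency_ms(base):.1f} ms")
# 4x FG shows 240 fps but still responds like ~60 fps (or slightly worse),
# which is why Reflex / Anti-Lag is paired with it.
```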
Sure, until you realize the clock speeds here are exceeding 3 GHz out of the box and it may perhaps overclock to near 4-ish? At least 3.5.
Show me a card in the last ten years that did a +1 GHz OC on air. I think the closest might have been the 980 Ti in 2015, as it OC'ed well, but even that was not able to do a single GHz on top of its boost clocks. At least not on air or without hard mods.
They'd rather stack on 3x 8-pins than do a single 12V-2x6 lol. I wonder if pleasing a vocal minority is really the better option over space efficiency.
"Vocal minority" are people like you asking for 16pin. Most AMD users i know dont want that.
Also this is just typical AIB flexing. Reference designs will have 2x8pin.
And im sure "space efficiency" is paramount on a 3,5 slot behemoth of a card...
Well, that kills any intent I had to even consider switching to AMD for my GPU.
That some AIB models include a third connector? (one you don't even need to use, btw).
What an odd reason to write off an entire brand.
9070XT is supposed to be a 7900XT-class GPU on a much more power-efficient node.

If it consumes more than 300W then AMD have really f***ed up.
Leaks say stock is ~260W. These are all AIB models that are supposedly up to 330W.
By your logic, Nvidia also f***ed up with the 5090 going to 600W from 450W on the 4090, despite using a more power-efficient node with a slightly larger chip.
It's not the traces, but broken solder balls due to GPU sag. It's going to be a core reballing festival.
You're still wrong about this: the closer the GPU core is to the PCIe slot, the smaller the forces exerted on the solder joints. It's like a lever: the further away you go from the pivoting point (the slot), the more the PCB will flex.
Exactly. Near the PCIe connector there should be the least PCB warping.
 
They'd rather stack on 3x 8-pins than do a single 12V-2x6 lol. I wonder if pleasing a vocal minority is really the better option over space efficiency.

And does it even really need 3? 525W on a 9070XT... talk about pushing far into inefficiency for a hundred MHz or two.
I rather think they are just re-using 7900XT(X) coolers and boards. It's not like those sell. It wouldn't surprise me one bit if they max out at 300W TGP.
 
It's not 240 fps. It is the perception of 240 fps smoothness, but with the latency of the original framerate.
FG always requires a latency-reducing option such as Reflex, and now Reflex 2, to be enabled.
Without it enabled you get 240 fps, but it doesn't feel like 240 because of the input delay.
And Reflex being open is a problem, why?
 
And Reflex being open is a problem, why?
Did I say it was a problem?

I merely explained that it's not a tripling of FPS, as some people who just watch Nvidia marketing mistakenly believe.
And that Reflex or AMD's analogue always needs to be used with FG to get the best experience.
It's very good that it's open and not limited to only (select) Nvidia series or cards.

Also, I'm far more impressed with Reflex 2 than I am with DLSS 4 MFG. Lossless Scaling (the program) already proved that even older cards can generate more than one extra frame, so MFG was no surprise to me and more of a natural evolution.
 
By your logic, Nvidia also f***ed up with the 5090 going to 600W from 450W on the 4090, despite using a more power-efficient node with a slightly larger chip.
The 9070XT's Navi48 is a much smaller monolithic chip than Navi31, which is an inefficient MCM design on an older process node than even Nvidia's 40-series.
AMD are gaining the benefits of jumping forward two process nodes AND reverting to more efficient monolithic silicon AND dropping the memory bus from 320-bit to 256-bit, which means powering fewer memory controllers and GDDR6 packages.

I have no idea WTF point you're trying to make with the 4090 > 5090 example. The 5090 has 32% more cores, is higher clocked, has a 33% wider bus and 33% more memory modules to drive, and those GDDR7 modules, despite being 20% more energy efficient than previous-gen VRAM, are actually running 80% faster, so they're a net power drain compared to the 4090's GDDR6X. It seems like you understand neither my original point, nor what causes energy consumption in modern GPU designs.
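Back-of-the-envelope for the memory side alone (public spec-sheet figures, so treat them as approximate):

```python
# Total memory bandwidth = bus width (bits) * per-pin data rate (Gb/s) / 8.
# Spec-sheet numbers: 4090 = 384-bit GDDR6X @ 21 Gbps, 5090 = 512-bit GDDR7 @ 28 Gbps.
def bandwidth_gbs(bus_width_bits: int, pin_speed_gbps: float) -> float:
    return bus_width_bits * pin_speed_gbps / 8

rtx_4090 = bandwidth_gbs(384, 21.0)  # ~1008 GB/s
rtx_5090 = bandwidth_gbs(512, 28.0)  # ~1792 GB/s

print(f"4090: {rtx_4090:.0f} GB/s, 5090: {rtx_5090:.0f} GB/s "
      f"(+{(rtx_5090 / rtx_4090 - 1) * 100:.0f}%)")
# Roughly +78% total memory throughput: wider bus, more packages, faster signalling,
# all of which costs power even with GDDR7's better efficiency per bit.
```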
 
The 9070XT's Navi48 is a much smaller monolithic chip than Navi31, which is an inefficient MCM design on an older process node than even Nvidia's 40-series.
Reported as 390mm². I would not call it small. Also, comparing to N31 is not the best, as the MCDs added a lot more die area than was really necessary to that design. It's best to compare to another monolithic design.
AMD are gaining the benefits of jumping forward two process nodes
One. 5nm to 4nm. The N31 GCD was 5nm; the MCDs were 6nm.
 
Same way Nvidia does? There are also custom 50-series cards which are giant, and I'll remind everyone it was AMD who first made a point about releasing more compact reference cards with the 7000 series.
Nvidia has an SFF spec for the 3000, 4000, and now the 5000-series cards. It's up to manufacturers if they want to follow it.
 
One. 5nm to 4nm. The N31 GCD was 5nm; the MCDs were 6nm.
5nm to 4nm is one, two, or three nodes, depending on how you name, categorise, or measure the damn things.

N5/N5P > 4N > N4/N4P

Regardless of naming or semantics, AMD have a node shrink this generation, Nvidia does not - because Nvidia Blackwell is on the exact same N4 node they were on with Ada.
 
The core is too low on the PCB; in a year there's going to be a shit ton of faulty cards.

Shorter traces to the PCIe slot mean better reliability, not worse.

It's not the traces, but broken solder balls due to GPU sag. It's going to be a core reballing festival.

You're still wrong about this: the closer the GPU core is to the PCIe slot, the smaller the forces exerted on the solder joints. It's like a lever: the further away you go from the pivoting point (the slot), the more the PCB will flex.

Exactly. Near the PCIe connector there should be the least PCB warping.

This is a better location, but it will threaten the motherboard; it will dump the heat onto it.

5nm to 4nm is one, two, or three nodes, depending on how you name, categorise, or measure the damn things.

N5/N5P > 4N > N4/N4P

Regardless of naming or semantics, AMD have a node shrink this generation, Nvidia does not - because Nvidia Blackwell is on the exact same N4 node they were on with Ada.

4nm is too little, too late in 2025. TSMC and Apple have been working on 3nm for two years already, and on its minor improvement, 2nm, for several months already!
 
4nm is too little, too late in 2025. TSMC and Apple have been working on 3nm for two years already, and on its minor improvement, 2nm, for several months already!
Yes, but AMD doesn't want to spend through the roof on the filler generation that RDNA4 is. That, and they get a foundry node vastly available for mass production without that much competition from anyone.
 
Yes, but AMD doesn't want to spend through the roof on the filler generation that RDNA4 is. That, and they get a foundry node vastly available for mass production without that much competition from anyone.

Wow, all Snapdragons are on this node. I wouldn't call it "vastly available". It is a mistake by the management, which should go.
 
Wow, all Snapdragons are on this node. I wouldn't call it "vastly available". It is a mistake by the management, which should go.
Again, how much allocation can AMD secure on a more mature node? If, even with Qualcomm using the same node (and we're talking about lithography here, not the actual process in use), there's much more capacity available for AMD to get their products made, that's a win.
 
Again, how much allocation can AMD secure on a more mature node? If, even with Qualcomm using the same node (and we're talking about lithography here, not the actual process in use), there's much more capacity available for AMD to get their products made, that's a win.

Even if Nvidia also uses the same node? And has many more wafers reserved for themselves?
 
Even if Nvidia also uses the same node? And has many more wafers reserved for themselves?
Even with everyone else getting their allocation, will staying with this node net AMD more or less wafers (and savings at scale) than if they moved to a more advanced node?
 
Even with everyone else getting their allocation, will staying with this node net AMD more or less wafers (and savings at scale) than if they moved to a more advanced node?

$20k per wafer with N3/N4 but over $25k for N2, and apparently they are raising prices again in 2025... My guess is N3 just isn't suitable for large GPUs.
 
$20k per wafer with N3/N4 but over $25k for N2, and apparently they are raising prices again in 2025... My guess is N3 just isn't suitable for large GPUs.
N4 isn't the same node as N3; it's a tweaked N5. N3 is now estimated to cost $18,000 per wafer, so the earlier estimates were likely inflated.

"So, we are not done with our 5nm and 4nm [technologies]," said Kevin Zhang, Vice President of Business Development at TSMC. "From N5 to N4, we have achieved 4% density improvement optical shrink

Reported as 390mm². I would not call it small. Also, comparing to N31 is not the best, as the MCDs added a lot more die area than was really necessary to that design. It's best to compare to another monolithic design.

One. 5nm to 4nm. The N31 GCD was 5nm; the MCDs were 6nm.
While it doesn't seem to be as small as rumoured earlier, other estimates range from 300mm² to 330mm².
 
Even with everyone else getting their allocation, will staying with this node net AMD more or less wafers (and savings at scale) than if they moved to a more advanced node?

Wafers which no one needs?
I mean, alongside absolute performance, it is very important for AMD to fix the idle power consumption and to fix the drivers from day 0.

I think AMD needs a halo. The halo sells all the rest. Design a monster GPU on 2nm, even if it's a paper launch for a year, but claim that performance crown...


$20k per wafer with N3/N4 but over $25k for N2, and apparently they are raising prices again in 2025... My guess is N3 just isn't suitable for large GPUs.

This is quite obvious with the backported Nvidia Blackwell. Something definitely happened that forced them to return to the earlier process.
 
The early calculations were erroneous; the true figures are hidden by NDAs, but N3 was estimated to cost 35% more than N5 per wafer. With the latest estimate of $18,000 per N3 wafer, N5 and N4 prices should be around $13,000.

From my understanding, N4 started at $13k but they jacked up its price. I agree, though, that it's all just guessing.
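For a sense of scale, a rough silicon-cost-per-die estimate using the guesses from this thread (a ~$13k N4/N5-class wafer and the ~390mm² Navi48 figure; yield, packaging, memory and board costs are all ignored):

```python
import math

# Standard gross-dies-per-wafer approximation for a 300 mm wafer. Ignores
# defect density / yield, so the result is an optimistic upper bound.
def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

die_area_mm2 = 390       # reported Navi 48 size (from earlier in the thread)
wafer_cost_usd = 13_000  # this thread's ballpark for N4/N5 wafers

dies = gross_dies_per_wafer(die_area_mm2)
print(f"~{dies} gross dies per wafer -> ~${wafer_cost_usd / dies:.0f} of silicon per die")
# Just under 150 candidate dies, i.e. on the order of $90 of raw silicon each
# before yield losses, packaging, GDDR6, VRMs, cooler, and margins.
```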

Is this price the same for everyone - Apple, AMD, Nvidia, Qualcomm, Mediatek, etc?
Maybe someone gets deep discounts, maybe someone else gets hefty price rises?
 
Is this price the same for everyone - Apple, AMD, Nvidia, Qualcomm, Mediatek, etc?
Maybe someone gets deep discounts, maybe someone else gets hefty price rises?

They likely get bulk discounts or repeat-customer discounts, but that info we will never see. Apparently they are raising prices again this month.
 