Tuesday, December 31st 2024
AMD Radeon "RDNA 4" RX 9000 Series Will Feature Regular 6/8-Pin PCI Express Power Connectors
AMD will continue using traditional PCI Express power connectors for its upcoming Radeon RX 9000 series RDNA 4 graphics cards, according to recent information shared on the Chiphell forum. While there were some expectations that AMD would mimic NVIDIA, which mandates the newer 16-pin 12V-2×6 connector for its GeForce RTX 50 series, the latest information points to a more conventional approach to power delivery. AMD plans to release its next generation of graphics cards in the first quarter, but most technical details remain unknown. The company's choice to stick with standard power connectors follows the pattern set by its recent Radeon RX 7900 GRE, which demonstrated that conventional PCI Express connectors can adequately handle power demands up to 375 W. Standard connectors also eliminate the need for adapters, a feature AMD could highlight as an advantage. An earlier leak suggested that the Radeon RX 9070 XT can draw up to 330 W at peak load.
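For context, the 375 W figure follows from the nominal PCI Express power limits: up to 75 W from the x16 slot, 75 W per 6-pin plug, and 150 W per 8-pin plug. A quick sketch of that arithmetic (the helper function is purely illustrative):

```python
# Nominal PCI Express power limits in watts: the x16 slot supplies up to 75 W,
# a 6-pin auxiliary plug up to 75 W, and an 8-pin plug up to 150 W.
SLOT_W = 75
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}

def board_power_limit(connectors):
    """Nominal power budget for a card fed by the slot plus the listed plugs."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

# Two 8-pin plugs, as on the RX 7900 GRE reference design:
print(board_power_limit(["8-pin", "8-pin"]))  # 75 + 150 + 150 = 375 W
print(board_power_limit(["8-pin", "6-pin"]))  # 75 + 150 + 75  = 300 W
```

A 375 W budget sits comfortably above the roughly 330 W peak draw leaked for the RX 9070 XT.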
Intel reportedly cited similar reasons for using standard power connectors on its Arc "Battlemage" graphics cards, suggesting broader industry support for maintaining existing connection standards. NVIDIA's approach is different: it reportedly requires all board partners to use the 12V-2×6 connector for the RTX 50 series, removing the option of traditional PCI Express power connectors. In contrast, AMD's decision gives its manufacturing partners more flexibility in their design choices, and the MBA (Made by AMD) reference cards themselves do not adopt the new 12V-2×6 connector. Beyond the power connector details and a general release timeframe pointing to CES, AMD has revealed little about the RDNA 4 architecture's capabilities. Only the reference card's physical appearance and naming scheme appear to be finalized, leaving questions about performance unanswered; early, underwhelming performance leaks remain unreliable until final drivers and optimizations land.
Sources:
Chiphell, via HardwareLuxx
133 Comments on AMD Radeon "RDNA 4" RX 9000 Series Will Feature Regular 6/8-Pin PCI Express Power Connectors
Btw, “Team Red” used 2x 8-pin on a dual-GPU card that drew as much as 600 W and was only drivable by the best PSUs, so no, they’re a lot of things, but not scared. ;)
From the last few years' flagships, only 2080 Ti performance can be considered affordable these days.
And that card released seven years ago. 3090 Ti performance may become affordable next year if the new cards keep prices in check. That would be five years, and that's still a big "if".
4090 performance won't be affordable until 2030, going by previous examples taking 5+ years. The effect of VRAM limitations cannot be measured in average FPS alone like TPU does. No offense to W1zzard here, but the issue is more complex: it also requires frame-time analysis for every game at every setting, which is hard to read and takes forever to benchmark.
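For anyone wondering what that kind of frame-time analysis actually boils down to, here's a minimal sketch, assuming you have per-frame render times logged from a capture tool; the function and the numbers are illustrative, not real benchmark data:

```python
def frame_time_stats(frame_times_ms):
    """Average FPS and 1%-low FPS from a list of per-frame render times (ms)."""
    ordered = sorted(frame_times_ms)               # slowest frames last
    avg_fps = 1000 * len(ordered) / sum(ordered)
    worst = ordered[int(len(ordered) * 0.99):]     # slowest 1% of frames
    low_1pct_fps = 1000 * len(worst) / sum(worst)
    return round(avg_fps, 1), round(low_1pct_fps, 1)

# Hypothetical capture: mostly 16.7 ms frames plus occasional ~80 ms stutters,
# the kind of spike you see when texture data spills over the PCIe bus.
capture = [16.7] * 990 + [80.0] * 10
print(frame_time_stats(capture))  # (57.7, 12.5): average looks fine, 1% lows collapse
```

The point is that two cards can post nearly identical averages while the 1% lows tell a completely different story.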
Still, those who have done it, like Daniel Owen with the 8 GB and 16 GB 4060 Ti, found the 8 GB card having significantly worse frame times. And it wasn't just four games. In fact, in another thread I did a breakdown of TPU's own performance tests from this year. This was for 8 GB cards and I'll quote it here:

That's an 11% increase over the 7900 GRE. About average these days. If it reaches the 3090 Ti, then it will be 32%, which is way above average these days and can be considered good. Except for you, it seems, because it's AMD.

Who said it was supposed to beat the XTX? Stop setting false expectations. Current leaks suggest 7900 GRE performance. Not XT, much less XTX.

Doesn't mean it's still not happening. Did terrorist attacks stop because they were not in the news? No.
If the media gets saturated by the same news, it tends to fade into the background once the initial panic has died down. You are the perfect embodiment of the person in this meme:
Name me the last time a new card offered a 100% performance increase.
From what I remember, it was the 6800 XT over the 5700 XT at 92% according to TPU, but that's also a bit of an unfair comparison, as the 5700 XT was decidedly a midrange card, like RDNA 4 will be, and the 6800 XT was a high-end card with much higher price and specs.
Before that, I could find the 4870 over the 3870 at 119%.
And going even further back, the 8800 GTX over the 7900 GTX, but I don't have percentages, as it was that long ago.
So while 100% has happened a few times in history, it's extremely rare. These days the best we can hope for is around +40%, like 1080 Ti vs 980 Ti or 4090 vs 3090 Ti.
I would not say 4090 owners called its performance increase over the 3090 Ti "barely noticeable".
This is just you preempting whatever AMD comes up with as "barely noticeable".
By your own logic, Nvidia's performance upgrades are also "barely noticeable", as most don't even reach the rare 40% mark.
A 4080 going for 500-ish is affordable, and that's the way things will be circa '27. Or maybe, just maybe, even '26.

Couldn't care less. Current-gen NVIDIA offerings are also price/performance rubbish, with the 4070 Ti upwards being the pinnacle.

Why should I? My point implies the buyer has got an X GPU that offers 100% performance, and then, when they are up to upgrade their PC, they buy a Y GPU that offers at least 200% performance for that to be considered a real upgrade in my book. Times when the $X GPU of today doubled the performance of the $X GPU of yesterday are about 17 years old at this point, give or take.

Which doesn't matter, because no one said you must buy a new GPU every time something new is released.

True. I hate everything about the state of affairs in NVIDIA SKUs, too. However, as an effective monopolist, NVIDIA are within their rights to do so. AMD should declare a price war, invent something useful that NVIDIA cards cannot do, or do anything else that's impressive to at least save what's left of their market share. What they do, however, is release products that barely outperform similarly priced NVIDIA GPUs (no more than 20%, and not even always) in the most AMD-favouring scenarios (no upscaling, no RT, no frame generation; things that AMD GPUs of all existing generations do MUCH worse than equally priced NVIDIA SKUs and, what's even funnier, some Intel ones).
Buy a one-trick pony for 500 or a well-rounded GPU for 600? If AMD's plan was to upsell NVIDIA GPUs, they overdid it.
And why should VRAM size increase so drastically between generations anyway? Does each pixel on your screen need exponentially more data in order to be rendered?
Let's do some math for a moment:
Consider 4K (3840x2160). Now assume we're rendering a perfect scene with high details, we run 8x MSAA (8 samples per pixel), every object on-screen has 4 layers of textures, every sample interpolates on average 4 texels, and every object is unique, so every texel is unique, resulting in a whopping 128 average samples per rendered pixel (this is far more than any game would ever do). It will still total just 3037.5 MB uncompressed*. (Keep in mind I'm talking as if every piece of grass, rock, etc. is unique.) So when considering a realistic scenario with objects repeating, lots of off-screen nearby objects cached (mip levels and AF), etc., ~5 GB of textures, ~1.5 GB of meshes and ~1 GB of temporary buffers would still not fill a VRAM size of 8 GB, let alone 10 GB. Throw in 12-bit HDR, and it would still not be that bad.
*) Keep in mind that with MSAA, lots of the same texels will be sampled. And normal maps are usually much lower resolution and are very highly compressible.
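As a quick sanity check of the arithmetic above (a sketch only; the 3 bytes per uncompressed texel is my assumption, which is what reproduces the quoted figure):

```python
# Reproducing the back-of-the-envelope figure: 4K frame, 8x MSAA,
# 4 texture layers per object, 4 texels interpolated per sample.
width, height = 3840, 2160
samples_per_pixel = 8 * 4 * 4      # 8 MSAA samples * 4 layers * 4 texels = 128
bytes_per_texel = 3                # assumed: uncompressed 24-bit RGB texels

total_bytes = width * height * samples_per_pixel * bytes_per_texel
print(total_bytes / 2**20)         # 3037.5 (MiB), matching the post
```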
So the only logical conclusion is that if a game struggles with 10 GB of VRAM at 1080p, the game is either very poorly designed or the driver is buggy. And as we usually see in such comparisons, it's usually another bottleneck slowing it down. If a game is actually running out of VRAM, and the GPU starts swapping just because of that, the FPS wouldn't just drop a few percent or show slightly higher variance in frame times; it would completely collapse.
When Nvidia releases their usual refreshes of GPUs with slightly higher clocks and memory speeds, and yet they keep scaling in 4K, we can safely conclude that VRAM size isn't a bottleneck.
So whenever these besserwissers on YouTube make their clickbait videos about an outlier game which drops 20-30%, it's a bug, not a lack of VRAM.
And btw, "team green" had 2x 8-pin connectors on the RTX 30 series cards and they were tripping the overcurrent limit on power supplies. Yeah, I know it's cool to call AMD users here stupid, but some of us didn't want to deal with the risks, because Nvidia didn't allow AIBs to use a standard connector that's proven to be safe.
It amazes me that team green users want to ignore logic so hard to defend their favorite brand that they're saying things like "but it's not in the news". You do realize things happen without news coverage, right? Nvidia was likely the main company pushing for a new power connector, since they needed something to fit their weirdly shaped cards. And yes, Intel is a part of PCI-SIG, but funny enough, they haven't been using the new connector either.
We only know that the new, revised connector works; there is plenty of evidence the initial version was garbage: it didn't even lock into place with a solid enough retention clip, and you couldn't bend it to fit in a reasonably sized case. As for someone working for a PSU company saying it's fine, that is expected; I'd rather trust third-party reviewers to tell me it's fine.
So then, I popped my 4 GB 6500 XT into my PC to see it for myself, and honestly, I couldn't notice anything weird... which was... weird.
The 3080 Ti is pointless. 450+ for a five-year-old 12 GB GPU? What a deal! /s
Until AMD's or Intel's midrange soundly beats the 4080, there's little reason for it to cost 500 in '26. You're the one who brought up this ridiculous number. Now can you provide examples?
Sure, by that logic I can upgrade from an iGPU to a 4090 and get 1000%, but that's not what the average person does. Well-rounded with limited VRAM and weak RT perf? No, it does not. Frame times go haywire before anything else. That's the first indication that something is wrong. FPS drops come after that.
Keep scaling? What are you talking about? I provided TPU's own data. Average VRAM usage is 7.6 GB this year at 1080p with no DLSS/FG/RT, sometimes on low settings. This will only continue to increase.
Oh sure just blame the games. It's all a bug...
1. Powerful console SoCs for Sony and Microsoft. The business is cyclical, so revenues are down right now until the next Xbox and PS.
2. License graphics tech to smartphone companies like Samsung. Nvidia can't or won't do this.
3. Powerful laptop/SFF SoC combining both CPU and GPU IP. Strix Halo is coming and Nvidia is a long way off from creating their own Apple M# competitor.
4. New chip configurations like stacked ICs, interposers, and chiplets. Having experience with these configurations paves the way for the future, when node shrinks become impossible and monolithic chips are no longer viable. Instinct already uses chiplets.
Not everything GPU-related is an RGB desktop gaming rig product. In addition, AMD is working on cool Xilinx follow-ons, and their 3D cache chips are awesome. Instinct is also powerful, but we know Nvidia has a big head start here. Finally, if AMD and Intel push Nvidia out of the sub-$500 discrete GPU space with RDNA 4 and Battlemage, market share will go up.
The first post I see is the usual "I do not have a problem" post with
(choose from):
*) operating system XY
*) connector XY
*) product XY
(which implies that the product is totally fine, i.e. that 100% of the products are free from defects)
The graphics card manufacturers are just lazy. I know other areas where you have to write 8D reports, recall, scrap and pay fines for defective connectors.
I wrote here and somewhere else about my "defective cables" from my Enermax power supply. You may look for that topic and read it; I tried to explain it in more detail there.
Again: millions of RTX 40 series users with zero problems with the connector; a loud minority won't change the facts. And fantasies won't turn into facts. All connectors are safe if properly used, end of story.
But when faster graphics cards with the same amount of VRAM keep scaling fine, then VRAM isn't the issue. Those are the facts.
6. Open source technologies like FSR that run on anything.
7. Open source drivers that come integrated into the Linux kernel, making life on Linux a lot easier with an AMD GPU.
Who said AMD doesn't have anything on its own?
And mishandling isn't the issue; the issue is that the connector wasn't idiot-proof. The old 8-pin connector is idiot-proof, because either it's not plugged in all the way and the system won't boot, or it's plugged in and you have a running system, and the 8-pin connector didn't have any issues with melting or burning unless you bought a completely garbage PSU.

The definition of "need" being to have the new connector; too many adapters are just untrustworthy IMO. The 4080 didn't need it with a 320 W TDP.

You're welcome to go look it up; you never post proof of your claims anyway, so why should I even bother?

He isn't using the connector in a long-term PC that's up and running for any length of time; IMO a review test bench doesn't count. I want to see a reviewer actually use a card with it in a system, the way most people actually use a graphics card. You must be getting quite the workout from those goalposts, btw.
Edit: Thanks for the laugh reacts; this just confirms how reasonable and mature Nvidia diehards are. Disappointing coming from a mod, though.
There was no implication of anything there, except maybe that what you're doing is spreading FUD. Those connectors are fine as long as you make sure you connect them properly.
Okay. How about two 3090s that have been running in two workstations in my lab at work for three years now? In fairly shitty cramped Dell cases, by the way. Working off mid as hell PSUs, too. Fucking bizarrely, my workplace still stands and nothing burned down. I am sure it’s just a fluke, though, and my experience is irrelevant. As I have been reliably informed, after all: