Tuesday, December 31st 2024
AMD Radeon "RDNA 4" RX 9000 Series Will Feature Regular 6/8-Pin PCI Express Power Connectors
AMD will continue using traditional PCI Express power connectors for its upcoming Radeon RX 9000 series RDNA 4 graphics cards, according to recent information shared on the Chiphell forum. While there were some expectations that AMD would mimic NVIDIA's approach, which mandates the newer 16-pin 12V-2×6 connector for the GeForce RTX 50 series, the latest information points to a more conventional power delivery setup. AMD plans to release its next generation of graphics cards in the first quarter, but most technical details remain unknown. The company's choice to stick with standard power connectors follows the pattern set by its recent Radeon RX 7900 GRE, which demonstrated that conventional PCI Express connectors can adequately handle power demands of up to 375 W. Standard connectors also eliminate the need for adapters, a feature AMD could highlight as an advantage. An earlier leak suggested that the Radeon RX 9070 XT can draw up to 330 W at peak load.
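For context, the 375 W figure lines up with the nominal PCI Express power limits for the slot plus two 8-pin connectors. Below is a minimal sketch (not from the source report) that tallies the budget for a few common connector configurations; the per-connector wattages are the standard CEM spec values.

```python
# Nominal PCI Express power limits: the x16 slot itself,
# a 6-pin plug, and an 8-pin plug.
SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def board_power_budget(six_pins: int = 0, eight_pins: int = 0) -> int:
    """Return the nominal power budget (watts) for a card fed by the
    PCIe slot plus the given number of 6-pin and 8-pin connectors."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

# Two 8-pin plugs plus the slot cover the 375 W the RX 7900 GRE demonstrated,
# with headroom over the ~330 W peak rumored for the RX 9070 XT.
print(board_power_budget(eight_pins=2))               # 375
print(board_power_budget(six_pins=1, eight_pins=1))   # 300
```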
Intel reportedly cited similar reasons for using standard power connectors on its Arc "Battlemage" graphics cards, suggesting broader industry support for maintaining existing connection standards. NVIDIA's approach is different: it reportedly requires all board partners to use the 12V-2×6 connector for the RTX 50 series, removing the option of traditional PCI Express power connectors. In contrast, AMD's decision gives its manufacturing partners more flexibility in their design choices, and the MBA (Made by AMD) reference cards don't enforce the new 12V-2×6 power connector standard. Beyond the power connector details and a general release timeframe pointing to CES, AMD has revealed little about the RDNA 4 architecture's capabilities. Only the reference card's physical appearance and naming scheme appear to be finalized, leaving questions about performance specifications unanswered; early, underwhelming performance leaks are unreliable until final drivers and optimizations land.
Sources:
Chiphell, via HardwareLuxx
133 Comments on AMD Radeon "RDNA 4" RX 9000 Series Will Feature Regular 6/8-Pin PCI Express Power Connectors
seasonic.com/12vhpwr-cable/
Edit: Oh hey, what's the difference between this and this? I'm getting lost among all these standards.
Currently, unless you are pushing the most extreme settings at 4K resolution, 16 GB+ is nothing but comfy, that is all. Realistically, very few games can make real use of more than 16 GB, mostly modded Creation Engine games with high-resolution texture mods. Last time I was playing Fallout 76, I had some texture mods that constantly ran my 3090 into the 24,000 MB range. Not exactly optimized, regardless.
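If you want to watch VRAM allocation climb like that on NVIDIA hardware, one quick way is to poll nvidia-smi. A rough sketch, assuming the tool is on your PATH (not something the poster above actually used):

```python
import subprocess
import time

def used_vram_mb() -> int:
    """Query currently used VRAM (in MB) for the first GPU via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.splitlines()[0].strip())

# Print VRAM usage once per second; on a 24 GB card you can watch it
# creep toward 24,000 MB while a heavily modded game streams textures.
while True:
    print(f"{used_vram_mb()} MB in use")
    time.sleep(1)
```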
This is what I call a waste of effort. I'd get it if GPUs were giveaways, but it's expensive tech. Getting less than double your current performance feels like shooting yourself in the foot. Yeah, you might sell your old GPU for good money to offset it, but that's additional effort in the first place, and it's not a given. It might sit on your shelf forever until someone buys it for a price you're fine with. Who knows.
That's why I recommend that everyone who isn't getting paid for their computing power ditch the idea of upgrading every couple of years and instead buy the best GPU they can afford once their current one can't even remotely keep up with 1080p30. That way the upgrades are less frequent, which means less effort; more significant, which means more joy; and cheaper overall.
This is not your gym progress where every percent matters. One can live with an "obsolete" GPU if they don't use it for actual work.
The range is rumored to fall between just faster than the 7900 GRE and just slower than the 7900 XTX. If it's the latter, that's a good boost over the 3080. It's especially good if the power is lower and RT performance is up (if you care about that). Pricing well under $500 would just be the cherry on top. And if you are building an all-white rig like I am, more GPUs are coming in white. Finally, the 3080 comes with 10 GB, while the 9070 XT is rumored to come with 16 GB.
Building computers can be a hobby with enjoyment just from carrying out the upgrade.
Join the dark side.
I used to be in the same situation as you. Finding Copilot randomly installed on my PC one morning was the last straw.
Just skip me the DLSS drama and the blurring, or RT's low FPS that results from it; what else does Nvidia offer? Ah yes, DLAA.
A bad (joke) 16-pin connector? Ancient software or a buggy/burdensome newer app? Less and slower VRAM? Ancient shader caching that's better turned off in the driver so it doesn't waste storage space/lifespan? Needing a last-generation card to get last-generation DLSS for better blurring, and needing the game developer to implement it, of course (sorry if the game is old, but hey, you can use the DLSS2FSR mod :laugh:)? Frame Generation locked to DLSS so you can't use them individually? Nvidia Experience :D ? Great new prices for each new generation (the more you buy, the more you save!!!)?
Yeah, I know, Radeon 7000s are space heaters and FSR blurring is the worst; anything else? But rumors say the new 9000 cards will draw fewer watts and the new FSR will be DLSS-like with less blur, so it should be good (for those who use it). So you'd need something new to hold against them.
_______
And before you say something, I have an RTX 4080 Super and an RX 7900 XTX.
It's yet another standard we didn't need and nobody asked for; that should have been the end of the discussion. Don't forget it's generally more valuable to have a lasting standard than the "optimal" one. Can we please cut it with this nonsense?
Each architecture utilizes VRAM differently, so comparing VRAM size across vendors is a fool's errand. And large VRAM isn't "future proofing", not unless you want to look at pretty slide shows. And yet most of them buy Nvidia cards anyway when the dust settles… There are many uses for large VRAM outside gaming, but in gaming it's mostly a gimmick. Graphics cards don't have enough bandwidth to utilize it all in a single frame anyway.
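To put the single-frame bandwidth point in numbers: dividing memory bandwidth by frame rate gives an upper bound on how much VRAM can even be read once per frame. A back-of-the-envelope sketch; the bandwidth figures are illustrative, not tied to any card in the article:

```python
def vram_touchable_per_frame(bandwidth_gbps: float, fps: float) -> float:
    """Upper bound on how many GB of VRAM can be read once per frame,
    given memory bandwidth in GB/s and a target frame rate."""
    return bandwidth_gbps / fps

# ~960 GB/s (a wide-bus GDDR6X-class card) at 60 fps:
print(vram_touchable_per_frame(960, 60))   # 16.0 GB per frame, best case
# ~576 GB/s (a 256-bit 18 Gbps GDDR6-class card) at 60 fps:
print(vram_touchable_per_frame(576, 60))   # 9.6 GB per frame, best case
```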
In regard to (unofficial) texture mods:
1) Results for most game engines will be undesirable, as more advanced engines use calibrated LoD algorithms, textures, shaders, and sometimes dynamic loading of assets. LoD algorithms mix multiple mip levels of textures, so if you just replace (some) textures with higher resolution ones without recalibrating everything, the result is generally very wasteful: in the best case you're looking at very high VRAM allocation for very little visual "improvement" (most of it wasted across mip levels), and in many cases there can be flickering, glitching, or loading issues/popping where assets are loaded dynamically (see the rough cost sketch after this list).
2) What is the added benefit?
Unless someone has access to higher quality raw material or creates new, better assets, we're not really adding truly better textures. In most cases it's just upscaled textures with some noise and filtering added, so there isn't any more information in the texture, just an illusion of higher resolution. It's just as pointless as these "AI" upscaling algorithms: no real information is added. (Most games can, and some already do, achieve the same result with a "detail texture" or noise added in shaders at very little cost.)
Or, an analogy: it's about as smart as turning up the sharpness on your TV and believing you get a better picture. :rolleyes:
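To illustrate the mip-level point from item 1: a full mip chain adds roughly one third on top of the base level, so swapping a 2K texture for an upscaled 4K one quadruples its footprint even though most of that data only ever appears in distant, lower-detail mips. A rough sketch, assuming uncompressed RGBA8 texels for simplicity (real games use block compression, so absolute numbers are smaller, but the ratio holds):

```python
def texture_vram_mb(width: int, height: int, bytes_per_texel: int = 4,
                    mip_chain: bool = True) -> float:
    """Approximate VRAM cost of one texture in MB.

    A full mip chain (each level half the previous in both dimensions)
    adds roughly 1/3 on top of the base level: 1 + 1/4 + 1/16 + ... ~= 4/3.
    """
    base = width * height * bytes_per_texel
    total = base * 4 / 3 if mip_chain else base
    return total / (1024 * 1024)

# A 2K base texture vs. an upscaled 4K replacement:
print(round(texture_vram_mb(2048, 2048), 1))  # ~21.3 MB
print(round(texture_vram_mb(4096, 4096), 1))  # ~85.3 MB (4x the cost, little added detail)
```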
As I have stated many times in the past, 600W and $2000+ for a GPU that might increase performance 20-30%...hard, hard, hard pass.
Every revision means money lost for already made inventory.
If it ain't broke.
The manufacturers won't bother to make all these revisions.