Tuesday, December 31st 2024

AMD Radeon "RDNA 4" RX 9000 Series Will Feature Regular 6/8-Pin PCI Express Power Connectors

AMD will continue using traditional PCI Express power connectors for its upcoming Radeon RX 9000 series RDNA 4 graphics cards, according to recent information shared on the Chiphell forum. While there were some expectations that AMD would mimic NVIDIA's approach, which requires the newer 16-pin 12V-2×6 connector for its GeForce RTX 50 series, the latest information points to a more traditional approach to power delivery. AMD plans to release its next generation of graphics cards in the first quarter, but most technical details remain unknown. The company's choice to stick with standard power connectors follows the pattern set by its recent Radeon RX 7900 GRE, which demonstrated that conventional PCI Express connectors can adequately handle power demands of up to 375 W. The standard connectors also eliminate the need for adapters, a feature AMD could highlight as an advantage. An earlier leak suggested that the Radeon RX 9070 XT can draw up to 330 W at peak load.
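For context, that 375 W ceiling is simply the standard PCI Express power budget for a card with two 8-pin inputs, as on the RX 7900 GRE reference design: up to 75 W from the PCIe slot plus 2 × 150 W from the 8-pin connectors comes to 375 W (a 6-pin connector would contribute 75 W instead).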

Intel reportedly cited similar reasons for using standard power connectors on its Arc "Battlemage" graphics cards, suggesting broader industry support for maintaining existing connection standards. NVIDIA is taking a different approach: it reportedly requires all board partners to use the 12V-2×6 connector for the RTX 50 series, removing the option of traditional PCI Express power connectors. In contrast, AMD's decision gives its manufacturing partners more flexibility in their design choices, and the MBA (Made by AMD) reference cards themselves do not adopt the new 12V-2×6 connector. Beyond the power connector details and a general release timeframe pointing to CES, AMD has revealed little about the RDNA 4 architecture's capabilities. Only the reference card's physical appearance and naming scheme appear to be finalized, leaving questions about performance specifications unanswered; early, underwhelming performance leaks remain unreliable until final drivers and optimizations land.
Sources: Chiphell, via HardwareLuxx

133 Comments on AMD Radeon "RDNA 4" RX 9000 Series Will Feature Regular 6/8-Pin PCI Express Power Connectors

#76
AcE
freeagentI like the new connector, works good for me :)
Reasonable people will be reasonable. :)))) Fact is, millions use the connector just fine; forums like these are the loud minority, and that's it. And those criticising the connector are mostly people who have never used it and are just FUD-ing.
freeagentTeam Red is just soft and scared :D
This is a sales tactic, my friend, nothing else. As you can see, a lot of people like to use the old connectors, and with that card the new connector is simply not needed.

btw, “Team Red” used 2x 8-pin on a dual-GPU card that drew as much as 600 W and was only drivable by the best PSUs, so no, they're a lot of things, but not scared. ;)
#77
Apocalypsee
I'm fine with an 8-pin or a couple of those on a GPU. I really hope the pricing of this card is reasonable. I don't want to buy another 3080; I know it's a damn good GPU, but 10 GB isn't much by the look of things, even at 1080p, for the foreseeable future.
#78
Tomorrow
Dirt ChipAs long as one 8pin is enough all is good.
If you need 2 or 3 of those, better go with the new standard Imo.
On most RDNA4 cards it will likely be enough. Even two is ok.
Macro DeviceAin't gonna be much faster than that, even by today's ridiculous standards of +5% being a whopping upgrade. I'd rather skip this generation. 9070 XT is unlikely to be significantly faster than 3090 (3090 Ti if we're feeling really ambitious) and your 3080 isn't really far behind. More sense in waiting for 4080 series or better GPUs to become affordable.
Laugh of the day. Waiting for 4080 performance to be affordable, from Nvidia?
From the last few years' flagships, only 2080 Ti performance can be considered affordable these days.
And that card released over six years ago. 3090 Ti performance may become affordable next year if the new cards keep prices in check. That would be five years, and that's still a big "if".
4090 performance won't be affordable until 2030, going by previous examples taking 5+ years.
Macro DeviceThis matters in like four games and in five more if we talk absurd use cases (UHD+ texture packs and/or settings so high it's <15 FPS anyway) and 3080 has the edge to stay solid in every other title. Especially the ones where DLSS is the only upscaler that works correctly. I would've agreed if that was a comparison with an 8 GB GPU but 10 GB is nowhere near obsolete, also 320-bit bus really helps a lot.
The effect of VRAM limitations cannot be measured in average FPS alone like TPU does. No offense to W1zzard here, but the issue is more complex, as it also requires frame time analysis for every game at every setting, and that is hard to read and takes forever to benchmark.
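(Purely as an illustration of what that frame-time analysis looks like, here is a minimal sketch; the capture values and the helper function are hypothetical, not TPU's or anyone else's actual methodology.)

```python
# Illustrative only: how frame-time percentiles and "1% lows" are commonly
# derived from a capture (e.g. a PresentMon/CapFrameX log). Values are made up.
frame_times_ms = [16.7, 16.9, 17.1, 16.8, 45.2, 17.0, 16.6, 52.8, 16.9, 17.2]

def frame_time_stats(samples_ms):
    ordered = sorted(samples_ms)
    avg_fps = 1000 / (sum(samples_ms) / len(samples_ms))
    # 99th-percentile frame time: 99% of frames finished at least this fast
    p99_index = min(len(ordered) - 1, round(0.99 * (len(ordered) - 1)))
    p99_ms = ordered[p99_index]
    # "1% low" FPS: average frame rate of the slowest 1% of frames (at least one)
    worst = ordered[-max(1, len(ordered) // 100):]
    low_1pct_fps = 1000 / (sum(worst) / len(worst))
    return avg_fps, p99_ms, low_1pct_fps

avg, p99, low1 = frame_time_stats(frame_times_ms)
print(f"avg {avg:.0f} FPS, p99 frame time {p99:.1f} ms, 1% low {low1:.0f} FPS")
# A VRAM-starved card can keep a decent average while the p99 frame time and
# 1% lows collapse -- which is exactly what average-FPS charts hide.
```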

Still, those who have done it, like Daniel Owen with the 8GB and 16GB 4060 Ti, found the 8GB card having significantly worse frametimes. And it wasn't just four games. In fact, in another thread I did a breakdown of TPU's own performance tests from this year. This was for 8GB cards and I'll quote it here:
TomorrowI looked at all the performance benchmark reviews he has posted for this year's games.
11 games in total. At 1080p max settings (though not in all games, and without RT or FG) the memory usage averages 7614 MB.
7 games stay below 8GB at those settings. 4 games go over it.
6 games are run at the lowest settings, 1080p, no RT/FG, and despite that half of them (3) still go over 8GB even at these low settings.

Anyone looking at these numbers and seeing how close the average is to the 8GB limit should really think twice before buying an 8GB card today.
Next year likely more than half of the tested games will surpass 8GB even at 1080p low with no RT/FG, and you have to remember that RT and FG both increase VRAM usage even further. To say nothing of frametimes on those 8GB cards: even if the entire buffer is not used up, the frametimes already take a nosedive, or in some cases textures simply refuse to load.

Links:
www.techpowerup.com/review/horizon-forbidden-west-performance-benchmark/5.html
www.techpowerup.com/review/homeworld-3-benchmark/5.html
www.techpowerup.com/review/ghost-of-tsushima-benchmark/5.html
www.techpowerup.com/review/senuas-saga-hellblade-2-benchmark/5.html
www.techpowerup.com/review/black-myth-wukong-fps-performance-benchmark/5.html
www.techpowerup.com/review/star-wars-outlaws-fps-performance-benchmark/5.html
www.techpowerup.com/review/warhammer-40k-space-marine-2-fps-performance-benchmark/5.html
www.techpowerup.com/review/final-fantasy-xvi-fps-performance-benchmark/5.html
www.techpowerup.com/review/silent-hill-2-fps-performance-benchmark/5.html
www.techpowerup.com/review/dragon-age-the-veilguard-fps-performance-benchmark/5.html
www.techpowerup.com/review/stalker-2-fps-performance-benchmark/5.html
Macro DeviceThe leaks we got suggest 9070 XT just barely outperforming 7900 GRE which is roughly 3090/3090 Ti area. This is faster than 3080, sure, but it's not a lot of difference.
That's an 11% increase from the 3080 to the 7900 GRE. About average these days. If it reaches the 3090 Ti, it will be 32%, which is way above average and can be considered good. Except for you, it seems, because it's AMD.
Knight47This thing will be barely any faster than the 4 years old 6900XT in raster, let alone the 7900XTX that it supposed to beat for half the price.
Who said it was supposed to beat the XTX? Stop setting false expectations. Current leaks suggest 7900 GRE performance, not the XT, much less the XTX.
TheDeeGeeOld news, you seeing any reports the past months? No.
That doesn't mean it's not still happening. Did terrorist attacks stop because they were not in the news? No.
If the media gets saturated with the same news, it tends to fade into the background once the initial panic has died down.
DaworaThe new AMD GPU is still slow and a bad upgrade.
Buying a new GPU only to get more VRAM, without getting more performance, is just stupid.

Better to go 5070 Ti to get a performance boost.

It's only AMD fans who don't know what
VRAM allocation and VRAM usage mean, right?

People who never, ever buy Nvidia write this trash and BS about VRAM. That's how it goes atm in every tech forum.
Butthurt fans can't take it when Nvidia is top dog here and topics are full of VRAM/price BS from AMD fans.

Better to sound rational than talking BS about prices and VRAM 24/7 like some butthurt AMD fans.
You are the perfect embodiment of the person in this meme:
Macro DeviceWhich doesn't contradict with what I just said. It's barely noticeable. Significant starts from 100%.
Name me the last time a new card offered a 100% performance increase.

From what I remember it was the 6800 XT over the 5700 XT at 92% according to TPU, but that's also a bit of an unfair comparison, as the 5700 XT was decidedly a midrange card like RDNA4 will be, and the 6800 XT was a high-end card with a much higher price and specs.

Before that I could find the 4870 over the 3870 at 119%.
And going even further back, the 8800 GTX over the 7900 GTX, but I don't have percentages as it was that long ago.

So while 100% has happened a few times in history, it's extremely rare. These days the best we can hope for is around +40%, like the 1080 Ti vs the 980 Ti or the 4090 vs the 3090 Ti.

I would not say 4090 owners called its performance increase over the 3090 Ti "barely noticeable".
This is just you preempting whatever AMD comes up with as "barely noticeable".
By your own logic Nvidia's performance upgrades are also "barely noticeable", as most don't even reach the rare 40% mark.
#79
Macro Device
TomorrowFrom the last few years' flagships, only 2080 Ti performance can be considered affordable these days.
If you're broke, then yes. A 3080 Ti goes for 450ish on the aftermarket. It's not ideal, but it's very reasonable for a GPU that can handle anything at 1080p and, if we don't go for the heaviest RT, at 1440p. Or 4K if we don't care about RT at all.
A 4080 going for 500ish is affordable, and that's the way things will be circa '27. Or, maybe, just maybe, even '26.
Tomorrowbecause it's AMD.
Couldn't care less. Current gen NVIDIA offerings are also price/performance rubbish, with 4070 Ti upwards being the pinnacle.
TomorrowName me last time a new card offered 100% performance increase.
Why should I? My point implies the buyer has got an X GPU that offers 100% performance and then, when they are ready to upgrade their PC, they buy a Y GPU that offers at least 200% performance for that to be considered a real upgrade in my book. The times when the $X GPU of today doubled the performance of the $X GPU of yesterday are about 17 years behind us at this point, give or take. Which doesn't matter, because no one said you must buy a new GPU every time something new is released.
TomorrowBy your own logic Nvidia's performance upgrades are also "barely noticeable", as most don't even reach the rare 40% mark.
True. I hate everything about the state of affairs in NVIDIA SKUs, too. However, as an effective monopolist, NVIDIA are within their rights to do so. AMD should declare a price war, invent something useful that NVIDIA cards cannot do, or do anything else that's impressive to at least save what they have left of their market share. What they do, however, is release products that barely outperform similarly priced NVIDIA GPUs (by no more than 20%, and not always even that) in the most AMD-favouring scenarios (no upscaling, no RT, no frame generation; things that AMD GPUs of all existing generations do MUCH worse than equally priced NVIDIA SKUs and, what's even funnier, some Intel ones).

Buy a one-trick pony for 500 or a well-rounded GPU for 600? If AMD's plan was to upsell NVIDIA GPUs, they overdid it.
#80
efikkan
ApocalypseeI don't want to buy another 3080; I know it's a damn good GPU, but 10 GB isn't much by the look of things, even at 1080p, for the foreseeable future.
Why do you people have these irrational fears? Where do you get your figures to determine a very good GPU is bad just because of a number you know nothing about?

And why should VRAM size increase so drastically between generations anyway? Does each pixel on your screen need exponentially more data in order to be rendered?
Let's do some math for a moment:
Consider 4K (3840x2160). Now assume we're rendering a perfect scene with high details, we run 8xMSAA (8 samples per pixel), every object on-screen has 4 layers of textures, every sample interpolates on average 4 texels, and every object is unique, so every texel is unique, resulting in a whopping 128 average samples per rendered pixel (this is far more than any game would ever do). It will still total just 3037.5 MB uncompressed*. (Keep in mind I'm talking as if every piece of grass, rock, etc. is unique.) So when considering a realistic scenario with objects repeating, lots of off-screen nearby objects cached (mip levels and AF), etc., ~5 GB of textures, ~1.5 GB of meshes and ~1 GB of temporary buffers would still not fill a VRAM size of 8 GB, let alone 10 GB. Throw in 12-bit HDR, and it would still not be that bad.
*) Keep in mind that with MSAA, lots of the same texels will be sampled. And normal maps are usually much lower resolution and are very highly compressible.

So the only logical conclusion is that if a game struggles with 10 GB VRAM at 1080p, the game is either very poorly designed or the driver is buggy. And as we usually see in such comparisons, it's usually another bottleneck slowing it down.
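(As a quick sanity check of that worst-case figure, here is a back-of-the-envelope reproduction; the 3-bytes-per-texel value is an assumption chosen because it makes the arithmetic land on the quoted number, not something stated in the post.)

```python
# Back-of-the-envelope check of the worst-case sampling math above.
# Assumed (not stated outright in the post): 3 bytes per uncompressed texel.
width, height = 3840, 2160        # 4K render target
msaa_samples = 8                  # 8x MSAA
texture_layers = 4                # texture layers per on-screen object
texels_per_sample = 4             # texels interpolated per sample
bytes_per_texel = 3               # assumed 24-bit uncompressed texel

samples_per_pixel = msaa_samples * texture_layers * texels_per_sample  # 128
total_bytes = width * height * samples_per_pixel * bytes_per_texel
print(total_bytes / 2**20)        # 3037.5 (MiB), matching the figure above
```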
TomorrowThe effect of VRAM limitations cannot be measured in average FPS alone like TPU does. No offense to W1zzard here, but the issue is more complex, as it also requires frame time analysis for every game at every setting, and that is hard to read and takes forever to benchmark.
If a game is actually running out of VRAM and the GPU starts swapping just because of that, the FPS wouldn't just drop a few percent or have slightly higher variance in frame times; it would completely collapse.
When Nvidia releases their usual refreshes of GPUs with slightly higher clocks and memory speeds, and yet they keep scaling in 4K, we can safely conclude that VRAM size isn't a bottleneck.
So whenever these besserwissers on YouTube make their click-bait videos about an outlier game which drops 20-30%, it's a bug, not a lack of VRAM.
#81
ThomasK
tldr then: if it ain't broke, don't fix it.
#82
kapone32
What confuses me about the VRAM argument is that it was one of the main positives of Intel's new card vs the 4060. Now all of a sudden VRAM does not matter. What happened to all the pain communicated when Hogwarts Legacy launched?
#83
Hecate91
AcEReasonable people will be reasonable. :)))) Fact is millions use the connector just fine, forums like these is like the loud minority and that’s it. And then those criticising the connector are mostly people who never used it and are just fud-ing.
People not wanting to use it are reasonable too. And many didn't want to use it because the first version was a fire hazard; it got updated to 12V-2×6 for a reason.
AcEThis is a sale tactic my friend, nothing else. As you see a lot of people like to use the old connectors and with that card the new connector is simply not needed.
Not needing a new PSU is a good thing, and the new power connector wasn't needed for any card except the 4090.
AcEbtw “Team Red” used 2x 8 Pin on a dual gpu card that used as much as 600W and was only drivable by the best PSUs, so no they’re a lot of things but not scared. ;)
How many years ago was that?
And btw, "team green" had 2x 8 pin connectors on the RTX 30 series cards and it was tripping the overcurrent limit on power supplies. Yeah I know its cool to call AMD users here stupid, but some of us didn't want to deal with the risks because Nvidia didn't allow AIB's to use a standard connector thats proven to be safe.
#84
AusWolf
DavenOr 100 to 150 fps... :)
Do you notice that? Personally, I don't. Anything above 50-80 FPS (depending on the game) is invisible to me, especially with VRR enabled.
DavenI regretfully admit that I have never used Linux. After my last experience trying to install Win 11 2H24 on my laptop, I am so done with Windows. Now I just need to find the time to make the switch.
No problem. Just pop over there. There'll be plenty of people (myself included) to help. :)
#85
JustBenching
Hecate91People not wanting to use it are reasonable too. And many didn't want to use it because the first version was a fire hazard,
Since you just called yourself reasonable, can you give me some examples of the 12vh causing fires?
#86
Onasi
DavenUnfortunately, Nvidia created this 'standard' because they are planning 600W GPUs.
NVidia didn’t really create any standards, they are not ones who do so. The Ampere connector was proprietary bullshit. The 12VHPWR and the new 12v6 are PCI-SIG created specs with, yeah, input from NVidia and Dell, among others, and were incorporated by Intel into the new ATX spec. The whole point of the whole exercise was minimizing footprint and creating something that is more suited for higher power consumption GPUs. The whole hysteria is baffling to me. It works. We know it works. We know the revised version is impossible to “melt” even on fucking purpose. It was tested. People who know more about power supplies and connectors, like John Gerow, have confirmed so. It’s fine.
AusWolfDo you notice that? Personally, I don't. Anything above 50-80 FPS (depending on the game) is invisible to me, especially with VRR enabled.
I prefer a tad higher, but it depends on the genre and, realistically, anything above 120 for single-player games and 240 for MP ones is much of a muchness for most players.
#87
JustBenching
OnasiNVidia didn’t really create any standards, they are not ones who do so. The Ampere connector was proprietary bullshit. The 12VHPWR and the new 12v6 are PCI-SIG created specs with, yeah, input from NVidia and Dell, among others, and were incorporated by Intel into the new ATX spec. The whole point of the whole exercise was minimizing footprint and creating something that is more suited for higher power consumption GPUs. The whole hysteria is baffling to me. It works. We know it works. We know the revised version is impossible to “melt” even on fucking purpose. It was tested. People who know more about power supplies and connectors, like John Gerow, have confirmed so. It’s fine.
No, it's much more likely that nvidia is shipping cards that will ALL eventually melt (according to the above post) and has to pay the cost of RMA for all of them. Makes sense man :roll:
#88
Hecate91
kapone32What confuses me about the VRAM argument is that it was one of the main positives about Intel's new card vs the 4060. Now all of a sudden VRAM does not matter. What happened to all the pain communicated when Hogwarts Launched?
It seems like people quickly forgot about the stuttering, poor frame timing and the awful looking textures required to run Hogwarts Legacy on cards with 8GB VRAM. There is more to game testing than FPS, and games keep progressing yet people keep insisting 8GB is fine, so game devs have to compensate for the majority of hardware configs.
JustBenchingSince you just called yourself reasonable, can you give me some examples of the 12vh causing fires?
There are plenty of examples of the connector melting, and melting means something is getting hot enough to short out or catch fire. Someone else posted vids from Northridge Fix; some of those examples have melted power connectors. I've also seen cards with connectors melted off of the PCB. It's dangerous when the connector gets so hot it melts the solder, and there is no safety mechanism to shut the system down before it gets that hot.
It amazes me that team green users want to ignore logic so hard to defend their favorite brand that they're saying things like "but it's not in the news". You do realize things happen without news coverage, right?
OnasiNVidia didn’t really create any standards, they are not ones who do so. The Ampere connector was proprietary bullshit. The 12VHPWR and the new 12v6 are PCI-SIG created specs with, yeah, input from NVidia and Dell, among others, and were incorporated by Intel into the new ATX spec. The whole point of the whole exercise was minimizing footprint and creating something that is more suited for higher power consumption GPUs. The whole hysteria is baffling to me. It works. We know it works. We know the revised version is impossible to “melt” even on fucking purpose. It was tested. People who know more about power supplies and connectors, like John Gerow, have confirmed so. It’s fine.
Nvidia was likely the main company pushing for a new power connector, since they needed something to fit their weirdly shaped cards. And yes, Intel is a part of PCI-SIG, but funnily enough they haven't been using the new connector either.
We only know that the new revised connector works; there is plenty of evidence that the initial version was garbage, didn't even lock into place with a solid enough retention clip, and couldn't be bent to fit in a reasonably sized case. As for someone working for a PSU company saying it's fine, that is expected; I'd rather trust third-party reviewers to tell me it's fine.
#89
AusWolf
Hecate91It seems like people quickly forgot about the stuttering, poor frame timing and the awful looking textures required to run Hogwarts Legacy on cards with 8GB VRAM. There is more to game testing than FPS, and games keep progressing yet people keep insisting 8GB is fine, so game devs have to compensate for the majority of hardware configs.
Hogwarts Legacy is a funny game. I remember Hardware Unboxed testing it with a 4 GB and an 8 GB 6500 XT, with the 4 GB showing strange artefacts, texture pop-ins and other oddities, while the 8 GB one didn't.

So then, I popped my 4 GB 6500 XT into my PC to see it for myself, and honestly, I couldn't notice anything weird... which was... weird.
#90
Onasi
Hecate91I'd rather trust third-party reviewers to tell me it's fine.
Okay. W1zz says it's fine. He had no issues with any version of the connector, across dozens of cards and thousands of replugs. Any new goalposts you would like to choose?
#91
Tomorrow
Macro DeviceIf you're broke then yes. 3080 Ti goes for 450ish on aftermarket. It's not ideal but it's very reasonable for a GPU that can handle anything at 1080p and, if we don't go for heaviest RT, at 1440p. Or 4K if we don't care for RT at all.
4080 going for 500ish is affordable and is the way the things will be cca '27. Or, maybe, just maybe, even '26.
So now everyone who won't buy a GPU over 450 is broke?
The 3080 Ti is pointless. 450+ for a five-year-old 12GB GPU? What a deal! /s
Until AMD's or Intel's midrange soundly beats the 4080, there's little reason for it to cost 500 in '26.
Macro DeviceWhy should I? My point implies the buyer has got an X GPU that offers 100% performance and then, when they are up to upgrade their PC, they buy a Y GPU that offers at least 200% performance for that to be considered a real upgrade in my book. Times when $X GPU of today doubled the performance of $X GPU of yesterday are about 17 years old at this point, give or take. Which doesn't matter because no one said you must buy a new GPU every time something new is released.
You're the one who brought up this ridiculous number. Now can you provide examples?
Sure, by that logic I can upgrade from an iGPU to a 4090 and get 1000%, but that's not what the average person does.
Macro DeviceBuy a one-trick pony for 500 or a well-rounded GPU for 600? If AMD's plan was to upsell NVIDIA GPUs they overdid on it.
Well rounded with limited VRAM and weak RT perf?
efikkanIf a game is actually running out of VRAM and the GPU starts swapping just because of that, the FPS wouldn't just drop a few percent or have slightly higher variance in frame times; it would completely collapse.
When Nvidia releases their usual refreshes of GPUs with slightly higher clocks and memory speeds, and yet they keep scaling in 4K, we can safely conclude that VRAM size isn't a bottleneck.
So whenever these besserwissers on YouTube make their click-bait videos about an outlier game which drops 20-30%, it's a bug, not a lack of VRAM.
No it does not. Frametimes go haywire before anything else. That's the first indication that something is wrong. FPS drops come after that.
Keep scaling? What are you talking about? I provided TPU's own data. Average VRAM usage is 7.6 GB this year at 1080p with no DLSS/FG/RT, sometimes on low settings. This will only continue to increase.
Oh sure, just blame the games. It's all a bug...
#92
Daven
Macro Device...invent something useful that NVIDIA cards cannot do, or do anything else that's impressive...
AMD has done four things along these lines already:

1. Powerful console SoC for Sony and Microsoft. The business is cyclic so revenues are down right now until the next Xbox and PS.
2. License graphics tech to smartphone companies like Samsung. Nvidia can't or won't do this.
3. Powerful laptop/SFF SoC combining both CPU and GPU IP. Strix Halo is coming and Nvidia is a long way off from creating their own Apple M# competitor.
4. New chip configuration like stacked ICs, interposers and chiplets. Having experience with these two configs paves the way to the future when node shrinks become impossible and monolithic chips are no longer viable. Instinct already uses chiplets.

Not everything GPU related is an RGB desktop gaming rig product. In addition, AMD is working on cool Xilinx follow-ons and their 3D cache chips are awesome. Instinct is also powerful but we know Nvidia has a big head start here. Finally, if AMD and Intel push Nvidia out of the sub $500 discrete GPU space with RDNA4 and Battlemage, market shares will go up.
#93
_roman_
People should understand that there are manufacturing processes, and some stuff is problematic.

The first post I see is the usual "I do not have a problem" post with
choose from:
*) operating system XY
*) connector XY
*) product XY
(which implies that the product is totally fine - 100% of the products are free from defects)

The graphics card manufacturers are just lazy. I know other areas where you have to write 8D reports, recall, scrap and pay fines for defective connectors.

I wrote about my "defective cables" from my Enermax power supply here and somewhere else. You may look for that topic and read it. I tried to explain the topic in more detail there.
#94
AcE
Hecate91People not wanting to use it are reasonable too. And many didn't want to use it because the first version was a fire hazard, it got updated to 12v 2x6 for a reason.
Being non-pragmatic isn't reasonable, sorry. And it was never a “fire hazard” unless you mishandled the connector. A little “burning” isn't a fire, btw. The newer connector is idiot-proof, while the other was not; the difference is that the older connector needed you to make sure the cable was properly inserted, while the newer one won't work if you are unable to properly install a cable. :) And even a revision of the current connector already made this the case: if you have a 4090 from 2023 or 2024, the likelihood is high that you have one of the idiot-proof ones.
Hecate91Not needing a new PSU is a good thing, and the new power connector wasn't needed for any card except the 4090.
Depends on the definition of need. Imprecise: the 4080 also needed it because of how Nvidia's reference card was constructed, so you're wrong.
Hecate91And btw, "team green" had 2x 8 pin connectors on the RTX 30 series cards and it was tripping the overcurrent limit on power supplies.
Source and proof for that? (X) None, you're making it up. :)

Again: millions of RTX 40 series users have had zero problems with the connector; a loud minority won't change the facts. And fantasies won't turn into facts. All connectors are safe if properly used, end of story.
#95
efikkan
TomorrowNo it does not. Frametimes go haywire before anything else. That's the first indication that something is wrong. FPS drops come after that.
Keep scaling? What are you talking about? I provided TPU's own data. Average VRAM usage is 7.6 GB this year at 1080p with no DLSS/FG/RT, sometimes on low settings. This will only continue to increase.
Oh sure, just blame the games. It's all a bug...
Graphics cards can successfully swap data that isn't used, but if they start to swap on a frame-by-frame basis, it goes from totally fine to totally unplayable very quickly; there isn't a large middle ground with a lot of stutter but unaffected averages. By the time it truly starts swapping, the frame rate will drop sharply, and any reviewer will notice this.
But when faster graphics cards with the same amount of VRAM keep scaling fine, then VRAM isn't the issue; those are the facts.
#96
AusWolf
DavenAMD has done four things along these lines already:

1. Powerful console SoC for Sony and Microsoft. The business is cyclic so revenues are down right now until the next Xbox and PS.
2. License graphics tech to smartphone companies like Samsung. Nvidia can't or won't do this.
3. Powerful laptop/SFF SoC combining both CPU and GPU IP. Strix Halo is coming and Nvidia is a long way off from creating their own Apple M# competitor.
4. New chip configuration like stacked ICs and chiplets. Having experience with these two configs paves the way to the future when node shrinks become impossible and monolithic chips are no longer viable.
5. Powerful APUs for handheld consoles and SFF devices,
6. Open source technologies like FSR that run on anything,
7. Open source drivers that come integrated into the Linux kernel, making life on Linux a lot easier with an AMD GPU.

Who said AMD doesn't have anything on its own?
#97
Daven
AusWolf5. Powerful APUs for handheld consoles and SFF devices,
6. Open source technologies like FSR that run on anything,
7. Open source drivers that come integrated into the Linux kernel, making life on Linux a lot easier with an AMD GPU.

Who said AMD doesn't have anything on its own?
I was just about to come back and add handhelds. The only handheld with Nvidia is the Switch and that SoC is old, old, old. Even the upcoming Nvidia SoC in the Switch 2 is over two years old.
#98
Hecate91
AcEBeing non pragmatic isn’t reasonable, sorry. And it was never a fire hazard unless you mishandled the connector. A little “burning” isn’t a fire btw. The newer connector is idiot proof, while the other was not idiot proof, that is the difference that the older connector needed you to make sure the cable is properly inserted while the newer one won’t work if you are unable to properly install a cable. :)
No, it's called being practical; if it isn't broken, don't fix it, as others in this thread have said. Oh yeah, just a little burning, nothing to worry about, lol, aside from ruining an expensive GPU, or your whole house, from a connector getting hot enough to melt solder.

And mishandling isn't the issue; the issue is that the connector wasn't idiot-proof. The old 8-pin connector is idiot-proof because it's either not plugged in all the way and the system won't boot, or it's plugged in and you have a running system, and the 8-pin connector didn't have any issues with melting or burning unless you bought a completely garbage PSU.
AcEDepends on definition of need. Unprecise, the 4080 also needed it because of how the ref card of Nvidia was constructed so you’re wrong.
The definition of need being that the card requires the new connector; too many adapters are just untrustworthy, IMO. The 4080 didn't need it with a 320 W TDP.
AcESource and proof for that? (X) none you’re making it up. :)

again, millions of RTX 40 series users with 0 problems with the connector, loud minority won’t change the facts. And fantasies won’t turn into facts. All connectors are safe if properly used, end of story.
You're welcome to go look it up; you never post proof of your claims anyway, so why should I even bother?
OnasiOkay. W1zz says it's fine. He had no issues with any version of the connector, across dozens of cards and thousands of replugs. Any new goalposts you would like to choose?
He isn't using the connector in a long-term PC that's up and running for any length of time; IMO a review test bench doesn't count. I want to see a reviewer actually use a card with it in a system, the way most people actually use a graphics card. You must be getting quite the workout from those goalposts, btw.

Edit: Thanks for the laugh reacts; this just confirms how reasonable and mature Nvidia diehards are. Disappointing coming from a mod, though.
#99
Zazigalka
_roman_People should understand that there are manufacturing processes, and some stuff is problematic.

The first post I see is the usual "I do not have a problem" post with
choose from:
*) operating system XY
*) connector XY
*) product XY
(which implies that the product is totally fine - 100% of the products are free from defects)

The graphics card manufacturers are just lazy. I know other areas where you have to write 8D reports, recall, scrap and pay fines for defective connectors.

I wrote about my "defective cables" from my Enermax power supply here and somewhere else. You may look for that topic and read it. I tried to explain the topic in more detail there.
Would it make you happier if my card had burned down, so you could imply that they all have an issue?
There was no implication of anything there, except maybe that what you're doing is spreading FUD. Those connectors are fine as long as you make sure you connect them properly.
#100
Onasi
@Hecate91
Okay. How about two 3090s that have been running in two workstations in my lab at work for three years now? In fairly shitty cramped Dell cases, by the way. Working off mid as hell PSUs, too. Fucking bizarrely, my workplace still stands and nothing burned down. I am sure it’s just a fluke, though, and my experience is irrelevant. As I have been reliably informed, after all:
3valatzyThey all melt if you wait it long enough. Can you show a temperature image of the connector to see the extremely high resistance because of its weak mechanical construction?