
AMD Radeon RX 6400 Tested on PCI-Express 3.0

you're only a gamer if you have a $2000+ system /s
It's funny because many of the people who cry that "nothing below 5641616176541 fps is good enough" actually play on consoles that are often locked to 30 or 60 fps.
 
The funny thing is I do have a high-end gaming (well... not just gaming...) system, but I can still enjoy playing on a console at 30 or 60 FPS.
 
Looking at some of the lag I see even with a 5600 XT in a PCIe 4.0 slot limited to 3.0 by my 3600XT: Guild Wars 2 uses anywhere between 6-8 GB of VRAM with the DX11 client and High textures. Even the DX9 client has the same issue due to the texture sizes, so you won't be able to use a 6400/6500 at that setting.

In areas such as Divinity's Reach or Lion's Arch there can be lots of lag with massive FPS drops, but some of that could be network related, as ping times climb to 500+ ms. At that point you begin rubber-banding due to server load, and fixing it would require a complete redesign of the game, since those maps are stored on my system. The game's .dat file is now almost 68 GB from how much I've explored; my nephew, who also plays, has the same file size, so the only data transferred when you enter a map is the location and state of the other players and NPCs on it.
 
Unless the prospective buyer can't fit that 7950 into their slim case. Or they want new drivers (look at how Halo Infinite runs on the 7950). Or better efficiency / heat / noise. Or modern video decode capabilities.

Gaming performance isn't the only measuring point of a graphics card.
Of course, I wouldn't recommend anyone buy a 10 or even 5 year old video card; it's a minor miracle such a thing can even run modern games. The point is that performance-per-dollar stagnation has reached a point of absolute absurdity; nobody would ever have believed it could get this bad, and price/performance is still the only thing the bulk of this market truly cares about.

The 7770 cost the same $159 as the 6400 and was about as good as a GTX 280, IIRC, the $650 flagship from four years prior (and people called that a rip-off at the time). Now $159 gets you the same performance as a GPU that sold for $300 a literal decade ago. I know you see the problem.
 
I agree, the GPU market is in absolute shambles right now - but that isn't the 6400's fault. It's just a symptom of the problem, not the problem itself.

I remember buying an X800 XT (a high-end GPU) with pocket money as a high-school student with no job. Sure, it was an ex-display unit with a small discount, but still. Now I work night shifts at a full-time job, yet I couldn't afford anything better than a 3070 Ti or 6700 XT.
 
The high-end Radeon HD 4890 was just $195 right before the Radeon HD 5870 launched back in 2009...

I'm sorry, but AMD will never get €940 from me for an RX 6800 XT 16 GB eighteen months after its release.
 
Everyone seems to miss the point and the USP of this card. It draws a good bit less than 75 W, so it can be powered solely from the PCIe slot with a modest power supply and no extra connectors. The only competition comes from the non-overclocked versions of the GTX 1650 (NOT the Ti), which has been overpriced and hard to get hold of. (Not to mention those tend to push 75 W, and I'm far more confident an RX 6400 will actually run in my rig than a GTX 1650.)

It will fit in a lot of pre-built desktops from major manufacturers and will allow playing a much wider range of games, mainly older ones, at frame rates and settings far better than their integrated graphics can manage, all without buying a new power supply. I have a Dell Vostro and can't fit a new power supply without modding the case. I mostly play RPGs, and if you look at the ones tested, the frame-rate hit from PCIe 3.0 is mostly negligible. Yes, it's a niche market, but so are the highest-end cards, and I know which niche I think is bigger.
 
This is actually a great way of finding out which games haven't been optimized properly (looking just at the 1080p results),
e.g. Valhalla, Cyberpunk, Deathloop, DOOM, F1, Far Cry, Forza, God of War, Halo, Watch Dogs.
Waaay too much use of CPU<->GPU transfers in these.

I suppose, though, it's a little fairer to use a GPU with a huge amount of VRAM, to try to eliminate swapping due to VRAM exhaustion.
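
For anyone who wants to poke at this themselves, here's a rough Python sketch of the idea, assuming an NVIDIA card and the pynvml bindings (pip install nvidia-ml-py); the counters are NVML's, so it won't work on the Radeons under discussion, but it shows the kind of per-second bus traffic to watch for:

```python
# Crude PCIe throughput logger using NVML's built-in counters.
# nvmlDeviceGetPcieThroughput reports KB/s sampled over a short window.
import time

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        tx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_TX_BYTES)
        rx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_RX_BYTES)
        # KB/s -> MB/s; RX is host-to-GPU traffic, which is where texture streaming shows up
        print(f"PCIe TX {tx / 1024:7.1f} MB/s | RX {rx / 1024:7.1f} MB/s")
        time.sleep(1.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```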
 
That's not a fair expectation, as these games were developed when Navi 24 wasn't even in the news. Developers had no idea AMD would come up with a PCIe x4 graphics card targeted at budget-oriented gamers.
 
welcome to texture streaming

Remember that these are at maximum settings, too - simply turning down texture settings would reduce the problem significantly
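
For a rough sense of scale, here's a back-of-the-envelope sketch in Python; the link rates are theoretical peaks and the 500 MB burst is a made-up figure, not measured data:

```python
# Back-of-the-envelope: how long does a burst of texture streaming occupy the bus?
# Peak rates assume 8/16 GT/s with 128b/130b encoding; real-world throughput is lower.
GB = 1024**3

links = {
    "PCIe 3.0 x16": 15.75 * GB,  # what most older cards get
    "PCIe 4.0 x4":   7.88 * GB,  # the RX 6400 in a modern board
    "PCIe 3.0 x4":   3.94 * GB,  # the RX 6400 in an older board
}

burst = 500 * 1024**2  # hypothetical 500 MB of textures streamed in on a scene change

for name, bandwidth in links.items():
    ms = burst / bandwidth * 1000
    print(f"{name}: {ms:5.1f} ms to move 500 MB")
```

At 60 fps a frame is ~16.7 ms, so a burst like that spans a couple of frames on x16 but seven or eight on a 3.0 x4 link, which is exactly where the stutter comes from.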
 
Or, you know, not gimping it down to a ridiculous x4 interface would do the same...
 
Very true. I've been using the RMPcieLinkSpeed=2 registry tweak to limit my 1070 Ti to Gen 1.1 x16, testing a few rendering apps/games at the lowest settings and resolutions, and monitoring Bus Usage in GPU-Z/Afterburner. It makes it much more obvious who's doing a lot of this, maybe even too much, instead of preloading it all into VRAM (8 GB is a fairly generous amount!). I get that it's a lot more complicated than this, though.
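
For reference, here's roughly how that value could be set from Python on Windows (run elevated). I'm assuming it lives under the standard display-adapter class key, which is where this tweak is usually described; the 00xx subkey varies per machine, so double-check against your own registry first, and deleting the value restores the default:

```python
# Sketch: set the RMPcieLinkSpeed DWORD on the NVIDIA adapter's class subkey.
# Requires admin rights; a reboot (or driver restart) is needed for it to apply.
import winreg

# Standard GUID for the Display Adapters device class.
CLASS_KEY = r"SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_KEY) as class_key:
    i = 0
    while True:
        try:
            sub = winreg.EnumKey(class_key, i)  # "0000", "0001", ...
        except OSError:
            break  # ran out of subkeys
        i += 1
        try:
            with winreg.OpenKey(class_key, sub, 0,
                                winreg.KEY_READ | winreg.KEY_SET_VALUE) as adapter:
                desc, _ = winreg.QueryValueEx(adapter, "DriverDesc")
                if "NVIDIA" in desc:
                    winreg.SetValueEx(adapter, "RMPcieLinkSpeed", 0,
                                      winreg.REG_DWORD, 2)  # 2 pinned my card to Gen 1.1
                    print(f"Set RMPcieLinkSpeed=2 on {sub}: {desc}")
        except OSError:
            continue  # subkey without DriverDesc, or not accessible
```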
 
They have far more than 8 GB to load in: as you move between views in the game, or things like enemies and terrain go off screen, it's constantly switching to new data.

At a design level it's a toss-up: do you focus on preloading as much as possible, to the detriment of low-VRAM graphics cards, or on streaming as fast as possible so it works better on low-end hardware? There's a toy sketch of that tradeoff below.

Until this year, pretty much every GPU, even the budget ones, had an x16 link, so it was easy to focus on the streaming option; it worked best for everyone overall.
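
As a toy illustration of that tradeoff (all asset sizes, budgets, and the greedy policy are invented for the example, nothing engine-specific):

```python
# Toy model of the preload-vs-stream decision a streaming budget might make.
# All numbers are invented for illustration; real engines re-evaluate per frame.

def plan_residency(assets_mb: list[int], vram_budget_mb: int) -> tuple[list[int], list[int]]:
    """Greedily pin the largest assets in VRAM; everything else streams over PCIe."""
    pinned, streamed = [], []
    remaining = vram_budget_mb
    for size in sorted(assets_mb, reverse=True):
        if size <= remaining:
            pinned.append(size)    # resident: uploaded once, no recurring bus traffic
            remaining -= size
        else:
            streamed.append(size)  # streamed: re-fetched on demand, eats PCIe bandwidth
    return pinned, streamed

# Same hypothetical ~5 GB working set on a 4 GB card vs an 8 GB card:
assets = [512, 512, 384, 384, 256, 256, 256, 128] * 2  # sizes in MB
for budget in (4096, 8192):
    pinned, streamed = plan_residency(assets, budget)
    print(f"{budget} MB VRAM: {sum(pinned)} MB resident, {sum(streamed)} MB streaming")
```

Real streaming systems evict and re-prioritize every frame rather than deciding once, but the shape is the same: whatever doesn't fit in the resident set becomes recurring PCIe traffic, and that's the bucket an x4 link chokes on.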
 