
AMD Radeon RX 6400

Joined
Sep 2, 2014
Messages
259 (0.07/day)
Location
Emperor's retreat/Naboo Moenia
System Name Order66
Processor Ryzen 7 3700X
Motherboard Asus TUF GAMING B550-PLUS
Cooling AMD Wraith Prism (BOX-cooler)
Memory 16GB DDR4 Corsair Desktop RAM Vengeance LPX 3200MHz Red
Video Card(s) GeForce RTX 3060Ti
Storage Seagate FireCuda 510 1TB SSD
Display(s) Asus VE228HR
Case Thermaltake Versa C21 RGB
Audio Device(s) onboard Realtek
Power Supply Corsair RM850x
Software Windows10 64bit
So um... the low-end 6400 is crap because it can't run, for example, Cyberpunk 2077 with RT Psycho at 100 fps? Let's not forget that we're talking about an entry-level product, after all.
I didn't say anything about the RX 6400 being crap; I commented on a past comment from David Wang, AMD's Senior Vice President of Engineering for the Radeon Technologies Group.
 
Joined
May 21, 2009
Messages
271 (0.05/day)
Processor AMD Ryzen 5 4600G @4300mhz
Motherboard MSI B550-Pro VC
Cooling Scythe Mugen 5 Black Edition
Memory 16GB DDR4 4133Mhz Dual Channel
Video Card(s) IGP AMD Vega 7 Renoir @2300mhz (8GB Shared memory)
Storage 256GB NVMe PCI-E 3.0 - 6TB HDD - 4TB HDD
Display(s) Samsung SyncMaster T22B350
Software Xubuntu 24.04 LTS x64 + Windows 10 x64
So um... the low-end 6400 is crap because it can't run, for example, Cyberpunk 2077 with RT Psycho at 100 fps? Let's not forget that we're talking about an entry-level product, after all.
Yeah, it is crap, but for other reasons: cut-down features, performance on PCIe Gen 3 (the boards most users have), and the high price. But for ray tracing? Meh....................

But maybe it's useful for some users, like SFF builds. Personally, I won't buy anything until Nvidia responds to this and Intel Arc establishes itself in the market.

Meanwhile, don't give any money to fucking scumbag companies.

:)
 
Joined
May 2, 2017
Messages
7,762 (2.79/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I didn't quote him; GameDebate quoted him (I put in the link, as you surely noticed), and the quote says:
"Utilisation of ray tracing games will not proceed unless we can offer ray tracing in all product ranges from low end to high end."
This statement:
1) was made when nVIDIA had launched their mid-range RT implementations, so it definitely implies things against nVIDIA;
2) he states, NOT thinks (the part I underlined in your comment), that utilisation of ray tracing games will not proceed unless we (AMD) offer ray tracing from low end to high end.
If you don't think those remarks are a clear attempt to cut the hype around nVIDIA's Turing, then that's your estimation, and I certainly have a different one.
Furthermore, today it was proven that not only did they try to cut the hype around nVIDIA's Turing back then, but, more importantly, they did it using false claims, as, again, today's RT numbers prove.
.... you wrote out their quote in text. In your post. That's literally you quoting him. You're quoting him, even if you're quoting their quoting of his statement, because you're reproducing his words (through their translation). That's the same thing (barring, of course, any misrepresentations in their quote).

As for the rest:
1: Yes. That I entirely agree with this should have been plenty clear from my post.
2: That's literally the same thing. He's commenting on the future. The best he, as any human, can do is to state his opinion and intention. Unless he is literally all-powerful, he does not have the ability to make deterministic statements about the future.
3? You apparently stopped using numbers, but ... yeah. So. Did you read what I wrote? It doesn't seem like it. Because if you did you would see that I absolutely think that it was that, but that I also think that reality has "cut the hype from Nvidia's Turing" much more than this statement. Going back to my 0-100 scale: Nvidia was saying "it's 100!", he was saying "probably more like 70-80", and reality has since kicked down the door with a solid 20-30. Was he wrong? In a naïvely absolutist way where the absolute wording of an assessment matters more than its direction and overall gist, sure. But the direction of his argument is far more accurate than Nvidia's statement at the time. So, you're arguing that it was bad that he "cut" the Turing hype, yet ... reality has shown that hype to be complete BS? What are you so annoyed about? That an AMD exec made a vaguely accurate, but overly optimistic prediction that contradicted Nvidia's marketing? After all, if it annoys you that a corporate executive made a dubious claim back then, shouldn't you be even more annoyed at Nvidia execs trumpeting the wonders of RTRT and how it would revolutionize real-time graphics? Because their statements at the time are much, much further from reality than the statement you quoted here.
 
Joined
Sep 2, 2014
Messages
259 (0.07/day)
Location
Emperor's retreat/Naboo Moenia
System Name Order66
Processor Ryzen 7 3700X
Motherboard Asus TUF GAMING B550-PLUS
Cooling AMD Wraith Prism (BOX-cooler)
Memory 16GB DDR4 Corsair Desktop RAM Vengeance LPX 3200MHz Red
Video Card(s) GeForce RTX 3060Ti
Storage Seagate FireCuda 510 1TB SSD
Display(s) Asus VE228HR
Case Thermaltake Versa C21 RGB
Audio Device(s) onboard Realtek
Power Supply Corsair RM850x
Software Windows10 64bit
......
He was right (the "we" he used in his comment was probably meant as "we as an industry", not that it matters much), and it's exactly like @Valantar said; he probably meant it like this: "this new feature won't take off until a lot of people have access to it".
.....
Did he say it because they had nothing to compete with back then, and probably because he wanted to kill the hype Nvidia was trying to generate? Yes, sure.
Did he play a negative role in trying to delay developer adoption of a graphics feature that will ultimately help advance the industry's visual fidelity? OK, maybe even that, but don't you think your reaction is a bit much?
Now, as far as engineering goes, he's an asset for AMD. Sure, RTG had some judgment lapses with Navi 24 and with pricing strategy in general, but achieving 2.81 GHz on N6, and a die size of only 107 mm² for the transistor budget (not to mention that RDNA2's ray tracing implementation, although weak on performance, is extremely efficient in the transistor budget it adds), are very good indications of the work happening in RTG, imo.
Is RDNA2 worse at ray tracing than 2018's Turing? Sure, and it's worse than Turing in other things too, but Nvidia isn't an easy opponent...
Nice answer, but don't you think Mr Wang would be extremely, ehm... let's say, altruistic :p if he used the term "we" to refer to nVIDIA as well?? Come on, mate, he speaks on behalf of nVIDIA? How plausible do you think that is?
Who made him a representative of nVIDIA? How altruistic indeed :p !! Of course by "we" he meant his own company; otherwise he should have made it clear that he was speaking on behalf of the entire industry :p !!
Moreover, since you also agree that he did say those things in order to kill the hype around nVIDIA, who, coincidentally :p didn't have any low-end RT GPUs back then...
 
Joined
Jan 14, 2019
Messages
12,476 (5.78/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
I read everything; either your English failed, or you said the 6400 and 6500 XT have decoding. What else would your question "How many more options do you need?" imply here, in the case of cards that can't record?
OK, let me simplify: The 6500 XT and 6400 do not have a video encoder (and in fact, the GT 710 / 730 duo don't, either). Everything else released in the last couple of years does. How many more options do you want?

And so can I, but it's wasteful (it basically makes the CPU work at full blast) and, depending on the system, literally impossible without crazy frame skipping. Decoding of popular codecs shouldn't be some "premium" feature. People used to buy MPEG-2 cards in the past; at this rate, we might need VP9 or AV1 cards again, because AMD shits on their customers.
Every fairly modern CPU can do it at relatively low usage. Needing an AV1-decode-capable GPU for watching 4K YouTube on a PC with a sh*t CPU is a need that you invented.
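If you doubt that, it's easy to measure for yourself. Here's a rough Python sketch (assuming ffmpeg is installed, and "clip.webm" is a hypothetical local 4K VP9/AV1 sample): decode to a null sink and read back the speed ffmpeg reports.

[CODE]
# Quick check of whether the CPU can software-decode a clip in real time:
# decode to a null output and parse ffmpeg's reported speed multiple.
import re
import subprocess

result = subprocess.run(
    ["ffmpeg", "-i", "clip.webm", "-f", "null", "-"],
    capture_output=True, text=True,
)
# ffmpeg logs progress to stderr, ending with something like "speed=3.41x"
matches = re.findall(r"speed=\s*([\d.]+)x", result.stderr)
if matches:
    speed = float(matches[-1])
    print(f"{speed}x realtime -> {'fine' if speed >= 1.0 else 'too slow'}")
[/CODE]

Anything that comes back well above 1.0x has headroom to spare for playback.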

Or you have BD rips, but don't have the space, and need to compress them with minimal quality loss without it taking days. That's why you get an encoding-capable GPU, which is basically anything new except the 6400 or 6500 XT.
If you need that feature, fair enough. I just don't think many people do.

How wrong you are. AMD A4 APUs are the lowest of the low, along with Athlon X2s. Athlon X4s were mid-range chips, comparable to Intel i3s of the time. And at the time, the lowest-end new chip was actually the Sempron 140: a single-core, K10-arch, AM3 chip.
Not really. A three-years-older Core i3-4330 can beat it with ease.

It's not even close to Atoms; those things couldn't play YouTube when they were new. The Athlon X4 can play YouTube, but only at 1080p. At 1440p, another codec is used, and then it drops some frames. I could also use enhanced-h264ify and play everything with the GPU alone, but hypothetically, AMD could have made a proper card, and I could have just dropped it in and played even 8K perfectly fine. It's just needless e-waste to release gimped cards that are useless.
I wasn't comparing it to Atoms. I was merely stating that you're holding the Athlon X4 in too high regard.

It's my non-daily machine for light usage, and 1080p works fine on it, so I won't upgrade. But people in the past bought GT 710s to make YouTube playable on, say, Pentium D machines. If that's what a buyer needs and they don't care about gaming at all, then that's a fine purchase. And including full decoding capabilities didn't really cost any extra. If the GT 710 could do it, why can't the RX 6400? That's just AMD selling gimped e-waste to dumb people. Mark my words: once the GPU shortage ends, AMD will suddenly stop pulling this crap on their GT 710-equivalent cards, they will rub it in RX 6500 XT customers' faces, and customers will suck it up. AMD is straight up preying on the stupid and careless with such nonsense. And better yet, no, they won't ever give you full decoding capabilities, but will start to market them as a "premium" feature only for 6900-tier cards. So if you ever need those capabilities, you will have to become their "premium" customer. You will be forced to buy overpriced shit with artificial demand. AMD is cashing in during the shortage as much as they can, and no, they aren't hurting because of it; they are making astronomical profits like never before. There's nothing else we can do other than boycott shitty products and pick their competitors' products instead.
What are you talking about? The 710 can do MPEG-1, MPEG-2, VC-1 and H.264 decode. The 6400 / 6500 XT can do all that, plus H.265. The only thing they can't decode is AV1 - and neither can the 710, by the way.

That's not exactly what I meant. It's about artificially creating demand and selling low-end poo-poo for a huge premium. It's not making money; it's straight-up daylight robbery. AMD pulled the same crap when they launched the Ryzen 5600X and 5800X with huge premiums, while Intel sold 20% slower parts for literally half the price. And they dared to claim that the 5600X was some value wonder, only to release the 5600 and 5500 when people realized that Intel had some good shiz too. They also intentionally didn't sell any sensible APUs, only the 5600G and 5700G, also for nearly twice what they were actually worth, but fanboys didn't question that and instead bought as many as AMD managed to make. Had they released a 5400G (a hypothetical 4C8T RDNA 2 APU), it would have outsold the 5600G/5700G many times over, but why do this if they can artificially limit their line-up and convince buyers to buy way overpriced stuff instead? That's exactly why I call this toxic capitalism: goods could be available, but companies don't make them, due to lower, but still reasonable, margins. If you look at their financial reports, they made an absolute killing during the shortage and pandemic, which basically confirms that they had huge mark-ups. That also explains why the RX 6400 and 6500 XT lack features, lack performance, and are overpriced, crappy products. The 6500 XT is literally such poo that the RX 570 is its equivalent, but the RX 570 was made ages ago, wasn't gimped, and cost way less. Even with inflation included, there's no way the 6400 and 6500 XT must cost as much as they do. Their mark-up is as high as 40%, if not more.
I see what you mean, and I kind of agree with the sentiment.
It's only that the 6400 and 6500 XT aren't the only symptoms of this "wild capitalism". I don't think there's any GPU on the market that sells for a reasonable price at the moment. I mean, what other options do you have? A 1030 for £100? A 1050 Ti for £150-200? Or a used 1650 from ebay for £250-300? Hell no! Ampere doesn't even exist in this range. I'd much rather buy something new with warranty for this price.

Why not a Quadro T600? It's like a GTX 1650 LE, but low-profile, and it costs less than the 6400. And since the 6400 is slower than the 1050 Ti, if you can find a low-profile 1050 Ti, that's literally the same thing, but you can overclock it, record gameplay, stream, and get VP9 and H.265 decode/encode. The 1050 Ti is just better. The 1650 is closer to the 6500 XT, but the real 6500 XT competitor is the 1650 Super.
Because 1. it actually costs more than the 6400, 2. not being a consumer card, it's a bit problematic to find one, 3. the 6400 isn't slower than the 1050 Ti, and 4. I have no intention to overclock. It's going into a home theatre PC with a Ryzen 3 in it, so its only job will be to put a 4K 60 Hz image on my TV through HDMI. Being capable of some light gaming is only a plus.

I didn't say anything about the RX 6400 being crap; I commented on a past comment from David Wang, AMD's Senior Vice President of Engineering for the Radeon Technologies Group.
Ah, OK! I read too much into the post you commented on. :)

Obviously, the 6400 is not going to be for you, and there's nothing wrong with that. I wouldn't want one as my main card, either, to be fair.
 
Joined
Sep 2, 2014
Messages
259 (0.07/day)
Location
Emperor's retreat/Naboo Moenia
System Name Order66
Processor Ryzen 7 3700X
Motherboard Asus TUF GAMING B550-PLUS
Cooling AMD Wraith Prism (BOX-cooler)
Memory 16GB DDR4 Corsair Desktop RAM Vengeance LPX 3200MHz Red
Video Card(s) GeForce RTX 3060Ti
Storage Seagate FireCuda 510 1TB SSD
Display(s) Asus VE228HR
Case Thermaltake Versa C21 RGB
Audio Device(s) onboard Realtek
Power Supply Corsair RM850x
Software Windows10 64bit
.....
3? You apparently stopped using numbers, but ... yeah. So. Did you read what I wrote? It doesn't seem like it. Because if you did you would see that I absolutely think that it was that, but that I also think that reality has "cut the hype from Nvidia's Turing" much more than this statement. Going back to my 0-100 scale: Nvidia was saying "it's 100!", he was saying "probably more like 70-80", and reality has since kicked down the door with a solid 20-30. Was he wrong? In a naïvely absolutist way where the absolute wording of an assessment matters more than its direction and overall gist, sure. But the direction of his argument is far more accurate than Nvidia's statement at the time. So, you're arguing that it was bad that he "cut" the Turing hype, yet ... reality has shown that hype to be complete BS? What are you so annoyed about? That an AMD exec made a vaguely accurate, but overly optimistic prediction that contradicted Nvidia's marketing? After all, if it annoys you that a corporate executive made a dubious claim back then, shouldn't you be even more annoyed at Nvidia execs trumpeting the wonders of RTRT and how it would revolutionize real-time graphics? Because their statements at the time are much, much further from reality than the statement you quoted here.
With Turing, nVIDIA always promoted ray tracing combined with DLSS, never individually. That's something most of the tech press tends to... forget, for some reason.
With DLSS enabled, ray tracing is feasible even on cards such as the RTX 2060.
Check at 46:35:
AMD never did anything similar, meaning their current low-end RT offerings are basically RT-incapable.
 
Joined
May 2, 2017
Messages
7,762 (2.79/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Nice answer, but don't you think Mr Wang would be extremely, ehm... let's say, altruistic :p if he used the term "we" to refer to nVIDIA as well?? Come on, mate, he speaks on behalf of nVIDIA? How plausible do you think that is?
Who made him a representative of nVIDIA? How altruistic indeed :p !! Of course by "we" he meant his own company; otherwise he should have made it clear that he was speaking on behalf of the entire industry :p !!
Moreover, since you also agree that he did say those things in order to kill the hype around nVIDIA, who, coincidentally :p didn't have any low-end RT GPUs back then...
It is entirely reasonable for "we" in a statement like that to be taken as "we as an industry", as AMD isn't a game developer and can thus only ever represent a portion of what is necessary for this to happen. AMD (or Nvidia, or any other single actor) has zero control over whether RT will become the dominant graphical paradigm.
With Turing, nVIDIA always promoted ray tracing combined with DLSS, never individually. That's something most of the tech press tends to... forget, for some reason.
With DLSS enabled, ray tracing is feasible even on cards such as the RTX 2060.
Check at 46:35:
AMD never did anything similar, meaning their current low-end RT offerings are basically RT-incapable.
So ... uh ... Turing was announced in August 2018. Why are you posting a video from January 2019 as proof that "Nvidia always promoted RT combined with DLSS"? That presentation is literally a way for Nvidia to demonstrate that their RT performance would be less terrible once DLSS arrived, as it wasn't out yet at that point. Also, they've routinely promoted both: RT, and RT+DLSS. DLSS was also not mentioned whatsoever at the Turing launch, while RT was a huge focus. So you're entirely wrong in claiming that Nvidia never promoted RT on its own. That is factually untrue. They started adding DLSS to the marketing once it became known just how poor RT performance was, then toned it down after the reception of DLSS was lacklustre, then stepped it up with DLSS 2.0 again.

Also, AMD never did anything similar? Really? Have you heard of FSR? RSR? FSR 2.0? They're doing exactly the same thing: starting off saying "we have hardware RTRT", then adding upscaling, then intermittently promoting one or both. Also, in regards to this actual topic, have you seen AMD promote the 6500 XT's or 6400's RT capabilities heavily? 'Cause from what I can see from the 6500 XT's product page, the only mention of RT is a spec among several other specs ("16 Compute Units & Ray Accelerators"), while FSR has a huge full page width banner. That certainly doesn't look like they're promoting RT heavily, and definitely not without upscaling. Heck, they're promoting FSR explicitly, while RT is barely mentioned.

But back to your initial argument here: you've still not shown how the statement you're so worked up about is any more untrue than Nvidia's initial marketing hype - heck, even that video starts out with a looooong spiel on how RTRT is the future of graphics, a new paradigm, a way to make graphics fundamentally better (it just needs some help along the way!). It takes quite some time in that presentation before DLSS is mentioned.
 
Joined
Jan 14, 2019
Messages
12,476 (5.78/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
I agree with this guy.

The 6400 isn't a good value because it's good. It's a good value because other low-power options are either 1. significantly worse (1030), or 2. significantly more expensive with no performance advantage (1650).

 
Joined
May 21, 2009
Messages
271 (0.05/day)
Processor AMD Ryzen 5 4600G @4300mhz
Motherboard MSI B550-Pro VC
Cooling Scythe Mugen 5 Black Edition
Memory 16GB DDR4 4133Mhz Dual Channel
Video Card(s) IGP AMD Vega 7 Renoir @2300mhz (8GB Shared memory)
Storage 256GB NVMe PCI-E 3.0 - 6TB HDD - 4TB HDD
Display(s) Samsung SyncMaster T22B350
Software Xubuntu 24.04 LTS x64 + Windows 10 x64
OK, let me simplify: The 6500 XT and 6400 do not have a video encoder (and in fact, the GT 710 / 730 duo don't, either). Everything else released in the last couple of years does. How many more options do you want?
These GPUs, if based on GK208*, do have NVENC, but Nvidia cut NVENC capabilities with GP108, like the GT 1030.


In this respect, the GK208-based GT 710 and GT 730 are better than the RX 6400.

:)
 
Joined
Nov 26, 2021
Messages
1,697 (1.53/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Great review, W1zzard! Thanks for adding the GT 1030. A few things are clear from this review and previous ones.
  • The 6500 XT is being hampered by the PCIe bottleneck. Its performance relative to the 6400 is below what it would have been without the self-imposed bottlenecks.
  • Similarly, AMD's APUs have been prevented from reaching their full potential due to the lack of L3 cache for the IGP. The 680M in the 6800HS clocks close to the 6400, yet the 6400 is nearly 30% faster.
I used numbers from a recent 3090 Ti review for the 6800 and 6900 XT. I chose the 6800 and 6900 XT because the difference in CUs between those two is the same as between the 6400 and 6500 XT. It isn't a perfect comparison, because the 6800 disables an entire shader engine and is, in some ways, more of a cut than the 6400. For clock speeds, I used the reviews for the reference 6800 and 6900 XT, as well as the recent review of the Sapphire 6500 XT. The methodology has changed since the earlier reviews, but I'm hoping that the ratios will be the same. All numbers for the 6400 are at 1080p, while the resolution for the 6800 comparison is listed in the table. The "6900 vs 6800 v2" column estimates the relative speed of a 6900 XT if its clock-speed advantage were the 20% that the 6500 XT has over the 6400, rather than the 3% it has in reality. This is, of course, a maximal estimate, and doesn't account for L3 misses. Table 2 is sorted by the "6500 vs 6400" speedup.

GPU        | Clock Speed (MHz)
RX 6400    | 2285
RX 6500 XT | 2742
RX 6800    | 2225
RX 6900 XT | 2293

Game | 6400 | 6500 XT | 6500 vs 6400 | Resolution | 6800 | 6900 XT | 6900 vs 6800 | 6900 vs 6800 v2 | Performance increase
Deathloop | 26.5 | 30.1 | 1.14 | 2160 | 104.1 | 119.1 | 1.14 | 1.33 | 17%
Far Cry 6 | 41 | 48 | 1.17 | 2160 | 64.5 | 80.3 | 1.24 | 1.45 | 24%
Doom Eternal | 61.4 | 73.3 | 1.19 | 2160 | 130.4 | 161.2 | 1.24 | 1.44 | 21%
Battlefield V | 75.1 | 92 | 1.23 | 2160 | 110.2 | 128.3 | 1.16 | 1.36 | 11%
Elden Ring | 34.7 | 42.6 | 1.23 | 2160 | 51.1 | 60.9 | 1.19 | 1.39 | 13%
Forza Horizon 5 | 27.1 | 33.5 | 1.24 | 2160 | 70.1 | 88.4 | 1.26 | 1.47 | 19%
Metro Exodus | 43.9 | 55.6 | 1.27 | 1440 | 133.5 | 165.7 | 1.24 | 1.45 | 14%
Divinity Original Sin II | 68.9 | 89 | 1.29 | 2160 | 104.7 | 130.5 | 1.25 | 1.45 | 12%
Dying Light 2 | 31.2 | 41 | 1.31 | 1440 | 85.4 | 108.5 | 1.27 | 1.48 | 13%
The Witcher 3 | 46.7 | 62.1 | 1.33 | 1080 | 178.8 | 230.5 | 1.29 | 1.50 | 13%
Cyberpunk 2077 | 19.2 | 25.6 | 1.33 | 1440 | 65.9 | 81.8 | 1.24 | 1.45 | 8%
Borderlands 3 | 42.3 | 56.9 | 1.35 | 2160 | 56.4 | 72.1 | 1.28 | 1.49 | 11%
Red Dead Redemption 2 | 24.8 | 33.6 | 1.35 | 1080 | 100 | 127.7 | 1.28 | 1.49 | 10%
Control | 28.4 | 38.8 | 1.37 | 1440 | 82.3 | 103.8 | 1.26 | 1.47 | 7%
Guardians of the Galaxy | 29.7 | 40.9 | 1.38 | 2160 | 69.1 | 84.3 | 1.22 | 1.42 | 3%
F1 2021 | 49.8 | 85.7 | 1.72 | 2160 | 137.8 | 172 | 1.25 | 1.45 | -16%

Other than the weird numbers for F1 2021, the trend is clear. In many games, performance could have been increased by 10 to 20 percent with a wider PCIe connection and a slightly larger L3 or a 96-bit bus.
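For clarity, the last two columns are computed roughly like this (a sketch in Python of my method, assuming performance scales linearly with clock speed):

[CODE]
# Rescale the 6900 XT's gain over the 6800 as if it had the 6500 XT's
# ~20% clock advantage over the 6400 instead of its real ~3%, then
# compare that against the measured 6500 XT gain over the 6400.
CLOCKS = {"RX 6400": 2285, "RX 6500 XT": 2742, "RX 6800": 2225, "RX 6900 XT": 2293}

def lost_performance(fps_6400, fps_6500xt, fps_6800, fps_6900xt):
    small_ratio = fps_6500xt / fps_6400       # "6500 vs 6400" column
    big_ratio = fps_6900xt / fps_6800         # "6900 vs 6800" column
    clock_gain_small = CLOCKS["RX 6500 XT"] / CLOCKS["RX 6400"]   # ~1.20
    clock_gain_big = CLOCKS["RX 6900 XT"] / CLOCKS["RX 6800"]     # ~1.03
    v2 = big_ratio * clock_gain_small / clock_gain_big  # "6900 vs 6800 v2"
    return v2 / small_ratio - 1               # "performance increase" column

# Deathloop row: prints ~0.17, matching the 17% in the table.
print(round(lost_performance(26.5, 30.1, 104.1, 119.1), 2))
[/CODE]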
 
Joined
Jan 14, 2019
Messages
12,476 (5.78/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
These GPUs, if based on GK208*, do have NVENC, but Nvidia cut NVENC capabilities with GP108, like the GT 1030.



In this respect, the GK208-based GT 710 and GT 730 are better than the RX 6400.

:)
I don't see the GK208 listed here. And you're right, the GP108 (GT 1030) doesn't have NVENC, either. How was that not a problem then, but suddenly a problem now with the 6400? (a question to everyone)
 
Joined
Oct 27, 2020
Messages
793 (0.53/day)
Nice answer, but don't you think Mr Wang would be extremely, ehm... let's say, altruistic :p if he used the term "we" to refer to nVIDIA as well?? Come on, mate, he speaks on behalf of nVIDIA? How plausible do you think that is?
Who made him a representative of nVIDIA? How altruistic indeed :p !! Of course by "we" he meant his own company; otherwise he should have made it clear that he was speaking on behalf of the entire industry :p !!
Moreover, since you also agree that he did say those things in order to kill the hype around nVIDIA, who, coincidentally :p didn't have any low-end RT GPUs back then...
I don't know the man well enough to say whether he's the altruistic type, jk.
Usually, the character types who advance in multinational companies have specific characteristics that I don't value.
But he's a professional with a career; you could see him jumping to Apple or whatever in a few years, just like Raja went from AMD to Intel.
So he is part of an industry (if I remember correctly, he'd worked at SGI, then ArtX; I may be wrong, I'm too bored to look it up) and plays an important role in the second-largest graphics solution provider. Also, it was 2018, and AMD hadn't even launched RDNA1 yet (Q3 2019), so if by "we" he meant AMD, then the message translates as follows:
"Utilisation of ray tracing games will not proceed unless AMD can offer ray tracing in all product ranges from low end to high end"
Something is lost in translation, don't you think?
Anyway, like I said in my first reply, it's not even that important.
Recently, when Intel revealed Arc, AMD threw shade at Intel's low-end 128 EU chip, pointing out that although it uses the same N6 process, AMD's chip is faster while at the same time much smaller.
I remember the days when a higher transistor count meant that a design had more features or was more forward-looking, and marketing teams used it to promote their tech. AMD, on the contrary, did the opposite, so the shade was aimed at Intel's engineering team specifically; kinda below the belt, and I bet it was not welcomed at all!
Didn't this bother you more?
 
Joined
May 21, 2009
Messages
271 (0.05/day)
Processor AMD Ryzen 5 4600G @4300mhz
Motherboard MSI B550-Pro VC
Cooling Scythe Mugen 5 Black Edition
Memory 16GB DDR4 4133Mhz Dual Channel
Video Card(s) IGP AMD Vega 7 Renoir @2300mhz (8GB Shared memory)
Storage 256GB NVMe PCI-E 3.0 - 6TB HDD - 4TB HDD
Display(s) Samsung SyncMaster T22B350
Software Xubuntu 24.04 LTS x64 + Windows 10 x64
I don't see the GK208 listed here. And you're right, the GP108 (GT 1030) doesn't have NVENC, either. How was that not a problem then, but suddenly a problem now with the 6400? (a question to everyone)
Yeah, this list doesn't show GK208 (it appeared in earlier versions), but back to the topic: Nvidia began cutting NVENC on desktop with GP108, aka the GT 1030.

For this reason, don't buy the GT 1030. At that time, I had a GK208 GT 630 for my Linux Wine YouTube channel (very useful back then).



:)
 
Joined
Nov 17, 2016
Messages
152 (0.05/day)
I don't see the GK208 listed here. And you're right, the GP108 (GT 1030) doesn't have NVENC, either. How was that not a problem then, but suddenly a problem now with the 6400? (a question to everyone)
A few reasons IMO:

  1. There was mass hysteria from tech reviewers about the 6500 XT's price, even though it was a relatively honest one, not a fake one like with other cards.
  2. Those tech reviewers whip up hysteria to get clicks on their videos.
  3. Nobody cared about the GT 1030: https://www.techpowerup.com/reviewdb/Graphics-Cards/NVIDIA/GT-1030/ because it came out in 2017; you had the RX 550 for the same price with encoding, the RX 560 for $20 more, the 1050, the 1050 Ti, etc. - a huge raft of cheap cards - and the 1030 was an irrelevant product.
  4. Encoding is probably more important now, with more people streaming, doing TikTok, etc.
  5. The RX 6400 is $180 but should be $80. While that reflects the rest of the market, and so is somewhat defensible, the higher absolute price means you have a reasonable expectation of more features. It's just fairer to the consumer to give them a more complete product.
Of course there is a reason - this is just recycled laptop hardware - but let's not pretend it's totally unreasonable to be more demanding about the ONLY current GPU under $200 than when there were about 20 of them.
 
Joined
May 2, 2017
Messages
7,762 (2.79/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I don't see the GK208 listed here. And you're right, the GP108 (GT 1030) doesn't have NVENC, either. How was that not a problem then, but suddenly a problem now with the 6400? (a question to everyone)
I have a vague memory of it being discussed as a drawback at launch, but (and I'm guessing here) probably nobody took it seriously as a gaming or "do-it-all" GPU back then, while now people are doing so for Navi 24 GPUs thanks to silly pricing. At $80, people were probably just happy to see it kind of do 3D.
 
Joined
May 8, 2021
Messages
1,978 (1.51/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
The 1050 Ti is exactly the same price in Indonesia, too.

The 6400 is 40% faster than the 1050 Ti, so I'd take the 6400. Plus it's newer and more efficient.

It helps that I have a QuickSync CPU and PCIe 4. But even without QuickSync, as a budget gaming card at like $80, the RX 6400 is obviously hugely superior, because, like, it's a budget card for gaming, not a workstation card. And as mentioned, it's MUCH faster.

The 1050 Ti is two generations old, so by this point it should be selling for less, not more. Inflation isn't normal for old PC parts; they get steadily cheaper until they become e-waste.

The argument about why they are the same price is just market forces. 1050 Ti = Nvidia: better brand recognition, better features. 6400 = AMD: worse features, better performance. The price ends up the same.

If there is reduced demand or more supply, prices will fall.

At the moment, $200 for a $100 GPU is too much for me, so I'll skip.
I don't see your 40% claims:

Literally the same; and the situation will be even more in favour of the 1050 Ti in a Gen 3 system.
 
Joined
May 2, 2017
Messages
7,762 (2.79/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I don't see your 40% claims:

Literally the same; and the situation will be even more in favour of the 1050 Ti in a Gen 3 system.
Take a look at the Hardware Unboxed video I linked back on page 5. On average across their 12 games at their chosen settings (tuned for decent low-end performance, not Ultra or anything silly), the 6400 averages 60 fps (48 fps 1% lows) vs 43 fps (35 fps 1% lows) for the 1050 Ti. This average is significantly pulled down by a couple of titles where the 6400 performs atrociously badly (Rainbow Six Siege, F1 2021, Doom Eternal) - it's typically faster still. That means the 1050 Ti delivers 28% (27% for 1% lows) less than the 6400. I haven't watched your video, so I don't know which games are tested or how many, but I tend to trust HWUB's results over random YouTubers.


Edit: 40% is a bit misleading, depending on the wording, as percentages are relative to your starting point. The 1050 Ti delivering 28% less (with the 6400 as 100%) is the same as the 6400 being 40% faster (with the 1050 Ti as 100%). Saying the 6400 is 40% faster and saying the 1050 Ti is 28% slower describe the same gap.
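To spell that out (a quick sketch using the HWUB averages):

[CODE]
# Same gap, two different percentages, depending only on which card is
# treated as the 100% baseline (HWUB's 12-game averages).
fps_6400, fps_1050ti = 60, 43

faster = (fps_6400 / fps_1050ti - 1) * 100  # baseline: 1050 Ti
slower = (1 - fps_1050ti / fps_6400) * 100  # baseline: 6400

print(f"6400 is {faster:.0f}% faster than the 1050 Ti")  # ~40%
print(f"1050 Ti is {slower:.0f}% slower than the 6400")  # ~28%
[/CODE]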
 
Joined
May 8, 2021
Messages
1,978 (1.51/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Yep. Several times. But more importantly, you're responding to people using that term while clearly switching between several meanings of it, making your arguments miss the point.
Whatever then, I know terminology well. No need to be annoying.

Then why on earth bring in MXM adapters, if your point (?) was "this is a bog-standard, regular GPU design"? Sorry, but this does not compute. You made a claim that the desktop Navi 24 GPUs were equivalent to third-party vendors designing MXM-to-PCIe adapters for mobile GPUs in desktop use cases. This claim is false. Period.
Because that used to be how laptop cards were connected. As far as I know, there was an MXM 1060, so it's quite recent stuff, and because Alibaba specials used MXM-to-PCIe adapters to basically create what the RX 6400 and RX 6500 XT are. Conceptually the same shit.

... not on a 25W mobile GPU. Which is what the 6300M is. "A few watts" is quite a notable difference in that scenario. And, if you look at the post you quoted, I said "even a watt or two makes a difference", which is less than what you're saying here. The 6500M is 35-50W, where such a difference is less important, but can still allow for slightly higher clocks or better efficiency still.
This is the 6400; a watt or two of difference is nothing.

... and? TDP is a board-level designation that is essentially invented by chipmakers and/or OEMs. I'm talking about chip-level power savings from a design perspective. Every single one of those GT 710s have the same chip on board (though if some of them have more output hardware it's likely that that consumes a tad more power).
And that shows that you don't save anything. Those connectors are in the milliwatt range.

... so: PCB design software lets you add in a ready-made PCIe x16 connector design. It's already there, already done, and adding it is trivial - they then just position it correctly along the PCB perimeter and connect the relevant traces to the right pins. Removing those pins from that ready-made connector design, or god forbid making it from scratch for the nth time, would take much more time. A box design? The equivalent would be whether to copy in the company logo or to draw it from scratch. Which do you think they do? Also, graphical design is just a tad simpler than PCB design. Not that that makes it easy or of low value, but the cost of screwing up a cardboard box design is rather lower than screwing up a modified connector on a thousand PCBs.
lol I can saw off that connector, dude, there's no real design there to be done. You can literally change it in Paint and it will work.


... other GPUs with hardware encode/decode are too CPU heavy? I still think you're somehow failing to understand what was said: they said that there are plenty of other GPU alternatives on the market with those encode/decode blocks included, and that one option without them is thus not much of an issue. I don't necessarily agree entirely, but I don't see it as that bad either. But I sincerely hope you can actually understand what was said now.
And they are either much slower, old (so you lack encoding/decoding), or more expensive. Only the 1050 Ti is truly competitive; it's the only alternative. And I wrote that CPU recording is too heavy on the CPU. Do you even read what I write? You might as well not reply if you don't.

.... there is no difference between a "mobile GPU" and "desktop GPU" in this regard. That's my whole point. You're talking about chips, not full GPU designs. And chips are always used in both mobile and desktop. There is nothing unique about this happening here - the only unique thing is the specific design characteristics of this die.
Which is literally what makes it a mobile chip. You have no point to make, other than to troll and nitpick.

If you look at GPU sales in that same period, you'll see that that Ryzen mindshare essentially hasn't translated into Radeon mindshare at all - despite AMD becoming much more competitive in GPUs in the intervening period, their market share has been stagnant. And, of course, the RX 5700 XT debacle was in the middle of this, which definitely soured broad opinions on Radeon GPUs.
Oh well.

Because most 1050 Ti stock is likely produced years ago, especially the silicon. And, of course, even if it's brand new, producing a 1050 Ti die on Samsung 14nm is much cheaper than producing a Navi 24 die on TSMC 6nm, even if other materials costs are probably similar-ish.
Wasn't their production restarted? Even the 730 made a legendary comeback.

And, of course, literally every single design cost for the 1050 Ti is long since amortized, which has a major impact on margins. If you have two entirely identical products, where one is brand-new, and the other has been in production for 3,4,5,6 years? The new one has to pay for materials, production costs, marketing costs, board design costs, silicon tape-out and QC costs, driver development costs, and more. The older one? Materials and production costs - everything else is long since paid off (though it might theoretically still be marketed). Drivers are likely still in development (hopefully!), but development costs will most likely be much lower due to driver maturity and developer familiarity with the hardware. There are good reasons why older hardware is more affordable than newer hardware, beyond just inflation.
Even more reason not to make e-waste like the RX 6400, then.

already posted in this thread


40%

idk what site that is you posted

The TPU database literally shows no difference. Perhaps HWUB's test just had more RX 6400-friendly games. And your own source only shows about a 30% difference; no need to exaggerate it.

Every fairly modern CPU can do it at relatively low usage. Needing an AV1-decode-capable GPU for watching 4K YouTube on a PC with a sh*t CPU is a need that you invented.
Because people haven't been doing this with Core 2 Duo machines. You are talking nonsense. Wanna watch Netflix, YouTube, Vimeo, or Twitch? You need AV1. One day your sh*t i7 won't cut it anymore, and you will want cheap shit that can decode, not crap like the RX 6400.

If you need that feature, fair enough. I just don't think many people do.
You would be surprised by how common that need is.


Not really. A three-years-older Core i3-4330 can beat it with ease.
lol, UserBenchmark; they still use the Athlon 64 logo for AM3+ chips. Even if, hypothetically, UserBenchmark were any good, a 26% difference is definitely not beating it with ease. More like beating it, but still quite close.

I wasn't comparing it to Atoms. I was merely stating that you're holding the Athlon X4 in too high regard.
It's just an example, pretty close to the performance an old computer might have. Ignore it if you will.

What are you talking about? The 710 can do MPEG-1, MPEG-2, VC-1 and H.264 decode. The 6400 / 6500 XT can do all that, plus H.265. The only thing they can't decode is AV1 - and neither can the 710, by the way.
So what? My point was that back in the day, the lowest of the low cards had all the decoding/encoding capabilities for a low price. I'm not saying the GT 710 is superior to your RX 6400. What a way to miss the entire point.


I see what you mean, and I kind of agree with the sentiment.
It's only that the 6400 and 6500 XT aren't the only symptoms of this "wild capitalism". I don't think there's any GPU on the market that sells for a reasonable price at the moment. I mean, what other options do you have? A 1030 for £100? A 1050 Ti for £150-200? Or a used 1650 from ebay for £250-300? Hell no! Ampere doesn't even exist in this range. I'd much rather buy something new with warranty for this price.
The RX 6600 is the only "value" out there. It has full decoding/encoding and is good value, but yeah, 400 EUR. At the lower end, there are 1050 Ti, T600, and T1000 deals; not great, but better than the RX 6400. To be fair, the RX 6600 is the only card today that doesn't suck.


Because 1. it actually costs more than the 6400, 2. not being a consumer card, it's a bit problematic to find one, 3. the 6400 isn't slower than the 1050 Ti, and 4. I have no intention to overclock. It's going into a home theatre PC with a Ryzen 3 in it, so its only job will be to put a 4K 60 Hz image on my TV through HDMI. Being capable of some light gaming is only a plus.
I dunno about you, but I can buy one at any store. If you can find it, then there's no argument for the RX 6400; the T600 is better.
 
Joined
Nov 17, 2016
Messages
152 (0.05/day)
The TPU database literally shows no difference. Perhaps HWUB's test just had more RX 6400-friendly games. And your own source only shows about a 30% difference; no need to exaggerate it.

?

The TPU review shows a 3% difference to the 1650.


The 1650 is 35% faster than a 1050 Ti.


My source shows the 1050 Ti at 43 fps, and the RX 6400 at 60 fps.

60/43 = 139.53% = 40% faster.

The database you refer to is just done in a spreadsheet or something. Not game tested.

It seems clear that the 1650, when it came out, was substantially faster than the 1050 Ti. The 6400 has the performance of a 1650 (or 2-3 fps less).

I don't think this is that complicated.

On PCIe 3, sure, it gets closer to the 1050 Ti, especially in the 1% lows. But nowhere near on PCIe 4.
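Putting those numbers together (a rough sanity check in Python; the chained estimate comes out a bit lower than HWUB's direct measurement, but both are far from "no difference"):

[CODE]
# Chaining the two relative results through the 1650 as a common
# reference, then comparing against HWUB's directly measured averages.
r_6400_vs_1650 = 0.97      # TPU: the 6400 is ~3% behind the 1650
r_1650_vs_1050ti = 1.35    # the 1650 is ~35% ahead of the 1050 Ti

print(f"chained estimate: {r_6400_vs_1650 * r_1650_vs_1050ti:.2f}x the 1050 Ti")  # ~1.31x
print(f"direct (HWUB):    {60 / 43:.2f}x the 1050 Ti")                            # ~1.40x
[/CODE]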
 
Joined
May 2, 2017
Messages
7,762 (2.79/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Whatever then, I know terminology well. No need to be annoying.
Your arguments show something else. And if you find people countering your arguments "annoying", that's on you, not me.
Because that used to be how laptop cards were connected. As far as I know, there was an MXM 1060, so it's quite recent stuff
MXM had its heyday around 2010, with ever-dwindling use since then. Also, please note that the 1060 launched 6 years ago. That's quite a while - and even back then it was exceedingly rare.
and because Alibaba specials used MXM-to-PCIe adapters to basically create what the RX 6400 and RX 6500 XT are. Conceptually the same shit.
But that's the thing: they aren't whatsoever. Not even close. I'm arguing against you saying this because it is utter and complete nonsense.

Again with the spoon-feeding:
- On the one hand you have a GPU die, on a package. It has specifications and requirements around which a PCB is made to hold its required ancillary components - RAM, VRMs, display pipeline processing, etc., and through which traces are run to components and connectors.
- On the other hand, you have a pre-made mobile GPU board, complete with RAM, VRMs, ancillary components, and an interface (MXM, i.e. mobile PCIe x8 with some extras), to which an adapter board is made taking the display signals sent over the interface and making them into outputs, providing a power input, and running traces from the MXM slot to a PCIe slot.

Now, which of these is the most apt comparison to the RX 6500 XT and 6400? The former. Because it is exactly what they are. Conceptually they have nothing in common with an MXM adapter+MXM GPU. Nothing at all. The made-for-mobile part here is the design of the silicon itself, which is mainly a way of explaining and understanding its strange design tradeoffs. That doesn't mean that AMD didn't plan desktop implementations from the get-go - they clearly did, as they have come to market much faster than the mobile variants. But the design has some tell-tale signs towards being mainly designed for pairing with AMD's 6000-series APUs in a laptop.
This is the 6400; a watt or two of difference is nothing.
I'm not talking about this specific implementation, I'm talking about the rationale behind designing the Navi 24 die as weirdly as they did. I'm not saying it makes sense, I'm saying that this is the likely explanation.
And that shows that you don't save anything. Those connectors are in the milliwatt range.
Connectors, PHYs, controllers, and more, it all adds up. And in a strictly made-for-purpose design, such cuts are sometimes made. Again: not saying it makes sense in a larger context, only trying to understand the rationales surrounding this design.
lol I can saw off that connector, dude, there's no real design there to be done. You can literally change it in Paint and it will work.
.... do you think PCBs are designed in MSPaint?

And yes, obviously you can cut it off. Depending on your skills and tools, that might be faster than doing this properly in the design phase, especially as this will then entail additional QC.
And they are either much slower, old (so you lack encoding/decoding), or more expensive. Only the 1050 Ti is truly competitive; it's the only alternative. And I wrote that CPU recording is too heavy on the CPU. Do you even read what I write? You might as well not reply if you don't.
... so now there aren't that many options? Because, to refresh your memory, this whole branch of the discussion started out with someone paraphrasing you, saying "You basically said that almost every modern graphics card except for the 6400 and 6500 XT has some kind of video encoder in it, which is true. How many more options do you need?" So ... is this bad because it's much worse than everything else, or is it bad because it's failing to provide a much-needed, missing function in this market segment? It's one or the other, as those options are mutually exclusive.
Which is literally what makes it a mobile chip. You have no point to make, other than to troll and nitpick.
No. On a silicon level, for PC hardware, there is no such thing as a "mobile chip". You can say it's a mobile-first design, you can say it prioritizes mobile-friendly features, but "mobile chip" literally doesn't work, as it excludes non-mobile use cases. There are no PC silicon manufacturers (CPUs, APUs, GPUs) who don't implement their silicon in both mobile and desktop versions. The same silicon. Navi 24 is a mobile-first design, quite clearly. It is not a "mobile chip". And this isn't nit-picking, as the difference between the two is meaningful and entirely undermines your arguments.
That's the most sensible response to being proven wrong I've seen from you in a while. Thanks!
Wasn't their production restarted? Even the 730 made a legendary comeback.
They might have been, but what you're quoting already accounts for that.
Even more reason not to make e-waste like the RX 6400, then.
No, AMD should just have made a better die design. It's sad, really - this die has massive potential, but it's flawed in some really bad ways.
The TPU database literally shows no difference. Perhaps HWUB's test just had more RX 6400-friendly games. And your own source only shows about a 30% difference; no need to exaggerate it.
There's something wrong with the TPU DB entry there, as it doesn't align with TPU's own benchmark results - they show it ~matching the 1650, yet the database shows the 1650 as 22% faster. The entry is wrong, somehow.

Also, it's kind of ...odd? to reference the database result rather than actual test results when you're in the comments thread for that review. Just saying.
 
Joined
Jan 14, 2019
Messages
12,476 (5.78/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Because people haven't been doing this with Core 2 Duo machines. You are talking nonsense. Wanna watch Netflix, YouTube, Vimeo, or Twitch? You need AV1. One day your sh*t i7 won't cut it anymore, and you will want cheap shit that can decode, not crap like the RX 6400.
Which sh*t i7 (I've got two)? I've just watched a 4K YouTube video on my 4765T with integrated graphics (HD 4600) without any lag or dropped frames. It's a 9-year-old 35 W CPU, mind you. If this thing can do it, then anything above an i5-2500 can do it, and you can literally pick one up for pennies.

lol, UserBenchmark; they still use the Athlon 64 logo for AM3+ chips. Even if, hypothetically, UserBenchmark were any good, a 26% difference is definitely not beating it with ease. More like beating it, but still quite close.
That "quite close" i3 is a lower-mid-tier CPU that was released 3 years before the Athlon X4 and still beats it.

It's just an example, pretty close to the performance an old computer might have. Ignore it if you will.
What performance? FM2 was never competitive against contemporary Intel CPUs.

So what? My point was that back in the day, the lowest of the low cards had all the decoding/encoding capabilities for a low price. I'm not saying the GT 710 is superior to your RX 6400. What a way to miss the entire point.
I didn't miss your point. I only stated that it's irrelevant.

The RX 6600 is the only "value" out there. It has full decoding/encoding and is good value, but yeah, 400 EUR. At the lower end, there are 1050 Ti, T600, and T1000 deals; not great, but better than the RX 6400. To be fair, the RX 6600 is the only card today that doesn't suck.
I agree about the 6600, but I don't agree about the Nvidia ones you mentioned, as they are too expensive here in the UK. The 1050 Ti is selling at about the same price as the 6400, which is quite frankly a rip-off. Quadros aren't only expensive, but hard to find, too.

I dunno about you, but I can buy one at any store. If you can find it, then there's no argument for the RX 6400; the T600 is better.
I saw it at one store on a "bought to order" basis a while ago, but it was just shy of £250 or so. The 6400 for £160 is a much better deal.
 
Joined
Feb 25, 2018
Messages
4 (0.00/day)
Interesting discussion. I agree with people that the lack of a video encoder is bad. I also don't like the PCIe 4.0 x4 limitation. Prices are not the best.

But I can't agree that the card is bad just because it's slow. It is not far from a 1050, 1060, or 470; why should that be bad?

Fast cards overtake past generations; performance cards catch up with past generations, and so on. If someone is not satisfied with this performance, they can pay more.

Why would $1500 cards exist if they were the same as $150 cards? The less you pay, the more you sacrifice. First we give up 4K, then 2K, then Ultra, and so we gradually reach low 1080p at the lowest price and minimum consumption.

You can say that we have already seen this with the 1060, yes, but the people who buy a 1060 and a 6400 do not overlap. If you ever bought a 1060, your goal is at least a 3050 or 3060. Anyone who buys a 6400 as a gaming solution used to buy a GT 620, 730, or similar cards. If there are still such players, they have something to replace, and the performance gain will be large.
 

Trov

New Member
Joined
Apr 26, 2022
Messages
5 (0.01/day)
Why not a Quadro T600? It's like a GTX 1650 LE, but low-profile, and it costs less than the 6400. And since the 6400 is slower than the 1050 Ti
Huh?
Every part of that statement is false.

- The T600, at least in the US, is selling for $250+, which is nearly $100 more than the RX 6400.
- The T2000 is equivalent to the GTX 1650. The T1000 is downclocked from that, and the T600 has fewer cores still. The T600 performs nearly the same as a 1050 Ti.
- Where are you seeing that it's slower than the 1050 Ti? A post just a few pages before yours shows that the RX 6400 is faster than the 1650 in most cases.

It is only slower than the 1050 Ti in two specific games (Doom Eternal and Rainbow Six Siege), and only on PCIe 3.0. It nearly matches or beats the 1650 in all other cases.

You also compare it to the 1050 Ti's price... are you comparing specifically to low-profile 1050 Ti editions? In the US, those are also going for $250 or more.


My XFX RX 6400 arrives tomorrow. I will install it in my Lenovo ThinkStation P330 Tiny and compare it to a Quadro T600 and a Quadro T1000. It is a PCIe 3.0 x8 system with an 8th-gen i5. I will also be comparing thermals and fan noise at various loads (both my T600 and T1000 happily reach their 83°C throttling temp). The TechPowerUp review mentions great thermals and silent fans, but their card had a dual-slot heatsink on it, so I will evaluate the single-slot cooling version. Lastly, I will also give Linux performance a try, at least for the strange outlier of Doom Eternal running notably worse on the RX 6400. I am curious whether it is a driver issue; AMD famously has very good Linux drivers, so it is worth a shot.

I also have an OptiPlex 7010 SFF with an Ivy Bridge i7 in it, to test the card's suitability as a "throw it in an old off-lease $50 PC" option, and to see how it stacks up on that outdated machine vs those two Quadros and a low-profile GTX 1650.


I had assembled the OptiPlex + 1650 at a time when it was possible to get both for a total of less than $200, and used it as an HTPC that could also play games. Years later, there is still no good successor to the low-profile 1650 in a viable price range (so the RTX A2000 for $800+ is out of the question).
That's why I got the P330 Tiny when it came up as a good deal: with the Turing Quadros, single-slot low-profile GPU tech looked like it had advanced enough for that small 1L PC to reach and even beat that OptiPlex in performance. Unfortunately, I bought the Quadros just a week before the RX 6400 launched for $100 less than the T600 and $250 less than the T1000. If the RX 6400 can keep its thermals in check, I think it will handily prove to be a better option than both of those. Since this machine's 8th-gen Intel has QuickSync, I don't think I am going to mind the missing encoders, even for HTPC purposes.
 