
AMD Radeon RX 6400

Ruru

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
12,829 (2.94/day)
Location
Jyväskylä, Finland
System Name 4K-gaming / media-PC
Processor AMD Ryzen 7 5800X / Intel Core i7-6700K
Motherboard Asus ROG Crosshair VII Hero / Asus Z170-A
Cooling Arctic Freezer 50 / Thermaltake Contac 21
Memory 32GB DDR4-3466 / 16GB DDR4-3000
Video Card(s) RTX 3080 10GB / RX 6700 XT
Storage 3.3TB of SSDs / several small SSDs
Display(s) Acer 27" 4K120 IPS + Lenovo 32" 4K60 IPS
Case Corsair 4000D AF White / DeepCool CC560 WH
Audio Device(s) Creative Omni BT speaker
Power Supply EVGA G2 750W / Fractal ION Gold 550W
Mouse Logitech MX518 / Logitech G400s
Keyboard Roccat Vulcan 121 AIMO / NOS C450 Mini Pro
VR HMD Oculus Rift CV1
Software Windows 11 Pro / Windows 11 Pro
Benchmark Scores They run Crysis
ReLive is AMD's streaming/recording feature in Radeon Software.
Ah. Never saw any need for that feature as I've always used Afterburner for recording and OBS back when I streamed.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Ah. Never saw any need for that feature as I've always used Afterburner for recording and OBS back when I streamed.
Wait, Afterburner has a recording feature? Isn't that just an OC tool?
 
Joined
Oct 27, 2020
Messages
791 (0.53/day)
AMD offered me a 5700X and a 5600 a while ago and wanted to get back to me as soon as they had the tracking number. That was two weeks ago.
If I remember correctly, you said in a previous post that you live in Germany.
Isn't Germany supposed to be a highly structured country with better organization than many other European countries?
Or is this just the fault of AMD's German office?
Just asking; the most memorable thing I remember about Germany is Norm Macdonald's comedy sketch:
jk
 
Joined
May 8, 2021
Messages
1,978 (1.52/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
I don't even know what that is, so I doubt I would need that feature.
It's AMD's version of ShadowPlay: a screen recording/streaming function.
 
Joined
May 21, 2009
Messages
270 (0.05/day)
Processor AMD Ryzen 5 4600G @4300MHz
Motherboard MSI B550-Pro VC
Cooling Scythe Mugen 5 Black Edition
Memory 16GB DDR4 4133MHz Dual Channel
Video Card(s) IGP AMD Vega 7 Renoir @2300MHz (8GB Shared memory)
Storage 256GB NVMe PCI-E 3.0 - 6TB HDD - 4TB HDD
Display(s) Samsung SyncMaster T22B350
Software Xubuntu 24.04 LTS x64 + Windows 10 x64
Wait, Afterburner has a recording feature? Isn't that just an OC tool?
Yeah, it's been able to record too since a few versions ago.

In my case I use NVENC with FFmpeg 4.3 on Xubuntu 22.04; other options could be SimpleScreenRecorder, Vokoscreen, or Kazam, or OBS as you said.
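For anyone who wants a concrete starting point, here's roughly what that boils down to, wrapped in Python for clarity. This is only a sketch: it assumes ffmpeg is on your PATH and was built with NVENC support, that you're on an X11 session at display :0.0, and that you capture a 1920x1080 area.

Python:
import subprocess

# Capture the X11 desktop and encode it on the GPU with NVENC via FFmpeg.
subprocess.run([
    "ffmpeg",
    "-f", "x11grab",             # read frames from the X11 display
    "-framerate", "60",
    "-video_size", "1920x1080",  # capture area (assumption; match your desktop)
    "-i", ":0.0",                # display to grab (assumption)
    "-c:v", "h264_nvenc",        # hardware H.264 encoder on NVIDIA GPUs
    "-b:v", "8M",                # target bitrate
    "recording.mp4",
], check=True)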

:)
 
Joined
May 8, 2021
Messages
1,978 (1.52/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
This needs pointing out: you know that GPU dice are used essentially universally across both mobile and desktop segments, right? That pretty much every consumer GPU die AMD and Nvidia have made for several generations has both mobile and desktop variants? And no, this isn't akin to a PCIe MXM adapter - it's a dGPU, with the GPU package soldered to a made-for-purpose PCB.
Not really. AMD's APUs, especially the mobile ones, are cut-down versions and not similar to desktop chips. Meanwhile, Intel basically lowers the TDP and calls that a mobile chip. nVidia used to rebrand their mobile chips, use different dies, and cripple GPUs in other ways, so they weren't similar to desktop parts.


Imagine they just integrated the adapter part into the board so you literally can't see the MXM card. BTW MXM is dead, but they still use some kind of interconnect for mobile chips. It's that, converted for desktop usage, but you can clearly see that the GPU itself is a mobile chip due to various bizarre limitations.

Just because the GPU die is purpose-built for mobile usage doesn't make the dGPU PCBs anything other than regular old dGPU PCBs. Also, PCB design for a GPU this simple, with this little I/O and this low power consumption is dead simple.
Power consumption is mainly decided by the number of cores and their voltage/clock speed.


As for the PCIe fingers being fully populated: design elements like that are copy-pasted into literally every design; the PCB design software has that layout saved, ready for adding to any design. Removing pins is far more work than just leaving them there but not connected to anything, and the cost of the copper + gold plating is so low, even across thousands of units, to not matter whatsoever.
I think you are overestimating the difficulty. You can literally saw off those pins and the card will work just fine. There were x4 cards in the past and they didn't cost any extra compared to x16 or x1 cards. Motherboards even had x4 slots.
 

Ruru

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
12,829 (2.94/day)
Location
Jyväskylä, Finland
System Name 4K-gaming / media-PC
Processor AMD Ryzen 7 5800X / Intel Core i7-6700K
Motherboard Asus ROG Crosshair VII Hero / Asus Z170-A
Cooling Arctic Freezer 50 / Thermaltake Contac 21
Memory 32GB DDR4-3466 / 16GB DDR4-3000
Video Card(s) RTX 3080 10GB / RX 6700 XT
Storage 3.3TB of SSDs / several small SSDs
Display(s) Acer 27" 4K120 IPS + Lenovo 32" 4K60 IPS
Case Corsair 4000D AF White / DeepCool CC560 WH
Audio Device(s) Creative Omni BT speaker
Power Supply EVGA G2 750W / Fractal ION Gold 550W
Mouse Logitech MX518 / Logitech G400s
Keyboard Roccat Vulcan 121 AIMO / NOS C450 Mini Pro
VR HMD Oculus Rift CV1
Software Windows 11 Pro / Windows 11 Pro
Benchmark Scores They run Crysis
Joined
May 8, 2021
Messages
1,978 (1.52/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Did you actually read my answer?
"You basically said that almost every modern graphics card except for the 6400 and 6500 XT has some kind of video encoder in it, which is true. How many more options do you need?"

Oh, and don't "bruh" me. Thanks.
I read everything; either your English failed or you said the 6400 and 6500 XT have decoding. What else would your question "How many more options do you need?" imply here, in the case of cards that can't record?

I am watching 4K YouTube on my HTPC as I'm typing this and it does not have an AV1-decode-capable GPU.
And so can I, but it's wasteful (it basically makes the CPU work at full blast) and, depending on the system, literally impossible without crazy frame skipping. Decoding popular codecs shouldn't be some "premium" feature. People used to buy MPEG-2 cards in the past; at this rate, we might need VP9 or AV1 cards again, because AMD shits on their customers.
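Before blaming the GPU or the codec, it's worth checking what the local FFmpeg build can actually offload. A quick sketch (it only lists the acceleration APIs the build knows about; whether VP9 or AV1 specifically decode in hardware still depends on the GPU generation):

Python:
import subprocess

# List the hardware acceleration methods (vaapi, cuda, d3d11va, ...) this
# FFmpeg build supports; an empty list means video decode falls back to the CPU.
result = subprocess.run(
    ["ffmpeg", "-hide_banner", "-hwaccels"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)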

I'm not sure. I used to convert lots of videos to be able to watch them on multiple devices (traditional mobile phones, PSP, DVD players...), but I don't think that's the norm anymore. Pretty much any device can play anything nowadays, which means average home users don't really need to convert anything.
Or you have BD rips but not the space, and need to compress them with minimal quality loss without it taking days. That's why you get an encoding-capable GPU, which is basically anything new except the 6400 or 6500 XT.
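As a rough illustration of that workflow, a sketch of a GPU-accelerated re-encode (assumes an NVENC-capable FFmpeg build; the filenames and quality value are placeholders):

Python:
import subprocess

# Re-encode a Blu-ray rip to HEVC on the GPU, leaving the audio untouched.
subprocess.run([
    "ffmpeg", "-i", "bdrip.mkv",    # source file (placeholder name)
    "-c:v", "hevc_nvenc",           # hardware HEVC encoder
    "-rc", "constqp", "-qp", "24",  # constant-quality mode; lower = bigger/better
    "-c:a", "copy",                 # keep the original audio streams
    "compressed.mkv",
], check=True)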

Because that relatively recent hardware in question (Athlon X4) was lower end than the lowest of all ends when it was released - like basically everything on the FM2 platform.
How wrong you are. AMD A4 APUs are the lowest of the low, along with the Athlon X2s. Athlon X4s were mid-range chips, comparable to the Intel i3s of the time. And at the time, the lowest-end new chip was actually the Sempron 140: a single-core, K10-arch, AM3 chip.

Complaining that it doesn't play Youtube is like complaining that your first gen single core Atom takes half an hour to load the Windows desktop. You could go to a computer recycling centre, pick up a Sandy Bridge mobo+CPU combo that was made ten years ago for basically free, and be a lot happier.
It's not even close to Atoms; those things couldn't play YT when they were new. An Athlon X4 can play YT, but at 1080p only. At 1440p, another codec is used and then it drops some frames. I could also use enhanced-h264ify and play everything with the GPU alone, but hypothetically AMD could have made a proper card and I could have just dropped it in and played even 8K perfectly fine. It's just needless e-waste to release gimped cards that are useless.

And obviously, don't buy a 6400 or 6500 XT to play Youtube videos if you're one of the 3 people on the planet who's still using an Athlon X4 and doesn't want to swap it for something equally cheap but miles better.
It's my non-daily machine for light usage and 1080p works fine on it, so I won't upgrade. But people in the past bought GT 710s to make YT playable on, say, Pentium D machines. If that's what the buyer needs and they don't care about gaming at all, then that's a fine purchase. And including full decoding capabilities didn't really cost any extra. If the GT 710 could do it, then why can't the RX 6400? That's just AMD selling gimped e-waste to dumb people. Mark my words, once the GPU shortage ends, AMD will suddenly stop pulling this crap on their GT 710-equivalent cards, they will rub it in to RX 6500 XT customers, and customers will suck it up. AMD is straight up preying on the stupid and careless with such nonsense. And better yet, no, they won't ever give you full decoding capabilities, but will start to market them as a "premium" feature only for 6900-tier cards. So if you ever need those capabilities, you will have to become their "premium" customer. You will be forced to buy overpriced shit with artificial demand. AMD is cashing in during the shortage as much as they can, and no, they aren't hurting because of it; they are making astronomical profits like never before. There's nothing else we can do other than boycott shitty products and pick their competitors' products instead.


"Harvested laptop chips": That's a good theory and I can see where it's coming from (lack of video encoder, PCI-e x4), but where are those laptop GPUs? And if the theory is right, so what?
Nothing much, but it explains why the 6400 and 6500 XT are so bizarrely limited.

"Nerf the de-/encoder": They didn't nerf anything. Navi 24 is a completely new chip. Just like nvidia never nerfed the 16-series by cutting out the RT cores. Those RT cores were never there in the first place.
Nope, that's exactly nerfing. Adapting a pointless laptop chip for desktop usage and branding it as a fully capable 6000-series card is exactly nerfing. And who knows if those Navi 24s truly can't decode; I certainly don't have a microscope for that, but it would be interesting to see. I wouldn't be surprised if they could.

"Toxic capitalism": Making money is the ultimate goal of every for-profit company. You can only do that by cutting costs and increasing prices. AMD designed Navi 24 to be as small as possible with this in mind. I agree that they cut too many corners with the x4 bus, and that the 6500 XT is an inefficient, power-hungry monster of a card for its performance level
That's not exactly what I meant. It's about artificially creating demand and selling low-end poo-poo for a huge premium. That's not making money, it's straight-up daylight robbery. AMD pulled the same crap when they launched the Ryzen 5600X and 5800X with huge premiums while Intel sold 20% slower parts for literally half the price, and they dared to claim that the 5600X was some value wonder. Only to release the 5600 and 5500 once people realized that Intel has some good shiz too. They also intentionally didn't sell any sensible APUs, only the 5600G and 5700G, also for nearly twice what they were actually worth, but fanboys didn't question that and instead bought as many as AMD managed to make. Had they released a 5400G (a hypothetical 4C8T RDNA 2 APU), it would have outsold the 5600G/5700G several times over, but why do that if they can artificially limit their lineup and convince buyers to buy way too overpriced stuff instead? That's exactly why I call this toxic capitalism: goods could be available, but companies don't make them due to lower, but still reasonable, margins. If you look at their financial reports, they made an absolute killing during the shortage and the pandemic, which basically confirms that they had huge mark-ups. That also explains why the RX 6400 and 6500 XT lack features, lack performance, and are overpriced, crappy products. The 6500 XT is literally such poo that the RX 570 is its equivalent, yet the RX 570 was made ages ago, wasn't gimped, and cost way less. Even with inflation included, there's no way the 6400 and 6500 XT need to cost as much as they do. Their mark-up is as high as 40%, if not more.


, but other than that, the 6400 isn't that bad, especially when you consider the competition which is sadly the now £100 GT 1030 or the almost non-existent low profile GTX 1650 for £300 on ebay.
Why not a Quadro T600? It's like a GTX 1650 LE, but low profile, and it costs less than the 6400. And since the 6400 is slower than the 1050 Ti, if you can find a low-profile 1050 Ti, that's literally the same thing, but you can overclock it, record gameplay, stream, and get VP9 and H.265 decode/encode. The 1050 Ti is just better. The 1650 is closer to the 6500 XT, but the real 6500 XT competitor is the 1650 Super.
 
Joined
Dec 26, 2006
Messages
3,842 (0.59/day)
Location
Northern Ontario Canada
Processor Ryzen 5700x
Motherboard Gigabyte X570S Aero G R1.1 BiosF5g
Cooling Noctua NH-C12P SE14 w/ NF-A15 HS-PWM Fan 1500rpm
Memory Micron DDR4-3200 2x32GB D.S. D.R. (CT2K32G4DFD832A)
Video Card(s) AMD RX 6800 - Asus Tuf
Storage Kingston KC3000 1TB & 2TB & 4TB Corsair MP600 Pro LPX
Display(s) LG 27UL550-W (27" 4k)
Case Be Quiet Pure Base 600 (no window)
Audio Device(s) Realtek ALC1220-VB
Power Supply SuperFlower Leadex V Gold Pro 850W ATX Ver2.52
Mouse Mionix Naos Pro
Keyboard Corsair Strafe with browns
Software W10 22H2 Pro x64
Personally, I find the 5700X way more interesting...
Me too, BUT I want to see how the 5500 compares, since it would be a good budget CPU for my son's PC, which is running an old FX-8320 :|
 

Ruru

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
12,829 (2.94/day)
Location
Jyväskylä, Finland
System Name 4K-gaming / media-PC
Processor AMD Ryzen 7 5800X / Intel Core i7-6700K
Motherboard Asus ROG Crosshair VII Hero / Asus Z170-A
Cooling Arctic Freezer 50 / Thermaltake Contac 21
Memory 32GB DDR4-3466 / 16GB DDR4-3000
Video Card(s) RTX 3080 10GB / RX 6700 XT
Storage 3.3TB of SSDs / several small SSDs
Display(s) Acer 27" 4K120 IPS + Lenovo 32" 4K60 IPS
Case Corsair 4000D AF White / DeepCool CC560 WH
Audio Device(s) Creative Omni BT speaker
Power Supply EVGA G2 750W / Fractal ION Gold 550W
Mouse Logitech MX518 / Logitech G400s
Keyboard Roccat Vulcan 121 AIMO / NOS C450 Mini Pro
VR HMD Oculus Rift CV1
Software Windows 11 Pro / Windows 11 Pro
Benchmark Scores They run Crysis
Me too, BUT I want to see how the 5500 compares, since it would be a good budget CPU for my son's PC, which is running an old FX-8320 :|
Yeah, agree there. I just personally want more cores at last (had a B450 & 2600 before my current B550 & 3600). :)
 
Joined
Nov 15, 2020
Messages
920 (0.62/day)
System Name 1. Glasshouse 2. Odin OneEye
Processor 1. Ryzen 9 5900X (manual PBO) 2. Ryzen 9 7900X
Motherboard 1. MSI x570 Tomahawk wifi 2. Gigabyte Aorus Extreme 670E
Cooling 1. Noctua NH D15 Chromax Black 2. Custom Loop 3x360mm (60mm) rads & T30 fans/Aquacomputer NEXT w/b
Memory 1. G Skill Neo 16GBx4 (3600MHz 16/16/16/36) 2. Kingston Fury 16GBx2 DDR5 CL36
Video Card(s) 1. Asus Strix Vega 64 2. Powercolor Liquid Devil 7900XTX
Storage 1. Corsair Force MP600 (1TB) & Sabrent Rocket 4 (2TB) 2. Kingston 3000 (1TB) and Hynix p41 (2TB)
Display(s) 1. Samsung U28E590 10bit 4K@60Hz 2. LG C2 42 inch 10bit 4K@120Hz
Case 1. Corsair Crystal 570X White 2. Cooler Master HAF 700 EVO
Audio Device(s) 1. Creative Speakers 2. Built in LG monitor speakers
Power Supply 1. Corsair RM850x 2. Superflower Titanium 1600W
Mouse 1. Microsoft IntelliMouse Pro (grey) 2. Microsoft IntelliMouse Pro (black)
Keyboard Leopold High End Mechanical
Software Windows 11
Niche. Pricey for what it is.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Not really. AMD's APUs, especially the mobile ones, are cut-down versions and not similar to desktop chips. Meanwhile, Intel basically lowers the TDP and calls that a mobile chip. nVidia used to rebrand their mobile chips, use different dies, and cripple GPUs in other ways, so they weren't similar to desktop parts.
APUs are not GPUs. Nor are Intel CPUs GPUs. Also, what you're describing is (a slight misrepresentation of) how the chip industry has always operated: any chip that can be used for multiple purposes is used for those purposes as long as it makes sense economically. AFAIK, Nvidia has never used "different dies" for mobile chips (outside of a few rare edge cases). Chips are binned during production and different bins are used for different purposes.
Imagine they just integrated the adapter part into the board so you literally can't see the MXM card.
What you are describing is a PCIe dGPU AIC. Literally nothing other than that.
BTW MXM is dead, but they still use some kind of interconnect for mobile chips.
Yes, it's called PCIe. MXM is dead because essentially all dGPU-equipped laptops have the GPU integrated directly into the motherboard.
It's that, converted for desktop usage, but you can clearly see that the GPU itself is a mobile chip due to various bizarre limitations.
Okay, the problem here is that you're mixing up two quite different understandings of "GPU" - the one denoting the die itself, and the one denoting a full implementation including RAM, VRMs, and other ancillary circuitry. It is entirely true that this GPU - the die, and its physical characteristics - is primarily designed for mobile use. I've gone into this at quite some length in both this thread and others. What isn't made for mobile use is its desktop implementations. And, crucially, just because the die is primarily designed for mobile use doesn't make it any kind of special case - as I said above, chips are binned and used for different purposes. That's how the chip industry works.

What people are discussing in terms of implementation, which you keep derailing with this odd "it's like an MXM adapter" nonsense, is the design choices made by AMD when this die was designed, and the tradeoffs inherent to this. When I and others say it was designed for mobile, that means it has a featureset balanced against a specific use case, and has zero overprovisioning for other use cases (mainly desktop, but also potentially others). The board implementation is essentially irrelevant to this, and it doesn't relate in any way to MXM adapters, as - as you point out - MXM is dead, and isn't relevant to how mobile GPUs are implemented today.

So: what we have here are completely normal, entirely regular desktop AIC GPU boards with nothing particularly unique about them, designed around a die that seems to have its design goals set for a highly specific mobile use case with some inherent major drawbacks due to this.
Power consumption is mainly decided by the number of cores and their voltage/clock speed.
Jesus, dude, seriously? Do I need to spoon feed you everything? Yes, that is the main determinant for power consumption. But if your goal is as low power consumption as possible, then you also start cutting other things. Such as memory bandwidth and PCIe, which both consume power - and in a 25W mobile GPU, even a watt or two saved makes a difference. These are of course also cost-cutting measures at the same time. As I've pointed out plenty of times: it's clear that this die is purpose-built for cheap, low power laptops with entry-level gaming performance.
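For reference, the textbook first-order power model makes that trade-off concrete (a standard approximation, not anything AMD has published):

Code:
P_{total} \approx \alpha C_{eff} V^{2} f + P_{static} + P_{I/O}

The first (switching) term is what "number of cores times voltage/clock" captures; P_{I/O} covers things like the PCIe PHYs and memory bus drivers, and it can only shrink by cutting lanes and bus width. A watt or two there is a meaningful fraction of a 25W budget while being noise on a 200W card.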
I think you are overestimating the difficulty. You can literally saw off those pins and the card will work just fine. There were x4 cards in the past and they didn't cost any extra compared to x16 or x1 cards. Motherboards even had x4 slots.
I never said it was difficult. I said it takes time and work to remove them from the design; time and work that costs money while providing no material benefits, and savings so small that they don't matter in the end. Thus the simplest and cheapest solution is not to bother doing so.
I read everything; either your English failed or you said the 6400 and 6500 XT have decoding. What else would your question "How many more options do you need?" imply here, in the case of cards that can't record?
They were literally saying that there are so many other options that having one option without them shouldn't matter. That was pretty easy to understand IMO.
Nothing much, but it explains why the 6400 and 6500 XT are so bizarrely limited.
But you're misusing the term "harvested" here. "Harvested" implies they are chips that failed/were binned too low for some use case. That does not seem to be the case here - the 6500 XT seems to be a fully enabled die, but clearly one that is binned for high clocks rather than low power. The 6400 seems to be a middle-of-the-road bin, with a few CUs fused off. Neither appear "harvested", as there's no major cuts made (4 CUs isn't a lot, and everything else is intact). They're just different versions of the same chip, all equally valid implementations, and none bear the "we'd rather use these somehow than scrap them" characteristic for harvested chips.
That's not exactly what I meant. It's about artificially creating demand and selling low-end poo-poo for a huge premium. That's not making money, it's straight-up daylight robbery. AMD pulled the same crap when they launched the Ryzen 5600X and 5800X with huge premiums while Intel sold 20% slower parts for literally half the price, and they dared to claim that the 5600X was some value wonder. Only to release the 5600 and 5500 once people realized that Intel has some good shiz too. They also intentionally didn't sell any sensible APUs, only the 5600G and 5700G, also for nearly twice what they were actually worth, but fanboys didn't question that and instead bought as many as AMD managed to make. Had they released a 5400G (a hypothetical 4C8T RDNA 2 APU), it would have outsold the 5600G/5700G several times over, but why do that if they can artificially limit their lineup and convince buyers to buy way too overpriced stuff instead? That's exactly why I call this toxic capitalism: goods could be available, but companies don't make them due to lower, but still reasonable, margins. If you look at their financial reports, they made an absolute killing during the shortage and the pandemic, which basically confirms that they had huge mark-ups. That also explains why the RX 6400 and 6500 XT lack features, lack performance, and are overpriced, crappy products. The 6500 XT is literally such poo that the RX 570 is its equivalent, yet the RX 570 was made ages ago, wasn't gimped, and cost way less. Even with inflation included, there's no way the 6400 and 6500 XT need to cost as much as they do. Their mark-up is as high as 40%, if not more.
While I mostly agree with you in principle (though I'd go a lot further than you do here on some points), I think this is not the most suitable case for this critique. Navi 24 is more of a strangely unbalanced design than it is a cash grab - a cash grab would try to lure people in somehow, while this just behaves strangely instead. A cash grab needs to present itself as good value or enticing, which this doesn't do - and AMD doesn't have the GPU market mindshare to really change that. Its high entry price (on the desktop) takes this from bad to worse, but that's universal across the GPU market right now, and in no way unique to these two GPU models. And yes, AMD, just like other major corporations, are in it for profits first and foremost - as a publicly traded US corporation they are legally obligated to do so. This system is an absolute travesty in so many ways, but it plays out in much more nefarious ways than these unbalanced GPU designs.

There are also valid reasons for some cost increases - compared to even two years ago, many raw materials (copper, aluminium, many others) now cost 2-3x what they used to. International shipping has also increased massively, which also affects MSRPs. Neither of those is sufficient to explain the inflated MSRPs of these or other GPUs on the market currently, but they go some way towards explaining them. Another explanation is AIB partners pushing back against scraped-thin margins - many GPU board partners have reportedly had essentially zero margin on their products, with MSRPs previously set so low that they struggled to break even on their own designs. Again, this doesn't necessarily justify increased MSRPs - after all, the GPU makers themselves have very high margins overall, in the 40% range.
 
Joined
Sep 2, 2014
Messages
259 (0.07/day)
Location
Emperor's retreat/Naboo Moenia
System Name Order66
Processor Ryzen 7 3700X
Motherboard Asus TUF GAMING B550-PLUS
Cooling AMD Wraith Prism (BOX-cooler)
Memory 16GB DDR4 Corsair Desktop RAM Vengeance LPX 3200MHz Red
Video Card(s) GeForce RTX 3060Ti
Storage Seagate FireCuda 510 1TB SSD
Display(s) Asus VE228HR
Case Thermaltake Versa C21 RGB
Audio Device(s) onboard Realtek
Power Supply Corsair RM850x
Software Windows10 64bit
@W1zzard very good review, however it lacks 720p results; this card suffers in many games at 1080p.

Ray tracing at this card's level is a joke, but the results are...

Decode and encode capabilities are more useful than fucking joke ray tracing.

:)

I have to disagree on that.
For me, those RT numbers are of extreme importance, since I never forgot David Wang's (AMD senior VP of engineering for RTG) pompous statement during the Turing era.
Back then, AMD didn't have any response to nVIDIA's ray tracing, yet Mr. Wang, apparently in an effort to undermine what nVIDIA had been doing, made the following statement:
“Utilisation of ray tracing games will not proceed unless we can offer ray tracing in all product ranges from low end to high end.” ( https://www.game-debate.com/news/26...until-even-low-end-radeon-gpus-can-support-it )
I never forgot that pompous statement and always waited for AMD to release their RT-capable low-end GPUs.
And... voila!!! The RX 6500 XT & RX 6400 in all their "low-end ray tracing glory", so, according to Mr. Wang's statement, I guess the time has finally :p come for the "utilisation of ray tracing games to proceed"!!
Who doesn't want ray traced games at 10-15 fps :laugh: after all??
I want to apologise in advance to AMD fans, but I always thought this was a "cheap & easy" statement back then, and I don't tend to forget such things.
I had to wait more than 2 years to verify/validate that statement, but the passage of time always comes... Mr. Wang.
So, yeah, as I said in the beginning, those RT numbers are of extreme importance from a historical point of view.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I have to disagree on that.
For me, those RT numbers are of extreme importance, since I never forgot David Wang's (AMD senior VP of engineering for RTG) pompous statement during the Turing era.
Back then, AMD didn't have any response to nVIDIA's ray tracing, yet Mr. Wang, apparently in an effort to undermine what nVIDIA had been doing, made the following statement:

I never forgot that pompous statement and always waited for AMD to release their RT-capable low-end GPUs.
And... voila!!! The RX 6500 XT & RX 6400 in all their "low-end ray tracing glory", so, according to Mr. Wang's statement, I guess the time has finally :p come for the "utilisation of ray tracing games to proceed"!!
Who doesn't want ray traced games at 10-15 fps :laugh: after all??
I want to apologise in advance to AMD fans, but I always thought this was a "cheap & easy" statement back then, and I don't tend to forget such things.
I had to wait more than 2 years to verify/validate that statement, but the passage of time always comes... Mr. Wang.
So, yeah, as I said in the beginning, those RT numbers are of extreme importance from a historical point of view.
So ... you saw an executive make a comment that essentially says "this new feature won't take off until a lot of people have access to it", and that struck you as so strange that you roam forums years later looking for ways to laugh at it? I mean, you do you, but that's a pretty tame statement right there. About as self-evident as "you're likely to get wet when it's raining" or "people will be hungry until they are fed". Is that pompous? I'd say the opposite - it's a plain statement of fact. Does it have a slightly negative tone, in response to a competitor with a new feature? Sure. But so what? It's still true. RT still hasn't taken off after all, and is still pretty rare. These GPUs won't change that - rather, you should change that quote to say "until we can offer good ray tracing in all product ranges". So if anything, the statement itself is too optimistic. Pompous? Not at all. Cheap and easy? I frankly don't know what that means in this context. It's definitely an obvious response to a new, exotic, high-end feature, and doesn't bring anything new or useful to the table. But ... again, who cares? There are so many more valid and important reasons to criticize corporate behaviour and PR nonsense than this right here. AMD has a long, storied history of terrible GPU PR. This isn't even a blip on the radar compared to that.

On the other hand, from a technical perspective, the RT hardware is integrated into the RDNA2 shader core, so unless they were to design an explicitly RT-less subvariant (expensive and unlikely), or add an unnecessary software block, it's just a feature that's there in hardware. It's nowhere near sufficiently powerful for RTRT graphics, but it might be useful for RT spatial audio or other low-intensity tasks (like MS has spoken of for the Xbox). That's a ways out, and likely to be very rare if it ever appears, but having the potential is IMO still better than artificially blocking it off.
 
Joined
Sep 2, 2014
Messages
259 (0.07/day)
Location
Emperor's retreat/Naboo Moenia
System Name Order66
Processor Ryzen 7 3700X
Motherboard Asus TUF GAMING B550-PLUS
Cooling AMD Wraith Prism (BOX-cooler)
Memory 16GB DDR4 Corsair Desktop RAM Vengeance LPX 3200MHz Red
Video Card(s) GeForce RTX 3060Ti
Storage Seagate FireCuda 510 1TB SSD
Display(s) Asus VE228HR
Case Thermaltake Versa C21 RGB
Audio Device(s) onboard Realtek
Power Supply Corsair RM850x
Software Windows10 64bit
So ... you saw an executive make a comment that essentially says "this new feature won't take off until a lot of people have access to it", and that struck you as so strange that you roam forums years later looking for ways to laugh at it? I mean, you do you, but that's a pretty tame statement right there. About as self-evident as "you're likely to get wet when it's raining" or "people will be hungry until they are fed". Is that pompous? I'd say the opposite - it's a plain statement of fact. Does it have a slightly negative tone, in response to a competitor with a new feature? Sure. But so what? It's still true. RT still hasn't taken off after all, and is still pretty rare. These GPUs won't change that - rather, you should change that quote to say "until we can offer good ray tracing in all product ranges". So if anything, the statement itself is too optimistic. Pompous? Not at all. Cheap and easy? I frankly don't know what that means in this context. It's definitely an obvious response to a new, exotic, high-end feature, and doesn't bring anything new or useful to the table. But ... again, who cares? There are so many more valid and important reasons to criticize corporate behaviour and PR nonsense than this right here. AMD has a long, storied history of terrible GPU PR. This isn't even a blip on the radar compared to that.

On the other hand, from a technical perspective, the RT hardware is integrated into the RDNA2 shader core, so unless they were to design an explicitly RT-less subvariant (expensive and unlikely), or add an unnecessary software block, it's just a feature that's there in hardware. It's nowhere near sufficiently powerful for RTRT graphics, but it might be useful for RT spatial audio or other low-intensity tasks (like MS has spoken of for the Xbox). That's a ways out, and likely to be very rare if it ever appears, but having the potential is IMO still better than artificially blocking it off.
First of all, who among AMD tech enthusiasts didn't pay attention to that statement at the time it was made?
If you do some research, you'll see a great number of media outlets reproducing that statement back then, and it wasn't made by a simple employee, but by the Vice President of engineering at Radeon. How many people are above a Vice President of engineering in RTG? (Not AMD in general, but Radeon Technologies, which is right in the area of the topic: GPUs.)
So this statement was made by one of the highest people (if not the highest) inside RTG, which is what matters, since we are not talking CPUs but GPUs.

Also, yes, back then it was a cheap & easy statement, because:
1) It could cut the hype around nVIDIA's ray tracing implementations, implying that AMD cares for gamers and thus will release low-end RT for the masses.
2) It didn't have any immediate cost for AMD, since very few people would bother to remember something said more than 2 years ago.
But as I said, personally I get very (VEEEERY) intrigued when I hear such pompous/low-cost statements, and I patiently wait to see what will happen.
And what happened exactly? 10-15 fps with RT enabled is what happened.
Have you compared the RTX 2060's RT performance vs those 2 Radeon GPUs? It's like day & night, yet nVIDIA was buried by the press back then for their RT performance, even on a somewhat capable GPU such as the RTX 2060.
Based on the RTX 2060's RT performance vs these 2, does anyone think that nVIDIA wasn't capable back then of releasing a similar RT performer to the RX 6400?? Of course they could have.
But they didn't launch such a product; instead they wisely chose to release a GTX line for the low end.

So, I'm asking: now that AMD did something that nVIDIA could easily have done 2 years ago, what does that mean?
That the... "utilisation of ray tracing games is ready to proceed"??
If yes, then this means that the "utilisation of ray tracing games" was ready to proceed from day 1, back in the Turing days, since AMD released a low-end RT product that is several times worse than nVIDIA's RTX 2060.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
First of all, who among AMD tech enthusiasts didn't pay attention to that statement at the time it was made?
If you do some research, you'll see a great number of media outlets reproducing that statement back then, and it wasn't made by a simple employee, but by the Vice President of engineering at Radeon. How many people are above a Vice President of engineering in RTG? (Not AMD in general, but Radeon Technologies, which is right in the area of the topic: GPUs.)
So this statement was made by one of the highest people (if not the highest) inside RTG, which is what matters, since we are not talking CPUs but GPUs.

Also, yes, back then it was a cheap & easy statement, because:
1) It could cut the hype around nVIDIA's ray tracing implementations, implying that AMD cares for gamers and thus will release low-end RT for the masses.
2) It didn't have any immediate cost for AMD, since very few people would bother to remember something said more than 2 years ago.
But as I said, personally I get very (VEEEERY) intrigued when I hear such pompous/low-cost statements, and I patiently wait to see what will happen.
And what happened exactly? 10-15 fps with RT enabled is what happened.
Have you compared the RTX 2060's RT performance vs those 2 Radeon GPUs? It's like day & night, yet nVIDIA was buried by the press back then for their RT performance, even on a somewhat capable GPU such as the RTX 2060.
Based on the RTX 2060's RT performance vs these 2, does anyone think that nVIDIA wasn't capable back then of releasing a similar RT performer to the RX 6400?? Of course they could have.
But they didn't launch such a product; instead they wisely chose to release a GTX line for the low end.

So, I'm asking: now that AMD did something that nVIDIA could easily have done 2 years ago, what does that mean?
That the... "utilisation of ray tracing games is ready to proceed"??
If yes, then this means that the "utilisation of ray tracing games" was ready to proceed from day 1, back in the Turing days, since AMD released a low-end RT product that is several times worse than nVIDIA's RTX 2060.
I still think you're making a mountain out of ... not even a molehill, more like a grain of sand here. "Competitor's exec comments vaguely negatively about the exclusivity of a company's new feature" is ... standard. Expected. Dime-a-dozen. Something that happens every day of the week, everywhere, all the time. You could say that people picked up on it, but ... so what? Fanboys will be fanboys. Press will report on statements made. That's their job, quite literally. That doesn't necessarily make those statements interesting or notable outside of that moment. If what you're looking for is to rile up fanboys, please go on Reddit or somewhere more suited to that, rather than poisoning interesting discussions on the forums. Because pointing this out as if it's somehow unusual or especially worthy of comment is rather absurd.

As for your two numbered points:
1: So what? Who cares? Competitor PR is meant to compete. Counteracting hype surrounding a competing product is expected. Literally everyone does that. Nvidia does that every time AMD adds a new feature, AMD does it every time Nvidia adds a feature. If bog-standard PR behaviour like this bothers you that much, you're better off just not paying attention at all.
2: Again: so what? Would it have mattered more if it through some absurd mechanism cost AMD a lot to say this? No. What you seem to be implying is that AMD should have put their money where their mouth was, which ... isn't that what they've done? RDNA2 overall has decently capable (Turing-level-ish) RTRT support, which clearly wasn't free for AMD. That Navi 24 is too small a die for it to be useful doesn't change the fact that they've clearly invested heavily in making this feature work for them as well.

You've yet to show how this was pompous or anything but stating the obvious. If anything, you're demonstrating that this was a conservative assessment. That's the opposite of being pompous. We now have bottom-to-top RT support, yet it still hasn't taken off. So even what he said was necessary turned out to be too optimistic! This, again, is completely obvious - it takes years and years for even widely supported graphics features to take off, let alone ones that entirely change how graphics work - but you're claiming that this is somehow pompous? Maybe look up that word? He was effectively overly optimistic about the adoption rate of a competitor's new tech. That is hardly pompous.

As for the RTX 2060 comparison: while you're not technically wrong, it's par for the course to be more heavily criticized for base level performance of an exclusive new feature in its first generation than for base level performance in subsequent generations. This is in part due to press attention spans, but also due to new features being expensive (especially to end users), and thus coming with an expectation on a return on investment, while adding said feature to low-end parts in later generations is a lower stakes endeavor. That doesn't make the RT support in Navi 24 any more useful for RTRT graphics, but it also makes the "meh, sure, it has RT support on paper, but don't expect it to be useful" response completely expected and understandable. It would have been exactly the same if AMD were the first to deliver RT and Nvidia then delivered a low-end RT-supporting card the next generation.

So: we have an executive of a competitor trying to counteract hype surrounding a company's new tech (expected), making a so-obvious-it-hurts statement (expected), that in hindsight turned out to even be too optimistic an assessment (perhaps not expected, but certainly not pompous). Is this worth making a fuss over?


I mean, the unusable RT support on these GPUs is really not their biggest problem. Not by a long shot. Even the RTX 3050, RX 6600, and RTX 2060 deliver barely playable RTRT results in most games at 1080p. What would you expect from GPUs with half the resources of a 6600, or less? You could always make an argument saying that AMD ought to have disabled RT support on this because it's not useful in real life, but that would be something entirely different from what you're doing here.
 
Joined
Sep 2, 2014
Messages
259 (0.07/day)
Location
Emperor's retreat/Naboo Moenia
System Name Order66
Processor Ryzen 7 3700X
Motherboard Asus TUF GAMING B550-PLUS
Cooling AMD Wraith Prism (BOX-cooler)
Memory 16GB DDR4 Corsair Desktop RAM Vengeance LPX 3200MHz Red
Video Card(s) GeForce RTX 3060Ti
Storage Seagate FireCuda 510 1TB SSD
Display(s) Asus VE228HR
Case Thermaltake Versa C21 RGB
Audio Device(s) onboard Realtek
Power Supply Corsair RM850x
Software Windows10 64bit
I still think you're making a mountain out of ... not even a molehill, more like a grain of sand here. "Competitor's exec comments vaguely negatively about the exclusivity of a company's new feature" is ... standard. Expected. Dime-a-dozen. Something that happens every day of the week, everywhere, all the time. You could say that people picked up on it, but ... so what? Fanboys will be fanboys. Press will report on statements made. That's their job, quite literally. That doesn't necessarily make those statements interesting or notable outside of that moment. If what you're looking for is to rile up fanboys, please go on Reddit or somewhere more suited to that, rather than poisoning interesting discussions on the forums. Because pointing this out as if it's somehow unusual or especially worthy of comment is rather absurd.

As for your two numbered points:
1: So what? Who cares? Competitor PR is meant to compete. Counteracting hype surrounding a competing product is expected. Literally everyone does that. Nvidia does that every time AMD adds a new feature, AMD does it every time Nvidia adds a feature. If bog-standard PR behaviour like this bothers you that much, you're better off just not paying attention at all.
2: Again: so what? Would it have mattered more if it through some absurd mechanism cost AMD a lot to say this? No. What you seem to be implying is that AMD should have put their money where their mouth was, which ... isn't that what they've done? RDNA2 overall has decently capable (Turing-level-ish) RTRT support, which clearly wasn't free for AMD. That Navi 24 is too small a die for it to be useful doesn't change the fact that they've clearly invested heavily in making this feature work for them as well.

You've yet to show how this was pompous or anything but stating the obvious. If anything, you're demonstrating that this was a conservative assessment. That's the opposite of being pompous. We now have bottom-to-top RT support, yet it still hasn't taken off. So even what he said was necessary turned out to be too optimistic! This, again, is completely obvious - it takes years and years for even widely supported graphics features to take off, let alone ones that entirely change how graphics work - but you're claiming that this is somehow pompous? Maybe look up that word? He was effectively overly optimistic about the adoption rate of a competitor's new tech. That is hardly pompous.

As for the RTX 2060 comparison: while you're not technically wrong, it's par for the course to be more heavily criticized for base level performance of an exclusive new feature in its first generation than for base level performance in subsequent generations. This is in part due to press attention spans, but also due to new features being expensive (especially to end users), and thus coming with an expectation on a return on investment, while adding said feature to low-end parts in later generations is a lower stakes endeavor. That doesn't make the RT support in Navi 24 any more useful for RTRT graphics, but it also makes the "meh, sure, it has RT support on paper, but don't expect it to be useful" response completely expected and understandable. It would have been exactly the same if AMD were the first to deliver RT and Nvidia then delivered a low-end RT-supporting card the next generation.

So: we have an executive of a competitor trying to counteract hype surrounding a company's new tech (expected), making a so-obvious-it-hurts statement (expected), that in hindsight turned out to even be too optimistic an assessment (perhaps not expected, but certainly not pompous). Is this worth making a fuss over?


I mean, the unusable RT support on these GPUs is really not their biggest problem. Not by a long shot. Even the RTX 3050, RX 6600, and RTX 2060 deliver barely playable RTRT results in most games at 1080p. What would you expect from GPUs with half the resources of a 6600, or less? You could always make an argument saying that AMD ought to have disabled RT support on this because it's not useful in real life, but that would be something entirely different from what you're doing here.
I don't expect anything, my friend.
Of course it's obvious that low-end products will face extreme challenges when they are assigned to perform extremely demanding tasks such as ray tracing acceleration.
But what is obvious, apparently, to you or me wasn't obvious to the Vice President of Radeon Technologies Group who made such a statement.
So now I'm simply commenting that the statement was completely invalid, since AMD did release those low-end RT GPUs, yet it's obvious from their performance that such products are unsuitable for promoting ray traced gaming; thus, a completely invalid statement by Mr. Wang.
You are wondering why I'm reacting:
Well, this statement made everyone react, since it was reproduced everywhere back then; the only difference is that I kept remembering it in order to see whether it would actually be validated or was just hype of the moment meant to undermine nVIDIA's "mid-range RT-capable GPUs".
Anyone who thinks that this statement was meaningless should have said so at the time it was made, but no one undermined it back then. I'm simply one of the very few who waited 2 years to actually check the validity of the statement, and you are criticizing me for doing such a thing??

Unfortunately, as I said, I get very intrigued by such statements, thus I'm incapable of forgetting them.
I pay attention to them during their hype, but I also pay attention when the hype has ended and the time of truth has come...
 
Joined
May 8, 2021
Messages
1,978 (1.52/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Okay, the problem here is that you're mixing up two quite different understandings of "GPU" - the one denoting the die itself, and the one denoting a full implementation including RAM, VRMs, and other ancillary circuitry. It is entirely true that this GPU - the die, and its physical characteristics - is primarily designed for mobile use. I've gone into this at quite some length in both this thread and others. What isn't made for mobile use is its desktop implementations. And, crucially, just because the die is primarily designed for mobile use doesn't make it any kind of special case - as I said above, chips are binned and used for different purposes. That's how the chip industry works.
Did I even claim that the card is a GPU?

What people are discussing in terms of implementation, which you keep derailing with this odd "it's like an MXM adapter" nonsense, is the design choices made by AMD when this die was designed, and the tradeoffs inherent to this. When I and others say it was designed for mobile, that means it has a featureset balanced against a specific use case, and has zero overprovisioning for other use cases (mainly desktop, but also potentially others). The board implementation is essentially irrelevant to this, and it doesn't relate in any way to MXM adapters, as - as you point out - MXM is dead, and isn't relevant to how mobile GPUs are implemented today.

So: what we have here are completely normal, entirely regular desktop AIC GPU boards with nothing particularly unique about them, designed around a die that seems to have its design goals set for a highly specific mobile use case with some inherent major drawbacks due to this.
Which was my main point.

Jesus, dude, seriously? Do I need to spoon feed you everything? Yes, that is the main determinant for power consumption. But if your goal is as low power consumption as possible, then you also start cutting other things. Such as memory bandwidth and PCIe, which both consume power - and in a 25W mobile GPU, even a watt or two saved makes a difference. These are of course also cost-cutting measures at the same time. As I've pointed out plenty of times: it's clear that this die is purpose-built for cheap, low power laptops with entry-level gaming performance.
A few connectors barely consume a couple of watts. The savings are nil. It's just a limitation of a laptop-oriented GPU, more than anything else. There were GT 710s with 4 video connectors, and those cards had literally the same TDP.

I never said it was difficult. I said it takes time and work to remove them from the design; time and work that costs money while providing no material benefits, and savings so small that they don't matter in the end. Thus the simplest and cheapest solution is not to bother doing so.
Considering that they removed the art from the box and did some other moronic things, it would make sense to give it an x4 connector. I doubt there wouldn't be savings, but it's just AMD either being lazy or intentionally misleading. Hell, there were x1 GT 710s and x4 GT 710s.

They were literally saying that there are so many other options that having one option without them shouldn't matter. That was pretty easy to understand IMO.
Except the other options are too CPU-heavy on lower-end hardware, and nobody will pay for a capture card if their GPU budget is at RX 6400 level. The loss of ReLive is really bad.

But you're misusing the term "harvested" here. "Harvested" implies they are chips that failed/were binned too low for some use case. That does not seem to be the case here - the 6500 XT seems to be a fully enabled die, but clearly one that is binned for high clocks rather than low power. The 6400 seems to be a middle-of-the-road bin, with a few CUs fused off. Neither appear "harvested", as there's no major cuts made (4 CUs isn't a lot, and everything else is intact). They're just different versions of the same chip, all equally valid implementations, and none bear the "we'd rather use these somehow than scrap them" characteristic for harvested chips.
I don't see how using a mobile GPU for desktop doesn't count as "harvesting". In normal times, a low-end desktop GPU would be made without having to resort to harvesting mobile chips.

While I mostly agree with you in principle (though I'd go a lot further than you do here on some points), I think this is not the most suitable case for this critique. Navi 24 is more of a strangely unbalanced design than it is a cash grab - a cash grab would try to lure people in somehow, while this just behaves strangely instead. A cash grab needs to present itself as good value or enticing, which this doesn't do - and AMD doesn't have the GPU market mindshare to really change that. Its high entry price (on the desktop) takes this from bad to worse, but that's universal across the GPU market right now, and in no way unique to these two GPU models. And yes, AMD, just like other major corporations, are in it for profits first and foremost - as a publicly traded US corporation they are legally obligated to do so. This system is an absolute travesty in so many ways, but it plays out in much more nefarious ways than these unbalanced GPU designs.
I disagree; Ryzen created a lot of mindshare among "enthusiasts" (at least the ones who claim to be). Obviously not as much as nVidia or Intel have, but they have it. And despite moderately poor press, they are banking on their customers not noticing the cut-down features. Will it work? I don't know, but I see how AMD is being somewhat misleading to people who aren't aware of the missing features and crippled hardware.

There are also valid reasons for some cost increases - compared to even two years ago, many raw materials (copper, aluminium, many others) now cost 2-3x what they used to. International shipping has also increased massively, which also affects MSRPs. Neither of those is sufficient to explain the inflated MSRPs of these or other GPUs currently on the market, but they go some way towards explaining them. Another explanation is AIB partners resisting scraped-thin margins - many GPU board partners have reportedly had essentially zero margin on their products, with MSRPs previously being so low that they struggled to break even on their own designs. Again, this doesn't necessarily justify increased MSRPs - after all, the GPU makers themselves have very high margins overall, in the 40% range.
Then how come the GTX 1050 Ti costs the same? We know that it's better, doesn't lack features, and has superior decoder/encoder capabilities, yet it's made new for the same price. It might be just another weirdness of the Lithuanian tech market, but I would rather get a 1050 Ti instead of an RX 6400. And regarding the 1050 Ti: they used to sell for 140-170 EUR, and now they sell for ~200 EUR. Despite Lithuania's 15+% yearly inflation, it seems that the actual cost of manufacturing barely increased - basically 10% or less, which is nothing compared to the material price increases, at least those that the media talks about. If material prices fed into cards in full, a 1050 Ti would be 400-600 EUR plus Lithuania's own inflation. That's clearly not the case. I guess the relationship between material prices and end-product prices (graphics cards in this case) is more complicated.
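As a rough sanity check on that arithmetic, here's a minimal sketch (the two-year span between the old and new prices is an assumption for illustration, not something stated here). The observed nominal increase lands in the same ballpark as compounded general inflation, which is consistent with little real increase in manufacturing cost:

```python
# Quick sanity check of the price arithmetic above. The two-year span
# between the old and new prices is an assumption for illustration.
old_low, old_high = 140, 170     # EUR, earlier GTX 1050 Ti street prices
new_price = 200                  # EUR, current street price
years, yearly_inflation = 2, 0.15

increase_from_high = (new_price - old_high) / old_high   # ~18%
increase_from_low = (new_price - old_low) / old_low      # ~43%
compounded = (1 + yearly_inflation) ** years - 1         # ~32% over two years

print(f"Observed increase: {increase_from_high:.0%} to {increase_from_low:.0%}")
print(f"Compounded 15%/yr over {years} years: {compounded:.0%}")
```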
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I don't expect anything, my friend,
Well that's a large part of your problem right there, and it goes a long way towards explaining why you're so surprised by something that isn't otherwise worthy of note.
Of course it's obvious that low-end products will face extreme challenges when they are assigned to perform extremely demanding tasks such as ray tracing acceleration.
But what is obvious - apparently - to you or me wasn't obvious to the Vice President of the Radeon Technologies Group, who made such a statement.
Wasn't it? What he said was that RTRT won't take off until we have bottom-to-top support. We're literally days away from the price of entry hitting €160. And RTRT still hasn't taken off. So while on the one hand it's too early to judge - it's not like these low-end cards have had the time to make any kind of impact - on the other, he's also right, or at least going in the right direction. Remember, the relevant spectrum of opinions at the time this was said ranged from Nvidia's "RTRT is here today and it's amazing" to this "RTRT won't really take off until we all have it". Which of those is more wrong? The former. Period. That doesn't mean that RTRT doesn't exist, or isn't at times amazing, but ... it's overall still a niche thing. It hasn't taken off. So the only thing wrong with that statement is not being sufficiently pessimistic.
So now I'm simply commenting that this statement was completely invalid: AMD did release those low-end RT GPUs, yet it's obvious from their performance that such products are unsuitable for promoting ray-traced gaming; thus, a completely invalid statement by Mr. Wang.
But that isn't what he said, at least if you quoted him correctly. All he said is that he didn't think RT would "proceed" (a wording I'd like to see a second opinion on the translation of, as it's rather weird, and the original source of the interview is in Japanese - that looks like poor-quality machine translation to me) until it could be offered across the board. In the source you provided, the statement is also directly linked to AMD's refusal to enable software-only RTRT on pre-RDNA2 GPUs (which Nvidia did for 1000-series GPUs, and which nobody ever used beyond benchmarks due to terrible performance). Given that he's making this statement in opposition to something, that something is the assumption that RT adoption would proceed regardless. Yet ... it hasn't. So, while he might have been wrong about the time frame (though technically it's far too early to judge, even if I don't believe it's accurate either), the statement wasn't wrong in and of itself. His position is more accurate than the opposing one.
You are wondering why I'm reacting:
Well, this statement made everyone react, since it was reproduced everywhere back then; the only difference is that I kept remembering it in order to see whether it would actually be validated, or whether it was just hype of the moment meant to undermine nVIDIA's "mid-range RT-capable GPUs".
... and? Is it not validated? It's only very recently that RTRT support has been available across the budget range. Has it taken off yet? No. So ... he was either right, or pointing in the right direction, but too optimistic. I mean, isn't the failure of RT taking off until now more of a refutation of Nvidia's early hype than this statement?
Anyone who thinks that this statement was meaningless should have said so at the time it was made, but no one undermined that statement back then. I'm simply one of the very few who waited two years to actually check the validity of such a statement, and you are criticizing me for doing such a thing??
Because making a cautionary statement about a new tech is, frankly, a sensible thing to do. We should all be doing so, all the time, every time someone promises us a revolutionary new feature. Being cautious and expecting it to take time before it takes off (if ever!) is solely a good thing.
Unfortunately, as I said, I get very intrigued by such statements, thus I'm incapable of forgetting them.
I pay attention to them during their hype, but I also pay attention when the hype has ended and the time of truth has come...
The problem is, you're entirely misrepresenting the direction of the statement he's made here. You're presenting it as if he said "once RTRT is available across the range, it will take off". That's a positive statement, a statement making a claim to something necessarily coming true at a given point. What he said was, to paraphrase slightly, "RTRT won't take off until it's available across the range". That's not a positive statement, it's a cautionary statement, saying that something will (might) only come true if certain preconditions are met. It is also a statement, crucially, made in the context of another actor effectively stating that "RTRT is here right now and is revolutionizing graphics". That RTRT is now available across the board, yet still hasn't taken off? That's proof that his statement didn't go far enough. He wasn't wrong, he was too cautious, too optimistic of things working out - effectively too close to Nvidia's stance!

Put it this way: on a scale from 0 to 100, where 0 is "RTRT will never be the dominant graphics paradigm" and 100 is "RTRT arrived with the arrival of Turing, and revolutionized real-time graphics", this statement is, at best, at 50 - though arguably it's more like a 70-80, as it's implicitly promising wide RTRT adoption within a relatively close time frame - just not right then. Reality, the truth? Maybe 20? 30? Predicting the future is impossible outside of chance and broad statistics, but there's no indication currently that RTRT will become the dominant graphics paradigm any time soon. So if anything, the statement you're referencing was pointing in the right direction compared to its contemporaries (considering Nvidia were firmly at 100), but too optimistic still.

Also, speaking of hype, what was the most hyped up thing at that time - Nvidia's Turing RT support, or this statement? I'd expect the former to have had anywhere from 100x to 100 000x the attention, both in press and enthusiast discussions. Considering that, it seems you aren't paying attention to when the hype ends and the truth becomes visible? Because the truth is that RTRT still isn't more than a niche.
 
Joined
Jan 14, 2019
Messages
12,354 (5.75/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
I don't expect anything, my friend,
Of course it's obvious that low-end products will face extreme challenges when they are assigned to perform extremely demanding tasks such as ray tracing acceleration.
But what is obvious - apparently - to you or me wasn't obvious to the Vice President of the Radeon Technologies Group, who made such a statement.
So now I'm simply commenting that this statement was completely invalid: AMD did release those low-end RT GPUs, yet it's obvious from their performance that such products are unsuitable for promoting ray-traced gaming; thus, a completely invalid statement by Mr. Wang.
You are wondering why I'm reacting:
Well, this statement made everyone react, since it was reproduced everywhere back then; the only difference is that I kept remembering it in order to see whether it would actually be validated, or whether it was just hype of the moment meant to undermine nVIDIA's "mid-range RT-capable GPUs".
Anyone who thinks that this statement was meaningless should have said so at the time it was made, but no one undermined that statement back then. I'm simply one of the very few who waited two years to actually check the validity of such a statement, and you are criticizing me for doing such a thing??

Unfortunately, as I said, I get very intrigued by such statements, thus I'm incapable of forgetting them.
I pay attention to them during their hype, but I also pay attention when the hype has ended and the time of truth has come...
So um... the low-end 6400 is crap because it can't run, for example, Cyberpunk 2077 with RT Psycho at 100 fps? Let's not forget that we're talking about an entry-level product, after all.
 
Joined
Dec 26, 2006
Messages
3,842 (0.59/day)
Location
Northern Ontario Canada
Processor Ryzen 5700x
Motherboard Gigabyte X570S Aero G R1.1 BiosF5g
Cooling Noctua NH-C12P SE14 w/ NF-A15 HS-PWM Fan 1500rpm
Memory Micron DDR4-3200 2x32GB D.S. D.R. (CT2K32G4DFD832A)
Video Card(s) AMD RX 6800 - Asus Tuf
Storage Kingston KC3000 1TB & 2TB & 4TB Corsair MP600 Pro LPX
Display(s) LG 27UL550-W (27" 4k)
Case Be Quiet Pure Base 600 (no window)
Audio Device(s) Realtek ALC1220-VB
Power Supply SuperFlower Leadex V Gold Pro 850W ATX Ver2.52
Mouse Mionix Naos Pro
Keyboard Corsair Strafe with browns
Software W10 22H2 Pro x64
I have to disagree on that.
For me, those RT numbers are of extreme importance, since I never forgot David Wang's (AMD senior VP of engineering for RTG) pompous statement during the Turing era.
Back then, AMD didn't have any response to nVIDIA's ray tracing, yet Mr. Wang, apparently in an effort to undermine what nVIDIA had been doing, made the following statement:

I never forgot that pompous statement and always waited for AMD to release their RT-capable low-end GPUs.
And... voila!!! The RX 6500 XT & RX 6400 in all their "low-end ray tracing glory", so, according to Mr. Wang's statement, I guess now the time has finally :p come for the "utilisation of ray tracing games to proceed"!!
Who doesn't want ray-traced games @ 10-15 fps :laugh: after all??
I want to apologise in advance to AMD fans, but I always thought that this was a "cheap & easy" statement back then, and I don't tend to forget such things.
I had to wait for more than two years in order to verify/validate that statement, but the passage of time always comes... Mr. Wang.
So, yeah, as I said in the beginning, those RT numbers are of extreme importance from a historical point of view.
I don't think I even have a game that has Ray Tracing lol hmmmm maybe Borderlands 3 does?? That's my newest game.
 
Joined
Nov 17, 2016
Messages
152 (0.05/day)
Then how come the GTX 1050 Ti costs the same? We know that it's better, doesn't lack features, and has superior decoder/encoder capabilities, yet it's made new for the same price. It might be just another weirdness of the Lithuanian tech market, but I would rather get a 1050 Ti instead of an RX 6400. And regarding the 1050 Ti: they used to sell for 140-170 EUR, and now they sell for ~200 EUR. Despite Lithuania's 15+% yearly inflation, it seems that the actual cost of manufacturing barely increased - basically 10% or less, which is nothing compared to the material price increases, at least those that the media talks about. If material prices fed into cards in full, a 1050 Ti would be 400-600 EUR plus Lithuania's own inflation. That's clearly not the case. I guess the relationship between material prices and end-product prices (graphics cards in this case) is more complicated.

The 1050 Ti is exactly the same price in Indonesia, too.

The 6400 is 40% faster than the 1050 Ti, so I'd take the 6400. Plus it's newer and more efficient.

It helps that I have a CPU with QuickSync and PCIe 4.0. But even without QuickSync, as a budget gaming card at like $80, the RX 6400 is obviously hugely superior, because, like, it's a budget card for gaming, not a workstation card. And as mentioned, it's MUCH faster.

The 1050 Ti is two generations old, so by this point it should be selling for less, not more. Inflation isn't normal for old PC parts; they get steadily cheaper until they become e-waste.

The argument for why they are the same price is just market forces. 1050 Ti = Nvidia: better brand recognition, better features. 6400 = AMD: worse features, better performance. The price ends up the same.

If there is reduced demand/more supply, they will fall.

At the moment, $200 for a $100 GPU is too much for me, so I skip.
 
Joined
Sep 2, 2014
Messages
259 (0.07/day)
Location
Emperor's retreat/Naboo Moenia
System Name Order66
Processor Ryzen 7 3700X
Motherboard Asus TUF GAMING B550-PLUS
Cooling AMD Wraith Prism (BOX-cooler)
Memory 16GB DDR4 Corsair Desktop RAM Vengeance LPX 3200MHz Red
Video Card(s) GeForce RTX 3060Ti
Storage Seagate FireCuda 510 1TB SSD
Display(s) Asus VE228HR
Case Thermaltake Versa C21 RGB
Audio Device(s) onboard Realtek
Power Supply Corsair RM850x
Software Windows10 64bit
......................................
But that isn't what he said, at least if you quoted him correctly. All he said is that he didn't think RT would "proceed" (a wording I'd like to see a second opinion on the translation of, as it's rather weird, and the original source of the interview is in Japanese - that looks like poor-quality machine translation to me) until it could be offered across the board.
I didn't quote him; GameDebate quoted him (I put in the link, as you surely noticed), and the quote says:
"Utilisation of ray tracing games will not proceed unless we can offer ray tracing in all product ranges from low end to high end."
This statement:
1) was made when nVIDIA had launched their mid-range RT implementations, so it definitely implies things against nVIDIA;
2) states, NOT thinks (the part I underlined in your comment), that utilisation of ray tracing games will not proceed unless we (AMD) offer ray tracing from low end to high end.
If you don't think that those remarks are a clear attempt to cut the hype from nVIDIA's Turing, then that's your estimation, and I certainly have a different one.
Furthermore, today it was proven that not only did they try to cut the hype from nVIDIA's Turing back then but, more importantly, that they did it using false claims, as, again, was proven by today's RT numbers.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Did I even claim that the card is a GPU?
Yep. Several times. But more importantly, you're responding to people using that term while clearly switching between several meanings of it, making your arguments miss the point.
Which was my main point
Then why on earth bring in MXM adapters, if your point (?) was "this is a bog-standard, regular GPU design"? Sorry, but this does not compute. You made a claim that the desktop Navi 24 GPUs were equivalent to third-party vendors designing MXM-to-PCIe adapters for mobile GPUs in desktop use cases. This claim is false. Period.
A few connectors barely consume a few watts; the savings are nil.
... not on a 25W mobile GPU. Which is what the 6300M is. "A few watts" is quite a notable difference in that scenario. And, if you look at the post you quoted, I said "even a watt or two makes a difference", which is less than what you're saying here. The 6500M is 35-50W, where such a difference is less important, but can still allow for slightly higher clocks or better efficiency still.
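To put that in perspective, here's a back-of-the-envelope sketch (the wattage figures come from the posts above; framing them as a share of the power budget is my own illustration):

```python
# Back-of-the-envelope: what "a watt or two" of display/PCIe overhead
# means as a share of a mobile GPU's power budget. TGP figures are the
# ones mentioned above (6300M: 25W, 6500M: 35-50W).
for tgp in (25, 35, 50):
    for saved in (1.0, 2.0):
        print(f"{saved:.0f}W of a {tgp}W budget = {saved / tgp:.1%}")
```

On the 25W part, two watts is 8% of the entire budget, which is why it matters there and barely registers on a desktop card.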
It's just a limitation of a laptop-oriented GPU more than anything else. There were GT 710s with 4 video connectors, and those cards shared literally the same TDP.
... and? TDP is a board-level designation that is essentially invented by chipmakers and/or OEMs. I'm talking about chip-level power savings from a design perspective. Every single one of those GT 710s has the same chip on board (though if some of them have more output hardware, it's likely that that consumes a tad more power).
Considering that they removed the art from the box and did some other moronic things, it would make sense to give it an x4 connector. I doubt that there wouldn't be savings, but it's just AMD either being lazy or intentionally misleading. Hell, there were x1 GT 710s and x4 GT 710s.
... so: PCB design software lets you add in a ready-made PCIe x16 connector design. It's already there, already done, and adding it is trivial - they then just position it correctly along the PCB perimeter and connect the relevant traces to the right pins. Removing those pins from that ready-made connector design, or god forbid making it from scratch for the nth time, would take much more time. A box design? The equivalent would be whether to copy in the company logo or to draw it from scratch. Which do you think they do? Also, graphical design is just a tad simpler than PCB design. Not that that makes it easy or of low value, but the cost of screwing up a cardboard box design is rather lower than screwing up a modified connector on a thousand PCBs.
Except the other options are too CPU-heavy on lower-end hardware, and nobody will pay for a capture card if their GPU budget is at RX 6400 level. The loss of ReLive is really bad.
... other GPUs with hardware encode/decode are too CPU heavy? I still think you're somehow failing to understand what was said: they said that there are plenty of other GPU alternatives on the market with those encode/decode blocks included, and that one option without them is thus not much of an issue. I don't necessarily agree entirely, but I don't see it as that bad either. But I sincerely hope you can actually understand what was said now.
I don't see how using a mobile GPU on desktop doesn't count as "harvesting". In normal times, a low-end desktop GPU would be designed as such, without having to resort to harvesting mobile chips.
.... there is no difference between a "mobile GPU" and "desktop GPU" in this regard. That's my whole point. You're talking about chips, not full GPU designs. And chips are always used in both mobile and desktop. There is nothing unique about this happening here - the only unique thing is the specific design characteristics of this die.
I disagree; Ryzen created a lot of mindshare among "enthusiasts" (at least the ones that claim to be ones). Obviously not as much as nVidia or Intel, but they have it. And despite moderately poor press, they are banking on their customers not noticing the cut-down features. Will it work? I don't know, but I see how AMD is being somewhat misleading for people who aren't aware of the missing features and crippled hardware.
If you look at GPU sales in that same period, you'll see that that Ryzen mindshare essentially hasn't translated into Radeon mindshare at all - despite AMD becoming much more competitive in GPUs in the intervening period, their market share has been stagnant. And, of course, the RX 5700 XT debacle was in the middle of this, which definitely soured broad opinions on Radeon GPUs.
Then how come the GTX 1050 Ti costs the same? We know that it's better, doesn't lack features, and has superior decoder/encoder capabilities, yet it's made new for the same price. It might be just another weirdness of the Lithuanian tech market, but I would rather get a 1050 Ti instead of an RX 6400. And regarding the 1050 Ti: they used to sell for 140-170 EUR, and now they sell for ~200 EUR. Despite Lithuania's 15+% yearly inflation, it seems that the actual cost of manufacturing barely increased - basically 10% or less, which is nothing compared to the material price increases, at least those that the media talks about. If material prices fed into cards in full, a 1050 Ti would be 400-600 EUR plus Lithuania's own inflation. That's clearly not the case. I guess the relationship between material prices and end-product prices (graphics cards in this case) is more complicated.
Because most 1050 Ti stock was likely produced years ago, especially the silicon. And, of course, even if it's brand new, producing a 1050 Ti die on Samsung 14nm is much cheaper than producing a Navi 24 die on TSMC 6nm, even if other materials costs are probably similar-ish. And, of course, literally every single design cost for the 1050 Ti has long since been amortized, which has a major impact on margins. If you have two entirely identical products, where one is brand-new and the other has been in production for 3, 4, 5, or 6 years? The new one has to pay for materials, production costs, marketing costs, board design costs, silicon tape-out and QC costs, driver development costs, and more. The older one? Materials and production costs - everything else is long since paid off (though it might theoretically still be marketed). Drivers are likely still in development (hopefully!), but development costs will most likely be much lower due to driver maturity and developer familiarity with the hardware. There are good reasons why older hardware is more affordable than newer hardware, beyond just inflation.
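To make the amortization point concrete, here's a minimal sketch with entirely hypothetical numbers (none of these figures come from AMD, Nvidia, or this thread): one-time design costs get spread over the units shipped, on top of per-unit materials and production costs.

```python
# Hypothetical illustration of amortization: a new GPU's per-unit cost
# still carries its one-time design/tape-out costs, while a years-old
# GPU has long since paid them off. All figures below are invented.
def per_unit_cost(variable: float, fixed: float, units: int) -> float:
    """Per-unit cost = variable (materials, production) + amortized fixed costs."""
    return variable + fixed / units

new_gpu = per_unit_cost(variable=60.0, fixed=50_000_000.0, units=2_000_000)
old_gpu = per_unit_cost(variable=45.0, fixed=0.0, units=10_000_000)  # fixed costs paid off

print(f"New GPU per-unit cost: ${new_gpu:.2f}")  # $85.00
print(f"Old GPU per-unit cost: ${old_gpu:.2f}")  # $45.00
```

With these made-up numbers, the amortized design costs alone add more to the new card than the entire difference in materials, which is the mechanism being described above.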
 
Joined
Oct 27, 2020
Messages
791 (0.53/day)
I have to disagree on that.
For me, those RT numbers are of extreme importance, since I never forgot David Wang's (AMD senior VP of engineering for RTG) pompous statement during the Turing era.
Back then, AMD didn't have any response to nVIDIA's ray tracing, yet Mr. Wang, apparently in an effort to undermine what nVIDIA had been doing, made the following statement:

I never forgot that pompous statement and always waited for AMD to release their RT-capable low-end GPUs.
And... voila!!! The RX 6500 XT & RX 6400 in all their "low-end ray tracing glory", so, according to Mr. Wang's statement, I guess now the time has finally :p come for the "utilisation of ray tracing games to proceed"!!
Who doesn't want ray-traced games @ 10-15 fps :laugh: after all??
I want to apologise in advance to AMD fans, but I always thought that this was a "cheap & easy" statement back then, and I don't tend to forget such things.
I had to wait for more than two years in order to verify/validate that statement, but the passage of time always comes... Mr. Wang.
So, yeah, as I said in the beginning, those RT numbers are of extreme importance from a historical point of view.
At first I thought it was just sarcasm, because your delivery was a little bit exaggerated, but you meant it.
The article dates from 2018...
He was right (probably the "we" he used in his comment was meant as "we as an industry", not that it matters much), and it's exactly like @Valantar said; he probably meant it like this: "this new feature won't take off until a lot of people have access to it".
Now, with the consoles (a major product segment for AMD) and all the product lines supporting ray tracing, there is less resistance from game developers to allocating money/time/effort to this feature.
If a game supports FSR and you use performance mode (540p internal, 1080p on screen), or you tone down the settings a little and use FSR quality mode (720p internal), and the ray tracing utilization isn't heavy, you can achieve very playable frame rates; see the sketch below.
It's not a good option at all, I will agree, but the option is there.
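For reference, here's a minimal sketch of how FSR's quality modes map an output resolution to the internal render resolution (the per-axis factors are AMD's documented FSR 1.0 ratios; the helper function itself is just illustrative):

```python
# AMD's documented per-axis FSR 1.0 scale factors, mapping an output
# resolution to the internal render resolution the GPU actually draws.
FSR_MODES = {
    "ultra_quality": 1.3,
    "quality": 1.5,
    "balanced": 1.7,
    "performance": 2.0,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Return the internal render resolution for a given output and FSR mode."""
    factor = FSR_MODES[mode]
    return round(out_w / factor), round(out_h / factor)

# For a 1080p output: performance mode renders at 960x540 and quality
# mode at 1280x720, matching the figures in the post above.
print(internal_resolution(1920, 1080, "performance"))  # (960, 540)
print(internal_resolution(1920, 1080, "quality"))      # (1280, 720)
```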
Switch users (an extremely successful console) are perfectly fine with similar resolutions and worse picture quality when playing on a TV, so I'm sure there are a lot of people out there who won't mind the RX 6400's RT performance if the game is good (TPU readers aren't typical users).
Did he say it because they had nothing to compete with back then, and probably because he wanted to kill the hype Nvidia was trying to generate? Yes, sure.
Did he play a negative role in trying to delay adoption by devs of a graphics feature that will ultimately help advance the visual fidelity of the industry? OK, maybe even that, but don't you think your reaction is a bit too much?
Now, as engineering goes, he's an asset for AMD. Sure, RTG had some lapses in judgment with Navi 24 and with pricing strategy in general, but achieving 2.81 GHz on N6, and fitting the transistor budget into a die of only 107 mm² (not to mention that RDNA2's ray tracing implementation, although weak on performance, is extremely efficient in the transistor budget it adds), are very good indications of the work happening in RTG, imo.
Is RDNA2 worse at ray tracing than 2018 Turing? Sure; it's worse than Turing in other things too, but Nvidia isn't an easy opponent...
 