# AMD Radeon RX 6400



## W1zzard (Apr 25, 2022)

The AMD Radeon RX 6400 is targeting entry-level gamers wanting to spend less than $200 for their graphics card. Priced at $160, the RX 6400 is the most affordable new release in a long time, but does it have the graphics horsepower to deliver an AAA gaming experience?



----------



## AdmiralThrawn (Apr 25, 2022)

Yikes, 46 FPS in The Witcher 3 at 1080p. I know it's entry-level, but that game is over 7 years old. Not the best from AMD; performance might be better with a 6 GB model.

Additionally, I just noticed that Forza Horizon 5, a console game, only gets 20-30 FPS. I feel like this card is lacking, but who knows, it could be good for some games like Apex or Terraria.


----------



## defaultluser (Apr 25, 2022)

Well, the 1650 is still a better slot-powered option.


AdmiralThrawn said:


> Yikes, 46 FPS in The Witcher 3 at 1080p. I know it's entry-level, but that game is over 7 years old. Not the best from AMD; performance might be better with a 6 GB model.
> 
> Additionally, I just noticed that Forza Horizon 5, a console game, only gets 20-30 FPS. I feel like this card is lacking, but who knows, it could be good for some games like Apex or Terraria.



Yeah, a 96-bit bus would have been essential to keep that already-castrated x4 connection fed.

There seems to be power to spare in these things... unless the target here is a replacement for the RX 550? Maybe if they included it in the more expensive 6500?


----------



## GoldenX (Apr 25, 2022)

I'm pretty sure that if AMD allowed proper VRAM overclocking, this thing could do a lot better. It's the main performance limitation after running it on PCIe 3.0.
The 6500 XT is rock solid at 2400 MHz and could go higher; I hope someone manages to unlock that somehow.


----------



## ModEl4 (Apr 25, 2022)

@W1zzard out of curiosity: in your 6500 XT Pulse review (stock clocks, no OC) the 1650 Super was only 1% faster at 1080p, now it's 6% faster. Did you change something in the review process?



https://tpucdn.com/review/sapphire-radeon-rx-6500-xt-pulse/images/relative-performance_1920-1080.png


----------



## W1zzard (Apr 25, 2022)

ModEl4 said:


> did you change something in the review process?


Yeah, new games, new drivers


----------



## ModEl4 (Apr 25, 2022)

ModEl4 said:


> @W1zzard out of curiosity, in your 6500XT Pulse review (stock clocks, no OC) 1650 super was only 1% faster in 1080p, now is 6% faster, did you change something in the review process?
> 
> 
> 
> https://tpucdn.com/review/sapphire-radeon-rx-6500-xt-pulse/images/relative-performance_1920-1080.png


Thanks, sorry I just saw the new games.


----------



## catulitechup (Apr 25, 2022)

@W1zzard very good review, but it lacks 720p testing; this card suffers in many games at 1080p.

Ray tracing at this card's level is a joke, but the results are...

Decode and encode capabilities are far more useful than this fucking joke of ray tracing.


----------



## ARF (Apr 25, 2022)

The whole AMD management in the Graphics division should be fired and replaced by normal employees.


----------



## catulitechup (Apr 25, 2022)

ARF said:


> The whole AMD management in the Graphics division should be fired and replaced by normal employees.



I can only think of this:


----------



## ModEl4 (Apr 25, 2022)

Lol, what a dud. In a PCIe 3.0 system it should be about -10%, so the RX 570 is going to be around 30% faster at 1080p.
Why the AMD team thought in the early design stages that this would be acceptable for the desktop segment, I don't know. It would have been preferable not to launch a desktop Navi 24 model at all and offer only mobile solutions.
The performance would not be so easily tracked, there would be no PCIe 3.0 deficit, you would have the APU option for encoding, and less negativity all around. Now even the mobile part suffers from all this negativity, potentially influencing future OEM RX 6400 contracts.
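
For context on the PCIe 3.0 deficit being discussed, the raw link bandwidth can be estimated from the per-lane rates. This is a rough sketch of the arithmetic only; the real-world FPS impact depends on the game and settings:

```python
# Approximate usable PCIe bandwidth per lane in GB/s, after link encoding
# overhead (Gen3: 8 GT/s with 128b/130b encoding; Gen4 doubles the rate).
PER_LANE_GBPS = {3: 0.985, 4: 1.969}

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """Rough one-directional bandwidth of a PCIe link in GB/s."""
    return round(PER_LANE_GBPS[gen] * lanes, 2)

print(pcie_bandwidth(4, 4))  # 7.88 GB/s: Navi 24's native x4 link on a Gen4 board
print(pcie_bandwidth(3, 4))  # 3.94 GB/s: the same card in a Gen3 slot, half the bandwidth
```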


----------



## docnorth (Apr 25, 2022)

My (maybe unfounded) guess is that the performance loss on PCIe 3.0 will be small or negligible. IIRC the GT 1030 runs on PCIe 3.0 x4, though to be fair with only about a third of the RX 6400's performance. But still, 1650-level performance for $159 in 2022 is rather disappointing. A slightly larger GPU using, let's say, 15 more watts could maybe reach RX 570 levels and still remain slot-powered.

P.S. @W1zzard I still hope you can find time and check pci-e scaling. Thanks anyway for the review.


----------



## W1zzard (Apr 25, 2022)

docnorth said:


> P.S. @W1zzard I still hope you can find time and check pci-e scaling. Thanks anyway for the review.











**AMD Radeon RX 6500 XT PCI-Express Scaling** (www.techpowerup.com)

The AMD Radeon RX 6500 XT comes with only a narrow PCI-Express x4 interface. In this article, we took a closer look at how performance is affected when running at PCI-Express 3.0; also included is a full set of data for the academically interesting setting of PCI-Express 2.0.
				




Results should be similar; not sure it's worth doing another round of testing.


----------



## RedBear (Apr 25, 2022)

ModEl4 said:


> Why in the early design stages AMD team thought that this is going to be acceptable for the desktop segment I don't know.


It was acceptable because the GPU market was willing to accept pretty much anything. They just had to raise the official price a bit in order for AMD and its partners to get a nice fat margin from the extra Navi 24 chips they didn't need for laptops. Reputation was the only casualty.


----------



## ghazi (Apr 25, 2022)

So AMD released a 53W RX 470 at a $20 discount after 6 years. Bravo.


----------



## Valantar (Apr 26, 2022)

As I've said elsewhere the MSRP of this is about $40-50 too high, but other than that I don't see what people are complaining about. It's a low end GPU, and it delivers passable performance at excellent efficiency. It's not a 1080p Ultra card, but expecting it to be is just silly. As long as prices creep down to reasonable levels, this is an excellent entry level card, perfect for the 400 tier. If anything it just highlights how stupidly configured the 6500 XT is, managing just 28% more performance for a massive power increase.

Now they just need to deliver the fully enabled Navi 24 as a <75 W RX 6500 non-XT, which is what most people in this market would really want. Sure, you can downclock an XT, but that's a hassle, and it needs a 6-pin. A slot-powered full Navi 24 (at a decent price) would be fantastic.


----------



## ModEl4 (Apr 26, 2022)

RedBear said:


> It was acceptable because the GPU market was willing to accept pretty much anything. They just had to raise a bit the official price in order for AMD and its partners to get a nice fat margin from the extra Navi 24 chips that they didn't need for laptops. Reputation was the only casualty.


I know, although I said early design stages, not the post-2020 era, unless the AMD team somehow forecast the inflation, which contrasts with what the RX 6800 XT/6800 pricing strategy indicates.
You say reputation was the only casualty, but that isn't a negligible casualty; they made a wrong judgment call imo.
They should have forecast production to suffice for mobile contracts only, or launched with competitive desktop pricing. The RX 6400 launched now and doesn't seem to be acceptable, in the same way the RX 6500 XT wasn't, forcing AMD partners to sell at SRP or below in some cases in Europe, while at the same time they were selling the 6800 XT/6800 at +50% over SRP (and Nvidia solutions also +50% in the ASUS/MSI/GB case). So no, I don't think partners gained anything from the pricing strategy; only AMD had some financial gains, while partners lost potential margins and reputation (GB 3-fan 6500 XT design lol).


----------



## Selaya (Apr 26, 2022)

Why is OCing completely locked on this fucking GPU??????
AMD becoming the new Intel, FML

EDIT: given the primary target audience for this card, would it be possible to add some RTX A2000 results to the charts?


----------



## tussinman (Apr 26, 2022)

ghazi said:


> So AMD released a 53W RX 470 at a $20 discount after 6 years. Bravo.


Yeah, I shook my head a week ago when the rumor was that this card would land somewhere between GTX 960 and GTX 970 performance.

That's like 2014-era mid-range performance on a $150+ 2022 card.


----------



## docnorth (Apr 26, 2022)

Selaya said:


> EDIT: given the primary target audience for this card, would it be possible to add some RTX A2000 results to the charts?


+1. Well, the RTX A2000 was marginally below the 3050 without OC, and marginally above it with OC.


----------



## Selaya (Apr 26, 2022)

Valantar said:


> As I've said elsewhere the MSRP of this is about $40-50 too high, but other than that I don't see what people are complaining about. It's a low end GPU, and it delivers passable performance at excellent efficiency. It's not a 1080p Ultra card, but expecting it to be is just silly. As long as prices creep down to reasonable levels, this is an excellent entry level card, perfect for the 400 tier. If anything it just highlights how stupidly configured the 6500 XT is, managing just 28% more performance for a massive power increase.
> 
> Now for them to deliver the fully enabled Navi 24, <75W RX 6500 non-XT most people in this market would really want. Sure, you can downclock an XT, but that's a hassle, and it needs a 6-pin. Slot powered full Navi 24 (at a decent price) would be fantastic.


Considering the (lack of) feature set of this card... no.
This is definitely a <$100 card.

I mean, for fucking god's sake, YOU CAN'T EVEN OVERCLOCK THIS CARD.
That's... something. That's fewer features than a GT 1030.


----------



## Lightofhonor (Apr 26, 2022)

I reviewed this on my YouTube channel. It fares pretty well in eSports titles and at low quality settings in newer games.


----------



## simlife (Apr 26, 2022)

Valantar said:


> As I've said elsewhere the MSRP of this is about $40-50 too high, but other than that I don't see what people are complaining about. It's a low end GPU, and it delivers passable performance at excellent efficiency. It's not a 1080p Ultra card, but expecting it to be is just silly. As long as prices creep down to reasonable levels, this is an excellent entry level card, perfect for the 400 tier. If anything it just highlights how stupidly configured the 6500 XT is, managing just 28% more performance for a massive power increase.
> 
> Now for them to deliver the fully enabled Navi 24, <75W RX 6500 non-XT most people in this market would really want. Sure, you can downclock an XT, but that's a hassle, and it needs a 6-pin. Slot powered full Navi 24 (at a decent price) would be fantastic.


"Remember, there were times when entry-level graphics cards cost around $100"

Yeah, and at the same time $11 an hour was considered decent outside the big cities; now it's $14-15+. Over a year, that's a ton of money before any overtime, if you want to do the math. Companies can't pay engineers, taxes, etc. and still hit $100. Keep in mind tech is CHEAP now: the Neo Geo console was $649.99 in the 90s, when making $11 an hour meant you were doing decently for yourself (and a low-end computer could cost around $2k, again a TON for the time). Perspective, boy, perspective.

You have to pay the warehouses and drivers, if not the brick-and-mortar staff, before the tech and the actual cost of the copper etc. in everything they use. Just having a physical item that works has a base cost, and then there are the people who ship it or the stores that stock it. This is the 1030 again. Just don't expect super cheap; people don't do that when they buy planes or cars, there's a minimum cost you want.
"It's too expensive because its performance is too low"... if you said that before, then you were wrong before, unless you think people should make $30k or less to stock or ship stuff like food and other items.


----------



## Tetras (Apr 26, 2022)

ghazi said:


> So AMD released a 53W RX 470 at a $20 discount after 6 years. Bravo.



Awesome (* a 53 W RX 470, -$20, -20% performance).

The next gen will be different.


----------



## mechtech (Apr 26, 2022)

2017 RX 570 called









**Sapphire Radeon RX 570 Pulse 4 GB Review** (www.techpowerup.com)

Pulse is a new graphics card series by Sapphire, which does away with some rarely needed features in order to achieve better pricing. The Radeon RX 570 Pulse we review today is only $10 more expensive than the RX 570 base price and still comes with a backplate and an overclock out of the box.
				




it said bahahahahaha



Valantar said:


> As I've said elsewhere the MSRP of this is about $40-50 too high, but other than that I don't see what people are complaining about. It's a low end GPU, and it delivers passable performance at excellent efficiency. It's not a 1080p Ultra card, but expecting it to be is just silly. As long as prices creep down to reasonable levels, this is an excellent entry level card, perfect for the 400 tier. If anything it just highlights how stupidly configured the 6500 XT is, managing just 28% more performance for a massive power increase.
> 
> Now for them to deliver the fully enabled Navi 24, <75W RX 6500 non-XT most people in this market would really want. Sure, you can downclock an XT, but that's a hassle, and it needs a 6-pin. Slot powered full Navi 24 (at a decent price) would be fantastic.


W1zz needs to add a GT 1030 to the list to compare this card to.


----------



## Mistral (Apr 26, 2022)

This seems to be an excellent card if you desperately need something to plug a monitor into and have no other choice.


----------



## etayorius (Apr 26, 2022)

Thanks for the review. It's an OK card for the price, which is still a bit high, but it is what it is. If this card cost no more than $120 USD, I would have bought one for my kids' PC, which has a Phenom II X4 965 BE, 6 GB DDR3 and an R7 240 2 GB. They mainly use it to play Roblox, Fortnite and Genshin.


----------



## Jism (Apr 26, 2022)

This is a cool GPU for OC contests. Solder on your own additional VRMs, a 6-pin power header and larger capacitors, and happy clocking.


----------



## unknownk (Apr 26, 2022)

I guess it would be fine for home theater use, or for an old Dell office machine with no 6-pin PCIe cable.


----------



## AusWolf (Apr 26, 2022)

Thanks for the review! 

Like I thought, this card is everything the 6500 XT should have been: a low-power HTPC GPU with single-slot options, and light gaming capabilities. One could say that it's still too expensive, but considering that low profile versions of the 1650 go for £250-300 on ebay, £160-170 for a brand new equivalent is actually quite good. I've already ordered the Sapphire Pulse RX 6400 for my HTPC (1. It has HDMI 2.1, 2. I'm curious). If anyone's interested how it goes, let me know.


----------



## R0H1T (Apr 26, 2022)

Should've made this GDDR6X, or at least given it a 128-bit bus. It's like the low end is still stuck in the 2010s or something.


----------



## beautyless (Apr 26, 2022)

The RX 6400 spec table on the first page is wrong. The core count should be 768 (3/4 of the RX 6500 XT).

PS. This GPU reminds me of a graphics card that was one in name only, not suited to playing most games: the Radeon HD 2400 XT.


----------



## ARF (Apr 26, 2022)

W1zzard said:


> AMD Radeon RX 6500 XT PCI-Express Scaling
> 
> 
> The AMD Radeon RX 6500 XT comes with only a narrow PCI-Express x4 interface. In this article, we took a closer look at how performance is affected when running at PCI-Express 3.0; also included is a full set of data for the academically interesting setting of PCI-Express 2.0.
> ...



At least you would show at what PCIe bandwidth setting the performance tanks considerably. Please do it.

I mean, there should be an educational review which enlightens potential buyers about why not to buy this low-performing card.


----------



## AusWolf (Apr 26, 2022)

R0H1T said:


> Should've made this GDDR6x or at least 128bit wide bus, it's like the low end is still stuck in the 2010's or something


With GDDR6, it has the exact same bandwidth as the 1650 with GDDR5. I don't think it needs more for this class of GPU.
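
As a quick back-of-the-envelope check (using the commonly listed specs, which are assumptions here: 64-bit GDDR6 at 16 Gbps for the RX 6400, 128-bit GDDR5 at 8 Gbps for the 1650), the two cards do land on identical peak bandwidth:

```python
def vram_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width, over 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(vram_bandwidth(64, 16))   # RX 6400: 64-bit GDDR6 at 16 Gbps -> 128.0 GB/s
print(vram_bandwidth(128, 8))   # GTX 1650: 128-bit GDDR5 at 8 Gbps -> 128.0 GB/s
```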



ARF said:


> At least, you will show at what bandwidth PCIe setting the performance tanks considerably. Please do it.
> 
> I mean there should be a tutorial educatory review which enlightens the potential buyers why not to buy this low-performing card.
> 
> ...


I don't think the impact is as severe with the 6400 (at least I'd hope not), but I'm quite curious too! 

My 6400 is going into a PCIe 3.0 motherboard, although I won't be playing games on it.


----------



## Voodoo Rufus (Apr 26, 2022)

For a low-power GPU with limited gaming capabilities, the lack of AV1 decode is disappointing, since the AMD APUs can't do it either. Only the higher-end GPUs, and I think the Xe iGPU on newer Intel CPUs, can do it. For a future-proof HTPC streaming media card, this was a missed checkbox.


----------



## R0H1T (Apr 26, 2022)

AusWolf said:


> With GDDR6, it has the exact same bandwidth as the 1650 with GDDR5. I don't think it needs more with this class of GPU.


This is pretty much a 750 Ti/1050 Ti-class GPU, except that the Maxwell card was released over 8 years ago and the Pascal one over 5 years ago! According to TPU's charts it's still not 2x as fast as the 750 Ti and barely faster than the 1050 Ti; I think AMD should really do better after so many years, especially in this segment. This belongs at GT 1030 level right now and shouldn't cost a penny above $100, and the segment it's released into is horrendously overpriced. Granted, Nvidia also hasn't released anything of that kind, like a 3050 Ti without a power connector, but the point remains!


----------



## thelawnet (Apr 26, 2022)

R0H1T said:


> This is pretty much a 750ti/1050ti class GPU except that Maxwell GPU was released over 8 years back & Pascal one over 5 years back! According to TPU charts it's still not 2x as fast as 750Ti & barely faster than 1050Ti, I think AMD should really do better after so many years especially in this segment. This belongs to 1030GT level right now & shouldn't cost a penny above 100 USD, the segment which it's released into right now is horrendously overpriced ~ granted Nvidia also haven't released anything of that kind like a 3050ti without power connector but the point remains!


This completely destroys the 1030, which also lacks an encoder and has 2 GB of RAM.

This is 1050 level. In any case, the 1030 is a silent 20 W card; this is not.


----------



## ExcuseMeWtf (Apr 26, 2022)

I'd wait and see what encoding/decoding and display connectivity similar offerings from Nvidia/Intel end up having. It's too expensive for what it is anyway.


----------



## The red spirit (Apr 26, 2022)

defaultluser said:


> There seems to be power to spare in these things...unless the target here is a replacement fr RX 550?


Frankly, does the RX 6400 beat the RX 550 or RX 560? The 6500 XT actually failed to beat the 5500 XT.



Selaya said:


> amd becoming the new intel FML


That started happening when the Ryzen 5000 series launched. AMD at that point was already rotten.


----------



## ExcuseMeWtf (Apr 26, 2022)

> Frankly, does RX 6400 beat RX 550 or RX 560


The RX 560, let alone the 550, is significantly slower than the 1650, so yeah, it beats those.


----------



## ET3D (Apr 26, 2022)

TPU, thanks for the comparison to the 560 2 GB. I have a low-profile 460 2 GB in my HTPC and was considering the 6400 as an upgrade. This pretty much cements that decision; I didn't expect it to be over 2x faster. This should allow 1080p instead of 720p gaming.


----------



## thelawnet (Apr 26, 2022)

A couple of points about the review:

* all the benchmarks say 6400 XT rather than 6400
* I assume the 1650 being compared is the obsolete GDDR5 1650, not the faster GDDR6 1650, which is one of the best-selling GPUs, at least in my market?

The 1650 (GDDR6) runs about $30 more expensive, so there is that, but otherwise it seems so much better than this thing in every respect.


----------



## ET3D (Apr 26, 2022)

thelawnet said:


> The 1650 (gddr6) runs about $30 more expensive



From what I'm seeing it's more like $50 more expensive, or about the current price of the 6500 XT.


----------



## LabRat 891 (Apr 26, 2022)

Couldn't help but notice that the 6400 in this review and the 6500 XT only differ in clocks, and that OCing is locked out.
Any chance of a modded 6500 XT BIOS being flashable onto the 6400s?


----------



## Luminescent (Apr 26, 2022)

New AMD and Nvidia GPUs will be targeted at entry-level gamers wanting to spend less than $200 for a graphics card to turn on the computer and play Minesweeper.
In other news, you can now play Crysis on your phone; once a graphically intensive PC game, now easily handled by even the cheapest $200 phone.


----------



## R0H1T (Apr 26, 2022)

thelawnet said:


> *this completely destroys the 1030*, which also lacks encoder and has 2gb of ram.
> 
> *this is a 1050 level*, in any case, the 1030 is a silent 20W card, this is not.


Right, and that's also a 3-4 year old card, no?

Like I said, 750 Ti/1050 Ti segment, and barely faster than those two all these years down the road. Yes, Nvidia themselves haven't replaced those cards with 3xxx GPUs, but AMD really should've done better than this!


----------



## Jo3yization (Apr 26, 2022)

Can anyone clarify the performance differences between the Aero model in this review and the LP model for Witcher 3 here?

Skip to 8:48, am I missing something? Thanks!

*Edit* Nvm, it's clearly because in this review all games are tested at 'maximum quality' settings, aka Ultra, which makes it hard to gauge real-world performance given the huge difference between max and high (which makes more sense to actually run). Appreciate the testing, but yeah, here I was thinking this card was no good for 1080p high at 60 fps.


----------



## DaHans (Apr 26, 2022)

AusWolf said:


> Thanks for the review!
> 
> Like I thought, this card is everything the 6500 XT should have been: a low-power HTPC GPU with single-slot options, and light gaming capabilities. One could say that it's still too expensive, but considering that low profile versions of the 1650 go for £250-300 on ebay, £160-170 for a brand new equivalent is actually quite good. I've already ordered the Sapphire Pulse RX 6400 for my HTPC (1. It has HDMI 2.1, 2. I'm curious). If anyone's interested how it goes, let me know.


I have the same use case in mind. I already wanted HDMI 2.1 a year ago and bought an RTX 3060.
Why did you buy the Sapphire one?
I'm thinking about the PowerColor RX 6400; that one has 0 dB fan stop.


----------



## ET3D (Apr 26, 2022)

LabRat 891 said:


> Couldn't help but notice the 6400 in this review and the 6500XT only differ in clocks, and that OCing is locked out.
> Any chance of a modded 6500XT BIOS being able to be flashed onto the 6400s?



You noticed wrongly: the 6400 also has only 12 CUs, compared to the 6500 XT's 16.

Still, they're the same chip, so unlocking one to the other could be technically possible when the lower-end SKU uses good silicon. Big 'if', though.


----------



## Valantar (Apr 26, 2022)

simlife said:


> "Remember, there were times when entry-level graphics card cost around $100"
> 
> yeah the same time 11 bucks a hour was consider decent in low cost non city's now be 14-15+ or wtf and over a year fyi that is a ton of money before any overtime if you want to do math... companys cant pay enginers and taxes and ect ect ect cant and do 100... keep in mind tech is CHEAP now.. the neo geo console was 649.99 dollars in the 90s when if you made 11 bucks you were doing decent for yourself...( and a low end computer could cost like 2k again a TON for the time) perspective boy perspective..
> 
> ...


... is this directed at me? Because it doesn't seem applicable to what I'm saying. I get that much of what I've said on this is in another thread, but, well, you're preaching to the choir. I'm perfectly in line with price increases being understandable. What isn't understandable is this entry-tier GPU being priced at $160; that's simply too high. This is a lower tier than cards like the 1050 and 1050 Ti that launched at lower dollar prices just six years ago, and there hasn't been _that_ much inflation since then. Remember, the 1050 Ti launched at $139 and the 1050 at $109, for 50-tier products in late 2016. I'm arguing this should be around $120, for a 40-tier product ~5 years later. For reference, $139 in 2016 USD is ~$166 today, while $109 in 2016 USD is ~$130. Accounting for materials price increases you might argue that $130 is a fair price for a lower-tier card today, though I'd still say $120 would be fair. The 6500 XT ought to be around $160-180.
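
The inflation figures above can be reproduced with a one-line adjustment. The ~1.19 cumulative CPI factor for late 2016 to early 2022 is an approximation based on published US CPI data, so treat it as an assumption and substitute your own:

```python
CPI_FACTOR_2016_TO_2022 = 1.19  # assumed cumulative US CPI inflation, late 2016 -> early 2022

def to_2022_dollars(price_2016: float) -> int:
    """Convert a 2016 USD launch price to approximate 2022 dollars."""
    return round(price_2016 * CPI_FACTOR_2016_TO_2022)

print(to_2022_dollars(139))  # GTX 1050 Ti launch price -> ~165
print(to_2022_dollars(109))  # GTX 1050 launch price -> ~130
```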


mechtech said:


> W1zz needs to add a GT1030 to the list to compare this card too


Absolutely! That would be a very relevant comparison.


The red spirit said:


> Frankly, does RX 6400 beat RX 550 or RX 560? 6500 XT actually failed to beat 5500 XT.


... did you look at the review? Both RX 550 and 560 are in the test results, and deliver ~32% and ~42% of the RX 6400's performance respectively at 1080p.


I forgot to say above, but thank you for a great review once again @W1zzard! Great to see these low end cards being tested, as that's somewhat rare. Still, I have to say I think your conclusion is a tad harsh - after all, you're running a test suite where everything is set at Ultra, which is notoriously inefficient. Most likely much better results can be had with near-imperceptible image quality drops in many games. Also, aren't locked-down OC controls the norm for slot-powered cards? I seem to remember that being pretty normal as a safeguard to avoid burning out the 12V traces in your motherboard.

That being said, there are two follow-up reviews I would find _very_ interesting in light of this: PCIe scaling to compare against the 6500 XT (testing the assumption that lower performance means less of a bottleneck), and testing at lowered settings to find which settings hit 60 FPS (or whether it fails to) across the test suite at 1080p. Ideally the PCIe scaling test would also be run at non-Ultra settings, since Ultra isn't a realistic use case for this product. I completely understand this being a ton of work for a low-prestige product, but I at least would read the heck out of both of those reviews.


----------



## Overvoltage (Apr 26, 2022)

There is no need to be prejudiced against this video card; you don't need to put it in a regular PC. I have an SFF with an external power supply (200 W) and one slot for a video card. Right now stores have three solutions suitable for replacing my APU's video core: the GT 730 (slower than the APU), the GT 1030 (same as the APU) and the 6400 (much faster than the APU). Yes, this card is for medium-to-low settings at 1080p, but it's the best I can get, and I will take it.


----------



## ppn (Apr 26, 2022)

In the lineup, 6600 > 5700 but 6500 < 5500; this is simply wrong.
This should not be a 6400 but a 6300 instead. It's an RX 470 from 2016 that cost about the same ($179), only $10 off.


----------



## W1zzard (Apr 26, 2022)

thelawnet said:


> all the benchmarks say 6400 xt rather than 6400


Whoops, fixed



beautyless said:


> RX 6400 spec table in the first page is wrong. Cores number must be 768. (3/4 of RX 6500XT)


Fixed



thelawnet said:


> i assume the 1650 being compared to is the obsolete 1650 gddr5, not the faster 1650 gddr6, which is one of the best selling GPUs, at least in my market ?


Yeah it's the GDDR5 version. Not obsolete, both are actively shipping at this time. GDDR6 adds a few percent: https://www.techpowerup.com/review/gigabyte-geforce-gtx-1650-oc-gddr6/27.html



Valantar said:


> Also, aren't locked-down OC controls the norm for slot-powered cards? I seem to remember that being pretty normal as a safeguard to avoid burning out the 12V traces in your motherboard.


Maybe for AMD, which is still a lame excuse given all the safeguards we have
https://www.techpowerup.com/review/nvidia-rtx-a2000/37.html
https://www.techpowerup.com/review/palit-geforce-gtx-1050-ti-kalmx/33.html



LabRat 891 said:


> Any chance of a modded 6500XT BIOS being able to be flashed onto the 6400s?


You make that sound so easy. AFAIK the BIOS can't be modded anymore due to the digital signature. Soft PowerPlay tables might be an option though.


----------



## Chrispy_ (Apr 26, 2022)

It's the price the RX570 launched at five years ago this week (let's ignore the fact you could get new RX570 cards for $99 after the first ETH crash).

It occasionally matches the RX 570, but most of the time it barely manages half the performance :\


----------



## The red spirit (Apr 26, 2022)

ARF said:


> At least, you will show at what bandwidth PCIe setting the performance tanks considerably. Please do it.
> 
> I mean there should be a tutorial educatory review which enlightens the potential buyers why not to buy this low-performing card.
> 
> ...


Counterargument: those games were tested at the highest quality settings, which are irrelevant for lower-end cards. The 6500 XT indeed runs Cyberbug reasonably OK at sane settings.


----------



## W1zzard (Apr 26, 2022)

The red spirit said:


> those games were tested at highest quality settings. Those are irrelevant for lower end cards


The idea is to have a valid comparison with other cards. No doubt you can get 60 FPS at lowest settings with upscaling from 480p, and it'll look worse than an XB1.


----------



## The red spirit (Apr 26, 2022)

Valantar said:


> Also, aren't locked-down OC controls the norm for slot-powered cards? I seem to remember that being pretty normal as a safeguard to avoid burning out the 12V traces in your motherboard.


Totally not normal. There's nothing to guard the card from, as it can self-adjust frequency/voltage if needed; that's handled by the TDP value in the vBIOS. It wasn't normal in the past either.



W1zzard said:


> The idea is to have a valid comparison with other cards. No doubt, you can get 60 FPS at lowest settings with upscaling from 480p and it'll look worse than XB1


The 6500 XT runs Cyberbug well at 1080p low or medium without that stupid FSR; the 6400 runs it at low with ~40 FPS. I understand that you collect data for a fair comparison, but it's really useless for anyone looking to actually buy a card like this. That's like expecting to run games at Ultra on a GT 730; that's just not what the target audience does with these cards. Since Ultra settings are notorious for hammering performance for no good reason, why not collect data at High settings instead?


----------



## W1zzard (Apr 26, 2022)

The red spirit said:


> why not collect data with high settings instead?


Because people demand the highest settings in reviews for pretty much all cards. Also, faster cards like the 3080 and up will end up CPU-limited otherwise.

I agree that if I had an army of benchmark slaves I would have retested all cards at lower settings, but that takes about two weeks at 10 hours a day. Just not practical for a review like this.


----------



## The red spirit (Apr 26, 2022)

W1zzard said:


> Because people demand highest settings in reviews for pretty much all cards. Also faster cards like 3080+ will end up CPU limited otherwise


I don't think that's true. It could also be a great opportunity to avoid corporate sabotage like GameWorks, which made Radeons perform a lot worse than they should.


----------



## W1zzard (Apr 26, 2022)

The red spirit said:


> That could also be a great opportunity to avoid corporate sabotage like Gameworks that made Radeons perform a lot worse than they should.


It's called ray tracing now


----------



## thelawnet (Apr 26, 2022)

mechtech said:


> W1zz needs to add a GT1030 to the list to compare this card too


the 1030 is a bit slower than the 550 2GB, which is in the list.

AFAIK the 1030 sells much better, so it would be slightly more useful to include, but we can make a good guess: 'just a bit lower than the 550'.


----------



## The red spirit (Apr 26, 2022)

W1zzard said:


> It's called ray tracing now


Didn't work out well for anyone involved in it though. I still remember 2080 Ti struggling at 1080p.


----------



## thelawnet (Apr 26, 2022)

W1zzard said:


> Because people demand highest settings in reviews for pretty much all cards. Also faster cards like 3080+ will end up CPU limited otherwise
> 
> I agree that if I had an army of benchmark slaves I would have retested all cards on lower settings, which takes about two weeks, 10 hours a day. Just not practical for a review like this



How do you come up with all those numbers? I mean, new patches, new drivers, etc.; presumably you don't actually retest the old cards for each review? But OTOH I guess 3 or 4 years down the line the standings have often shifted by 10% for a given card due to updates, so how do you handle that?


----------



## W1zzard (Apr 26, 2022)

mechtech said:


> W1zz needs to add a GT1030 to the list to compare this card too


I have a GT1030, and tried to find it for this review, but no luck

edit: omg found it



finishing pcie 3.0 run first, then running gt1030



thelawnet said:


> How do you come up with all those numbers? I mean new patches, new drivers, etc., presumably you don't actually retest the old cards for each review? But OTOH I guess 3 or 4 years down the line the position has often changed by 10% for a given card due to updates, so?


I retest everything every few months and keep drivers/games/patches constant until the next retest. Last retest was done mid-March


----------



## Dragokar (Apr 26, 2022)

I do like the reviews here, but in this case I think the title is a bit misleading. Yes, you are reviewing the AMD RX 6400, but you are specifically reviewing the MSI AERO model, and that should be in the title.

It's nothing major; the card is still crap for me, simply because of the missing de/encode support for some codecs, but I would like to see the card name in the review title (and link).


----------



## ET3D (Apr 26, 2022)

Valantar said:


> aren't locked-down OC controls the norm for slot-powered cards?


Last time I tried on my RX 460, all the options were available in the drivers (though granted that was a couple of years ago). I used them to reduce power further. If that's not available now, it's a pity, though the 6400 is more efficient than the 460 out of the box.


----------



## thelawnet (Apr 26, 2022)

Dragokar said:


> I do like the reviews here, but in this case I think the title is a bit misleading. Yea, you are reviewing the AMD RX 6400, but you are specifically reviewing the MSI AERO model, and this should be in the title.



All the cards are the same though...  You can overclock the 6500 xt, and that is a differential feature between brands in terms of OC potential.

This one you cannot. 

The default clocks are also identical across brands.

Since you can overclock the 6500 xt to add about 5% more performance, that makes this card relatively weaker.


----------



## Dragokar (Apr 26, 2022)

thelawnet said:


> All the cards are the same though...  You can overclock the 6500 xt, and that is a differential feature between brands in terms of OC potential.
> 
> This one you cannot.
> 
> ...


Well, yes, but not all fans behave the same, and a fast reader might come away thinking the fan overshoots on all 6400 models.


----------



## AusWolf (Apr 26, 2022)

R0H1T said:


> This is pretty much a 750ti/1050ti class GPU except that Maxwell GPU was released over 8 years back & Pascal one over 5 years back! According to TPU charts it's still not 2x as fast as 750Ti & barely faster than 1050Ti, I think AMD should really do better after so many years especially in this segment. This belongs to 1030GT level right now & shouldn't cost a penny above 100 USD, the segment which it's released into right now is horrendously overpriced ~ granted Nvidia also haven't released anything of that kind like a 3050ti without power connector but the point remains!


What point? This is a 1650-equivalent card with the same amount of VRAM and the same VRAM bandwidth. The 1650 does it with 128-bit GDDR5, the 6400 with 64-bit GDDR6. Except that low-profile 1650s go for £250-300 on eBay, while the 6400 costs £160 new. What's not to like?



DaHans said:


> I have the same use case in mind. I already wanted a HDMI 2.1 a year ago and bought a RTX 3060.
> Why did you buy the Sapphire one?
> I'm thinking about the Powercolor RX6400. This one has 0db fan stop.


The Sapphire one seems to have a longer cooler and the spec sheet mentions 55 W TBP instead of 53. It might not matter in normal usage, but it costs the same as any other model, so I thought why not.


----------



## Selaya (Apr 26, 2022)

the 1650 has many more & better features (NVENC/NVDEC, x16 bus, 3 display outputs, OC support) than this though, it's like leagues above the 6400


----------



## AusWolf (Apr 26, 2022)

thelawnet said:


> the 1030 is a bit slower than the 550 2GB, which is in the list.
> 
> AFAIK, the 1030 is much better selling , so the 1030 would be a slightly more useful, but we can make a good guess 'just a bit lower than the 550'


No worries, I have a 1030, and my 6400 arrives on Saturday, I'll make sure to do a comparison.  ... on pci-e gen 3!



Selaya said:


> the 1650 has much more & better features (NVENC/NVDEC, x16 bus, 3 displayouts, OC support) than this tho, it's like leagues above the 6400


The only relevant thing this card is missing compared to the 1650 is the AV1 decoder. The x4 bus is what it is, and I don't believe the target audience gives a damn about the encoder (at least I don't). Nobody wants to see gameplay streams of Cyberpunk 2077 at low quality settings and/or 10 fps.


----------



## defaultluser (Apr 26, 2022)

ET3D said:


> You noticed wrongly, as the 6400 also has only 12 CUs compared to the 6500 XT.
> 
> Still, they're the same chip, so unlocking one to the other could be technically possible when the lower end SKU uses good silicon. Big 'if', though.



AMD has been laser-locking its cut-down cards for years now (the last one was the RX 560)


----------



## The red spirit (Apr 26, 2022)

AusWolf said:


> The only relevant thing this card is missing compared to the 1650 is the AV1 decoder. The x4 bus is what it is, and I don't believe the target audience gives a damn about the encoder (at least I don't). Nobody wants to see gameplay streams of Cyberpunk 2077 at low quality settings and/or 10 fps.


But they absolutely should, because it means the card doesn't hardware-decode YouTube, and YouTube on the CPU is rough. IMO it fails as a display adapter, even if it can run Cyberbug at 1080p low at ~40 fps. If you wanted to record some older game, you can't with the 6400; the RX 550 works better for that, because it can record. It's also useless to add to older computers struggling with YouTube or other services. Overall, it managed to alienate the audience it was intended for and somewhat pleased the gamers, who won't buy it. It would have been more acceptable if it had decoding capabilities, even with only GT 1030 DDR4 levels of power. The irony is that the GTX 1050 Ti, which matches the RX 6400 in performance, sells for a bit less money and does more. And considering that Chinese vendors manage to put laptop GPUs without such downsides on PCBs or MXM cards, it just shows what a blatant cash grab the RX 6400 is.


----------



## RedBear (Apr 26, 2022)

Valantar said:


> As I've said elsewhere the MSRP of this is about $40-50 too high, but other than that I don't see what people are complaining about.


Lack of generational upgrade? This thing performs worse than an RX 570 released in *2017* for a similar MSRP ($170).


ModEl4 said:


> I know, although I said early design stages, not the post-2020 era, unless the AMD team was somehow forecasting the inflation, which contrasts with what the RX 6800 XT/6800 pricing strategy indicates.
> You say reputation was the only casualty; that isn't a negligible casualty, and they made a wrong judgment call imo.
> They should have forecast production to cover mobile contracts only, or launched with competitive desktop pricing. (The RX 6400 launched now and doesn't seem acceptable, in the same way the RX 6500 XT wasn't, forcing AMD partners to sell at SRP or below in some cases in Europe, while at the same time selling the 6800 XT/6800 at +50% over SRP, and Nvidia solutions also +50% in the ASUS/MSI/GB case.) So no, I don't think partners gained anything from the pricing strategy; only AMD had some financial gains, while partners lost potential margins and reputation (GB 3-fan 6500 XT design lol)


AMD doesn't seem to care _that_ much about reputation; just look at the Ryzen 5 4500. But yeah, to be fair, you might be correct that the partners gained nothing (or perhaps relatively little) from this pricing strategy.


AusWolf said:


> The only relevant thing this card is missing compared to the 1650 is the AV1 decoder. The x4 bus is what it is, and I don't believe the target audience gives a damn about the encoder (at least I don't). Nobody wants to see gameplay streams of Cyberpunk 2077 at low quality settings and/or 10 fps.


A lot of people stream eSports games, those have light graphical requirements and will work quite decently even on _this_ thing when coupled with an adequate CPU.


----------



## Valantar (Apr 26, 2022)

RedBear said:


> Lack of generational upgrade? This thing performs worse than an RX 570 released in *2017* for a similar MSRP ($170).


... but a 40 tier card isn't supposed to be a generational upgrade on a 70 tier card (even if AMD's naming back then was dumb and the 580 was more like a 60-tier in reality, with the 570 being a tad below that but still too powerful to fit its contemporary 50-tier). Polaris also delivered ridiculous value even for its time. As I've said time and time again, the pricing is silly, but performance for what this is trying to be is fine. This is an entry level card, with good entry level performance. What makes it problematic is it costing $160 when it should be more like $120 - which would make it fit with cards like the $109 (~$130 after inflation) 2016 GTX 1050. There's also the crazy increases in materials costs and shipping costs of the past few years. In a saner world, this would be $120 with the 6500 XT at $160-ish, but that's sadly not the world we're living in.
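The inflation adjustment above ($109 in 2016 being roughly $130 today) can be sanity-checked with a quick sketch. The ~19% cumulative US CPI factor for 2016 to early 2022 is my approximation, not an official figure:

```python
# Rough sanity check of the "$109 in 2016 ~ $130 after inflation" figure above.
# The ~19% cumulative US CPI inflation factor is an approximation.
cumulative_inflation = 1.19

gtx1050_msrp_2016 = 109
adjusted = gtx1050_msrp_2016 * cumulative_inflation
print(f"2016's $109 is roughly ${adjusted:.0f} in 2022 dollars")  # ~ $130
```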


RedBear said:


> A lot of people stream eSports games, those have light graphical requirements and will work quite decently even on _this_ thing when coupled with an adequate CPU.


If they have any non-F Intel CPU or any AMD APU, they already have hardware accelerated encoding support though. And if not, then, well, this GPU isn't for them. And quite frankly that's fine.


----------



## ET3D (Apr 26, 2022)

Selaya said:


> the 1650 has much more & better features (NVENC/NVDEC, x16 bus, 3 displayouts, OC support) than this tho, it's like leagues above the 6400


I think that it's a moot point. It depends on the audience. The only point which I think really matters for some is encoding. The other points are, I think, only relevant to a tiny minority. Sure, they're bonus points, but they're not that important.

It's true that low profile cards are also only relevant for a minority of people, but AusWolf is right in that the 6400 is important for the low profile market because of its good price. Few if any people in that market will pay £100 more for a used card of similar performance just because it supports encoding.

By the way, if you're counting minor features, the 6400 has HDMI 2.1, making it "leagues above" the 1650's HDMI 2.0.


----------



## AusWolf (Apr 26, 2022)

The red spirit said:


> But they absolutely should, because it means the card doesn't hardware-decode YouTube, and YouTube on the CPU is rough. IMO it fails as a display adapter, even if it can run Cyberbug at 1080p low at ~40 fps. If you wanted to record some older game, you can't with the 6400; the RX 550 works better for that, because it can record. It's also useless to add to older computers struggling with YouTube or other services. Overall, it managed to alienate the audience it was intended for and somewhat pleased the gamers, who won't buy it. It would have been more acceptable if it had decoding capabilities, even with only GT 1030 DDR4 levels of power. The irony is that the GTX 1050 Ti, which matches the RX 6400 in performance, sells for a bit less money and does more. And considering that Chinese vendors manage to put laptop GPUs without such downsides on PCBs or MXM cards, it just shows what a blatant cash grab the RX 6400 is.


How does the 1050 Ti do more than the 6400? Do you mean the video encoder?

Also, how did everybody suddenly become a game streamer? Did I miss something? No one ever talked about NVENC or AMD VCE during a release. Ever. And now everybody is loud about it, as if the primary function of a graphics card were game recording.

As for Youtube, it's butter smooth on my Ryzen 3 3100 with about 20% CPU usage at 1080p 60 Hz. I also have a 4th gen Core i7-4765T (35 W, 2 GHz) system in the making that cost me maybe £100 for the whole thing. I'll test it later, but I doubt it'll struggle a lot more.



Valantar said:


> ... but a 40 tier card isn't supposed to be a generational upgrade on a 70 tier card (even if AMD's naming back then was dumb and the 580 was more like a 60-tier in reality, with the 570 being a tad below that but still too powerful to fit its contemporary 50-tier). Polaris also delivered ridiculous value even for its time. As I've said time and time again, the pricing is silly, but performance for what this is trying to be is fine. This is an entry level card, with good entry level performance. What makes it problematic is it costing $160 when it should be more like $120 - which would make it fit with cards like the $109 (~$130 after inflation) 2016 GTX 1050. There's also the crazy increases in materials costs and shipping costs of the past few years. In a saner world, this would be $120 with the 6500 XT at $160-ish, but that's sadly not the world we're living in.


Let's not forget about the fact that the 6400 and 6500 XT are the only cards that are selling for MSRP brand new. 



RedBear said:


> A lot of people stream eSports games, those have light graphical requirements and will work quite decently even on _this_ thing when coupled with an adequate CPU.


Those people are free to buy a used 1050 Ti or RX 570/580 (if they don't have one already - I suppose most of them do). Ebay is literally flooded with them.


----------



## catulitechup (Apr 26, 2022)

W1zzard said:


> I have a GT1030, and tried to find it for this review, but no luck
> 
> edit: omg found it
> View attachment 245029
> ...



this card seems like a rat in this space, and thanks for your work


----------



## The red spirit (Apr 26, 2022)

AusWolf said:


> How does the 1050 Ti do more than the 6400? Do you mean the video encoder?


That, and the ability to record.



AusWolf said:


> Also, how did everybody suddenly become a game streamer? Did I miss something? No one has ever talked about nvenc or AMD VCE during a release. Ever. And now everybody is loud about it like the primary function of a graphics card was game recording.


It's handy to have. AMD VCE lets you convert or compress videos way faster than the CPU; I've had to do that. I also found it handy to be able to record with ReLive. And decoding basically powers all video playback; that's been an unquestioned part of the video card feature set since the '90s. My ATi X800 Pro could record, so why can't the RX 6400? The AIW version could even record from inputs, though I don't have that card, nor do I particularly care about such features. It's just such a stupid thing to take away from a card. Imagine nVidia releasing an RTX 4050 without any CUDA support. How would you feel? Or even better, AMD releasing an RX 6300 with no DX12 support at all.




AusWolf said:


> As for Youtube, it's butter smooth on my Ryzen 3 3100 with about 20% CPU usage at 1080p 60 Hz. I also have a 4th gen Core i7-4765T (35 W, 2 GHz) system in the making that cost me maybe £100 for the whole thing. I'll test it later, but I doubt it'll struggle a lot more.


Lucky you; my Athlon X4 845 is very borderline with 1080p60 and can drop frames. A card capable of decoding would be great for a CPU like this.




AusWolf said:


> Let's not forget about the fact that the 6400 and 6500 XT are the only cards that are selling for MSRP brand new.


Does it even matter if they are a pile of poo?


----------



## Kissamies (Apr 26, 2022)

Whoa, didn't know that this cannot be overclocked. Reminds me of the days of some 9000 series cards in the early 2000s when ATITool could bypass that.


----------



## RedBear (Apr 26, 2022)

Valantar said:


> ... but a 40 tier card isn't supposed to be a generational upgrade on a 70 tier card (even if AMD's naming back then was dumb and the 580 was more like a 60-tier in reality, with the 570 being a tad below that but still too powerful to fit its contemporary 50-tier). Polaris also delivered ridiculous value even for its time. As I've said time and time again, the pricing is silly, but performance for what this is trying to be is fine. This is an entry level card, with good entry level performance. What makes it problematic is it costing $160 when it should be more like $120 - which would make it fit with cards like the $109 (~$130 after inflation) 2016 GTX 1050. There's also the crazy increases in materials costs and shipping costs of the past few years. In a saner world, this would be $120 with the 6500 XT at $160-ish, but that's sadly not the world we're living in.
> 
> If they have any non-F Intel CPU or any AMD APU, they already have hardware accelerated encoding support though. And if not, then, well, this GPU isn't for them. And quite frankly that's fine.


I don't care much about nomenclature, to be honest; the brackets tend to move over time. Anyway, the RX 6500 XT wasn't even an upgrade over its immediate predecessor: it was actually a slight downgrade in the average gaming FPS benchmark according to W1zzard's review. I would honestly be interested in a comparison between the RX 6400 and the RX 5300 XT, if anyone owns the latter.

AMD APUs are PCIe Gen 3 only, so it's really a matter of matching this thing with a low-end Alder Lake with an iGPU, as was true for the RX 6500 XT. The problem is that a lot of people might not notice this until they've already bought it. It doesn't help that reviewers tend to neglect the lack of video encoding capabilities (W1zzard, for instance, stressed that the low-profile versions could be suboptimal for HTPCs because they lack AV1 decoding capability, but there's no mention of the missing video encoders), and people might just assume it's a common, baseline capability nowadays.


----------



## thelawnet (Apr 26, 2022)

ET3D said:


> It's true that low profile cards are also only relevant for a minority of people, but AusWolf is right in that the 6400 is important for the low profile market because of its good price. Few if any people in that market will pay £100 more for a used card of similar performance just because it supports encoding.
> 
> By the way, if you're counting minor features, the 6400 has HDMI 2.1, making it "leagues above" the 1650's HDMI 2.0.






Valantar said:


> If they have any non-F Intel CPU or any AMD APU, they already have hardware accelerated encoding support though. And if not, then, well, this GPU isn't for them. And quite frankly that's fine.




I'm not sure about some of the assumptions above.

I live in Indonesia and hang out on Facebook 'build a PC' forums.

Incomes are low compared to the West. Most builds have tended to be the i3-10100F or 10105F, due to low cost and excellent spec (8 threads).

A 10100F is 980K rupiah (14.4K = US$1, but a day's wages start around 100K/$7, so the PPP is much lower).

A 10100 is around 560K more.

A lot of people are now building with the i3-12100F instead, which is around 1600K. The premium for the IGP version is again around 500K.

A PC is an expensive purchase, and an IGP CPU in a cheap PC is a horrible waste of money if you're going to add a discrete GPU.

The GPU market here consists of:

* GT 1030: 1400-1500K depending on brand
* GTX 1050 Ti: 2900-3100K depending on brand
* RX 6500 XT: 3400-3600K depending on brand (cheapest = Biostar, Sapphire; more expensive = MSI)
* GTX 1650: 3600-3700K depending on brand (GDDR6 from Inno3D, Gainward, Palit, Zotac, etc.; GDDR5 from Asus and MSI, or pay more for GDDR6)
* GeForce RTX 3050: 5600-5800K
* RX 6600: 5750K+
* plus everything above that from the 3000/6000 series

The MSI RX 6400 has landed at 2970K. 

The GT 1030 is pretty horrible, and the 3050/6600 are just far too expensive in terms of absolute cost, so people are choosing between the 1050 Ti, 1650, 6500 XT, and now 6400 in that bracket.

The 6400 is a fundamental failure as a product in this context in that the price is far too close to the 6500 XT, and given that people build mini/mid-tower PCs with 550W bronze PSUs,  the 'low profile' feature is of no value at all, and the 6500 XT is about 33% faster, when overclocked.

There are a lot of people playing Valorant and PES who could make do with the 1030, and even though the 6400 is 3x faster, the problem is it's not competing with the 1030 on price at all, but with the 6500 XT and 1650.

If the argument is 'get a Quick Sync CPU', that fails: you would be better off with a 1650, which is in fact significantly faster when you get the GDDR6 version, will overclock still further, and works perfectly on the 10100F/10105F. Once you add the cost of the non-F CPU there is no saving, plus you will get much better resale for a 1650 than for a 6400.

In my experience seeing people's builds/queries, a HUGE proportion say they want to stream and/or do video editing, and while they can spend the extra 500K on the non-IGP CPU, most of the time they just end up with a 1650. And if they get the non-IGP CPU then they'll get a 6500 XT, not a 6400, which only serves SFF builds.

There doesn't seem to be a premium for a low profile 1650, either.

I feel that as a product it's substantially worse than a 1650: between the PCIe 3 issues, no encoder, no overclocking, the decent boost the GDDR6 1650 gets over the GDDR5 version, etc., this product series is already known as a turd and will have poor resale value.

It's nothing that couldn't be solved by cutting the price by 1/3, of course. But we need to be clear that the 1650 GDDR6 is a much better product.
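For readers outside Indonesia, the street prices listed above convert roughly as follows, using the 14.4K rupiah per US$1 rate quoted earlier in the post (midpoints of the quoted ranges; a rough sketch, not official pricing):

```python
# Rough USD conversion of the Indonesian street prices quoted above.
# Rate from the post: 14,400 rupiah per US$1; prices in thousands of rupiah.
RATE_K_PER_USD = 14.4

prices_k = {
    "GT 1030": 1450,        # midpoint of 1400-1500K
    "GTX 1050 Ti": 3000,    # midpoint of 2900-3100K
    "RX 6400 (MSI)": 2970,  # launch price from the post
    "RX 6500 XT": 3500,     # midpoint of 3400-3600K
    "GTX 1650": 3650,       # midpoint of 3600-3700K
}

for name, price_k in prices_k.items():
    print(f"{name}: ~${price_k / RATE_K_PER_USD:.0f}")
```

The conversion makes the post's point visible at a glance: the 6400 lands at roughly $206, right on top of the 1050 Ti and within ~$50 of the 6500 XT and 1650.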


----------



## Kissamies (Apr 26, 2022)

RedBear said:


> AMD APUs are PCIe Gen 3 only, so it's really a matter of matching this thing with a low end Alder Lake with iGPU, as it was true for the RX 6500 XT.


The R3 3100 from AMD could work as well on the budget side, as it supports PCIe 4.0 too.


----------



## RedBear (Apr 26, 2022)

Lenne said:


> R3 3100 from AMD could work as well from the budget side as it supports PCIe 4.0 too.


It doesn't have a video decoder/encoder, we were talking about using the iGPU's hardware encoder for streaming. Also, it's kind of sold out, at least here in Europe.


----------



## Kissamies (Apr 26, 2022)

RedBear said:


> It doesn't have a video decoder/encoder, we were talking about using the iGPU's hardware encoder for streaming. Also, it's kind of sold out, at least here in Europe.


Well, not everyone needs those, so it's up to the user to get hardware with those capabilities if needed.


----------



## ET3D (Apr 26, 2022)

Lenne said:


> R3 3100 from AMD could work as well from the budget side as it supports PCIe 4.0 too.


Is there any availability for the 3100? AMD's current low end range (4100, 4500, 5500) doesn't support PCIe 4.0.


----------



## Kissamies (Apr 26, 2022)

ET3D said:


> Is there any availability for the 3100? AMD's current low end range (4100, 4500, 5500) doesn't support PCIe 4.0.


Seems to be pretty much out of stock here in my country... and yeah, the newer ones are basically APUs without the iGPU.


----------



## Valantar (Apr 26, 2022)

RedBear said:


> I don't care much about nomenclature to be honest, the brackets tend to move over time, anyway the RX 6500 XT wasn't even an upgrade over its immediate predecessor, it was actually a slight downgrade in the average gaming FPS benchmark according to Wizzard's review, and I would be honestly interested in a comparison between the RX 6400 and the RX 5300 XT, if anyone owns the later.


... but we're not talking about the 6500 XT here, we're talking about the 6400. So rather than shifting the goal posts to talk about a GPU that's a far worse configuration than this, maybe try to make points on topic?


RedBear said:


> AMD APUs are PCIe Gen 3 only, so it's really a matter of matching this thing with a low end Alder Lake with iGPU, as it was true for the RX 6500 XT. The problem is that a lot of people might not notice this until they've already bought it, it doesn't help that reviewers tend to neglect the lack of video encoding capabilities (Wizzard for instance stressed that the low profile versions could be suboptimal for HTPCs, because they lack AV1 decoding capability, but there's no mention of the missing video encoders) and people might just assume that it's a common, baseline, capability nowadays.


That's true (for now - 6000-series APUs have PCIe gen 4), but as some (non-comprehensive) reviews have shown, the PCIe 3.0 bottleneck seems smaller than with the 6500 XT. I'd love to see that tested here though.


thelawnet said:


> I'm not sure about some of the assumptions above.
> 
> I live in Indonesia and hang out on Facebook 'build a PC' forums.
> 
> ...


So your ultimate point is exactly the same as everyone else here is saying - that this is a bit too expensive for what it offers? I would also recommend waiting a bit before you start doing price comparisons - everything you're comparing with has had time on the market for prices to settle, after all. Launch prices, particularly in the current GPU market, are likely to be on the high side of what you'll see in a month or two.

Other than that, while I don't doubt your description of reality here, it's rather myopic. For example, you're assuming that every GPU goes into a new build, while the majority of the market for a GPU like this is unlikely to be brand-new full system builds. Upgrades are a huge market for low-end GPUs. And while there's a significant price difference between low end F and non-F Intel CPUs today, that's a relatively new phenomenon. There is a huge install base out there of people with acceptable CPUs (Skylake and onwards at least) with integrated graphics and QuickSync where a GPU like this could deliver a decent gaming performance bump for them.

As for the 1650 GDDR6 being a "decent boost", well ... 7.5% isn't _that_ much. It's a bump, sure, but it's not even enough to catch the 1060 or RX 570.

There are definitely geographic differences though - people in the US have been describing significant price premiums for LP 1650s here. YMMV. There's a small (~10%) premium for LP versions here in Sweden.

Heck, this GPU absolutely isn't for everyone. It's a kind of weird repurposing of a very specific mobile-first design, with some odd feature omissions because of that. But if it comes down to a more sensible price - which seems likely as it's sticking at MSRP for now, rather than shooting up - this will be an excellent entry gaming contender for those who don't need encoding capabilities or have them through their CPU.


----------



## Forza.Milan (Apr 26, 2022)

I hope no one buys it, period. Except scalpers, maybe.


----------



## RedBear (Apr 26, 2022)

Valantar said:


> ... but we're not talking about the 6500 XT here, we're talking about the 6400. So rather than shifting the goal posts to talk about a GPU that's a far worse configuration than this, maybe try to make points on topic?
> 
> That's true (for now - 6000-series APUs have PCIe gen 4), but as some (non-comprehensive) reviews have shown, the PCIe 3.0 bottleneck seems smaller than with the 6500 XT. I'd love to see that tested here though.


How is it a worse configuration? They have the same 64-bit memory bus and the same 4 PCIe lanes with the same 16MB "Infinity Cache"; the RX 6400 simply uses a partially enabled (12 of 16 CU) Navi 24 chip with lower power and lower frequencies. They're quite comparable, and if the 6500 XT didn't manage to match the 5500 XT with its absurd frequencies, how do you expect this cut-down version to compare favourably against anything? It's a terrible value in terms of generational upgrade. I guess it might make you mad because you're an AMD fanboy who proudly displays his full AMD system, but a fact is a fact.


----------



## Valantar (Apr 26, 2022)

RedBear said:


> How is it a worse configuration? They have the same 64 bit memory bus and the same 4 PCIe lanes with the same 16MB "Infinity Cache", the RX 6400 simply uses a partially enabled (12 out of 16 CU) Navi 24 chip with lower power and lower frequencies. They're quite comparable and if the 6500 XT didn't manage to match the 5500 XT with its absurd frequencies how do you expect this cut-up version to compare favourably against anything? It's a terrible value in terms of generational upgrade, I guess it might make you mad because you're an AMD fanboy who proudly displays his full AMD system, but a fact is a fact.


Because it has its clock speeds pushed to an absolutely ridiculous level, tanking efficiency with very little performance to show for it, while highlighting the shortcomings of the oddly balanced Navi 24 die. And yes, I also agree that it not beating the 5500 is a travesty - I don't think AMD made the right choice with how they configured Navi 24 in general (or at least I think there ought to have been a die in between it and Navi 23).

But again: you were the one responding to me speaking of the 6400 by bringing up the 6500. So rather than continuing this meaningless offshoot, maybe you can address the points I made back then?

And again: what is the 6400 a generational upgrade on? There is no 5400, or any other recent 40-tier predecessor. Once again: if this was $120, it would be a great value entry-level GPU. It's just a tad too expensive. Ideally it should have matched the 5500's performance (as ideally each GPU should match the higher tier in the preceding generation), but given the failure of the 6500XT to do so, that was obviously not happening. Still, if this was about $120, it would be good value and excellent (chart-topping!) price/performance.


----------



## ET3D (Apr 26, 2022)

thelawnet said:


> I'm not sure about some of the assumptions above.


Every user has their own requirements. It seems that the 6400 is currently the best choice for those who want a low profile card. For those who don't, it might not be the best deal.


----------



## thelawnet (Apr 26, 2022)

Valantar said:


> As for the 1650 GDDR6 being a "decent boost", well ... 7.5% isn't _that_ much. It's a bump, sure, but it's not even enough to catch the 1060 or RX 570.


7.5% (for GDDR6) + 3% faster (GDDR5 vs 6400) + 8.9% overclock








Gigabyte GeForce GTX 1650 OC GDDR6 Review - Faster Memory Helps? (www.techpowerup.com)

We put NVIDIA's new GTX 1650 GDDR6 to the test. The upgraded memory offers a +50% bandwidth increase, which definitely helps make up some ground against the GTX 1650 Super and RX 5500 XT. Since Gigabyte installed 14 Gbps chips and clocked them lower, memory OC works really well, too.
				




Totalling 20%.

That's not a worthwhile upgrade if you already own a 1650, but it should definitely be a consideration when buying new. If the 6400 = $200, that 20% is $40 worth of value, before you consider the encoder and resale value.

As far as upgrades go, it's bad because you could have something like an i3/i5/i7-4xxx in a PCIe 2.0 H81 motherboard and lose a third of your performance, and on anything else you lose 13% due to PCIe 3.0.

That would put the difference between an overclocked 1650 GDDR6 and an RX 6400 in a PCIe 3.0 motherboard at 38%.

If you are on PCIe 3.0, I think this is only slightly faster than an overclocked 1050 Ti, and given the other issues it's questionable whether it's better at all.

So for upgraders it's bad; for new builds it's bad given the existence of low-profile 1650 GDDR6s, but OK if it's sufficiently cheaper - fine for that purpose only.
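For anyone sanity-checking the math: percentage deltas compound multiplicatively rather than adding. A quick sketch using only the figures quoted in the post above (the `compound` helper is just for illustration, not new measurements):

```python
# Relative performance deltas compound multiplicatively, not additively.
def compound(*gains_pct):
    """Combine performance deltas given in percent into one total delta."""
    total = 1.0
    for g in gains_pct:
        total *= 1.0 + g / 100.0
    return (total - 1.0) * 100.0

# GDDR6 over GDDR5 1650 (+7.5%), GDDR5 1650 over RX 6400 (+3%), memory OC (+8.9%)
total = compound(7.5, 3.0, 8.9)
print(f"1650 GDDR6, overclocked, vs RX 6400: +{total:.1f}%")  # ~ +20.6%

# Add the ~13% the RX 6400 loses on a PCIe 3.0 board (dividing, since it is
# the 6400 that gets slower):
gen3 = ((1 + total / 100) / (1 - 0.13) - 1) * 100
print(f"...with the 6400 on PCIe 3.0: +{gen3:.0f}%")          # ~ +39%
```

That lands within a point of the 38% figure above; the small gap is just rounding in the inputs.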


----------



## Valantar (Apr 26, 2022)

thelawnet said:


> 7.5% (for GDDR6) + 3% faster (GDDR5 vs 6400) + 8.9% overclock
> 
> 
> 
> ...


20% is a meaningful difference, but I really don't follow your math on anything beyond that. If you're CPU limited, then that _reduces_ the PCIe losses seen when testing the 6500 XT at PCIe 3.0 and lower - these tests are all done on current, blazingly fast CPUs, after all. And, of course, a CPU bottleneck will work out mostly the same across different products at similar performance, but will restrict higher performance parts more. You're not running a 12900K in that PCIe 3.0 motherboard, after all.


----------



## Frick (Apr 26, 2022)

Valantar said:


> And again: what is the 6400 a generational upgrade on? There is no 5400, or any other recent 40-tier predecessor. Once again: if this was $120, it would be a great value entry-level GPU. It's just a tad too expensive. Ideally it should have matched the 5500's performance (as ideally each GPU should match the higher tier in the preceding generation), but given the failure of the 6500XT to do so, that was obviously not happening. Still, if this was about $120, it would be good value and excellent (chart-topping!) price/performance.



Honestly the closest predecessor is the RX 550, and seen from that perspective this card is a major success. Retail price here seems to be about €220, which sounds absolutely nuts until you realize that's what a GeForce GTX 1650 costs. This thing at €150 in a low-profile format in the current market would be really nice. This card at <=€120 would mean the market has gone back to "normal". It's a real shame it's locked down.

Anyway, good to have a good review on this card, thanks!


----------



## AusWolf (Apr 26, 2022)

The red spirit said:


> That and ability to record.


Not too many people want to record their gameplay. Those who do have many other options.



The red spirit said:


> It's handy to have. AMD VCE allows to convert or compress videos way faster than CPU. I had to do that. I also found it handy to be able to record with ReLive. Decoding is powering all videos basically. That's just an unquestionable feature set of video cards ever since 90s. My ATi X800 Pro could record, then why RX 6400 can't? AIW version could also record from inputs, but I don't have that card, nor I particularly care about such features. It's just something so stupid to take away from card. Imagine nVidia released RTX 4050 without any CUDA support. How would you feel? Or even better, AMD released RX 6300, which wouldn't support DX12 at all.


So you're arguing that the 6400 should have a video encoder only because its predecessors had them? That's a poor argument. So is bringing up CUDA, which has much broader usage than a video encoder - handling in-game physics, for example. If you mean NVENC, then honestly, I couldn't give a rat's arse whether my GPU has it or not. I haven't converted a single video file since smartphones started to have their own built-in video decoders. If I ever had to, I'd do it on my main PC, which has an 11700 and a 2070. Like I said, I look at the 6400 as an HTPC / low-spec gaming card, not as a media encoding / streaming machine.

I still don't understand where this sudden interest in gameplay capture / video conversion came from.



The red spirit said:


> Lucky you, my Athlon X4 845 is very borderline with 1080p60 and can drop frames. A card capable of decoding would be great for CPU like this.


Luck has nothing to do with it. I'm pretty sure any fairly modern mainstream CPU can play YouTube - your Athlon X4 isn't one (in fact, I've always considered the FM2 platform kind of DOA). I'll test my 4765T tomorrow and let you know.



The red spirit said:


> Does it even matter, if they are pile of poo?


What is your definition of "a pile of poo"?


----------



## Tetras (Apr 26, 2022)

Frick said:


> Honestly the closest predecessor is the RX550, and seen from that perspective this card is a major success. Retail price here seems to be about €220, which sounds absolutely nuts until you realize that is what a Geforce 1650 cost. This thing at €150 in a low profile format in the current market would be really nice. This card at <=€120 would mean the market has gone back to "normal". It's a real shame it's locked down.
> 
> Anyway, good to have a good review on this card, thanks!



Purely from a hardware perspective, I agree. I have two RX 550s and I'd be happy to replace them with an RX 6400, but the first one cost 90 dollars (near release) and the second 70 dollars (near EOL). The cheapest RX 6400 is 200. I'd rather have an RX 6600 for 350, which is 2.5-3x as fast.


----------



## Nuke Dukem (Apr 26, 2022)

There are no bad graphics cards, guys. Just badly priced ones 

And even badly priced, this will probably be the weapon of choice for countless uncles across the globe when they consider turning their 12-year-old nephews' SFF school homework PCs into Fortnite gamers. I mean, what are you going to get? There's nothing cheaper that's new and beats decent IGP performance.



simlife said:


> *Y*eah*, at* the same time 11 bucks a*n* hour was consider*ed* decent in low cost non*-*cit*ies. It's been* 14-15+ or *whatever* *for* over a year*.* *FYI* that is a ton of money before any overtime if you want to do math... *C*ompan*ies* can*'*t pay *the* engine*e*rs*,* and *pay* taxes and ect ect ect*.* cant and do *$*100... *K*eep in mind *that* tech is CHEAP now..*.* *T*he *N*eo *G*eo console was *$*649.99 dollars in the 90s*,* when if you made 11 bucks you were doing decent for yourself...*_*( and a low*-*end computer could cost like 2*K* *--* again*,* a TON for the time) *P*erspective*,* boy*,* perspective..*.*
> 
> *Y*ou have to pay the warehouses, driver*,* if not *-* the brick and mort*a*r people..*.* before the tech and actual cost of copper ect ect of everything they use...(just to have a physica*l* item that works is a base cost then the *people* who ship or stores *that* stock it... *T*his is the 1030 again... 100000% just don*'*t go super cheap*.* *People* don*'*t do that when they buy pla*nes* or cars*,* the*re* is a mi*ni*mum cost you want
> "it*'*s to*o* high bec*a*use its power is to*o* low"... *I*f you said it before*,* then you *were* wrong before... *U*nless you think *people* should make 30*K* or less to stock or ship stuff like food other items*.*



To be this careless with your spelling and punctuation is borderline offensive.



etayorius said:


> Thanks for the review. It's an OK card for the price, which is still a bit overpriced, but it is what it is. If this card had cost no more than $120 USD I would have bought one for my kids' PC, which has a Phenom II X4 965 BE, 6GB DDR3 and an R7 240 2GB. They use it mainly to play Roblox, Fortnite and Genshin.



As much as I love my PII X4, I'm fairly certain that even the venerable 965 will massively bottleneck even a card this slow (by modern standards). Also, the motherboard is only PCI-E 2.0, that's going to further choke performance. But if whoever is playing doesn't mind, who am I to say anything? 



W1zzard said:


> finishing pcie 3.0 run first, then running gt1030



inb4 LowSpecPowerUp is born... I'm loving it


----------



## ModEl4 (Apr 26, 2022)

W1zzard said:


> I have a GT1030, and tried to find it for this review, but no luck
> 
> edit: omg found it
> View attachment 245029
> ...


You should be commended for all this hard work!
The choice of the 2GB versions of the RX 550/560 is peculiar imo; it only shows how 2GB cripples performance in today's 1080p test bed. For example, the 4GB RX 560 (16 CU version) should be close to 60% of the RX 6400, not 42% like the 2GB version.
Also, the GT 1030 (launched at a $69 SRP) comparison, although welcome, will just show too big a difference due to its 2GB; it would be preferable to compare against the 1050 Ti (launched at a $139 SRP) instead imo.
Maybe you didn't have stock of these VGAs?
I just hope that when the new iGPUs show up with 13th gen Intel and of course the RDNA2 6000G series, they get compared with 4GB discrete graphics versions, in order to be a fair comparison in today's testbed.


----------



## Kissamies (Apr 26, 2022)

It's good to have comparison with GT 1030 as it's the last well-known and affordable low-profile card. Faster cards like 1050 Ti and 1650 are less common and more expensive in low-profile form.


----------



## sLowEnd (Apr 26, 2022)

The price is a bit hard to stomach, but the card is otherwise fine for what it is IMO.
If it was ~$30-40 less it'd be easier to justify.


----------



## Kissamies (Apr 26, 2022)

sLowEnd said:


> The price is a bit hard to stomach, but the card is otherwise fine for what it is IMO.
> If it was ~$30-40 less it'd be easier to justify.


Agree. Too close to the 6500 XT when it comes to pricing.


----------



## Trov (Apr 26, 2022)

Was PCIe 3.0 vs 4.0 compared specifically on this card? I think it may be a stretch to assume the same performance drop that the 6500 XT has. I am a little disappointed in this review because I was specifically looking for that comparison on this card in particular; nobody seems to have done it yet. The lower performance and/or lower graphics settings for this card may mean the bus does not get saturated even at 3.0.

As far as pricing and the conclusion go, I think one must also compare it to the other cards in its form factor - low profile. In this realm the low-profile GTX 1650 is selling for $300 and up, and even 1050 Tis are sold new for over $250.

In Single Slot Low Profile, its only competition is Quadro T1000, which performs just under a GTX 1650 but sells for $400 and up. Though that card does have NVENC.

Based on that, and considering Low Profile cards only, I would consider the RX 6400 to be a good deal. If you can fit any bigger card though, it definitely isn't. If you're looking for something to toss into a Lenovo Thinkstation Tiny, it's the best card available. I think the final conclusion of this review is lacking in not mentioning the low profile/single slot applications of this card (though I do realize the reviewed card was a full-height version)

I have a Thinkstation with a Quadro T1000 I bought a couple weeks ago (and am now regretting given this is under half the price.) I have a 6400 on the way and will compare the 6400 with that card to determine which is the Single Slot Low Profile King.
(not counting of course the RTX A2000 with a custom made single slot cooler, as A2000 prices are currently ludicrous)


----------



## Ravenas (Apr 27, 2022)

I’m interested in FSR results. Was this touched on and I missed it?


----------



## eidairaman1 (Apr 27, 2022)

So is this card x16, or x4 like the 6500 XT?


----------



## AusWolf (Apr 27, 2022)

eidairaman1 said:


> So is this Card 16X or 4X like the 6500XT?


x4. It's essentially the same thing with a slightly cut-down GPU (and a lot better efficiency as a result).



ModEl4 said:


> You should be commended for all this hard work!
> The choice of the 2GB versions of the RX 550/560 is peculiar imo; it only shows how 2GB cripples performance in today's 1080p test bed. For example, the 4GB RX 560 (16 CU version) should be close to 60% of the RX 6400, not 42% like the 2GB version.
> Also, the GT 1030 (launched at a $69 SRP) comparison, although welcome, will just show too big a difference due to its 2GB; it would be preferable to compare against the 1050 Ti (launched at a $139 SRP) instead imo.
> Maybe you didn't have stock of these VGAs?
> I just hope that when the new iGPUs show up with 13th gen Intel and of course the RDNA2 6000G series, they get compared with 4GB discrete graphics versions, in order to be a fair comparison in today's testbed.


I also have a 1050 Ti at hand.  Do you want me to do a comparison when my 6400 arrives?


----------



## Frick (Apr 27, 2022)

AusWolf said:


> I also have a 1050 Ti at hand.  Do you want me to do a comparison when my 6400 arrives?



Yeah that'd be nice. The goal should be to achieve close to 60FPS at 1080p across all games.


----------



## R0H1T (Apr 27, 2022)

AusWolf said:


> What point? This is a 1650-eqivalent card with the same amount of VRAM and the same VRAM bandwidth. The 1650 does it with 128-bit GDDR5, the 6400 with 64-bit GDDR6. Only that low profile 1650s go for £250-300 on ebay while the 6400 costs £160 new. What's not to like?


This isn't a 750 Ti/1050 Ti-class replacement ~ which, tbf, hasn't been a segment AMD has competed in for the last decade, if ever in fact. A PCIe slot-powered card topping the efficiency charts in a 1080p lineup ~ this is not it, & it's at least 60% overpriced!
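As an aside, the "same VRAM bandwidth" point in the quote checks out on paper: bus width times per-pin data rate gives the same peak number for both cards. A quick sketch, assuming the standard reference memory speeds (8 Gbps GDDR5 on the 1650, 16 Gbps GDDR6 on the 6400):

```python
# Peak memory bandwidth: bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
def mem_bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits * data_rate_gbps / 8

gtx_1650 = mem_bandwidth_gb_s(128, 8)   # 128-bit GDDR5 at 8 Gbps
rx_6400 = mem_bandwidth_gb_s(64, 16)    # 64-bit GDDR6 at 16 Gbps
print(gtx_1650, rx_6400)                # 128.0 128.0 - identical on paper
```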


----------



## ModEl4 (Apr 27, 2022)

AusWolf said:


> X4. It's essentially the same thing with a slightly cut-down GPU (and lot better efficiency as a result).
> 
> 
> I also have a 1050 Ti at hand.  Do you want me to do a comparison when my 6400 arrives?


Sure, if you like, although it's easy to figure out the performance level:
If we check within Nvidia's ecosystem, three years ago the 1050 Ti was 75% of the 1650, and the 1650 was 20.63% slower than the 1060 6GB.
Today the difference remains the same between the 1650/1060: the Turing-based 1650 is 20.16% slower than the Pascal-based 1060 6GB, so essentially for this comparison Pascal, in GTX 1060 6GB form, lost only 0.5% vs the Turing-based 1650.
If the Pascal-based 1050 Ti, which has the same 4GB of memory as the 1650, also lost 0.5%, it should be at 76-77% of the RX 6400 (the 1650 is 3% faster than the RX 6400 in TPU's results).
In a PCI Express 3.0 system, where the RX 6400 is 9-10% slower, that becomes 84-85% of the RX 6400.
Or something like that; it can't be too far off...


			https://tpucdn.com/review/evga-gtx-1650-sc-ultra-black/images/relative-performance_1920-1080.png
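The chain above is just multiplied ratios; a quick sketch with the quoted chart numbers (rough readings, not new measurements):

```python
# Chaining relative-performance ratios from the TPU charts quoted above.
ti_vs_1650 = 0.75      # 1050 Ti ~ 75% of a GTX 1650
g1650_vs_6400 = 1.03   # GTX 1650 ~ 3% faster than the RX 6400 (PCIe 4.0 testbed)
pcie3_loss = 0.095     # RX 6400 loses ~9-10% on a PCIe 3.0 system

# On a PCIe 4.0 system:
ti_vs_6400 = ti_vs_1650 * g1650_vs_6400
print(f"1050 Ti vs RX 6400 (PCIe 4.0): {ti_vs_6400:.0%}")       # ~77%

# On PCIe 3.0 the 6400 itself gets slower, so the 1050 Ti closes the gap:
ti_vs_6400_gen3 = ti_vs_6400 / (1 - pcie3_loss)
print(f"1050 Ti vs RX 6400 (PCIe 3.0): {ti_vs_6400_gen3:.0%}")  # ~85%
```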


----------



## Taraquin (Apr 27, 2022)

This card could be awesome if it cost $100 max, had a passive cooler, and could be overclocked. I think the 1650 is a much better choice for most people as it stands now if you want a slot-powered card.


----------



## AusWolf (Apr 27, 2022)

Frick said:


> Yeah that'd be nice. The goal should be to achieve close to 60FPS at 1080p across all games.





ModEl4 said:


> Sure if you like, although it's easy to figure out the performance level:
> If we check within Nvidia's ecosystem, three years ago the 1050 Ti was 75% of the 1650, and the 1650 was 20.63% slower than the 1060 6GB.
> Today the difference remains the same between the 1650/1060: the Turing-based 1650 is 20.16% slower than the Pascal-based 1060 6GB, so essentially for this comparison Pascal, in GTX 1060 6GB form, lost only 0.5% vs the Turing-based 1650.
> If the Pascal-based 1050 Ti, which has the same 4GB of memory as the 1650, also lost 0.5%, it should be at 76-77% of the RX 6400 (the 1650 is 3% faster than the RX 6400 in TPU's results)
> ...


Cool, I'll check it when it arrives and I have the time over the weekend. It's going into a pci-e gen 3 system, so it should be a pretty interesting comparison. 



R0H1T said:


> This isn't a 750 Ti/1050 Ti-class replacement ~ which, tbf, hasn't been a segment AMD has competed in for the last decade, if ever in fact. A PCIe slot-powered card topping the efficiency charts in a 1080p lineup ~ this is not it, & it's at least 60% overpriced!


Exactly my point!  This thing isn't supposed to replace anything. It's just a slot-powered / low-profile card that can do a bit of light gaming or movie watching (as long as you don't need hardware AV1 acceleration).


----------



## Valantar (Apr 27, 2022)

AusWolf said:


> Cool, I'll check it when it arrives and I have the time over the weekend. It's going into a pci-e gen 3 system, so it should be a pretty interesting comparison.
> 
> 
> Exactly my point!  This thing isn't supposed to replace anything. It's just a slot-powered / low-profile card that can do a bit of light gaming or movie watching (as long as you don't need hardware AV1 acceleration).


I'll be really interested in that comparison too - and just generally in a "what settings do you need for 1080p60 on a 6400" type of test. Especially interesting with PCIe 3.0 (and presumably not the most powerful CPU?) too.


----------



## catulitechup (Apr 27, 2022)

Frick said:


> Yeah that'd be nice. The goal should be to achieve close to 60FPS at 1080p across all games.


The RX 6400 is too weak for 1080p 60 fps*; it suffers in many titles, often sitting around 30 fps.



> *For this purpose, according to TPU benchmarks, the RTX 3050, RX 6600 or higher are better options.



For a media center it has a better form factor, but the lack of AV1 hardware support and the price work against it; on that side, Intel's Xe-based iGPUs (with AV1 support) are better - maybe a Pentium G7400 or ideally an i3-12100**.



> **This CPU crushes the Ryzen 3 3100 because it offers much better CPU performance, has a Xe-based iGPU, retains DDR4 compatibility and is cheaper than the roughly $160 Ryzen 3 3100 on Newegg.


----------



## Kissamies (Apr 27, 2022)

catulitechup said:


> The RX 6400 is too weak for 1080p 60 fps*; it suffers in many titles, often sitting around 30 fps.
> 
> 
> 
> For a media center it has a better form factor, but the lack of AV1 hardware support and the price work against it; on that side, Intel's Xe-based iGPUs (with AV1 support) are better - maybe a Pentium G7400 or ideally an i3-12100**


Unless you have a 3100 already..


----------



## Valantar (Apr 27, 2022)

catulitechup said:


> The RX 6400 is too weak for 1080p 60 fps*; it suffers in many titles, often sitting around 30 fps.


TPU benchmarks are run at Ultra. Which is understandable and a good practice for higher end cards (even if it's dumb to play at Ultra even with those cards), but rather ridiculous for a card like this. I have zero doubt the 6400 can do 1080p60 steady in pretty much any game you throw at it as long as you set the settings to some more reasonable level for this class of GPU. Obviously not in RT, but AAA games more broadly? At medium-ish? I wouldn't be surprised at all.


----------



## The red spirit (Apr 27, 2022)

thelawnet said:


> 7.5% (for GDDR6) + 3% faster (GDDR5 vs 6400) + 8.9% overclock


Why are you quoting me? I didn't write that.


----------



## ModEl4 (Apr 27, 2022)

AusWolf said:


> Cool, I'll check it when it arrives and I have the time over the weekend. It's going into a pci-e gen 3 system, so it should be a pretty interesting comparison.
> 
> 
> Exactly my point!  This thing isn't supposed to replace anything. It's just a slot-powered / low profile card that can do a bit of light gaming or movie watching (as long as you don't need hardware AV-1 acceleration).


Great. Just keep in mind that at 1080p your CPU (and your system in general) plays a big role, so if it's not fast enough it will influence the results accordingly. Depending on the game - if it is CPU-limited, for example - you will get results quite different from W1zzard's, and you'll see an even greater difference if you go from ultra to low settings (in order to hit 60 fps or so in some games). There are many factors that will make your results not comparable with TPU's.
Generally, when there isn't another factor involved (VRAM size differences, for example), as you go down in resolution and tone down settings, the difference between two VGAs becomes smaller and smaller, making the slower GPU seem closer and closer to the faster one. (Regarding resolution, there is for example the 6700 XT/3060 Ti case, where going down from 4K to QHD we actually see the 6700 XT gain in performance delta vs the 4K comparison - but this happens only this generation, due to Infinity Cache peculiarities on AMD's side and, on Nvidia's side, latency penalties from moving the ROPs inside the GPC, when in the past they were closely tied to the memory controller and L2.)


----------



## The red spirit (Apr 27, 2022)

AusWolf said:


> Not too many people want to record their gameplay. Those who do have many other options.


You do, but only with the CPU or a capture card, and neither is appealing to the low-spec market when ReLive and Shadowplay exist.




AusWolf said:


> So you're arguing that the 6400 should have a video encoder only because its predecessors had them? That's a poor argument. So is bringing up CUDA, which has a lot broader usage than a video encoder - handling in-game physics for example. If you mean nvenc, then honestly, I couldn't give a rat's arse if my GPU has it or not. I haven't converted a single video file since smartphones started to have their own built-in video decoders. If I ever had to, I'd do it on my main pc which has an 11700 and a 2070. Like I said, I look at the 6400 as a HTPC / low spec gaming card, not as a media encoding / streaming machine.


And you miss the point by dismissing the lack of proper decoders - which are the entire reason why you would buy a low-end "display adapter" like the RX 6400. BTW, CUDA is actually relevant for media usage too. Ever heard of madVR? Some people actually want to use it.



AusWolf said:


> I still don't understand where this sudden interest in gameplay capture / video conversion came from.


It has always been there; it's just that everyone assumed the GPU could do it, and that was that.



AusWolf said:


> Luck has nothing to do with it. I'm pretty sure any fairly modern mainstream CPU can play Youtube - your Athlon X4 isn't one (in fact, I've always considered the FM2 platform kind of DOA). I'll test my 4765T tomorrow and let you know.


You can keep shitting on the Athlon X4, but you are lucky to be able to swap the core components of your computer every 2 years or so.



AusWolf said:


> What is your definition of "a pile of poo"?


The RX 6400 and RX 6500 XT, aka laptop specials for desktop users, with love from Alibaba.


----------



## W1zzard (Apr 27, 2022)

The review has been updated with GT1030 numbers .. I hate you guys .. testing at these FPS rates is such a shitshow

Working on RTX 3090 Ti FE review now, and PCIe 3.0 scaling for 6400


----------



## Valantar (Apr 27, 2022)

For those interested, HWUB has their review up, and as usual it is in-depth and excellent. Includes comparisons to the 1050 Ti, 1650, RX 570, and a handful of faster cards, and tests are run at non-Ultra settings tuned for decent performance on low-end hardware. My only further wish would have been for a lower-end CPU to make for a more realistic test system, but that's also problematic in its own ways.

In summary: worse than I had expected. The PCIe 3.0 bottleneck is still there, though possibly a tad smaller than on the 6500 XT. On average across their 12 tested games the 6400 matches a 1650 on PCIe 4.0, but loses by a noticeable margin on 3.0. It _mostly_ matches the RX 570, but there are several outlier titles where the 570 completely runs away from it (R6S, F1 2021), pulling the average down. Doom Eternal seems to work very poorly on 4GB AMD cards, as even the RX 570 falls significantly behind the 1650 there. Performance is, as with the 6500 XT, very hit-or-miss, and they end up only recommending it specifically for low profile builds, as for everything else it's just not worth the money.









I'm guessing the written Techspot review will probably show up at some point too for those not wanting to watch a video.


----------



## catulitechup (Apr 27, 2022)

Valantar said:


> TPU benchmarks are run at Ultra. Which is understandable and a good practice for higher end cards (even if it's dumb to play at Ultra even with those cards), but rather ridiculous for a card like this


For this reason it might be interesting to see performance at 720p; newer titles should still run at around acceptable rates, and, as you said, higher quality settings on this card are more or less like ray tracing on this card.


----------



## Kissamies (Apr 27, 2022)

catulitechup said:


> For this reason it might be interesting to see performance at 720p; newer titles should still run at around acceptable rates, and, as you said, higher quality settings on this card are more or less like ray tracing on this card.


Who uses 720p these days? I had a GT 1030 on my ex-HTPC and even with that card, I couldn't even think about going below 1080p.


----------



## catulitechup (Apr 27, 2022)

Very interesting results; in various titles PCIe 3.0 is a huge loss.

In summary, same story as the RX 6500 XT: if you don't have a PCIe Gen 4 motherboard, forget it.


----------



## Kissamies (Apr 27, 2022)

catulitechup said:


> very interesting results in various pci-e 3.0 is a huge loss


If it were just an 8-lane card... choking it to x4 is just stupid.


----------



## catulitechup (Apr 27, 2022)

Lenne said:


> If it were just an 8-lane card... choking it to x4 is just stupid.


Yeah, fucking scumbag companies going low-cost on things that shouldn't be cut, like PCIe lanes, as you said.

For this reason, in my case I'm waiting for Intel Arc, because Intel also uses PCIe Gen 4 but with x8 lanes, as you suggest.


----------



## Kissamies (Apr 27, 2022)

catulitechup said:


> Yeah, fucking scumbag companies going low-cost on things that shouldn't be cut, like PCIe lanes, as you said.
> 
> For this reason, in my case I'm waiting for Intel Arc, because Intel also uses PCIe Gen 4 but with x8 lanes, as you suggest.


I did read somewhere (somebody probably posted on some thread here) that this Navi 24 chip was designed for laptop use, so that's why its lanes were cut to x4, and they decided to bring it to desktops later.

Anyway, hella stupid choice especially when it has only 4GB of VRAM which is a bottleneck in modern games. And when pairing it with a PCIe 3.0 platform, the problem gets even worse. At least RX 5300/5500 cards had 8 lanes.


----------



## Valantar (Apr 27, 2022)

Lenne said:


> I did read somewhere (somebody probably posted on some thread here) that this Navi 24 chip was designed for laptop use, so that's why its lanes were cut to x4, and they decided to bring it to desktops later.
> 
> Anyway, hella stupid choice especially when it has only 4GB of VRAM which is a bottleneck in modern games. And when pairing it with a PCIe 3.0 platform, the problem gets even worse. At least RX 5300/5500 cards had 8 lanes.


It is, it seems to be explicitly designed as a low-cost dGPU for pairing with 6000-series APUs in relatively thin-and-light designs at a low price point. Even accounting for this, I'm increasingly baffled at how it's held back by that PCIe bus. How much more area would an x8 connection have cost them? 16 more pins on the package, plus a few mm2 of die area? That cut can't possibly be worth it, considering the performance losses seen here.
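To put rough numbers on that: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line encoding, and 4.0 doubles the rate, so an x8 Gen 3 link would have matched the x4 Gen 4 link the card was designed around. A quick sketch (approximate, ignoring protocol overhead beyond line encoding):

```python
# Approximate usable PCIe link bandwidth in GB/s.
# Gen 3: 8 GT/s per lane, Gen 4: 16 GT/s, both with 128b/130b line encoding.
def pcie_gb_s(gen: int, lanes: int) -> float:
    gt_per_s = {3: 8.0, 4: 16.0}[gen]
    return gt_per_s * (128 / 130) / 8 * lanes

print(f"PCIe 3.0 x4: {pcie_gb_s(3, 4):.1f} GB/s")  # ~3.9 - the 6400 on an older board
print(f"PCIe 4.0 x4: {pcie_gb_s(4, 4):.1f} GB/s")  # ~7.9 - as designed
print(f"PCIe 3.0 x8: {pcie_gb_s(3, 8):.1f} GB/s")  # ~7.9 - what an x8 link would give
```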


----------



## catulitechup (Apr 27, 2022)

Lenne said:


> I did read somewhere (somebody probably posted on some thread here) *that this Navi 24 chip was designed for laptop use, so that's why its lanes were cut to x4, and they decided to bring it to desktops later.*
> 
> Anyway, hella stupid choice especially when it has only 4GB of VRAM which is a bottleneck in modern games. And when pairing it with a PCIe 3.0 platform, the problem gets even worse. At least RX 5300/5500 cards had 8 lanes.


Yeah, this is a laptop chip, severely cut down from the start; maybe they adapted chips that didn't sell to desktop to avoid a loss, without forgetting to slap on a horrible price despite the insufficient PCIe lanes.

Or maybe they're huge scumbags who made the RX 6400 with all these failings to push users to buy a PCIe Gen 4 motherboard like B550, since the market has many of these motherboards unsold.


----------



## Kissamies (Apr 27, 2022)

Valantar said:


> It is, it seems to be explicitly designed as a low-cost dGPU for pairing with 6000-series APUs in relatively thin-and-light designs at a low price point. Even accounting for this I'm ever increasingly baffled at how it's held back by that PCIe bus. How much more area would an x8 connection have cost them? 16 more pins on the package, plus a few mm2 of die area? That cut can't possibly be worth it, considering the performance losses seen here.


Yeah, a slight redesign would've been more than appreciated.


----------



## progste (Apr 27, 2022)

It would make sense at $80, kinda like "my first budget gaming PC" material.
Maybe once prices return to sane levels it will go back to that, but until then people trying to build on a budget are probably better off looking at the used market.


----------



## Kissamies (Apr 27, 2022)

progste said:


> It would make sense at $80, kinda like "my first budget gaming PC" material.
> Maybe once prices return to sane levels it will go back to that, but until then people trying to build on a budget are probably better off looking at the used market.


With a 160 USD MSRP, I highly doubt that. And yeah, a used card is without a doubt a much wiser choice.


----------



## kanecvr (Apr 27, 2022)

The only disappointing things about this card are the price (first and foremost) and the PCIe x4 interface. What moron thought that would be a good idea? Especially since these kinds of entry-level cards are used primarily to upgrade older PCs or add graphics to older OEM office machines.

Good job, AMD! You just made sure anyone looking for a low-profile, low-TDP graphics card to turn an old office PC into an entry-level gaming computer or media center (a very common practice in my part of Europe) steers clear of your graphics cards. This includes the 6500 XT and to some extent the 6600 XT.


----------



## Kissamies (Apr 27, 2022)

kanecvr said:


> The only disappointing things about this card are the price (first and foremost) and the PCIe x4 interface. What moron thought that would be a good idea? Especially since these kinds of entry-level cards are used primarily to upgrade older PCs or add graphics to older OEM office machines.
> 
> Good job, AMD! You just made sure anyone looking for a low-profile, low-TDP graphics card to turn an old office PC into an entry-level gaming computer or media center (a very common practice in my part of Europe) steers clear of your graphics cards. This includes the 6500 XT and to some extent the 6600 XT.


Agree. With even x8, this would (in a low-profile form) be a great card for the SFF office PCs many tech-tubers review from time to time.


----------



## mechtech (Apr 27, 2022)

@W1zzard 

any plans to review amd 5500/5600 CPUs??


----------



## Kissamies (Apr 27, 2022)

mechtech said:


> @W1zzard
> 
> any plans to review amd 5500/5600 CPUs??


Personally I find the 5700X way more interesting..


----------



## defaultluser (Apr 27, 2022)

AusWolf said:


> Let's not forget about the fact that the 6400 and 6500 XT are the only cards that are selling for MSRP brand new.




The 6500 XT did this because it's the shittiest card ever produced.

As a result, the 50% faster 3050 got bum-rushed by anyone with a brain, but is now dropping to MSRP:

https://www.evga.com/products/product.aspx?pn=08G-P5-3551-KR

*The 6400 can at least justify its existence (but only for hardcore RX 560 holdouts who already passed on the low-profile 1650!)*


----------



## thelawnet (Apr 27, 2022)

defaultluser said:


> The 6500 XT did this because it's the shittiest card ever produced.
> 
> As a result, the 50% faster 3050 got bum-rushed by anyone with a brain, but is now dropping to MSRP:
> 
> ...



In my market the 3050 is the same price as the 6600, and it costs more than 50% more than the 6500 XT.

At the moment the 6500 XT is good here because it offers significantly better performance than anything around its price. We don't even have an RRP here, and if you offered the price of a 6500 XT for a 3050 they would just laugh at you. There is no such thing as selling cards at RRP; it's just 100% market forces.

Both the 6400 and 6500 XT are completely justifiable products; the *only* issue is the price. There is literally no issue with a $100 GPU having no encoder and requiring PCIe 4.0. And once you consider that $100 is now $200 when buying GPUs, it is what it is.

All the hysteria about the 6500 XT was just ridiculous.


----------



## Valantar (Apr 27, 2022)

thelawnet said:


> In my market the 3050 is the same price as the 6600,  and it costs more than 50% more than the 6500 XT.
> 
> At the moment the 6500 xt is good here because it offers significantly better performance than anything around the price. We don't even have an RRP here, and if you offered the price of a 6500 xt for a 3050 they would just laugh at you. There is no such thing as selling cards at RRP, it's just 100% market forces.
> 
> ...


While I mostly agree with you, these GPUs do stand out in a negative sense in how variable their performance is, and how they perform disproportionately badly in certain titles. Outside of those titles, performance is decent - they're just priced a bit too high for what they deliver (but then so are all GPUs these days). But with two Navi 24 implementations now demonstrating these odd bottlenecks and uneven performance, it's becoming clear that the prioritizations made by AMD when configuring this die were ... well, a very poor fit for desktop usage, put simply. The die area cost of adding an x8 PCIe link would have been next to nothing. The memory bandwidth is a more difficult hurdle to overcome, and is also a bit more excusable - or it would have been if the PCIe bus was wide enough to stream in assets more rapidly. Instead, we get a GPU that, while performing decently overall, is beaten by the GTX 1050 Ti in several titles in HWUB's testing when limited to PCIe 3.0. That's ... to be honest, shockingly bad. The 1050 Ti came out in 2016, and had a power budget similar to this. If not for these weird bottlenecks, this GPU would have _trounced_ it, but instead AMD chose to cripple it in some very strange ways. They _kind of_ make sense in a very specific use case: low cost dGPUs for affordable, low power but gaming capable thin-and-light laptops. And as AMD's 6000 APUs also have PCIe 4.0, these will likely look pretty good for what they are (25W or 35-50W mobile GPUs, essentially competing with MX550 and the like). But for desktop usage? Eeeeeeh. _Really_ depends on your games. In most cases they are decent, but expensive. In a few cases they are atrociously bad. That's ... not a good mix. That's like having a friend who is okay most of the time but tends to drink all your beer when they come over, but then also at times starts massive arguments over nothing. Is that a good experience?

To me, it looks like AMD's choice to drastically cut the memory bus _and_ the PCIe link _and_ the Infinity Cache was at least one cut too many. An increase in any of the three would likely have gone some way towards alleviating these bottlenecks. As it stands, it's a real shame, as this die had so much potential to be a great low power chip. Instead it's a kind of good but also weirdly unpredictable one.


----------



## catulitechup (Apr 27, 2022)

thelawnet said:


> In my market the 3050 is the same price as the 6600,  and it costs more than 50% more than the 6500 XT.
> 
> At the moment the 6500 xt is good here because it offers significantly better performance than anything around the price. We don't even have an RRP here, and if you offered the price of a 6500 xt for a 3050 they would just laugh at you. There is no such thing as selling cards at RRP, it's just 100% market forces.
> 
> ...



Both cards are trash on various levels: features, compatibility, and price compared to previous AMD and Nvidia GPUs.*



> *SFF is maybe an exception, but this is not the moment to buy until price cuts appear; that depends on how Nvidia responds to the RX 6500 XT and RX 6400, not to mention how Intel Arc can affect the current situation.



If you consider giving $200 for a $100 product, you seriously need to drop this conformist attitude.


----------



## AusWolf (Apr 27, 2022)

Valantar said:


> I'll be really interested in that comparison too - and just generally in a "what settings do you need for 1080p60 on a 6400" type of test. Especially interesting with PCIe 3.0 (and presumably not the most powerful CPU?) too.


It'll be the one in my signature: the Ryzen 3 3100. 



The red spirit said:


> You have, but only with CPU or with capture card, neither is appealing to low spec market, when ReLive and Shadowplay exist.


You basically said that almost every modern graphics card except for the 6400 and 6500 XT has some kind of video encoder in it, which is true. How many more options do you need?



The red spirit said:


> And you miss the point, by discrediting the lack of proper decoders. Which is the entire reason why you would buy a low end "display adapter" like RX 6400. BTW CUDA is actually relevant for media usage too. Ever heard of MadVR? Some people actually want to use it.


What do you mean by "proper"? Is being able to decode everything except AV1 not proper? Then nothing is proper except Nvidia Ampere, Navi 21/22, and Intel Rocket Lake and above.



The red spirit said:


> Has always been there, just that everyone assumed that GPU can do it and that was that.


A fair point. It may be true - I've just missed it, I guess, as the whole _"let me stream how I troll around in Fortnite"_ scene is totally alien to me.



The red spirit said:


> You can keep shitting on Athlon X4, but you are lucky to be able to swap core components of your computer every 2 years or so.


I'm shitting on the Athlon X4 because a 3rd gen Core i7 would cost you pennies and laughably destroy it. It isn't only not competitive now - it wasn't even competitive when it was new. Like I mentioned, I've just built a *whole system* with an i7-4765T for around 100 quid.



The red spirit said:


> RX 6400, RX 6500 XT or aka laptop special for desktop users with love from Alibaba.


Care to elaborate?


----------



## Kissamies (Apr 27, 2022)

Personally I wouldn't be that worried about media decoding, as any even slightly modern processor can practically decode anything.


----------



## AusWolf (Apr 27, 2022)

Valantar said:


> While I mostly agree with you, these GPUs do stand out in a negative sense in how variable their performance is, and how they perform disproportionately badly in certain titles. Outside of those titles, performance is decent - they're just priced a bit too high for what they deliver (but then so are all GPUs these days). But with two Navi 24 implementations now demonstrating these odd bottlenecks and uneven performance, it's becoming clear that the prioritizations made by AMD when configuring this die were ... well, a very poor fit for desktop usage, put simply. The die area cost of adding an x8 PCIe link would have been next to nothing. The memory bandwidth is a more difficult hurdle to overcome, and is also a bit more excusable - or it would have been if the PCIe bus was wide enough to stream in assets more rapidly. Instead, we get a GPU that, while performing decently overall, is beaten by the GTX 1050 Ti in several titles in HWUB's testing when limited to PCIe 3.0. That's ... to be honest, shockingly bad. The 1050 Ti came out in 2016, and had a power budget similar to this. If not for these weird bottlenecks, this GPU would have _trounced_ it, but instead AMD chose to cripple it in some very strange ways. They _kind of_ make sense in a very specific use case: low cost dGPUs for affordable, low power but gaming capable thin-and-light laptops. And as AMD's 6000 APUs also have PCIe 4.0, these will likely look pretty good for what they are (25W or 35-50W mobile GPUs, essentially competing with MX550 and the like). But for desktop usage? Eeeeeeh. _Really_ depends on your games. In most cases they are decent, but expensive. In a few cases they are atrociously bad. That's ... not a good mix. That's like having a friend who is okay most of the time but tends to drink all your beer when they come over, but then also at times starts massive arguments over nothing. Is that a good experience?
> 
> To me, it looks like AMD's choice to drastically cut the memory bus _and_ the PCIe link _and_ the Infinity Cache was at least one cut too many. An increase in any of the three would likely have gone some way towards alleviating these bottlenecks. As it stands, it's a real shame, as this die had so much potential to be a great low power chip. Instead it's a kind of good but also weirdly unpredictable one.


It looks like I won't need to do my test, then. I'll still run a couple of benchmarks, though, because I'm curious. 

As for what you said, I agree.

I'll hold on to what I said about the 6500 XT when it was released: it's a bad all-around offering. It's not good enough for gaming at its price, and it consumes way too much power for what it is.

As for the 6400, though, I still don't think it's a bad value simply because there is no other low profile card available at this price range. The rare 1650 is overly expensive (at least in a low profile version) and the cheaper 1030 is still way below this performance range. It's sad, but the only thing that makes the 6400 great is the same thing that made the 1030 and low profile 1050 Ti great: the lack of competition.

With that said, I'm not gonna withdraw my order for the 6400 because now I'm more curious than ever, and I still need that HDMI 2.1.


----------



## Ravenas (Apr 27, 2022)

Valantar said:


> TPU benchmarks are run at Ultra. Which is understandable and a good practice for higher end cards (even if it's dumb to play at Ultra even with those cards), but rather ridiculous for a card like this. I have zero doubt the 6400 can do 1080p60 steady in pretty much any game you throw at it as long as you set the settings to some more reasonable level for this class of GPU. Obviously not in RT, but AAA games more broadly? At medium-ish? I wouldn't be surprised at all.



I 100% agree here. However, data on FSR/RSR performance is desired.


----------



## catulitechup (Apr 27, 2022)

Curiously, the RX 6500 XT appears at Micro Center for around $160 open-box, the same price as the RX 6400.


----------



## The red spirit (Apr 27, 2022)

AusWolf said:


> You basically said that almost every modern graphics card except for the 6400 and 6500 XT has some kind of video encoder in it, which is true. How many more options do you need?


Bruh 6400 and 6500 XT don't have ReLive. 




AusWolf said:


> What do you mean by "proper"? Is being able to decode everything except AV1 not proper? Then nothing is proper except Nvidia Ampere, Navi 21/22, and Intel Rocket Lake and above.


If it can't run YT, then it's improper. And yes, many rather recent cards are improper in this way. Today, decoding 4K YT should be the norm.




AusWolf said:


> A fair point. It may be true - I've just missed it, I guess, as the whole _"let me stream how I troll around in Fortnite"_ scene is totally alien to me.


I was talking about encoders/decoders. People have converted videos for ages; it's not some special functionality.




AusWolf said:


> I'm shitting on the Athlon X4 because a 3rd gen Core i7 would cost you pennies and laughably destroy it. It isn't only not competitive now - it wasn't even competitive when it was new. Like I mentioned, I've just built a *whole system* with an i7-4765T for around 100 quid.


Good for you, but I don't care about that. The build has a complicated history, but it serves as an example of how relatively recent hardware fails to decode videos properly. AMD also launched Excavator chips on AM4, and there are people with mini computers that have laptop chips. You miss the point.




AusWolf said:


> Care to elaborate?


Not sure what; I thought it was already known that the 6400 and 6500 XT are nothing more than harvested laptop chips put on a PCB to resell to desktop users. That explains why the video outs are limited, why it uses an x4 slot, and why it has lackluster decoding/encoding capabilities. I doubt that AMD made an entirely new SKU, as they still left the whole physical x16 slot on the cards with gold pins, or that they specifically re-engineered the whole GPU to nerf the decoder/encoder. People on Alibaba and AliExpress used to do the same with Nvidia chips in the past, or just made a PCIe x16 adapter for MXM cards. AMD stole the idea and made it more official, while charging a lot more than the Ali people did. Toxic capitalism at its finest.


----------



## Kissamies (Apr 27, 2022)

The red spirit said:


> Bruh 6400 and 6500 XT don't have ReLive.


I don't even know what that is, so I doubt I would need that feature.


----------



## Valantar (Apr 27, 2022)

The red spirit said:


> Not sure what; I thought it was already known that the 6400 and 6500 XT are nothing more than harvested laptop chips put on a PCB to resell to desktop users. That explains why the video outs are limited, why it uses an x4 slot, and why it has lackluster decoding/encoding capabilities. I doubt that AMD made an entirely new SKU, as they still left the whole physical x16 slot on the cards with gold pins, or that they specifically re-engineered the whole GPU to nerf the decoder/encoder. People on Alibaba and AliExpress used to do the same with Nvidia chips in the past, or just made a PCIe x16 adapter for MXM cards. AMD stole the idea and made it more official, while charging a lot more than the Ali people did. Toxic capitalism at its finest.


This needs pointing out: you know that GPU dice are used essentially universally across both mobile and desktop segments, right? That pretty much every consumer GPU die AMD and Nvidia has made for several generations has both mobile and desktop variants? And no, this isn't akin to a PCIe MXM adapter - it's a dGPU, with the GPU package soldered to a made-for-purpose PCB. Does this look like an adapter board?


			https://www.techpowerup.com/review/msi-radeon-rx-6400-aero-itx/images/front.jpg
		

There is literally nothing linking this to those adapter boards. Just because the GPU die is purpose-built for mobile usage doesn't make the dGPU PCBs anything other than regular old dGPU PCBs. Also, PCB design for a GPU this simple, with this little I/O and this low power consumption is _dead simple_.

As for the PCIe fingers being fully populated: design elements like that are copy-pasted into literally every design; the PCB design software has that layout saved, ready for adding to any design. Removing pins is far more work than just leaving them there but not connected to anything, and the cost of the copper + gold plating is so low, even across thousands of units, that it doesn't matter whatsoever.


Lenne said:


> I don't even know what that is, so I doubt I would need that feature.


ReLive is AMD's streaming/recording feature in Radeon Software.



AusWolf said:


> It looks like I won't need to do my test, then. I'll still run a couple of benchmarks, though, because I'm curious.


I'm still very interested in seeing a comparison with a more reasonable CPU, so feel free to go for it


----------



## catulitechup (Apr 27, 2022)

The red spirit said:


> Not sure what; I thought it was already known that the 6400 and 6500 XT are nothing more than harvested laptop chips put on a PCB to resell to desktop users. That explains why the video outs are limited, why it uses an x4 slot, and why it has lackluster decoding/encoding capabilities. I doubt that AMD made an entirely new SKU, as they still left the whole physical x16 slot on the cards with gold pins, or that they specifically re-engineered the whole GPU to nerf the decoder/encoder. People on Alibaba and AliExpress used to do the same with Nvidia chips in the past, or just made a PCIe x16 adapter for MXM cards. AMD stole the idea and made it more official, while charging a lot more than the Ali people did. Toxic capitalism at its finest.



this is interesting


----------



## W1zzard (Apr 27, 2022)

mechtech said:


> @W1zzard
> 
> any plans to review amd 5500/5600 CPUs??





Lenne said:


> Personally I find 5700X way more interesting..


AMD offered the 5700X and 5600 a while ago and wanted to get back to me as soon as they had a tracking number. That was two weeks ago.


----------



## AusWolf (Apr 27, 2022)

The red spirit said:


> Bruh 6400 and 6500 XT don't have ReLive.


Did you actually read my answer?
_"You basically said that almost every modern graphics card *except for* the 6400 and 6500 XT has some kind of video encoder in it, which is true. How many more options do you need?"_

Oh, and don't "bruh" me. Thanks.



The red spirit said:


> If it can't run YT, then it's improper. And yes, many rather recent cards are improper in this way. Today, decoding 4K YT should be the norm.


I am watching 4K YouTube on my HTPC as I'm typing this, and it does not have an AV1-decode-capable GPU.



The red spirit said:


> I was talking about encoders/decoders. People have converted videos for ages; it's not some special functionality.


I'm not sure. I used to convert lots of videos to be able to watch them on multiple devices (traditional mobile phones, PSP, DVD players...), but I don't think that's the norm anymore. Pretty much any device can play anything nowadays, which means average home users don't really need to convert anything.



The red spirit said:


> Good for you, but I don't care about that. The build has a complicated history, but it serves as an example of how relatively recent hardware fails to decode videos properly. AMD also launched Excavator chips on AM4, and there are people with mini computers that have laptop chips. You miss the point.


Because that relatively recent hardware in question (the Athlon X4) was lower end than the lowest of all ends when it was released, like basically everything on the FM2 platform. Complaining that it doesn't play YouTube is like complaining that your first-gen single-core Atom takes half an hour to load the Windows desktop. You could go to a computer recycling centre, pick up a Sandy Bridge mobo+CPU combo that was made ten years ago for basically free, and be a lot happier.

And obviously, don't buy a 6400 or 6500 XT to play YouTube videos if you're one of the 3 people on the planet who's still using an Athlon X4 and doesn't want to swap it for something equally cheap but miles better.



The red spirit said:


> Not sure what; I thought it was already known that the 6400 and 6500 XT are nothing more than harvested laptop chips put on a PCB to resell to desktop users. That explains why the video outs are limited, why it uses an x4 slot, and why it has lackluster decoding/encoding capabilities. I doubt that AMD made an entirely new SKU, as they still left the whole physical x16 slot on the cards with gold pins, or that they specifically re-engineered the whole GPU to nerf the decoder/encoder. People on Alibaba and AliExpress used to do the same with Nvidia chips in the past, or just made a PCIe x16 adapter for MXM cards. AMD stole the idea and made it more official, while charging a lot more than the Ali people did. Toxic capitalism at its finest.


Let me address those point separately:
"Harvested laptop chips": That's a good theory and I can see where it's coming from (lack of video encoder, PCI-e x4), but where are those laptop GPUs? And if the theory is right, so what?
"Limited video outs": 2 is perfectly fine for me, and for most users, I think.
"x4 slot": Yes, that's very sad indeed.
"Lacklustre en/decoding": Like I said, AV1 is limited to Ampere, Rocket/Alder Lake, or Navi 21 and 22, so it's still a niche. As for encoding, not everybody needs it (as discussed).
"AMD left the x16 slot": I think it's more like manufacturers left the x16 slot as redesigning the PCB would have cost money.
"Nerf the de-/encoder": They didn't nerf anything. Navi 24 is a completely new chip. Just like Nvidia never nerfed the 16-series by cutting out the RT cores. Those RT cores were never there in the first place.
"Toxic capitalism": Making money is the ultimate goal of every for-profit company. You can only do that by cutting costs and increasing prices. AMD designed Navi 24 to be as small as possible with this in mind. I agree that they cut too many corners with the x4 bus, and that the 6500 XT is an inefficient, power-hungry monster of a card for its performance level, but other than that, the 6400 isn't that bad, especially when you consider the competition, which is sadly the now-£100 GT 1030 or the almost non-existent low-profile GTX 1650 for £300 on eBay.

Sorry for the long post. I've said what I wanted, I'll leave it at that for now. 



Valantar said:


> I'm still very interested in seeing a comparison with a more reasonable CPU, so feel free to go for it


I've got Cyberpunk 2077 and Metro Exodus EE in the download queue. I've also got 3DMark and Unigine Superposition installed. Still have to wait until Saturday for the 6400, though.


----------



## Kissamies (Apr 27, 2022)

Valantar said:


> ReLive is AMD's streaming/recording feature in Radeon Software.


Ah. Never saw any need for that feature as I've always used Afterburner for recording and OBS back when I streamed.


----------



## Valantar (Apr 27, 2022)

Lenne said:


> Ah. Never saw any need for that feature as I've always used Afterburner for recording and OBS back when I streamed.


Wait, Afterburner has a recording feature? Isn't that just an OC tool?


----------



## ModEl4 (Apr 27, 2022)

W1zzard said:


> AMD offered the 5700X and 5600 a while ago and wanted to get back to me as soon as they had a tracking number. That was two weeks ago.


If I remember correctly, you said in a previous post that you live in Germany.
Isn't Germany supposed to be a well-structured country with better organization than many other European countries?
Or is this just AMD's German office's fault?
Just asking; the most memorable thing I remember about Germany is Norm Macdonald's comedy sketch:








jk


----------



## The red spirit (Apr 27, 2022)

Lenne said:


> I don't even know what that is, so I doubt I would need that feature.


AMD's equivalent of ShadowPlay: a screen recording/streaming function.


----------



## catulitechup (Apr 27, 2022)

Valantar said:


> Wait, Afterburner has a recording feature? Isn't that just an OC tool?


Yeah, it's been able to record too since a few versions ago.

In my case I use NVENC with FFmpeg 4.3 on Xubuntu 22.04; other options could be SimpleScreenRecorder, Vokoscreen, Kazam, or OBS as you said.
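For anyone curious, a minimal sketch of the kind of capture command I mean (the display number, resolution, bitrate, and NVENC availability are all assumptions about your setup; adjust as needed):

```shell
# Capture the X11 desktop and encode on the GPU with NVENC.
# Assumes an NVENC-capable NVIDIA card and display :0.0.
ffmpeg -f x11grab -framerate 60 -video_size 1920x1080 -i :0.0 \
       -c:v h264_nvenc -preset p4 -b:v 8M capture.mp4

# Without NVENC, the same capture works with a CPU encoder
# such as libx264 -- at a much higher CPU load:
ffmpeg -f x11grab -framerate 30 -i :0.0 -c:v libx264 -preset veryfast capture_sw.mp4
```

The NVENC path is what keeps the CPU free while gaming; the libx264 line is the fallback for cards (like the 6400/6500 XT) with no usable hardware encoder.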


----------



## The red spirit (Apr 27, 2022)

Valantar said:


> This needs pointing out: you know that GPU dice are used essentially universally across both mobile and desktop segments, right? That pretty much every consumer GPU die AMD and Nvidia has made for several generations has both mobile and desktop variants? And no, this isn't akin to a PCIe MXM adapter - it's a dGPU, with the GPU package soldered to a made-for-purpose PCB.


Not really. AMD's APUs, especially the mobile ones, are cut-down versions and not similar to desktop chips. Meanwhile, Intel basically lowers the TDP and calls it a mobile chip. Nvidia used to rebrand their mobile chips, use different dies, and cripple GPUs in other ways, so they weren't similar to desktop parts.




Valantar said:


> Does this look like an adapter board?
> 
> 
> https://www.techpowerup.com/review/msi-radeon-rx-6400-aero-itx/images/front.jpg


Imagine they just integrated the adapter part into the board, so you literally can't see an MXM card. BTW, MXM is dead, but they still use some kind of interconnect for mobile chips. It's that, converted for desktop usage, and you can clearly see that the GPU itself is a mobile chip from its various bizarre limitations.



Valantar said:


> Just because the GPU die is purpose-built for mobile usage doesn't make the dGPU PCBs anything other than regular old dGPU PCBs. Also, PCB design for a GPU this simple, with this little I/O and this low power consumption is _dead simple_.


Power consumption is mainly decided by the number of cores and their voltage/clock speed.




Valantar said:


> As for the PCIe fingers being fully populated: design elements like that are copy-pasted into literally every design; the PCB design software has that layout saved, ready for adding to any design. Removing pins is far more work than just leaving them there but not connected to anything, and the cost of the copper + gold plating is so low, even across thousands of units, to not matter whatsoever.


I think you are overestimating the difficulty. You can literally saw off those pins and the card will work just fine. There were x4 cards in the past, and they didn't cost any extra compared to x16 or x1 cards. Motherboards even had x4 slots.


----------



## Kissamies (Apr 27, 2022)

The red spirit said:


> AMD's Shadowplay. Screen recording/streaming function


Ah, I don't install that bloatware called GF Experience.


----------



## The red spirit (Apr 27, 2022)

AusWolf said:


> Did you actually read my answer?
> _"You basically said that almost every modern graphics card *except for* the 6400 and 6500 XT has some kind of video encoder in it, which is true. How many more options do you need?"_
> 
> Oh, and don't "bruh" me. Thanks.


I read everything; either your English failed, or you said the 6400 and 6500 XT have decoding. What else would your question "How many more options do you need?" imply here, in the case of cards that can't record?



AusWolf said:


> I am watching 4K Youtube on my HTPC as I'm typing this and it does not have an AV-1 decode capable GPU.


So can I, but it's wasteful (it basically makes the CPU work at full blast) and, depending on the system, literally impossible without crazy frame skipping. Decoding popular codecs shouldn't be some "premium" feature. People used to buy MPEG-2 cards in the past; at this rate, we might need VP9 or AV1 cards again, because AMD shits on their customers.



AusWolf said:


> I'm not sure. I used to convert lots of videos to be able to watch them on multiple devices (traditional mobile phones, PSP, DVD players...), but I don't think that's the norm anymore. Pretty much any device can play anything nowadays, which means average home users don't really need to convert anything.


Or you have BD rips but don't have the space, need to compress them with minimal quality loss, and don't want it to take days. That's why you get an encoding-capable GPU: basically anything new, except the 6400 or 6500 XT.
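As a rough illustration of that workflow, a hedged FFmpeg sketch (the file names and quality settings are made up for the example; `hevc_nvenc` assumes an NVENC-capable NVIDIA card):

```shell
# Shrink a BD rip using the GPU's HEVC encoder, copying audio untouched.
# hevc_nvenc requires NVENC hardware; -cq trades file size against quality.
ffmpeg -i bdrip.mkv -map 0 -c:v hevc_nvenc -preset p6 -cq 22 -c:a copy bdrip_small.mkv

# CPU-only fallback (much slower, which is exactly the complaint above):
ffmpeg -i bdrip.mkv -map 0 -c:v libx265 -crf 22 -c:a copy bdrip_small_sw.mkv
```

On a card with a hardware encoder, the first command runs many times faster than real time; on a 6400/6500 XT, only the slow CPU path is available.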



AusWolf said:


> Because that relatively recent hardware in question (the Athlon X4) was lower end than the lowest of all ends when it was released, like basically everything on the FM2 platform.


How wrong you are. The AMD A4 APUs are the lowest of the low, along with the Athlon X2s. Athlon X4s were mid-range chips, comparable to the Intel i3s of the time. And the lowest-end new chip at the time was actually the Sempron 140: a single-core, K10-arch, AM3 chip.



AusWolf said:


> Complaining that it doesn't play YouTube is like complaining that your first-gen single-core Atom takes half an hour to load the Windows desktop. You could go to a computer recycling centre, pick up a Sandy Bridge mobo+CPU combo that was made ten years ago for basically free, and be a lot happier.


It's not even close to the Atoms; those things couldn't play YT when they were new. The Athlon X4 can play YT, but only at 1080p. At 1440p, another codec is used and then it drops some frames. I could also use enhancedx264ify and play everything with the GPU alone, but hypothetically AMD could have made a proper card and I could have just dropped it in and played even 8K perfectly fine. It's just needless e-waste to release gimped cards that are useless.



AusWolf said:


> And obviously, don't buy a 6400 or 6500 XT to play YouTube videos if you're one of the 3 people on the planet who's still using an Athlon X4 and doesn't want to swap it for something equally cheap but miles better.


It's my non-daily machine for light usage, and 1080p works fine on it, so I won't upgrade. But people in the past bought GT 710s to make YT playable on, say, Pentium D machines. If that's what a buyer needs and they don't care about gaming at all, then that's a fine purchase. And including full decoding capabilities didn't really cost any extra. If the GT 710 could do it, then why can't the RX 6400? That's just AMD selling gimped e-waste to dumb people. Mark my words: once the GPU shortage ends, AMD will suddenly stop pulling this crap on their GT 710-equivalent cards, they will rub it in to RX 6500 XT customers, and those customers will suck it up. AMD is straight up preying on the stupid and careless with such nonsense. And better yet, no, they won't ever give you full decoding capabilities, but will start to market them as a "premium" feature only for 6900-tier cards. So if you ever need those capabilities, you will become their "premium" customer. You will be forced to buy overpriced shit with artificial demand. AMD is cashing in during the shortage as much as they can, and no, they aren't hurting because of it; they are making astronomical profits like never before. There's nothing we can do other than boycott shitty products and pick their competitors' products instead.




AusWolf said:


> "Harvested laptop chips": That's a good theory and I can see where it's coming from (lack of video encoder, PCI-e x4), but where are those laptop GPUs? And if the theory is right, so what?


Nothing much, but it explains why the 6400 and 6500 XT are so bizarrely limited.



AusWolf said:


> "Nerf the de-/encoder": They didn't nerf anything. Navi 24 is a completely new chip. Just like Nvidia never nerfed the 16-series by cutting out the RT cores. Those RT cores were never there in the first place.


Nope, that's exactly nerfing. Adapting a pointless laptop chip for desktop usage and branding it as a fully capable 6000-series card is exactly nerfing. And who knows whether those Navi 24s actually can't decode; I certainly don't have a microscope for that, but it would be interesting to see. Wouldn't be surprised if they could.



AusWolf said:


> "Toxic capitalism": Making money is the ultimate goal of every for-profit company. You can only do that by cutting costs and increasing prices. AMD designed Navi 24 to be as small as possible with this in mind. I agree that they cut too many corners with the x4 bus, and that the 6500 XT is an inefficient, power-hungry monster of a card for its performance level


That's not exactly what I meant. It's about artificially creating demand and selling low-end poo-poo at a huge premium. It's not making money; it's straight-up daylight robbery. AMD pulled the same crap when they launched the Ryzen 5600X and 5800X at huge premiums, when Intel sold 20% slower parts for literally half the price. And they dared to claim that the 5600X was some value wonder, only to release the 5600 and 5500 when people realized that Intel has some good shiz too. They also intentionally didn't sell any sensible APUs, only the 5600G and 5700G, also for nearly twice what they were actually worth, but fanboys didn't question that and instead bought as many as AMD managed to make. Had they released a 5400G (a hypothetical 4C8T RDNA 2 APU), it would have outsold the 5600G/5700G many times over, but why do this if they can artificially limit their lineup and convince buyers to buy way overpriced stuff instead? That's exactly why I call this toxic capitalism: the goods could be available, but companies don't make them because of lower, but still reasonable, margins. If you look at their financial reports, they made an absolute killing during the shortage and pandemic, which basically confirms the huge mark-ups. That also explains why the RX 6400 and 6500 XT lack features, lack performance, and are overpriced, crappy products. The 6500 XT is such poo that the RX 570 is its equivalent, yet the RX 570 was made ages ago, wasn't gimped, and cost way less. Even with inflation included, there's no way the 6400 and 6500 XT must cost as much as they do. Their mark-up is as high as 40%, if not more.




AusWolf said:


> ...but other than that, the 6400 isn't that bad, especially when you consider the competition, which is sadly the now-£100 GT 1030 or the almost non-existent low-profile GTX 1650 for £300 on eBay.


Why not a Quadro T600? It's like a GTX 1650 LE, but low-profile, and it costs less than the 6400. And since the 6400 is slower than a 1050 Ti, if you can find a low-profile 1050 Ti, that's literally the same thing, except you can overclock it, record gameplay, stream, and get VP9 and H.265 decode/encode. The 1050 Ti is just better. The 1650 is closer to the 6500 XT, but the real 6500 XT competitor is the 1650 Super.


----------



## mechtech (Apr 28, 2022)

Lenne said:


> Personally I find 5700X way more interesting..


Me too, BUT I want to see how the 5500 compares, since it would be a good budget CPU for my son's PC, which is running an old FX-8320 :|


----------



## Kissamies (Apr 28, 2022)

mechtech said:


> Me too, BUT I want to see how the 5500 compares, since it would be a good budget CPU for my son's PC, which is running an old FX-8320 :|


Yeah, agree there. I just personally want more cores at last (had a B450 & 2600 before my current B550 & 3600).


----------



## mama (Apr 28, 2022)

Niche.  Pricey for what it is.


----------



## Valantar (Apr 28, 2022)

The red spirit said:


> Not really. AMD's APUs, especially the mobile ones, are cut-down versions and not similar to desktop chips. Meanwhile, Intel basically lowers the TDP and calls that a mobile chip. nVidia used to rebrand their mobile chips, use different dies, and cripple GPUs in other ways, so they weren't similar to desktop parts.


APUs are not GPUs. Nor are Intel CPUs GPUs. Also, what you're describing is (a slight misrepresentation of) how the chip industry has always operated: any chip that can be used for multiple purposes is used for those purposes as long as it makes sense economically. AFAIK, Nvidia has never used "different dies" for mobile chips (outside of a few rare edge cases). Chips are binned during production and different bins are used for different purposes.


The red spirit said:


> Imagine they just integrated adapter part into board and you can't literally see MXM card.


What you are describing is a PCIe dGPU AIC. Literally nothing other than that.


The red spirit said:


> BTW MXM is dead, but they still use some kind of interconnect for mobile chips.


Yes, it's called PCIe. MXM is dead because essentially all dGPU-equipped laptops have the GPU integrated directly into the motherboard.


The red spirit said:


> It's that converted for desktop usage, but you clearly see how the GPU itself is mobile chip due to various bizarre limitations.


Okay, the problem here is that you're mixing up two quite different understandings of "GPU" - the one denoting the die itself, and the one denoting a full implementation including RAM, VRMs, and other ancillary circuitry. It is entirely true that this GPU - the die, and its physical characteristics - is primarily designed for mobile use. I've gone into this at quite some length in both this thread and others. What _isn't_ made for mobile use is its desktop implementations. And, crucially, just because the die is primarily designed for mobile use doesn't make it any kind of special case - as I said above, chips are binned and used for different purposes. That's how the chip industry works.

What people are discussing in terms of implementation, which you keep derailing with this odd "it's like an MXM adapter" nonsense, is _the design choices made by AMD when this die was designed_, and the tradeoffs inherent to this. When I and others say it was designed for mobile, that means it has a featureset balanced against a specific use case, and has zero overprovisioning for other use cases (mainly desktop, but also potentially others). The board implementation is essentially irrelevant to this, and it doesn't relate in any way to MXM adapters, as - as you point out - MXM is dead, and isn't relevant to how mobile GPUs are implemented today.

So: what we have here are completely normal, entirely regular desktop AIC GPU boards with nothing particularly unique about them, designed around a die that seems to have its design goals set for a highly specific mobile use case with some inherent major drawbacks due to this.


The red spirit said:


> Power consumption is mainly decided by amount of cores and their voltage/clock speed.


Jesus, dude, seriously? Do I need to spoon feed you _everything_? Yes, that is the main determinant for power consumption. But if your goal is _as low power consumption as possible_, then you also start cutting other things. Such as memory bandwidth and PCIe, which both consume power - and in a 25W mobile GPU, even a watt or two saved makes a difference. These are of course _also_ cost-cutting measures at the same time. As I've pointed out plenty of times: it's clear that this die is purpose-built for _cheap, low power_ laptops with entry-level gaming performance.
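The first-order scaling described here (dynamic power roughly proportional to active units × voltage² × clock) can be sketched as a toy calculation. Every number below is an illustrative assumption, not a real Navi 24 figure; only the ratio between results means anything.

```python
# Toy model of CMOS dynamic power: P ~ units * C_eff * V^2 * f.
# All constants are illustrative assumptions, not real Navi 24 data;
# only the ratio between the two results is meaningful.

def dynamic_power(units, voltage, clock_ghz, c_eff=0.5):
    """Relative dynamic power for `units` active blocks running at
    `voltage` volts and `clock_ghz` GHz (arbitrary units)."""
    return units * c_eff * voltage ** 2 * clock_ghz

# Hypothetical desktop bin: pushed to high clocks and voltage.
desktop = dynamic_power(units=16, voltage=1.15, clock_ghz=2.8)
# Hypothetical 25W-class mobile bin of the same die.
mobile = dynamic_power(units=16, voltage=0.85, clock_ghz=1.8)

# Lower voltage and clocks alone cut dynamic power to roughly a third,
# which is why the same silicon can serve both markets via binning.
print(f"mobile/desktop dynamic power: {mobile / desktop:.0%}")
```

With margins that tight at 25W, even the watt or two saved by dropping memory channels and PCIe lanes is a meaningful slice of the budget, which is the trade-off being argued about here.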


The red spirit said:


> I think that you are overestimating the difficulties. You can literally saw off those pins and the card will work just fine. There were x4 cards in the past, and they didn't cost any extra compared to x16 or x1 cards. Motherboards even had x4 slots.


I never said it was difficult. I said it takes time and work to remove them from the design; time and work that costs money while providing no material benefits, and savings so small that they don't matter in the end. Thus the simplest and cheapest solution is not to bother doing so.
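Since the thread keeps circling back to the x4 link itself, the nominal per-direction PCIe bandwidth is easy to sketch. The figures below are the headline spec rates (8/16 GT/s per lane with 128b/130b line coding for Gen3 and later), not measured throughput.

```python
# Nominal one-direction PCIe bandwidth from the headline spec rates.
# These are theoretical ceilings, not measured game throughput.

GT_PER_LANE = {3: 8.0, 4: 16.0}   # transfer rate in GT/s per lane
ENCODING = 128 / 130              # 128b/130b line-coding efficiency

def pcie_gb_per_s(gen, lanes):
    """Approximate usable bandwidth in GB/s, one direction."""
    # 1 GT/s carries ~1 Gb/s per lane; divide by 8 to get bytes.
    return GT_PER_LANE[gen] * ENCODING * lanes / 8

for gen, lanes in [(4, 4), (3, 4), (3, 16)]:
    print(f"PCIe Gen{gen} x{lanes}: ~{pcie_gb_per_s(gen, lanes):.1f} GB/s")
```

On a PCIe 3.0 board the RX 6400's x4 link tops out near 3.9 GB/s, a quarter of what a Gen3 x16 slot offers, which is why the card is so sensitive to spilling past its 4 GB of VRAM on older platforms.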


The red spirit said:


> I read everything; either your English failed, or you said the 6400 and 6500 XT have decoding. What else would your question "How many more options do you need?" imply here, in the case of cards that can't record?


They were literally saying that there are so many _other_ options that having _one_ option without them shouldn't matter. That was pretty easy to understand IMO.


The red spirit said:


> Nothing much, but explains why 6400 and 6500 XT are so bizarrely limited.


But you're misusing the term "harvested" here. "Harvested" implies they are chips that failed/were binned too low for some use case. That does not seem to be the case here - the 6500 XT seems to be a fully enabled die, but clearly one that is binned for high clocks rather than low power. The 6400 seems to be a middle-of-the-road bin, with a few CUs fused off. Neither appears "harvested", as there are no major cuts made (4 CUs isn't a lot, and everything else is intact). They're just different versions of the same chip, all equally valid implementations, and none bears the "we'd rather use these somehow than scrap them" characteristic of harvested chips.


The red spirit said:


> That's not exactly what I meant. It's about artificially creating demand and selling low-end poo-poo at a huge premium. It's not making money, it's straight-up daylight robbery. AMD pulled the same crap when they launched the Ryzen 5600X and 5800X at huge premiums, when Intel sold parts only 20% slower for literally half the price. And they dared to claim that the 5600X was some value wonder. Only to release the 5600 and 5500 once people realized that Intel has some good shiz too. They also intentionally didn't sell any sensible APUs, only the 5600G and 5700G, also for nearly twice what they were actually worth, but fanboys didn't question that and instead bought as much as AMD managed to make. Had they released a 5400G (a hypothetical 4C/8T RDNA 2 APU), it would have outsold the 5600G/5700G many times over, but why do that if they can artificially limit their lineup and convince buyers to buy wildly overpriced stuff instead? That's exactly why I call this toxic capitalism: goods could be available, but companies don't make them, because the margins would be lower, though still reasonable. If you look at their financial reports, they made an absolute killing during the shortage and the pandemic, which basically confirms that they had huge mark-ups. That also explains why the RX 6400 and 6500 XT lack features, lack performance, and are overpriced, crappy products. The 6500 XT is such poo that the RX 570 is its equivalent, yet the RX 570 was made ages ago, wasn't gimped, and cost way less. Even with inflation included, there's no way the 6400 and 6500 XT must cost as much as they do. Their mark-up is as high as 40%, if not more.


While I mostly agree with you in principle (though I'd go a lot further than you do here on some points), I think this is not the most suitable case for this critique. Navi 24 is more of a strangely unbalanced design than it is a cash grab - a cash grab would try to lure people in somehow, while this just behaves strangely instead. A cash grab needs to present itself as good value or enticing, which this doesn't do - and AMD doesn't have the GPU market mindshare to really change that. Its high entry price (on the desktop) takes this from bad to worse, but that's universal across the GPU market right now, and in no way unique to these two GPU models. And yes, AMD, just like other major corporations, is in it for profits first and foremost - as a publicly traded US corporation they are legally obligated to be. This system is an absolute travesty in so many ways, but it plays out in much more nefarious ways than these unbalanced GPU designs.

There are also valid reasons for _some_ cost increases - compared to even two years ago, many raw materials (copper, aluminium, many others) now cost 2-3x what they used to. International shipping costs have also increased massively, which also affects MSRPs. Neither of those is sufficient to explain the inflated MSRPs of these or other GPUs on the market currently, but they go _some_ way towards explaining them. Another explanation is AIB partners pushing back against razor-thin margins - many GPU board partners have reportedly had essentially zero margin on their products, with MSRPs previously set so low they struggled to break even on their own designs. Again, this doesn't necessarily justify increased MSRPs - after all, the GPU makers themselves have very high margins overall, in the 40% range.


----------



## sith'ari (Apr 28, 2022)

catulitechup said:


> @W1zzard very good review, however it lacks 720p; this card suffers in many games at 1080p
> 
> ray tracing at this card's level is a joke, but the results are..................
> 
> ...



I have to disagree on that.
*For me, those RT numbers are of extreme importance*, since I never forgot David Wang's (AMD senior VP of engineering for RTG) pompous statement during the Turing era.
Back then, AMD didn't have any response to nVIDIA's ray tracing, yet Mr. Wang, apparently in an effort to undermine what nVIDIA had been doing, made the following statement:


> _“Utilisation of ray tracing games will not proceed unless we can offer ray tracing in all product ranges from low end to high end.” ( https://www.game-debate.com/news/26...until-even-low-end-radeon-gpus-can-support-it )_


I never forgot that pompous statement and always waited for AMD to release their RT-capable low-end GPUs.
And... voila!!! The RX 6500 XT & RX 6400 in all their ""low-end ray tracing glory"", so, according to Mr. Wang's statement, I guess the time has finally come for the ""utilisation of ray tracing games to proceed""!!
Who doesn't want ray traced games @ 10-15 fps, after all??
I want to apologise in advance to the AMD fans, but I always thought this was a "*cheap & easy*" statement back then, and I don't tend to forget such things.
I had to wait more than 2 years to verify/validate that statement, but the passage of time always comes... Mr. Wang.
So, yeah, as I said in the beginning, those RT numbers are of extreme importance from a historical point of view.


----------



## Valantar (Apr 28, 2022)

sith'ari said:


> I have to disagree on that.
> *For me, those RT numbers are of extreme importance*, since I never forgot David Wang's (AMD senior VP of engineering for RTG) pompous statement during the Turing era.
> Back then, AMD didn't have any response to nVIDIA's ray tracing, yet Mr. Wang, apparently in an effort to undermine what nVIDIA had been doing, made the following statement:
> 
> ...


So ... you saw an executive make a comment that essentially says "this new feature won't take off until a lot of people have access to it", and that struck you as so strange that you roam forums years later looking for ways to laugh at it? I mean, you do you, but that's a pretty tame statement right there. About as self-evident as "you're likely to get wet when it's raining" or "people will be hungry until they are fed". Is that pompous? I'd say the opposite - it's a plain statement of fact. Does it have a slightly negative tone, in response to a competitor with a new feature? Sure. But so what? It's still true. RT still hasn't taken off after all, and is still pretty rare. These GPUs won't change that - rather, you should change that quote to say "until we can offer _good_ ray tracing in all product ranges". So if anything, the statement itself is _too optimistic_. Pompous? Not at all. Cheap and easy? I frankly don't know what that means in this context. It's definitely an obvious response to a new, exotic, high-end feature, and doesn't bring anything new or useful to the table. But ... again, who cares? There are _so_ many more valid and important reasons to criticize corporate behaviour and PR nonsense than this right here. AMD has a long, storied history of _terrible_ GPU PR. This isn't even a blip on the radar compared to that.

On the other hand, from a technical perspective, the RT hardware is integrated into the RDNA2 shader core, so unless they were to design an explicitly RT-less subvariant (expensive and unlikely), or add an unnecessary software block, it's just a feature that's there in hardware. It's nowhere near sufficiently powerful for RTRT graphics, but it might be useful for RT spatial audio or other low-intensity tasks (like MS has spoken of for the Xbox). That's a ways out, and likely to be very rare if it ever appears, but having the potential is IMO still better than artificially blocking it off.


----------



## sith'ari (Apr 28, 2022)

Valantar said:


> So ... you saw an executive make a comment that essentially says "this new feature won't take off until a lot of people have access to it", and that struck you as so strange that you roam forums years later looking for ways to laugh at it? I mean, you do you, but that's a pretty tame statement right there. About as self-evident as "you're likely to get wet when it's raining" or "people will be hungry until they are fed". Is that pompous? I'd say the opposite - it's a plain statement of fact. Does it have a slightly negative tone, in response to a competitor with a new feature? Sure. But so what? It's still true. RT still hasn't taken off after all, and is still pretty rare. These GPUs won't change that - rather, you should change that quote to say "until we can offer _good_ ray tracing in all product ranges". So if anything, the statement itself is _too optimistic_. Pompous? Not at all. Cheap and easy? I frankly don't know what that means in this context. It's definitely an obvious response to a new, exotic, high-end feature, and doesn't bring anything new or useful to the table. But ... again, who cares? There are _so_ many more valid and important reasons to criticize corporate behaviour and PR nonsense than this right here. AMD has a long, storied history of _terrible_ GPU PR. This isn't even a blip on the radar compared to that.
> 
> On the other hand, from a technical perspective, the RT hardware is integrated into the RDNA2 shader core, so unless they were to design an explicitly RT-less subvariant (expensive and unlikely), or add an unnecessary software block, it's just a feature that's there in hardware. It's nowhere near sufficiently powerful for RTRT graphics, but it might be useful for RT spatial audio or other low-intensity tasks (like MS has spoken of for the Xbox). That's a ways out, and likely to be very rare if it ever appears, but having the potential is IMO still better than artificially blocking it off.


First of all, who among AMD tech enthusiasts didn't pay attention to that statement at the time it was made?
If you do some research, you'll see a great number of media outlets reproducing that statement back then, and it wasn't made by a simple employee, but by the Vice President of engineering at Radeon. How many people are above a Vice President of engineering in RTG? (Not AMD in general, but Radeon Technologies, which is right in the area of the topic: GPUs.)
So this statement was made by one of the highest (if not the highest) people inside RTG, which is what matters, since we are not talking CPUs but GPUs.

Also, yes, back then it was a cheap & easy statement, because:
1) It could cut the hype from nVIDIA's ray tracing implementations, implying that AMD cares for gamers and will therefore release low-end RT for the masses.
2) It didn't have any immediate cost for AMD, since very few people would bother to remember something said more than 2 years ago.
But as I said, personally I get very *(VEEEERY)* intrigued when I hear such pompous/low-cost statements, and I patiently wait to see what will happen.
*And what happened exactly? 10-15 fps with RT enabled is what happened.*
Have you compared the RTX 2060's RT performance vs. those 2 Radeon GPUs? It's like day & night, yet nVIDIA was buried by the press back then for their RT performance, even on a somewhat capable GPU such as the RTX 2060.
Based on the RTX 2060's RT performance vs. these 2, does anyone think nVIDIA wasn't capable back then of releasing an RT performer similar to the RX 6400?? Of course they could have.
But they didn't launch such a product; instead, they wisely chose to release a GTX line for the low end.

So, I'm asking: now that AMD has done something nVIDIA could easily have done 2 years ago, what does that mean?
That the... ""utilisation of ray tracing games is ready to proceed""??
If yes, then this means the ""utilisation of ray tracing games"" was ready to proceed from day 1, back in the Turing days, since AMD released a low-end RT product which is several times worse than nVIDIA's RTX 2060.


----------



## Valantar (Apr 28, 2022)

sith'ari said:


> First of all, who among AMD tech enthusiasts didn't pay attention to that statement at the time it was made?
> If you do some research, you'll see a great number of media outlets reproducing that statement back then, and it wasn't made by a simple employee, but by the Vice President of engineering at Radeon. How many people are above a Vice President of engineering in RTG? (Not AMD in general, but Radeon Technologies, which is right in the area of the topic: GPUs.)
> So this statement was made by one of the highest (if not the highest) people inside RTG, which is what matters, since we are not talking CPUs but GPUs.
> 
> ...


I still think you're making a mountain out of ... not even a molehill, more like a grain of sand here. "Competitor's exec comments vaguely negatively about the exclusivity of a company's new feature" is ... standard. Expected. Dime-a-dozen. Something that happens every day of the week, everywhere, all the time. You could say that people picked up on it, but ... so what? Fanboys will be fanboys. Press will report on statements made. That's their job, quite literally. That doesn't necessarily make those statements interesting or notable outside of that moment. If what you're looking for is to rile up fanboys, please go on Reddit or somewhere more suited to that, rather than poisoning interesting discussions on the forums. Because pointing this out as if it's somehow unusual or especially worthy of comment is rather absurd.

As for your two numbered points:
1: So what? Who cares? Competitor PR is meant to _compete_. Counteracting hype surrounding a competing product is expected. Literally everyone does that. Nvidia does that every time AMD adds a new feature, AMD does it every time Nvidia adds a feature. If bog-standard PR behaviour like this bothers you that much, you're better off just not paying attention at all.
2: Again: so what? Would it have mattered more if it through some absurd mechanism cost AMD a lot to say this? No. What you seem to be implying is that AMD should have put their money where their mouth was, which ... isn't that what they've done? RDNA2 overall has decently capable (Turing-level-ish) RTRT support, which clearly wasn't free for AMD. That Navi 24 is too small a die for it to be useful doesn't change the fact that they've clearly invested heavily in making this feature work for them as well.

You've yet to show how this was pompous or anything but stating the obvious. If anything, you're demonstrating that this was a _conservative assessment_. That's the opposite of being pompous. We now _have_ bottom-to-top RT support, yet it _still_ hasn't taken off. So even what he said was necessary turned out to be too optimistic! This, again, is completely obvious - it takes years and years for even widely supported graphics features to take off, let alone ones that entirely change how graphics work - but you're claiming that this is somehow pompous? Maybe look up that word? He was effectively overly optimistic about the adoption rate of a competitor's new tech. That is hardly pompous.

As for the RTX 2060 comparison: while you're not technically wrong, it's par for the course to be more heavily criticized for the base-level performance of an exclusive new feature in its first generation than for base-level performance in subsequent generations. This is in part due to press attention spans, but also due to new features being expensive (especially to end users), and thus coming with an expectation of a return on investment, while adding said feature to low-end parts in later generations is a lower-stakes endeavor. That doesn't make the RT support in Navi 24 any more useful for RTRT graphics, but it also makes the "meh, sure, it has RT support on paper, but don't expect it to be useful" response completely expected and understandable. It would have been exactly the same if AMD were the first to deliver RT and Nvidia then delivered a low-end RT-supporting card the next generation.

So: we have an executive of a competitor trying to counteract hype surrounding a company's new tech (expected), making a so-obvious-it-hurts statement (expected), that in hindsight turned out to even be too optimistic an assessment (perhaps not expected, but certainly not pompous). Is this worth making a fuss over?


I mean, the unusable RT support on these GPUs is _really_ not their biggest problem. Not by a long shot. Even the RTX 3050, RX 6600, and RTX 2060 deliver _barely_ playable RTRT results in most games at 1080p. What would you expect from GPUs with half the resources of a 6600, or less? You could always make an argument saying that AMD ought to have disabled RT support on this because it's not useful in real life, but that would be something entirely different from what you're doing here.


----------



## sith'ari (Apr 28, 2022)

Valantar said:


> I still think you're making a mountain out of ... not even a molehill, more like a grain of sand here. "Competitor's exec comments vaguely negatively about the exclusivity of a company's new feature" is ... standard. Expected. Dime-a-dozen. Something that happens every day of the week, everywhere, all the time. You could say that people picked up on it, but ... so what? Fanboys will be fanboys. Press will report on statements made. That's their job, quite literally. That doesn't necessarily make those statements interesting or notable outside of that moment. If what you're looking for is to rile up fanboys, please go on Reddit or somewhere more suited to that, rather than poisoning interesting discussions on the forums. Because pointing this out as if it's somehow unusual or especially worthy of comment is rather absurd.
> 
> As for your two numbered points:
> 1: So what? Who cares? Competitor PR is meant to _compete_. Counteracting hype surrounding a competing product is expected. Literally everyone does that. Nvidia does that every time AMD adds a new feature, AMD does it every time Nvidia adds a feature. If bog-standard PR behaviour like this bothers you that much, you're better off just not paying attention at all.
> ...


I don't expect anything, my friend;
of course it's obvious that low-end products will face extreme challenges when assigned extremely demanding tasks such as ray tracing acceleration.
But what is obvious - apparently - to you or me, wasn't obvious to the Vice President of Radeon Technologies Group, who made such a statement.
So now I'm simply commenting that the statement was completely invalid, since AMD did release those low-end RT GPUs, yet it's obvious from their performance that such products are unsuitable for promoting ray traced gaming; thus, a completely invalid statement by Mr. Wang.
You are wondering why I'm reacting:
well, this statement made everyone react, since it was reproduced everywhere back then. The only difference is that I kept remembering it in order to see whether it would actually be validated, or was just hype of the moment meant to undermine nVIDIA's "mid-range RT-capable GPUs".
Anyone who thinks this statement was meaningless should have said so at the time it was made, but no one undermined it back then. I'm simply one of the very few who waited 2 years to actually check the validity of such a statement, and you are criticizing me for doing such a thing??

Unfortunately, as I said, I get very intrigued by such statements, and thus I'm incapable of forgetting them.
I pay attention to them during their hype, but I also pay attention when the hype has ended and the time of truth has come...


----------



## The red spirit (Apr 28, 2022)

Valantar said:


> Okay, the problem here is that you're mixing up two quite different understandings of "GPU" - the one denoting the die itself, and the one denoting a full implementation including RAM, VRMs, and other ancillary circuitry. It is entirely true that this GPU - the die, and its physical characteristics - is primarily designed for mobile use. I've gone into this at quite some length in both this thread and others. What _isn't_ made for mobile use is its desktop implementations. And, crucially, just because the die is primarily designed for mobile use doesn't make it any kind of special case - as I said above, chips are binned and used for different purposes. That's how the chip industry works.


Did I even claim that the card is the GPU?



Valantar said:


> What people are discussing in terms of implementation, which you keep derailing with this odd "it's like an MXM adapter" nonsense, is _the design choices made by AMD when this die was designed_, and the tradeoffs inherent to this. When I and others say it was designed for mobile, that means the it has a featureset balanced against a specific use case, and has zero overprovisioning for other use cases (mainly desktop, but also potentially others). The board implementation is essentially irrelevant to this, and it doesn't relate in any way to MXM adapters, as - as you point out - MXM is dead, and isn't relevant to how mobile GPUs are implemented today.
> 
> So: what we have here are completely normal, entirely regular desktop AIC GPU boards with nothing particularly unique about them, designed around a die that seems to have its design goals set for a highly specific mobile use case with some inherent major drawbacks due to this.


Which was my main point.



Valantar said:


> Jesus, dude, seriously? Do I need to spoon feed you _everything_? Yes, that is the main determinant for power consumption. But if your goal is _as low power consumption as possible_, then you also start cutting other things. Such as memory bandwidth and PCIe, which both consume power - and in a 25W mobile GPU, even a watt or two saved makes a difference. These are of course _also_ cost-cutting measures at the same time. As I've pointed out plenty of times: it's clear that this die is purpose-built for _cheap, low power_ laptops with entry-level gaming performance.


A few connectors barely consume a few watts between them. The savings are nil. It's just a limitation of a laptop-oriented GPU, more than anything else. There were GT 710s with 4 video connectors, and those cards had literally the same TDP.



Valantar said:


> I never said it was difficult. I said it takes time and work to remove them from the design; time and work that costs money while providing no material benefits, and savings so small that they don't matter in the end. Thus the simplest and cheapest solution is not to bother doing so.


Considering that they removed the art from the box and did some other moronic things, it would make sense to give it an x4 connector. I doubt there wouldn't be savings; it's just AMD being either lazy or intentionally misleading. Hell, there were x1 GT 710s and x4 GT 710s.



Valantar said:


> They were literally saying that there are so many _other_ options that having _one_ option without them shouldn't matter. That was pretty easy to understand IMO.


Except the other options are too CPU-heavy on lower-end hardware, and nobody will pay for a capture card if their GPU budget is at RX 6400 level. The loss of ReLive is really bad.



Valantar said:


> But you're misusing the term "harvested" here. "Harvested" implies they are chips that failed/were binned too low for some use case. That does not seem to be the case here - the 6500 XT seems to be a fully enabled die, but clearly one that is binned for high clocks rather than low power. The 6400 seems to be a middle-of-the-road bin, with a few CUs fused off. Neither appear "harvested", as there's no major cuts made (4 CUs isn't a lot, and everything else is intact). They're just different versions of the same chip, all equally valid implementations, and none bear the "we'd rather use these somehow than scrap them" characteristic for harvested chips.


I don't see how using a mobile GPU for desktop doesn't count as "harvesting". In normal times, a low-end desktop GPU would be designed as such, without having to resort to harvesting mobile chips.



Valantar said:


> While I mostly agree with you in principle (though I'd go a lot further than you do here on some points), I think this is not the most suitable case for this critique. Navi 24 is more of a strangely unbalanced design than it is a cash grab - a cash grab would try to lure people in somehow, while this just behaves strangely instead. A cash grab needs to present itself as good value or enticing, which this doesn't do - and AMD doesn't have the GPU market mindshare to really change that. Its high entry price (on the desktop) takes this from bad to worse, but that's universal across the GPU market right now, and in no way unique to these two GPU models. And yes, AMD, just like other major corporations, are in it for profits first and foremost - as a publicly traded US corporation they are legally obligated to do so. This system is an absolute travesty in so many ways, but it plays out in much more nefarious ways than these unbalanced GPU designs.


I disagree; Ryzen created a lot of mindshare among "enthusiasts" (at least the ones that claim to be ones). Obviously not as much as nVidia or Intel, but they have it. And despite moderately poor press, they are banking on their customers not noticing the cut-down features. Will it work? I don't know, but I see how AMD is being somewhat misleading to people who aren't aware of the missing features and crippled hardware.



Valantar said:


> There are also valid reasons for _some_ cost increases - compared to even two years ago, many raw materials (copper, aluminium, many others) now cost 2-3x what they used to. International shipping costs have also increased massively, which also affects MSRPs. Neither of those is sufficient to explain the inflated MSRPs of these or other GPUs on the market currently, but they go _some_ way towards explaining them. Another explanation is AIB partners pushing back against razor-thin margins - many GPU board partners have reportedly had essentially zero margin on their products, with MSRPs previously set so low they struggled to break even on their own designs. Again, this doesn't necessarily justify increased MSRPs - after all, the GPU makers themselves have very high margins overall, in the 40% range.


Then how come the GTX 1050 Ti costs the same? We know that it's better, doesn't lack features and has superior decoder/encoder capabilities, yet they are sold new for the same price. It might be just another weirdness of the Lithuanian tech market, but I would rather get a 1050 Ti instead of an RX 6400. And regarding the 1050 Ti, they used to sell for 140-170 EUR; now they are selling for ~200 EUR. Despite Lithuania's 15+% yearly inflation, it seems that the actual cost of manufacturing barely increased - basically 10% or less, which is nothing compared to the material price increases, at least those that the media talks about. If material prices affected cards in full, then the 1050 Ti would be 400-600 EUR plus Lithuania's own inflation. That's clearly not the case. I guess the relationship between material prices and end-product prices (graphics cards in this case) is more complicated.


----------



## Valantar (Apr 28, 2022)

sith'ari said:


> I don't expect anything my friend ,


Well that's a large part of your problem right there, and it goes a long way towards explaining why you're so surprised by something that isn't otherwise worthy of note.


sith'ari said:


> of course low-end products is obvious that they will face extreme challenges when they are assigned to perform extremely demanding tasks such as RayTracing acceleration.
> But what is obvious -apparently - for you or me , wasn't obvious for the VicePresident of Radeon Technologies Group who made such as statement.


Wasn't it? What he said was that RTRT won't take off until we have bottom-to-top support. We're literally days off the price of entry hitting €160. And RTRT still hasn't taken off. So while on the one hand it's too early to judge - it's not like these low-end cards have had the time to make any kind of impact - on the other he's also _right_, or at least going in the right direction. Remember, the relevant spectrum of opinions at the time this was said was either Nvidia's "RTRT is here today and it's amazing", vs. this "RTRT won't really take off until we all have it". Which of those is more wrong? The former. Period. That doesn't mean that RTRT doesn't exist, or isn't at times amazing, but ... it's overall a niche thing still. It hasn't taken off. So the only thing wrong with that statement is not being sufficiently pessimistic.


sith'ari said:


> So now i'm simply commenting that this statement was completely invalid , since AMD did release those mentioned low-end RT-GPUs , yet it's obvious from their performance  that such products are unsuitable for promoting RayTraced gaming , thus a completely invalid statement by mr Wang.


But that isn't what he said, at least if you quoted him correctly. All he said is that he didn't think RT would "proceed" (a wording I'd like to see a second opinion of the translation of, as it's rather weird, and the original source of the interview is in Japanese - that looks like poor quality machine translation to me) until it could be offered across the board. In the source you provided the statement is also directly linked to AMD's refusal to enable software-only RTRT in pre-RDNA2 GPUs (which Nvidia did for 1000-series GPUs, which nobody ever used beyond benchmarks due to terrible performance). Given that he's making this statement in opposition to something, that something is then the assumption that it _will_ do so. Yet ... it hasn't. So, while he might have been wrong about the time frame (though technically it's far too early to judge, even if I don't believe it's accurate either), he wasn't _wrong_ in and of itself. His position is more accurate than the opposing one.


sith'ari said:


> You are wondering why i'm reacting :
> well , when this statement made everyone to react since this statement was reproduced everywhere back then , the only difference is that i kept remembering this statement in order to see if it will actually be validated or just a hype of the moment in order to undermine nVIDIA 's "mid-range RT-capable GPUs" .


... and? Is it not validated? It's only _very_ recently that RTRT support has been available across the budget range. Has it taken off yet? No. So ... he was either right, or pointing in the right direction, but too optimistic. I mean, isn't the failure of RT taking off until now _more_ of a refutation of Nvidia's early hype than this statement?


sith'ari said:


> Anyone who thinks that this statement was meaningless ,they should also have said that by the time that this statement was made , but no one undermined that statement back then. I'm simply one the very few who waited for 2 years to actually check the validity of such statement ,and you are critisizing me for doing such a thing ??


Because making a cautionary statement about a new tech is, frankly, a sensible thing to do. We should all be doing so, all the time, every time someone promises us a revolutionary new feature. Being cautious and expecting it to take time before it takes off (if ever!) is solely a good thing.


sith'ari said:


> Unfortunately , as i said ,i get very intrigued by such statements ,thus i'm uncapable to forget them.
> I pay attention at them during their hype , but i also pay attention when the hype has ended and the time of truth has come...


The problem is, you're entirely misrepresenting the direction of the statement he's made here. You're presenting it as if he said "once RTRT is available across the range, it will take off". That's a positive statement, a statement making a claim to something necessarily coming true at a given point. What he said was, to paraphrase slightly, "RTRT won't take off until it's available across the range". That's _not_ a positive statement, it's a cautionary statement, saying that something will (might) only come true if certain preconditions are met. It is also a statement, crucially, made in the context of another actor effectively stating that "RTRT is here right now and is revolutionizing graphics". That RTRT is now available across the board, yet still hasn't taken off? That's proof that his statement _didn't go far enough_. He wasn't wrong, he was too cautious, too optimistic of things working out - effectively too close to Nvidia's stance!

Put it this way: on a scale from 0 to 100, where 0 is "RTRT will never be the dominant graphics paradigm" and 100 is "RTRT arrived with the arrival of Turing, and revolutionized real-time graphics", this statement is, at best, at 50 - though arguably it's more like a 70-80, as it's implicitly promising wide RTRT adoption within a relatively close time frame - just not _right then_. Reality, the truth? Maybe 20? 30? Predicting the future is impossible outside of chance and broad statistics, but there's no indication currently that RTRT will become the dominant graphics paradigm any time soon. So if anything, the statement you're referencing was pointing in the right direction compared to its contemporaries (considering Nvidia were firmly at 100), but too optimistic still.

Also, speaking of hype, what was the most hyped up thing at that time - Nvidia's Turing RT support, or this statement? I'd expect the former to have had anywhere from 100x to 100 000x the attention, both in press and enthusiast discussions. Considering that, it seems you aren't paying attention to when the hype ends and the truth becomes visible? Because the truth is that RTRT still isn't more than a niche.


----------



## AusWolf (Apr 28, 2022)

sith'ari said:


> I don't expect anything my friend ,
> of course low-end products is obvious that they will face extreme challenges when they are assigned to perform extremely demanding tasks such as RayTracing acceleration.
> But what is obvious -apparently - for you or me , wasn't obvious for the VicePresident of Radeon Technologies Group who made such a statement.
> So now i'm simply commenting that this statement was completely invalid , since AMD did release those mentioned low-end RT-GPUs , yet it's obvious from their performance  that such products are unsuitable for promoting RayTraced gaming , thus a completely invalid statement by mr Wang.
> ...


So um... the low-end 6400 is crap because it can't run for example, Cyberpunk 2077 with RT Psycho at 100 fps? Let's not forget that we're talking about an entry-level product after all.


----------



## mechtech (Apr 28, 2022)

sith'ari said:


> I have to disagree on that .
> *For me , those RT-numbers are of extreme importance* since , i never forgot David Wang's(AMD senior VP of engineering for RTG) pompous statement during the Turing era.
> Back then ,AMD didn't have any response against nVIDIA 's RayTracing , yet mr. Wang , apparently in an effort to undermine what nVIDIA has been doing , made the following statement :
> 
> ...


I don't think I even have a game that has Ray Tracing lol  hmmmm  maybe Borderlands 3 does??  That's my newest game.


----------



## thelawnet (Apr 28, 2022)

The red spirit said:


> Then how come GTX 1050 Ti costs the same? We know that it's better, doesn't lack features and has superior decoder/encoder capabilities, but they are made new for same price. It might be just another weirdness of Lithuanian tech market, but I would rather get 1050 Ti instead of RX 6400. And regarding 1050 Ti, they used to sell for 140-170 EUR, now they are selling for ~200 EUR. despite Lithuania's 15+% yearly inflation, it seems that actual price of manufacturing barely increased. Basically 10% or less, which is nothing compared to material price increases, at least those  that media talks about. If material prices directly affected cards in full, then 1050 Ti would be 400-600 EUR + Lithuania's own inflation. That's clearly not the case. I guess that relationship of material prices and end product prices (graphics cards in this case) is more complicated.



1050 ti is exactly the same price in Indonesia also.

The 6400 is 40% faster than the 1050 ti, so I'd take the 6400. Plus newer, and more efficient.

It helps that I have a quicksync pcie4 cpu. But even without quicksync, as a budget gaming card at like $80, the rx 6400 is obviously hugely superior, because, like, it's a budget card for gaming, not a workstation card. And as mentioned it's MUCH faster.

The 1050 ti is two generations old, so by this point it should be selling for less, not more. Inflation isn't normal for old PC parts, they get steadily cheaper till they become ewaste.

The argument about why they are the same price is just market forces. 1050 ti = Nvidia, better brand recognition, better features. 6400 = AMD, worse features, better performance. Price ends up the same.

If there is reduced demand/more supply, they will fall.

At the moment, $200 for a $100 GPU is too much for me, so I skip.


----------



## sith'ari (Apr 28, 2022)

Valantar said:


> ......................................
> But that isn't what he said, at least if you quoted him correctly. All he said is that *he didn't think* RT would "proceed" (a wording I'd like to see a second opinion of the translation of, as it's rather weird, and the original source of the interview is in Japanese - that looks like poor quality machine translation to me) until it could be offered across the board.


I didn't quote him, GameDebate quoted him (I put the link, as you surely noticed), and the quote says:
"Utilisation of ray tracing games will not proceed unless we can offer ray tracing in all product ranges from low end to high end."
This statement:
1) was made when nVIDIA had launched their mid-range RT implementations, so his statement definitely implies things against nVIDIA
2) *He states*, *NOT* thinks (the part I underlined in your comment), that utilisation of RayTracing games will not proceed unless *we (AMD)* offer RayTracing from low end to high end.
If you don't think those remarks are a clear attempt to cut the hype around nVIDIA's Turing, then that's your estimation and I certainly have a different one.
Furthermore, today it was proven that not only did they try to cut the hype around nVIDIA's Turing back then but, more importantly, that they did it using false claims, as was again proven by today's RT numbers.


----------



## Valantar (Apr 28, 2022)

The red spirit said:


> Did I even claim that card is GPU?


Yep. Several times. But more importantly, you're _responding to people using that term_ while clearly switching between several meanings of it, making your arguments miss the point.


The red spirit said:


> Which was my main point


Then why on earth bring in MXM adapters, if your point (?) was "this is a bog-standard, regular GPU design"? Sorry, but this does not compute. You made a claim that the desktop Navi 24 GPUs were equivalent to third-party vendors designing MXM-to-PCIe adapters for mobile GPUs in desktop use cases. This claim is false. Period.


The red spirit said:


> Few connectors barely even consume a few watts. Savings are nill.


... not on a 25W mobile GPU. Which is what the 6300M is. "A few watts" is quite a notable difference in that scenario. And, if you look at the post you quoted, I said "even a watt or two makes a difference", which is _less_ than what you're saying here. The 6500M is 35-50W, where such a difference is less important, but can still allow for slightly higher clocks or better efficiency still.


The red spirit said:


> It's just a limitations of laptop oriented GPU, more than anything else. There were GT 710s with 4 video connectors and those cards shared literally the same TDP.


... and? TDP is a board-level designation that is essentially invented by chipmakers and/or OEMs. I'm talking about chip-level power savings from a design perspective. Every single one of those GT 710s has the same chip on board (though if some of them have more output hardware, it's likely that that consumes a tad more power).


The red spirit said:


> Considering that they removed art from box, had some other moronic things done, it would make sense to give it 4x connector. I doubt that there wouldn't be savings, but it's just AMD either being lazy or intentionally misleading. Hell, there were 1X GT 710s, 4X GT 710s.


... so: PCB design software lets you add in a ready-made PCIe x16 connector design. It's already there, already done, and adding it is trivial - they then just position it correctly along the PCB perimeter and connect the relevant traces to the right pins. Removing pins from that ready-made connector design, or god forbid making it from scratch for the nth time, would take _much more time_. A box design? The equivalent would be whether to copy in the company logo or to draw it from scratch. Which do you think they do? Also, graphic design is _just a tad_ simpler than PCB design. Not that that makes it easy or of low value, but the cost of screwing up a cardboard box design is rather lower than screwing up a modified connector on a thousand PCBs.


The red spirit said:


> Except other options are too CPU heavy on lower end hardware and nobody will pay for capture card, if their budget for GPU is RX 6400 level. Loss of ReLive is really bad.


... other GPUs with hardware encode/decode are too CPU heavy? I still think you're somehow failing to understand what was said: they said that there are plenty of other GPU alternatives on the market with those encode/decode blocks included, and that one option without them is thus not much of an issue. I don't necessarily agree entirely, but I don't see it as _that_ bad either. But I sincerely hope you can actually understand what was said now.


The red spirit said:


> Don't see how using mobile GPU for desktop doesn't count as "harvest". In normal times, low end desktop GPU would be made, without having to resort to mobile chip harvesting.


... there is no difference between a "mobile GPU" and a "desktop GPU" in this regard. That's my whole point. You're talking about _chips_, not full GPU designs. And chips are _always_ used in _both_ mobile and desktop. There is _nothing_ unique about this happening here - the only unique thing is the specific design characteristics of this die.


The red spirit said:


> I disagree, Ryzen created a lot of mindshare among "enthusiasts" (at least the ones that claim to be ones). Obviously not as much as nVidia or Intel, but they have it. And despite moderately poor press, they are banking on their customers not noticing cut down features. Will it work? I don't know, but I see how AMD is being somewhat misleading, for people, who aren't aware about missing features and crippled hardware.


If you look at GPU sales in that same period, you'll see that Ryzen mindshare essentially hasn't translated into Radeon mindshare at all - despite AMD becoming _much_ more competitive in GPUs in the intervening period, their market share has been stagnant. And, of course, the RX 5700 XT debacle was in the middle of this, which _definitely_ soured broad opinions on Radeon GPUs.


The red spirit said:


> Then how come GTX 1050 Ti costs the same? We know that it's better, doesn't lack features and has superior decoder/encoder capabilities, but they are made new for same price. It might be just another weirdness of Lithuanian tech market, but I would rather get 1050 Ti instead of RX 6400. And regarding 1050 Ti, they used to sell for 140-170 EUR, now they are selling for ~200 EUR. despite Lithuania's 15+% yearly inflation, it seems that actual price of manufacturing barely increased. Basically 10% or less, which is nothing compared to material price increases, at least those  that media talks about. If material prices directly affected cards in full, then 1050 Ti would be 400-600 EUR + Lithuania's own inflation. That's clearly not the case. I guess that relationship of material prices and end product prices (graphics cards in this case) is more complicated.


Because most 1050 Ti stock was likely produced years ago, especially the silicon. And even if it's brand new, producing a 1050 Ti die on Samsung 14nm is _much_ cheaper than producing a Navi 24 die on TSMC 6nm, even if other material costs are probably similar-ish. Also, literally every single design cost for the 1050 Ti is long since amortized, which has a major impact on margins. If you have two _entirely identical_ products, where one is brand-new, and the other has been in production for 3, 4, 5, or 6 years? The new one has to pay for materials, production costs, marketing costs, board design costs, silicon tape-out and QC costs, driver development costs, and more. The older one? Materials and production costs - everything else is long since paid off (though it _might_ theoretically still be marketed). Drivers are likely still in development (hopefully!), but development costs will most likely be much lower due to driver maturity and developer familiarity with the hardware. There are good reasons why older hardware is more affordable than newer hardware, beyond just inflation.
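The amortization point can be sketched as a toy calculation (all numbers invented purely for illustration, not actual AMD/Nvidia figures): one-time costs get spread over every unit shipped, so a long-lived product's per-unit cost converges on its marginal cost.

```python
# Toy model: per-unit cost = marginal cost + (one-time fixed costs / units shipped).
# All figures below are made up for illustration only.
def per_unit_cost(fixed_costs, marginal_cost, units_shipped):
    """Fixed costs (tape-out, board design, drivers, marketing) are
    amortized across every unit; marginal cost covers materials/production."""
    return marginal_cost + fixed_costs / units_shipped

# A hypothetical new card early in its life vs. an old card after years of volume:
new_card = per_unit_cost(fixed_costs=50_000_000, marginal_cost=60, units_shipped=500_000)
old_card = per_unit_cost(fixed_costs=50_000_000, marginal_cost=45, units_shipped=5_000_000)
print(round(new_card))  # 160 - fixed costs dominate early on
print(round(old_card))  # 55  - mostly marginal cost once amortized
```

Even with a lower marginal cost, the old product's advantage here comes mostly from the denominator, which is the gist of the argument above.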


----------



## ModEl4 (Apr 28, 2022)

sith'ari said:


> I have to disagree on that .
> *For me , those RT-numbers are of extreme importance* since , i never forgot David Wang's(AMD senior VP of engineering for RTG) pompous statement during the Turing era.
> Back then ,AMD didn't have any response against nVIDIA 's RayTracing , yet mr. Wang , apparently in an effort to undermine what nVIDIA has been doing , made the following statement :
> 
> ...


First I thought, oh, it's just sarcasm because your delivery was a little bit exaggerated - but you meant it.
The article dates from 2018...
He was right (probably the *we* he used in his comment was meant as *we as an industry*, not that it matters much), and it's exactly like @Valantar said; he probably meant it like this: "this new feature won't take off until a lot of people have access to it".
Now, with the consoles (a major product segment for AMD) and all the product lines supporting raytracing, there is less resistance from game developers to allocating money/time/effort to this feature.
If a game supports FSR and you use performance mode (540p internal → 1080p on screen), or you tone down settings a little and use FSR quality mode (720p internal), and the raytracing utilization isn't heavy, you can achieve very playable frame rates.
It's not a good option at all I will agree, but the option is there.
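The resolution arithmetic behind those FSR modes is simple to check; here is a minimal Python helper assuming FSR's published per-axis scale factors (Quality 1.5x, Balanced 1.7x, Performance 2.0x):

```python
# Assumed per-axis FSR scale factors (output resolution / internal resolution).
FSR_SCALE = {"quality": 1.5, "balanced": 1.7, "performance": 2.0}

def internal_resolution(out_w, out_h, mode):
    """Return the internal render resolution FSR upscales from for a given mode."""
    s = FSR_SCALE[mode]
    return round(out_w / s), round(out_h / s)

print(internal_resolution(1920, 1080, "performance"))  # (960, 540)
print(internal_resolution(1920, 1080, "quality"))      # (1280, 720)
```

So a 1080p target in performance mode really does mean the GPU is only shading a 540p image, which is why the mode helps weak RT hardware so much.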
Switch users (an extremely successful console) are perfectly fine with similar resolutions and worse picture quality when playing on a TV, so I'm sure there are a lot of people out there who won't mind the RX 6400's RT performance if the game is good (TPU readers aren't typical users).
Did he say it because they had nothing to compete with back then, and probably because he wanted to kill the hype Nvidia was trying to generate? Yes, sure.
Did he play a negative role by trying to delay developers' adoption of a graphics feature that will ultimately help advance the visual fidelity of the industry? OK, maybe even that, but don't you think your reaction is a bit too much?
Now, as engineering goes, he's an asset for AMD. Sure, RTG had some judgment lapses with Navi 24 and with pricing strategy in general, but achieving 2.81GHz on N6, and only a 107mm² die size for the transistor budget (not to mention that RDNA2's raytracing implementation, although weak on performance, is extremely efficient in the transistor budget it adds), are very good indications of the work happening in RTG imo.
Is RDNA2 worse in raytracing than 2018 Turing? Sure, and it's worse than Turing in other things too, but Nvidia isn't an easy opponent...


----------



## sith'ari (Apr 28, 2022)

AusWolf said:


> So um... the low-end 6400 is crap because it can't run for example, Cyberpunk 2077 with RT Psycho at 100 fps? Let's not forget that we're talking about an entry-level product after all.


I didn't say anything about the RX 6400 being crap; I commented on a past statement from David Wang, AMD's senior Vice President of engineering for the Radeon Technologies Group.


----------



## catulitechup (Apr 28, 2022)

AusWolf said:


> So um... the low-end 6400 is crap because it can't run for example, Cyberpunk 2077 with RT Psycho at 100 fps? Let's not forget that we're talking about an entry-level product after all.


Yeah, it is crap, but for other reasons: cut-down features, performance on PCIe Gen 3 (most users have these mainboards), and the high price. As for raytracing, meh...

Maybe it's useful for some users, like SFF builds, but personally I won't buy anything until Nvidia responds to this and Intel Arc establishes itself in the market.

Meanwhile, don't give any money to these scumbag companies.


----------



## Valantar (Apr 28, 2022)

sith'ari said:


> I didn't quote him , GameDebate quoted him (i put the link as you surely noticed) and the quote says :
> """"_Utilisation of ray tracing games will not proceed unless we can offer ray tracing in all product ranges from low end to high end.”"""
> This statement :
> 1)was made when nVIDIA had launched their mid-range RT-implementations , so his statement definately implies things against nVIDIA
> ...


.... you wrote out their quote in text. In your post. That's literally you quoting him. You're quoting him, even if you're quoting _their_ quoting of his statement, because you're reproducing his words (through their translation). That's the same thing (barring, of course, any misrepresentations in their quote).

As for the rest:
1: Yes. That I entirely agree with this should have been plenty clear from my post.
2: That's literally the same thing. He's commenting on the future. The best he, as any human, can do is to state his opinion and intention. Unless he is literally all-powerful, he does not have the ability to make deterministic statements about the future.
3? You apparently stopped using numbers, but ... yeah. So. Did you read what I wrote? It doesn't seem like it. Because if you did you would see that I absolutely think that it was that, but that I also think that _reality_ has "cut the hype from Nvidia's Turing" _much_ more than this statement. Going back to my 0-100 scale: Nvidia was saying "it's 100!", he was saying "probably more like 70-80", and reality has since kicked down the door with a solid 20-30. Was he wrong? In a naïvely absolutist way where the absolute wording of an assessment matters more than its direction and overall gist, sure. But the direction of his argument is _far_ more accurate than Nvidia's statement at the time. So, you're arguing that it was bad that he "cut" the Turing hype, yet ... reality has shown that hype to be complete BS? What are you so annoyed about? That an AMD exec made a vaguely accurate, but overly optimistic prediction that contradicted Nvidia's marketing? After all, if it annoys you that a corporate executive made a dubious claim back then, shouldn't you be even more annoyed at Nvidia execs trumpeting the wonders of RTRT and how it would revolutionize real-time graphics? Because their statements at the time are much, much further from reality than the statement you quoted here.


----------



## sith'ari (Apr 28, 2022)

ModEl4 said:


> ......
> He was right, (probably the *w*_*e*_ he used in his comment was meant as *we as an industry ,*not that it matters much) and it's exactly like @Valantar said, he probably meant it like this : "this new feature won't take off until a lot of people have access to it".
> .....
> *Did he said it because they had nothing to compete back then and probably because he wanted to kill the hype Nvidia was trying to generate back then, yes sure*.
> ...


Nice answer, but don't you think Mr. Wang would be extremely, ehm... let's say altruistic, if he used the term "we" to refer to nVIDIA as well? Come on, mate - he speaks on behalf of nVIDIA? How plausible do you think that is?
Who made him a representative of nVIDIA? How altruistic indeed! Of course by "we" he meant his own company, *otherwise he should have made it clear that he was speaking on behalf of the entire industry*!
Moreover, you also agree that he said those things in order to kill the hype around nVIDIA, who, coincidentally, didn't have any low-end RT GPUs at the time...


----------



## AusWolf (Apr 28, 2022)

The red spirit said:


> I read everything, either your English failed or you said 6400 and 6500XT have decoding. What else would your question "How many more options do you need?" imply here, in case of non recording capable cards?


OK, let me simplify: The 6500 XT and 6400 do *not* have a video encoder (and in fact, the GT 710 / 730 duo don't, either). *Everything else* released in the last couple years does. How many more options do you want?



The red spirit said:


> And so I can too, but it's wasteful (makes CPU work at full blast basically) and depending on system literally impossible without crazy frame skipping. Decoding of popular codecs shouldn't be some "premium" feature. People used to buy MPEG-2 cards in the past, at this rate, we might need VP9 or AV-1 cards again, because AMD shits on their customers.


Every fairly modern CPU can do it at relatively low usage. Needing an AV-1 decode capable GPU for watching 4K Youtube on a PC with a sh*t CPU is a need that you invented.



The red spirit said:


> Or you have BD rips, but don't have space and need to compress with minimal quality losses and don't want it to take days. That's why you get encoding capable GPU or basically anything new, except 6400 or 6500 XT.


If you need that feature, fair enough. I just don't think many people do.



The red spirit said:


> How wrong you are. AMD A4 APUs are the lowest of the low, along with Athlon X2s. Athlon X4s were mid end chips. Comparable to Intel i3s of the time. And at the time lowest end new chip was actually Sempron 140, single core, K10 arch, AM3 chip.


Not really. A Core i3-4330, which is 3 years older, can beat it with ease.



The red spirit said:


> It's not even close to Atoms, those things couldn't play YT, when they were new. Athlon X4 can play YT, but at 1080p only. At 1440p, another codec is used and then it drops some frames. I could also use enhancedx264ify and play everything with GPU alone, but hypothetically AMD could have made a proper card and I could have just dropped it in and played even 8k perfectly fine. It's just needless e-waste to release gimped cards, that are useless.


I wasn't comparing it to Atoms. I was merely stating that you're holding the Athlon X4 in too high regard.



The red spirit said:


> It's my non daily machine for light usage and 1080p on it works fine, so I won't upgrade. But people in the past bought GT 710s to make YT playable on say Pentium D machines. If that's what buyer needs and don't care about gaming at all, then that's fine purchase. And including full decoding capabilities didn't really cost any extra. If GT 710 could do it, then why RX 6400 can't? That's just AMD selling gimped e-waste to dumb people. Mark my words, once GPU Shortage ends, suddenly AMD will stop pulling this crap on their GT 710 equivalent cards and they will rub it to RX 6500 XT customers and customers will suck it up. AMD is straight up preying on stupid and careless with such nonsense. And better yet, no they won't ever give you full decoding capabilities, but will start to market as "premium" feature only for 6900 tier cards. So if you ever need those capabilities, now you will become their "premium" customer. You will be forced to buy overpriced shit with artificial demand. AMD is precisely cashing in during shortage as much as they can and no they aren't hurting due to it, they are making astronomical profits like never before. There's nothing else we can do, other than boycotting shitty products and picking their competitor products instead.


What are you talking about? The 710 can do MPEG-1, MPEG-2, VC-1 and H.264 decode. The 6400 / 6500 XT can do all that, *and* H.265. The only thing it can't decode is AV-1 - neither can the 710 by the way.



The red spirit said:


> That's not exactly what I meant. It's just about artificially creating demand and selling low end poo-poo for huge premium. It's not making money, it's straight up daylight robbery. AMD pulled the same crap when they launched Ryzen 5600X and 5800X with huge premiums, when Intel sold 20% slower parts for literally half the price. And they dared to claim that 5600X was some value wonder. Only to release 5600, 5500, when people realized that Intel has some good shiz too. They also intentionally didn't sell any sensible APUs, only 5600G or 5700G, also for nearly twice what they were actually worth, but fanboys didn't question that and instead bought as much as AMD managed to make. Had they released 5400G (hypothetical 4C8T rDNA 2 APU), it would have outsold 5600G/5700G by times, but why do this, if they can artificially limit their line up and convince buyers to buy way too overpriced stuff instead? That's exactly why I call this toxic capitalism, because goods can be available, but companies don't make them, due to lower, but still reasonable margins. If you look at their financial reports, they made an absolute killing during shortage and pandemic, so that basically confirms that they had huge mark-ups. That also explains why RX 6400 and 6500 XT lack features, lack performance and are overpriced, crappy products. 6500 XT is literally that poo, that RX 570 is equivalent of it, but RX 570 was made ages ago, wasn't gimped and cost way less. Even with inflation included, there's no way that 6400 and 6500 XT must cost as much as they do. Their mark-up is as high as 40%, if not more.


I see what you mean, and I kind of agree with the sentiment.
It's only that the 6400 and 6500 XT aren't the only symptoms of this "wild capitalism". I don't think there's any GPU on the market that sells for a reasonable price at the moment. I mean, what other options do you have? A 1030 for £100? A 1050 Ti for £150-200? Or a used 1650 from ebay for £250-300? Hell no! Ampere doesn't even exist in this range. I'd much rather buy something new with warranty for this price.



The red spirit said:


> Why not Quadro T600? It's like GTX 1650 LE, but low profile and costs less than 6400. And since 6400 is slower than 1050 Ti, if you can find 1050 Ti low profile version, that's literally the same thing, but you can overclock it, record gameplay, stream and VP9 dec/enc, h265 dec/enc. 1050 Ti is just better. 1650 is closer to 6500 XT, but real 6500 XT competitor is 1650 Super.


Because 1. it actually costs more than the 6400, 2. not being a consumer card, it's a bit problematic to find one, 3. the 6400 isn't slower than the 1050 Ti, 4. I have no intention to overclock. It's going into a home theatre PC with a Ryzen 3 in it, so its whole job will be to put a 4K 60 Hz image on my TV through HDMI. Being capable of some light gaming is only a plus.



sith'ari said:


> i didn't say anything about RX6400 being a crap , i commented on a past comment from  David Wang , AMD's senior VicePresident of engineering for Radeon Technologies Group


Ah OK! I read too much into the post you commented on. 

Obviously, the 6400 is not going to be for you, and there's nothing wrong with that. I wouldn't want one as my main card, either, to be fair.


----------



## sith'ari (Apr 28, 2022)

Valantar said:


> .....
> 3? You apparently stopped using numbers, but ... yeah. So. Did you read what I wrote? It doesn't seem like it. Because if you did you would see that I absolutely think that it was that, but that I also think that _reality_ has "cut the hype from Nvidia's Turing" _much_ more than this statement. Going back to my 0-100 scale: Nvida was saying "it's 100!", he was saying "probably more like 70-80", and reality has since kicked down the door with a solid 20-30. Was he wrong? In a naïvely absolutist way where the absolute wording of an assessment matters more than its direction and overall gist, sure. But the direction of his argument is _far_ more accurate than Nvidia's statement at the time. So, you're arguing that it was bad that he "cut" the Turing hype, yet ... reality has shown that hype to be complete BS? What are you so annoyed about? That an AMD exec made a vaguely accurate, but overly optimistic prediction that contradicted Nvidia's marketing? After all, if it annoys you that a corporate executive made a dubious claim back then, shouldn't you be even more annoyed at Nvidia execs trumpeting the wonders of RTRT and how it would revolutionize real time graphics? Because their statements at the time are much, much further from reality than the statement you quoted here.


nVIDIA with Turing always promoted ray tracing *combined with DLSS*, never individually. That's something most of the tech press tends to ... forget for some reason.
With DLSS enabled, ray tracing is feasible even on cards such as an RTX 2060.
check at 46:35 : 







AMD never did anything similar, meaning that their current low-end RT offerings are basically RT-incapable.


----------



## Valantar (Apr 28, 2022)

sith'ari said:


> Nice answer , but don't you think that mr Wang would be extremely ehmm ... let's say altruistic , if he uses the term "we" by referring to nVIDIA as well ?? come on mate ? he speaks on behalf on nVIDIA ? how possible do you thing this is ?
> who put him representative of nVIDIA ? How altruistic indeed  !!of course by "we" he meant his own company *otherwise he should make it clear that he speaks on behalf of the entire industry *!!
> Moreover , since you also agree that he did say those things in order to kill hype from nVIDIA , who , coincidentally didn't have any low-end RT-GPUs by then...


It is entirely reasonable for "we" in a statement like that to be taken as "we as an industry", as AMD isn't a game developer and can thus only ever represent a portion of what is necessary for this to happen. AMD (or Nvidia, or any other single actor) has zero control over whether RT will become the dominant graphical paradigm.


sith'ari said:


> nVIDIA with Turing always promoted RayTracing *combined with DLSS* , never individually .That's something most of the tech-press tends to ... forget for some reason.
> With DLSS enabled the RayTracing is feasable even with cards such as an RTX2060
> check at 46:35 :
> 
> ...


So ... uh ... Turing was announced in August 2018. Why are you posting a video from January 2019 as proof that "Nvidia always promoted RT combined with DLSS"? That presentation is literally a way for Nvidia to demonstrate that their RT performance would be less terrible once DLSS arrived, as it wasn't out yet at that point. Also, they've routinely promoted both: RT, _and_ RT+DLSS. DLSS was also not mentioned whatsoever at the Turing launch, while RT was a huge focus. So you're entirely wrong in claiming that Nvidia never promoted RT on its own. That is factually untrue. They started adding DLSS to the marketing once it became known just how poor RT performance was, then toned it down after the reception of DLSS was lacklustre, then stepped it up with DLSS 2.0 again.

Also, AMD never did anything similar? Really? Have you heard of FSR? RSR? FSR 2.0? They're doing exactly the same thing: starting off saying "we have hardware RTRT", then adding upscaling, then intermittently promoting one or both. Also, in regards to this actual topic, have you seen AMD promote the 6500 XT's or 6400's RT capabilities heavily? 'Cause from what I can see from the 6500 XT's product page, the only mention of RT is a spec among several other specs ("16 Compute Units & Ray Accelerators"), while FSR has a huge full page width banner. That certainly doesn't look like they're promoting RT heavily, and _definitely_ not without upscaling. Heck, they're promoting FSR explicitly, while RT is barely mentioned.

But back to your initial argument here: you've still not shown how the statement you're so worked up about is any more untrue than Nvidia's initial marketing hype - heck, even that video starts out with a _looooong_ spiel on how RTRT is the future of graphics, a new paradigm, a way to make graphics _fundamentally better_ (it just needs some help along the way!). It takes quite some time in that presentation before DLSS is mentioned.


----------



## AusWolf (Apr 28, 2022)

I agree with this guy.

The 6400 isn't a good value because it's good. It's a good value because other low power options are either 1. significantly worse (1030), or 2. significantly more expensive with no performance advantage (1650).


----------



## catulitechup (Apr 28, 2022)

AusWolf said:


> OK, let me simplify: The 6500 XT and 6400 do *not* have a video encoder (*and in fact, the GT 710 / 730 duo don't, either*). Everything else released in the last couple years does. How many more options do you want?


These GPUs, if based on GK208*, do have NVENC; NVIDIA only cut NVENC capabilities with GP108-based cards like the GT 1030.



> *https://www.techpowerup.com/gpu-specs/geforce-gt-710.c3027 - https://www.techpowerup.com/gpu-specs/geforce-gt-730.c1988



In this respect, the GK208-based GT 710 and GT 730 are better than the RX 6400.


----------



## AnotherReader (Apr 28, 2022)

Great review W1zzard! Thanks for adding the GT 1030. A few things are clear from this review and previous ones.

- The 6500 XT is hampered by the PCIe bottleneck. Its performance relative to the 6400 is below what it would have been without the self-imposed bottlenecks.
- Similarly, AMD's APUs have been prevented from reaching their full potential by the lack of L3 cache for the IGP. The 680M in the 6800HS clocks close to the 6400, yet the 6400 is nearly 30% faster.

I used numbers from a recent 3090 Ti review for the 6800 and 6900 XT. I chose the 6800 and 6900 XT because the difference in CUs between those two is the same as between the 6400 and 6500 XT. It isn't a perfect comparison, because the 6800 disables an entire shader engine and is, in some ways, more of a cut than the 6400. For clock speeds, I used the reviews for the reference 6800 and 6900 XT as well as the recent review of the Sapphire 6500 XT. The methodology has changed since those earlier reviews, but I'm hoping the ratios remain the same. All numbers for the 6400 are at 1080p, while the resolution for the 6800 comparison is listed in the table. The "6900 vs 6800 v2" column estimates the relative speed of a 6900 XT if its clock speed advantage over the 6800 were the ~20% the 6500 XT has over the 6400, instead of the actual ~3%. This is, of course, a maximal estimate, and doesn't account for L3 misses. Table 2 has been sorted by the 6500/6400 speedup.


| GPU | Clock Speed (MHz) |
| --- | --- |
| RX 6400 | 2285 |
| RX 6500 XT | 2742 |
| RX 6800 | 2225 |
| RX 6900 XT | 2293 |


| Game | 6400 | 6500 XT | 6500 vs 6400 | Resolution | 6800 | 6900 XT | 6900 vs 6800 | 6900 vs 6800 v2 | Performance increase |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Deathloop | 26.5 | 30.1 | 1.14 | 2160 | 104.1 | 119.1 | 1.14 | 1.33 | 17% |
| Far Cry 6 | 41 | 48 | 1.17 | 2160 | 64.5 | 80.3 | 1.24 | 1.45 | 24% |
| Doom Eternal | 61.4 | 73.3 | 1.19 | 2160 | 130.4 | 161.2 | 1.24 | 1.44 | 21% |
| Battlefield V | 75.1 | 92 | 1.23 | 2160 | 110.2 | 128.3 | 1.16 | 1.36 | 11% |
| Elden Ring | 34.7 | 42.6 | 1.23 | 2160 | 51.1 | 60.9 | 1.19 | 1.39 | 13% |
| Forza Horizon 5 | 27.1 | 33.5 | 1.24 | 2160 | 70.1 | 88.4 | 1.26 | 1.47 | 19% |
| Metro Exodus | 43.9 | 55.6 | 1.27 | 1440 | 133.5 | 165.7 | 1.24 | 1.45 | 14% |
| Divinity Original Sin II | 68.9 | 89 | 1.29 | 2160 | 104.7 | 130.5 | 1.25 | 1.45 | 12% |
| Dying Light 2 | 31.2 | 41 | 1.31 | 1440 | 85.4 | 108.5 | 1.27 | 1.48 | 13% |
| The Witcher 3 | 46.7 | 62.1 | 1.33 | 1080 | 178.8 | 230.5 | 1.29 | 1.50 | 13% |
| Cyberpunk 2077 | 19.2 | 25.6 | 1.33 | 1440 | 65.9 | 81.8 | 1.24 | 1.45 | 8% |
| Borderlands 3 | 42.3 | 56.9 | 1.35 | 2160 | 56.4 | 72.1 | 1.28 | 1.49 | 11% |
| Red Dead Redemption 2 | 24.8 | 33.6 | 1.35 | 1080 | 100 | 127.7 | 1.28 | 1.49 | 10% |
| Control | 28.4 | 38.8 | 1.37 | 1440 | 82.3 | 103.8 | 1.26 | 1.47 | 7% |
| Guardians of the Galaxy | 29.7 | 40.9 | 1.38 | 2160 | 69.1 | 84.3 | 1.22 | 1.42 | 3% |
| F1 2021 | 49.8 | 85.7 | 1.72 | 2160 | 137.8 | 172 | 1.25 | 1.45 | -16% |

Other than the weird numbers for F1 2021, the trend is clear: in many games, performance could have been 10 to 20 percent higher with a wider PCIe connection and a slightly larger L3 cache or a 96-bit bus.
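If anyone wants to check my arithmetic, the "v2" and "performance increase" columns boil down to a few lines of Python. This is just a sketch of my napkin math; the clock speeds and the Deathloop fps figures are taken straight from the tables above, nothing official:

```python
# Clock speeds (MHz) from Table 1.
clocks = {"RX 6400": 2285, "RX 6500 XT": 2742, "RX 6800": 2225, "RX 6900 XT": 2293}

small_clock_ratio = clocks["RX 6500 XT"] / clocks["RX 6400"]  # ~1.20
big_clock_ratio = clocks["RX 6900 XT"] / clocks["RX 6800"]    # ~1.03

def v2_estimate(fps_6800: float, fps_6900xt: float) -> float:
    """Scale the measured 6900 XT / 6800 ratio up to a ~20% clock delta."""
    measured = fps_6900xt / fps_6800
    return measured * small_clock_ratio / big_clock_ratio

def potential_increase(fps_6400: float, fps_6500xt: float,
                       fps_6800: float, fps_6900xt: float) -> float:
    """How much faster the 6500 XT 'should' be over the 6400 vs. what it actually is."""
    actual = fps_6500xt / fps_6400
    return v2_estimate(fps_6800, fps_6900xt) / actual - 1

# Deathloop row from Table 2:
print(round(v2_estimate(104.1, 119.1), 2))                        # 1.33
print(round(potential_increase(26.5, 30.1, 104.1, 119.1) * 100))  # 17 (%)
```

As noted, this assumes perfect clock scaling, so it's an upper bound that ignores L3 misses and the narrower PCIe link.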


----------



## AusWolf (Apr 28, 2022)

catulitechup said:


> this gpus if stay based in gk208* and have nvenc but nvidia cut nvenc capabilities with gp108 like gt 1030
> 
> 
> 
> on this side gt 710 and gt 730 gk208 are better than rx 6400


I don't see the GK208 listed here. And you're right, the GP108 (GT 1030) doesn't have NVENC, either. How was that not a problem then, but suddenly a problem now with the 6400? (a question to everyone)


----------



## ModEl4 (Apr 28, 2022)

sith'ari said:


> Nice answer , but don't you think that mr Wang would be extremely ehmm ... let's say altruistic , if he uses the term "we" by referring to nVIDIA as well ?? come on mate ? he speaks on behalf on nVIDIA ? how possible do you thing this is ?
> who put him representative of nVIDIA ? How altruistic indeed  !!of course by "we" he meant his own company *otherwise he should make it clear that he speaks on behalf of the entire industry *!!adva
> Moreover , since you also agree that he did say those things in order to kill hype from nVIDIA , who , coincidentally didn't have any low-end RT-GPUs by then...


I don't know the man, so I can't say if he's the altruistic type. jk
Usually the character types who advance in multinational companies have specific traits that I don't value.
But he's a professional with a career; you could see him jumping to Apple or wherever in a few years, just like Raja went from AMD to Intel.
So he is part of an industry (if I remember correctly he worked at SGI, then ArtX; I may be wrong, I'm too bored to look it up) and plays an important role in the second-largest graphics solution provider. Also, it was 2018 and AMD hadn't even launched RDNA 1 yet (Q3 2019), so if by *we* he meant *AMD*, the message translates as follows:
_"Utilisation of ray tracing games will not proceed unless _*AMD*_ can offer ray tracing in _*all*_ product ranges from low end to high end"_
Something lost in translation, don't you think?
Anyway, like I said in my first reply, it's not even that important.
Recently, when Intel revealed Arc, AMD threw shade at Intel's low-end 128 EU chip, pointing out that although it uses the same N6 process, their own chip is faster while at the same time much smaller.
I remember the days when a higher transistor count meant a design had more features or was more forward-looking, and marketing teams used it to promote their tech. AMD did the opposite here, so the shade was aimed at Intel's engineering team specifically. Kinda below the belt, and I bet it was not welcomed at all!
Didn't that bother you more?


----------



## catulitechup (Apr 28, 2022)

AusWolf said:


> I don't see the GK208 listed here. And you're right, the GP108 (GT 1030) doesn't have NVENC, either. How was that not a problem then, but suddenly a problem now with the 6400? (a question to everyone)


Yeah, this list doesn't show GK208 (it appeared in earlier versions), but back to the topic: NVIDIA began cutting NVENC on desktop with GP108, aka the GT 1030.

For this reason, don't buy a GT 1030. Back then I had a GK208-based GT 630 for my Linux Wine YouTube channel (very useful at the time).


----------



## thelawnet (Apr 28, 2022)

AusWolf said:


> I don't see the GK208 listed here. And you're right, the GP108 (GT 1030) doesn't have NVENC, either. How was that not a problem then, but suddenly a problem now with the 6400? (a question to everyone)


A few reasons IMO:


- There was mass hysteria about the 6500 XT's price from tech reviewers, even though it had a relatively honest one, not a fake one like other cards.
- Those tech reviewers whip up hysteria to get clicks on their videos.
- Nobody cared about the GT 1030: https://www.techpowerup.com/reviewdb/Graphics-Cards/NVIDIA/GT-1030/ because it came out in 2017, when you had the RX 550 for the same price with encoding, the RX 560 for $20 more, the 1050, the 1050 Ti, etc. (a huge raft of cheap cards), so the 1030 was an irrelevant product.
- Encoding is probably more important now, with more people streaming, doing TikTok, etc.
- The RX 6400 is $180 but should be $80. While that reflects the rest of the market and is somewhat defensible, the absolutely higher price means you have a reasonable expectation of more features. It's just fairer to the consumer to give them a more complete product.

Of course there is a reason (this is just recycled laptop hardware), but let's not pretend it's totally unreasonable to be more demanding about the ONLY current GPU under $200 than when there were about 20 of them.


----------



## Valantar (Apr 28, 2022)

AusWolf said:


> I don't see the GK208 listed here. And you're right, the GP108 (GT 1030) doesn't have NVENC, either. How was that not a problem then, but suddenly a problem now with the 6400? (a question to everyone)


I have a vague memory of it being discussed as a drawback of it at launch, but (and I'm guessing here) probably nobody took it seriously as a gaming or "do-it-all" GPU back when it launched, while now people are doing so for Navi 24 GPUs thanks to silly pricing. At $80 people were probably just happy to see it kind of do 3D.


----------



## The red spirit (Apr 28, 2022)

thelawnet said:


> 1050 ti is exactly the same price in Indonesia also.
> 
> The 6400 is 40% faster than the 1050 ti, so I'd take the 6400. Plus newer, and more efficient.
> 
> ...


I don't see your 40% claims:









Literally the same; the situation will be even more in favour of the 1050 Ti in a Gen 3 system.


----------



## AnotherReader (Apr 28, 2022)

The red spirit said:


> I don't see your 40% claims:
> 
> 
> 
> ...


There's something wrong here. The 1050 Ti is roughly the equivalent of the 680M, i.e. the IGP in the new 6000-series APUs, and the 6400 is at least 30% faster than that. Hardware Unboxed also shows it to be nearly 40% faster.


----------



## thelawnet (Apr 28, 2022)

The red spirit said:


> I don't see your 40% claims:
> 
> 
> 
> ...


already posted in this thread










40%

idk what site that is you posted


----------



## Valantar (Apr 28, 2022)

The red spirit said:


> I don't see your 40% claims:
> 
> 
> 
> ...


Take a look at the Hardware Unboxed video I linked back on page 5. On average across their 12 games at their chosen settings (tuned for decent low-end performance, not Ultra or anything silly), the 6400 averages 60 fps (48 fps 1% lows) vs 43 fps (35 fps 1% lows) for the 1050 Ti. That average is significantly pulled down by a couple of titles where the 6400 performs atrociously (Rainbow Six Siege, F1 2021, Doom Eternal); elsewhere it's typically faster still. That's a 40% (37% in 1% lows) advantage for the 6400. I haven't watched your video so I don't know which games are tested or how many, but I tend to trust HWUB's results over random YouTubers.


Edit: whether you say "40%" or "28%" depends on the wording, as percentages are relative to your starting point. With the 1050 Ti as 100%, the 6400 is 40% faster; with the 6400 as 100%, the 1050 Ti is 28% slower. Both statements describe the same gap.
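A two-line sketch of that arithmetic, using HWUB's 60 and 43 fps averages quoted above (just to illustrate the baseline point, not new data):

```python
fps_6400, fps_1050ti = 60, 43  # HWUB 12-game averages

faster = fps_6400 / fps_1050ti - 1  # gap with the 1050 Ti as the baseline
slower = 1 - fps_1050ti / fps_6400  # same gap with the 6400 as the baseline

print(f"6400 is {faster:.0%} faster")     # 40% faster
print(f"1050 Ti is {slower:.0%} slower")  # 28% slower
```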


----------



## The red spirit (Apr 28, 2022)

Valantar said:


> Yep. Several times. But more importantly, you're _responding to people using that term_ while clearly switching between several meanings of it, making your arguments miss the point.


Whatever then, I know terminology well. No need to be annoying.



Valantar said:


> Then why on earth bring in MXM adapters, if your point (?) was "this is a bog-standard, regular GPU design"? Sorry, but this does not compute. You made a claim that the desktop Navi 24 GPUs were equivalent to third-party vendors designing MXM-to-PCIe adapters for mobile GPUs in desktop use cases. This claim is false. Period.


Because that used to be how laptop cards were connected. As far as I know there was an MXM 1060, so it's quite recent stuff, and Alibaba specials used MXM-to-PCIe adapters to basically create what the RX 6400 and RX 6500 XT are. Conceptually the same shit.



Valantar said:


> ... not on a 25W mobile GPU. Which is what the 6300M is. "A few watts" is quite a notable difference in that scenario. And, if you look at the post you quoted, I said "even a watt or two makes a difference", which is _less_ than what you're saying here. The 6500M is 35-50W, where such a difference is less important, but can still allow for slightly higher clocks or better efficiency still.


This is the 6400; a watt or two of difference is nothing.



Valantar said:


> ... and? TDP is a board-level designation that is essentially invented by chipmakers and/or OEMs. I'm talking about chip-level power savings from a design perspective. Every single one of those GT 710s have the same chip on board (though if some of them have more output hardware it's likely that that consumes a tad more power).


And that shows that you don't save anything. Those connectors are in the milliwatt range.



Valantar said:


> ... so: PCB design software lets you add in a ready-made PCIe x16 connector design. It's already there, already done, and adding it is trivial - they then just position it correctly along the PCB perimiter and connect the relevant traces to the right pins. Removing those pins from that ready-made connector design, or god forbid making it from scratch for the nth time, would take _much more time_. A box design? The equivalent would be whether to copy in the company logo or to draw it from scratch. Which do you think they do? Also, graphical design is _just a tad_ simpler than PCB design. Not that that makes it easy or of low value, but the cost of screwing up a cardboard box design is rather lower than screwing up a modified connector on a thousand PCBs.


lol I can saw off that connector, dude; there's no real design work to be done. You could literally change it in Paint and it would work.


Valantar said:


> ... *other GPUs with hardware encode/decode are too CPU heavy? I still think you're somehow failing to understand what was said*: they said that there are plenty of other GPU alternatives on the market with those encode/decode blocks included, and that one option without them is thus not much of an issue. I don't necessarily agree entirely, but I don't see it as _that_ bad either. But I sincerely hope you can actually understand what was said now.


And they are either much slower, old (therefore lacking encoding/decoding) or more expensive. Only the 1050 Ti is truly competitive; it's the only alternative. And I wrote that CPU recording is too heavy on the *CPU*. Do you even read what I write? You might as well not reply if you don't.



Valantar said:


> .... there is no difference between a "mobile GPU" and "desktop GPU" in this regard. That's my whole point. You're talking about _chips_, not full GPU designs. And chips are _always_ used in_ both_ mobile and desktop. There is _nothing_ unique about this happening here - the only unique thing is the specific design characteristics of this die.


Which is literally what makes a mobile chip. You have no point to make, other than to troll and nitpick.



Valantar said:


> If you look at GPU sales in that same period, you'll see that that Ryzen mindshare essentially hasn't translated into Radeon mindshare at all - despite AMD becoming _much_ more competitive in GPUs in the intervening period, their market share has been stagnant. And, of course, the RX 5700 XT debacle was in the middle of this, whic _definitely_ soured broad opinions on Radeon GPUs.


Oh well.



Valantar said:


> Because most 1050 Ti stock is likely produced years ago, especially the silicon. And, of course, even if it's brand new, producing a 1050 Ti die on Samsung 14nm is _much_ cheaper than producing a Navi 24 die on TSMC 6nm, even if other materials costs are probably similar-ish.


Wasn't their production restarted? Even the 730 made a legendary comeback. 



Valantar said:


> And, of course, literally every single design cost for the 1050 Ti is long since amortized, which has a major impact on margins. If you have two _entirely identical _products, where one is brand-new, and the other has been in production for 3,4,5,6 years? The new one has to pay for materials, production costs, marketing costs, board design costs, silicon tape-out and QC costs, driver development costs, and more. The older one? Materials and production costs - everything else is long since paid off (though it _might_ theoretically still be marketed). Drivers are likely still in development (hopefully!), but development costs will most likely be much lower due to driver maturity and developer familiarity with the hardware. There are good reasons why older hardware is more affordable than newer hardware, beyond just inflation.


Even more reasons not to make e-waste like RX 6400 then.



thelawnet said:


> already posted in this thread
> 
> 
> 
> ...











AMD Radeon RX 6400 Specs: AMD Navi 24, 2321 MHz, 768 Cores, 48 TMUs, 32 ROPs, 4096 MB GDDR6, 2000 MHz, 64 bit (www.techpowerup.com)
				




The TPU database literally shows no difference. Perhaps HWUB's test just had more RX 6400-friendly games. And your own source only shows like a 30% difference; no need to exaggerate it.



AusWolf said:


> Every fairly modern CPU can do it at relatively low usage. Needing an AV-1 decode capable GPU for watching 4K Youtube on a PC with a sh*t CPU is a need that you invented.


Because people haven't been doing this with Core 2 Duo machines. You're talking nonsense. If you want to watch Netflix, YouTube, Vimeo or Twitch, you need AV-1. One day your sh*t i7 won't cut it anymore, and you'll want cheap shit that can decode it, not crap like the RX 6400.



AusWolf said:


> If you need that feature, fair enough. I just don't think many people do.


You would be surprised by how common that need is. 




AusWolf said:


> Not really. A 3 years older Core i3 4330 can beat it with ease.


lol userbenchmark, they still use the Athlon 64 logo for AM3+ chips. Even if UserBenchmark were hypothetically any good, a 26% difference is definitely not beating it with ease. More like beating it, but still quite close.



AusWolf said:


> I wasn't comparing it to Atoms. I was merely stating that you're holding the Athlon X4 in too high regard.


It's just an example, pretty close to what an old computer can offer performance-wise. Ignore it if you will.



AusWolf said:


> What are you talking about? The 710 can do MPEG-1, MPEG-2, VC-1 and H.264 decode. The 6400 / 6500 XT can do all that, *and* H.265. The only thing it can't decode is AV-1 - neither can the 710 by the way.


So what? My point was that back in the day, the lowest of the low cards had all the decoding/encoding capabilities for a low price. I'm not saying the GT 710 is superior to your RX 6400. What a way to miss the entire point.




AusWolf said:


> I see what you mean, and I kind of agree with the sentiment.
> It's only that the 6400 and 6500 XT aren't the only symptoms of this "wild capitalism". I don't think there's any GPU on the market that sells for a reasonable price at the moment. I mean, what other options do you have? A 1030 for £100? A 1050 Ti for £150-200? Or a used 1650 from ebay for £250-300? Hell no! Ampere doesn't even exist in this range. I'd much rather buy something new with warranty for this price.


The RX 6600 is the only "value" out there. It has full decoding/encoding and is good value, but yeah, 400 EUR. On the lower end there are 1050 Ti, T600 and T1000 deals; not great, but better than the RX 6400. To be fair, the RX 6600 is the only card today that doesn't suck. 




AusWolf said:


> Because 1. it actually costs more than the 6400, 2. not being a consumer card, it's a bit problematic to find one, 3. the 6400 isn't slower than the 1050 Ti, 4. I have no intention to overclock. It's going into a home theatre PC with a Ryzen 3 in it, so all its job will be to put a 4K 60 Hz image on my TV through HDMI. Being capable of some light gaming is only a plus.


I dunno about you, but I can buy one at any store. If you can find one, then there's no argument for the RX 6400; the T600 is better.


----------



## thelawnet (Apr 28, 2022)

The red spirit said:


> TPU database literally showing no difference. Perhaps HWUB's test just had more RX 6400 friendly games. And your own source only has like 30% difference, no need to exaggerate it.



?

The TPU review shows a 3% difference to a 1650.









AMD Radeon RX 6400 Review (www.techpowerup.com)
				




The 1650 is 35% faster than a 1050 Ti.









MSI GeForce GTX 1650 Gaming X 4 GB Review (www.techpowerup.com)
				




My source shows the 1050 Ti at 43 fps and the RX 6400 at 60 fps.

60/43 = 1.3953, i.e. 40% faster.

The database you refer to is just done in a spreadsheet or something; it's not game-tested.

It seems clear that the 1650, when it came out, was substantially faster than the 1050 Ti. The 6400 has the performance (or 2-3 fps less) of a 1650.

I don't think this is that complicated.

On PCIe 3.0, sure, it gets closer to the 1050 Ti, especially in 1% lows. But nowhere near on PCIe 4.0.


----------



## Valantar (Apr 28, 2022)

The red spirit said:


> Whatever then, I know terminology well. No need to be annoying.


Your arguments show something else. And if you find people countering your arguments "annoying", that's on you, not me.


The red spirit said:


> Becasue that used to be how laptop cards were connected. As far as I know there was MXM 1060, so it's quite recent stuff


MXM had its heyday around 2010, with ever-dwindling use since then. Also, please note that the 1060 launched 6 years ago. That's quite a while - and even back then it was exceedingly rare.


The red spirit said:


> and because Alibaba specials used MXM to PCIe adapters to basically create what RX 6400 and RX 6500 XT are. Conceptually the same shit.


But that's the thing: they aren't whatsoever. Not even close. I'm arguing against you saying this because it is utter and complete nonsense.

Again with the spoon-feeding:
- On the one hand you have a GPU die, on a package. It has specifications and requirements around which a PCB is made to hold its required ancillary components - RAM, VRMs, display pipeline processing, etc., and through which traces are run to components and connectors.
- On the other hand, you have a pre-made mobile GPU board, complete with RAM, VRMs, ancillary components, and an interface (MXM, i.e. mobile PCIe x8 with some extras), to which an adapter board is made taking the display signals sent over the interface and making them into outputs, providing a power input, and running traces from the MXM slot to a PCIe slot.

Now, which of these is the most apt comparison to the RX 6500 XT and 6400? The former. Because it is _exactly what they are_. Conceptually they have _nothing_ in common with an MXM adapter + MXM GPU. Nothing at all. The made-for-mobile part here is _the design of the silicon itself_, which is mainly a way of explaining and understanding its strange design tradeoffs. That doesn't mean AMD didn't plan desktop implementations from the get-go - they clearly did, as those have come to market much faster than the mobile variants. But the design has some tell-tale signs of being _mainly_ intended for pairing with AMD's 6000-series APUs in a laptop.


The red spirit said:


> This is 6400, a watt or two difference is nothing.


I'm not talking about this specific implementation, I'm talking about the rationale behind designing the Navi 24 die as weirdly as they did. I'm not saying it makes sense, I'm saying that this is the likely explanation.


The red spirit said:


> And that shows that you don't save anything. Those connectors are in milliwatt range.


Connectors, PHYs, controllers, and more, it all adds up. And in a strictly made-for-purpose design, such cuts are sometimes made. Again: not saying it makes sense in a larger context, only trying to understand the rationales surrounding this design.


The red spirit said:


> lol I can saw off that connector, dude, there's no real design there to be done. You can literally change it in Paint and it will work.


.... do you think PCBs are designed in MSPaint?

And yes, obviously you can cut it off. Depending on your skills and tools, that might be faster than doing this properly in the design phase, especially as this will then entail additional QC.


The red spirit said:


> And they are either much slower, old (therefore you lack encoding/decoding) or more expensive. Only 1050 Ti is truly competitive, it's the only alternative. And I wrote that CPU recording is too heavy on *CPU*. Do you even read what I write? You might as well not reply if you don't.


... so now there aren't that many options? Because, to refresh your memory, this whole branch of the discussion started out with someone paraphrasing you, saying "You basically said that almost every modern graphics card except for the 6400 and 6500 XT has some kind of video encoder in it, which is true. How many more options do you need?" So ... is this bad because it's much worse than everything else, or is it bad because it's failing to provide a much-needed, missing function in this market segment? It's one or the other, as those options are mutually exclusive.


The red spirit said:


> Which is literally what makes mobile chip. You have no point to make, other than troll and nitpick.


No. On a silicon level, for PC hardware, there is no such thing as a "mobile chip". You can say it's a mobile-first design, you can say it prioritizes mobile-friendly features, but _mobile chip_ literally doesn't work, as it excludes non-mobile use cases. There are no PC silicon manufacturers (CPUs, APUs, GPUs) who don't implement their silicon in _both_ mobile _and_ desktop versions. The _same_ silicon. Navi 24 is a mobile-first design, quite clearly. It is not a "mobile chip". And this isn't nit-picking, as the difference between the two is meaningful and entirely undermines your arguments.


The red spirit said:


> Oh well.


That's the most sensible response to being proven wrong I've seen from you in a while. Thanks!


The red spirit said:


> Weren't their production restarted? Even 730 made a legendary come back.


Production might well have been restarted, but what you're quoting already accounts for that.


The red spirit said:


> Even more reasons not to make e-waste like RX 6400 then.


No, AMD should just have made a better die design. It's sad, really - this die has massive potential, but it's flawed in some really bad ways.


The red spirit said:


> TPU database literally showing no difference. Perhaps HWUB's test just had more RX 6400 friendly games. And your own source only has like 30% difference, no need to exaggerate it.


There's something wrong with the TPU DB entry there, as it doesn't align with TPU's own benchmark results - they show it ~matching the 1650, yet the database shows the 1650 as 22% faster. The entry is wrong, somehow.

Also, it's kind of ...odd? to reference the database result rather than actual test results when you're in the comments thread for that review. Just saying.


----------



## AusWolf (Apr 28, 2022)

The red spirit said:


> Because people haven't been doing this with Core 2 Duo machines. You are talking nonsense. Wanna watch Netflix, Youtube, Vimeo, Twitch, you need AV-1. One day your sh*t i7 won't cut it anymore and you will want cheap shit that can decode, not crap like RX 6400.


Which sh*t i7 (I've got two)? I've just watched a 4K Youtube video on my 4765T and integrated graphics (HD 4600) without any lag or dropped frames. It's a 9-year-old 35 Watt CPU, mind you. If this thing can do it, then anything above an i5 2500 can do it, and you can literally pick one up for pennies.



The red spirit said:


> lol userbenchmark, they still use Athlon 64 logo for AM3+ chips. Even if hypothetically Userbench was any good, then 26% difference is definitely not beating it with ease. More like beating, but still quite close.


That "quite close" i3 is a *lower-mid-tier* CPU that was released *3 years before* the Athlon X4 and still beats it.



The red spirit said:


> It's just an example, pretty close to what old computer can have performance wise. Ignore it if you will.


What performance? FM2 was never competitive against contemporary Intel CPUs.



The red spirit said:


> So what? My point was about how back in the day lowest of the low card had all capabilities of decoding/encoding for low price. I'm not saying that GT 710 is superior to your RX 6400. What a way to miss the entire point.


I didn't miss your point. I only stated that it's irrelevant.



The red spirit said:


> RX 6600, the only "value" out there. Has full decoding/encoding, is good value, but yeah 400 EUR. On lower end, there are 1050 Ti, T600, T1000 deals, not great, but better than RX 6400. To be fair, RX 6600 is the only card today that doesn't suck.


I agree about the 6600, but I don't agree about the nvidia ones you mentioned, as they are too expensive here in the UK. The 1050 Ti is selling at about the same price as the 6400, which is quite frankly a ripoff. Quadros aren't only expensive, but hard to find, too.



The red spirit said:


> I dunno about you, but I can buy it at any store. If you can find it, then there's no argument for RX 6400. T600 is better.


I saw it at one store on a "bought to order" basis a while ago, but it was just shy of £250 or so. The 6400 for £160 is a much better deal.


----------



## Overvoltage (Apr 29, 2022)

Interesting discussion. I agree with people that the lack of a video encoder is bad. I also don't like the limitation of PCIe 4.0 x4. Prices are not the best.

But I can't agree that the card is bad just because it's slow. It is not far from a 1050, a 1060, or a 470, but why should that be bad?

Fast cards overtake past generations; performance cards catch up with past generations, etc. If someone is not satisfied with this performance, they can pay more.

Why $1500 cards if they are the same as $150 cards? The less you pay, the more you sacrifice. First, we refuse 4K, then 2K, then ULTRA, and so we gradually reach low 1080p at the lowest price and minimum consumption.

You can say that we have already seen it on 1060, yes, but people who buy 1060 and 6400 do not overlap. If you ever bought a 1060, your goal is at least 3050 and 3060. Anyone who buys a 6400 as a gaming solution used to buy GT 620, 730 and similar cards. If there are still such players, they have something to replace, the performance gain will be large.


----------



## Trov (Apr 29, 2022)

The red spirit said:


> Why not Quadro T600? It's like GTX 1650 LE, but low profile and costs less than 6400. And since 6400 is slower than 1050 Ti


Huh?
Every part of that statement is false.

-T600 at least in the US is selling for $250+, which is nearly $100 more than RX 6400
-T2000 is equivalent to GTX 1650. T1000 is downclocked from that, and T600 has fewer cores than that. T600 performs nearly the same as 1050 Ti.
-Where are you seeing it's slower than 1050 Ti? Just a few pages before yours shows that RX 6400 is faster than 1650 in most cases.

It is only slower than 1050 Ti in two specific games (Doom Eternal and Rainbow 6 Siege) and only on PCIe 3.0. It nearly matches or beats the 1650 in all other cases.

You also compare it to 1050 Ti price...are you comparing specifically to 1050 Ti Low Profile editions? In the US also those are going for $250 or more.


My XFX RX 6400 arrives tomorrow. I will install it in my Lenovo Thinkstation P330 Tiny and compare it to a Quadro T600 and Quadro T1000. It is a PCIe 3.0 x8 system with an 8th gen i5. I will also be comparing thermals and fan noise at various loads (both my T600 and T1000 happily reach 83C throttling temp). The TechPowerUp review mentions great thermals and silent fans, but their card had a dual slot heatsink on it, so I will evaluate the single-slot cooling version. Lastly I will also give Linux performance a try, at least for the strange outlier of Doom Eternal running notably worse on RX 6400. I am curious if it is a driver issue, AMD famously has very good Linux drivers so it is worth a shot.

I also have an Optiplex 7010 SFF which has an Ivy Bridge i7 in it, to test its suitability as a "throw it in an old off-lease $50 PC" option, and see how it stacks up on that outdated machine vs those two Quadros and a GTX 1650 Low Profile.


I had assembled the Optiplex + 1650 at a time when it was possible to get both for a total of less than $200, and used it as an HTPC that could also play games. Years later, for Low Profile cards, there is still no good successor to the Low Profile 1650 that is in a viable price range (so, RTX A2000 for $800+ is out of the question).
Therefore I got the P330 Tiny as it was a good deal; with the Turing Quadros, single-slot low-profile GPU tech looked like it had advanced enough for that small 1L PC to reach and even beat that Optiplex in performance. Unfortunately I had bought the Quadros just a week before the RX 6400 launched for $100 less than the T600 and $250 less than the T1000. If the RX 6400 can keep its thermals in check I think it will handily prove to be a better option than both of those. Since this machine's 8th Gen Intel has QuickSync I don't think I am going to mind the missing encoders even for HTPC purposes.


----------



## thelawnet (Apr 29, 2022)

Overvoltage said:


> You can say that we have already seen it on 1060, yes, but people who buy 1060 and 6400 do not overlap. If you ever bought a 1060, your goal is at least 3050 and 3060. Anyone who buys a 6400 as a gaming solution used to buy GT 620, 730 and similar cards. If there are still such players, they have something to replace, the performance gain will be large.


Utter nonsense.

I bought a 7870 for about £120, and then a 1060 6GB for £200.

I today have a i5-12500 without gpu.

The 620 had 1/16 of the shaders of a 680, and when the 730 came out it wasn't even announced, just dumped on the market.









Best Video Cards: June 2014 (www.anandtech.com)
				




Nobody went out to buy a 730, they went out to buy one of the many affordable cards actually good for gaming. When I bought my 7870 I remember considering the GTX 750 Ti for a long time, but the 7870 was the same price and much better. There were any number of bad cards which you would be stupid to buy for gaming, and then from about £100 you just needed to decide how much GPU you wanted.

I would buy a 6400 today, if it was cheaper. Why not a 6600/3050 ? It costs £300, so no way.

At 40% of the performance of a 6600, give me a 6400 for £120 and I would buy it right now.

The 6400 is absolutely a mainstream product, in that the market now where I am is: 1030 - big price gap - 6400, 1050 ti, 6500 xt, 1650 - big price gap - 3050, 6600

Since the price for a good GPU is much higher now, that means that there is room in the market for more SKUs. Whereas the 730 release was just a plop out and it was irrelevant, the 6400 is priced between 1050 ti and 1060 release prices.

That makes it a gaming product in the same sense as the 1050 Ti. We can't say "you bought a 1060, you must buy a 3060" if the 3060 now costs twice what the 1060 did then. I thought the 1060 was too much to spend on a GPU, and I certainly would not have bought a 1070, but it made sense to replace the 7870 with something 2x+ faster, so I bought the 1060 6GB for more money than I wanted to spend because it delivered good value; I stretched my budget.

Today I would not pay more for less. Paying '70 money for '60 performance is crazy to me. I would much rather pay for lower performance than burn $$$ in an inflated market. At least if you buy a cheap card you can game, and you can upgrade later when the market is good. Pay $$$ now for 2-year-old tech, and then in 6 months, oops, your 3080 priced like a 1090 is now a 4060? Ugh, no.


----------



## Overvoltage (Apr 29, 2022)

> I bought a 7870 for about £120, and then a 1060 6GB for £200.


The choice may be based on the desired performance. The 1060 was cheaper than the 3060, but it may be more important for you to stay with the same level of comfort by paying more money. Therefore, I evaluate the 6400 as a low-end product, and not just for its price.

ps

In my country I ordered the 6400 for ~$265; there are no other prices.


----------



## thelawnet (Apr 29, 2022)

i consider gaming to be a luxury/discretionary purchase. Therefore I'm buying on value for money.

Right now on my IGP I could play games such as AOE4 at 1080p low. While I can't play AAA games such as Total War, I  could game 24/7 playing indie games or whatever.

A low end GPU would get me into AAA games at low settings, so that is a big value in that sense, as opposed to not being able to play them. More money = more fps/detail, which is not as important maybe as being able to play at all, but if you have 60 fps for £100 or 120 fps for £150, it makes sense to go for the more expensive card even if you have a 60 Hz monitor, because that card is going to need replacement less soon.

But if you have 60 fps for £200 or 120fps for £375, due to unusual market conditions, the 60 fps card makes more sense honestly, because in a generation or two you should get twice the performance for half the money, so buy the cheapest card now rather than buying on fps/$ in an inflated market
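The trade-off above can be put in simple price-per-frame terms. This is just a toy sketch of the post's own numbers; the function name is mine, not anything from the thread.

```python
# Price per fps for the two market scenarios described in the post:
# a "normal" market (£100 for 60 fps vs £150 for 120 fps) and an
# "inflated" one (£200 for 60 fps vs £375 for 120 fps).

def price_per_fps(price_gbp: float, fps: float) -> float:
    """Cost of each frame per second of performance, in GBP."""
    return price_gbp / fps

# Normal market: the faster card is clearly better value per frame.
normal = (price_per_fps(100, 60), price_per_fps(150, 120))
print(normal)  # roughly (1.67, 1.25) GBP per fps

# Inflated market: the value gap nearly closes, so the cheap card
# plus a later upgrade becomes the more rational plan.
inflated = (price_per_fps(200, 60), price_per_fps(375, 120))
print(inflated)  # roughly (3.33, 3.13) GBP per fps
```

In the normal scenario the £150 card costs noticeably less per frame; in the inflated one the two are nearly even, which is the post's argument for buying cheap now and upgrading when prices normalize.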


----------



## Valantar (Apr 29, 2022)

thelawnet said:


> Utter nonsense.
> 
> I bought a 7870 for about £120, and then a 1060 6GB for £200.
> 
> ...


GPU pricing in the early 2010s was absolutely nuts though - I bought my old HD 6950 in 2011 for NOK 1356. In that case, that was also helped by a weak USD/strong Norwegian Krone, as the conversion rate at the time was around 1 USD = 5.7 NOK. At the moment it's around 1 USD = 9.4 NOK, which naturally drives up prices on imported goods with USD pricing to match. Still, accounting for inflation (1356 2011-NOK = 1731 2022-NOK) _and_ exchange developments (9.4/5.7=1.65), that card would theoretically have cost me 2856 NOK today. That's a lot more, of course - but it was AMD's third highest GPU at the time (behind the 6970 and 6990), and had launched less than a year before (even if the 7000-series would be launching a few months later). For comparison, today 6400s and 1650s start at ~2200 NOK, 6500 XTs at ~2400, and 1650 Supers at ~2600. So, for the money that in 2011 got me a third-highest tier GPU, I could today get a slightly fancy third-from-the-bottom GPU instead. Which would, 11 years later, deliver slightly more than twice the performance according to TPU's database.

The big question is what, exactly, changed since then. Because that HD 6950 had an MSRP of $299 - NOK 1700 at the time, not accounting for the 25% Norwegian VAT (which would have made it 2130 NOK). So what exactly made it possible for me to buy it less than a year after launch for just 64% of MSRP? Clearly PCB costs are much higher today, with the signalling requirements of GDDR6 (vs. 5) and PCIe 4.0 (vs. 2.0). The massive wattages, fluctuating clocks and need for very tightly controlled voltages of current GPUs also push VRM costs quite a bit higher - even if the 6950 was a 200W GPU (yet mine had a 2-slot, single fan HIS cooler, and was pretty quiet!).

So ... were margins just _far_ better back then? Was production that much simpler, bringing down costs? I have no idea, to be honest. Even just four years later, my next upgrade, a Fury X, was NOK 7000 (at the time, 1USD = 8.2 NOK). Of course that was a stupidly priced flagship, and there were far better value cards further down the stack. Still, something clearly started happening in the mid-2010s bringing GPU prices much, much higher in my part of the world. Or did we all just start paying big bucks for the most expensive SKUs, incentivizing GPU makers to push average product pricing ever higher?
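The inflation-and-exchange-rate adjustment above can be written out explicitly. This is only a sketch of the post's own arithmetic (1356 2011-NOK ≈ 1731 2022-NOK, USD/NOK moving from 5.7 to 9.4); the function and parameter names are mine.

```python
# Estimate what a 2011 NOK purchase price corresponds to in 2022,
# scaling by domestic inflation and by the USD/NOK exchange-rate shift
# (relevant because GPU prices are set in USD and imported).

def adjusted_price(price_2011_nok: float,
                   inflation_factor: float = 1731 / 1356,
                   fx_2011: float = 5.7,
                   fx_2022: float = 9.4) -> float:
    """Inflation- and exchange-rate-adjusted 2022 price in NOK."""
    return price_2011_nok * inflation_factor * (fx_2022 / fx_2011)

print(adjusted_price(1356))  # roughly 2855 NOK, matching the post's ~2856
```

The small difference from the post's 2856 figure comes from the post rounding the exchange-rate ratio to 1.65 before multiplying.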


----------



## thelawnet (Apr 29, 2022)

afaict what happened is:

1. silicon shortage (mobile phone prices are up this year, by around 20%, due to cost of chipsets)
2. miners inflated the price of everything because gpu = coins

When I bought my 1060 it was late in the cycle, so it was cheaper than earlier.

That would always tend to happen.

Now a new 1050 Ti costs more than it did five years ago, but the 1030 is much less inflated.

To me while there is a small effect from silicon shortage, everything else is just miners.


----------



## Valantar (Apr 29, 2022)

thelawnet said:


> afaict what happened is:
> 
> 1. silicon shortage (mobile phone prices are up this year, by around 20%, due to cost of chipsets)
> 2. miners inflated the price of everything because gpu = coins
> ...


That's definitely a part of it, but it doesn't explain how prices seemed to take off in the early 2010s - the first mining boom was around 2017-2018, after all, and the silicon shortage started in 2020. As with everything, it's likely a complex combination of factors. In addition to the two you point out, there's also
- increasing production costs due to higher complexity
- BOM cost increases due to more/more advanced components required
- increasing material costs in recent years
- shareholders demanding higher margins
- higher demand as PC gaming has gone from a relatively niche hobby to global phenomenon in the past 10-ish years
- the former development bringing in a lot more wealthy people, driving up demand for expensive hardware
- the fact that Nvidia was essentially uncontested in the upper ranges of performance for a few years
- the _massive_ mindshare advantage Nvidia has, allowing them to push up prices due to the perception of being better (and AMD then following suit with pricing to not appear as a cheap option and because shareholders like that)
- AIB partners compensating for years of near-zero margins
- wild swings in the competitiveness of consoles - from outdated pre-2013, to low-end post-2013, to suddenly high-end post 2020

And probably a lot more. Still, I really, really hope we can see a return to a more sensible GPU pricing and market segmentation model some time soon. The current order just isn't tenable at all, and IMO PC gaming will shrink back to a niche quite rapidly if this continues.


----------



## The red spirit (Apr 29, 2022)

Valantar said:


> Your arguments show something else. And if you find people countering your arguments "annoying", that's on you, not me.


That's on you for nitpicking, when I clearly am aware of the correct terminology.



Valantar said:


> MXM had its heyday around 2010, with ever-dwindling use since then. Also, please note that the 1060 launched 6 years ago. That's quite a while - and even back then it was exceedingly rare.


I didn't realize that 1060 was that old.



Valantar said:


> I'm not talking about this specific implementation, I'm talking about the rationale behind designing the Navi 24 die as weirdly as they did. I'm not saying it makes sense, I'm saying that this is the likely explanation.


That wasn't even close to clear, but okay.



Valantar said:


> .... do you think PCBs are designed in MSPaint?


You can basically take the designed 6400 PCB and erase the extra x16 part; that's how little design it would take to make it.




Valantar said:


> ... so now there aren't that many options? Because, to refresh your memory, this whole branch of the discussion started out with someone paraphrasing you, saying "You basically said that almost every modern graphics card except for the 6400 and 6500 XT has some kind of video encoder in it, which is true. How many more options do you need?" So ... is this bad because it's much worse than everything else, or is it bad because it's failing to provide a much-needed, missing function in this market segment? It's one or the other, as those options are mutually exclusive.


Yeah, basically there are many cards like the RX 6900 XT or RTX 3090 that mere mortals can't afford. There are only a handful of cards that are cheap enough and can decode/encode. And there always used to be plenty of such cards that could (for their time).




Valantar said:


> They might have become, but what you're quoting already accounts for that.


I remember 1050 Tis being sold out, when 1650 came out and now 1050 Tis are back. I think they restarted production. 



Valantar said:


> There's something wrong with the TPU DB entry there, as it doesn't align with TPU's own benchmark results - they show it ~matching the 1650, yet the database shows the 1650 as 22% faster. The entry is wrong, somehow.


 



Valantar said:


> Also, it's kind of ...odd? to reference the database result rather than actual test results when you're in the comments thread for that review. Just saying.


I already showed another source, 6400 matched 1050 Ti. Test was done on gen 4 connector too.



AusWolf said:


> Which sh*t i7 (I've got two)? I've just watched a 4K Youtube video on my 4765T and integrated graphics (HD 4600) without any lag or dropped frame. It's a 9 year-old 35 Watt CPU, mind you. if this thing can do it, then anything above an i5 2500 can do it, and you can literally pick one up for pennies.


I would want to see you say the same if you had a Core 2 Quad, which is still okay for everyday tasks but might not cut it anymore for YT.




AusWolf said:


> That "quite close" i3 is a *lower-mid-tier* CPU that was released *3 years before* the Athlon X4 and still beats it.


You also quoted the i3-4330, not the 3220. Whatever; they only had a 20% difference, and the base architecture of the Athlon is Sandy Bridge-era old.




AusWolf said:


> What performance? FM2 was never competitive against contemporary Intel CPUs.


It certainly was competitive in value comparisons. It beat Pentiums clearly and later could rival i3s in performance per money. They were really reasonable for low-end system buyers who still wanted to run games. You could get an Athlon X4 for what a Pentium was going for; that was a no-brainer.




AusWolf said:


> I didn't miss your point. I only stated that it's irrelevant.


But it's not. Weren't you the one, who bitched about getting older core GT 710 just for decoding? In that case, you should know better about why this stuff is important.




AusWolf said:


> I agree about the 6600, but I don't agree about the nvidia ones you mentioned, as they are too expensive here in the UK. The 1050 Ti is selling at about the same price as the 6400, which is quite frankly a ripoff. Quadros aren't only expensive, but hard to find, too.


I overestimated UK then. Used RX 580s are still quite "cheap" here.



Trov said:


> -T600 at least in the US is selling for $250+, which is nearly $100 more than RX 6400


In Lithuania the RX 6400 goes for 195 EUR, the 6500 XT for 214 EUR, and the T600 for ~200 EUR. Obviously the 6500 XT would be a "good" deal for horsepower only, but if you want a full-featured GPU, then it's the T600 or the 1050 Ti. The only things you can get for 150 EUR are an RX 550 or a GTX 750 Ti.



Trov said:


> -T2000 is equivlent to GTX 1650. T1000 is downclocked from that, and T600 has fewer cores from that. T600 performs nearly the same as 1050 Ti.


I only said that it's like a 1650 *LE*; in practice it is 1050 Ti performance, maybe a bit faster than a 1050 Ti. It may be a better deal than the 6400 due to having all features and similar performance for the same price. BTW, the T2000 is so rare that the TPU database doesn't have it, only the mobile T2000. It practically doesn't exist; I haven't been able to find drivers either, so it doesn't exist at all. The T1000 does exist, and it's faster than the GTX 1650. It literally has the same GPU (a bit downclocked) but faster VRAM (which compensates for the slower GPU). The T1000 is closer to the GTX 1650 GDDR6 at the hardware level.



Trov said:


> -Where are you seeing it's slower than 1050 Ti? Just a few pages before yours shows that RX 6400 is faster than 1650 in most cases.


Already quoted where. It seems it may occasionally perform poorly in some games, which skews results, or it has some driver issues. Perhaps the 6400 is a bit faster, but it's worthless without decoding/encoding, ReLive, or OC, and with only an x4 slot. On gen 3 systems it's 1050 Ti performance for sure, which makes it unappealing to basically anyone with a somewhat recent system or older. For gen 2 systems it's straight-up poo.


----------



## Valantar (Apr 29, 2022)

The red spirit said:


> On you to nitpick on things, when I clearly am aware of correct terminology.


I'm not nitpicking, I'm pointing out that your core arguments - such as this somehow being similar in concept to an MXM adapter - make no sense, and seem to stem from a conflation of different understandings or uses of the same term.

Btw, does this mean you've realized how misguided that argument was, finally?


The red spirit said:


> That wasn't even close to clear, but okay.


It really ought to be. There is no other logical direction for the discussion to go in - once it's established that the bottlenecks can't be put down to implementation issues - as the die is fully enabled in the relevant aspects (PCIe, VRAM) and thus not bottlenecked by the implementation, the question then shifts to "why is the die design unbalanced in this way". This is exactly the confusion I've been trying to point out that you've been having.


The red spirit said:


> You can basically take designed 6400 PCB and erase extra 16x part, that's how much design it would take to make it.


Again: it's not difficult. But it takes more time than not doing so, and as it would be a new connector design, it would require new QC and likely trigger time-consuming further steps to ensure that nothing has been messed up by the changes to the connector (clearances, insertion force, friction against the connector, etc.). Thus, just leaving the copy-pasted x16 connector in there is the lowest effort, cheapest path forward.


The red spirit said:


> Yeah, basically there are many cards like RX 6900 XT or RTX 3090, that mere mortals can't afford. There are only handful of cards that are cheap enough and can decode/encode. And there always used to be plenty of such cards that can (for their time).


That's quite a different stance than the comment that was originally responded to, triggering that whole mess. But at least we can agree on that much. I still don't think it's a _major_ issue, but it's a bit of a let-down still. To me it just highlights that AMD designed this chip first and foremost for being paired with a 6000-series APU - but at least most of the same functionality can be had from any other APU or non-F Intel CPU.


The red spirit said:


> I remember 1050 Tis being sold out, when 1650 came out and now 1050 Tis are back. I think they restarted production.


It's entirely possible they did, but that would still involve essentially zero R&D, as all of that has already been done. All that would be needed would be some basic recalibration of various steps of the lithography process. Everything else is already done. And, of course, it's entirely possible that Nvidia had a stockpile of GP107 dice sitting in a warehouse somewhere. I'm not saying they did, but it wouldn't be all that surprising - there are always surplus stocks. Plus, GP107 is used for the MX350 GPU as well, which points towards some continuation of production, at least intermittently - that launched in 2020, after all.


The red spirit said:


>


?


The red spirit said:


> I already showed another source, 6400 matched 1050 Ti. Test was done on gen 4 connector too.


Doesn't matter, as your original source is contradicted by the review that data is supposedly based on. There is something wrong with the TPU database entry. Period. TPU's review shows the 6400 being within 1-3% of the GTX 1650, which is 25% faster than the 1050 Ti according to the same database. That would make the 6400 ~23% faster than the 1050 Ti from TPU's testing, despite the 1050 Ti not being present in the 6400 review itself. Hardware Unboxed's review showed the 6400 being 28% faster (or the 1050 Ti being 40% slower, depending on where you set your baseline for comparison). I trust both TPU and HWUB far more than your rando Youtuber, sorry.

Edit: taking a look at that video, I _definitely_ trust TPU and HWUB more. First off, there's also zero information about test methodology, test scenes, etc., so there's no way of ensuring the data is reliable. But there's also something else going on here, as ... well, the results are just too even. Like, every game is within a few FPS of each other. That just doesn't happen across a selection of games like that. No two GPUs from different manufacturers, across different architectures, scale that linearly unless they are bottlenecked elsewhere. This makes me think that either they failed to benchmark this properly, had some sort of system limitation, or the data was fudged somehow.

For example, their HZD results show the 6400 at 40/18fps avg/min vs. the 1050 Ti's 43/21 - at "1080p High". HWUB in HZD, at 1080p "Favor Quality" (which is likely what they mean by "High" in your video) get 49/41 (avg/1% low) for the 6400, and 38/33 for the 1050 Ti.  They also test CoD:WZ, which ... well, it's a multiplayer title, you literally can't make a repeatable test sequence in a game like that. Unless the data is gathered over literal hours of gameplay, that data should be discarded, as there's no way of knowing if it's representative.

Their AC Valhalla results are also pretty dubious: at 1080p high they place the 1050 Ti at 33/16 fps and the 6400 at 36/18 fps. HWUB's ACV testing, at 1080p medium, has the 1050 Ti at 39/29 fps and the 6400 at 54/41 fps. Lows and 1% lows can't really be compared (lows can be random outliers; that's why you use 1% or 0.1% lows), but the difference here just doesn't make sense. They're not using the same settings or test scene, but even accounting for that massive amount of variability, it doesn't make sense for the 6400's performance to increase by 50% going from high to medium while the 1050 Ti's performance only increases by 18% from the same change.

So, to summarize: we have two reviews from reputable sources showing significant performance gains, and one unknown one with some significant methodological and data-related red flags showing them to be equal. You're welcome to disagree (and I'd love to hear your arguments for doing so if that's the case!), but I choose to trust the two reviews from trustworthy sources.
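The chained comparison earlier in this post can be checked with one line of arithmetic. This is just a sketch using the ratios stated above (6400 within ~1-3% of the GTX 1650; 1650 ~25% faster than the 1050 Ti per the TPU database); the function name is mine.

```python
# Chain two relative-performance ratios to get the implied 6400 vs 1050 Ti gap.

def chained_speedup(rx6400_vs_1650: float, gtx1650_vs_1050ti: float) -> float:
    """Implied performance advantage of the 6400 over the 1050 Ti, in percent."""
    return (rx6400_vs_1650 * gtx1650_vs_1050ti - 1) * 100

# Taking the 6400 at ~2% slower than the 1650, and the 1650 at 25% faster
# than the 1050 Ti:
print(chained_speedup(0.98, 1.25))  # ≈ 22.5, i.e. the ~23% figure above
```

This is why a database entry showing the 1650 as 22% faster than the 6400 can't be reconciled with the review data it's supposedly built from.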


----------



## AusWolf (Apr 29, 2022)

The red spirit said:


> I would want to see you say the same if you had Core 2 Quad, which is still oaky for everyday tasks, but might not cut it anymore for YT.


Why would I have a Core 2 Quad when I can pick up a Sandy or Ivy i5 from any second-hand store or electronics recycling centre for literally a couple of pounds? Building my i7 4765T system cost me about £100. The whole system! And you don't need an i7 for Youtube, I just wanted the top of the line 35 W chip because why not.

The point is: there are countless options for building a low-spec PC for nearly free to watch Youtube. There is absolutely zero need to stick to a 10+ year-old / extremely weak CPU.



The red spirit said:


> But it's not. Weren't you the one, who bitched about getting older core GT 710 just for decoding? In that case, you should know better about why this stuff is important.


Bitching? What the...?  I said that the 6400 can decode all the video formats that the 710 can, and H.265 on top of that. Does this look like bitching to you?



The red spirit said:


> You also quoted i3-4330, not 3220. Whatever, they only had 20% difference and base architecture of Athlon is Sandy bridge era old.


Correct. The 4330 was released in 2013, the X4 845 in 2016. I don't give a damn about what architecture it is. The only thing that matters is that it's newer and slower and was selling within a similar price range.


----------



## Valantar (Apr 29, 2022)

AusWolf said:


> Building my i7 4765T system cost me about £100. The whole system! And you don't need an i7 for Youtube, I just wanted the top of the line 35 W chip because why not.


Hate to break it to you, but there's an i7-47*8*5T as well 

(I'm also curious how you got that build so cheap, given how high motherboard prices are these days - but this is getting rather OT now)


----------



## AusWolf (Apr 29, 2022)

I was expecting my Sapphire Pulse 6400 to arrive tomorrow, but thanks to a mistake with the courier, I got it today!  I've already done some testing with the 1050 Ti, now I'm waiting for a few games to install on the HTPC with the 6400 in it. I'll post results as soon as I have them (hopefully within an hour or two).



Valantar said:


> Hate to break it to you, but there's an i7-47*8*5T as well
> 
> (I'm also curious how you got that build so cheap, given how high motherboard prices are these days - but this is getting rather OT now)


I know, but I couldn't find one at the time.  As for the build, the motherboard was actually a gift from a friend. The CPU was quite expensive compared to what other contemporary parts sell for, making up around half of the budget. The other half was a cheap mini-ITX case, an even cheaper SFX PSU (just a basic 300 W unit) and 8 GB of second-hand RAM. I admit that I had a SATA SSD lying around, so that was "free" too.

If you don't count gifts and spare parts, I'd still say that a build like this can be had for £150-200 (depending on your storage needs) if you know where to find the parts... and it's still overkill for Youtube.


----------



## catulitechup (Apr 29, 2022)

In good news, the RX 6400 has arrived at Micro Center.









PowerColor AMD Radeon RX 6400 ITX Single Fan 4GB GDDR6 PCIe 4.0 Graphics Card (www.microcenter.com)
				




Now we just wait and see what happens, because right now the RX 6400 is around 10-15 USD cheaper than the GTX 1050 Ti and around 40 USD less than the GTX 1650.


----------



## ThrashZone (Apr 29, 2022)

Hi,
Reminds me of the last TPU poll on the home page:

"Would you buy a 4 GB GPU in 2022?" (or maybe it was 2021).
Believe the responses were mostly no, and maybe hell no.


----------



## The red spirit (Apr 30, 2022)

Valantar said:


> I'm not nitpicking, I'm pointing out that your core arguments - such as this somehow being similar in concept to an MXM adapter - make no sense, and seem to stem from a conflation of different understandings or uses of the same term.
> 
> Btw, does this mean you've realized how misguided that argument was, finally?


No, it's you who doesn't understand. 



Valantar said:


> Again: it's not difficult. But it takes more time than not doing so, and as it would be a new connector design, it would require new QC and likely trigger time-consuming further steps to ensure that nothing has been messed up by the changes to the connector (clearances, insertion force, friction against the connector, etc.). Thus, just leaving the copy-pasted x16 connector in there is the lowest effort, cheapest path forward.


Lowest effort? Maybe, but unlikely the cheapest. There's nothing to do but shrink the connector, remove the traces to it, and remove the caps if the traces have them. That's all. It probably takes like 5 minutes in professional fab software.



Valantar said:


> That's quite a different stance than the comment that was originally responded to, triggering that whole mess. But at least we can agree on that much. I still don't think it's a _major_ issue, but it's a bit of a let-down still. To me it just highlights that AMD designed this chip first and foremost for being paired with a 6000-series APU - but at least most of the same functionality can be had from any other APU or non-F Intel CPU.


When you consider the cut-down decoding/encoding, the x4 PCIe link, no ReLive, no overclocking, and the downclocking, this whole deal just stinks. It also alienates some previously interested audiences, like people with old systems who just want to watch videos without frame skipping. 



Valantar said:


> It's entirely possible they did, but that would still involve essentially zero R&D, as all of that has already been done. All that would be needed would be some basic recalibration of various steps of the lithography process. Everything else is already done. And, of course, it's entirely possible that Nvidia had a stockpile of GP107 dice sitting in a warehouse somewhere. I'm not saying they did, but it wouldn't be all that surprising - there are always surplus stocks. Plus, GP107 is used for the MX350 GPU as well, which points towards some continuation of production, at least intermittently - that launched in 2020, after all.


Hypothetically, it would be possible to make a new card on an older node that uses DDR4 or DDR5 memory with a wide bus (meaning more lower-capacity chips instead of a few faster, higher-capacity ones). And to reduce R&D expenses, it could be a relaunched GTX 1060 or GTX 1070 GPU with reduced clock speeds, so that it's more efficient. If you look at how much cheaper less-than-cutting-edge nodes are, you realize that a bigger die on an older node can cost about the same as a smaller one on a new node, or less. That would be an ideal cheap card to relaunch as a GPU-shortage special. 




Valantar said:


> You're welcome to disagree (and I'd love to hear your arguments for doing so if that's the case!), but I choose to trust the two reviews from trustworthy sources.


I won't; it's just unusual that so many sources have rather different data. There might still be some driver-related issues leading to inconsistent performance between different systems.



AusWolf said:


> Why would I have a Core 2 Quad when I can pick up a Sandy or Ivy i5 from any second-hand store or electronics recycling centre for literally a couple of pounds? Building my i7 4765T system cost me about £100. The whole system! And you don't need an i7 for Youtube, I just wanted the top of the line 35 W chip because why not.
> 
> The point is: there are countless options for building a low-spec PC for nearly free to watch Youtube. There is absolutely zero need to stick to a 10+ year-old / extremely weak CPU.


Real-life example: my school had shitty Pentium D machines that were quite woeful. The IT guy put in some GT 610s so that they could run videos for the time being. It worked, all for like 40 EUR per machine instead of 100.

Hypothetical example: you get a free computer with a BD drive. You want to play movies, but the GPU is too old to have decoding and the CPU isn't cutting it. So you get the lowest-end card that decodes.



AusWolf said:


> Bitching? What the...?  I said that the 6400 can decode all the video formats that the 710 can, and H.265 on top of that. Does this look like bitching to you?


Yet another completely misunderstood sentence. I said that you got GT 710 for playing back videos, but when it arrived it turned out to be older variant (Fermi?) and AFAIK it didn't decode or something. You bitched about that and later got GT 1030, which was good enough.

Doesn't RX 6400 look exactly like the same trap?



AusWolf said:


> Correct. The 4330 was released in 2013, the X4 845 in 2016. I don't give a damn about what architecture it is. The only thing that matters is that it's newer and slower and was selling within a similar price range.


And I don't give a damn about the Athlon either, but I use it as an *EXAMPLE* of something you may find in an older machine (performance-wise only). And it does play 1080p fine, but you want more, so you get a decoding-capable card. 

BTW, that old i3 was not comparable to the Athlon. The Athlon was going for 80-90 EUR, while the i3 went for 130 EUR, and that's without adjusting for inflation. That was a very significant price increase for not really a lot more. And since I bought it late, for an already existing system that wasn't intended for anything in particular, yeah, it ended up clearly unplanned and not the most economically efficient. Due to buying parts late, I got the Athlon for ~40 EUR, which was not a great deal, but an okay one. And if you want to compare performance, don't use UserBenchmark. It's a meme page of utter incompetence, huge bias, and shilling... Using Cinebench alone would be a better reference for CPU performance.

I was able to find data about their Cinebench scores. In the R15 multicore test, a stock Athlon X4 845 got 320 points; the i3 3220 got 294. You would think the Athlon is simply better, but it's not: it shares two FPUs between four integer cores. As long as all cores are utilized, it has better multicore performance, but if not, it's quite weak. It also has no L3 cache at all. The i3, on the other hand, has two faster FPU/integer cores and uses HT to improve multicore performance. So it's faster if you don't need 4 threads or can't utilize them all, but once you do, it performs worse overall than the Athlon. HT can also sometimes reduce performance if scheduling in the software is poor.

The lack of L3 cache means the Athlon can stutter badly in games: there's a higher chance of experiencing the downsides of a cache miss, and if your code doesn't fit in L2, you're going to have a shitty time. The i3 also tends to stutter, but because software made to use more than 2 cores has to be squeezed onto two, which leads to lag spikes if the code is complicated enough. HT isn't efficient either; it's code-dependent and can only help if the cores aren't already saturated (i.e., if their execution units and pipeline width aren't fully utilized). So their performance is very hard to predict and can be quite inconsistent. Still, the i3 is better with older code, FPU-heavy code, or code that is difficult to multithread, while the Athlon is better at more recent code with mostly integer operations. The Athlon may also do a lot better thanks to its more modern instruction set support; the i3 may not be able to launch some software because the newer instructions aren't available. The good ole K10 architecture literally became obsolete not through lack of performance, but mostly through how quickly software started to require newer versions of SSE or FMA.

In terms of efficiency, it's a tie, because the Athlon X4 is not an old desktop Bulldozer part: it's a Carrizo (Excavator) part taken from laptops with jacked-up clock speeds, and it retained quite a lot of the efficiency of its origins, while the i3 was just a decent part from the get-go. The Athlon can technically be overclocked, but its multiplier is locked, so you need to raise the base clock without changing the other speeds (PCIe bus, RAM, HyperTransport, RAM controller, legacy PCI, and anything else you might have). In practice that's too damn hard, and FM2+ boards don't have good clock separation like in socket 754 days. The i3 is not tweakable at all. Still, there are people who managed to reach 4.9 GHz with the Athlon, so YMMV. The Athlon also has a lot of room for undervolting: you can reduce the voltage by 0.2-0.3 V.


----------



## AusWolf (Apr 30, 2022)

Okay fellas, I've got my very first first-hand experiences with the 6400. 

Here's the *comparison with the 1050 Ti* that I promised:

_1. The main system_: Asus Tuf B560M-Wifi, Core i7 11700 with 200 W power limits, 32 GB 3200 MHz RAM (dual channel of course), and the 1050 Ti (Palit KalmX).
_2. The "ultimate bottleneck" system_: Asus Tuf A520M, Ryzen 3 3100, 16 GB 3200 MHz RAM in single channel  for the ultimate bottleneck, and the 6400 (Sapphire Pulse) in PCI-e 3.0 (because of the A520).

Here are my results:

3DMark TimeSpy graphics score:
1050 Ti: 2253,
6400: 3681.
No comment here. What works, works.

Superposition 1080p Extreme:
1050 Ti: 1279,
6400: 2012.
No comment here, either.

Cyberpunk 2077 (1080, low/med/high/ultra) average FPS:
1050 Ti: 29.54 / 24.03 / 18.57 / 14.89,
6400: 44.98 / 35.10 / 26.18 / 16.33.
Comment: The minimum FPS consistently followed the same trend as the average with the 1050 Ti, while it always stayed around 13-15 FPS with the 6400, regardless of the settings used. It only shows in the data, though - it did not affect the smoothness of the game. Another weirdness with the second system: the main menu kept freezing for a quarter of a second every half minute or so. I have played the game on the same CPU without issues before, so I don't know why it happened now. It might be the single-channel RAM, it might be the PCI-e 3.0 bus. Who knows? All in all, the game is enjoyable with the 6400 at low and medium settings, while the 1050 Ti struggles.

Red Dead Redemption 2 (1080, no advanced setting chosen, "overall quality" slider all the way to the left / all the way to the right):
1050 Ti: 50.87 / 29.69,
6400: 90.73 / 88.99.
Comment: It seems to me that the game chooses different "minimum" and "maximum" settings for you depending on your hardware, which is weird (though I'm not sure). Apart from this, both systems ran the game well on minimum, while the 1050 Ti struggled with the maximum setting (which may or may not be a higher graphical detail level than what the 6400 ran at because of the aforementioned weirdness).

Metro Exodus (1080, low/normal/high preset):
1050 Ti: 54.88 / 30.75 / 22.87,
6400: 20.34 / 19.19 / 15.74.
Comment: Something completely killed the 6400 system in this game. The benchmark looked unplayable on all settings. Again, I don't know if it's the single channel RAM, or the PCI-e 3.0. I might test this again later with a dual channel RAM config.
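For reference, here's a quick back-of-envelope sketch of what the x4 link actually provides, assuming the standard PCIe per-lane transfer rates and 128b/130b encoding (these are generic spec numbers, not measurements from my systems):

```python
# Rough one-direction PCIe bandwidth estimate.
# Assumes standard per-lane rates (8 GT/s for gen 3, 16 GT/s for gen 4)
# and 128b/130b line encoding; real-world throughput is a bit lower.
def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    raw_gt_s = {3: 8.0, 4: 16.0}[gen]   # giga-transfers per second, per lane
    efficiency = 128 / 130              # 128b/130b encoding overhead
    return raw_gt_s * efficiency * lanes / 8  # bits -> bytes

print(f"PCIe 4.0 x4:  {pcie_bandwidth_gbs(4, 4):.2f} GB/s")   # ~7.88
print(f"PCIe 3.0 x4:  {pcie_bandwidth_gbs(3, 4):.2f} GB/s")   # ~3.94
print(f"PCIe 3.0 x16: {pcie_bandwidth_gbs(3, 16):.2f} GB/s")  # ~15.75
```

So on a PCIe 3.0 board, the 6400's x4 link has half the bandwidth it was designed around, and a quarter of what a regular x16 card gets on the same board, which would fit the Metro result.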

Mass Effect Andromeda (1080, max settings):
1050 Ti: between 20-30 FPS,
6400: between 25-50 FPS.
Comment: Both cards ran the game okay-ish, though it was a touch more enjoyable with the 6400. Asset loading seems to be an issue with both, as the FPS dropped when I was entering a new area or engaging in a conversation with an NPC. It looks like a VRAM limit to me, although the 6400 was a tiny bit more affected, which again might be due to the single-channel RAM or the PCI-e 3.0 bus. I would still take the 6400 over the 1050 Ti for this game without a second thought.

Conclusion: If you're looking for a 6400 for gaming with an old system, the main thing you need to ask yourself is what games you want to play. It can give you a decent experience, or a totally shitty one depending on the answer.

Other stuff: This low profile Sapphire 6400 has a thick, flat heatpipe running through the length of the cooler, so the GPU runs as cool as the MSi one in the review while holding a relatively steady 2310 MHz, despite being a lot smaller. It also has idle fan stop, which is triggered at 50 °C. It will not be a gaming card, so all in all, I'm happy with the purchase. 

Edit: The real test of the card will be its intended use: a 4K 60 Hz home theatre experience.


----------



## catulitechup (Apr 30, 2022)

AusWolf said:


> Edit: The real test of the card will be its intended use: a 4K 60 Hz home theatre experience.


with av1 hardware acceleration ?


----------



## AusWolf (Apr 30, 2022)

catulitechup said:


> with av1 hardware acceleration ?


I don't need it. The Ryzen 3 can handle it, no problem. 



The red spirit said:


> Real-life example: my school had shitty Pentium D machines that were quite woeful. The IT guy put in some GT 610s so that they could run videos for the time being. It worked, all for like 40 EUR per machine instead of 100.
> 
> Hypothetical example: you get a free computer with a BD drive. You want to play movies, but the GPU is too old to have decoding and the CPU isn't cutting it. So you get the lowest-end card that decodes.


Real-life example: a school not wanting to spend money, or not having the funding, is an entirely different matter altogether. I could mention countless examples of basic necessities not being available at my workplace, even though I work for a company with multimillion-GBP revenue.

Hypothetical example: That computer is not suited for the task you want to use it for. It still doesn't mean that you can't pick up a Sandy i5 for a couple of quid and use that with the BD drive.



The red spirit said:


> Yet another completely misunderstood sentence. I said that you got GT 710 for playing back videos, but when it arrived it turned out to be older variant (Fermi?) and AFAIK it didn't decode or something. You bitched about that and later got GT 1030, which was good enough.
> 
> Doesn't RX 6400 look exactly like the same trap?


Then what did I "bitch" about now? (I still hate that word.) I'm starting to lose track of all the things you think I said.



The red spirit said:


> And I don't give a damn about the Athlon either, but I use it as an *EXAMPLE* of something you may find in an older machine (performance-wise only). And it does play 1080p fine, but you want more, so you get a decoding-capable card.


... or you upgrade your CPU which is a lot cheaper, considering current GPU prices.



The red spirit said:


> BTW, that old i3 was not comparable to the Athlon. The Athlon was going for 80-90 EUR, while the i3 went for 130 EUR, and that's without adjusting for inflation. That was a very significant price increase for not really a lot more. And since I bought it late, for an already existing system that wasn't intended for anything in particular, yeah, it ended up clearly unplanned and not the most economically efficient. Due to buying parts late, I got the Athlon for ~40 EUR, which was not a great deal, but an okay one. And if you want to compare performance, don't use UserBenchmark. It's a meme page of utter incompetence, huge bias, and shilling... Using Cinebench alone would be a better reference for CPU performance.


Don't forget about the timescale. That i3 was selling for 130 EUR *3 years before* the Athlon even appeared. In 2016, Intel was already pushing Skylake (or Kaby Lake? I'm not sure), so the 4330 was already used market territory by then.


----------



## The red spirit (Apr 30, 2022)

AusWolf said:


> Real-life example: a school not wanting to spend money, or not having the funding, is an entirely different matter altogether. I could mention countless examples of basic necessities not being available at my workplace, even though I work for a company with multimillion-GBP revenue.


I thought the UK would know better than that, as it has quite a reputation for being a rich country with good overall welfare and century-old capitalism. I thought Lithuania might not provide that, because basically all of its commerce is only 30 years old, and being a rather "poor" (by EU standards) country with some sketchy bosses, it would be too interested in cost cutting. But it seems to be a universal thing worldwide, maybe with some exceptions like Germany, the Netherlands, or Denmark. 




AusWolf said:


> Hypothetical example: That computer is not suited for the task you want to use it for. It still doesn't mean that you can't pick up a Sandy i5 for a couple of quid and use that with the BD drive.


Machines like that can totally play 1080p videos just fine in VLC, just not on Youtube. They don't lack decoding capabilities altogether, but they lack support for newer codecs, and Youtube's Javascript is too much for them. My Athlon 64 3200+ on socket 754 indeed could play back a fricking BD movie just fine, but Youtube at 360p is a complete no-go. Say what you want, but that's stupid. But to be fair, I used an ATi X800 Pro AGP card (the PCIe version used a newer core) with the Athlon, so it may have helped there. As far as I know, that old card has the ATi AVIVO decoder and can decode H264, VC-1, WMV-9, and MPEG-2. BD is probably high-bitrate H264. That old card even had streaming capabilities. Sure, it was at a potato 320x240 at 15 fps, but it was accelerated, and it was in 2004. In that one aspect it beats the RX 6400.




AusWolf said:


> Then what did I "bitch" about now? (I still hate that word.) I'm starting to lose track of all the things you think I said.


You complained about GT 710



AusWolf said:


> ... or you upgrade your CPU which is a lot cheaper, considering current GPU prices.


I told you it was an example, I don't need your advice here. Maybe a CPU is cheaper to upgrade, but depending on your needs, a GT 1030 may be enough. At least it has VP9 and HEVC decoding. VP9 is common on YT; H264 is used for <720p videos only. For Netflix, however, you may need an AV1-capable card (AV1 support was experimental, I think). In that case, it's worth upgrading to an Alder Lake Pentium or i3. 
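To make the decode talk concrete, here's a hypothetical little helper summarizing the support matrix as described in this thread (the sets below are my reading of the posts above, not an official spec):

```python
# Hardware video decode support as discussed in this thread (assumed,
# not an authoritative matrix): the GT 710 (Kepler) handles up to H264,
# the GT 1030 and RX 6400 add HEVC and VP9, and none of them decode AV1.
HW_DECODE = {
    "GT 710 (Kepler)": {"mpeg2", "vc1", "h264"},
    "GT 1030":         {"mpeg2", "vc1", "h264", "hevc", "vp9"},
    "RX 6400":         {"mpeg2", "vc1", "h264", "hevc", "vp9"},
}

def can_hw_decode(gpu: str, codec: str) -> bool:
    """True if the given GPU is listed as hardware-decoding the codec."""
    return codec.lower() in HW_DECODE.get(gpu, set())

print(can_hw_decode("GT 1030", "VP9"))   # True  -> fine for YouTube
print(can_hw_decode("RX 6400", "AV1"))   # False -> AV1 falls back to the CPU
```

So for YT and most streaming, the 1030 and the 6400 are equivalent; AV1 is the only place where both would lean on the CPU.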




AusWolf said:


> Don't forget about the timescale. That i3 was selling for 130 EUR *3 years before* the Athlon even appeared. In 2016, Intel was already pushing Skylake (or Kaby Lake? I'm not sure), so the 4330 was already used market territory by then.


You could have bought an Athlon X4 740 back then too; it was only 70 USD at launch. I don't see the value in the i3. For the i3's price, you could have got yourself an FX 8320, and even the FX 6300 was cheaper. Now tell me about the i3's superior value. And way before the i3, you could have bought an FM1-based Athlon II X4 6xx-series chip for less, with basically the same performance as the later Bulldozer FM2 chips. The i3 was awfully overpriced. Even today, an i3 doesn't cost 130 USD.


----------



## Nuke Dukem (Apr 30, 2022)

W1zzard said:


> The review has been updated with GT1030 numbers .. I hate you guys .. testing at these FPS rates is such a shitshow







Oh my, I think I can physically feel the pain...

And speaking of pain -- I glanced over the last few pages of this thread and all I could think of was this eternally relevant comic:








AusWolf said:


> *comparison with the 1050 Ti*
> 
> _1. The main system_: Core i7 11700, 32 GB 3200 MHz RAM (dual channel), 1050 Ti
> _2. The "ultimate bottleneck" system_: Ryzen 3 3100, 16 GB 3200 MHz RAM in single channel, the 6400, PCI-e 3.0



I don't get it. Why would you use two wildly different PCs to compare two different cards? Why cripple one card and not the other? Why not just use the faster PC, bench both cards in succession to get comparable data, then change the PCIe link speed, re-bench, and investigate the PCIe 3.0 losses?


----------



## AusWolf (May 1, 2022)

Nuke Dukem said:


> And speaking of pain -- I glanced over the last few pages of this thread and all I could think of was this eternally relevant comic:


I love this image! I'll take it if you don't mind. 



Nuke Dukem said:


> I don't get it. Why would you use two wildly different PCs to compare two different cards? Why cripple one card and not the other? Why not just use the faster PC, bench both cards in succession to get comparable data, then change the PCIe link speed, re-bench, and investigate the PCIe 3.0 losses?


There was an argument earlier in this thread. One forum member said that the 1050 Ti is a much better value than the 6400, even at the same price. I think it was also said that the 1050 Ti was faster. I wanted to test this claim by giving the 1050 Ti a head start in a faster PC and crippling the 6400 as much as possible.

You might call my other reason laziness: my main PC has a 2070 in it, so I didn't have to reinstall the driver to test the 1050 Ti, and the 6400 is meant for my secondary rig anyway. 



The red spirit said:


> I thought the UK would know better than that, as it has quite a reputation for being a rich country with good overall welfare and century-old capitalism. I thought Lithuania might not provide that, because basically all of its commerce is only 30 years old, and being a rather "poor" (by EU standards) country with some sketchy bosses, it would be too interested in cost cutting. But it seems to be a universal thing worldwide, maybe with some exceptions like Germany, the Netherlands, or Denmark.


It's not a UK vs Lithuania thing. It's more like a "companies don't get rich by spending money" kind of thing. But let's stop the off-topic here. 



The red spirit said:


> Machines like that can totally play 1080p videos just fine in VLC, just not on Youtube. They don't lack decoding capabilities altogether, but they lack support for newer codecs, and Youtube's Javascript is too much for them. My Athlon 64 3200+ on socket 754 indeed could play back a fricking BD movie just fine, but Youtube at 360p is a complete no-go.


That's exactly why you need a new PC if you want to play Youtube in HD. My point stands that just because you have an Athlon 64 or Athlon X4 or whatever at hand, it doesn't mean that it's fit for _your_ purpose. It _was_ fine, but it's not anymore. Nothing stops you from walking into a computer recycling centre and paying 5-10 EUR for a Sandy i5. It's even (a lot) cheaper than buying a new graphics card, and it won't only give you HD videos, but the whole system will be faster as well. But if you'd rather pay a hundred EUR just for a fecking video codec, be my guest.



The red spirit said:


> Say what you want, but that's stupid.


Is that your point? Closing ears, closing eyes, _"lalala, I'm not listening because you're stupid"_? Very mature indeed.



The red spirit said:


> You complained about GT 710


I did not. I said that the 6400's decoder supports all the formats that the 710's does, plus H.265. This is not a complaint. This is a fact.



The red spirit said:


> I told you it was an example, I don't need your advice here.


It's a counter-example, not advice.



The red spirit said:


> Maybe a CPU is cheaper to upgrade, but depending on your needs, a GT 1030 may be enough. At least it has VP9 and HEVC decoding. VP9 is common on YT; H264 is used for <720p videos only. For Netflix, however, you may need an AV1-capable card (AV1 support was experimental, I think). In that case, it's worth upgrading to an Alder Lake Pentium or i3.


The 6400 has VP9 and HEVC decode as well. I agree that the 1030 is enough for 99% of HTPC uses, as long as you don't need HDMI 2.1. It's only that the 1030 costs around £100, which is a terrible deal, imo.

As for an Alder Lake upgrade, I played with the thought before buying the 6400, but a motherboard and a CPU would have cost me around £150-200, and then I would have ended up with a slower CPU than what I have now. An i3 with the same performance as my Ryzen 3 would have cost me £250 for the whole system. Not worth it. As for people coming from older systems, they might also need to buy some DDR4 RAM which makes it even more expensive. If they have some DDR3 laying around, picking up an old i5 from a second-hand store for a couple of quid is still a lot better deal.



The red spirit said:


> But to be fair, I used an ATi X800 Pro AGP card (the PCIe version used a newer core) with the Athlon, so it may have helped there. As far as I know, that old card has the ATi AVIVO decoder and can decode H264, VC-1, WMV-9, and MPEG-2. BD is probably high-bitrate H264. That old card even had streaming capabilities. Sure, it was at a potato 320x240 at 15 fps, but it was accelerated, and it was in 2004. In that one aspect it beats the RX 6400.


Are you seriously comparing the 6400 to a top-tier card from almost 20 years ago? Jesus...


----------



## Nuke Dukem (May 1, 2022)

AusWolf said:


> I love this image! I'll take it if you don't mind.



Go for it. It's xkcd that should take the credit anyway. Just try not to do as shown. 



AusWolf said:


> There was an argument earlier in this forum.
> You might call my other reason laziness.



Fair enough.



AusWolf said:


> walking into a computer recycling centre and paying 5-10 EUR for a Sandy i5



I feel obliged to say this, because I saw you bring up this argument a few times:
_You should feel blessed that you have that option!_ I just checked my local used ads (in Bulgaria) and the good old 2500K sells for anything from $30 to $50. This is in stark contrast to our purchasing power, which is half of yours or even lower. I hope you can understand that even old used stuff isn't exactly cheap for us. The used market doesn't make any sense sometimes, but it is what it is.

Be grateful for what you have and don't just assume everyone has your (good) options.

I'm just leaving this here as a thinking point. If anyone agrees to disagree, he is free to do so. I will not engage on the topic any further, as you guys already have a pretty heated debate.


----------



## AusWolf (May 1, 2022)

Nuke Dukem said:


> I feel obliged to say this, because I saw you bring up this argument a few times:
> _You should feel blessed that you have that option!_ I just checked my local used ads (in Bulgaria) and the good old 2500K sells for anything from $30 to $50. This is in stark contrast to our purchasing power, which is half of yours or even lower. I hope you can understand that even old used stuff isn't exactly cheap for us. The used market doesn't make any sense sometimes, but it is what it is.
> 
> Be grateful for what you have and don't just assume everyone has your (good) options.
> ...


That's sad.  Does this make buying a new graphics card a better deal, though?


----------



## Valantar (May 1, 2022)

Nuke Dukem said:


> I don't get it. Why would you use two wildly different PCs to compare two different cards? Why cripple one card and not the other? Why not just use the faster PC, bench both cards in succession to get comparable data, then change the PCIe link speed, re-bench, and investigate the PCIe 3.0 losses?


I like this comparison for exactly that reason: people are up in arms about the 6400/6500 XT being bottlenecked, especially on slower PCIe standards, so this should represent a kind of worst-case-scenario comparison. And it still (mostly) soundly beats the 1050 Ti, despite people here claiming that it _matches_ it on PCIe 4.0. Also, we have plenty of sources for 6400 testing on PCIe 4.0 with a fast, current-gen CPU - including the review this thread is commenting on. We hardly need more of those, while what we _do_ need is testing on older, slower platforms where the 6400 is more relevant as a potential upgrade. Of course, one could then argue that the 1050 Ti should also be tested on that. But does it ultimately matter? Judging by the test results, no. There's no way the 1050 Ti will perform better on a slower system, so the point is moot.


----------



## Nuke Dukem (May 1, 2022)

AusWolf said:


> That's sad.  Does this make buying a new graphics card a better deal, though?



OK, market analysis time. I'll keep it to the 6400 so as not to wander off topic:

The most popular model I could find is the ASRock Challenger ITX. Prices are in the $200-260 range for it (standard 20% VAT included), most often around ~$210. Every shop I checked out claimed to have stock.

Saw several offers for the ASUS Dual going from $265 to $285.

Just for laughs I also got some offers from one of our bigger IT shops and they have:
ASUS Dual for $285
MSI Aero ITX for $390
They also claim they have no stock...

Then, for even bigger laughs, we have our biggest e-tailer with these wonderful offers:
ASRock Challenger ITX for $350
MSI Aero ITX for $435
ASUS Dual for $460
None of these are in stock, with delivery quoted at 8 days.

Now, the ~$210 price doesn't sound half bad once you find out that your average 1050 Ti model retails for around $230 or more. The 6400 is probably the most sensible card you can buy new over here right now.

The average net wage here is 1/3 to 1/4 of what you have over there. Unless you have disposable income - yes, it's a f****** expensive hobby.



Valantar said:


> I like this comparison for exactly that reason



Alright, I gotcha.


----------



## TheinsanegamerN (May 1, 2022)

The total lockdown of the RX 6400's clocks is terrible. There's no reason for it: the 6500 XT is unlocked, as are the older RX 550 and 560. AMD is really trying to cover up just how horribly gimped the 6500 is.

Now, if Nvidia did this with the 3050 or GT 1030, there'd be screeching from the rooftops. When AMD does it, crickets...


AusWolf said:


> What point? This is a 1650-equivalent card with the same amount of VRAM and the same VRAM bandwidth. The 1650 does it with 128-bit GDDR5, the 6400 with 64-bit GDDR6. Only that low profile 1650s go for £250-300 on ebay while the 6400 costs £160 new. What's not to like?


Well, off the top of my head:

1. It's 3% SLOWER than the 1650 at 1080p. The 1650 launched 3 years ago at the same $159 price point. That is technically negative price/perf movement over 3 years, an absolute embarrassment. RDNA 2 is much more capable than this.
2. It's still limited to 4 GB of VRAM; even at 1080p this is an issue for my much slower RX 560 in certain games. Paying 1650 prices for a GPU with the same limitation is just silly.
3. It's got the same PCIe limitation as the 6500 XT. Many of us are running PCIe gen 3 systems where the 6400 will lose additional performance, widening the gap with the now 3-year-old 1650.

Yeah, I know the 1650 is more expensive on eBay. I don't care. MSRP on the 6400 is WAY too high. What should have been released was a 6500 XT at this $159 price point, with 6 GB on a 96-bit bus and PCIe x8 connectivity, clocked more reasonably so as to stay within 75 W. The 6500 XT can easily do it; it's clocked way out of its efficiency curve trying to maintain more performance.



Valantar said:


> That's definitely a part of it, but it doesn't explain how prices seemed to take off in the early 2010s - the first mining boom was around 2017-2018, after all, and the silicon shortage started in 2020. As with everything, it's likely a complex combination of factors. In addition to the two you point out, there's also
> - increasing production costs due to higher complexity
> - BOM cost increases due to more/more advanced components required
> - increasing material costs in recent years
> ...


Slight correction: the first mining boom was in 2013-2014. That's what drove prices of the R9 290X through the roof; at the time, GCN was far and away superior to Kepler at mining. That, coupled with the Titan's success, is what helped push prices as high as they are now.


----------



## AusWolf (May 1, 2022)

Nuke Dukem said:


> OK, market analysis time. I'll keep it about the 6400 as to not wander off topic:
> 
> The most popular model I could find is the AsRock Challenger ITX. Prices are in the $200-260 range for it (standard 20% VAT included), most often around ~$210. Every shop I checked out claimed they have stock.
> 
> ...


With those prices, I get where you're coming from. My Sapphire 6400 was 160 GBP (not USD, but whatevs), but I wouldn't have paid more than this.



TheinsanegamerN said:


> 1. it's 3% SLOWER then the 1650 at 1080p. The 1650 launched 3 years ago for the same $159 price point. That is technically negative price/perf movement over 3 years, an absolute embarrassment. rDNA 2 is much more capable then this.
> 2. it's still limited to 4GB of VRAM, even at 1080p this is an issue for my much slower RX 560 in certian games. Paying 1650 prices for a GPU with the same limitation is just silly.
> 3. it's got the same PCIe limitation as the 6500xt. Many of us are running pci gen 3 systems where the 6400 will lose additional performance, widening the gap with the now 3 year old 1650.


1. It may be 3% slower, but it is also around 50-80% cheaper than low profile 1650 models at the moment. Launch prices don't matter, as we all know that every single graphics card's price has shot up to the moon in the last 2 years (the 1030 costs 90-100 GBP right now which is ridiculous). Why should the 6400 be an exception from this?
2. You're paying original 1650 release prices for a current 1650 level card. Don't forget about the previous point. If the 6400 had been released in 2020, it probably would have been an 80 USD card. But it's 2022 now.
3. That I agree with. The review shows how much of a limitation it is, and I also made a post about it earlier when I paired it with a Ryzen 3 CPU and single-channel RAM in a PCI-e 3.0 system. Based on this, everybody can decide for themselves whether a 6400 is worth it _for them_ or not.
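For what it's worth, the percentages depend on which way you frame them. A quick sanity check using the £160 and £250-300 figures from this thread (assumed street prices, nothing official):

```python
# Figures quoted in this thread (assumed street prices, GBP; not official):
rx6400_price = 160                   # new Sapphire RX 6400
lp1650_low, lp1650_high = 250, 300   # used low-profile GTX 1650 on eBay

# "The 1650 costs X% more" framing:
extra_low = lp1650_low / rx6400_price - 1     # ~0.56 -> ~56% more
extra_high = lp1650_high / rx6400_price - 1   # ~0.88 -> ~88% more

# "The 6400 is X% cheaper" framing:
save_vs_low = 1 - rx6400_price / lp1650_low    # ~0.36 -> ~36% cheaper
save_vs_high = 1 - rx6400_price / lp1650_high  # ~0.47 -> ~47% cheaper
```

So "the 6400 is ~36-47% cheaper" and "the 1650 costs ~56-88% more" describe the same price gap.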


----------



## Valantar (May 1, 2022)

TheinsanegamerN said:


> The total lockdown of the 6400xt clocks is terrible. There's no reason for that, the 6500xt is unlocked, as is the older rx 550 and 560. AMD is really trying to cover up just how horribly gimped the 6500 is.
> 
> Now, if nvidia did this with the 3050 or GT 1030 there'd be screeching from the rooftops. When AMD does it, crickets....


Crickets? There's been quite a bit of complaining about it that I've seen around here. IMO, not allowing OCing on <75W cards is perfectly fine - it saves idiots from burning out the power traces on their motherboards.

As for how this is an attempt at covering up how the 6500 XT is "gimped", you'll have to explain that one. Gimped = artificially held back somehow. The 6400 is the same GPU with 4 fewer CUs, lower clocks, and a locked-down power limit. Best case, if it weren't locked down, it would perform close to the 6500 XT but a bit worse due to the fewer CUs - which would be exactly like every other cut-down GPU. The 6600 isn't a demonstration of the 6600 XT being gimped, after all - it's just a lower-tier SKU. A demonstration of how the 6500 XT is gimped would require showing how it would perform with a wider PCIe bus or memory bus, which the 6400 can't do, and unless you have access to some very exotic AMD engineering samples, that's not something that can reasonably be done. Beyond that, the benchmarks speak for themselves.

Of course, the sane scenario would be this selling at ~$120, with the 6500 XT at ~$150 for those wanting an unlocked, higher power card. (Though my first move if I had a 6500 XT would be to underclock it, not overclock it!)


TheinsanegamerN said:


> Slight correction, the first mining boom was in 2013-2014. That's what drove prices of the R9 290x through the roof, at the time GCN was far and away superior to kepler at mining. That coupled with the titan's success is what helped pushed prices as high as they are now.


Sure, first _major_ mining boom then. That early one might have pushed R9 290X prices up, but overall it didn't affect the GPU market in a major way.


AusWolf said:


> 1. It may be 3% slower, but it is also around 50-80% cheaper than low profile 1650 models at the moment. Launch prices don't matter, as we all know that every single graphics card's price has shot up to the moon in the last 2 years (the 1030 costs 90-100 GBP right now which is ridiculous). Why should the 6400 be an exception from this?
> 2. You're paying original 1650 release prices for a current 1650 level card. Don't forget about the previous point. If the 6400 had been released in 2020, it probably would have been an 80 USD card. But it's 2022 now.
> 3. That I agree with. How much of a limitation it is, it's in the review, and I also made a post about it earlier when I paired it with a Ryzen 3 CPU and single channel RAM in a PCI-e 3.0 system. Based on this, everybody can decide for themselves whether a 6400 is worth it _for them_ or not.


This is the kind of perspective we need. While it's important to stay grounded in what would be _reasonable_ GPU prices in a sensible world - which is definitely not the current situation - that can't be the benchmark for real-world comparisons of things on sale today (unless the overall advice is "don't buy anything unless you have to, pricing is insane", which I mostly agree with, but that "unless you have to" is a rather wide-open door for complications). So we need to account not only for the performance and MSRPs of products currently on the market, but also for the realities of current pricing - however terrible it may be.


----------



## TheinsanegamerN (May 2, 2022)

Valantar said:


> Crickets? There's been quite a bit of complaining about it that I've seen around here. IMO, not allowing OCing on <75W cards  is perfectly fine - saves idiots from burning out the power traces on their motherboards.


When nvidia used that exact same reasoning for locking down mobile Maxwell overclocking, tech websites were lighting torches. There may be some rumbling from some users here, but it's far from the fiery reactions nvidia prompts for the exact same behavior. Hell, you make my point by finding reasons why it's OK when AMD does it. You could use the same justification to prevent idiots from frying their 300-watt 3090s.


Valantar said:


> As for how this is an attempt at covering up how the 6500 XT is "gimped", you'll have to explain that one. Gimped = artificially held back somehow. The 6400 is the same GPU, with 4 fewer CUs and lower clocks, plus a locked down power limit. Best case scenario for that if it wasn't locked down would be it performing close to the 6500 XT, but a bit worse due to fewer CUs. Which ... would be exactly as with every other cut-down GPU? The 6600 isn't a demonstration of the 6600 XT being gimped after all - it's just a lower tier SKU. A demonstration of how the 6500 XT is gimped would be somehow showing how it would perform with a wider PCIe bus or memory bus, which ... well, the 6400 can't do that, and unless you have access to some very exotic AMD engineering samples, that's not something that can be done reasonably. Beyond that, benchmarks speak for themselves.


Limiting the 6500 to x4 PCIe lanes artificially holds back the GPU's performance on PCIe 3.0 systems. 4GB of VRAM on a 64-bit bus starves the GPU of both bandwidth and capacity. This can be demonstrated in games like DOOM Eternal.

That's gimping, pure and simple. The 6500 is held back artificially by decisions AMD made to cut costs. Given how close the 6400 and 6500 can be, being able to OC the 6400 would expose how held back the 6500 XT is, especially on older or budget systems.
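To put rough numbers on the x4 link - a back-of-the-envelope sketch using the standard PCIe per-lane rates with 128b/130b encoding; peak figures, not measured throughput:

```python
# Effective PCIe link bandwidth (GB/s):
# transfer rate (GT/s) * 128b/130b encoding efficiency / 8 bits per byte * lanes
def pcie_link_gbps(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * (128 / 130) / 8 * lanes

gen3_x4 = pcie_link_gbps(8.0, 4)     # ~3.9 GB/s  (6400/6500 XT on a gen-3 board)
gen4_x4 = pcie_link_gbps(16.0, 4)    # ~7.9 GB/s  (same cards on gen 4)
gen3_x16 = pcie_link_gbps(8.0, 16)   # ~15.8 GB/s (what an x16 card like the 1650 gets on gen 3)
```

On a gen-3 board the 6400's link bandwidth is roughly a quarter of what an x16 card gets, which is why the penalty shows up hardest in games that spill past the 4GB of VRAM.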


Valantar said:


> Sure, first _major_ mining boom them. That early one might have pushed R9 290X prices up, but overall didn't affect the GPU market in a major way


I'm sure increasing MSRP by over 50% and rendering the AMD GCN lineup of cards unavailable to gamers for almost an entire year, resulting in the same "will the market ever go back to normal" posts you see right now, counts as affecting the market "in a major way". 


Valantar said:


> This is the kind of perspective we need. While it's also important to stay grounded in what would be _reasonable _GPU prices in a sensible world - which is definitely not the current situation - for real-world comparisons of things for sale today, that can't be the benchmark (unless the overall advice is "don't buy anything unless you have to, pricing is insane", which I mostly agree with, but that "unless you have to" is a rather wide-open door for complications). So we need to not only account for performance and MSRPs of products currently on the market, but also the realities of current pricing - however terrible it may be.





AusWolf said:


> 1. It may be 3% slower, but it is also around 50-80% cheaper than low profile 1650 models at the moment. Launch prices don't matter, as we all know that every single graphics card's price has shot up to the moon in the last 2 years (the 1030 costs 90-100 GBP right now which is ridiculous). Why should the 6400 be an exception from this?
> 2. You're paying original 1650 release prices for a current 1650 level card. Don't forget about the previous point. If the 6400 had been released in 2020, it probably would have been an 80 USD card. But it's 2022 now.
> 3. That I agree with. How much of a limitation it is, it's in the review, and I also made a post about it earlier when I paired it with a Ryzen 3 CPU and single channel RAM in a PCI-e 3.0 system. Based on this, everybody can decide for themselves whether a 6400 is worth it _for them_ or not.


1. Unless you are willing to give in to the scalpers and pay their bloated prices for used products, the 6400 moving the price/perf needle backwards absolutely matters. Saying "well, other cards have gone up in price too!" is pure whataboutism, and doesn't change the fact that the 6400 offers worse performance/$ than the 1650 did at LAUNCH price, 3 YEARS ago.
2. Again, see whataboutism. My point was that AMD has moved the price/perf needle backwards from the 1650. I don't care what an out-of-production GPU from 3 years ago goes for on ebay today; I care about how much a new GPU costs compared to the previous generation's launch prices. The 2080 Ti was briefly available on ebay for $500 after the 3080 launch; that did not change the fact that the 3080 offered a SUBSTANTIAL improvement in perf/$ over the 2080.
3. Whether people think it's worth it or not does not change the objective fact that the 6400 in a PCIe 3.0 system will do worse than this review shows, further widening the gap between it and 3-year-old options and reinforcing the point that AMD has horrendously overpriced the 6400 - just like no number of people justifying a 3090 Ti for gaming changes the fact that the 3090 Ti is a horrendously priced product. No amount of meatshielding AMD will change this.


----------



## The red spirit (May 2, 2022)

AusWolf said:


> It's not a UK vs Lithuania thing. It's more like a "companies don't get rich by spending money" kind of thing. But let's stop the off-topic here.


Sort of. Having good conditions attracts more talent, sometimes even the best talent. That's generally desirable; on the other hand, there are penny pinchers that do things as cheaply as possible for maximum output and lower quality.



AusWolf said:


> That's exactly why you need a new PC if you want to play Youtube in HD. My point stands that just because you have an Athlon 64 or Athlon X4 or whatever at hand, it doesn't mean that it's fit for _your_ purpose. It _was_ fine, but it's not anymore. Nothing stops you from walking into a computer recycling centre and paying 5-10 EUR for a Sandy i5. It's even (a lot) cheaper than buying a new graphics card, and it won't only give you HD videos, but the whole system will be faster as well. But if you'd rather pay a hundred EUR just for a fecking video codec, be my guest.


Unless you want something as old as first-gen Core i stuff, sure, you can find them for dirt. They have cores as slow as that Athlon X4. If you want Haswell, you will pay. And anything newer used is as bad as buying new, even worse if you need a board one or two generations old. Scalping for those is insane. I looked at the market, and one of the cheaper i5s is an i5-3470s; the bloke wants 29 EUR for that. Another bloke sells an i5-2320 for 15 EUR. And a third bloke sells an i5-4590 for 40 EUR. Never mind the boards. At this point, a new Pentium or Celeron is a way better deal. And there aren't dirt cheap i5s locally. The cheapest i5 on eBay is 29 EUR + 12 EUR shipping with unknown import fees. It's from Italy. A cheaper i5 computer with a GT 1030 is 170 EUR, but goodness gracious, it has a bomb of a PSU, a case without ventilation that looks like it's from the early 2000s, and what looks like a single stick of RAM. It's with an i5-2500 though. That's ancient, barely better than the Athlon X4, but the 1030 saves the day, and there's an SSD too.




AusWolf said:


> Is that your point? Closing ears, closing eyes, _"lalala, I'm not listening because you're stupid"_? Very mature indeed.


That's literally you here. Ignoring scenarios where GPU functions are important and going lalala CheEP i5 StOoPiD. Very mature, indeed. I would understand such a moronic statement if you hadn't ever been out of your town, but that's not the case. There aren't cheap i5s everywhere, and replacing one piece of e-waste with another is monkey business.




AusWolf said:


> I did not. I said that the 6400's decoder supports all the formats that the 710's does, plus H.265. This is not a complaint. This is a fact.


In another thread ffs.




AusWolf said:


> The 6400 has VP9 and HEVC decode as well. I agree that the 1030 is enough for 99% of HTPC uses, as long as you don't need HDMI 2.1. It's only that the 1030 costs around £100, which is a terrible deal, imo.


Either GT 1030 or Quadro T400. Or buying a whole new platform altogether, which is 200 EUR minimum.




AusWolf said:


> As for an Alder Lake upgrade, I played with the thought before buying the 6400, but a motherboard and a CPU would have cost me around £150-200, and then I would have ended up with a slower CPU than what I have now. An i3 with the same performance as my Ryzen 3 would have cost me £250 for the whole system. Not worth it. As for people coming from older systems, they might also need to buy some DDR4 RAM which makes it even more expensive. If they have some DDR3 laying around, picking up an old i5 from a second-hand store for a couple of quid is still a lot better deal.


You really have a bad upgrading habit. A bit hypocritical of you to complain about price when you more or less buy a new CPU or GPU every generation. I hope you sell some, but you certainly don't save by going through parts so often. Your GT 1030 isn't even a year old, and the 3100 is at best two years old, if that. BTW, what happened to the i7 10700? Wasn't it for HTPC too? That should have decoding capabilities.




AusWolf said:


> Are you seriously comparing the 6400 to a top tier card from 20 years ago? Jesus...


It's that poo, so yeah. And the X800 Pro was an upper-end card. The high-end card back then was the X800 XT PE AGP. I have an X800 XT PE too, but it's basically the same as the X800 Pro - only marginally faster, though it went for way more dosh back then. Even as a cheap upgrade it was very disappointing indeed. Definitely not as big a leap as from the FX 5200 to the X800 Pro, but even then the same games were playable, just at more fps and with better graphics. Due to the X800 series lacking DirectX 9.0c support and a pixel shader version (can't recall which), it was a stupidly crippled card. The nVidia 6800 cards didn't have such limitations, but still aged badly due to way too high power consumption, and then soon after came the 8000 series, which was insanely good and had a very long lifespan. The RX 6400 and 6500 XT will suffer a similarly terrible fate.



Nuke Dukem said:


> The average net wage here is 1/3 to 1/4 of what you have over there. Unless you have disposable income - yes, it's a f****** expensive hobby.


Aye, during the last 3 years budget gaming pretty much died. Not sure about Bulgaria, but here not even APUs were available. Even 1050 Tis, 750 Tis and the RX 550 were gone or going for 300+ EUR. The only budget options were the GT 1030 (even that was nearly sold out all the time), the GT 730 GDDR5 and the Quadro T600. The Quadro T600 was 200 EUR and about as fast as a 1050 Ti. With an i3 10100F, it was the only sort-of-passable configuration. At least now there are a few more options like the 1650 or the RX 6600 (which is 400 EUR, but relatively awesome value). I'm not stoked that all we got new is that decoderless Radeon e-waste, and nVidia isn't exactly planning to release a GTX 3030 either. Even such low-end configurations would now take nearly two months of averagely paid work in a medium-size city. If you are in a smaller city (less than 50k people), then it sucks to be you. If you want anything more high-end, you'd better work in IT in the capital or save up for half the year. Lithuania also has 15+% inflation and imports have quite high fees, so your money loses value fast and using eBay isn't exactly an option.

The only strategy for budget gamers is to either play older games on modest hardware or play new games at 720p low with 30-40 fps. I'm blessed to have an RX 580, but the 6500 XT is still slower than it, and the RX 580 won't last forever; at some point it will become obsolete due to its old arch, lack of DX support, old shader support or some similar reason. The only saving grace is that quite a lot of modern games are quite boring, rehashed versions of older ones, or a buggy mess. Not sure about you, but I haven't really seen much to play. I only have Horizon 5 from the newer games, but that's it. I gotta admit I had a blast this year playing Battlefield 1942, which is old as fuck. Even the UHD 710 would have been enough for it. It's quite nice offline, but the AI sometimes is quite dumb and gets stuck in places. I played some Stalker too. Again, old as fuck, and it would run with a UHD 710. I tried to run it with an FX 5200 128MB and it almost could at 640x480, while with the X800 Pro it's not a problem. Obviously with the RX 580 it runs perfectly fine at 1440p, ultra settings and some control panel settings cranked.


----------



## catulitechup (May 2, 2022)

The red spirit said:


> Aye, during last 3 years budget gaming pretty much died. Not sure about Bulgaria, but not even APUs were available.
> 
> Even 1050 Tis, 750 Tis, RX 550 was gone or going for 300+ EUR.
> 
> ...


EU GPU prices are terrible compared to the USA.

In my case, I don't want to give any money to the fucking GPU scumbag companies (so far that includes AMD and Nvidia). Personally I'll wait for Arc and then see prices.

That said, at this point I mainly play old games; maybe I'll wait for Meteor Lake next year, because I'm interested in Arc-based iGPUs.*



> *Hopefully Intel (aka Pat) doesn't keep gimping desktop iGPUs, because right now Intel's laptop iGPUs are much better than its desktop ones.
> 
> For example, the Pentium G7400 has a UHD 710 with 16 EUs aka 128 shaders, while the Pentium 8500's iGPU has 48 EUs aka 384 shaders. Another example: the Core i3-12100's UHD 730 has 24 EUs aka 192 shaders (the i5-12500's UHD 770 has 32 EUs aka 256 shaders), while the i3 1210u's iGPU has 64 EUs aka 512 shaders.


----------



## AusWolf (May 2, 2022)

TheinsanegamerN said:


> 1. Unless you are willing to give into the scalpers and pay them bloated prices for used products, the 6400xt moving the price/perf needle backwards absolutely matters. Saying "well other cards have gone up in price too!!!!" is pure whataboutism, and doesnt change the fact that the 6400 offers worse performance/$ then the 1650 did at LAUNCH price, 3 YEARS ago.
> 2. Again, see whataboutism. My point was that AMD has moved the price/perf needle backwards from the 1650. I dont care what an out of production GPU from 3 years ago goes for on ebay today, I care about how much a new GPU costs compared to options from the previous generation's launch price. The 2080ti was available on ebay for $500 briefly after the 3080 launch, that did not change the fact that the 3080 offered a SUBSTANTIAL improvement in perf/$ over the 2080.
> 3 Whether people think it's worth it or not, it does not change the objective fact that the 6400 in a PCIe 3.0 system will do worse then the review here shows, and further widen the gap between it and 3 year old options, reinforcing the point that AMD has horrendously overpriced the 6400, just like no matter how many people can justify buying a 3090ti for gaming it doesnt change the face that the 3090ti is a horrendously priced product.  No amount of meatshielding AMD will change this fact.


1. I wasn't talking about scalpers. If you strictly consider retail availability only, then the low profile 1050 Ti and 1650 aren't even there, so your only options are the GT 710 for £60, the GT 1030 for £100 or the 6400 for £170. Don't tell me that the 1030 is so great that it's worth 100 quid, or that the 710 is worth spending any amount of money on in 2022.
2. It's not whataboutism. Retail prices and MSRP three years ago don't concern me, nor do they concern anyone else who walks into a computer store, or looks one up online right now. Show me a store where you can buy a 1650 for 150 USD, a 3060 for 329 USD, or a 3080 for 699 USD.
3. I've tested it. Gameplay experience is subjective, of course, but for me, it wasn't that bad in most cases. Metro Exodus sucked on it for some reason, I acknowledge that.



The red spirit said:


> Sort of. Having good conditions attracts more talent, even the best talent sometimes. That's desirable generally, on the other hand there are penny pinchers that do things as cheaply as possible for maximum output and lower quality.


That's very true. I wish the logistics sector worked on this principle, too. Unfortunately, this industry isn't like that. Our company leaders are only concerned about numbers most of the time.



The red spirit said:


> Unless you want something as old as first gen Core i stuff, sure, you can find them for dirt. They have cores as slow as that Athlon X4. If you want Haswell, you will pay. And anything newer used is is bad as buying new, even worse if you need a generation or two generations old boards. Scalping for those is insane. I looked at market and one of the cheaper i5s is i5-3470s, bloke want 29 EUR for that. Another bloke sells i5-2320 for 15 EUR. And third bloke sells i5-4590 for 40 EUR. Never mind the boards. At this point, new Pentium or Celeron is way better deal. And there aren't dirt cheap i5s locally. Cheapest i5 on eBay is 29 EUR + 12 EUR shipping with unknown import fees. It's from Italy. Cheaper i5 computer with gt 1030 is 170 EUR, but godness gracious, it has a bomb like PSU, case without ventilation, which looks like it was from early 2000s and it looks like single stick of RAM. It's with i5 2500 tho. That's ancient, barely better than Athlon X4, but 1030 saves the day, there's SSD too.


I guess that's a country vs country difference, then. Here, the i5 4460 sells for 15 quid with warranty.



The red spirit said:


> That's literally you here. Ignoring scenarios where GPU functions are important and lalala CheEP i5 StOoPiD. Very mature, indeed. I would understand such moronic statement if you haven't ever been out of your town, but that's not the case. There aren't cheap i5s everywhere and replacing one e-waste with another is monkey business.


You presented a case. I presented a solution that is cheaper than buying a graphics card. You ignored it. Let's leave it at that.



The red spirit said:


> In another thread ffs.


OK, show me. It may have been ages ago, as I've had the 1030 for a while now.



The red spirit said:


> Either GT 1030 or Quadro T400. Or buying a whole new platform altogether, which is 200 EUR minimum.


GT 1030: sure. T400: too expensive and rare. New platform: too expensive.



The red spirit said:


> You really have a bad upgrading habit. A bit hypocritical of you to complain about price, when you more or less buy a new CPU or GPU every generation. I hope you sell some, but you certainly don't save by going through parts so often. Your GT 1030 isn't even 1 year old, 3100 is at best 2 years old if that. BTW what happened to i7 10700? Wasn't it for HTPC too? That should have decoding capabilities.


If you have such a good memory regarding what I said in other threads, then you might recall me saying that I don't only buy computer parts to upgrade.  PC building is a hobby of mine. I buy most of my stuff out of curiosity, or through a dirt cheap deal, not because I actually need it.



The red spirit said:


> It's that poo, so yeah. And X800 Pro was upper end card. High end card back then was X800 XT PE AGP. I have X800 XT PE too, but it's basically the same as X800 Pro. It's only marginally faster, but was going for way more dosh back then. Even as cheap upgrade it was very disappointing indeed. Definitely not as big leap as from FX 5200 to X800 Pro, but even then same games were playable, just at more fps and better graphics. Due to X800 series lacking DirectX 9c support and pixel shader version (can't recall which), it was stupidly crippled card. nVidia 6800 cards didn't have such limitations, but still aged badly due to way too bad power consumption and then soon after launched 8000 series, which were insanely good and had very long lifespan. RX 6400 and 6500 XT will have similarly terrible fate.


Guess what... high end cards tend to come with all the features and gimmicks while low end ones don't. What do you find so surprising about this?


----------



## Valantar (May 2, 2022)

AusWolf said:


> I guess that's a country vs country difference, then. Here, the i5 4460 sells for 15 quid with warranty.


Wow, I wish I knew of a store like that here in Sweden, or in the EU generally. I just ordered an i7-2600 from Ebay for my secondary PC, and that cost me €35 + €15 in shipping. Then again, an i7 is always more expensive than an i5, and I specifically wanted the HT support. Now I'm just looking for a way to spend less than SEK 1000 on an ITX motherboard for a Haswell i5 that I got my hands on for free recently. There are those "new" Chinese-brand motherboards that look quite interesting (M.2 slots, even!), but they're so expensive.


----------



## The red spirit (May 2, 2022)

AusWolf said:


> That's very true. I wish the logistics sector worked on this principle, too. Unfortunately, this industry isn't like that. Our company leaders are only concerned about numbers most of the time.


Just asking, but wouldn't it be better to learn some programming and get a job in that? From what I see, at least in Lithuania, there is quite a lot of demand, you can often work from home, and making 2-4 or even 10 times as much as other degree holders seems entirely possible. That looks like a crazy shortcut in life, and you only need to study for about a year initially. Job offers say that after the first 2-5 years you can get a full wage, but even an entry-level programmer gets the average national wage. I don't know any other profession that seems so easy to get into and pays so well.

You can either work from home and have a nice office there, or save a lot of money, keep putting it into investments, and after a decade just retire and live off your pot until you die. And by saving, I mean you are making several times more dosh than basically everyone else, so you can live like them and not spend more. I wonder if it really works out like that; if it does, that sounds amazing.

But even in non-programming fields you can work from home, and if you hate your office, there are possibilities like that. Not only that, but you can buy cheaper property in the middle of nowhere too, so even if you don't make more money than others, you have unique expense-lowering opportunities with improved quality of life. As long as you don't mind WFH, it seems like a bit of a no-brainer.



AusWolf said:


> I guess that's a country vs country difference, then. Here, the i5 4460 sells for 15 quid with warranty.


That's not a bad deal, but the real problem with old CPUs is that their motherboards are getting less common and people scalp them badly.




AusWolf said:


> OK, show me. It may have been ages ago, as I've had the 1030 for a while now.


That was your HTPC thread, I certainly won't find it.




AusWolf said:


> GT 1030: sure. T400: too expensive and rare. New platform: too expensive.


GT 1030 and T400 basically cost the same and T400 has some GDDR6.




AusWolf said:


> If you have such a good memory regarding what I said in other threads, then you might recall me saying that I don't only buy computer parts to upgrade.  PC building is a hobby of mine. I buy most of my stuff out of curiosity, or through a dirt cheap deal, not because I actually need it.


Maybe, but that's still really often. 




AusWolf said:


> Guess what... high end cards tend to come with all the features and gimmicks while low end ones don't. What do you find so surprising about this?


Except it wasn't exactly that way. The R420 core in the X800 Pro and X800 XT PE had 12 and 16 pipelines. That exact same core was cut down to 4 pipes for lower-end GPUs like the ATi Radeon X550 XT. You got the same capabilities, but for a lot less. And even the X800 GT or SE were affordable cards. They had 8 pipes. With enough modding and luck, you may have been able to unlock all 16 pipes and overclock it. You could raise the voltage manually with a potentiometer and slap an ATi Silencer from Arctic on it. But anyway, those cards were cheap, moddable and feature-wise identical to the flagship cards. You even got exactly the same cooler and often a very similar PCB too. And I remember Apple bragging about the Intel GMA from a similar era and how it could play Blu-rays just fine, and that was the lowest of the low - not even integrated into the CPU, but onboard graphics. The RX 6400 really has some inexcusable regressions that don't really save much but cripple the user experience. If I remember well, nVidia cards handle decoding with CUDA, so there's no easy way to cut out such functionality either. Even if you manage to cut it out, you save barely anything anyway. A CU or two likely takes up the same space, so it's pretty silly to mess with the VCN hardware.



Valantar said:


> Now I'm just looking for a way to spend less than SEK 1000 on an ITX motherboard for a Haswell i5 that I got my hands on for free recently. There are those "new" Chinese brand motherboards that look quite interesting (M.2 slots even!), but they're so expensive


I would rather avoid those Chinese boards. You don't get any support, some of them are gimped at the HW level (like botched RAM channels), some lack BIOS options, the VRMs are questionable, some features might not work (like turbo), or they only support some very specific CPU models.


----------



## TheinsanegamerN (May 3, 2022)

I love being proven right:

AMD Radeon RX 6400 Review - www.techspot.com

> The Radeon RX 6400 is a brand new RDNA2 GPU that uses TSMC's cutting-edge 6nm process. Sounds exciting, right? Well, maybe not so much once you get...

TL;DR: on PCIe 3.0 the 6400 loses roughly 15% on average vs 4.0. Once again, AMD has gimped their latest card....


----------



## Valantar (May 3, 2022)

TheinsanegamerN said:


> I love being proven right:
> 
> 
> 
> ...


You know Techspot's reviews are the same as HWUB's, right? So in essence, that review has already been posted. Nothing new there.




The red spirit said:


> I would rather avoid those Chinese boards. You don't have any support, some of them are gimped on HW level (like botched RAM channels), some lack some BIOS options, questionable VRMs, some features might not work (like turbo) or they will only support some very specific models of CPUs


Yeah, I'll definitely have to do some research before committing - at least reading some reviews. At least there are some Haswell ITX boards to be found - pickings are far slimmer for my poor old Sandy Bridge CPUs. There are a few that exist, but man are they expensive.


----------



## catulitechup (May 3, 2022)

TheinsanegamerN said:


> I love being proven right:
> 
> 
> 
> ...



Very good, another review confirming that this card suffers badly from the poor decisions to cut the PCIe lanes and feature capabilities.

Another detail I don't like is this, maybe due to the tiny heatsink and fan:



> The GPU temperature was reasonable though, peaking at 78 C



And I agree with some of the conclusions, like this:



> Ultimately, the Radeon RX 6400 *sucks just as much as we knew it would*.
> 
> Those of you seeking a budget graphics card for gaming should certainly look elsewhere, especially if you have a PCIe 3.0 system.
> The best alternative for those wanting to get their hands on a graphics card for under $200 is to shop second hand if possible.
> ...



The RX 6400 may be an option for SFF use, or if you live in the EU, because GPU prices there are crazy. But SFF is a minimal share of users, since most people have a typical (non-SFF) PC.

And on another topic: apparently VCN 4.0 in RDNA 3 doesn't come with AV1 encode support, for now:


__ https://twitter.com/i/web/status/1521291701050032128


----------



## AusWolf (May 3, 2022)

The red spirit said:


> Just asking, but wouldn't it be better to learn some programming and get job in that? From what I see, at least in Lithuania, there is quite a lot of demand for them, often you can work from home and making 2-4 or even 10 times as much as other degree holders seems entirely possible. That looks like crazy shortcut in life and you only need to learn for one year initially. Job offers say that after first 2-5 years you can get full wage, but even entry level programmer gets average national wage. I don't know any other profession that seems to be rather easy to get in and pays a lot. You can either work from home and have a nice office at home or you can save a lot of money and keep putting it into investments and after decade, you can just retire and live off your pot until you die. And by saving, I mean you are basically making times more dosh than basically everyone else, so you can live basically like them and not spend more. I wonder if it really works out like that, if it does that sounds amazing. But even in non programming fields, you can work from home and if you hate your office, there are possibilities like that. Not only that, but you can just buy cheaper property in middle of nowhere too, so even if you don't make more money than others, you have unique expense lowering opportunities with improved quality of life. As long as you don't mind WFH, it seems like a bit of no brainer thing to do.


I've got personal reasons for not going down that way. I'm happy to talk about it in private, but let's not spam the thread even more. 



The red spirit said:


> That's not bad deal, but the real problem with old CPUs is that their motherboards are getting less common and people scalp them badly.


That again sounds like a country vs country thing. I've seen H81 boards selling for £20-30 on ebay. So 30 quid for the board, 15 for the CPU, another 8 for two 4 GB sticks of RAM, that's £53 altogether.



The red spirit said:


> That was your HTPC thread, I certainly won't find it.


Oh, you mean my small form factor build thread (there's a link in my signature). I'll update it soon with my recent experiences with the 6400 and with my other HTPC that I built just a week ago.

I remember it now. Even though the GT 710 supports H.264 decoding, it can't do it in 4K (which isn't mentioned anywhere on its datasheet). It's trying at 100% usage while the video is basically unwatchable and the CPU is sitting idle. That's why I bought the 1030. This wasn't a complaint, either. Pure fact.

Edit: If you want a complaint, I think this is more of a trap situation than the 6400, where AMD states outright that it doesn't support AV-1.



The red spirit said:


> Maybe, but that's still really often.


And?



The red spirit said:


> Except that wasn't exactly that way. R420 core in X800 Pro and X800 XT PE had 12 and 16 pipelines. That exact same core was cut down to 4 pipes for lower end GPUs like ATi Radeon HD X550 XT. You got the same capabilities, but for a lot less. And even X800 GT or SE were affordable cards. They had 8 pipes. With enough moding and luck, you may have been able to unlock all 16 pipes and overclock it. You could raise voltage manually with potentiometer and slap ATi Silencer from Arctic. But anyway, those cards were cheap, modable and feature wise identical to flagship cards. You even got the exactly the same cooler on it and often very similar PCB too. And I remember Apple bragging about Intel GMA from similar era and how it could play BluRays just fine and that was the lowest of the low, not even integrated into CPU, but on board graphics. RX 6400 really has some inexcusable regressions that don't really save much, but makes user experiences crippled. If I remember well, nVidia cards handle decoding with CUDA, so there's no easy way to cut out such functionality too. Even if you manage to cut it out, you same barely anything anyway. A CU or two likely take up same space, so it's pretty silly to mess with VCN hardware.


We're talking about a different age here. Graphics cards in general were a lot more affordable back then. Heck, I bought an X800 XT and then a 7800 GS from pocket money as a high school kid with no job. I work full time now, but anything above a 3070 Ti or 6700 XT is out of my range (even those are iffy at £500-600). But this is beside the point...

All in all, the 6400 doesn't have "regressions". It's more advanced than previous generations. It only lacks certain features that more expensive models of the same generation have. If having no AV-1 decoder is inexcusable for you because you want to use it with a CPU that can't handle it, that's a unique problem. That CPU is a terrible pair with any modern GPU, including the 6400.

Edit: If someone doesn't have the money to replace a 10+ year-old CPU, they most definitely won't have the money for a 6400, either.



TheinsanegamerN said:


> I love being proven right:
> 
> 
> 
> ...


IMO, talking about the average is pointless. It can offer roughly the same performance on any PCIe version in some games, but perform like ass in others.



catulitechup said:


> another detail dont like is this, maybe for tiny heatsink and fan:
> 
> _"The GPU temperature was reasonable though, peaking at 78 C"_


That's strange. My Sapphire Pulse peaks at 67 °C. I guess the heatpipe that runs across its cooler helps more than I thought.


----------



## The red spirit (May 3, 2022)

AusWolf said:


> That again sounds like a country vs country thing. I've seen H81 boards selling for £20-30 on ebay. So 30 quid for the board, 15 for the CPU, another 8 for two 4 GB sticks of RAM, that's £53 altogether.


Not really; if you watch Brian from TechYesCity, there's basically a universal i5 or i7 "tax". And motherboards on eBay have always been terribly overpriced.




AusWolf said:


> Oh, you mean my small form factor build thread (that has a link my signature). I'll update it soon with my recent experiences with the 6400 and with my other HTPC that I just built a week ago.
> 
> I remember it now. Even though the GT 710 supports H.264 decoding, it can't do it in 4K (which isn't mentioned anywhere on its datasheet). It's trying at 100% usage while the video is basically unwatchable and the CPU is sitting idle. That's why I bought the 1030. This wasn't a complaint, either. Pure fact.


lol, that's quite sad. Even cards as ancient as the Radeon HD 7750 can decode H.264, and in 4K too.



AusWolf said:


> We're talking about a different age here. Graphics cards in general were a lot more affordable back then. Heck, I bought an X800 XT and then a 7800 GS from pocket money as a high school kid with no job. I work full time now, but anything above a 3070 Ti or 6700 XT is out of my range (even those are iffy at 5-600 GBP). But this is besides the point...


But what about the Intel UHD 710? It decodes everything on a budget, even 4K AV-1, and can be found even in Celerons. That's stupidly affordable. Kind of a shitty excuse for the RX 6400 to not be able to do as much at nearly 4 times the cost of a Celeron. The GT 1030 is also somewhat superior, as it supports VP9 decoding and has ShadowPlay.



AusWolf said:


> All in all, the 6400 doesn't have "regressions". It's more advanced than previous generations. It only lacks certain features that more expensive models of the same generation have.


Um, literally the RX 5500 XT is superior to the RX 6500 XT. It performs the same, used to be available at the same or lower price, had ReLive, had PCIe x8, has VP9 decoding/encoding, and has more outputs. Sure, it doesn't have an AV-1 decoder, but when it launched, AV-1 was still largely experimental. Even the RX 5300 is superior to the RX 6400, not only smashing it on feature set but offering superior performance in an even lower tier. If AMD just hadn't stopped production of the low-end 5000-series cards, they would have cheap and competitive products. I don't see any reason why the low-end RX 6000 series has to suck so much. This snafu reminds me of the FX launch, when Phenoms beat FXs while being more efficient and cheaper.



AusWolf said:


> If having no AV-1 decoder is inexcusable for you because you want to use it with a CPU that can't handle it, that's a unique problem. That CPU is a terrible pair with any modern GPU, including the 6400.


I have an i5 10400F and it skips some frames at 4K60 on YT with VP9. Skipping around in a video is also quite sluggish with it. 8K is completely out of the question; it just doesn't work well at all. You can argue that my screen is only 1440p, but, mate, I love the extra bitrate and supersampling. YouTube is just not the same once you try that. I used to do it on my phone too, before the YT app had options for above-native resolution, but there isn't as much benefit there as on a bigger screen. But yeah, I will keep my "crappy" i5 away from the RX 6400 snafu edition; it's not worthy of a proper CPU. BTW, FX and Athlon chips become sluggish and skippy at 1440p60; 1440p30 is usually fine. They can sort of handle 4K30, but skipping is too laggy then.


----------



## catulitechup (May 3, 2022)

The red spirit said:


> lol that's quite sad. Even cards as ancient as Radeon HD 7750 can decode H264 and in 4K too.
> 
> But what about Intel UHD 710. It decode everything on budget. Even 4K AV-1. Can be found even in Celeron. That's stupidly affordable.
> 
> ...


Sadly, AMD has become more mediocre and greedy with its latest products (most notably with the Ryzen 5 5600X); for this reason I have no interest in giving any money to this scumbag company.

And as you said, the RX 5300 XT is better than the RX 6400, and curiously has the same performance as the RX 6500 XT, without forgetting it's an x3xx-tier card.



> 4gb variant https://www.techpowerup.com/gpu-specs/radeon-rx-5300-xt.c3465
> 
> 3gb variant https://www.techpowerup.com/gpu-specs/radeon-rx-5300.c3584



Summing up: the RX 5300 line has PCIe 4.0 x8, 1408 shaders, a 128-bit memory bus with 112 GB/s on the 4 GB variant and a 96-bit bus with 168 GB/s on the 3 GB variant, plus H.264/H.265 encode capability.
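Those bandwidth figures follow from the usual formula, bandwidth = (bus width in bits / 8) × effective data rate. A quick sanity check, assuming the 4 GB variant runs GDDR5 at 7 Gbps on a 128-bit bus and the 3 GB variant runs GDDR6 at 14 Gbps on a 96-bit bus (my assumed data rates, consistent with TechPowerUp's database):

```python
def mem_bandwidth_gbps(bus_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s from bus width and effective data rate."""
    return bus_bits / 8 * data_rate_gbps

# RX 5300 XT (4 GB): 128-bit GDDR5 at 7 Gbps effective
print(mem_bandwidth_gbps(128, 7.0))   # 112.0 GB/s
# RX 5300 (3 GB): 96-bit GDDR6 at 14 Gbps effective
print(mem_bandwidth_gbps(96, 14.0))   # 168.0 GB/s
```

Note the 168 GB/s figure only works out with a 96-bit bus; a 64-bit bus at 14 Gbps would give just 112 GB/s.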


----------



## AusWolf (May 3, 2022)

The red spirit said:


> Not really, if you watch Brian from TechYesCity, there's basically universal i5 or i7 "tax". And motherboards on eBay have always ben terribly overpriced.


Did you check the links I sent? Here's the 4460 for 15 GBP with warranty. There is an i7 "tax", but old i5 CPUs are dirt cheap. H81 motherboards from ebay (link) between 20-30 quid. The RAM is in the post you replied to (with warranty again). £50-60 for the whole system. How much cheaper do you want it?



The red spirit said:


> lol that's quite sad. Even cards as ancient as Radeon HD 7750 can decode H264 and in 4K too.


Did you try that? I didn't even know that 4K existed when I had a 7000-series Radeon in my PC. Here it says that they only decoded up to 2K, by the way.



The red spirit said:


> But what about Intel UHD 710. It decode everything on budget. Even 4K AV-1. Can be found even in Celeron. That's stupidly affordable. Kinda shitty excuse for RX 6400 to not be able to do as much at nearly 4 times the cost of Celeron. GT 1030 is also somewhat superior as it supports VP9 decoding and has ShadowPlay.


A new Intel CPU + motherboard + RAM combo is too expensive just to watch movies. I thought we talked about this before.



The red spirit said:


> um, literally RX 5500 XT is superior to RX 6500 XT. Performs the same, used to be available at same or lower price, had ReLive, had PCIe x8 and has VP9 decoding/encoding, has more outputs. Sure, it doesn't have AV-1 decoder, but when it was launched AV-1 was more in experimental stage. Even RX 5300 is superior to RX 6400, not only smashing it at feature set, but offering superior performance in even lower tier. If AMD just haven't stopped production of 5000 series low end cards, they would have cheap and competitive products. I don't see any reason why low end RX 6000 series have to suck so much. This snafu reminds me of FX launch, when Phenoms beat FXs, while being more efficient and cheaper.


Did you visit any 6500 XT review thread by any chance? I was one of the few who said in one of them what a terrible card it was for its identity crisis: it tries to be low-power, but has a power connector. It tries to be a gaming card, but it's too slow and restricted on PCI-e 3.0. It doesn't even have low profile variants, making it useless in SFF cases. It's also too expensive for 200 USD. But that's not the topic here. I like the 6400 because it's the complete opposite: it really is low-power, has low profile options for SFF maniacs (like myself), it isn't trying to be a gaming card, and while it isn't 2018 levels cheap, its price is a bit closer to what it offers than that of the 6500 XT. It's everything the 6500 XT should have been.



The red spirit said:


> I have i5 10400f and it skips some frames at 4k60 in YT with VP9.


How? Even my Ryzen 3 3100 + RX 6400 HTPC doesn't do that.  (Or maybe the 6400 isn't so useless after all?)


----------



## The red spirit (May 4, 2022)

AusWolf said:


> Did you check the links I sent? Here's the 4460 for 15 GBP with warranty. There is an i7 "tax", but old i5 CPUs are dirt cheap. H81 motherboards from ebay (link) between 20-30 quid. The RAM is in the post you replied to (with warranty again). £50-60 for the whole system. How much cheaper do you want it?


I mean, outside of UK.




AusWolf said:


> Did you try that? I didn't even know that 4K existed when I had a 7000-series Radeon in my PC. Here it says that they only decoded up to 2K, by the way.


4K sort of existed; AMD was pushing its Eyefinity technology. It wasn't exactly for video, but for gaming. Here's a blast from the past:

In theory, the card supported 6 displays, but good luck trying to connect them all and then trying to look at all those displays. Anyway, in the NCIX demo it ran 3 displays at 5760x1080, which is 6.2 million pixels; 4K is about 8.3 million pixels. The Radeon 5870 could run 6 displays, with the DVIs maxing out at roughly 1440p, so in theory the 5870 could run six 1440p displays. One 1440p display is nearly 3.7 million pixels; multiplied by 6, that's 22.1 million pixels, quite a bit above 4K. 4K itself became known in the monitor scene 8-9 years ago, i.e. 2013-2014, and the 7750 launched in 2012, so that's close enough. The 7750 had DP 1.2, which supports 4K75 or 5K30, but the card was seemingly limited to outputting only 4K60.
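The pixel math above can be double-checked quickly:

```python
def megapixels(w: int, h: int, displays: int = 1) -> float:
    """Total pixel count across identical displays, in millions of pixels."""
    return w * h * displays / 1e6

print(round(megapixels(5760, 1080), 1))     # 6.2  (3x 1920x1080 Eyefinity)
print(round(megapixels(3840, 2160), 1))     # 8.3  (4K UHD)
print(round(megapixels(2560, 1440, 6), 1))  # 22.1 (6x 1440p)
```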

As far as decoding goes, I guess I'll take your word for it. But there's the ancient R9 285 that could do it, and after that even the RX 460 could. On the nV side, it seems the GT 610 could already decode 4K with the H.264 codec. So my point remains the same: 4K decoding is old and should be nothing special. Lately it has been the norm for cards to be able to decode it.




AusWolf said:


> A new Intel CPU + motherboard + RAM combo is too expensive just to watch movies. I thought we talked about this before.


But you say that buying a 4K-decode-capable card is too expensive for an ancient machine too. A Celeron upgrade is roughly 200 EUR, and there will be a ton of other benefits to doing that, unlike with the card alone.




AusWolf said:


> Did you visit any 6500 XT review thread by any chance? I was one of the few who said in one of them what a terrible card it was for its identity crisis: it tries to be low-power, but has a power connector. It tries to be a gaming card, but it's too slow and restricted on PCI-e 3.0. It doesn't even have low profile variants, making it useless in SFF cases. It's also too expensive for 200 USD. But that's not the topic here. I like the 6400 because it's the complete opposite: it really is low-power, has low profile options for SFF maniacs (like myself), it isn't trying to be a gaming card, and while it isn't 2018 levels cheap, its price is a bit closer to what it offers than that of the 6500 XT. It's everything the 6500 XT should have been.


Yeah, I watched the HWUB review. But crucially, cards like the RX 6400 and 6500 XT fail as HTPC cards due to the nerfed decoding support. You are better off with a Quadro T400, a GT 1030, or just an Alder Lake Celeron with UHD 710. And as for the RX 6400 being fast, wouldn't a GTX 1650 have served you equally well in gaming, a few years earlier and for the same price, without the gimped decoding? It's not like there wasn't an LP GTX 1650 either.




AusWolf said:


> How? Even my Ryzen 3 3100 + RX 6400 HTPC doesn't do that.  (Or maybe the 6400 isn't so useless after all?)


Like this:




That's just a 4K60 VP9 video, not even in fullscreen mode. I skipped some parts of it myself, but yeah, it dropped some frames. The CPU was loaded to ~70% the whole time, with some spikes, which led to dropped frames. I tried an 8K demo too, and that just left the CPU pegged at 100% and dropping basically half the frames. 1440p60 usually uses 35-45% CPU, with spikes as high as 75% when manually skipping through the video. When skipping through a video or going fullscreen, frames are dropped. So while 1440p videos are perfectly watchable, the experience isn't exactly immaculate. For such reasons, I hope you can see why I say GPU decoding is important: even a pretty modern i5 can't truly ensure a proper 4K video decoding experience in software. Sure, it works fine if you never skip through the video or go fullscreen while it's playing, but there could still be particularly intensive parts of a 4K60 video where even a fast CPU may drop frames. Just pausing and resuming leads to big CPU usage spikes and dropped frames. If a card with VP9 and AV-1 decoders were available, I might actually think of getting it just for the HW decoding.
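For a rough sense of why 4K60 is so much heavier on the CPU than 1440p60: the raw pixel rate alone is 2.25x higher, before even counting codec complexity. A quick illustration:

```python
def pixel_rate(w: int, h: int, fps: int) -> int:
    """Raw pixels that must be decoded per second."""
    return w * h * fps

r4k = pixel_rate(3840, 2160, 60)     # 497,664,000 px/s
r1440 = pixel_rate(2560, 1440, 60)   # 221,184,000 px/s
print(r4k / r1440)  # 2.25
```

That ratio lines up roughly with the ~70% vs 35-45% CPU loads observed above, though real decode cost also depends on bitrate and codec features.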

BTW, if you wonder why the RX 580 doesn't do anything here, it's because AMD said in 2016 that there was an embargo on VP9 HW decoding and that it would come with driver updates. It turned out there wasn't any full VP9 decoding hardware in the GPU, and all AMD did was briefly offer hybrid VP9 decoding, which was buggy and didn't work right. So VP9 decoding was soon completely scrapped from the drivers, and Polaris ended up without any proper VP9 decoding at all. BTW, hybrid VP9 decoding means partial acceleration: the card couldn't do the complete job by itself and would partially offload decoding to the CPU. That sounds exactly as shit as it was. I'm quite surprised nobody sued AMD for lying about non-existent features.


----------



## eidairaman1 (May 4, 2022)

I would use the 6400 or 6500 for htpc, business, or as a diagnostic board.

I had an R7 250X Ghost by XFX from 2016, but it's driving a 5800 rig now.


----------



## AusWolf (May 5, 2022)

The red spirit said:


> I mean, outside of UK.


Fair enough - I can only talk for the UK market.



The red spirit said:


> 4K sort of existed, AMD was pushing Eyefinity technology. It wasn't for video exactly, but for gaming. Here's a blast from the past:
> 
> 
> 
> ...


Yes, 4K "sort of" existed through Eyefinity and other multi-display technologies, but 4K decoding did not (nor did anyone need it).



The red spirit said:


> But you say that buying 4k decoding capable card is too expensive for ancient machine too. Buying Celeron upgrade is roughly 200 EUR and there will be a tons of other benefits of doing that, unlike with card only.


Well, if you absolutely cannot live without 4K 60 fps Youtube, then you need hardware decoding either in a new CPU or a new graphics card. The 6400 can apparently do it (it has VP9), so there you go. 4K 30 fps works on anything I mentioned above.



The red spirit said:


> BTW if you wonder why RX 580 doesn't do anything, it's because AMD said that there was an embargo for VP9 HW decoding in 2016 and VP9 decoding will come with driver updates. It turned out that there was either any or full VP9 decoding hardware in GPU and all AMD did was briefly offer hybrid VP9 decoding capabilities, which were buggy and didn't work right. So soon VP9 decoding was completely scrapped from drivers and Polaris ended up not having any proper VP9 decoding. BTW hybrid VP9 decoding means partial acceleration, meaning that card couldn't do complete job by itself and would be partially offloading decoding to CPU. That sounds exactly as shit as it was. I'm quite surprised that nobody sued AMD for lying about non-existant features


Wow! Now that's what I'd call a trap! The 6400 is nothing of the sort - it does exactly what AMD say it does. Nothing more, nothing less.


----------



## The red spirit (May 5, 2022)

AusWolf said:


> Yes, 4K "sort of" existed through Eyefinity and other multi-display technologies, but 4K decoding did not (nor did anyone need it).


To be fair, even the Matrox Parhelia had surround capabilities. Imagine 3 CRTs running games in 2002 (at slideshow framerates).




AusWolf said:


> Well, if you absolutely cannot live without 4K 60 fps Youtube, then you need hardware decoding either in a new CPU or a new graphics card. The 6400 can apparently do it (it has VP9), so there you go. 4K 30 fps works on anything I mentioned above.


Nah, I can live without it, and the CPU does the job, but it just isn't as decent as one might expect. This wouldn't be a problem at all if YT allowed forcing H.264.




AusWolf said:


> Wow! Now that's what I'd call a trap! The 6400 is nothing of the sort - it does exactly what AMD say it does. Nothing more, nothing less.


Aye, the RX 580 was a big trap; not only that, but the hype around Polaris in general was insane. I have to admit that in terms of PR it was one of AMD's best campaigns, while the actual product ended up sort of okay (og Polaris) or quite poor (the Polaris refresh). It was also quite ironic to see AMD advertising power efficiency when RX 480s burned PCIe slots and the RX 580 consumed as much power as a GTX 1080.

Still, the best thing about Polaris cards is how long-lasting they are. They launched 5-6 years ago, but they are still fast, fully supported, get updates, got FineWine, and have enough VRAM (it didn't end well for GTX 970 owners with 3.5 GB, or GTX 1060 3GB owners). The actual functionality of the cards improved too: they got professional drivers, a special compute mode, RIS, Enhanced Sync, Chill, Anti-Lag, integer scaling, 10-bit pixel format, and AMD WattMan. That's a lot of stuff. Besides that, Polaris and Vega were the last cards without an encrypted vBIOS, so vBIOS mods are super simple and easy. AMD was also very generous with voltage, so these cards are easy to overclock and undervolt. We've also heard a lot about Polaris/Vega capabilities in mining.

Really, besides some lies and stupid hype, Polaris (or better said, GCN) was one of the best releases by AMD; it's definitely up there with the Radeon HD 4870. It also happened to avoid awful QC problems, like og GCN being poo at any tessellation at all, or the entire RX 5000 series suffering common voltage/driver malfunctions (anyone remember the black screen issues? It turned out to be a hardware defect: voltage was set too low on many cards, and the only fix was sacrificing boost speed or doing some software magic to avoid the super low power states). Despite the TDP reduction and speed cap, my RX 580 still runs games at 1440p at medium-high.

BTW, I actually managed to almost make my computer run 8K30 YouTube. I only needed Xubuntu and a script blocker in the web browser, and it was very close to playable. Gotta say, that supersampling looked awesome, and I miss that kind of bitrate; at that point it really starts to look like lossless video. I did the same with another Athlon X4 machine, but with 4K on a 1080p screen. That ended up not as great: there was so much resolution that it started to show aliasing. Wanna try running 8K60 video on that poor machine? It's literally harder than running Crysis. The funny thing is that the RX 580 most likely could run GTA 5 at normal settings at 8K. But supersampling in games is something entirely different than in videos: the RX 580 struggles to run Colin McRae Rally 2005 with 8x SSAA. The original resolution was 1920x1440, so the actual resolution was 7680x5760. To be fair, it mostly ran well, but smoke kills performance. It looked very nice and sharp, though.


----------



## catulitechup (May 5, 2022)

Courtesy of VideoCardz, tagged as a rumor, some possible Arc GPU prices have appeared:








> Intel rumored to announce Arc A750, A380 desktop cards in late May/early June for 350/150 USD respectively - VideoCardz.com
> 
> 
> Initial Intel Arc desktop launch and pricing emerges According to the fresh report from Wccftech quoting their own sources, Intel has just provided a new launch timeline for its Arc Alchemist desktop cards. Intel reportedly did not confirm the exact release date to its partners yet, but new...
> ...


----------



## Trov (May 6, 2022)

Update on my Single Slot RX 6400 for Lenovo Thinkcentre Tiny analysis now that my XFX card arrived:

The card didn't immediately fit into my ThinkStation P330 Tiny; I had to Dremel a little bit off the front frame section that the front Wi-Fi antenna sits on. Alternatively, this metal frame piece can simply be removed entirely with just one screw. After that, the XFX RX 6400 fits inside. I don't think the Sapphire model, being about 1-2 cm longer, has any chance of fitting inside the Lenovo Tiny. The PowerColor model appears to be the same length as the XFX version, so it should also fit.

The fan holes on the P330 cover don't line up with the RX 6400 fan, so eventually I will drill more holes later.

My first impression is that the XFX single-slot RX 6400's fan is way louder (probably a good 2-3x louder) at the same RPMs, and has way more of a 'tone' to it, than my Quadro T600/T1000. Unfortunately, it's loud enough that I'm annoyed enough to consider staying with the Quadro as the final choice. The fan kicks in even in games such as Risk of Rain 2 at 1080p. However, it does run about 10 °C cooler than the T1000 in the same game (tested with the P330 top cover off in both cases). I wonder if my particular fan is faulty, or if it's just a crappy fan that XFX used. Maybe there's an alternative fan I can jerry-rig in its place, since the fan can be removed without removing the heatsink. It does not appear that I can alter the RX 6400's fan curve, at least with Afterburner.

Time Spy will not run for some reason; the benchmark closes as soon as loading completes. I wonder if a recent Time Spy update or the AMD drivers broke it. I'll try again in a week.

Since Time Spy isn't working, I haven't done a lot of performance testing yet. I can say, though, that at PCIe 3.0, FurMark 1080p is about 5 fps faster on the RX 6400 than on the Quadro T1000, but a few frames slower at 4K.

Unless I can come up with a satisfactory fan solution I don't think I will want to keep the card, unfortunately. I was hoping for something a little quieter than the T1000, but it's much louder, despite running much cooler. Since the RPM and fan size are more or less equal between the two I don't think this is a case of "the card is much cooler because the fan is working much harder" so I think the noise can be totally solved with a better fan part.


----------



## AusWolf (May 6, 2022)

Trov said:


> Update on my Single Slot RX 6400 for Lenovo Thinkcentre Tiny analysis now that my XFX card arrived:
> 
> The card didn't immediately fit into my Thinkstation P330 Tiny; I had to dremel a little bit off of the front frame section that the front wifi antenna sits on. Alternatively this metal frame piece can simply be removed entirely with just 1 screw. After that the XFX RX 6400 fits inside. I dont think the Sapphire model, being about 1 or 2 cm longer has any chance of fitting inside the Lenovo Tiny. The PowerColor model appears to be the same length as the XFX version so should also fit.
> 
> ...


That's sad.  My Sapphire is very quiet; I can barely hear it even when the fan reaches 3500 rpm at 66 °C GPU temp. It's weird, because it looks like the same fan that every other low-profile 6400 uses.



Trov said:


> Time Spy will not run for some reason as the benchmarks close as soon as loading is complete. Wonder if either a recent TimeSpy update or AMD Drivers broke it. Will try again in a week.


Try updating your motherboard BIOS. My 6400 displayed a weird, distorted green boot image on my 4K TV (but only on that - every other TV or monitor was fine) until I updated the BIOS on my TUF A520M.



Trov said:


> Since Time Spy isn't working I haven't done a whole lot of performance testing yet. I can say though that at PCIe 3.0, FurMark 1080p is about 5fps faster on the RX 6400 vs the Quadro T1000 but a few frames slower at 4K than the T1000.
> 
> Unless I can come up with a satisfactory fan solution I don't think I will want to keep the card, unfortunately. I was hoping for something a little quieter than the T1000, but it's much louder, despite running much cooler. Since the RPM and fan size are more or less equal between the two I don't think this is a case of "the card is much cooler because the fan is working much harder" so I think the noise can be totally solved with a better fan part.


I'm not sure if that's your card's fault, or XFX in general, or if the Sapphire's cooler is really that much better. It does have a flat heatpipe running through the length of it, which I haven't seen on other models.



The red spirit said:


> Still the best thing about Polaris cards is how long lasting they are. They were launched like 5-6 years ago, but they are still fast, fully supported, get updates, got Finewine, have enough VRAM (it didn't end well for GTX 970 owners with 3.5 GB or GTX 1060 3GB owners). Actual functionality of cards improved too.


You can say the same about the 1060 6 GB, 1070 (Ti) and 1080 as well. Other than that, I agree. I personally never thought much of the RX 500 series, but looking at their popularity, I have to give them some credit.


----------



## Trov (May 6, 2022)

AusWolf said:


> I'm not sure if that's your card's fault, or XFX in general, or if the Sapphire's cooler is really that much better. It does have a flat heatpipe running through the length of it, which I haven't seen on other models.


The XFX heatsink also has a heatpipe and is made of skived fins. The Sapphire heatsink is probably a couple of cm longer, but I doubt that makes a massive difference.


----------



## The red spirit (May 6, 2022)

AusWolf said:


> You can say the same about the 1060 6 GB, 1070 (Ti) and 1080 as well. Other than that, I agree. I personally never thought much of the RX 500 series, but looking at their popularity, I have to give them some credit.


I wouldn't say the 1060 aged as well. It was initially a bit faster than the RX 580 8GB, but years later the RX 580 beats it, not to mention that the RX 580 loves Vulkan, where it beats the 1060. And if you need to run any computational software, Polaris cards are a lot faster than Pascal cards. It's so ridiculous that in floating point (double precision) operations, the RX 560 beats the GTX 1060 and the RX 580 beats the GTX 1080 Ti. I managed to take advantage of that in BOINC, but yeah, I know this isn't a particularly interesting thing for the average consumer. Even in single-precision FP tasks, Polaris cards beat Pascal cards significantly. In double-precision floating point compute, the RX 580 is still faster than the RTX 3070 Ti. That's nuts. The old Vega 64 is still faster than the RTX 3090 Ti. So if you needed an FP64 compute card, Polaris was insanely good. Today, you would need to buy an RTX A6000 or Radeon Pro W6800 to beat the Vega 64 in FP64 compute. Just to (kind of) match the Vega 64, the minimum spec would be an RTX A4000 or Radeon Pro W6600. If you wanted to help C19 vaccine research via BOINC or Folding@Home, you basically had to have a GCN-based card. 
You know what? Polaris and Vega cards strongly remind me of Tesla-arch cards. They guzzle power like no tomorrow, but in terms of processing power the architecture was very well balanced and just lasted a hella long time. A Tesla card like the 8800 GTS lasted at least a good 6 years and was bearable for 8. There just wasn't any architectural flaw (like in Kepler) to make them useless way before their time. GCN is AMD's equivalent of the Tesla arch, but more modern and still relevant today. I honestly couldn't say the same about Terascale or RDNA; GCN was special. Even on the nVidia side, Fermi and Kepler felt like quite disposable architectures, and Turing 1 and Maxwell just weren't great either. Tesla is still the best arch, with Pascal a close second.
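Those seemingly odd FP64 rankings fall out of simple arithmetic once you know each architecture's FP64:FP32 ratio. A rough sketch, where the peak-FP32 figures and ratios are ballpark spec-sheet numbers (not measurements), so treat the results as estimates only:

```python
# Rough peak-FP64 estimates: FP32 TFLOPS x FP64 ratio.
# All inputs are approximate spec-sheet figures, not benchmarks.
cards = {
    "RX 580":      (6.17,  1 / 16),  # Polaris: FP64 at 1/16 of FP32
    "GTX 1080 Ti": (11.34, 1 / 32),  # Pascal consumer: 1/32
    "Vega 64":     (12.66, 1 / 16),  # Vega: 1/16
    "RTX 3090 Ti": (40.0,  1 / 64),  # Ampere consumer: 1/64
}

# Estimated FP64 throughput per card, in TFLOPS
fp64 = {name: fp32 * ratio for name, (fp32, ratio) in cards.items()}

for name, tflops in sorted(fp64.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} ~{tflops * 1000:.0f} GFLOPS FP64")
```

With these numbers the cheaper cards with the "friendlier" ratio really do come out ahead: the RX 580's estimate lands above the GTX 1080 Ti's, and Vega 64's above the RTX 3090 Ti's.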


----------



## AusWolf (May 6, 2022)

The red spirit said:


> I wouldn't say that 1060 aged as well. It was initially a bit faster than RX 580 8GB, but years later, RX 580 beats it, not to mention that RX 580 loves vulkan, where it beats 1060. And if you need to run any computational software, then Polaris cards are a lot faster than Pascal cards. It's so ridiculous, that in floating point (double precision) operations, RX 560 beats GTX 1060. RX 580 is beating GTX 1080 Ti. I managed to take advantage of that in BOINC, but yeah I know that this isn't particularly interesting thing for average consumer. Even in single precision FP tasks, Polaris cards beat pascal cards significantly. In double precision floating point compute, RX 580 is still faster than RTX 3070 Ti. That's nuts. The old Vega 64 is still faster than RTX 3090 Ti. So if you need FP64 compute card, Polaris was insanely good. Today, you would need to buy RTX A6000 or Radeon Pro W6800 to beat Vega 64 in FP64 compute. Just to match (kind of Vega 64), minimum spec would be RTX A4000 or Radeon Pro W6600. If you wanted to help C19 vaccine research via BOINC or Folding@Home, you basically had to have GCN based card.
> You know what? Polaris and Vega cards strongly remind me of Tesla arch cards. They guzzle power like no tomorrow, but in terms of processing power architecture was very well balanced and just lasted a hella long time. Tesla card like 8800 GTS lasted at least good 6 years and was bearable for 8 years. There just wasn't any architectural flaw (like in Kepler) to make them useless way before their time. GCN is AMD's equivalent of Tesla arch, but more modern and still relevant today. I honestly couldn't say the same about Terrascale or rDNA, GCN was special. Even on nVidia side Fermi, Kepler felt quite disposable architecture and then Turing 1 and Maxwell just weren't great either. Tesla is still the best arch with Pascal being close second.


I don't know much about compute (and I don't care, either), so I take your word for it.

As for which architecture is better, I somewhat disagree. Terascale was awesome at the time of release, but newer games killed it. GCN was also great.

On Nvidia's side, I agree with what you said about Kepler and Fermi: they were hot, hungry, but otherwise kind of meh. Maxwell was a huge improvement on them, just like Pascal was on Maxwell. They're both great architectures to this day, imo. Turing never aimed for more performance over Pascal; it only introduced RT and DLSS. One could call it "Pascal RT" as well. For that, I can't say it's bad, because it's not. Just a bit different. Ampere, on the other hand, is nothing more than a dirty trick. Nvidia added FP32 capability to the INT32 cores so they could say they "doubled" the CUDA cores without actually doubling them. Performance per watt stayed the same, though, so technically it's "Pascal RT v2.0".



Trov said:


> The XFX heatsink also has a heatpipe and is made out of skived fins. The sapphire heatsink is probably a couple cm longer but I doubt that makes a massive difference.


Probably not. Although it's strange to see that the Sapphire runs as cool as the full-height MSI card in the review. I have no issues with noise, either.


----------



## The red spirit (May 6, 2022)

AusWolf said:


> I don't know much about compute (and I don't care, either), so I take your word for it.


Have you ever tried Folding@Home or BOINC?



AusWolf said:


> As for which architecture is better, I somewhat disagree. Terascale was awesome at the time of release, but newer games killed it. GCN was also great.


That's exactly why I don't say that Terascale was good. Perhaps it was an alright arch, but it wasn't as universally good as others like GCN or Tesla, and thus lacked longevity. I have some Terascale cards and you could see how immature the arch was. The first iteration was downright awful, and once it matured, it was already quite old. Plus, there were quite big core design differences between the different Terascale versions too. Those weren't just incremental improvements, but more like complete overhauls of the whole thing. Terascale 1 in its last revision was good, but the second version was eh. Terascale 1, rev 1 was flawed and slow. ATi was also quite behind nV in compute stuff; only the 4000 series and later got good at that. And at that time ATi was awful at writing drivers; there were Omega drivers that managed to extract 20% more performance. 



AusWolf said:


> On nvidia's side, I agree with what you said about Kepler and Fermi - they were hot, hungry, but otherwise kind of meh. Maxwell was a huge improvement on them, just like Pascal was on Maxwell. They're both great architectures up to this day, imo. Turing never aimed for more performance over Pascal. It only introduced RT and DLSS. One could call it "Pascal RT" as well. For this, I cannot say that it's bad because it's not. Just a bit different. Ampere on the other hand, is nothing more than a dirty trick. Nvidia added FP32 capability to the INT32 cores to say that they "doubled" the CUDA cores without actually doubling them. Performance per watt stayed the same, though, so technically, it's "Pascal RT v2.0".


Wait, wasn't it Turing that "doubled" cores? Either way, with higher driver overhead and lower 1080p performance, it was a bit of a fail. Turing 1 couldn't ray trace fast, DLSS 1 was clearly flawed and unpleasant to use, and the RTX 2080 Ti was the first consumer card to cost more than 1k dollars. It was just rough. Ampere was needed to fix it, but it still didn't solve the price, RT or DLSS problems, just made them less bad. Ampere is still rough and a disposable arch. The only reason for it to exist is to be a step towards something better.


----------



## catulitechup (May 6, 2022)

thanks to @W1zzard, here's the official PCIe scaling test:



> AMD Radeon RX 6400 Tested on PCI-Express 3.0
> 
> 
> AMD is using a fairly narrow PCIe x4 interface on their Radeon RX 6400 GPU. We're taking AMD's new budget offering for a spin in a PCI-Express 3.0 configuration to determine how big the performance hit will be when running on an older system.
> ...


----------



## Valantar (May 6, 2022)

The red spirit said:


> I wouldn't say that 1060 aged as well. It was initially a bit faster than RX 580 8GB, but years later, RX 580 beats it, not to mention that RX 580 loves vulkan, where it beats 1060. And if you need to run any computational software, then Polaris cards are a lot faster than Pascal cards. It's so ridiculous, that in floating point (double precision) operations, RX 560 beats GTX 1060. RX 580 is beating GTX 1080 Ti. I managed to take advantage of that in BOINC, but yeah I know that this isn't particularly interesting thing for average consumer. Even in single precision FP tasks, Polaris cards beat pascal cards significantly. In double precision floating point compute, RX 580 is still faster than RTX 3070 Ti. That's nuts. The old Vega 64 is still faster than RTX 3090 Ti. So if you need FP64 compute card, Polaris was insanely good. Today, you would need to buy RTX A6000 or Radeon Pro W6800 to beat Vega 64 in FP64 compute. Just to match (kind of Vega 64), minimum spec would be RTX A4000 or Radeon Pro W6600. If you wanted to help C19 vaccine research via BOINC or Folding@Home, you basically had to have GCN based card.
> You know what? Polaris and Vega cards strongly remind me of Tesla arch cards. They guzzle power like no tomorrow, but in terms of processing power architecture was very well balanced and just lasted a hella long time. Tesla card like 8800 GTS lasted at least good 6 years and was bearable for 8 years. There just wasn't any architectural flaw (like in Kepler) to make them useless way before their time. GCN is AMD's equivalent of Tesla arch, but more modern and still relevant today. I honestly couldn't say the same about Terrascale or rDNA, GCN was special. Even on nVidia side Fermi, Kepler felt quite disposable architecture and then Turing 1 and Maxwell just weren't great either. Tesla is still the best arch with Pascal being close second.


There's a reason why AMD's CDNA arch is much more of a continuation of GCN than RDNA is: GCN was far better at pure compute than at translating that compute into gaming performance. Of course, it's mostly silly to complain that FP64 performance is poor on cards that have FP64 explicitly left out as a design decision, due to it being entirely unnecessary for their intended purpose. Like it or not, FP64 is almost entirely limited to scientific computing, which is generally not something consumers do. BOINC and the like are exceptions, but they are also projects based precisely on making use of "left over" compute capabilities of PCs - capabilities they no longer have, which raises the question of whether that model is still feasible. Useful or not, spending valuable silicon area on FP64 capabilities that <0.00001% of GPU owners will make use of is just plain wasteful.


----------



## The red spirit (May 6, 2022)

Valantar said:


> There's a reason why AMD's CDNA arch is much more of a continuation of GCN than what RDNA is - GCN was far better at pure compute than at translating that compute into gaming performance. Of course, it's mostly silly to complain that FP64 performance is poor on cards that have FP64 explicitly left out as a design decision due to it being entirely unnecessary for their intended purpose. Like it or not, FP64 is almost entirely limited to scientific computing, which is generally not something consumers do. BOINC and the like are exceptions, but also projects based precisely on making use of "left over" compute capabilities of PCs - capabilities they no longer have, which thus begs the question of whether that model is no longer feasible. Useful or not, spending valuable silicon area on FP64 capabilities that <0.00001% of GPU owners will make use of is just plain wasteful.


Sort of. I still remember reading that FP performance matters in some games. The nVidia FX 5000 series had fucked up floating point hardware, and thus performance in games was disappointing. But those are single-precision floating point operations. In the past, FP64 performance used to be intentionally crippled on consumer cards, and you could flash a Quadro or FirePro vBIOS to regain it. Anyway, here's a more modern explanation of when and where FP operations are used:









FP64 performance probably depends mostly on how the existing FP32 units are interconnected. AFAIK Kerbal Space Program uses some FP64.


----------



## AusWolf (May 7, 2022)

The red spirit said:


> Have you ever tried Folding@Home or BOINC?


I have, but I can't see it being a reason for anyone to buy a graphics card. That is, you don't buy a compute capable graphics card for F@H then game on it. You buy a gaming card and then run F@H when you're not playing anything.



The red spirit said:


> That's exactly why I don't say that Terascale was good. Perhaps it was alright arch, but wasn't as universally good as others like GCN or Tesla, thus lacked in longevity. I have some Terascale cards and you could see how immature arch was. First iteration was downright awful and once it matured, it was already quite old. Plus, there were quite big core design differences between different Terascale versions too. And those weren't just incremental improvements, but more like complete overhauls of the whole thing. Terascale 1 in last revision was good, but second version was eh. Terascale 1, rev 1 was  flawed and slow. ATi was also quite behind nV in compute stuff, only 4000 series got good at that and later. And at that time ATi was awful at writing drivers, there were Omega drivers that managed to extract 20% more performance.


We had some Terascale 2 cards in my family that were good (until The Witcher 3 came out). I don't really have much experience with Terascale 1, so I'll take your word for it.



The red spirit said:


> Wait, wasn't it Turing that "doubled" cores?


Nope, it was Ampere. Turing has separate INT32 and FP32 cores, a pair of which counts as one CUDA core. For example, my 2070 has 2304 INT32 and 2304 FP32 cores. If the INT32 cores could also do FP32, you could say it has 4608 CUDA cores. A 3060 has 3584 CUDA cores, but only half of them are "full" FP32 cores. The other half are INT32/FP32 multifunctional units.
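For what it's worth, the two counting schemes above can be written out as simple arithmetic. The unit counts are from public spec sheets; the bookkeeping is just one reading of how Nvidia counts, so treat it as a sketch:

```python
# Turing (e.g. RTX 2070): separate INT32 and FP32 pipes,
# and only the FP32 units are counted as "CUDA cores".
turing_2070_fp32 = 2304
turing_2070_int32 = 2304
# If the INT32 pipes counted too, the 2070 would advertise:
hypothetical_2070_count = turing_2070_fp32 + turing_2070_int32  # 4608

# Ampere (e.g. RTX 3060): one datapath is FP32-only, the other can do
# INT32 *or* FP32, and both are counted - "doubling" the total on paper.
ampere_3060_total = 3584
ampere_3060_dedicated_fp32 = ampere_3060_total // 2  # 1792 FP32-only units
ampere_3060_shared = ampere_3060_total // 2          # 1792 INT32/FP32 units

print(hypothetical_2070_count, ampere_3060_dedicated_fp32)
```

The point of the comparison: under Ampere's counting rules, a Turing card would report roughly twice its advertised core count, which is why the two generations' "CUDA core" numbers aren't directly comparable.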



The red spirit said:


> Either way, higher driver overhead and lower 1080p performance, was a bit of fail. Turing 1 couldn't ray trace fast, DLSS 1 was clearly flawed and unpleasant to use. And RTX 2080 Ti was the first consumer card to cost more than 1k dollars. It was just rough. Ampere was needed to fix it, but it still didn't solve price, RT or DLSS problems, just made them not as bad. Ampere is still rough and is disposable arch.


I tend to disagree. Driver overhead only affects you on a slow CPU, and Turing's ray tracing is about the same as Ampere's. Sure, Ampere does the same work with fewer RT cores, but since the performance per watt of the whole architecture is the same (for example in a 1080-2070-3060 comparison), it doesn't really matter.



The red spirit said:


> Ampere is still rough and is disposable arch. The only reason for it to exist is to be a step towards something better.


This I agree with. I'd much rather call it Turing Refresh and the chips TU2xx, to be honest.



The red spirit said:


> Sort of, I still remember reading that FP performance matters in some games. nVidia FX 5000 series had fucked up floating point stuff and thus performance in games was disappointing. But those are single precision floating point operations. In the past, FP64 performance used to be intentionally crippled on consumer cards and you could flash Quadro or FirePro vBIOS to regain it back. Anyway, here's a more modern explanation of when and where FP operations are used:
> 
> 
> 
> ...


The failure of the FX series was a much more complex story.











catulitechup said:


> thanks to @W1zzard are official pci-e scaling test


It's strange that it doesn't add up with other people's findings. 










Here, God of War plays the same on PCI-e 3.0. On the other hand, Metro: Exodus runs fine in W1zzard's review, but it's absolutely unplayable on my PC.


----------



## The red spirit (May 7, 2022)

AusWolf said:


> I have, but I can't see it being a reason for anyone to buy a graphics card. That is, you don't buy a compute capable graphics card for F@H then game on it. You buy a gaming card and then run F@H when you're not playing anything.


But when you have a choice between a GTX 1060 and an RX 580, those things count. The RX 580 was often significantly cheaper (by as much as 30%), a bit faster than the GTX 1060 and more tweakable, with the only disadvantages being higher power consumption and lies about VP9 decoding support. The RX 570 often went for crazy low prices; I saw a 4GB Red Devil RX 570 going for 150 EUR when a low-end GTX 1060 6GB was around 280 EUR. My own RX 580 was 209 EUR. With so many advantages over the 1060, it was kind of a no-brainer deal, until I realized how loud my RX 580 model was due to a fucked-up vBIOS. Then I really started to hate it. In games it was screaming at 3,000-3,200 RPM despite a rather low 66°C temperature. It wasn't just a bit loud; it completely overpowered the speakers. Luckily I discovered vBIOS mods, so I "fixed" it, but I vowed to never again buy low-end models, or at least not ones without reviews. I'm not RGinHD, so I can give up on some "great" deals. I thought RGinHD was quite ignorant about noise, until Radeon showed him who's the boss:









For real, I remember those R9 290, R9 290X, R9 390 and R9 390X cards being stupidly loud. They looked good, but they made you deaf. I honestly can't begin to imagine what torture it was to own an R9 295X2: two R9 290s on the same PCB with a total TDP of 500 watts and just a tiny, whiny 120mm AIO. It should have been named S.T.D. (Special Torture Device, forgive the pun), the AMD R9 S.T.D. XD




AusWolf said:


> We had some Terascale 2 cards in my family that were good (until The Witcher 3 came out). I don't really have much experience with Terascale 1, so I'll take your word for it.


I messed a bit with Terascale 1 and 2. I had a V3750 and a V5800, both woefully slow, but I was curious about Pro cards and wanted to find out why both of them had very obviously superior 3D depth rendering compared to my GTX 650 Ti. I mainly said that Terascale 1 rev 1 was poor due to the Radeon 3870's failure to do anything meaningful in the market. It was AMD's first fully programmable pipeline architecture (actually their very first was the ATi Xenos Xbox GPU, but that was only partly programmable with a half-fixed pipeline). AMD promised a lot and said how fucking lit it would be, until the 3870 launched and the performance claims went down the toilet. The programmability existed, but to me at least it just didn't change games as much as AMD said it would. It took nV doing the same and launching CUDA to make it somewhat different. Some people at the time were a bit cross about the poor performance and said that ATi would have been better off with a faster fixed-pipeline GPU, but frankly the whole 2000 series also stank and the X1000 series wasn't that great either. Despite the 3000 series actually being a big improvement over past failures, it went up against nVidia's Tesla, which was superior:










AMD later launched the Radeon HD 3870 X2, but that didn't really matter much in the market, and soon nVidia launched their own 9800 GX2. To be fair, nVidia was in a bad place too. They weren't able to deliver Fermi fast enough and had to rebrand the 8000 series several times (9000, 200 series, and for low-end GPUs much longer), and all the while their dies just grew, efficiency tanked, prices rose and they ran hot, so hot that their solder melted. The last Tesla cards had comically large dies. Once they managed to make Fermi yields good enough, those cards ran stupidly hot, also had big dies and were a complete mess. It took the rather uneventful Kepler to fix their shit, and then they started to revert back a bit with the 600 series. The 700 series was a good hack for a while, until the cards aged awfully due to deep architectural flaws, which made supporting them at full performance a nightmare. They finally came clean with the 900 series, until they lied about the 970's VRAM. And Pascal finally solved every known flaw. Until Turing launched. That's quite a long history of incompetence, but what pushed nVidia forward were lots of gimmicks (3D Surround, GameWorks, PhysX), a monopolistic attitude (PhysX, tessellation sabotage) and tons of PR ("the way it's meant to be played"). Meanwhile AMD/ATi often had better hardware, but the drivers were poor, which was big anti-PR in reviews. Despite those being fixed later, people already had nVidia cards. Nobody cared about Finewine. 




AusWolf said:


> The failure of the FX series was a much more complex story.



I'm aware of that. nV also lied about actual anti-aliasing, about FP precision, about shader capabilities. It was just a really bad shit show. 3DMark finally had to kick nV out of their result charts because nV's drivers lied about the actual level of AA applied. There was a huge architectural-level penalty for actually applying AA on those cards. Then came the bad PR about power usage and the infamous OG leafblower. Some people say the failure was mostly due to former 3dfx employees intentionally sabotaging nVidia, but my own take is that nVidia just failed to integrate such a huge amount of employees properly and pushed them to make the FX 5000 series too fast, so they cut corners. The cooler failure, however, was just the computer market's general inexperience with higher-wattage coolers. There just weren't any high-performance coolers before, so there was a lot of experimenting. Most coolers back then sucked. Some failed awfully to perform, some were too expensive (all-copper coolers), some were loud (the AMD stock cooler, the CM Jet) and some were just awful engineering (the fucking globe cooler, yep, that existed; it literally looked like a globe and I have no idea why it was ever supposed to work well). That was an awful time for cooling products; you can probably find some of those thermal horrors in Maximum PC magazines at the Internet Archive. There was one aluminum cooler literally made from flat bars and screws at the top with a fan attached. It's like those companies never had any thermal engineers, and many products were made just for looks, although taste in the early 2000s was questionable (a Thermaltake Xaser with a shit ton of cold cathode tubes, UV fans and tons of junk in the 5.25" bays was the tits; bonus points for deafening noise from TT Volcano 80mm fans, spider and skull decals, a UV DFI board and a BD-ROM drive even without an HDCP-capable GPU in the machine). 
But on the other hand, today's computer hardware is very bland: full of black shrouds, loads of tasteless RGB puke, and either huge hunks of aluminum or AIOs. Now it's not just tasteless but somehow boring too. I want my Palit mecha-frog on graphics cards back! And copper everywhere! XD


----------



## AusWolf (May 7, 2022)

This is hugely off-topic now, so I'll only give a short answer to some points, and leave it at that. 



The red spirit said:


> But when you have a choice of GTX 1060 and RX 580, those things count.


For gamers, they don't. If Card A is 10% faster in games, but Card B is 100% faster in compute, I'll still choose Card A.



The red spirit said:


> Except that RX 580 was often significantly cheaper (as much as 30%), a bit faster than GTX 1060, more tweakable with only disadvantages being higher power consumption and lies about VP9 decoding support. RX 570 was often going for crazy low prices, I saw 4GB Red Devil RX 570 going for 150 EUR, when low end GTX 1060 6GB was at around 280 EUR.


What's a "low end 1060 6 GB"? Personally, my vote is on the 1060 6 GB. Based on TechPowerUp! review data, it's performance is on par with the 580, but has a significantly lower TDP, which results in smaller and quieter coolers, and smaller form factors where AMD wasn't competitive at that time. That's why the 570/580 pair became so cheap after a while.


----------



## The red spirit (May 7, 2022)

AusWolf said:


> What's a "low end 1060 6 GB"?


Something like the GTX 1060 Ventus compared to the Gaming X, or the Palit Dual compared to the Palit JetStream. Basically models with low-end coolers, VRMs and shrouds that are often loud, have poor cooling or bad power delivery. 

Something as pathetic as this:








> ASUS TUF GTX 1660 GAMING Specs (www.techpowerup.com): NVIDIA TU116, 1785 MHz, 1408 Cores, 88 TMUs, 48 ROPs, 6144 MB GDDR5, 2001 MHz, 192 bit
				




Yep, it's a recycled Intel aluminum heatsink with two 80 mm fans (instead of the standard 92 mm), an exhaust-blocking shroud, no memory cooling and likely very low-end, hot VRMs. Unsurprisingly, the card ran hot, failed to reach typical GTX 1660 performance, and I don't think it will last long with such awful cooling.

My own RX 580 doesn't look low end:








> PowerColor Red Dragon RX 580 OC V2 Specs (www.techpowerup.com): AMD Polaris 20, 1350 MHz, 2304 Cores, 144 TMUs, 32 ROPs, 8192 MB GDDR5, 2000 MHz, 256 bit
				




But PowerColor recycled the same heatsink from the RX 480/RX 470 (https://www.techpowerup.com/gpu-specs/powercolor-red-dragon-rx-470.b3753), and even on those cards it was quite loud and ineffective. It was even recycled for the RX 590. Yikes.




AusWolf said:


> Personally, my vote is on the 1060 6 GB. Based on TechPowerUp! review data, it's performance is on par with the 580, but has a significantly lower TDP, which results in smaller and quieter coolers, and smaller form factors where AMD wasn't competitive at that time. That's why the 570/580 pair became so cheap after a while.


That is, until Finewine kicked in, and now the RX 580 is faster than the GTX 1060. But in retrospect, maybe the GTX 1060 was the nicer of the two. At this point it's just whichever you personally prefer; they each have their advantages. Except the RX 570: it kind of solves the power usage issue, performs close to the 1060 6GB and was a lot cheaper. Meanwhile, a product like the RX 590 really had no place. It was more expensive than the GTX 1060, guzzled power and was just a better-yielding RX 580. It shouldn't have existed at all. The Vega cards were also inexcusable disasters due to their insane power draw and heat output; they were sold at huge discounts quite soon. And then there was the peak of Vega, the Radeon VII. That was a total abortion with bad drivers, terrible support from day one, a high price and no competitiveness at all. AMD finally fixed Vega, but never asked whether the whole idea of a Vega-like card was sound.


----------



## Valantar (May 7, 2022)

The red spirit said:


> Sort of, I still remember reading that FP performance matters in some games. nVidia FX 5000 series had fucked up floating point stuff and thus performance in games was disappointing. But those are single precision floating point operations. In the past, FP64 performance used to be intentionally crippled on consumer cards and you could flash Quadro or FirePro vBIOS to regain it back. Anyway, here's a more modern explanation of when and where FP operations are used:
> 
> 
> 
> ...


You're missing the point here, or misrepresenting what I said. FP performance is hugely important - FP32 is the basis of modern GPU operations in a broad sense. That's also what your video is about - and what all current gaming GPU architectures prioritize. But I've never said anything about FP performance. What we were talking about was FP_64_ performance, double precision, which is utterly meaningless for consumers. Sure, it's possible Kerbal Space Program makes some use of it - but that is also essentially a scientific simulation system masquerading as a game. The number of games or other consumer applications making meaningful use of FP64 is _tiny_, and has been steadily shrinking over the past decade - if anything, there's a move towards lower precision, with FP16 and INT8 growing in use across many fields, including some gaming uses.

As for saying it was "intentionally crippled" on consumer cards, I think that's a completely unreasonable point of view. It was (essentially) disabled, as it had no use for consumers, and not doing so would have caused enterprise/datacenter users to gobble up (much) cheaper consumer cards. This would not only have screwed consumers out of their GPUs, but _seriously_ hurt GPU makers: none of them were even close to the size where they could afford developing separate architectures for compute and gaming at that point, and it would have given enterprise customers a way to avoid paying for the fancy drivers and support that come with a Quadro/FirePro (drivers which many of the larger entities at the time could most likely have developed on their own for their uses, given that end users were making custom GPU drivers for gaming). That double whammy of the loss of GPU access for gamers and the loss of enterprise revenue for GPU makers could probably have killed this industry. Limiting consumer FP64 access was a necessity in light of that. And, crucially, calling it "intentionally crippled" strongly implies that the disabled features were useful or desirable for consumers, which ... well, they weren't, outside of a few very narrow edge cases. FP64 has never been important for consumer usage. It was mentioned as an architectural advantage for some architectures in reviews, but it was never an advantage with any meaningful use cases. It was just _there_. And that's also where using those capabilities for BOINC and the like came from: why not, when people just had the hardware sitting around? But it's ultimately no loss to the good of the world that these things are disappearing. The entities running BOINC projects likely have access to compute power of their own today that _vastly_ outstrips what BOINC contributed 5-10 years ago. 
You could of course argue that if current GPUs had better FP64, they could also contribute vastly more than 5-10 years ago, but such an argument overlooks both the _extremely_ low adoption rate (BOINC currently counts 34k volunteers with 121k computers; F@H counts ~35k GPUs and 73k CPUs) and the inherent cost of including this hardware (increased silicon area would mean larger die sizes and lower yields, more embodied energy in these larger dies, and more energy and materials spent producing the same number of GPUs). Even if adding _great_ FP64 capabilities to current consumer GPUs cost, say, a 1% increase in die area, would that cost-benefit analysis add up? If a few tens of thousands of users actually made beneficial use of that added 1% of area, would that make sense across the millions of GPUs made each generation? IMO, not even close.

The historical disabling of FP64 in consumer GPUs has only ever been a problem to those wanting a Quadro/Firepro but not willing/able to pay those prices; the removal of these features from consumer architectures makes them more efficient and fit for purpose. The former was a necessity, the latter is a significant net benefit.


----------



## The red spirit (May 7, 2022)

Valantar said:


> You're missing the point here, or misrepresenting what I said. FP performance is hugely important - FP32 is the basis of modern GPU operations in a broad sense. That's also what your video is about - and what all current gaming GPU architectures prioritize. But I've never said anything about FP performance. What we were talking about was FP_64_ performance, double precision, which is utterly meaningless for consumers. Sure, it's possible Kerbal Space Program makes some use of it - but that is also essentially a scientific simuation system masquerading as a game. The number of games or other consumer applications making meaningful use of FP64 is _tiny_, and has been steadily shrinking over the past decade - if anything, there's a move towards lower precision, with FP16 and INT8 growing in use across many fields, including some gaming uses.


Don't you think that it's due to its crippling?




Valantar said:


> As for saying it was "intentionally crippled" on consumer cards, I think that's a completely unreasonable point of view. It was (essentially) disabled, as it had no use for consumers, and not doing so would have caused enterprise/datacenter users to gobble up (much) cheaper consumer cards.


And none of that happened. nVidia launched Tesla 2 cards with 1/8th of FP32 performance, datacenters didn't give a shit. ATi released Terascale cards with 1/5th of FP32 performance and datacenters didn't give a shit. Guys that used BOINC or Folding@Home bought quite a few Radeons, and later a simple enterprise vBIOS flash gave you that "nerfed" performance back. I don't think that enterprise or big data care about consumer cards either way. nVidia soon took away more and more FP64 performance and later sold less-disabled cards as Titans, until they locked those down too. AMD just screwed consumers less with a lower nerfing multiplier.




Valantar said:


> This would not only have screwed consumers out of their GPUs, but _seriously_ hurt GPU makers - none of them were even close to the size where they could afford developing separate architectures for compute and gaming at this point, and giving enterprise customers a way to avoid paying for the fancy drivers and support that comes with a Quadro/FirePro (which many of the larger entities at the time could most likely have developed on their own for their uses, given that end users were making custom GPU drivers for gaming). That double whammy of the loss of GPU access for gamers and loss of enterprise revenues for GPU makers could probably have killed this industry. Limiting consumer FP64 access was a necessity in light of that.


What if you are wrong?



Valantar said:


> And, crucially, calling it "intentionally crippled" strongly implied that the features disabled were useful or desirable for consumers, which ... well, they weren't, outside of a few very narrow edge cases. FP64 has never been important for consumer usage.


FP64 was a super new thing too, and at first the lock was just a simple vBIOS flag: you could reflash your card and get the performance back. Therefore, crippled. Very similar to automakers putting in heated-seat hardware but charging a fee to activate it. Same shit, different industry.




Valantar said:


> It was mentioned as an architectural advantage for some architectures in reviews, but that was never an advantage with any meaningful use cases. It was just _there_. And that's also where using those capabilities for BOINC and the like came from - why not, when people just had the hardware sitting around? But it's ultimately no loss to the good of the world that these things are disappearing.


So crippling volunteer charity project is good, right?




Valantar said:


> The entities running BOINC projects likely have access to compute power of their own today that _vastly_ outstrips what BOINC contributed 5-10 years ago. You could of course argue that if current GPUs had better FP64, they could also contribute vastly more than 5-10 years ago, but such an argument overlooks both the _extremely _low adoption rate (BOINC currently counts 34k volunteers with 121k computers; F@H counts ~35k GPUs and 73k CPUs) and the inherent cost to including this hardware (increased silicon area would mean larger die sizes and lower yields, more embodied energy in these larger dice, more energy and materials spent producing the same number of GPUs). Even if adding _great_ FP64 capabilities to current consumer GPUs cost, say, a 1% increase in die area, would that cost-benefit analysis add up? If a few tens of thousands of users actually made beneficial use of that added 1% of area, would that make sense for the millions of GPUs made each generation? IMO, not even close.


Sure you can say that, but when C19 started, Folding@Home became the most powerful supercomputer on Earth, and nVidia was giving them Quadros for free. Such compute power isn't exactly affordable for researchers. The increase in die size for those features is negligible, performance is left on the table, and the only ones benefitting from the cut are nV and AMD. Considering that they gave as much FP64 as they could in their older cards and still made money from that, I fail to see how it would ruin them.




Valantar said:


> The historical disabling of FP64 in consumer GPUs has only ever been a problem to those wanting a Quadro/Firepro but not willing/able to pay those prices; the removal of these features from consumer architectures makes them more efficient and fit for purpose. The former was a necessity, the latter is a significant net benefit.


No it doesn't. Terascale is a good example of that. It beat nVidia on price, efficiency and yields with the HD 4870, and nVidia had no answer to it. It only cost around 200 USD and beat nVidia's 500 USD equivalents. nV and AMD just later nicked that "feature" to give you more reasons to buy their FirePros and Quadros, because otherwise those cards were just downclocked GeForces or Radeons.


----------



## Valantar (May 8, 2022)

The red spirit said:


> Don't you think that it's due to its crippling?


No, I think it's due to games and other consumer applications having no real use for calculations with this degree of precision. It just isn't necessary, which makes adopting it wasteful on so many levels, from development to execution. Hence why nobody did so! If there were common or relevant consumer scenarios where FP64 was a noticeable advantage, it would have been adopted. It hasn't been, despite accelerated FP64 being available on every single consumer dGPU for a decade - just at considerably lower rates than FP32 on most of them.
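For anyone wondering what "this degree of precision" means concretely: a quick illustrative sketch in plain Python, simulating FP32 by round-tripping a value through a 32-bit float (Python's own `float` is FP64). FP32 carries roughly 7 significant decimal digits, FP64 roughly 16.

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (FP64) to the nearest FP32 value and back.
    return struct.unpack("f", struct.pack("f", x))[0]

third64 = 1.0 / 3.0
third32 = to_f32(third64)

print(f"{third64:.17f}")  # 0.33333333333333331 (~16 good digits)
print(f"{third32:.17f}")  # 0.33333334326744080 (~7 good digits)

# Rounding 1/3 to FP32 leaves an error of roughly 1e-8;
# the FP64 representation is accurate to roughly 1e-17.
```

For typical game math (colours, positions within a scene, lighting), that FP32 error is already far below anything visible, which is why the extra FP64 digits go unused.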


The red spirit said:


> And none of that happened. nVidia launched Tesla 2 cards with 1 8th of FP32 performance, datacenters didn't give a shit. ATi released Terascale cards with 1 5th of FP32 performance and datacenters didn't give a shit. Guys that used BOINC or Folding@Home bought quite a bit Radeons and later simple enterprise card flash gave you that "nerfed" performance back. I don't think that enterprise or big data care about consumer cards either way. nVidia soon took away more and more FP64 performance and later sold less disabled cards as Titans, until they locked them down too. AMD just screwed consumers less with lower nerfing multiplier.


None of that happened, in part because there were no widely available high-FP64 cards outside of a few generations, and in part because FP64 is relatively niche _even in HPC and enterprise_. So, limiting FP64 on certain Tesla models was fine _because the people needing FP64 knew not to buy those_. But if you're kitting out a supercomputer or HPC cluster on a medium budget - not one of those high profile ones, but at a university or the like - don't you think a lot of those people would have jumped at the opportunity to get the same performance at a quarter of the price or less? Because they would have - and they would have gobbled up GPUs by the thousands, if not more.


The red spirit said:


> What if you are wrong?


Ah, yes, what if I was wrong, and we would have seen ... a continuation of cards with features nobody makes use of because they have no tangible benefits and a lot of downsides, alongside increased production costs, materials costs, embodied energy and die size? Yes, what a terrible possibility to have avoided.

Seriously, what are the lost benefits you're alluding to here? Games doing calculations at a level of precision that has zero benefit to them? Consumer software adopting FP64, despite not seeing any benefits from it? Those levels of performance simply aren't necessary for these applications.


The red spirit said:


> FP64 was super new thing too and at first was crippled with just simple vBIOS flash. You could just reflash your card and get the performance. Therefore crippled. Very similar to automakers putting heated seat hardware, but charging a fee to activate it. Same shit, different industry.


Except that heated seats benefit essentially everyone (it can get cold no matter where you are on the planet), while high performance FP64 has _no benefits whatsoever_ for the vast majority of users. Which is the point I was making that you seem to have missed entirely: it's unreasonable to say it was "crippled" (besides the point of using dumb ableist language, of course) because the disabled functionality _had no use_. Crippling implies a loss of useful functionality, so when the functionality had no meaningful use, the term loses its meaning. If you break your leg, you lose the ability to walk. This is not equivalent to that - it's equivalent to breaking a vestigial limb that has no use except for some really weird niche thing that essentially nobody does.


The red spirit said:


> So crippling volunteer charity project is good, right?


... Sigh. I mean, you do love your bad-faith arguing tactics, don't you? "You like to kick puppies, right?" Can we please at least try to discuss things the other person has actually said, and not project our own nonsensical straw man figures onto them? That would be nice.

It's not being crippled - the basis upon which it was built, an inherent excess in an otherwise unrelated product, ceased to exist. Whether this is bad or not is a) essentially irrelevant in this context, and b) was addressed in the previous post. When people no longer have excess resources to contribute, the basis for the contribution disappears. That's not necessarily either good or bad, it just is a thing that happened. I mean, personally, I would much rather live in a world where we didn't need volunteer charity projects to contribute to important medical research, but that's how the world works under capitalism. Arguing for a wasteful and unnecessary feature to be included in mass-market products because a tiny fraction of people make use of them for good is ... well, just woefully inefficient. Those people could rather just donate money to the relevant causes instead - that would be far more efficient.


The red spirit said:


> Sure you can say that, but when C19 started, Folding@Home became the most powerful supercomputer on Earth and nVidia was giving them Quadros for free. Such compute power isn't exactly affordable for researchers.


Well aware of that. But ... that rough cost-benefit analysis I did in the post you're responding to addressed this. Specifically. It just wouldn't be worth it. Not whatsoever. The proportion of people contributing to these projects is _so damn small_ that literally any increase in resource expenditures in order to implement it broadly would be a waste.

Also, Nvidia giving away Quadros to the project kind of indicates that ... well, there are far more efficient ways of doing these types of things than moderately wealthy people (i.e. owners of powerful GPUs) "generously" sharing things they aren't making use of anyhow. Not to undermine the contributions of people doing this - it's definitely good that they do - but this is a woefully inefficient way of solving anything at all.

Let's do some simple math:
- In Q3 of 2021, 12.1 million desktop GPUs were shipped. Let's base ourselves on ~10m/quarter, as a nice round number that is around the average per-quarter sales in the graph from that link.
- Assuming _every_ PC contributing to BOINC has two dGPUs (which is _definitely_ more than the real number), the best-case scenario for GPUs contributing to BOINC and F@H combined is ~280k. But let's assume that we're at a historical low point, and let's (almost) quadruple that to a nice, round million GPUs contributing to these projects at all times.
- Of course it is extremely unlikely that those GPUs are all new. Most likely most of them are at least a year old. But, for the sake of simplicity, let's assume that every one of those GPUs is brand-new.
Those GPUs? They represent 10% of the GPU sales _of a single quarter_. Over a year, that's 2.5%.
- A 3-5 year replacement rate for GPUs is far more reasonable. So let's assume every BOINC and F@H GPU was bought during the past three years (which, IMO, is likely an optimistic estimate - 5 years is far more likely, though I'm sure even older hardware is still contributing). In that context, our optimistic estimate of all contributing GPUs represents roughly 0.83% of all GPUs sold globally during those three years.

So, if keeping high FP64 capabilities meant a 1% die size increase, or even a 0.1%, _or even a 0.001% die size increase_, that would be a massive waste, as those die size increases would mean linear or higher-than-linear increases in waste, energy and raw material consumption, which would be _completely wasted_ on the overwhelming majority of GPUs. Over 99% of those extra materials and that extra energy would most likely _never_ provide _any_ benefits to anyone or anything, ever.

It would be cheaper and far less wasteful for both GPU makers and the world in general to just give a few HPC GPUs to these projects than to prioritize FP64 in consumer GPUs.
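For what it's worth, the shares in that back-of-the-envelope estimate are easy to re-run. A rough sketch using the post's own assumptions (~10M desktop GPUs shipped per quarter, a deliberately generous 1M GPUs contributing):

```python
# Same assumptions as the estimate above: ~10M desktop GPUs shipped per
# quarter, and a (very generous) 1M GPUs contributing to BOINC/F@H.
shipped_per_quarter = 10_000_000
contributing = 1_000_000

share_quarter = contributing / shipped_per_quarter          # of one quarter's sales
share_year = contributing / (shipped_per_quarter * 4)       # of one year's sales
share_3_years = contributing / (shipped_per_quarter * 12)   # of three years' sales

print(f"{share_quarter:.1%} / {share_year:.1%} / {share_3_years:.2%}")
# → 10.0% / 2.5% / 0.83%
```

Even with every assumption bent in favour of the distributed-computing projects, the contributing GPUs remain a sub-1% sliver of three years of shipments.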


The red spirit said:


> The increase in die size of those features is negligible, performance is left on table, the only ones benefitting from that is nV and AMD. Considering that they gave as good FP64 as they could before in their older cards and made money from that, I fail to see how it would ruin them.


Negligible per single die? Sure. Did you get some other impression from my given example of 1% in the post you quoted? But what performance is left on the table? None at all in scenarios where FP64 isn't used or doesn't provide any benefit. You keep arguing as if FP64 has practical uses for consumers - it doesn't.

They didn't restrict FP64 in consumer cards early on as the actual real-world uses of FP64 had yet to crystallize, and they thus left the door open for new use cases. As it became apparent that the only real use case for it is in extreme high precision scientific calculations, they started segmenting their product lines accordingly, ensuring that the extra QC, software development, and support required to service highly demanding enterprise and scientific customers were actually paid for.


The red spirit said:


> Not it doesn't. Terascale is a good example of that. Beat nVidia on price, efficiency and yields with HD 4870. nVidia had no answer to that. Only cost around 200 USD and it beat 500 USD nVidia equivalents. nV and AMD just later nicked that "feature" to give more reasons why to buy their FirePros and Quadros, because otherwise those cards were just downclocked GeForces or Radeons.


... that literally does not relate to what I was saying in any way, shape or form. I was talking about the development of GPU architectures over time, not comparisons of individual GPU architectures. Arbitrarily picking a point in time to demonstrate some kind of point is meaningless in this context, as I'm not talking about individual architectures at all, but design choices and factors affecting those, and how this in turn affects products down the line. Removing high-performance FP64 as an important consideration for gaming GPUs frees up developmental resources to focus on other, more useful features.

Consumers don't need, and have never needed, FP64. It is also _extremely_ likely that they never will, as game development, if anything, is moving more towards FP16 and INT8 than towards FP64. Prioritizing FP64 hardware in products where it's actually likely to be made use of is thus the sensible approach. Stripping it down to a functional minimum where it isn't important is also the most sensible approach.


----------



## YESSS (May 8, 2022)

ModEl4 said:


> Lol what a dud, in a PCI-express 3 system should be -10%, so RX570 is going to be around 30% faster in 1080p.
> Why in the early design stages AMD team thought that this is going to be acceptable for the desktop segment I don't know. It would be preferable not to launch a desktop Navi 24 model at all, and have only mobile solutions.
> The performance will not be so easily tracked, no PCI-express 3 deficit , you would have the APU option for encoding and less negativity all around, now even the mobile suffers from all this negativity potentially influencing the future OEM RX6400 based contracts.


To be fair, this stupid PCIe 3.0 limitation is a PITA, indeed! Should be at least x8 or even better x16 for PCIe 3.0 mode. When I built my current rig (Autumn 2021) I just didn't have enough funds to buy an 11th gen. Intel CPU, so I went for the possibly 'cheapest' 10400F. Although my motherboard would support PCIe 4.0, it only does it with an 11th gen. CPU. FYI I had to rebuild my rig because of a fire accident, so this was a fairly expensive process back then for me. Yeah, and in my country Hungary we do have that horrendous 27% of VAT, so I think, my rig will stay that way. Thanks for reading.


----------



## The red spirit (May 8, 2022)

Valantar said:


> None of that happened, in part because there were no widely available high-FP64 cards outside of a few generations, and in part because FP64 is relatively niche _even in HPC and enterprise_. So, limiting FP64 on certain Tesla models was fine _because the people needing FP64 knew not to buy those_. But if you're kitting out a supercomputer or HPC cluster on a medium budget - not one of those high profile ones, but at a university or the like - don't you think a lot of those people would have jumped at the opportunity to get the same performance at a quarter of the price or less? Because they would have - and they would have gobbled up GPUs by the thousands, if not more.


Those people bought PS3s for supercomputers too. They had to do some hacking to make them work. Buying a ready-to-use GPU would be nothing for them. The thing is, they just didn't do that.



Valantar said:


> Ah, yes, what if I was wrong, and we would have seen ... a continuation of cards with features nobody makes use of because they have no tangible benefits and a lot of downsides, alongside increase production costs, materials costs, embodied energy and die size? Yes, what a terrible possibility to have avoided.


That certainly wouldn't be as complex as you say, nor as expensive. Intel literally added AVX-512, which even fewer people used, at an actually big cost in die space; it didn't take off, but it didn't impact the cost of the chips much either.



Valantar said:


> Seriously, what are the lost benefits you're alluding to here? Games doing calculations at a level of precision that has zero benefit to them? Consumer software adoption FP64, despite not seeing any benefits from it? Those levels of performance simply aren't necessary for these applications.


Potential further software development that can utilize FP64. I read more about FP64 and it may be useful in physics simulations, which are becoming more of a thing in games and already sort of were (PhysX). There is also deep learning, which could utilize FP64 capabilities; considering the push for DL cores on nVidia's side, it might get relevant (DL ASICs already support FP64, but I mean integration into the GPU die itself, without an ASIC).
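The classic simulation case where FP32 falls short is large coordinates (the Kerbal Space Program scenario mentioned earlier in the thread): far enough from the origin, FP32 can no longer represent small movements at all. A sketch in plain Python, simulating FP32 via a 32-bit round-trip:

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (FP64) to the nearest FP32 value and back.
    return struct.unpack("f", struct.pack("f", x))[0]

# An object 10,000 km from the origin tries to move by 1 cm.
position = 10_000_000.0   # metres; exactly representable in FP32
step = 0.01               # metres

# In FP32 the spacing between adjacent values near 1e7 is 1.0 m,
# so the 1 cm step is rounded away entirely:
print(to_f32(position + step) == position)   # True - the object never moves

# In FP64 the step survives with precision to spare:
print(position + step == position)           # False
```

This is why engines either keep simulation state in doubles or work around FP32 with tricks like floating origins; it's one of the few consumer-adjacent cases where FP64 genuinely matters.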



Valantar said:


> It's not being crippled - the basis upon which it was based, an inherent excess in an otherwise unrelated product, ceased to exist. Whether this is bad or not is a) essentially irrelevant in this context, and b) was addressed in the previous post. When people no longer have excess resources to contribute, the basis for the contribution disappears. That's not necessarily either good nor bad, it just is a thing that happened. I mean, personally, I would much rather live in a world where we didn't need volunteer charity project to contribute to important medical research, but that's how the world works under capitalism. Arguing for a wasteful and unnecessary feature to be included in mass-market products because a tiny fraction of people make use of them for good is ... well, just woefully inefficient. Those people could rather just donate money to the relevant causes instead - that would be far more efficient.


It's literally the same as crippling hash rate of current cards.





Valantar said:


> Let's do some simple math:
> - In Q3 of 2021, 12.1 million desktop GPUs were shipped. Let's base ourselves on ~10m/quarter, as a nice round number that is around the average per-quarter sales in the graph from that link.
> - Assuming _every_ PC contributing to BOINC has two dGPUs (which is _definitely_ more than the real number), the best-case scenario for GPUs contributing to BOINC and F@H combined are ~280k. But let's assume that we're at a historical low point, and let's (almost) quadruple that to a nice, round million GPUs contributing to these projects at all times.
> - Of course it is extremely unlikely that those GPUs are all new. Most likely most of them are at least a year old. But, for the sake of simplicity, let's assume that every one of those GPUs is brand-new.
> ...


It's not even a die-space issue. It's all about interconnecting already existing FP cores. Perhaps the benefits are small, but if you are already making a GPU, it's stupidly easy to add.
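Whether hardware FP64 really is "just interconnecting FP32 cores" is exactly what's disputed here, but the software analogue of the idea is real: so-called double-double arithmetic builds extra precision out of *pairs* of lower-precision values using error-free transformations. A minimal sketch of Knuth's two-sum in plain Python, where the (sum, error) pair plays the role of the wider number:

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    # Knuth's error-free transformation: returns (s, e) such that
    # s is the rounded sum fl(a + b) and s + e equals a + b exactly.
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    e = (a - a_virtual) + (b - b_virtual)
    return s, e

# 1.0 is completely lost in a plain FP64 add at this magnitude...
print(1e16 + 1.0 == 1e16)   # True
# ...but the (sum, error) pair preserves it exactly:
print(two_sum(1e16, 1.0))   # s == 1e16, e == 1.0
```

Note the cost, though: one add becomes six floating-point operations, which hints at why "wire the FP32 units together" is not free in hardware either.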




Valantar said:


> Negligible per single die? Sure. Did you get some other impresison from my given example of 1% in the post you quoted? But what performance is left on the table? None at all in scenarios where FP64 isn't used or doesn't provide any benefit. You keep arguing as if FP64 has practical uses for consumers - it doesn't.


I'm arguing that it's way lower than 1%, and that it's just wasteful not to do it, as it takes nearly no effort for GPU makers. Perhaps it's not often used or whatever, but it's pointless to cut off too.



Valantar said:


> They didn't restrict FP64 in consumer cards early on as the actual real-world uses of FP64 had yet to crystallize, and they thus left the door open for new use cases. As it became apparent that the only real use case for it is in extreme high precision scientific calculations, they started segmenting their product lines accordingly, ensuring that the extra QC, software development, and support required to service highly demanding enterprise and scientific customers were actually paid for.


And do you honestly think that a few years is enough for a new technology to take off? You could basically argue that CUDA was sort of useless too, using exactly the same argument.




Valantar said:


> ... that literally does not relate to what I was saying in any way, shape or form. I was talking about the development of GPU architectures over time, not comparisons of individual GPU architectures. Arbitrarily picking a point in time to demonstrate some kind of point is meaningless in this context, as I'm not talking about individual architectures at all, but design choices and factors affecting those, and how this in turn affects products down the line. Removing high-performance FP64 as an important consideration for gaming GPUs frees up developmental resources to focus on other, more useful features.


My point was that those cards could be made stupidly cheap, and that FP64 capabilities don't have any significant effect on the price of the end product. Meanwhile their competitor made far worse mistakes, ones that weren't even FP64 related, and those cost them way more.


----------



## AusWolf (May 8, 2022)

The red spirit said:


> Something like GTX 1060 Ventus compared to Gaming X or Palit Dual model compared to Palit Jetstream. Basically low end cooler, VRM, shroud models, that are often loud, have poor cooling or bad power delivery.
> 
> Something as pathetic as this:
> 
> ...


Ah, I see! Personally, I avoid those cards like the plague.



The red spirit said:


> That is until Finewine kicked in and now RX 580 is faster than GTX 1060. But in retrospect, maybe GTX 1060 was nicer of them two. At this point it's just whichever you personally prefer more. They each have their advantages. Except RX 570. It kind of solves power usage issue, performs close to 1060 6GB and was a lot cheaper. Meanwhile, product like RX 590 really had no place. It was more expensive that GTX 1060, guzzled power and was just better yield of RX 580. It just shouldn't have existed at all. Vega cards were also inexcusable disasters due to their insane power draw and heat output. Those were sold at huge discounts quite soon. And the peak of Vega, the Radeon VII. That was a total abortion with bad drivers, terrible support from day one, high price and no competitiveness at all. AMD finally fixed Vega, but AMD never asked if the whole idea of Vega like card was sound.


The problem is that you can't sell a product based on a vague assumption that it will _probably_ be better than the competition 2-3 years down the line, so in my opinion, the "fine wine" argument is moot - it's just more AMD-style marketing BS that they threw in to attract customers because they had nothing else at the time of release.

Anyway, the 580's TDP is 54% higher than that of the 1060, so the fact that newer drivers helped it isn't much of a reason to celebrate. Maybe it could match the 1060 when new, and maybe it can surpass it in some titles with new drivers, but it's still a highly inefficient GPU. Its only advantage _right now_ is that it's relatively common and cheap on the used market, but personally, I'd still rather have a 1060, because better efficiency usually means less noise when paired with a fairly decent cooler.



The red spirit said:


> Don't you think that it's due to its crippling?


If by "crippling" you mean not spending time and resource to include a feature that no one ever needs, then I guess that's what @Valantar meant.



The red spirit said:


> That certainly wouldn't be as complex as you say, neither as expensive. Intel literally added AVX-512, that even less people used with actually big cost of die space and it didn't take off, but it didn't impact cost of chips much either.


It did cause problems on Alder Lake, however, and instead of trying to fix it, Intel just disabled it (it's funny that Rocket Lake still has it). That says something about costs vs. gains on a corporate level.



The red spirit said:


> I'm arguing, that it's way lower than 1% and that it's just wasteful not to do it as it takes nearly no effort for GPU makers to make it. perhaps not often used or whatever, but pointless to cut off too.


"Cutting off" and "not including" are two different things. Yes, it takes effort to cut parts of a chip that are already there. But it also takes effort to design a more complex chip for a single feature. If they know that the target audience won't use that feature, then there's no point designing the chip around it - that is, it's easier and cheaper to design a smaller, simpler chip. That way, you're not "cutting" anything, because the parts you would have to cut aren't even there to start with.



The red spirit said:


> So crippling volunteer charity project is good, right?


I think you misunderstand the point of these charity projects, which is to use the idle time of your _gaming card_ to run calculations in the background. They're not meant to be the ultimate goal and purpose for Random Joe to buy a new GPU. You're meant to donate whatever processing power _you already have_ - you're not meant to upgrade your system just so that you can donate more. Thus, designing a new GPU around something that is never meant to be its main purpose, and only maybe 1% of the target audience will ever use, is wasteful and pointless.

Heck, I used to run F@H on my GT 1030 simply because it only eats 30 Watts. Sure, it took 2-2.5 days to finish a work block, but who cares? A donation is a donation, however small it may be.


----------



## YESSS (May 8, 2022)

There's also another issue with Asus TUF cards not mentioned by anybody: they are slightly (about 20-40 mm) longer than their non-TUF versions - just enough to NOT fit in my Apevia case. However, the regular DUAL version of the RX 6500 XT which I bought fits just fine...


----------



## The red spirit (May 8, 2022)

AusWolf said:


> Ah, I see! Personally, I avoid those cards like the plague.


The Asus TUF card was a shitty deal for sure, but the PowerColor RX 580 frankly looked every bit as decent as any other RX 580, just without the crazy gamer plastics. Turns out it was a trap.



AusWolf said:


> The problem is that you can't sell a product based on a vague assumption that it will _probably_ be better than the competition 2-3 years down the line, so in my opinion, the "fine wine" argument is moot - it's just another AMD-style marketing BS that they threw in to attract customers because they had nothing else at the time of release.


It's actually a fan-made term. AMD never officially acknowledged it, other than once saying that they really don't like it and would prefer all the performance to be there on day one.




AusWolf said:


> Anyway, the 580's TDP is 54% higher than that of the 1060, so the fact that newer drivers helped it isn't much of a reason to celebrate. Maybe it could match the 1060 when new, maybe it can surpass it in some titles with new drivers, but it's still a highly inefficient GPU. It's only advantage _right now_ is the fact that it's relatively common and cheap on the used market, but personally, I'd still rather have a 1060 because better efficiency usually means less noise when paired with a fairly decent cooler.


Fair enough, but there's stated TDP and then there's real power use. The RX 580 uses as much power as a GTX 1080 in tests.



AusWolf said:


> If by "crippling" you mean not spending time and resource to include a feature that no one ever needs, then I guess that's what @Valantar meant.


Time spent on it would be negligible. A few traces to connect the existing FP32 cores and voila.



AusWolf said:


> It did cause problems on Alder Lake however, and instead of trying to fix it, Intel just disabled it (it's funny that Rocket Lake still has it). That says something about costs vs gains on a corporate level.


Yeah, but that was obviously a poor idea and took up die space. FP64 is neither.



AusWolf said:


> "Cutting off" and "not including" are two different things. Yes, it takes effort to cut parts of a chip that are already there. But it also takes effort to design a more complex chip for a single feature. If they know that the target audience won't use that feature, then there's no point designing the chip around it - that is, it's easier and cheaper to design a smaller, simpler chip. That way, you're not "cutting" anything, because the parts you would have to cut aren't even there to start with.


Except they design basically one whole architecture for both gaming cards and datacenter/enterprise cards. And no, the parts are there. At first the functionality was disabled in the vBIOS, but later it was fused off, meaning they put more effort into removing it than it would have taken to just give it to you. Thus, crippling. That's why I say it's literally the same as LHR cards: the capabilities are just there, but now nV goes the extra mile to sell you CMPs.



AusWolf said:


> I think you misunderstand the point of these charity projects, which is to use the idle time of your _gaming card_ to run calculations in the background. They're not meant to be the ultimate goal and purpose for Random Joe to buy a new GPU. You're meant to donate whatever processing power _you already have_ - you're not meant to upgrade your system just so that you can donate more. Thus, designing a new GPU around something that is never meant to be its main purpose, and only maybe 1% of the target audience will ever use, is wasteful and pointless.


Except, if you want to get more serious about it and actually have some impact, then it starts to matter. And if more FP64 performance was left on the table, then with the same effort you could do more with your idle GPU.



AusWolf said:


> Heck, I used to run F@H on my GT 1030 simply because it only eats 30 Watts. Sure, it took 2-2.5 days to finish a work block, but who cares? A donation is a donation, however small it may be.


At that point it's a borderline waste of electricity. It would make more sense for you to just contribute to the WCG project on BOINC. You would achieve more in way less time with your CPU.


----------



## YESSS (May 8, 2022)

The red spirit said:


> Asus TUF card was a shitty deal for sure, but PowerColor RX 580 frankly looked every bit as decent as any other RX 580 just without crazy gamer plastics. Turns out it was a trap.


Here in Hungary both brands are overpriced. PowerColor is OK, IMHO; I've had several in my earlier rigs--no issues at all! ASUS is even more expensive, simply because they (or the retailer, or even both) like to charge a premium just because a sticker with the name 'Asus' is slapped on them--and it's working!


----------



## AusWolf (May 8, 2022)

The red spirit said:


> It's actually a fan-made term. AMD never officially acknowledged it, other than once saying that they really don't like it and would prefer all performance on day one.


Fair enough. All the more reason not to give in to it.



The red spirit said:


> Fair enough, but there's stated TDP and real power use. The RX 580 uses as much power as a GTX 1080 in tests.


That makes it even worse. A card that eats as much power as a 1080, but performs at 1060 levels. I remember that my 5700 XT had a TDP of 220 Watts - but that only covered chip power draw, while my 2070 needs 175 Watts for the whole board and offers similar performance.



The red spirit said:


> Time spent on it would be negligible. A few traces to connect the existing FP32 cores, and voilà.


It would have increased complexity in both design and manufacturing.



The red spirit said:


> Except they design basically the whole architecture once, for both gaming cards and datacenter or enterprise cards. And no, the parts are there. At first the functionality was disabled in the vBIOS, but later it was fused off, meaning they put more effort into removing it than into just giving it to you. Thus crippling. That's why I say it's literally the same as LHR cards. The capabilities are just there, but now nV goes the extra mile to sell you CMPs.


So you're saying that FP64 capability is there in Navi 24 by design, just fused off? Do you have a source on this?

I don't think it's the same as LHR and CMP cards. Those are just a cash grab to milk gamers _and_ miners as much as possible. FP64 isn't needed in a low-end GPU.



The red spirit said:


> Except, if you want to get more serious about it and actually have some impact, then it starts to matter. And if more FP64 performance was left on the table, then with the same effort you could do more with your idle GPU.


If you want to get serious about it, then you buy a server farm, or order a supercomputer from IBM or Nvidia. Running F@H faster on the low-end GPU in your home PC doesn't make a difference.



The red spirit said:


> At that point it's a borderline waste of electricity. It would make more sense for you to just contribute to the WCG project on BOINC. You would achieve more in way less time with your CPU.


I agree, though electricity isn't free. Running the 1030 on full power 24/7 is one thing, doing the same with my main gaming PC would be another.


----------



## The red spirit (May 8, 2022)

AusWolf said:


> That makes it even worse. A card that eats as much power as a 1080, but performs at 1060 levels. I remember my 5700 XT that had a TDP of 220 Watts - but that only meant chip power draw, while my 2070 needs 175 Watts for the whole board, and offers the similar performance.


Sort of. It's only intermittently at that power draw. TDP for AMD is more like an average, not a maximum value - at least that's how I understand it from the vBIOS values available. The only hard limits are TDC and maybe peak power draw. I haven't done this in a while. Anyway, there's a TPU review of the Sapphire card:








Sapphire Radeon RX 580 Nitro+ Limited Edition 8 GB Review - www.techpowerup.com

The Sapphire RX 580 Nitro+ Limited Edition is a highly overclocked custom variant of the just-launched AMD Radeon RX 580. Performance now beats the NVIDIA GTX 1060, and in the box, Sapphire has bundled two user-replaceable semi-transparent fans with blue LEDs, if you want a little bit more bling.




Fail on the efficiency front. It's even worse than I remembered - it literally eats more power than a 1080. It seems the public didn't really know about that and just thought Polaris was good. You know what, the 1060 was the better card, but Polaris for once was somewhat competitive.



AusWolf said:


> So you're saying that FP64 capability is there in Navi 24 by design, just fused off? Do you have a source on this?


Not on Navi 24 specifically; in the past it was, but now Pro cards have full-performance FP64 fused off. The last cards with artificial segmentation were the Radeon Hawaii cards (the R9 290 and its Pro equivalent).




AusWolf said:


> I don't think it's the same as LHR and CMP cards. Those are just a cash grab to milk gamers _and_ miners as much as possible. FP64 isn't needed in a low-end GPU.


But GPU makers attempted to milk people for that in the past too. You want good FP64 performance? Take a binned Radeon, get a FirePro. They did this until FP64 became obscure. And of course, for the same hardware with a FirePro badge, you paid several times more money.




AusWolf said:


> If you want to get serious about it, then you buy a server farm, or order a supercomputer from IBM or Nvidia. Running F@H faster on the low-end GPU in your home PC doesn't make a difference.


Having more GPUs with a 1:16 FP64 ratio won't help you any. You can compare this Navi monstrosity (https://www.techpowerup.com/gpu-specs/radeon-pro-w6800x-duo.c3824) with a quite old, proper GCN card (https://www.techpowerup.com/gpu-specs/firepro-w9100.c2562). The 8-year-old card smokes the recent dual-GPU card. That's ridiculous. It's even worse if you compare that old FirePro to the RTX A6000. Honestly, this sucks for enterprises that actually use FP64 often, as they are stuck using old hardware or downgrading to an inferior new product. It's so sad that some basement dweller who kept his 7970 can still outperform brand-new Quadro or Radeon Pro cards. I really hope IBM has their own hardware for FP64; nVidia seems to be a lost cause.
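You can sanity-check that gap with napkin math. The TFLOPS figures below are rough numbers from memory - treat them as illustrative assumptions, not exact specs - but the ratio arithmetic is the point:

```python
# Napkin math: effective FP64 throughput = FP32 throughput / rate divisor.
# The TFLOPS figures are approximate, quoted from memory.

def fp64_tflops(fp32_tflops: float, ratio: int) -> float:
    """FP64 throughput for a card running FP64 at 1:ratio of its FP32 rate."""
    return fp32_tflops / ratio

# FirePro W9100 (2014, Hawaii): ~5.2 TFLOPS FP32 at a 1:2 FP64 rate
w9100 = fp64_tflops(5.2, 2)

# Radeon Pro W6800X Duo (2021, two Navi 21 dice): ~17 TFLOPS FP32 each at 1:16
w6800x_duo = 2 * fp64_tflops(17.0, 16)

print(f"FirePro W9100: {w9100:.2f} TFLOPS FP64")
print(f"W6800X Duo:    {w6800x_duo:.2f} TFLOPS FP64")
```

Even with two full Navi 21 dice, the 1:16 rate leaves the new card at roughly the FP64 throughput of a 2014 part running at 1:2.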




AusWolf said:


> I agree, though electricity isn't free. Running the 1030 on full power 24/7 is one thing, doing the same with my main gaming PC would be another.


A Ryzen 3 3100 would beat it. That's your HTPC.


----------



## Valantar (May 9, 2022)

The red spirit said:


> Those people bought PS3s for supercomputers too. They had to do some hacking to make it work. Buying a ready-to-use GPU is nothing for them. The thing is, they just didn't do that.


... so you're admitting that delimiting this feature to enterprise products only was effective? Thanks! That's what I've been saying all along.

After all, the only difference between those two is the respective difficulty of running custom software on a PS3 vs. a custom BIOS or driver unlocking disabled FP64 capabilities on the GPUs in question. So ... this demonstrates the effectiveness of the GPU makers' strategy.


The red spirit said:


> That certainly wouldn't be as complex as you say, nor as expensive. Intel literally added AVX-512, which even fewer people used, at a real cost in die space, and it didn't take off - but it didn't impact the cost of the chips much either.


... and we're seeing exactly the same movement of it having a test run of "open" availability, where use cases are explored and identified, before the hardware is then disabled (and likely removed in the future) from implementations where it doesn't make much sense. And, of course, AVX-512 has seen far more widespread usage in consumer facing applications than FP64 compute ever did, yet it's still being disabled. Of course that is largely attributable to that much higher die area requirement you mention, which significantly raises the threshold for relative usefulness vs. keeping it around. So while AVX-512 has far more consumer utility than FP64, it is still moving towards only being included in enterprise hardware where someone is willing to pay for the cost of keeping it around.


The red spirit said:


> Potential further software development that can utilize FP64. I read more about FP64 and it may have been useful in physics simulations. Which is becoming more of a thing in games and already sort of was in games (PhysX) before. There is deep learning, which could utilize FP64 capabilities, considering the push to DL cores on nVidia side, it might get relevant (DL ASIC already supports FP64, but I mean integration into GPU die itself without ASIC).


"Potential" - but that didn't come to pass in a decade, with half that time having widespread availability of high performance implementations? Yeah, sorry, that is plenty of time to explore whether this has merit. And the games industry has concluded that the _vast_, _overwhelming_ majority of games do not at all benefit from that degree of precision in its simulations, and will do just fine with FP32 in the same scenarios. The precision just isn't necessary, which makes programming for it wasteful and inefficient.

Also, deep learning is moving the exact opposite way: utilizing _lower_ precision calculations - FP16, INT8, and various packed versions of those operations. FP64 does have some deep learning-related use, mainly in training models. But that's a crucial difference: training models isn't something any consumer is likely to do with any frequency at a scale where such acceleration is really necessary. _Running_ those models is what end users are likely to do, in which case FP64 is completely useless.
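For a sense of how coarse those inference formats are, Python's struct module can round-trip IEEE binary16 (the 'e' format code, available since Python 3.6) - a toy illustration, not a deep learning workflow:

```python
import struct

def to_f16(x: float) -> float:
    """Round a Python float to the nearest IEEE binary16 (half precision) value."""
    return struct.unpack('e', struct.pack('e', x))[0]

# binary16 keeps ~3 decimal digits of precision: coarse, but enough to
# store a trained network weight for inference.
weight = to_f16(0.1234567)
print(weight)  # ~0.1235

# Its machine epsilon is ~1e-3, so finer increments vanish entirely:
vanished = to_f16(1.0 + 1 / 2048) == 1.0
print(vanished)  # True: the increment rounds away
```

Running a model tolerates that loss of precision just fine; numerically sensitive FP64 workloads obviously wouldn't, which is exactly why the two markets have diverged.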


The red spirit said:


> It's literally the same as crippling hash rate of current cards.


That's actually a good example, yes. LHR cards purposely delimit a specific subset of functionality that is of very little use to most gamers in order to avoid gamer-facing products being gobbled up by far wealthier entities seeking to use them for other purposes. Is this supposed to be an argument for this somehow being bad? It's not been even close to as effective (in part because of crypto being _only_ about profit unlike scientific endeavors which generally don't care about profitability, and instead focus on quality, reliability and reproducible results), but that is entirely besides the point. All this does is exemplify why this type of market segmentation generally is beneficial to end users.

(And no, cryptomining is not beneficial to the average gamer - not whatsoever. It's a means for the already wealthy to entrench their wealth, with a tiny roster of exceptions that give it a veneer of "democratization" that is easily disproven.)


The red spirit said:


> It's not even a die space issue. It's all about interconnecting already existing FP cores. Perhaps benefits are small, but if you are already making a GPU, it's stupidly easy to add.


Ah, so you're an ASIC design engineer? Fascinating! I'd love to hear you explain how you create those zero-area interconnect structures between features. Do they not use ... wires? Logic? 'Cause last I checked, those things take up die area.


The red spirit said:


> I'm arguing, that it's way lower than 1% and that it's just wasteful not to do it as it takes nearly no effort for GPU makers to make it. perhaps not often used or whatever, but pointless to cut off too.


It is the exact opposite of pointless - it has a clear point: design efficiency and simplicity. Bringing forward a feature that nobody uses into a new die design makes that design needlessly more complex, which drives up costs, likely harms energy efficiency, increases die sizes - and to no benefit whatsoever, as the feature isn't used. Why would anyone do that?

Putting it another way: allowing for paired FP32 cores to work as a single FP64 core requires specific structures and interconnects between these cores, and thus reduces design freedom - it puts constraints on how you design your FP32 cores, it puts constraints on how you place them relative to each other, it puts constraints on how these connect to intra-core fabrics, caches, and more. Removing this requirement - keeping FP64 at a high level of acceleration - thus increases design freedom and flexibility, allowing for a broader range of design creativity and thus a higher likelihood of higher performing and more efficient designs overall. In addition to this, it saves die area through leaving out specific physical features. Not a lot, but still enough to also matter over the tens of millions of GPUs produced each year.

How long after the electric starter motor was first designed and implemented did car makers keep putting hand cranks on the front of their cars? They disappeared within about a decade. Why? Because they just weren't needed. The exception was 4WD cars and trucks, which might see use in locations where the failure of an electric starter could leave you stranded - and hand cranks were found in these segments up until at least the 1970s. Again: features of a design are typically only brought forward when they have a clear usefulness; if not, they are discarded and left behind. That is sensible and efficient design, which allows for better fit-for-purpose designs, which are in turn more efficient designs, avoiding feature bloat. You keep a feature where it has a use; you leave it behind where it falls out of use or is demonstrated to not be used.


The red spirit said:


> And do you honestly think that it's enough to give just a few years for new technology to take off?


Pretty much, yes. FP64 and its uses were relatively well known when it started being implemented in GPU hardware. Half a decade or so to figure out just how those uses would pan out in reality was plenty.


The red spirit said:


> You could basically argue that CUDA was sort of useless too using exactly the same argument.


What? CUDA saw relatively rapid adoption, despite being entirely novel (unlike the concept of FP64). This was of course significantly helped along by Nvidia pouring money into CUDA development efforts, but it was still proven to have concrete benefits across a wide range of applications in a relatively short span of time. It is also still used across a wide range of consumer-facing applications, even if they are somewhat specialized. CUDA has _far_ wider applicability than FP64.


The red spirit said:


> My point was that those cards could be made stupidly cheap and FP64 capabilities don't have any significant effect on price of end product. Meanwhile their competitor made far worse mistakes, that weren't even FP64 related and those cost them way more.


Again: that argument is entirely irrelevant to the point I was making. That a specific architecture at a specific point in time, which is inherently bound up in the specific and concrete realities of that spatiotemporal setting (economics, available lithographic nodes, competitive situations, technology adoption, etc.), doesn't say _anything whatsoever_ that is generalizable about future uses or implementations of specific traits of those architectures. Your entire argument here boils down to "it was easy once, so it will always be easy", which is a complete logical fallacy. It just isn't true. That it was easy (which is also up for debate - making it work definitely had costs and took time) doesn't say anything about its future usefulness, the complexity of maintaining that feature in future designs, or the extra effort involved in doing so (and on the reverse side, the lower complexity of cutting out this unused feature and simplifying designs towards actually used and useful features). Things change. Features lose relevance, or are proven to never have been relevant. Design goals change. Design contingencies and interdependencies favor reducing complexity where possible.



AusWolf said:


> If by "crippling" you mean not spending time and resource to include a feature that no one ever needs, then I guess that's what @Valantar meant.


Exactly what I was saying. "Crippling" means removing or disabling something _useful_. FP64 isn't useful to consumers or regular end users in any appreciable way.



The red spirit said:


> Time spent on it would be negligible. A few traces to connect the existing FP32 cores, and voilà.


That is a gross misrepresentation of the complexity of designing and implementing a feature like this, and frankly quite disrespectful in its dismissiveness of the skills and knowledge of the people doing these designs.


The red spirit said:


> Except they design basically the whole architecture once, for both gaming cards and datacenter or enterprise cards. And no, the parts are there. At first the functionality was disabled in the vBIOS, but later it was fused off, meaning they put more effort into removing it than into just giving it to you. Thus crippling. That's why I say it's literally the same as LHR cards. The capabilities are just there, but now nV goes the extra mile to sell you CMPs.


That used to be true, but no longer is. Low-end enterprise cards still use consumer architectures, as they are (relatively) low cost and _still_ don't need those specific features (like FP64). High-performance enterprise cards now use specific architectures (CDNA) or very different subsets of architectures (Nvidia's XX100 dice, which are increasingly architecturally different from every other die in the same generation). And this just demonstrates how maintaining these features across designs is anything but the trivial matter you're presenting it as. If it was trivial, and there was _any_ profit to be made from it, they would keep it around across all designs. Instead, it's being phased out, due to design costs, implementation costs, and lack of use (=profitability).



The red spirit said:


> Except, if you want to get more serious about it and actually have some impact, then it starts to matter. And if more FP64 performance was left on the table, then with the same effort you could do more with your idle GPU.


Except this fundamentally undermines the idea of these charities. They were invented at a time when CPUs and GPUs didn't turbo or clock down meaningfully at idle, i.e. when they consumed just about the same amount of power no matter what. Making use of those wasted processing cycles thus made sense - otherwise you were just burning power for no use. Now, they instead cause CPUs and GPUs to boost higher, burn more power, to do this work. This by itself already significantly undermines the core argument for these undertakings.

All the while, their relative usefulness is dropping as enterprise and scientific calculations grow ever more demanding and specialized, and the hardware changes to match. You are effectively arguing that all hardware should be made equal, because a small subset of users would then donate the unused performance to a good cause. This argument is absurd on its face, as it is arguing _for_ massive waste because of _some_ of it not being wasted. That's like arguing that all cars should have 200+ bhp, 4WD, differential locking and low gear modes because a tiny subset of people use them for offroading. Just because a feature has been invented and proven to work in a niche use case is not an argument for implementing it broadly, nor is it an argument for keeping it implemented across generations of a product if it is demonstrated to not be made use of. The lack of use is in fact an explicit argument for _not_ including it.

There is nothing inherently wrong with FP64 being phased out of consumer products. It is not "crippling", it is the selective removal of an un(der)utilized feature for the sake of design efficiency. This has taken a while, mainly because up until very recently neither GPU maker has had the means to produce separate datacenter and consumer architectures. This has changed, and now both do (though in different ways). And this is _perfectly fine_. Nobody is losing anything significant from it. That a few thousand people are losing the ability to combine their love for running home servers and workstation with contributing to charity is not a major loss in this context. Heck, if they were serious about the charity, they could just donate the same amount of money to relevant causes, where it would most likely be put to far better and more efficient use than through donating compute power.


----------



## AusWolf (May 9, 2022)

The red spirit said:


> But GPU makers attempted to milk people for that in the past too. You want good FP64 performance? Take a binned Radeon, get a FirePro. They did this until FP64 became obscure. And of course, for the same hardware with a FirePro badge, you paid several times more money.


As a consumer and/or gamer, why would I care about FP64 performance?



The red spirit said:


> Not on Navi 24 specifically; in the past it was, but now Pro cards have full-performance FP64 fused off. The last cards with artificial segmentation were the Radeon Hawaii cards (the R9 290 and its Pro equivalent).


Oh. I thought we were talking about the 6400.



The red spirit said:


> Having more GPUs with a 1:16 FP64 ratio won't help you any. You can compare this Navi monstrosity (https://www.techpowerup.com/gpu-specs/radeon-pro-w6800x-duo.c3824) with a quite old, proper GCN card (https://www.techpowerup.com/gpu-specs/firepro-w9100.c2562). The 8-year-old card smokes the recent dual-GPU card. That's ridiculous. It's even worse if you compare that old FirePro to the RTX A6000. Honestly, this sucks for enterprises that actually use FP64 often, as they are stuck using old hardware or downgrading to an inferior new product. It's so sad that some basement dweller who kept his 7970 can still outperform brand-new Quadro or Radeon Pro cards. I really hope IBM has their own hardware for FP64; nVidia seems to be a lost cause.


That's interesting, but not really my problem. Besides, as it's been mentioned, the likes of AMD CDNA and Nvidia Volta (soon Hopper) are available for those corporations that need it.



The red spirit said:


> A Ryzen 3 3100 would beat it. That's your HTPC.


I did run F@H on both, actually. An entire system's power consumption staying under 100 Watts at full load is mind-boggling.



Valantar said:


> Exactly what I was saying. "Crippling" means removing or disabling something _useful_. FP64 isn't useful to consumers or regular end users in any appreciable way.


Not just that. As "crippling" means removing something useful, that useful thing has to be there first. You can't remove what isn't there.



The red spirit said:


> Potential further software development that can utilize FP64. I read more about FP64 and it may have been useful in physics simulations. Which is becoming more of a thing in games and already sort of was in games (PhysX) before. There is deep learning, which could utilize FP64 capabilities, considering the push to DL cores on nVidia side, it might get relevant (DL ASIC already supports FP64, but I mean integration into GPU die itself without ASIC).


Development for a potential future application is exactly what AMD did with the FX-series CPUs. Look how it turned out. About a decade later, they are actually okay entry-level CPUs as we live in an era when we can utilise 8 integer cores in common applications, but it doesn't matter as they were absolutely awful at the time of release (and their efficiency is still crap).



Valantar said:


> Again: that argument is entirely irrelevant to the point I was making. That a specific architecture at a specific point in time, which is inherently bound up in the specific and concrete realities of that spatiotemporal setting (economics, available lithographic nodes, competitive situations, technology adoption, etc.), doesn't say _anything whatsoever_ that is generalizable about future uses or implementations of specific traits of those architectures. Your entire argument here boils down to "it was easy once, so it will always be easy", which is a complete logical fallacy. It just isn't true. That it was easy (which is also up for debate - making it work definitely had costs and took time) doesn't say anything about its future usefulness, the complexity of maintaining that feature in future designs, or the extra effort involved in doing so (and on the reverse side, the lower complexity of cutting out this unused feature and simplifying designs towards actually used and useful features). Things change. Features lose relevance, or are proven to never have been relevant. Design goals change. Design contingencies and interdependencies favor reducing complexity where possible.


Exactly. If we follow the above logic, we can conclude that my RTX 2070 is an absolute rubbish GPU because it can't run 3dfx Glide.


----------



## Valantar (May 9, 2022)

AusWolf said:


> As a consumer and/or gamer, why would I care about FP64 performance?
> 
> 
> Oh. I thought we were talking about the 6400.
> ...


You know, it's almost as if hardware, firmware, driver, OS and software development is this really complicated and interwoven process where the best you can hope to do is make good guesses towards future use cases and problems while at the same time performing well in current use cases, and where bad or erroneous guesses happen relatively often and are either discarded or relegated to niche markets where they have an application. Who would have thought it?


----------



## The red spirit (May 9, 2022)

Valantar said:


> ... so you're admitting that delimiting this feature to enterprise products only was effective? Thanks! That's what I've been saying all along.


No? PS3s were bought for FP32 performance. That was an example of databases/enterprises being willing to go to bizarre lengths for the performance they need. Just like how they buy 8-core server Atoms. A super niche product, only for them.




Valantar said:


> ... and we're seeing exactly the same movement of it having a test run of "open" availability, where use cases are explored and identified, before the hardware is then disabled (and likely removed in the future) from implementations where it doesn't make much sense. And, of course, AVX-512 has seen far more widespread usage in consumer facing applications than FP64 compute ever did, yet it's still being disabled. Of course that is largely attributable to that much higher die area requirement you mention, which significantly raises the threshold for relative usefulness vs. keeping it around. So while AVX-512 has far more consumer utility than FP64, it is still moving towards only being included in enterprise hardware where someone is willing to pay for the cost of keeping it around.


AVX-512's fate is sealed at this point. GPUs do vectors in games at high framerates, so why use the CPU instead? FP64, however, is really slow on CPUs. And FP64 didn't meaningfully increase die space either.




Valantar said:


> "Potential" - but that didn't come to pass in a decade, with half that time having widespread availability of high performance implementations? Yeah, sorry, that is plenty of time to explore whether this has merit. And the games industry has concluded that the _vast_, _overwhelming_ majority of games do not at all benefit from that degree of precision in its simulations, and will do just fine with FP32 in the same scenarios. The precision just isn't necessary, which makes programming for it wasteful and inefficient.


Maybe, maybe. The internet was also a super niche thing at first and "didn't take off"; the same with CUDA.




Valantar said:


> Also, deep learning is moving the exact opposite way: utilizing _lower_ precision calculations - FP16, INT8, and various packed versions of those operations. FP64 does have some deep learning-related use, mainly in training models. But that's a crucial difference: training models isn't something any consumer is likely to do with any frequency at a scale where such acceleration is really necessary. _Running_ those models is what end users are likely to do, in which case FP64 is completely useless.


Incidentally, nVidia doesn't have any ungimped FP64 cards either.




Valantar said:


> That's actually a good example, yes. LHR cards purposely delimit a specific subset of functionality that is of very little use to most gamers in order to avoid gamer-facing products being gobbled up by far wealthier entities seeking to use them for other purposes. Is this supposed to be an argument for this somehow being bad? It's not been even close to as effective (in part because of crypto being _only_ about profit unlike scientific endeavors which generally don't care about profitability, and instead focus on quality, reliability and reproducible results), but that is entirely besides the point. All this does is exemplify why this type of market segmentation generally is beneficial to end users.


Assuming it had worked well, it takes away a clearly valuable feature of their cards that people know how to utilize, and then they resell the exact same thing at several times the price. That's predatory. I would rather not have nV acting like the dicks that they are.




Valantar said:


> Ah, so you're an ASIC design engineer? Fascinating! I'd love to hear you explain how you create those zero-area interconnect structures between features. Do they not use ... wires? Logic? 'Cause last I checked, those things take up die area.


No, they don't. They are just another layer - it's more like die volume than area. Still, it would cost them barely anything to add those. Literally less than a cent per GPU.

Valantar said:


> It is the exact opposite of pointless - it has a clear point: design efficiency and simplicity. Bringing forward a feature that nobody uses into a new die design makes that design needlessly more complex, which drives up costs, likely harms energy efficiency, increases die sizes - and to no benefit whatsoever, as the feature isn't used. Why would anyone do that?



Valantar said:


> Putting it another way: allowing for paired FP32 cores to work as a single FP64 core requires specific structures and interconnects between these cores, and thus reduces design freedom - it puts constraints on how you design your FP32 cores, it puts constraints on how you place them relative to each other, it puts constraints on how these connect to intra-core fabrics, caches, and more. Removing this requirement - keeping FP64 at a high level of acceleration - thus increases design freedom and flexibility, allowing for a broader range of design creativity and thus a higher likelihood of higher performing and more efficient designs overall. In addition to this, it saves die area through leaving out specific physical features. Not a lot, but still enough to also matter over the tens of millions of GPUs produced each year.


Not having proper enterprise cards also puts strain on your profits. Besides software and ECC, there's hardly any reason to shell out for RTX A or Radeon Pro.





AusWolf said:


> As a consumer and/or gamer, why would I care about FP64 performance?


Better physics, better AI?



AusWolf said:


> That's interesting, but not really my problem. Besides, as it's been mentioned, the likes of AMD CDNA and Nvidia Volta (soon Hopper) are available for those corporations that need it.


Have you seen the price? You can still buy four Radeon VIIs for less.



AusWolf said:


> Development for a potential future application is exactly what AMD did with the FX-series CPUs. Look how it turned out. About a decade later, they are actually okay entry-level CPUs as we live in an era when we can utilise 8 integer cores in common applications, but it doesn't matter as they were absolutely awful at the time of release (and their efficiency is still crap).


But it led to Ryzen - it literally had a similar architecture. And you could also make the argument that their power use sucked, that first-gen Ryzen sucked in games, and that it was barely usable due to RAM and BIOS issues. All of that was true, but if you sell them cheaply enough, consumers won't care.




AusWolf said:


> Exactly. If we follow the above logic, we can conclude that my RTX 2070 is an absolute rubbish GPU because it can't run 3dfx Glide.


nGlide, mate. You can play your NFS Porsche Unleashed on an RTX card too.


----------



## Valantar (May 9, 2022)

The red spirit said:


> No? PS3s were bought for FP32 performance. That was an example of databases/enterprises willing to go bizarre for performance they need. Just like how they buy 8 core server Atoms. Super niche product, only for them.


That still just supports my argument: some business customers will gobble up any cheap consumer hardware available if it can be made to do what they want it to in a reliable way.


The red spirit said:


> AVX-512 fate is sealed at this point. GPU does vectors in games at high framerates, why use CPU instead? FP64 however, is really slow on CPU. FP64 didn't meaningfully increase die space either.


...but FP64 also _doesn't have any relevant uses. _So why keep it around? Now, comparing the die area needed for an integrated compute feature like this is extremely difficult unless you have very detailed annotated die shots with accurate measurements so that you can exclude other on-die factors (encode/decode blocks, memory controllers, etc.). But basic logic gives some strong indications: if there is zero work involved in enabling FP64, and it has zero die area requirements, why did they differentiate between, say, Hawaii (1:2) and Tobago or Trinidad (1:16)? Why not just copy-paste the same CU design into the smaller die? If there wasn't a cost incentive towards leaving them out, all they would achieve by doing this would be to deny themselves the opportunity of making higher-margin enterprise SKUs. This is obviously not proof, but it is a _very_ strong indication that FP64 implementations have a noticeable cost compared to an identical architecture lacking it. (And, for reference, consumer Hawaii had a 1:8 ratio.)


The red spirit said:


> Maybe maybe. Internet at first also was super niche thing and "didn't take off", same with CUDA.


Yes, because one specific form of computational math is comparable to a vast interconnected network of computers across the globe and the mind-boggling technological developments that have accompanied the adoption and evolution of this over several decades. Yep, that's a reasonable comparison.


The red spirit said:


> Incidentally, nVidia doesn't have ungimped FP64 cards at all either.


Sure they do. GA100, H100. Both have 1:2 FP64 ratios. This a) tells us something about who actually needs FP64; and b) something about the kinds of implementations in which it is worth including.


The red spirit said:


> Assuming it had worked well, it takes away clearly valuable feature of their cards that people know how to utilize and then they resell exact same thing at times higher price. That's predatory. I would rather not have nV acting like some dicks that they are.


... yes, because not having LHR wouldn't _at all_ have worsened the crypto """industry""" buying up all available GPUs, of course not! There is nothing predatory about this whatsoever. That Nvidia implemented a (flawed, but a good try) programme aimed at alleviating the massively detrimental effects of a massive pyramid-scheme bubble is literally as far from "predatory" as you can get. And ... is there any problem reselling LHR cards? Yeah, no, that's not a thing. You're arguing as if it's reasonable for a GPU to _increase in price after purchase_, which just demonstrates that you're arguing from a completely absurd starting point.


The red spirit said:


> No, they don't. They are just another layer. It's more like die volume, than area. Still, would cost them barely anything to add those. Literally less than cent per GPU.











Dunning–Kruger effect - Wikipedia (en.wikipedia.org)
				




Seriously, what you're saying here is complete and utter nonsense.


The red spirit said:


> Not having proper enterprise cards also puts strain on your profits. Besides software and ECC, there's hardly any reason to shell out for RTX A or Radeon Pro.


"Besides it working significantly better in the software that your enterprise runs on and having massively increased data security, there's no reason to shell out for RTX A or Radeon Pro." Yes, those are indeed tiny and insignificant reasons 


The red spirit said:


> In USSR, that feature was common until 90s tho. Hand cranking a Lada lasted a long time.


That just demonstrates that external factors can lead to otherwise obsolete features staying useful. It does nothing to counter my argument whatsoever - heck, the analogue to that is exactly those high-end enterprise/HPC cards with tons of FP64. They are the niche, that's where it's sticking around, because that's where it has a use.


The red spirit said:


> It will be easy as long as we will have FP32 cores.
> It's really not that hard. designing GPUs in general is not very hard. You have few instructions, which you want to implement on hardware, you create core and just scale them. Each core are just ALUs or FPUs with cache.


Jesus, the sheer disrespect for people's skills and the massive complexity of these designs on show here is downright baffling. Seriously, please go read that article on the Dunning Kruger effect, as it is _extremely _relevant here. Then perhaps go read an article on ASIC design, and one on GPU design. Then try to consider how many _billions_ of transistors are involved in such designs, and how every node and every design requires transistor-level tweaks and adjustments of all designs. Then try to consider the cascading effects on layouts and interconnects from feature sizes growing. Etc., etc. GPUs aren't the most complex ICs out there as they are mainly a ton of repetitions of one structure, but summing that up as "designing GPUs in general is not very hard" is such a _vast_ underestimation of this complexity that it's just plain absurd.



The red spirit said:


> And yet they still fail to create proper FP64 card.


No they don't, as demonstrated above.


The red spirit said:


> RTX A6000 and W6800 still kinda suck in comparison to Radeon HD 7970. So much for those fancy buzzwords.


"Suck" in a use case nobody buys them for, and they aren't designed for. In the meantime, an A100 is ten times as fast as a 7970, and an H100 is _thirty times_ faster.

If you need a hammer for hammering nails, you make or buy a hammer. If you need a sledgehammer for knocking down walls, you make or buy a sledgehammer. Just because the two are vaguely similar does not make them relevant comparisons to each other. And trying to combine the two generally gives sub-optimal results in both use cases.


The red spirit said:


> Sort of. That issue was already solved by the time Athlon 64 was launched and by cards like ATi X800 Pro.


What "issue" was "solved"? I'm talking about how the fundamental operational characteristics of hardware have changed in a way that has rendered the basic premise of these charities incompatible with how hardware today operates.


The red spirit said:


> Except that FP64 was made, implemented cheaply and continued to be cheap until nV and AMD wasted more cash.


Please define "cheaply". Also, how do you know? At the time, these companies had a single architecture across both consumer and enterprise segments. How do you know they weren't simply amortizing both implementation and die area costs of FP64 in enterprise pricing? Also, you don't quite seem to grasp the concept that new ideas are implemented _as a gamble_, to see if it gains popularity and takes off. If it doesn't, then those features are scaled down or cut accordingly. As we have seen with FP64 in consumer (and even most enterprise) products.


The red spirit said:


> We, as consumers, lost performance, didn't get cheaper cards.


What? Lost performance? How? In what tasks? Kerbal Space Program? 'Cause beyond that, the "performance" you're talking about is _fictional_. And you literally have zero basis for saying we didn't get cheaper cards, as it is entirely impossible for you to know whether the cards we got are cheaper than theoretical counterparts with better FP64 capabilities. And, as all additional features have a cost, it is reasonable to assume that better FP64 would have increased costs. How much? No idea. Some? Yes.


The red spirit said:


> At least their greed sort of backfired, and now their enterprise cards suck and have no reason to exist, other than their highest end models.


...what? Those cards sell like never before, and have more applications than ever before.


The red spirit said:


> And GPGPU has been proven to be useful, along with FP64.


... what is it with you and straw man arguments? Has anyone here even brought up GPGPU, let alone claimed that it isn't useful? Please stop making up stuff. Also, GPGPU has nothing in particular to do with FP64.


The red spirit said:


> Intel doesn't take away many really old instructions from their CPUs, neither does AMD (usually).


Yes they do. There are _tons_ of old, deprecated instructions that are removed over time. It takes a long, long time, because doing so breaks compatibility with software in fundamental ways, but they most definitely do so.

Also, Intel is widely known to limit _many_ features to their Xeon lineup, despite them being present in silicon on Core products - ECC support being the key example.


The red spirit said:


> Tell me any reason why should GPUs be gimped and then those "premium" features scalped?


... you're arguing from a completely invalid premise here. These aren't "premium" features, they are _specialized_ features with little to no usefulness outside of niche applications. Thus, not including them in designs is not _gimping_, it is an example of design efficiency, designing for purpose, removing unnecessary bloat.


The red spirit said:


> Better physics, better AI?


Physics in games runs just fine on FP32 - we don't need scientific levels of accuracy for game physics. And AI mostly needs a CPU - remember, game AI is not at all the same as the AI the compute industry is all abuzz about (neural networks and the like). That kind of AI can indeed be applied to games in some ways, but it would then most likely use FP16, INT16, INT8, or some similar AI-oriented low-precision form of compute. Games are not going to be training neural networks while you play.
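To put a rough number on that, here's a minimal sketch (Python with NumPy; the projectile values are made up for illustration) of the same naive Euler integration of a projectile run once in FP32 and once in FP64 - the two trajectories agree to far more digits than any on-screen motion could ever show:

```python
import numpy as np

def simulate(dtype, steps=10_000, dt=1.0 / 240.0):
    # Naive Euler integration of a projectile under gravity,
    # carried out entirely in the given floating-point precision.
    pos = dtype(0.0)
    vel = dtype(50.0)       # initial upward velocity, m/s (made up)
    g = dtype(-9.81)        # gravitational acceleration, m/s^2
    dt = dtype(dt)
    for _ in range(steps):
        vel += g * dt
        pos += vel * dt
    return float(pos)

p32 = simulate(np.float32)
p64 = simulate(np.float64)
rel_err = abs(p32 - p64) / abs(p64)
print(f"FP32 vs FP64 relative difference: {rel_err:.2e}")
```

The relative difference stays tiny, bounded by FP32's ~7 decimal digits of precision even after ten thousand accumulation steps - which is exactly why game engines standardise on FP32 (or even FP16) for physics.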


The red spirit said:


> Have you seen the price? You can still buy 4 Radeon VIIs for less.


Yet the people who need them can easily afford to pay for them, and likely also want the support that comes with that purchase. For everyone else, this just doesn't matter.


The red spirit said:


> But it led to Ryzen. It literally had a similar architecture.


What? Ryzen was a _dramatic_ departure from the Heavy Machinery CPU designs. They have extremely little in common. This deep-dive from Anandtech ought to be informative.


The red spirit said:


> And you could also make an argument that their power use sucks, that Ryzen 1 sucked in games and that it was barely usable due to RAM and BIOS issues. And all that was true, but if you sell them cheap enough, consumers won't care.


Consumers definitely cared that Zen 1 had mediocre gaming performance - but they also cared about getting a better value proposition, more cores, and drastically more performance in (common, useful) compute applications. Still, Zen1 had _tons_ of detractors. It wasn't until Zen2, or even Zen3 arguably, that the world in general took Ryzen seriously as a fully worthy alternative to Intel Core.


----------



## AusWolf (May 9, 2022)

The red spirit said:


> Better physics, better AI?


In what game?



The red spirit said:


> Have you seen the price? You can still buy 4 Radeon VIIs for less.


That's an extremely inefficient GPU. A Quadro GV100 has double the FP64 performance with a lower TDP. In such efficiency-sensitive applications (datacentres), it can recoup the extra initial costs in no time.
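A back-of-the-envelope version of that recouping argument (Python; the spec-sheet figures - Radeon VII at roughly 3.36 TFLOPS FP64 / 300 W, Quadro GV100 at roughly 7.4 TFLOPS FP64 / 250 W - and the $0.12/kWh electricity price are assumptions for illustration, not authoritative numbers):

```python
# Running-cost comparison at equal FP64 throughput, 24/7 full load.
VII_TFLOPS, VII_WATTS = 3.36, 300.0      # Radeon VII (assumed specs)
GV100_TFLOPS, GV100_WATTS = 7.4, 250.0   # Quadro GV100 (assumed specs)
KWH_PRICE = 0.12                         # assumed electricity price, USD/kWh
HOURS_PER_YEAR = 24 * 365

# How many Radeon VIIs does it take to match one GV100's FP64 throughput?
cards_needed = GV100_TFLOPS / VII_TFLOPS           # ~2.2 cards
vii_watts_total = cards_needed * VII_WATTS         # ~660 W for the VIIs
extra_watts = vii_watts_total - GV100_WATTS        # ~410 W of extra draw

annual_savings = extra_watts / 1000 * HOURS_PER_YEAR * KWH_PRICE
print(f"~{annual_savings:.0f} USD/year saved per GV100 at full load")
```

On those assumptions the GV100 saves a few hundred dollars per card per year in electricity alone, before counting cooling - which is the sense in which an efficient card "recoups" its price in a datacentre.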



The red spirit said:


> But it led to Ryzen. It literally had a similar architecture. And you could also make an argument that their power use sucks, that Ryzen 1 sucked in games and that it was barely usable due to RAM and BIOS issues. And all that was true, but if you sell them cheap enough, consumers won't care.


It's not similar at all. Ryzen was a completely new design, as AMD openly acknowledged that the direction they wanted to take with FX was a mistake.



The red spirit said:


> nGlide, mate. You can play your NFS Porsche Unleashed on an RTX card too.


Seriously? I'll have to have a look.


----------



## The red spirit (May 10, 2022)

AusWolf said:


> In what game?


I said potential games. Either way, I think it's wrong to see GPUs as single-purpose devices, as they are all just plain SIMD processing units. They are naturally capable of any such task, unless some special neutering is done to them. Special neutering is evil. It means more e-waste, useless work and artificial demand. Nothing of value is created; it's just removed and resold for more money. Nothing more than a pathetic hack to get money for being a jerk. GPGPU should have been an eye opener that programmable-pipeline cards can be used for anything, and it should have caused the death of "pro" cards. In fixed-pipeline days, you had to pick what features to include and often re-engineer the card for a different task, as it simply couldn't do anything else. And even late fixed-pipe cards already showed that their cores can be multipurpose. Real pro cards died with the fixed-pipeline architecture; cards made after that just failed to provide any significant hardware change over "gaming" cards. Quadros today are nothing more than harvested GeForce dies that just couldn't reach high enough speeds. And only top-tier Quadros give you ECC and some other extra hardware bits that actually matter. Low-end and mid-tier Quadros are a scam. There's nothing pro about them, other than drivers. nVidia should just sell "pro" drivers instead of having whole line-ups of pointless SKUs. Same for AMD, but frankly AMD should just pull out of the pro card game altogether, because they only have general-purpose pro cards with inferior software and even fewer features than Quadros. That, or finally make proper cards for pros. But they most likely can't, since nVidia has already monopolized much of the "special" software feature set and AMD doesn't have the budget to beat them. That explains why nV successfully sells more pro cards than AMD, even when they're 2-4 times slower.




AusWolf said:


> That's an extremely inefficient GPU. A Quadro GV100 has double the FP64 performance with a lower TDP. In such efficiency-sensitive applications (datacentres), it can recoup the extra initial costs in no time.


It's a bit of unobtanium hardware. It's literally not available in some countries. 



AusWolf said:


> It's not similar at all. Ryzen was a completely new design, as AMD openly acknowledged that the direction they wanted to take with FX was a mistake.


They just switched from CMT to SMT, added some tweaks and improvements and boom, Ryzen was made. Compared to the huge leap from K10 to Bulldozer, Ryzen is truly similar to FX.



AusWolf said:


> Seriously? I'll have to have a look.


It's a Glide API wrapper. Works well for just that; other old-software compatibility problems aren't fixed by it. Frankly, there's almost no point in using it. Even the GeForce 256 was superior to 3dfx.


----------



## Valantar (May 10, 2022)

The red spirit said:


> I said potential games. Either way, I think it's wrong to see GPUs as single purpose device as they are all are just plain SIMD processing units. They are naturally able of any such task, unless some special neutering is done to them. Special neutering is evil. It means more e-waste, useless work and artificial demand. Nothing of value is created, it's just removed and resold for more money. Nothing more than pathetic hack to get money for being a jerk. GPGPU should have been an eye opener that programmable pipeline cards mean that they can be used for anything and they should have caused the death of "pro" cards. In fixed pipeline days, you had to pick what features to include and often re-engineer card for different task, as it simply couldn't do anything else. And even late fixed pipe cards already showed that their cores can be multipurpose. Real pro cards died with fixed pipeline architecture, cards made after that just failed to provide any significant hardware change over "gaming" cards. Quadros today have nothing more than harvested GeForce dies, that just couldn't reach speeds high enough. And only top tier Quadros give ECC and some other extra hardware bits that actually matter. Low end and mid tier Quadros are a scam. There's nothing pro about them, other than drivers. nVidia should just sell "pro" drivers instead of having a whole line ups of pointless SKUs. Same for AMD, but frankly AMD should just pull out of pro card game altogether, because they only have general purpose pro cards with inferior software and even less features than Quadros. That or finally making proper cards for pros. But they most likely can't, since nVidia already monopolized much of "special" software features already and AMD doesn't have a budget to beat them. That explains why nV successfully sell more of 2-4 times slower pro cards at the same time than AMD.


You are presenting some rather peculiar views here, wow. First off: you'll do well to rid yourself of the misconception of hardware being worth more in the pro space than software is. Software is _by far_ the most valuable thing in most professional settings, and especially in HPC and similar scenarios. Most datacenters spend _far_ more on software than hardware, and they do so happily. After that, they want efficiency, as they run their hardware at 100% 24/7, which makes for pretty high power bills both for the hardware and for cooling. _Then_ you get to the hardware and how it performs.

Second: what in "professional GPU" indicates that it _must_ have significantly different hardware from a consumer GPU? This misunderstanding seems, again, to stem from the idea that hardware is worth more than software. Put it this way: Pro GPUs today are slightly tweaked consumer GPUs with a few features added, different warranties and service agreements, and very different driver and software packages. And that, quite clearly, is a delineation of products that is sufficiently valuable to businesses for them to not be vacuuming consumer GPUs off the market. Drivers, software, and support are what these businesses pay for, as that is what is crucial to them. Hardware, as you say, is quite flexible. Good, well optimized, stable and well written software is more important than having the fastest hardware. If businesses weren't happy with this, they would be buying consumer GPUs by the thousands. (And, of course, some do, but they are quite rare.)

Calling Quadros "harvested" as if they are inferior to Geforces is also rather ludicrous. They have _much_ stricter QC than Geforce cards, and their low clocks are for tuning for (moderate) efficiency rather than absolute peak performance - different markets have different priorities. There is no reason to equate lower clocks in Quadros with them not being able to sustain the same clocks as Geforces - it just tells us that Geforces sacrifice efficiency in the name of peak performance, whereas datacenters and workstations are very concerned about running costs, as those costs generally far outstrip hardware costs as well.

CUDA being monopolistic is absolutely a problem, but it has absolutely nothing to do with the delineation between pro and non-pro GPUs, FP64, or anything else we're discussing here.

To bring this back to FP64: "potential games" - yet the games industry has, in the time since ~2007 when FP64 first appeared in GPUs, found no viable large-scale uses for FP64 in games. There also seems to be no interest in resuscitating it, and nobody is talking about it being beneficial. Remember, FP32 can do everything FP64 can do, just with less precise calculations. The only thing you'd be improving by using FP64 would be how predictable and reproducible the outcomes of your calculations are, not what you could do with them. And that's why FP64 is useful almost exclusively for scientific computing: because they crave precision above all else. And nobody else does.
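The precision gap is easy to demonstrate (Python with NumPy): FP32 carries a 24-bit significand, so beyond 2^24 it can no longer even represent an increment of 1, while FP64's 53-bit significand still can.

```python
import numpy as np

# 2**24 = 16,777,216 ends the run of exactly representable
# consecutive integers in FP32 (24-bit significand).
a32 = np.float32(2**24)
a64 = np.float64(2**24)

print(a32 + np.float32(1.0) == a32)  # True: the +1 is rounded away in FP32
print(a64 + 1.0 == a64)              # False: FP64 still resolves it
```

For game-scale quantities this never matters; for a simulation accumulating trillions of tiny contributions, it's the whole ballgame.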



The red spirit said:


> They just switched from CMT to SMT, added some tweaks and improvements and boom, Ryzen was made. Compared to the huge leap from K10 to Bulldozer, Ryzen is truly similar to FX.


That is absolute, utter, pure and unadulterated nonsense. Seriously, please go read Anandtech's Zen1 architectural deep dive that I linked above. Literally every single part of the Zen core is significantly changed from previous AMD designs. They have _very_ little in common.


----------



## The red spirit (May 11, 2022)

Valantar said:


> You are presenting some rather peculiar views here, wow. First off: you'll do well to rid yourself of the misconception of hardware being worth more in the pro space than software is. Software is _by far_ the most valuable thing in most professional settings, and especially in HPC and similar scenarios. Most datacenters spend _far_ more on software than hardware, and they do so happily. After that, they want efficiency, as they run their hardware at 100% 24/7, which makes for pretty high power bills both for the hardware and for cooling. _Then_ you get to the hardware and how it performs.


Dude, that's such a joke. You only get a Quadro (sorry, RTX A or whatever it is now) if you absolutely can't afford any downtime. It's straight-up propaganda that you need a Quadro. Have you seen workstation forums? People have already questioned the value of Quadros in CAD applications. We already saw that supercomputers can be made from literal consoles. They don't see any worth in those cards, and there is barely any advantage to them even in software. You only buy them for the niche of niches. Perhaps you are Pixar and need a shitload of vRAM, perhaps you are a medical researcher who needs super-high-res brain maps, but that's all. Many professionals just use "peasant" Radeons and GeForces. People have already figured out that the software is not worth their time when you can get better hardware.

And if software is so important, then why isn't it a subscription? Also, if you need mission-critical software, you are better off making it custom on Linux like a real pro, instead of relying on proprietary crap.



Valantar said:


> Second: what in "professional GPU" indicates that it _must_ have significantly different hardware from a consumer GPU?


Maybe because it is called a professional GPU, not a professional software pack with live 24/7 support? BTW AMD gives pro software to consumer-grade cards like the RX 580; only nV doesn't. Because nVidia.



Valantar said:


> Calling Quadros "harvested" as if they are inferior to Geforces is also rather ludicrous.


Often fewer cores, lower speeds. Looks harvested at the low end. At the high end it's more complicated.



Valantar said:


> They have _much_ stricter QC than Geforce cards, and their low clocks are for tuning for (moderate) efficiency rather than absolute peak performance - different markets have different priorities. There is no reason to equate lower clocks in Quadros with them not being able to sustain the same clocks as Geforces - it just tells us that Geforces sacrifice efficiency in the name of peak performance, whereas datacenters and workstations are very concerned about running costs, as those costs generally far outstrip hardware costs as well.


You can literally adjust TDP in software. Stricter QC is mostly a myth; there's no proof of that.



Valantar said:


> To bring this back to FP64: "potential games" - yet the games industry has, in the time since ~2007 when FP64 appeared in GPUs, found no viable large-scale uses for FP64 in games. There also seems to be no interest in resuscitating it, and nobody is talking about it being beneficial. Remember, FP32 can do everything FP64 can do, just with less precise calculations. The only thin you'd be improving with using FP64 would be how predictable and reproducible the outcomes of your calculations would be, not what you could do with them. And that's why FP64 is useful almost exclusively for scientific computing: because they crave precision above all else. And nobody else does.


Where nV and AMD are also abandoning it?




Valantar said:


> That is absolute, utter, pure and unadulterated nonsense. Seriously, please go read Anandtech's Zen1 architectural deep dive that I linked above. Literally every single part of the Zen core is significantly changed from previous AMD designs. They have _very_ little in common.


Sure, but it's exactly the same philosophy. Moar cores, lower price, power usage be damned, PR to the moon. And Zen 1 also launched when SMT was still a new idea for AMD, another untried technology in the consumer market, just like CMT before it. In terms of specs, cache sizes traditionally remained big (with high latency), just like in the FX chips, and the concept of the CCX is oddly similar to the module. Infinity Fabric was literally refreshed HyperTransport.

Also die shots of module and CCX:








Say what you want, but the layout is stupidly similar too. The only difference is that instead of doubled physical ALUs, they just made them logical. AMD themselves say that only 30% of the performance bump came from the architecture alone:




So if they hypothetically scaled down the FX chips to a smaller node and used those gains to add clock speed, Zen 1 may not have been any faster, and that's not even considering the DDR3 handicap.

And similarities don't end here, I guess this quote explains a lot:
"The latest generation of Bulldozer, using ‘Excavator’ cores under the Carrizo name for the chip as a whole, is actually quite a large jump from the original Bulldozer design. We have had extensive reviews of Carrizo for laptops, as well as a review of the sole Carrizo-based desktop CPU, and the final variant performed much better for a Bulldozer design than expected. The fundamental base philosophy was unchanged, however the use of new GPUs, a new metal stack in the silicon design, new monitoring technology, new threading/dispatch algorithms and new internal analysis techniques led to a lower power, higher performance version. This was at the expense of super high-end performance above 35W, and so the chip was engineered to focus at more mainstream prices, but many of the technologies helped pave the way for the new Zen floorplan."

BTW Carrizo also made it to desktops in the shape of the Athlon X4 845, which was superior to the older 880K.

Further quotes:
"Former Intel engineer Sam Naffziger, who was already working with AMD when the Zen team was put together, worked in tandem with the Carrizo and Zen teams on building internal metrics to assist with power as well."

"When we reported that Jim had left AMD, a number of people in the industry seemed confused: Zen wasn’t due for another year at best, so why had he left? The answers we had from AMD were simple – Jim and others had built the team, and laid the groundwork for Zen. With all the major building blocks in place, and simulations showing good results, all that was needed was fine tuning. Fine tuning is more complex than it sounds: getting caches to behave properly, moving data around the fabric at higher speeds, getting the desired memory and latency performance, getting power under control, working with Microsoft to ensure OS compatibility, and working with the semiconductor fabs (in this case, GlobalFoundries) to improve yields. None of this is typically influenced by the man at the top, so Jim’s job was done."

"This means that in the past year or so, AMD has been working on that fine tuning. This is why we’ve slowly seen more and more information coming out of AMD regarding microarchitecture and new features as the fine-tuning slots into place"

So basically, there was a lot of concurrent work between the Carrizo people and the Zen people. Nowhere in the article do they say that Zen is a completely clean-sheet design built from nothing like Bulldozer was. It's more like an advanced enhancement of Carrizo.

"With the Zen microarchitecture, AMD’s goal was to return to high-end CPU performance, or specifically having a competitive per-core performance again. Trying to compete with high frequency while blowing the power budget, as seen with the FX-9000 series running at 220W, was not going to cut it. The base design had to be efficient, low latency, be able to operate at high frequency, and scale performance with frequency." 

Which was mostly solved by Carrizo. It soundly beats the Godavari core with around a 20% IPC gain, a 10% frequency loss and 40% lower power usage. Godavari was already a reasonable step up from Vishera, and Vishera was a reasonable step up from Zambezi.

"In AMD’s initial announcements on Zen, the goal of a 40% gain in IPC was put into the ecosystem, with no increase in power. It is pertinent to say that there were doubts, and many enthusiasts/analysts were reluctant to claim that AMD had the resources or nous to both increase IPC by 40% and maintain power consumption. In normal circumstances, without a significant paradigm shift in the design philosophy, a 40% gain in efficiency can be a wild goose chase."

Yeah, but Carrizo was already made with quite similar gains, so? Like I mentioned earlier, Zen 1 aimed to scale performance with clock speed, but it was woeful at scaling voltage with clock speed and had quite a low frequency wall. Again, sounds exactly like Carrizo. So why didn't Carrizo make any waves? Because it was only a very temporary architecture made mainly for laptops and had trouble pushing higher clock speeds. Also, it was on a rather old node and, like other APUs, it didn't have any L3 cache and had a severely limited L2 cache.

"I mention this because one of AMD’s goals, aside from the main one of 40%+ IPC, is to have small cores. As a result, as we’ll see in this review, various trade-offs are made."

It's literally what the FX concept was too. Build small cores that clock high, and compensate for the lack of single-core perf with moar cores.

"Zen is a pretty traditional x86 architecture as an overall machine, but there is optimization work to do. What makes this a bit different is that most of our optimization work is more on the developer side – we work with them to really understanding the bottlenecks in their code on our microarchitecture. I see many apps being tuned and getting better going on as we work forward on this." 

In human language, that means they neutered FX's oddities but left the design philosophy mostly the same.

The further text goes into hardcore details, but not necessarily comparison with FX chips. Either way, read about Carrizo:








AMD Launches Carrizo: The Laptop Leap of Efficiency and Architecture Updates (www.anandtech.com)
				




Reads similar to the Zen deep dive, just with fewer changes. Still, my point was that Zen 1 was fundamentally similar to FX and its derivatives, and like I said, the move to Zen is an "advanced enhancement" rather than a completely new architecture built from zero like Bulldozer was. The change is more similar to what happened when AMD moved from K8 to K10.


----------



## Valantar (May 11, 2022)

The red spirit said:


> Dude, that's such a joke. You only get Quadro (sorry, RTX A or whatever else) if you absolutely can't afford any downtime. It's straight up propaganda, that you need Quadro. Have you seen workstation forums? People have already questioned a value of Quadros in CAD applications. We already saw that supercomputers can be made from literal consoles. They don't see any worth in that and there is barely any advantage to those cards even in software. You only buy them for niche of niches. Perhaps you are Pixar and you need shit load of vRAM, perhaps you are medical researcher that needs super high res brain maps, but that's all. Many professionals just use "peasant" Radeons and GeForces. People have already figured out that software is not worth their time, when you get better hardware.


For a lot of middle-ground prosumer apps, sure. For high stakes professional CAD work, medical modelling, scientific modelling, or anything remotely similar to that? No. I mean, sure, people also use consumer GPUs for stuff like that. At scale? That's debatable - a lot of people dabble in these kinds of things as a hobby or side gig, after all. But in any kind of sizeable industry perspective? No.

Remember, forums, even workstation forums, are populated by enthusiasts mainly. Most professionals are not hardware enthusiasts (though one could argue that their jobs push them in that direction, that doesn't mean most are that). And yes, it's obvious that budget-constrained small business owners/employees or freelancers will question the value of Quadros, as at those scales the price differences matter quite a lot. And quite a few have found that they do not benefit (sufficiently or at all) from the software and driver features of Pro GPUs - which renders them outside of the intended market for those GPUs in the first place. This is in no way contradictory to what I have been arguing.

Of course, you're entirely failing to demonstrate that there is somehow a desire for FP64 in these circles, which I guess is why you keep shifting the goal posts ever further into unrelated subjects.


The red spirit said:


> And if software is so important, then why it's not subscription? Also if you need mission critical software, you are better off making it custom in linux like a real pro, instead on relying on proprietary crap.


_A lot_ of it is. Professional software licencing is really, really, really complex, with many different licencing models. Some have various options, for continued support with future updates for X time, some give you a single version with stability/bug fixes for a set price, etc. Many also licence software either per CPU core or similar. There are _lots _of models out there.


The red spirit said:


> Maybe because it is called a professional GPU, not professional software pack with live support 24/7? BTW AMD gives pro software for consumer grade cards like RX 580, only nV doesn't. Because nVidia.


Those "Pro" drivers are roughly equivalent to Nvidia's Studio drivers. Drivers know what GPU they are running on, after all. Nor do these drivers contain the support infrastructure that is available to pro customers.


The red spirit said:


> Less cores often, lower speed. Looks harvested on low end. On high end it more complicated.


No, it's more complicated than that, period. The RTX A4000 is for example fully enabled, unlike the 3070 (though the 3070 Ti is - but that was also launched later). The RTX A2000 is slightly cut down (3328 cores vs. 2560 in the 3050, 3584 in the 3060, 3840 fully enabled, found only in 3060 mobile), but it's also a 70W GPU, unlike the 170W (!) 3060 that has 7.1% more cores. Different product stacks are segmented differently due to different priorities and target markets. Quadros and Nvidia RTX cards are not in any broad way more cut down than GeForce cards, they are just segmented differently due to having different priorities.


The red spirit said:


> You can literally adjust TDP in software. Stricter QC is mostly a myth, there's no proof to that.


Software TDP adjustments are quite limited, and adjusting power limits in software is not the same as Nvidia's QC process finding a desirable balance between power consumption, performance, and stability. As for stricter QC being a myth: yeah, sure. 'Cause enterprise customers wouldn't be pissed _at all_ if their GPUs failed as frequently as consumer GPUs do, of course not.


The red spirit said:


> Where nV and AMD are also abandoning it?


FP64? Yep, because even most pro customers just don't want or need it. As I've been saying the whole time - essentially only datacenters want or need FP64. And anyone else in need of it can rent a compute node on one of the dozens of cloud compute services offering it, running off of those aforementioned FP64-heavy HPC GPUs, for far less money than buying their own hardware, while running their own software still.


The red spirit said:


> Sure, but it's exactly the same philosophy. Moar cores, lower price, power usage be damned, PR to the moon.


Wait, what? Zen's _main_ advantage and improvement was its efficiency, delivering 8 cores with competitive IPC in the same power draw as Intel gave you 4, and at less than half the power of 8 FX cores.

I'll stick the rest of this _extremely _off-topic CPU nonsense in a spoiler tag so as not to entirely drown people.


Spoiler






The red spirit said:


> And Zen 1 also launched, when SMT was still new idea, another not tried technology in consumer market, just like CMT before. In terms of specs, it seems that cache size traditionally remained big (just like FX with high latency), just like in FX chips and concept of CCX is oddly similar to module. Infinity fabric was literally refreshed HyperTransport.
> 
> Also die shots of module and CCX:
> 
> ...


Well, that was a long-form explanation of how you simply do not understand what "architecture" means in CPU design. Using die-level block diagrams and how the blocks are arranged as an argument for the architectures being similar proves this unequivocally. Here's the thing: the block diagram says essentially nothing about _how those blocks are constructed_. That's the architecture. Block diagrams roughly sketch out a die layout, on an extremely simplified level. Die layout and architecture are not the same. It's like looking at a photo of a house and saying that makes you intimately familiar with the personalities and lives of the people living in it - you can probably tell a few high level details, but you have literally none of the required information to answer the question in a satisfactory way.

You also show that you don't understand the timeframes involved in CPU design: CPU architectures have 5+-year development cycles. _Of course_ there is concurrent work between Carrizo (2015) and Zen (2017) - it would literally be impossible for Zen to be designed in its entirety after Carrizo was finished. That quote you included about Jim Keller leaving just illustrates this - the architectural design of Zen was done just slightly later than Carrizo was done (and Carrizo had, at the time it was launched, itself been done for about a year - and so on and so on). Remember, Keller joined AMD in 2012, and was hired specifically to create a new architecture - Zen. With Carrizo being a refinement of an existing architecture rather than a new one it likely had a shorter development cycle - it's entirely possible that design on Zen started before design on Carrizo, and even that the Carrizo design was informed by findings from early stages of Zen design. But your argument here essentially boils down to "they were developed at the same time, so they must be very similar" which is just a fundamental misunderstanding of how CPU architectural design work happens in the real world.

Further misconceptions:
- SMT being "new" and "untested" for Zen1. The first hardware SMT implementations happened in 1968, though only in IBM's research labs. Intel introduced its SMT - HyperThreading - in 2002. IBM introduced 2-way SMT with Power5 in 2004, with 8-way SMT with Power8 in 2014. Oracle had 8-way SMT in 2010 with SPARC T3. And so on, and so on. SMT was widely established across the computing industry by the time early Zen design work started in 2012, and certainly by the time it launched in 2017. It was neither new nor untested in any reasonable understanding of those terms.
- That there is a significant parallelism between FX and Zen in what you describe as "Build small cores, that clock high and compensate lack of single core perf with moar coars". Zen1 never clocked particularly high. It couldn't, both due to node and architectural limitations. It took two architectural revisions and a major node change to change that. And even if this _was_ similar, it is so broad and vague that it doesn't describe the CPU's architectural features whatsoever.
- Zen's IPC gains are on top of Carrizo. Carrizo was an impressive refinement of an overall poor architecture, but Zen absolutely trounced it, despite its 20% gains.
- There is_ extremely_ little in common between a CCX and an FX Module. A module is a cluster of two tightly integrated INT cores with a shared FP core (which has a partial form of SMT, though not particularly effective), as well as a shared L2 cache. Even on a very high level, a CCX is  very different from this, with INT and FP cores being matched and not shared, and L2 caches being private to each int+fp core. But this is all very high level - differences are far, far greater than this on a lower level. Where do you think that 40% IPC increase comes from? From optimizations of _how the cores themselves work_.
- You claim Carrizo solved most of FX's problems. If so, then how does that non-power-limited desktop Athlon X4 845 compare to a Zen1 CPU with similar specifications? Oh, right, it gets absolutely stomped on. Sure, the 1300X doesn't win every test, but it wins the _vast_ majority, and often by very significant margins. Also: uneven performance scaling - some tests improving massively, others not - is a surefire sign of low-level architectural differences, as different parts of the core in each architecture performs differently due to them being designed differently. If the 1300X was just a minor refinement of Carrizo, it would see relatively linear gains overall.
- Saying a 20% IPC gain is "quite similar" to a 40% IPC gain is ... something. I mean, sure, both are much more than 0! But then again, one is literally twice the other. Especially considering most IPC gains for CPUs are in the 5-15% range, 40% is far more exceptional than 20%, even if 20% is also very good. And, of course, those 40% were _on top of_ the 20% gain of the other - they're not comparing to the same base number. And Carrizo was a very late revision of a mature design - which makes its 20% gains impressive, but also underscores how radical a departure Zen was from that base design. There is no CPU architecture in history where a late-generation refresh has managed to improve IPC by 40% (nor lay the groundwork for subsequent iterations improving IPC by ~50% on top of that).
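The compounding in that last bullet is easy to sketch. A toy calculation using the percentages quoted in this thread (not official AMD figures):

```python
# Hypothetical illustration of compounding IPC gains, using the rough numbers
# from the discussion above: Carrizo gained ~20% IPC over its predecessor,
# and Zen gained ~40% on top of Carrizo - not on the same old base.
base = 1.00            # IPC of the pre-Carrizo core, normalized to 1
carrizo = base * 1.20  # +20% from refining the existing design
zen = carrizo * 1.40   # +40% on top of Carrizo

print(f"Carrizo vs base: {carrizo / base:.2f}x")  # 1.20x
print(f"Zen vs Carrizo:  {zen / carrizo:.2f}x")   # 1.40x
print(f"Zen vs base:     {zen / base:.2f}x")      # 1.68x combined
```

So against the common baseline the two steps multiply to roughly +68%, which is why "20% is quite similar to 40%" understates the gap even further when the gains stack.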

The way you're arguing here, and what you're taking away from the quotes you are including, is highly indicative that you simply fail to understand the chasm between a high-level feature description of a CPU and the actual in-silicon implementation of its architecture. Zen and Carrizo share various familial traits - they are after all both x86/AMD64 CPU architectures developed by the same company. No doubt a handful of features in the core are _very_ similar too. But the overall core design, and the majority of its components? Significantly changed - otherwise we wouldn't have seen the changes we saw with Zen, nor the subsequent changes we've seen with further Zen revisions.


----------



## AusWolf (May 11, 2022)

The red spirit said:


> I said potential games.


The potential has been there for a decade. No one used it.



The red spirit said:


> It's a bit of unobtanium hardware. It's literally not available in some countries.


Are you talking about retail channels? Don't forget that large-scale corporate customers rarely walk into a PC store, or hit up Amazon asking for 20-50-100 graphics cards. They are often directly connected with Nvidia / AMD through contracts.



The red spirit said:


> They just switched from CMT to SMT, added some tweaks and improvements and boom, Ryzen was made. Compared to the huge leap from K10 to Bulldozer, Ryzen is truly similar to FX.


You're oversimplifying how a CPU works. With that sentiment, my 11700 is the same as 8 Pentium 4 HTs and some L3 cache, as it has 8 Hyper-Threaded cores while the Pentium 4 had 1.



The red spirit said:


> It's a Glide API wrapper. Works well for just that. Other old software compatibility problems aren't fixed by it. Frankly, there's almost no point to use it. Even GeForce 256 was superior to 3dfx.


First you wanted to sell me on it, now you're saying there's no point. What the heck?

But then, comparing GeForce and 3dfx cards is pointless, imo. DirectX (7 or 8, I think?) was superior to Glide, but if you wanted to run Glide, you needed a 3dfx card. Not every game supported both.



The red spirit said:


> Less cores often, lower speed. Looks harvested on low end. On high end it more complicated.


Don't forget to add lower voltage and higher stability to your list. Don't you think that requires some binning?



The red spirit said:


> Sure, but it's exactly the same philosophy. Moar cores, lower price, power usage be damned, PR to the moon. And Zen 1 also launched, when SMT was still new idea, another not tried technology in consumer market, just like CMT before. In terms of specs, it seems that cache size traditionally remained big (just like FX with high latency), just like in FX chips and concept of CCX is oddly similar to module. Infinity fabric was literally refreshed HyperTransport.
> 
> Also die shots of module and CCX:
> 
> ...


I think you're confusing chip layout and core design. Here's an article comparing the Bulldozer and Zen 1 core.


----------



## The red spirit (May 11, 2022)

Valantar said:


> For a lot of middle-ground prosumer apps, sure. For high stakes professional CAD work, medical modelling, scientific modelling, or anything remotely similar to that? No. I mean, sure, people also use consumer GPUs for stuff like that. At scale? That's debatable - a lot of people dabble in these kinds of things as a hobby or side gig, after all. But in any kind of sizeable industry perspective? No.


My dad works in power plant/heating engineering, so I know full well what their machines have. You won't ever see any Quadros, FirePros or Xeons in those. His machine at his current job has specs like this:
i7 2600K
Intel motherboard
stock cooler (thermal paste never changed)
some cheapo case with suffocated ventilation and only USB 2.0 port available
Radeon HD 7770 
Codegen power supply
4x4GB DDR3 RAM
Storage was recently upgraded from hard drive to 512GB SSD, SSD wasn't even screwed down
Windows 7

In his last job he had some random machine with AMD Phenom quad core chip. At home he used ancient PC to do some overtime, specs were:
AMD Athlon 64 3200+
DFI K8T800 Pro ALF
nVidia FX 5200 128MB
stock cooler
80GB IDE HDD
Windows XP 32 bit

I remember he once got work laptop. It was a higher end laptop... from decade ago. It only had Core 2 Duo something, 2GB RAM, 5400 rpm HDD and Intel GMA. 

I have been in hospitals too, where I was often examined, and most of the time there were Windows XP machines with Core 2 era hardware as late as 2018. There were some Windows 2000 machines too. My university has regular computers: various models with circa Phenom X4 to Sandy Bridge era i3s, onboard graphics only. Only the IT department has one beastly machine with a Xeon, Titans in SLI, 32GB RAM and (wait for it) Windows 7. My school's engineering/drafting (CAD) class only had Pentium D machines with 4GB RAM, Intel integrated graphics and Windows XP. My university's computer lab only had Sandy Bridge i3s with 4GB RAM, HDD-only storage and "blazing fast" Intel HD 2000 graphics, which took forever to draw ArcGIS maps.

I'm sorry, but I think you are heavily overestimating what businesses actually use. Xeons, Quadros and Radeon Pros are really luxury products, and nobody buys them unless strictly necessary; since those parts are basically the same as consumer hardware, they are barely ever used. It's not even about cost, but that IT people often don't even know they could benefit from those parts. You also heavily overestimate the staff in the healthcare sector and their IT knowledge. Some of them are Unga Bungas with computers. They only know (hopefully) how to do healthcare, but not healthcare + IT.

BTW, respect to that old ass HD 7770 for soldiering on for over a decade without ever being cleaned. Old school GCN aged quite well.




Valantar said:


> No, it's more complicated than that, period. The RTX A4000 is for example fully enabled, unlike the 3070 (though the 3070 Ti is - but that was also launched later). The RTX A2000 is slightly cut down (3328 cores vs. 2560 in the 3050, 3584 in the 3060, 3840 fully enabled, found only in 3060 mobile), but it's also a 70W GPU, unlike the 170W (!) 3060 that has 7.1% more cores. Different product stacks are segmented differently due to different priorities and target markets. Quadros and Nvidia RTX cards are not in any broad way more cut down than GeForce cards, they are just segmented differently due to having different priorities.


It just depends on how they decided to segment their hardware that generation.




Valantar said:


> Software TDP adjustments are quite limited, and adjusting power limits in software is not the same as Nvidia's QC process finding a desirable balance between power consumption, performance, and stability. As for stricter QC being a myth: yeah, sure. 'Cause enterprise customers wouldn't be pissed _at all_ if their GPUs failed as frequently as consumer GPUs do, of course not.


Guess what, nVidia/AMD don't do anything extra. Dude, I have used FirePro V5800 and V3750 cards. There wasn't any extra robustness to them at all. You get literally identical PCBs to consumer tier hardware, often the same voltage, and crappy reference coolers that just prevent the GPU from melting, and that's all. They are really nothing more than consumer tier cards with lower clock speeds, which in turn dramatically lowers TDP. There is no magic in them, no secret features. You get a few more knobs in the control panel, but that's all. The real difference is their drivers. I tried playing games on those cards and immediately noticed vastly superior depth rendering. Desktop rendering also appeared sharper. That's compared to a GTX 650 Ti, and yes, I set HDMI to the proper black level, as well as full RGB. There were some actual visual quality differences.

In terms of QC those cards just didn't get anything extra. The V3750 is literally the same as the HD 4670, down to the same capacitors used. Literally identical. Maybe super high end workstation cards actually get higher QC, but there's no evidence for that. BTW, that V3750 BSODed every time YouTube was launched or a YT video was embedded into a website, so the same crappy ATi drivers also make it into FirePros. AMD today gives "pro" drivers to Polaris cards, but those pro drivers are literally old mainstream drivers, which are somewhat more stable. There was no other advantage to them. You also get the same video quality as mainstream. And yes, new features take forever to get incorporated; I was finally fed up with them when I saw driver problems in modern games that just didn't exist in the mainstream drivers anymore. Pro drivers weren't updated for months.

Maybe nVidia does things differently, but AMD's pro stuff is truly nothing special. 



Valantar said:


> FP64? Yep, because even most pro customers just don't want or need it. As I've been saying the whole time - essentially only datacenters want or need FP64. And anyone else in need of it can rent a compute node on one of the dozens of cloud compute services offering it, running off of those aforementioned FP64-heavy HPC GPUs, for far less money than buying their own hardware, while running their own software still.


More like nV and AMD completely don't give a shit about them, rather than them being useless. They don't have any competition in this market, so they can do as they please.




Valantar said:


> Wait, what? Zen's _main_ advantage and improvement was its efficiency, delivering 8 cores with competitive IPC in the same power draw as Intel gave you 4, and at less than half the power of 8 FX cores.


I guess I forgot that; however, Zen ran obscenely hot at launch, just as hot as FX chips. AMD still hasn't completely solved the problem of connecting the dies to the IHS, so heat is trapped in transmission. FX chips weren't as hot as many remember, only compared to the Sandy Bridge i series. Initially the maximum temperature spec was just 62C due to poor sensor calibration, but later it was lifted to 72C. FX chips benefited from being a spread-out design, which was very efficient at transferring heat to the IHS. I also found that an FX 6300 can run passively with a Mugen 4 heatsink in an enclosed case and only reaches a reported 58C in a long Prime95 small FFT run. Do that with a Ryzen 1600X and it will throttle, not to mention reach a way higher temperature.




AusWolf said:


> The potential has been there for a decade. No one used it.


KSP 

But for real, it's very hard to find info about how PhysX worked. I know it did floating-point math, but at which precision? KSP is the only game I found an example of utilizing FP64. Others might use it too, but like I said, nobody says that their game uses FP64 or even FP32. Companies just don't share their super technical details so freely.

I read more and it seems that Arma 2 and Universe Sandbox may use it. Star Citizen's game engine does use FP64. Hellion certainly uses FP64.

From what I read, FP64 is being utilized more off-screen, where compute capabilities can be exploited. On the other hand, the Unity engine has used double precision floats too, but I can't find where specifically. UE5 is speculated to use FP64, but there's no confirmation of that. UE4 can support FP64 if needed, but that's avoided due to the performance penalty, the outright lack of support for doubles on certain hardware, and the difficulty of troubleshooting. So devs do a lot of work to make things work in FP32. If FP64 performance were great and capable cards common, it could take gaming to quite a different place than it is today. FP32 is a bottleneck, not a natural sweet spot.
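The precision problem that pushes engines like KSP's toward doubles is easy to demonstrate. A minimal sketch (assuming numpy; the distances are made up, not from any actual engine):

```python
import numpy as np

# Toy illustration of why games with huge world coordinates care about FP64:
# float32 has ~7 significant decimal digits, so a millimetre-scale step
# vanishes entirely when you are hundreds of thousands of km from the origin.
position_f32 = np.float32(600_000.0)                # 600,000 km from origin
moved_f32 = position_f32 + np.float32(0.000001)     # move 1 mm (1e-6 km)

position_f64 = np.float64(600_000.0)
moved_f64 = position_f64 + np.float64(0.000001)

print(moved_f32 == position_f32)  # True  - the 1 mm step is lost in FP32
print(moved_f64 == position_f64)  # False - FP64 still resolves the step
```

This is why FP32-only engines resort to tricks like floating-origin rebasing instead of simply storing world positions as doubles.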




AusWolf said:


> Are you talking about retail channels? Don't forget that large-scale corporate customers rarely walk into a PC store, or hit up Amazon asking for 20-50-100 graphics cards. They are often directly connected with Nvidia / AMD through contracts.


Yes, them. Dunno about you, but in Lithuania even big businesses get their hardware from retailers; the retailers get their stuff from a single big logistics center, and that center gets its stuff from, perhaps, nVidia. No business would want to deal with the expense or hassle of basically doing retail themselves.




AusWolf said:


> First you wanted to sell me on it, now you're saying there's no point. What the heck?
> 
> But then, comparing GeForce and 3dfx cards is pointless, imo. DirectX (7 or 8, I think?) was superior to Glide, but if you wanted to run Glide, you needed a 3dfx card. Not every game supported both.


Sure, if your game doesn't support anything else, then nGlide saves you, but many games soon supported DirectX, and it was superior in picture quality. So if you have a game that supports several APIs, DirectX or even OpenGL is better to use than Glide. And BTW, I'm not selling you anything; nGlide is free software.




AusWolf said:


> I think you're confusing chip layout and core design. Here's an article comparing the Bulldozer and Zen 1 core.


I'm not, but don't you think that FX revisions also changed core internals? They did, I just don't see any point in mentioning that, because it's obvious. The chip layout was indeed very similar, but if you compare the K10 core layout to FX, there's no similarity. Zen didn't make that kind of break.


----------



## Valantar (May 12, 2022)

Again, don't want to completely pollute the thread, so...


Spoiler






The red spirit said:


> My dad works in power plant/heating engineering I know full well what their machine have. You won't see any Quadros, FirePros or Xeons ever in those. His machine in current job has specs like this:
> i7 2600K
> Intel motherboard
> stock cooler (thermal paste never changed)
> ...


I have absolutely zero idea what tasks those PCs were running, so ... okay? If anything, that just indicates that those tasks can't have been particularly performance or precision sensitive. I would be rather shocked if the people designing or building a power plant, or running CFD simulations for a reactor, a high-load heat exchange system, etc., were using that class of hardware.


The red spirit said:


> I have been in hospitals too, where I was often examined and most of the time there were windows XP machine with Core 2 era hardware as late as 2018. There were some Windows 2000 machine too. My university has regular computers. They are various models with circa Phenom X4 - Sandy Bridge era i3s. Onboard graphics only. Only IT department has one beastly machine with Xeon, Titans in SLI, 32GB RAM and  (wait for it) Windows 7. My school's engineering/drafting (CAD) class only had Pentium D, 4GB RAM, Intel integrated Windows XP machines. My university's computer lab only had Sandy i3 with 4GB RAM, HDD only storage and "blazing fast" Intel HD 2000 graphics, which took forever to draw ArcGIS maps.


You seem to be mistaking statements of "these products are used in these industries" with "these products are used _in every instance, everywhere_ in these industries". Nobody here has made the latter claim. The average PC running in a hospital or university isn't likely running heavy computational workloads, but might just be used for reading and writing to various databases and other organizational tasks, and only needs hardware to match. Heck, a lot of stuff in hospitals and all kinds of laboratories is run on low-power, low performance embedded terminals of various kinds as well.

Go talk to someone managing hardware for an MRI or CT scanner, and ask them what GPUs are running those tasks. Anyone in any kind of medical imaging, really. Or someone doing research on any type of molecular biology or biochemistry that involves any kind of modelling or simulation. That's the stuff we're talking about - the stuff that needs high performance compute, ECC, and to some degree also FP64, though with the introduction of more machine learning based approaches, less of that going forward (as high precision is only needed for training models, not running inference on them - that's FP32, FP16, INT8, or some AI-specific format like Bfloat16).
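The trade-off between those formats comes down to how much precision each one carries. A quick sketch using numpy's format metadata (machine epsilon, i.e. the gap between 1.0 and the next representable value; numpy has no native bfloat16, so only the IEEE formats are shown):

```python
import numpy as np

# Machine epsilon and approximate decimal digits for the IEEE-754 formats
# mentioned above - a rough view of why model training wants FP32 (or FP64)
# while inference often tolerates FP16 or narrower.
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__}: eps={float(info.eps):.3e}, "
          f"~{info.precision} decimal digits")
```

FP16 carries only about 3 decimal digits versus roughly 6 for FP32 and 15 for FP64, which is why high-precision formats matter for accumulating gradients during training but much less for running inference.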

Of course a lot of the heavier compute tasks, particularly within research, is increasingly offloaded to off-site cloud compute services - Azure, AWS, or any of dozens and dozens of smaller providers. Unless the hospital or university is sufficiently wealthy to build and run their own HPC clusters, of course.


The red spirit said:


> I'm sorry, but I think you are heavily overestimating what businesses actually use. Xeons, Quadros and Radeon Pros are really a luxury products and nobody buys them unless strictly necessary and since those parts are basically the same as consumer hardware, they are barely ever used. It's not even about cost, but that IT people don't even know that they could benefit from those parts. You also heavily overestimate the staff in healthcare sector and their IT knowledge. Some of them are Unga Bungas with computers. They only know (hopefully) how to do healthcare, but not healthcare + IT.


See above. You're misinterpreting "these products are used in these industries" as "these products are _always, exclusively_ used in these industries." We are telling you where the relevant use cases are, we are not making all-encompassing claims about the hardware used in all aspects of these industries.


The red spirit said:


> BTW respect to that old ass HD 7770 for soldiering on for over decade without ever being cleaned. Old school GCN aged quite well.


Absolutely - GCN has aged very well, especially in terms of compute, even if its efficiency is obviously crap by today's standards. And even consumer electronics can run for a _long, long_ time - I ran my old Core2Quad for nearly a decade as my main CPU, with a 31% overclock for the last few years of my ownership of it, before selling it on to someone else (who at least never reported to me that it had failed, so I'm assuming it still works). GPUs can also work for a long time as long as they aren't hammered or some onboard component fails - MOSFETs and capacitors are the main killers of those things, on top of ensuring good overall designs that avoid internal resonance, hot spots, etc.


The red spirit said:


> It just depends on how they decided to segmentate their hardware that gen.


Yes, that's what I've been saying the whole time. That is not equal to them being "crippled" or "harvested". Different tools for different uses made to different standards.


The red spirit said:


> Guess what, nVidia/AMD don't do any extra. Dude, I have used FirePro v5800 and v3750 cards. There wasn't any extra robustness on them at all. You get literally identical PCBs to consumer tier hardware, often the same voltage, crappy reference coolers, that just prevent GPU from melting and that's all. They are really nothing more than consumer tier cards with lower clock speed, which in turn dramatically lowers TDP. There is no magic in them, no secret features. You get a bit more knobs in control panel, but that's all. The real difference is their drivers. I have tried playing games on those cards and I immediately noticed vastly superior depth rendering. Also desktop rendering appeared to be sharper. That's compared to GTX 650 Ti. And yes I set HDMI to proper black level, as well as full RGB. There were some actual visual quality differences.


I never said they had "secret features" or magic. Also, have you considered that for a resource-strapped company like AMD was back in those days, they might take the cheaper approach of engineering one good board, rather than one good and one slightly less good but cheaper board? AFAIK the workstation market was less dominated by Nvidia then than now. A lot of this work is also transferable, at least if the workstation board doesn't require something like a ton of PCB layers or more premium PCB materials that would necessitate a separate design for the consumer card.


The red spirit said:


> In terms of QC those cards just didn't have any extra. v3750 is literally the same HD 4670 down to same capacitors used. Literally identical. Maybe super high end workstation cards actually get higher QC, but there's no evidence for that. BTW that v3750 BSODed every time YT was launched or if YT video was embedded into website, so same crappy ATi drivers also make into FirePros. AMD today give "pro" drivers to Polaris cards, but those pro drivers are literally old mainstream drivers, which are somewhat more stable. There was no other advantage to them. You also get the same as mainstream video quality. And yes, new features take forever to get incorporated and I was finally fed up with them, when I saw driver problems with modern games and that problem just didn't exist in mainstream drivers anymore. Pro drivers weren't updated for months.


Pro drivers are literally never updated as frequently as consumer ones - that's a feature, not a bug. Bugfixes are ideally pushed out rapidly, but not necessarily, and regular driver updates are intentionally slow, as the last thing you want when doing work is for a driver update to break something. And yes, the same goes for adding features - unless those features add significant value to pro customers, it's safer to leave them out in case they have some unexpected side effect. I kind of doubt AMD - especially considering how little money they had to spend on driver development back then - had the resources to care about youtube crashing a Pro GPU.


The red spirit said:


> Maybe nVidia does things differently, but AMD's pro stuff is truly nothing special.


I'd like to see more conclusive evidence for that than decade-old pro GPUs and half-decade-old consumer GPUs running stripped-down "pro" drivers.


The red spirit said:


> More like nV and AMD completely don't give a shit about them, rather than them being useless. They don't have any competition in this market, so they can do as they please.


Making an FP64 accelerator isn't all that difficult - at least compared to the other hardware accelerators various startups make all the time for AI and the like. If there was a market for this, they would exist. It's reasonably clear that to the degree the market for this exists, it is sufficiently saturated by AMD and Nvidia's top-end compute accelerators, which still have 2:1 FP64 capabilities. And given the massive surge in cloud compute in recent years, the need for on-site FP64 compute is further diminishing, given its mostly specialized applicability.


The red spirit said:


> I guess I forgot that, however Zen ran obscenely hot at launch. Just as hot as FX chips.


That's a misconception - though one AMD caused for themselves with some _really friggin' weird_ sensors. Those thermals included significant (>20°C) offsets, if you remember. So when my first gen 1600X told the system it was running at 80+°C, the reality was that it was sitting in the low to mid 60s. They barely ran warm at all, they just made themselves look that way.

AFAIK this (stupid) way of doing things was done to maintain a (very) low tJmax rating without forcing motherboard makers and OEMs to fundamentally reconfigure their fan curves and BIOS configurations. Which seemed to stem from an insecurity regarding the actual thermal tolerances of this new architecture on an untested (and not all that great) node. The subsequent removal of these thermal offsets on newer series of CPUs tells us that this caution is no longer there.


The red spirit said:


> AMD still haven't completely solved that problem with connecting chiplets to IHS, therefore heat is trapped in transmission.


That ... again, just isn't a very accurate description. Zen 3 (and to some degree Zen 2) is difficult to cool due to its very high thermal density, which is in turn due to its relatively small core design on a dense node. Higher thermal density makes spreading the heat sufficiently and with sufficient speed more of a challenge, simply due to it being more concentrated - which both raises absolute temperatures in the hotspot and means heat has to travel further across an IHS. Of course the newer cores drawing as much or more power per core than older ones in order to reach higher clocks doesn't help. In the end, this comes down to physics: concentrating the same heat load in a smaller area makes it more difficult for any material contacting that area to efficiently and rapidly dissipate that heat, and to keep the temperature at the heat source at the desired temperature. This is unavoidable unless you also change the material or construction of the IHS and TIM in between die and IHS. I've speculated previously on whether we'll see vapor chamber based IHSes at some point specifically for this reason, as at some point copper alone just won't be sufficient.
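The underlying point is just heat flux: the same power through a smaller area gives a higher W/mm², which is what makes the hotspot harder to cool. A trivial sketch with made-up, purely illustrative die areas (not actual AMD specs):

```python
# Heat flux: power spread over die (or hotspot) area.
def flux_w_per_mm2(power_w: float, area_mm2: float) -> float:
    return power_w / area_mm2

# Illustrative only: 100 W through a large older die vs a small chiplet.
print(flux_w_per_mm2(100.0, 200.0))  # 0.5 W/mm^2
print(flux_w_per_mm2(100.0, 80.0))   # 1.25 W/mm^2 -- same heat, harder to cool
```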

There are obviously optimizations to be done: a thinner diffusion barrier on top of the die improves thermal transfer (see the Intel ... was it 10900K, where they effectively sanded down every die for those?), but runs some risk of premature hardware failure due to TIM materials diffusing into the silicon and changing its characteristics. This is generally avoidable though, with some forethought. Thinner solder between the die and IHS will also help, though that's difficult in practice for mass production. Liquid metal outperforms solder, so that's another possible solution - and one that's been used in mass produced electronics with great success for quite a few years now. So there are still ways to improve things. But none of that boils down to "AMD not having solved that problem with connecting chiplets to IHS".


The red spirit said:


> FX chips weren't as hot as many remember, just compared to Sandy I series they were. Initially maximum temperature spec was just 62C due to poor sensor calibration, but later it was lifted to 72C. FX chips were great at being spread out design, which was very efficient at transferring heat to IHS. I also found out that FX 6300 can run passively with Mugen 4 heatsink in enclosed case and it only reaches 58C reported temperature in long prime95 small FFT run. Do that with Ryzen 1600x and it will throttle, not to mention reach way higher temperature.


FX didn't run all that hot unless you wanted them to compete with Intel at the time, which forced you into 200+W overclocks (or buying one of their 200+W SKUs). Outside of that it was ... fine?


The red spirit said:


> KSP


Yep, that one game that is essentially a gamified high precision physics simulation at its very core, with _very_ few other aspects to the game. One might almost wonder if _something_ makes it uniquely suited to adopting FP64?
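The effect is easy to demonstrate: far from the origin, FP32 simply runs out of significant digits. A minimal numpy sketch (illustrative numbers, not KSP's actual code):

```python
import numpy as np

# A craft 100 km from the origin takes a 1 mm step.
offset = 100_000.0   # metres
step = 0.001         # 1 mm

# float32: the ULP at 100 km is ~7.8 mm, so the 1 mm step is rounded away.
p32 = np.float32(offset) + np.float32(step)
print(p32 - np.float32(offset))      # 0.0 -- the craft never moved

# float64: ~15-16 significant digits, so the step survives.
p64 = np.float64(offset) + np.float64(step)
print(p64 - offset)                  # ~0.001
```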


The red spirit said:


> But for real, it's very hard to find info about how PhysX worked, I know that it did FP, but which? KSP is the only game I found example of utilizing FP64. Others might use it to, but like I said, nobody says that their game uses FP64 or even FP32. companies just don't share their super technical details so freely.


PhysX was, AFAIK, FP32 through CUDA.

And given how game developers love to use the technical specificities of their games to drum up interest, it would be really weird if there were any notable titles out there using FP64 in any significant way with nobody aware of it. As has been said time and time again now: FP64 is more complex to program for and its only benefit is higher precision, which generally isn't necessary in games (as FP32 is already quite precise). The reason nobody is using it or talking about using it is that all it would gain them is more work and worse performance. It would have zero noticeable benefits in the _vast_ majority of games.


The red spirit said:


> I read more and it seems that Arma 2, Universe Sandbox may use it. Star Citizen's game engine does use FP64. Hellion certainly uses FP64.


Yet they run fine on modern, low-FP64 architectures, right? So, presumably, for whatever it's used for in these games, either it's relatively low intensity, or it's run server-side rather than locally. And crucially, one of the actual useful applications of "AI" (neural networks) is running these kinds of simulations much faster, at much lower precision, yet with similar accuracy in the outcomes, because the algorithms themselves make up for the lower precision.


The red spirit said:


> Form what I read, FP64 is being utilized more off-screen, where compute capabilities can be utilized.


It would likely be useful in things like simulating a vast physics-based universe (Star Citizen) or large amounts of highly complex AI over a significant amount of time, in scenarios where it's crucially important that those AIs or physics simulations are repeatable and predictable to a very high degree. That's what FP64 is good for - high precision, i.e. predictable and repeatable outcomes when running the same simulation many times, to a _very_ high degree. FP32 is already pretty good at that, but generally insufficient for something really complex and high stakes like MRI imaging. Simulating a persistent physics-based universe isn't all that different from that - but that's also mainly done server-side.
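The repeatability angle is also easy to show: naively accumulating a timestep drifts far more in FP32 than in FP64. A deliberately naive sketch (real physics engines sum more carefully):

```python
import numpy as np

# Accumulate a 0.1 s timestep 100,000 times (~2.8 h of simulated time).
total32 = np.float32(0.0)
total64 = 0.0
for _ in range(100_000):
    total32 += np.float32(0.1)
    total64 += 0.1

print(total32)   # drifts visibly from 10000
print(total64)   # essentially 10000
```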


The red spirit said:


> But on other hand, Unity engine has used double precision floats too, but I can't find where specifically. UE5 is speculated to use FP64, but there's no confirmation of that. UE4 can support FP64 if needed, but that's avoided due to performance penalty, lack of support on certain hardware for doubles altogether and hardness of troubleshooting. So devs do a lot of work to make things work in FP32. If FP64 performance was great and capable cards common, it could take gaming to quite a different space than it is today. FP32 is a bottleneck, not a natural sweet spot.


The first half of what you're saying here is correct, but the second half is pure conjecture and speculation - and to some degree contradicts the first half. After all, if _difficulty troubleshooting_ is a problem of FP64, how can that mean that devs are doing _more work_ to make FP32 work for them? That sentence literally tells us that _FP64_ would be more work, not the other way around. And it's not surprising that most game engines can or do support FP64 - it exists, and it has certain specialized uses. You're also speculating that there is significant pent-up demand for the ability to use high performance FP64 in games, which isn't supported by what you said before. That's just speculation. That it's avoided due to having drawbacks doesn't mean it would be widely adopted if some or all of those drawbacks were reduced, as it would still need to be _useful_ in some significant way.


The red spirit said:


> Yes them, dunno about you, but in Lithuania even big businesses get their hardware from retailers, retailer do get their stuff from single big logistic center and that center gets stuff from perhaps nVidia. No business would want to deal with expense or hassle of basically doing retail themselves.


Large enterprises either buy directly from the producing companies, or do so through the intermediary of a distributor, but with the deal then typically being a three-party deal, with the distributor getting a cut due to their infrastructure and personnel being used. The question is _how_ large of an enterprise you have, as this takes some size to do - but all businesses of sufficient size do so, as it is faster, cheaper, and far more flexible than buying retail or buying from the distributor without involving the producing company. If that doesn't happen in Lithuania, either the companies in question aren't of a sufficient size, or someone _really_ needs to tell them that they ought to be doing this, as they would otherwise be throwing away a lot of time and money. They're in the EU after all, so they can use Nvidia/Intel/AMD/whoever's EU distribution networks and professional sales networks.


The red spirit said:


> I'm not, but don't you think that FX revisions also changed core internals? It did, I just don't see any point in mentioning that, because it's obvious. Chip layout was indeed very similar, but if you compare K10 core layout to FX, there's no similarity. Zen didn't do that.


You _are_ confusing those two. Yes, the various heavy machinery revisions updated various architectural traits of the cores - like that 20% IPC gain we spoke about before with Carrizo. Just as Zen 1, 1+, 2 and 3 also have significant changes between their cores. The difference is, these are revisions on the same design (though Zen3 is described as a ground-up redesign, tweaking literally every part of the core, with previous revisions being much more conservative). Within the same instruction set, and especially the same company and design teams (and associated patents, techniques and IP) there is obviously a fuzzy line between a new architecture and a tweaked one, but drawing lines is still possible - and frequently done. Zen1 was _far_ greater of a departure from anything AMD had made previously than any subsequent design, _despite_ Zen3 being a ground-up redesign. Sharing a vaguely similar layout tells us nothing of any significant value about the low level architectural details of a chip.


----------



## The red spirit (May 12, 2022)

Valantar said:


> Spoiler: Reply
> 
> 
> 
> I have absolutely zero idea what tasks those PCs were running, so ... okay? If anything, that just indicates that those tasks can't have been particularly performance- or precision-sensitive. I would be rather shocked if the people designing or building a power plant, or running CFD simulations for a reactor, a high-load heat exchange system, etc. were using that class of software.





Spoiler: Reply



Well I guess we are fucked then. My dad engineers some power plant periphery. Docs use those computers for MRI, X-rays and other stuff. In terms of engineering, the most often used software is AutoCAD, Bentley, Dassault, Maya and some other bits. That old 7770 has a decent FP64 ratio at least. But yeah, this is where you could sometimes maybe get a boost from a workstation card. But it's FP64 that is important; having a workstation card is not.

I have found this dude, who has been making videos and seemingly does a similar job:

(embedded video)

Yep, not much love for pro cards from him. The only other person I found who can actually utilize a pro card was some filmmaker with a high budget, but those people buy several pro cards, use all the VRAM, really need bleeding edge, and use some special features. But like this engineering dude said, some people will have a need for 24/7 support and can't compromise on VRAM, performance or some features - but we are still talking about a tiny niche.



Valantar said:


> You seem to be mistaking statements of "these products are used in these industries" with "these products are used _in every instance, everywhere_ in these industries". Nobody here has made the latter claim. The average PC running in a hospital or university isn't likely running heavy computational workloads, but might just be used for reading and writing to various databases and other organizational tasks, and only needs hardware to match. Heck, a lot of stuff in hospitals and all kinds of laboratories is run on low-power, low performance embedded terminals of various kinds as well.


I dunno, I'm quite torn about such a statement. You would want pro hardware in life-critical computers, and universities should have at least one machine capable of some pro stuff. Think about IT researchers and other things. But still, if those cards actually did enough to be truly pro, there wouldn't be so much resistance to buying them.



Valantar said:


> Of course a lot of the heavier compute tasks, particularly within research, is increasingly offloaded to off-site cloud compute services - Azure, AWS, or any of dozens and dozens of smaller providers. Unless the hospital or university is sufficiently wealthy to build and run their own HPC clusters, of course.


I don't think that it would be so bad for them to have one or two machines with mid tier pro cards for research purposes only. Those are quite "affordable".



Valantar said:


> Absolutely - GCN has aged very well, especially in terms of compute, even if its efficiency is obviously crap by today's standards.


I sort of made it awesome. If you set the TDP limit to 100 watts, it still reaches almost its whole frequency. You lose only 100-200 MHz, which isn't much considering you save 85 watts. Such a stupidly simple tweak basically makes it superior to the RX 5500 XT or RX 6500 XT in terms of performance and power efficiency. GCN was really efficient, but AMD state in whitepapers that they decided to clock it higher as an efficiency trade-off. All I can say is that it was a mistake, but on the other hand people wouldn't have bought GCN cards if they were slower than GeForces.

And AMD ended up like that because Polaris was supposed to come earlier, and once it came out, nVidia had already released their next gen, which made Polaris look bad. Not to mention how damn late Vega was and how pathetic it ended up looking. It was supposed to be a competitor to the 980 Ti, but it came out when the 1080 Ti was already a thing. Meanwhile, the HD 7000 series and R series were quite well timed and overall decent, but did they blow it on coolers, power consumption and, not to mention, tessellation. nVidia at that point was suffering from bad releases, so yeah.




Valantar said:


> And even consumer electronics can run for a _long, long_ time - I ran my old Core2Quad for nearly a decade as my main CPU, with a 31% overclock for the last few years of my ownership of it, before selling it on to someone else (who at least never reported to me that it had failed, so I'm assuming it still works). GPUs can also work for a long time as long as they aren't hammered or some onboard component fails - MOSFETs and capacitors are the main killers of those things, on top of ensuring good overall designs that avoid internal resonance, hot spots, etc.


Well, I had an awful experience with reliability of computers. I went through 3 AM3+ boards and each lasted only about 2 years until they all croaked. I have to admit that I abused one, but the two others just died for no good reason. I also went through 3 boards with socket 754; one lasted over a decade, meanwhile the others were DOA. I had 2 graphics cards die on me for no reason. I had an HDD fail by popping some components and stinking for the rest of the day. One HDD died randomly after 2 years of use. One HDD croaked silently too. I had warranty hell with FM2+ boards, and those that work are kinda shit and have their own problems like very hot VRMs, downclocking and so on. BTW, that FirePro v5800 just decided to stop working one day and died just like that. It wasn't overheating, artifacting or doing anything strange the day before, but it stopped outputting video completely. I had one malfunctioning router whose WAN sort of failed (dropped the connection at any higher load). I never saw memory, a PSU or a CPU fail, but the rest dies rather easily and often for no obvious reason. Some of this hardware was bought new, some was eBay specials. I still have paranoia that motherboards can just die for some reason. It's pretty obvious that my experience hasn't been great. I now have quite a few leftover CPUs, RAM sticks and GPUs that I can't use. Those things last.



Valantar said:


> Yes, that's what I've been saying the whole time. That is not equal to them being "crippled" or "harvested". Different tools for different uses made to different standards.


Sort of. Most Quadros use dies with fewer cores and lower clock speeds, but faster memory. Same deal with AMD.




Valantar said:


> I never said they had "secret features" or magic. Also, have you considered that for a resource-strapped company like AMD was back in those days, they might take the cheaper approach of engineering one good board, rather than one good and one slightly less good but cheaper board? AFAIK the workstation market was less dominated by Nvidia then than now. A lot of this work is also transferable, at least if the workstation board doesn't require something like a ton of PCB layers or more premium PCB materials that would necessitate a separate design for the consumer card.


Or imagine that they really don't give them any better hardware.




Valantar said:


> Pro drivers are literally never updated as frequently as consumer ones - that's a feature, not a bug. Bugfixes are ideally pushed out rapidly, but not necessarily, and regular driver updates are intentionally slow, as the last thing you want when doing work is for a driver update to break something. And yes, the same goes for adding features - unless those features add significant value to pro customers, it's safer to leave them out in case they have some unexpected side effect. I kind of doubt AMD - especially considering how little money they had to spend on driver development back then - had the resources to care about youtube crashing a Pro GPU.


But that sort of makes the web unusable nowadays, at least. So many sites have YT embedded. Also, if YT crashes, who says any other video won't trigger a BSOD? It might be a decoder-wide bug. That's just a really pathetic driver fail.



Valantar said:


> I'd like to see some more conclusive evidence as to that than decade-old pro GPUs and half-decade old consumer GPUs running stripped down "pro" drivers.


Wanna buy me some Radeons?  I would bench them and watch like a hawk for visual quality differences, but I would like to keep the Radeon Pro model.




Valantar said:


> Making an FP64 accelerator isn't all that difficult - at least compared to the other hardware accelerators various startups make all the time for AI and the like. If there was a market for this, they would exist. It's reasonably clear that to the degree the market for this exists, it is sufficiently saturated by AMD and Nvidia's top-end compute accelerators, which still have 2:1 FP64 capabilities. And given the massive surge in cloud compute in recent years, the need for on-site FP64 compute is further diminishing, given its mostly specialized applicability.


Like I said, those products are inaccessible to many buyers, not to mention their likely sky-high prices. If you are a prosumer, you are better off getting a Radeon VII or some Vega card from eBay.



Valantar said:


> That's a misconception - though one AMD caused for themselves with some _really friggin' weird_ sensors. Those thermals included significant (>20°C) offsets, if you remember. So when my first gen 1600X told the system it was running at 80+°C, the reality was that it was sitting in the low to mid 60s. They barely ran warm at all, they just made themselves look that way.
> 
> AFAIK this (stupid) way of doing things was done to maintain a (very) low tJmax rating without forcing motherboard makers and OEMs to fundamentally reconfigure their fan curves and BIOS configurations. Which seemed to stem from an insecurity regarding the actual thermal tolerances of this new architecture on an untested (and not all that great) node. The subsequent removal of these thermal offsets on newer series of CPUs tells us that this caution is no longer there.


It's basically the same as with FX then, but even later Ryzen gens suffer from poor heat transfer through the IHS, so I'm not sure I can agree that it was just a sensor offset issue. Not to mention the fact that some Zen chips have really uneven IHS surfaces. That just wasn't a problem before Zen.




Valantar said:


> That ... again, just isn't a very accurate description. Zen 3 (and to some degree Zen 2) is difficult to cool due to its very high thermal density, which is in turn due to its relatively small core design on a dense node. Higher thermal density makes spreading the heat sufficiently and with sufficient speed more of a challenge, simply due to it being more concentrated - which both raises absolute temperatures in the hotspot and means heat has to travel further across an IHS. Of course the newer cores drawing as much or more power per core than older ones in order to reach higher clocks doesn't help. In the end, this comes down to physics: concentrating the same heat load in a smaller area makes it more difficult for any material contacting that area to efficiently and rapidly dissipate that heat, and to keep the temperature at the heat source at the desired temperature. This is unavoidable unless you also change the material or construction of the IHS and TIM in between die and IHS. I've speculated previously on whether we'll see vapor chamber based IHSes at some point specifically for this reason, as at some point copper alone just won't be sufficient.
> 
> There are obviously optimizations to be done: a thinner diffusion barrier on top of the die improves thermal transfer (see the Intel ... was it 10900K, where they effectively sanded down every die for those?), but runs some risk of premature hardware failure due to TIM materials diffusing into the silicon and changing its characteristics. This is generally avoidable though, with some forethought. Thinner solder between the die and IHS will also help, though that's difficult in practice for mass production. Liquid metal outperforms solder, so that's another possible solution - and one that's been used in mass produced electronics with great success for quite a few years now. So there are still ways to improve things. But none of that boils down to "AMD not having solved that problem with connecting chiplets to IHS".


All in all, Intel fared a bit better there, and despite the advertised nm numbers, they actually had smaller nodes. That really looks more like an AMD-only problem. And then there's nVidia, who don't have the problem because they just cool a bare die. I guess we need some "free titty" movement for computer hardware too. 



Valantar said:


> FX didn't run all that hot unless you wanted them to compete with Intel at the time, which forced you into 200+W overclocks (or buying one of their 200+W SKUs). Outside of that it was ... fine?


Yep, and if you turned off boost, it gained a lot of power efficiency. But that's a fail. It was built for high clocks - that was the sole reason for the IPC sacrifices, the older node and the smaller cores - and it just couldn't achieve them. Even FX-based Opterons ended up being quite crap compared to Sandy Xeons, because of how ridiculously slow each core was. You could buy a 16-core Opteron just to see it beaten by an 8-core Sandy Xeon. And then again, Sandy Core i parts had ridiculously low power usage too. The i7 only had around 60 or 70 watts of real usage. The i5 was even more economical. So FX truly was only better at offering a lot of weak cores for a low price. And I think that was a reasonable value proposition. You could get an FX 6300 for the price of a Pentium and an FX 8320 for the price of an i5. But parts like the FX 9590 were just plain stupid and insane. And that thing murdered boards and their VRMs. But I have to admit that I admire it a bit for being crazy and something unusual, and for showing what happened to the architecture at such high speeds.

Looking back now, it doesn't even look that insane, when we have even hotter and even less efficient chips like Core i9s. I just hate those i9s with a passion for some reason. And I really don't like that Intel didn't have the guts to strongly enforce TDP limits on them, since those chips then become very efficient and not as stupid. And I hate nVidia even more, because they don't even have non-K (aka not clocked balls to the wall) versions of their cards. I would like to see LE cards with the same cores but a lower TDP.




Valantar said:


> PhysX was, AFAIK, FP32 through CUDA.


And it was actually good technology that really made a difference, but sadly it basically died like EAX or A3D. nVidia did a lot to monopolize it, and that killed it.




Valantar said:


> And given how game developers love to use the technical specificities of their games to drum up interest, it would be really weird if there were any notable titles out there using FP64 in any significant way with nobody aware of it.


Counter-question: are you aware of games using INT8 or FP16? It's just very technical info that nobody really talks about, because barely anyone understands it and it's non-marketable. I have never seen any game dev marketing their game as using FP32 either.
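For what it's worth, FP16's limits are easy to show with numpy, which is also why games confine it to colour-like data rather than world coordinates - a minimal sketch:

```python
import numpy as np

# float16 carries ~3 decimal digits. At 2048 its ULP is already 2.0,
# so adding 1.0 is simply rounded away.
x = np.float16(2048.0)
print(float(x + np.float16(1.0)))   # 2048.0

# Near 1.0 (e.g. colour values in [0, 1]) it is fine:
c = np.float16(0.5) + np.float16(0.25)
print(float(c))                     # 0.75
```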



Valantar said:


> It would likely be useful in things like simulating a vast physics-based universe (Star Citizen) or large amounts of highly complex AI over a significant amount of time, in scenarios where it's crucially important that those AIs or physics simulations are repeatable and predictable to a very high degree. That's what FP64 is good for - high precision, i.e. predictable and repeatable outcomes when running the same simulation many times, to a _very_ high degree. FP32 is already pretty good at that, but generally insufficient for something really complex and high stakes like MRI imaging. Simulating a persistent physics-based universe isn't all that different from that - but that's also mainly done server-side.


Those could still be games, just not conventional games like GTA or CoD. I think various simulators could benefit from a strong FP64 presence in the market. Imagine Assetto Corsa with FP64, good crash modelling, maybe voxels and other tech stuff. The game would be simulation porn.



Valantar said:


> The first half of what you're saying here is correct, but the second half is pure conjecture and speculation - and to some degree contradicts the first half. After all, if _difficulty troubleshooting_ is a problem of FP64, how can that mean that devs are doing _more work_ to make FP32 work for them? That sentence literally tells us that _FP64_ would be more work, not the other way around. And it's not surprising that most game engines can or do support FP64 - it exists, and it has certain specialized uses. You're also speculating that there is significant pent-up demand for the ability to use high performance FP64 in games, which isn't supported by what you said before. That's just speculation. That it's avoided due to having drawbacks doesn't mean it would be widely adopted if some or all of those drawbacks were reduced, as it would still need to be _useful_ in some significant way.


No, you probably missed something. Game developers could benefit from FP64. FP64 is good for them and could make their job easier, faster, or just higher quality; however, the currently existing tools in game engines aren't great. They have their own problems that make the whole FP64 ordeal problematic. Due to that, and the rarity or poor performance of FP64 hardware, devs are basically forced to convert FP64 code into functional FP32 code; the process is complicated, but the tools to do it are of higher quality.




Valantar said:


> Large enterprises either buy directly from the producing companies, or do so through the intermediary of a distributor, but with the deal then typically being a three-party deal, with the distributor getting a cut due to their infrastructure and personnel being used. The question is _how_ large of an enterprise you have, as this takes some size to do - but all businesses of sufficient size do so, as it is both faster, cheaper, and far more flexible than buying retail or buying from the distributor wihtout involving the producing company. If that doesn't happen in Lithuania, either the companies in question aren't of a sufficient size, or someone _really_ needs to tell them that they ought to be doing this, as they would otherwise be throwing away a lot of time and money. They're in the EU after all, so they can use Nvidia/Intel/AMD/whoever's EU distribution networks and professional sales networks.


Lithuania is a small country with a population of 2.7 million people. Most companies aren't that big and most companies aren't tech companies. Our biggest companies only have like 2k people hired, and they are multinational companies too, so the number actually hired in Lithuania is most likely below 2k. Our biggest-cap companies are also listed as small-cap companies by Morningstar. Many medium-size companies here only have 10-100 people. That definitely limits some business and some kinds of investments. Either way, I'm not a fan of buying restrictions on certain hardware. There's no need for them to exist. Nor is there any need to have, say, Quadros and FP64 Quadros. That's just more overhead for nV to manage almost identical products.




Valantar said:


> You _are_ confusing those two. Yes, the various heavy machinery revisions updated various architectural traits of the cores - like that 20% IPC gain we spoke about before with Carrizo. Just as Zen 1, 1+, 2 and 3 also have significant changes between their cores. The difference is, these are revisions on the same design (though Zen3 is described as a ground-up redesign, tweaking literally every part of the core, with previous revisions being much more conservative). Within the same instruction set, and especially the same company and design teams (and associated patents, techniques and IP) there is obviously a fuzzy line between a new architecture and a tweaked one, but drawing lines is still possible - and frequently done. Zen1 was _far_ greater of a departure from anything AMD had made previously than any subsequent design, _despite_ Zen3 being a ground-up redesign. Sharing a vaguely similar layout tells us nothing of any significant value about the low level architectural details of a chip.


Fine, fair enough about core internals, but AM3+ chips later got additional instruction sets, like FMA3 I think, and some other bits. Also, Carrizo may have been a small win in performance, but despite the 20% IPC gain, it also gained like 50% power efficiency, which is huge. Also, if we go from Zambezi to Carrizo, so many things were tweaked, changed or modified. It was a lot more than ever before. Importantly, Carrizo wasn't even the last FX derivative; there were some AM4 FX-based chips, which were the last revision, and they also gained some performance, efficiency, overclockability, DDR4 and instructions again. But those chips again were gimped due to the lack of L3 cache and the 28nm node. Had they been ported to 14nm and gotten L3 cache, they might not have sucked at all and we may have gotten Zen-like performance out of them. Compared to the Athlon 200GE (the closest actual competitor), the FX-derived AM4 chips don't seem to have much of a multicore performance difference. I know they fall flat in single-core benches, but then we get into that core feud: should we count a module as one core with HT, or as two cores?










They were pretty close in Cinebench, but that A8 is a quite low-clocked part. So whether FX was bad or just on an ancient node is hard to say now. Not to mention that FX had a shit ton of revisions and redesigns. It was more like a rolling architecture.


----------



## zx128k (May 16, 2022)

I have been reading some reviews of this card; performance is much lower on PCIe 3.0, while PCIe 4.0 performs much better. Basically, don't pair this card with a system that has no PCIe 4.0 support.


----------



## Valantar (May 16, 2022)

zx128k said:


> I have been reading some reviews of this card; performance is much lower on PCIe 3.0, while PCIe 4.0 performs much better. Basically, don't pair this card with a system that has no PCIe 4.0 support.


During the previous 12 pages of discussion in this thread, several such reviews have already come up - from here at TPU, TechSpot/Hardware Unboxed, and more. So yeah, that's quite well known - though also very dependent on the game according to members here with actual hardware in hand.

@The red spirit see below:


Spoiler






The red spirit said:


> Well I guess we are fucked then. My dad engineers some power plant periphery. Docs use those computers for MRI, X-Rays and other stuff. In terms of engineering, most often used software is AutoCAD, Bentley, Dassault, Maya and some other bits. That old 7770 has decent FP64 ratio at least. But yeah, this is where you could get a boost from workstation card sometimes maybe. But it's FP64 that is important, meanwhile having a workstation card is not.


In those cases I would absolutely agree that FP64 capabilities are more important than whatever branding your GPU has - depending on software and driver support, of course. Software like that is often very picky in what hardware they will let accelerate their workloads. I would be quite surprised to see the types of computers you describe running MRIs, but most likely that's more down to old equipment than anything else - if it ran the software when it was new, it obviously still does so, and not changing anything unnecessarily is good in these situations. But damn, it must be _slow_.


The red spirit said:


> I have found this dude, who has been making videos and seemingly does a similar job:
> 
> 
> 
> ...


That's an interesting video! I don't quite see where you get "not much love for pro cards" from though - he's crystal clear that if he's recommending mission critical hardware purchases for a business, he _only_ recommends Quadros, as they can be relied on to get direct professional-level support from Nvidia. And that's a crucial difference: as an individual with the knowledge and skills to pick hardware for yourself, you're also most likely able to work around various issues on your own. That's not a solution that scales to a business with any number of non-enthusiast employees. That's where the need for reliability and support comes in - and why it costs the money it does. Remember: in most professional applications, performance is always second to reliability. You want the job done as quickly as possible, but running a few percent slower but _always_ finishing is far better than randomly having to spend hours or days troubleshooting some weird issue.


The red spirit said:


> I dunno, I'm quite torn about such statement. You would want pro hardware in life critical computers and universities should have at least one machine capable of some pro stuff. Think about IT researchers or other things. But still, if those cards actually did enough things to be truly pro, there wouldn't be so much resistance in buying them.


But for the type of professionals you're describing, there isn't - outside of hobbyist/enthusiast circles where everyone is aware that consumer products can do a lot of the same stuff much cheaper, but without the driver validation, development support, and end user support.


The red spirit said:


> I don't think that it would be so bad for them to have one or two machines with mid tier pro cards for research purposes only. Those are quite "affordable".


It's entirely possible that they do, but for the most part it's _far_ cheaper for them to just rent a remote compute node.


The red spirit said:


> I sort of made it awesome. If you set the TDP limit to 100 watts, it still reaches almost the whole frequency. You lose only 100-200 MHz, which isn't much, considering you save 85 watts. Such a stupidly simple tweak basically makes it superior to the RX 5500 XT or RX 6500 XT in terms of performance and power efficiency. GCN was really efficient, but AMD states in its whitepapers that they decided to clock it higher at an efficiency trade-off. All I can say is that it was a mistake, but on the other hand, people wouldn't have bought GCN cards if they were slower than GeForces. And AMD ended up like that because Polaris was supposed to come earlier, and once it came out, nVidia had already released their next gen, which made Polaris look bad. Not to mention how damn late Vega was and how pathetic it ended up looking. It was supposed to be a competitor to the 980 Ti, but it came out when the 1080 Ti was already a thing. Meanwhile, the HD 7000 series and R series were quite well timed and overall decent, but did they blow it on coolers and power consumption, not to mention tessellation. nVidia at that point was suffering from bad releases, so yeah.


You're presenting a pretty uneven playing field here though. "If I undervolt and underclock my older GPU it's more efficient and as fast as a newer, lower-tier GPU clocked much higher" isn't particularly surprising. That's just how hardware works, especially with clock scaling. The issue then becomes absolute performance and die area. If your RX 580, with its 36 CUs and 232mm² die, can match a 22CU/158mm² RX 5500 or 16CU/107mm² RX 6500 XT, that's ... okay? Not great though. In a low-to-midrange tier like that it might be fine if the process node is sufficiently cheap, but you quickly run into issues scaling performance upwards, as you either run into reticle size limits for the die or just end up with a massive, very expensive die. And, of course, GCN didn't allow for more than 64 CUs no matter what, which is why Vega was the shitshow that it was. If AMD could have revised GCN to scale past 64 CUs, they could have delivered lower clocked, much more efficient Vega cards that would have competed well in both efficiency and absolute performance with contemporaneous GeForce cards. But they hit an architectural limit, and didn't have any other choice than pushing clocks high to compete at a higher tier. 

So, while RDNA does also improve game performance per compute performance significantly (~+50% IIRC, this has been tested and confirmed), it also removes that CU limit and allows for much more flexible GPU designs. Of course CDNA is also a modified GCN with the CU limit overcome, but it has other changes too that render it unusable as a desktop GPU - among other things, there isn't a display pipeline at all.


The red spirit said:


> Sort of. Most Quadros use dies with less cores, lower clock speed, but faster memory. Same deal with AMD.


To a degree, but as I showed above, this isn't a hard and fast rule, but varies across all SKUs. Some are fully enabled or close to it, others are severely cut. This just comes down to yields and the very different rationales behind product segmentation in pro and consumer markets. Different priorities make for different configurations.


The red spirit said:


> Or imagine that they really don't give them any better hardware


You seem to be missing the point: "better hardware" isn't the same as proper engineering. The components might in the end be entirely the same - buying one component in huge quantities is often cheaper than buying two even if one of the two starts out cheaper, after all. What I'm talking about is the implementation - PCB quality, trace routing, all the minutiae of PCB engineering. And, again, a lot of this also carries over to consumer GPUs. And, of course, many third party GPUs are _ridiculously_ over-engineered, and go far beyond workstation specs in some regards - mostly to facilitate overclocking and the like. None of that disproves the existence of stricter quality control and engineering standards for pro cards.


The red spirit said:


> But that sort of makes web usable nowadays at least. So many sites have YT embedded. Also if YT crashes, who says that any other video won't trigger BSOD? It might be decoder wide bug. That's just a really pathetic driver fail.


While I do agree that youtube is a basic part of internet infrastructure these days, that's hardly a reasonable line to draw for a workstation GPU from, what, 2015? Sure, there are absolutely uses where a YT video could be useful for a workstation. But back then? Not likely.


The red spirit said:


> Like I said, those products are inaccessible for many buyers, not mention likely sky high price. If you are prosumer, you are better off getting Radeon VII or some Vega card from eBay.


But if you're a prosumer, you're _extremely_ unlikely to be running FP64-dependent calculations. And if you are, you're still better off renting a compute node on AWS or Azure than buying your own hardware.


The red spirit said:


> It's basically the same with FX then, but even later Ryzen gens suffer from poor heat transfer through IHS, so I'm not sure if I can agree that it was just sensor offset issue. Not to mention the fact that some Zen chips have really unevenly flat IHS surface. It just hasn't been a problem before Zen.


... because designs before Zen weren't even close to as dense as Zen. And you're conflating two things here: 14/12nm Zen: hot because of thermal offsets. 7nm Zen: Hot because of thermal density. Different issues, similar "problems".


The red spirit said:


> All in all Intel fared a bit better there and despite advertised nm numbers, they actually had smaller nodes. That really looks more like AMD only problem. And then there's nVidia, who don't have problems due to just cooling a bare die. I guess we need some "free titty" movement for computer hardware too.


Have you seen all the people struggling to cool their higher end 10th, 11th and 12th gen Intel CPUs? 'Cause there are plenty of reports on this. They are just as difficult to cool as similarly dense AMD designs - but crucially, they are somewhat less dense, and thus struggle somewhat less. The difference is negligible though.


The red spirit said:


> Yep, and if you turn off boost, it gained a lot of power efficiency. But that's a fail. It was built for high clocks, that was the sole reason for the IPC sacrifices, older node and smaller cores, and it just couldn't achieve that. Even FX-based Opterons ended up being quite crap compared to Sandy Xeons, because of how ridiculously slow each core was. You could buy a 16-core Opteron, just to see it beaten by an 8-core Sandy Xeon. And then again, Sandy Core i parts had ridiculously low power usage too. The i7 only had around 60 or 70 watts of real usage. The i5 was even more economical. So FX truly was only better at offering a lot of weak cores for a low price. And I think that was a reasonable value proposition. You could get an FX 6300 for the price of a Pentium and an FX 8320 for the price of an i5. But parts like the FX 9590 were just plain stupid and insane. And that thing murdered boards and their VRMs. But I have to admit that I admire it a bit for being crazy and something unusual, and for showing what happens to the architecture at such high speeds. Looking back now, it doesn't even look that insane, when we have even hotter and even less efficient chips like Core i9s. I just hate those i9s with a passion for some reason. And I really don't like that Intel didn't have the guts to strongly enforce TDP limits on them, since those chips then become very efficient and not as stupid. And I hate nVidia even more, because they don't even have non-K (aka not clocked balls to the wall) versions of their cards. I would like to see LE cards with the same cores but lower TDP.


Well, that's what you get when a design made for low IPC and high clocks runs into a clock ceiling. Things become crap. Just look at the whole Pentium 4 debacle.


The red spirit said:


> And it was actually good technology, really made a difference, but sadly it basically died like EAX or A3D. nVidia did a lot to monopolize it and kill it.


"Did a lot to monopolize and kill it" is ... a very weird way of putting it. This was an Nvidia technology. It was cool and innovative when new, but it died due to Nvidia's penchant for proprietary solutions and the difficulty of running physics compute loads alongside graphics on GPUs of the time. Other alternatives have since replaced it.


The red spirit said:


> Counterquestion: are you aware of games using INT8 or FP16, then? It's just very technical info that nobody really mentions, because barely anyone understands it and it's non-marketable. I have never seen any game dev marketing their game as using FP32 either.


Not that I know of, but then, that's relatively new technology that's still barely getting off the ground. And as with FP64, I doubt we'll see much, as, again, the use of neural network inference in games is likely to be somewhat limited. It could have pretty cool applications for smart in-game AI, for example, but that's as much of a problem as an opportunity, as it takes control away from game designers, making balancing and scripting all the more difficult.


The red spirit said:


> Those could still be games, just not conventional games like GTA or CoD. I think various simulators could benefit from a strong FP64 presence in the market. Imagine Assetto Corsa with FP64, good crash modelling, maybe voxels and other tech stuff. The game would be simulation porn.


Even with modern GPUs I doubt we'd have the compute capabilities to do that in raw FP64 without also crippling rendering performance. And, most likely, some kind of machine learning solution could do that to a sufficient degree of accuracy while being much, much faster. Large-scale, repeatable simulations would still need higher accuracy, but there are very, very few game simulations that can actually make meaningful use of that. If you're not running the same simulation a bunch of times, a handful of minuscule errors or inaccuracies aren't going to break your game.
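The precision gap being argued about here is easy to demonstrate. A minimal sketch, assuming NumPy; the values are illustrative toy numbers, not taken from any game engine or simulation discussed above:

```python
import numpy as np

# FP32 stops resolving small increments once the accumulator grows large:
# at 1e8, the spacing between adjacent float32 values is already 8.0.
big32 = np.float32(1e8)
assert big32 + np.float32(1.0) == big32            # the +1 is lost entirely
assert np.float64(1e8) + 1.0 != np.float64(1e8)    # FP64 still resolves it

# Naive sequential accumulation (what a long-running simulation step does)
# drifts visibly in FP32 but stays tight in FP64. np.cumsum accumulates
# element by element, so its last entry is the naive running total.
vals = np.full(1_000_000, 0.1)
total32 = np.cumsum(vals.astype(np.float32))[-1]
total64 = np.cumsum(vals)[-1]

err32 = abs(float(total32) - 100_000.0)
err64 = abs(float(total64) - 100_000.0)
assert err32 > 1.0      # FP32 drifts after a million steps
assert err64 < 1e-3     # FP64 error stays negligible
```

This is exactly the repeatability argument: a persistent simulation accumulating millions of tiny steps diverges in FP32 long before FP64 shows any measurable error.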


The red spirit said:


> No, you probably missed something. Game developers could benefit from FP64.


Yes, they probably could, but essentially nobody actually does, which tells us the applications are few and far between.


The red spirit said:


> FP64 is good for them and could make their job easier, faster or just higher quality,


This is _pure_ speculation, and does not reflect reality. How would FP64 make game development easier? How would it make it faster? At the very least they'd need to adapt to a new way of programming, and performance would - at best - be half of FP32. As for "higher quality" - that only matters if you're running calculations that need an _extreme_ degree of accuracy. Which is generally not the case for games.


The red spirit said:


> however, the currently existing tools in game engines aren't great. They have their own problems that make the whole FP64 ordeal problematic. Due to that, and the rarity or poor performance of FP64 hardware, devs are basically forced to convert FP64 code into functional FP32 code; the process is complicated, but the tools to do it are of higher quality.


I'd love to see some sources of game developers complaining that they have to run FP32 instead of FP64 in their games. 'Cause to me, this sounds like a made-up problem.


The red spirit said:


> Lithuania is a small country with a population of 2.7 million people. Most companies aren't that big and most companies aren't tech companies. Our biggest companies only have like 2k people hired, and they are multinational companies too, so the number actually hired in Lithuania is most likely below 2k. Our biggest-cap companies are also listed as small-cap companies by Morningstar. Many medium-size companies here only have 10-100 people. That definitely limits some business and some kinds of investments.


Well, obviously. And they're subject to the same limitations as companies of the same size in other locations - or even more, as there's less likelihood of there being local representatives and the like. Being in the EU alleviates a lot of that though, with EU reps for Nvidia likely covering this area.


The red spirit said:


> Either way, I'm not a fan of buying restrictions on certain hardware. There's no need for them to exist. Nor is there any need to have, say, Quadros and FP64 Quadros. That's just more overhead for nV to manage almost identical products.


I'm not arguing that there should be restrictions on buying anything either, I'm just describing the realities of professional/enterprise distribution vs. retail. Nvidia does not operate directly in retail markets, so products sold directly by Nvidia - such as their pro GPUs - are thus harder to get there, and more expensive. Buying pro products directly is another matter entirely.


The red spirit said:


> Fine, fair enough about core internals, but AM3+ chips later got additional instruction sets, like FMA3 I think, and some other bits. Also, Carrizo may have been a small win in performance, but despite the 20% IPC gain, it also gained like 50% power efficiency, which is huge. Also, if we go from Zambezi to Carrizo, so many things were tweaked, changed or modified. It was a lot more than ever before. Importantly, Carrizo wasn't even the last FX derivative; there were some AM4 FX-based chips, which were the last revision, and they also gained some performance, efficiency, overclockability, DDR4 and instructions again. But those chips again were gimped due to the lack of L3 cache and the 28nm node. Had they been ported to 14nm and gotten L3 cache, they might not have sucked at all and we may have gotten Zen-like performance out of them. Compared to the Athlon 200GE (the closest actual competitor), the FX-derived AM4 chips don't seem to have much of a multicore performance difference. I know they fall flat in single-core benches, but then we get into that core feud: should we count a module as one core with HT, or as two cores?


But again, all you're describing here are tweaks. And, crucially, tweaks with tradeoffs - for example, Carrizo and Bristol Ridge didn't clock much past 4GHz, much lower than the FX designs they were derived from. So, they improved IPC and added some instructions, improved efficiency, but cut the clock ceiling. Look at the OC results in this Anandtech article: 4.8GHz and barely beating a Haswell i3. If that is the best a pro overclocker can do with them, that's not much. And they still topped out at 2 modules/4 threads. And no, you wouldn't have gotten Zen-like performance out of them - 1st generation Zen clocked about the same, and absolutely trounced the performance of these chips per core, while quadrupling core counts. Bristol Ridge on 14nm might have clocked a few hundred MHz higher than on 28nm, and would no doubt have consumed less power, but it wouldn't have come close to Zen.


The red spirit said:


> They were pretty close in Cinebench, but that A8 is a quite low-clocked part. So whether FX was bad or just on an ancient node is hard to say now. Not to mention that FX had a shit ton of revisions and redesigns. It was more like a rolling architecture.


Pretty close? I see 333 points vs. 258 - that's a 29% advantage, and at just 100MHz higher clocks. Of course the Athlon 200GE is a 35W chip vs the 65/45 of the A8, but the node difference makes up for that mostly. But, remember: this is _two_ Zen cores, with HT. Against four "cores" (two modules, four threads) on Bristol Ridge. This just drives home how much more performant Zen is per core compared to previous AMD designs.


----------



## catulitechup (May 16, 2022)

Courtesy of VideoCardz, I'll leave this:



> AMD claims to offer better performance per dollar than NVIDIA GPUs across its entire Radeon RX 6000 stack - VideoCardz.com
> 
> 
> AMD RX 6000 vs NVIDIA RTX 30: up to 80% better FPS/$ AMD’s new battleground is performance per dollar. AMD’s Chief Architect of Gaming Solutions and Marketing, Frank Azor, published a chart illustrating AMD greatest potential right now, and that’s better performance per dollar. The chart...
> ...













Miserable fucking scumbag company, trying to justify nonsensical prices. After this, the only option left is to lower prices, because nobody believes a word from this type of company.






Various prices in the chart are simply wrong. The RTX 3060 (non-Ti) sits at $390, not $430:



> ASUS NVIDIA GeForce RTX 3060 Phoenix V2 Single-Fan 12GB GDDR6 PCIe 4.0 Graphics Card - Micro Center
> 
> 
> Get it now! The ASUS Phoenix GeForce RTX 3060 derives its name from a high performance output in a robust package. A large single fan takes advantage of our Axial-tech fan design and a dual-ball bearing fan that lasts twice as long as sleeve-bearing alternatives.
> ...



The RTX 3060 Ti sits at $479, not $580:



> EVGA NVIDIA GeForce RTX 3060 Ti XC Gaming Dual-Fan 8GB GDDR6 PCIe 4.0 Graphics Card - Micro Center
> 
> 
> Get it now! The EVGA RTX 3060 Ti XC cards are designed for the no-frills gamer who needs a high-performance card that can also fit into tight spaces.
> ...



The RTX 3070 sits at $499, not $700 - that's $50 lower than the $550 AMD's slides assign to the RX 6750 XT, and in reality the RX 6750 XT starts at $580 at Micro Center:



> NVIDIA GeForce RTX 3070 Dual-Fan 8GB GDDR6 PCIe 4.0 Graphics Card - Micro Center
> 
> 
> Get it now! The GeForce RTX 3070 is powered by Ampere - NVIDIAs 2nd gen RTX architecture. Built with enhanced RT Cores and Tensor Cores, new streaming multiprocessors, and high-speed G6 memory, it gives you the power you need to rip through the most demanding games.
> ...





> ASUS AMD Radeon RX 6750 XT Dual Overclocked Dual Fan 12GB GDDR6 PCIe 4.0 Graphics Card - Micro Center
> 
> 
> Get it now! Delivering the latest AMD RDNA 2 architecture experience in its purest form, the ASUS Dual Radeon RX 6750 XT melds performance and simplicity like no other.
> ...



The RTX 3070 Ti sits at $699, not $750:



> ASUS NVIDIA GeForce RTX 3070 Ti TUF Gaming Overclocked Triple-Fan 8GB GDDR6X PCIe 4.0 Graphics Card - Micro Center
> 
> 
> Get it now! The TUF GAMING GeForce RTX 3070 Ti has been stripped down and built back up to provide more robust power and cooling. A new all-metal shroud houses three powerful axial-tech fans that utilize durable dual ball fan bearings.
> ...



And the RX 6800 XT starts at $799, not $850:



> ASRock AMD Radeon RX 6800 XT Phantom Gaming Overclocked Triple-Fan 16GB GDDR6 PCIe 4.0 Graphics Card - Micro Center
> 
> 
> Get it now! Delivers great performance which is more higher than reference cards based on the solid hardware design. Crafted for the best balance between the thermal efficiency and silence by all the details
> ...



So now, besides being liars in their information, you can add being lazy at looking up proper prices - another Frank Assnor classic, like justifying the RX 6500 XT at $200 for a crappy adapted laptop GPU with a 64-bit memory bus, reduced capabilities (video decoding/encoding) and reduced PCIe lanes.

Compared to earlier models like the RX 5500 XT - with its 128-bit memory bus, more RAM options like 8GB models, complete video decoding/encoding capabilities and a more proper PCIe link (x8 lanes) - the RX 6500 XT seems like utter garbage.
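The perf-per-dollar claim itself is trivial arithmetic, which is exactly why the input prices matter. A toy sketch using the RTX 3060 prices disputed above ($390 street vs. $430 in AMD's chart); the performance index is an arbitrary placeholder, not a benchmark result:

```python
def fps_per_dollar(perf_index: float, price_usd: float) -> float:
    """The performance-per-dollar metric used in vendor comparison charts."""
    return perf_index / price_usd

perf = 100.0          # arbitrary relative-performance index (placeholder)
slide_price = 430.0   # RTX 3060 price allegedly used in AMD's chart
street_price = 390.0  # Micro Center price quoted in this post

# Using the inflated price understates the competitor's value,
# and the distortion is exactly the ratio of the two prices (~10% here).
assert fps_per_dollar(perf, street_price) > fps_per_dollar(perf, slide_price)
ratio = fps_per_dollar(perf, street_price) / fps_per_dollar(perf, slide_price)
assert abs(ratio - 430.0 / 390.0) < 1e-9
```

So a ~$40 error in a $430 price shifts the chart by about 10% on its own, before any performance numbers even enter the picture.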


----------



## The red spirit (May 17, 2022)

Valantar said:


> @The red spirit see below:


Perhaps we should stop already. We just keep on bickering about random things that have no relevance to this thread. It's quite exhausting to write an essay per day. It's not like we both will agree about our points. 

I will reply, but only to limited amount of points.




Spoiler






Valantar said:


> That's an interesting video! I don't quite see where you get "not much love for pro cards" from though - he's crystal clear that if he's recommending mission critical hardware purchases for a business, he _only_ recommends Quadros, as they can be relied on to get direct professional-level support from Nvidia.


That's my point. If Quadros aren't worth it for common types of productivity - CAD, perhaps visual design and other things - then the niche for them becomes tiny. Like I also said, some countries don't even have businesses high-end enough or big enough to sustain such applications, so they opt for consumer cards anyway. And if you have no budget but something is mission critical, it's a much better idea to add time buffers, hell, AMD replacements or something like that, because Quadros are damn expensive. 

At this point, perhaps if you are Pixar or some medical equipment maker, you need and benefit from Quadros enough to justify the expense. But then there's only one Pixar. I'm not even talking about things like GV100 Quadros, which are even more offensively priced and whose sale is restricted. Like you said before, there's only a benefit in actually buying them if you have a supercomputer or something like that. No consumer is going to drop 10k dollars on just one card alone. Well, maybe if you are Linus, but he's rich.

I actually looked at Turing's whitepaper. It turns out that nVidia specifically cripples GeForce drivers so that they perform worse at OpenGL. Also, RTX card RT cores are slightly crippled in accumulation compared to Quadros. Like some person perfectly put it, "nVidia made a surprisingly good product at very low cost, that was very powerful, and then had to find ways to cripple it to make another product to make money". This quote was used to describe the original Quadro cards, but I think it applies even today. Also, back then it replaced things like SGI RISC workstation cards, 3Dlabs cards and a bunch of other very expensive equipment, which was very limited, frankly not so great, and often required very specific programming knowledge too.  

And while in the past ATi went hard with FireGL cards and outright didn't give Radeons OpenGL support, that changed over time too. OpenGL was once a workstation/scientific standard, but it came to consumer hardware and lost its exclusivity, and cards like FireGL lost a lot of their hard selling power; over time there were fewer and fewer reasons to get a workstation card. FP64 was an unfortunate casualty as well and in the end got excluded from workstation cards too. It really got out of hand. Somehow it slipped from both nV's and AMD's hands, hence we have nonsense like the GV100 Quadro. That hurt workstation cards' selling power hard too, because FP64 used to be a unique selling point; now you are only left with less. Hence my comment that Quadros are losing reasons to sell, and nV and AMD are running out of reasons why you should buy cards like that. Card makers have their snafus too.




Valantar said:


> You're presenting a pretty uneven playing field here though. "If I undervolt and underclock my older GPU it's more efficient and as fast as a newer, lower tier GPU clocked much higher" isn't particularly surprising.


No, my point was about lowering the TDP value, the power target. There's no undervolting at all; you lose some performance for big efficiency gains, that's all. Polaris perf/watt was great until you hit the diminishing-returns wall.




Valantar said:


> Have you seen all the people struggling to cool their higher end 10th, 11th and 12th gen Intel CPUs? 'Cause there are plenty of reports on this. They are just as difficult to cool as similarly dense AMD designs - but crucially, they are somewhat less dense, and thus struggle somewhat less. The difference is negligible though.


I own a 10th gen CPU myself, and no, they aren't that hard to cool as long as you stick to exactly Intel's spec TDP and Tau limits for base and turbo speed. The problem is that nobody cares about that: board makers crank those to the moon and also add stupid overclocking tools like MCE, and some reviewers have no idea what boost is or what it does, make "sensational" videos about turbo peaks, and spread misinformation about i9s being furnaces. If you have an i9-10900K and leave it at PL1 = 125 W, PL2 = 227 W, Tau = 27 s, like Intel recommends, then it's actually not nearly as bad as the media says (including TPU reviews), and in most tasks you also don't lose much performance. Only tasks that manage to saturate all the CPU's internal resources get throttled hard, but it's not like we are running Prime95 all the time, and if you do, then get a better cooler and a better board anyway. For gaming, most productivity and so on, Intel's spec limits are completely reasonable.
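To make the PL1/PL2/Tau interplay concrete, here's a toy model of how those limits behave over time. This is my own simplified sketch, not Intel's actual algorithm (the real firmware enforces an exponentially weighted moving average of package power); the numbers are the i9-10900K figures quoted above:

```python
# Toy model of Intel's PL1/PL2/Tau power limiting (simplified sketch;
# real hardware uses an EWMA of package power enforced in firmware).
PL1, PL2, TAU = 125.0, 227.0, 27.0  # watts, watts, seconds (i9-10900K spec)

def allowed_power(avg_power: float) -> float:
    # Burst up to PL2 while the running average is below PL1,
    # then clamp to the sustained PL1 limit.
    return PL2 if avg_power < PL1 else PL1

def simulate(seconds: float, load_power: float, dt: float = 1.0) -> list:
    """Return the per-step power draw of a constant heavy load."""
    avg, trace = 0.0, []
    alpha = dt / TAU  # EWMA smoothing factor with time constant Tau
    for _ in range(int(seconds / dt)):
        p = min(load_power, allowed_power(avg))
        avg += alpha * (p - avg)
        trace.append(p)
    return trace

trace = simulate(120, 250.0)
# The chip bursts at 227 W for roughly Tau seconds, then settles at 125 W.
```

The takeaway matches the point above: at spec, an all-core load only sees PL2 for a short burst window, after which the sustained draw (and heat) is just 125 W.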

And people also spread nonsense about throttling. They don't understand that base speed is the hard spec, and you should get that in any case. Boost is not guaranteed, never was and never will be. Sure, most of the time you will get some boost, but if, say, Dell decides to make a prebuilt with an i9-10900 and it only achieves 3 GHz, then there's no issue with that. It's only rated for 2.8 GHz base; anything more is extra and can't be relied on. It seems people didn't get this either. I also want to partly blame benchmarkers, because they don't test with turbo off, and if they keep it on, results vary a lot and don't meaningfully represent what you may get from your hardware under your thermal and power restrictions.

On the AMD side it's even more confusing, because AMD pushes PBO, and PBO definitely is overclocking. You won't get any warranty if you keep using it, yet AMD doesn't seem to enforce that much and takes the losses to avoid scandals, while their chips run way out of spec. That's mostly down to AMD's marketing and consumers' own ignorance about actual component power usage. The only difference is that Intel did the PR worse, but either way PBO, Turbo or whatever else is a complete shitstorm from a legal point of view. And it's not like either tech is new. Intel has had turbo with adjustable power limits since at least the first gen Core i series. AMD only had a generic turbo that was either on or off, but the difference was that it was used to avoid poor utilization of an already-set TDP, instead of going nuts and clocking chips to the moon while flushing power-usage concerns down the toilet along with the warranty.

All in all, this stuff was really stupid and likely could have been avoided if Intel and AMD truly held an iron grip over board makers, but they didn't, and thus we have this unintuitive dumpster fire of failed communication between the different sides. And no, it's not just Intel's problem with certain gens; AMD is affected too, they just suck it up for whatever reason. If AMD wanted to, they could launch lawsuits against board makers for making out-of-spec hardware, reject RMAs for using PBO, and clamp down on XMP. Perhaps they don't want to end up like Intel, but they could, and they would be damn right to do so.




Valantar said:


> "Did a lot to monopolize and kill it" is ... a very weird way of putting it. This was an Nvidia technology. It was cool and innovative when new, but it died due to Nvidia's penchant for proprietary solutions and the difficulty of running physics compute loads alongside graphics on GPUs of the time. Other alternatives have since replaced it.


They killed Ageia, as well as their card, they paid game devs not to use competing tech, and when that had hurt the other competitors enough, they killed PhysX too. So we didn't get much in return, but nV eliminated competitors. I don't believe nVidia actually gave two shits about PhysX. It was basically Hairworks before Hairworks: good tech made for flexing and extinguishing competitors, and axed once that was done. All it actually did was expand Huang's leather jacket collection; we as gamers, content creators or devs got nothing out of it.




Valantar said:


> Not that I know of, but then, that's relatively new technology that's still barely getting off the ground.


lmao what? FP16 is common now. It's been on all Radeon cards since like 2011 or 2012, basically ever since the "Surface Format Optimization" option was added to Catalyst. All it does is demote FP32 calculations to FP16, and it has been on by default for over a decade. And yes, AMD clearly says in their own control panel that it can degrade graphics quality.
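For anyone curious why demoting FP32 to FP16 can visibly degrade quality, a quick NumPy illustration (my own generic example, nothing to do with AMD's actual driver implementation) shows how coarse half precision is:

```python
import numpy as np

# float16 carries an 11-bit significand, so integers above 2**11 = 2048
# are no longer exactly representable; adjacent values start to merge.
print(np.float16(2048))  # 2048.0
print(np.float16(2049))  # rounds back down to 2048.0
print(np.float16(2050))  # 2050.0 -- the spacing here is already 2

# Gap between adjacent representable values (ulp) around 1.0:
print(np.spacing(np.float16(1.0)))  # ~0.000977
print(np.spacing(np.float32(1.0)))  # ~1.19e-07
```

Roughly three decimal digits of precision versus seven, which is why the demotion is usually invisible for surface formats but can band or shimmer in the wrong scene.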




Valantar said:


> Even with modern GPUs I doubt we'd have the compute capabilities to do that in raw FP64 without also crippling rendering performance. And, most likely, some kind of machine learning solution could do that to a sufficient degree of accuracy while being much, much faster. Large-scale, repeatable simulations would still need higher accuracy, but there are very, very few game simulations that can actually make meaningful use of that. If you're not running the same simulation a bunch of times, a handfull of minuscule errors or inaccuracies aren't going to break your game.


I meant if we had 1:2-ratio FP64 GPUs. Now it's a lost cause.
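To illustrate why simulation code would want FP64 in the first place: FP32 runs out of integer resolution at 2**24, which is the classic cause of physics jitter far from a game world's origin (a generic NumPy demo, not tied to any particular engine):

```python
import numpy as np

# FP32 has a 24-bit significand, so past 2**24 (~16.7 million) it cannot
# distinguish adjacent integers; FP64's 53 bits still can.
big = 2.0 ** 24

pos32 = np.float32(big) + np.float32(1.0)  # the +1 step is rounded away
pos64 = np.float64(big) + np.float64(1.0)  # FP64 keeps it

print(pos32 == np.float32(big))  # True: the object never moved
print(pos64 == big + 1.0)        # True: FP64 resolves the step
```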



Valantar said:


> But again, all you're describing here are tweaks. And, crucially, tweaks with tradeoffs - for example, Carrizo and Bristol Ridge didn't clock much past 4GHz, much lower than the FX designs they were derived from. So, they improved IPC and added some instructions, improved efficiency, but cut the clock ceiling. Look at the OC results in this Anandtech article: 4.8GHz and barely beating a Haswell i3. If that is the best a pro overclocker can do with them, that's not much.


Bruh, the overclocker only used the stock cooler. 4.8 GHz on a stock cooler is actually great. There also weren't any higher-wattage Carrizo chips, since they were made for laptops only, and only 65-watt harvested laptop chips made it to desktop, hence the severe limitations. Even the Athlon X4 845 was a laptop part; its die was designed as a 15-watt part and got seriously stretched to work at 65 watts. I think laptop dies aren't made the same and can't withstand such wattages. Not to mention the Athlon also had a locked multiplier, which is a big limitation for overclocking, as boards nowadays don't have speed locks for separate buses anymore. We haven't seen what those chips can actually achieve. I would say 5.5 GHz would be achievable with just a modest 120 mm AIO if anyone had bothered to buy them for overclocking and AMD had unlocked the multipliers.

The question is what would have happened if Carrizo or Bristol Ridge had been on 14 nm: would we have gotten faster chips or just more efficient ones?




Valantar said:


> And they still topped out at 2 modules/4 threads.


Imagine making a flagship-level chip for the already-discontinued FM2+ platform, and imagine sabotaging your own company's other product launch for no good reason.





Valantar said:


> And no, you wouldn't have gotten Zen-like performance out of them - 1st generation Zen clocked about the same, and absolutely trounced the performance of these chips per core, while quadrupling core counts. Bristol Ridge on 14nm might have clocked a few hundred MHz higher than on 28nm, and would no doubt have consumed less power, but it wouldn't have come close to Zen.


I highly doubt your claims. Even by your logic, it would have been possible to fit an 8-module Carrizo or Bristol Ridge chip within the die space of a Zen 1 chip. That's 16 cores before Zen scaled to that. Mind you, not exactly full cores, but it would have had more ALUs. And you seem to downplay the impact of the lithography shrink, even though you showed me an article where AMD themselves mentioned that most of the gains come from exactly that: shrinking. Either way, it won't happen, but it's pretty cool to think about what might have been.




Valantar said:


> Pretty close? I see 333 points vs. 258 - that's a 29% advantage, and at just 100MHz higher clocks. Of course the Athlon 200GE is a 35W chip vs the 65/45 of the A8, but the node difference makes up for that mostly. But, remember: this is _two_ Zen cores, with HT. Against four "cores" (two modules, four threads) on Bristol Ridge. This just drives home how much more performant Zen is per core compared to previous AMD designs.


No it doesn't; there's only a ~30% difference with an unfair node advantage. And essentially, it's fair to compare two modules to two full Zen cores, since only a whole module is a completely independent execution unit; an FX-style "core" isn't. There's no point in arguing about FX cores when AMD lost a lawsuit and had to pay FX chip buyers compensation over its core count claims. And to be fair, the most accurate way to describe those "cores" would be 4 ALUs and 2 FPUs with shared logic, forming two completely independent execution units (in the case of the Athlon X4 845). Hence why I compare FX-derivative quad cores to the Zen Athlon.


----------



## Valantar (May 17, 2022)

catulitechup said:


> miserable fucking scumbag company to try justify non sense prices, well after this only left put lower prices because nobody believe any word of this type of company


Lol, this made me laugh out loud - not that I explicitly disagree, more the degree of vitriol for what is in the grand scheme of things (or really the everyday workings of large corporations) such a small thing. Oh, and while I have no idea whether their price comparisons are correct, pointing out that prices have changed a week later in the middle of the largest GPU price drop in recent history is kind of redundant. Unless you've got prices from May 10th to compare to, any later comparison is essentially invalid after all. That doesn't mean what they're saying here is right, just that your complaint boils down to "prices fluctuate", which is just how online retail operates. GPU pricing is still stupidly high, and even if they're creeping downwards that hasn't changed meaningfully, and while I agree that advertising "better value" in a market like that is pretty dumb, it's also just ... meh. I'd take dumb marketing over misleading performance claims, market manipulation or the various anticompetitive behaviours we've seen in the PC business any day of the week.


----------



## catulitechup (Jun 7, 2022)

At least the RX 6400 can be had at a somewhat better price at Micro Center (open box):

$127.96 US



> PowerColor AMD Radeon RX 6400 ITX Single Fan 4GB GDDR6 PCIe 4.0 Graphics Card - Micro Center
> 
> 
> Get it now! The cooling fan utilizes two-ball bearing technology, increasing the longevity of the fans by up to 4 times. Mute Fan Technology intelligently turns off the fan below 60C, providing silent gaming during medium and low-load while reducing power consumption.
> ...


----------

