I'm not nitpicking, I'm pointing out that your core arguments - such as this somehow being similar in concept to an MXM adapter - make no sense, and seem to stem from a conflation of different understandings or uses of the same term.
Btw, does this mean you've realized how misguided that argument was, finally?
No, it's you who doesn't understand.
Again: it's not difficult. But it takes more time than not doing so, and as it would be a new connector design, it would require new QC and likely trigger time-consuming further steps to ensure that nothing has been messed up by the changes to the connector (clearances, insertion force, friction against the connector, etc.). Thus, just leaving the copy-pasted x16 connector in there is the lowest effort, cheapest path forward.
Lowest effort? Maybe, but unlikely the cheapest. There's nothing else to do but shrink the connector, remove the traces leading to it, and remove the caps if those traces have them. That's all. It probably takes like 5 minutes in professional PCB software to do this.
That's quite a different stance than the comment that was originally responded to, triggering that whole mess. But at least we can agree on that much. I still don't think it's a major issue, but it's a bit of a let-down still. To me it just highlights that AMD designed this chip first and foremost to be paired with a 6000-series APU - but at least most of the same functionality can be had from any other APU or non-F Intel CPU.
When you consider the lack of decoding/encoding, the x4 PCIe link, no ReLive, no overclocking, and the downclocking, this whole deal just stinks. It also alienates some previously interested audiences, like people with old systems who just wanna watch videos without frame skipping.
It's entirely possible they did, but that would still involve essentially zero R&D, as all of that has already been done. All that would be needed would be some basic recalibration of various steps of the lithography process. Everything else is already done. And, of course, it's entirely possible that Nvidia had a stockpile of GP107 dice sitting in a warehouse somewhere. I'm not saying they did, but it wouldn't be all that surprising - there are always surplus stocks. Plus, GP107 is used for the MX350 GPU as well, which points towards some continuation of production, at least intermittently - that launched in 2020, after all.
Hypothetically, it would be possible to make a new card on an older lithography node that uses DDR4 or DDR5 memory on a wide bus (meaning more lower-capacity chips instead of a few bigger, faster ones). And to reduce R&D expenses, it could be a relaunched GTX 1060 or GTX 1070 GPU with reduced clock speeds, so that it is more efficient. If you look at how much cheaper less-than-cutting-edge nodes are, you'd realize that a bigger die on an older node can cost less than a smaller one on a new node. That would be an ideal cheap card to relaunch as a GPU-shortage special.
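Just to put rough numbers on the wide-bus idea, here's a minimal back-of-the-envelope sketch. The 192-bit GDDR5 at 8 GT/s baseline and the DDR4-3200 figures are only assumed reference points for illustration, not specs of any real or proposed card:

```python
# Rough memory-bandwidth arithmetic behind the "wide bus of slower chips" idea.
# All numbers are illustrative assumptions, not specs of any actual card.

def bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bytes) * (per-pin data rate in GT/s)."""
    return (bus_width_bits / 8) * data_rate_gtps

# A GTX 1060-class baseline: 192-bit GDDR5 at 8 GT/s per pin (~192 GB/s).
gddr5 = bandwidth_gbs(192, 8.0)

# Hypothetical DDR4-3200 configs (3.2 GT/s per pin) at increasing bus widths.
for width in (128, 256, 384, 512):
    bw = bandwidth_gbs(width, 3.2)
    print(f"DDR4-3200 @ {width}-bit: {bw:.0f} GB/s "
          f"({bw / gddr5:.0%} of the GDDR5 baseline)")
```

Under those assumptions you'd need something like a 512-bit DDR4 bus (i.e. a lot of cheap, low-capacity chips in parallel) to get near the bandwidth of a narrow GDDR5 setup, which is the trade-off the wide-bus idea relies on.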
You're welcome to disagree (and I'd love to hear your arguments for doing so if that's the case!), but I choose to trust the two reviews from trustworthy sources.
I won't; it's just unusual when so many sources have rather different data. There might still be some driver-related issues leading to inconsistent performance between different systems.
Why would I have a Core 2 Quad when I can pick up a Sandy or Ivy i5 from any second-hand store or electronics recycling centre for literally a couple of pounds? Building my i7 4765T system cost me about £100. The whole system! And you don't need an i7 for Youtube, I just wanted the top of the line 35 W chip because why not.
The point is: there are countless options for building a low-spec PC for nearly free to watch Youtube. There is absolutely zero need to stick to a 10+ year-old / extremely weak CPU.
Real-life example: my school had shitty Pentium D machines that were quite woeful. The IT guy put in some GT 610s so they could run videos for the time being. It worked, all for like 40 EUR per machine instead of 100.
Hypothetical example: you get a free computer with a BD drive. You wanna play movies, but the GPU is too old to have decoding and the CPU isn't cutting it. So you get the lowest-end card that decodes.
Bitching? What the...?
I said that the 6400 can decode all the video formats that the 710 can, and H.265 on top of that. Does this look like bitching to you?
Yet another completely misunderstood sentence. I said that you got a GT 710 for playing back videos, but when it arrived it turned out to be an older variant (Fermi?) and AFAIK it didn't decode or something. You bitched about that and later got a GT 1030, which was good enough.
Doesn't the RX 6400 look like exactly the same trap?
Correct. The 4330 was released in 2013, the X4 845 in 2016. I don't give a damn about what architecture it is. The only thing that matters is that it's newer and slower and was selling within a similar price range.
And I don't give a damn about the Athlon either, but I use it as an EXAMPLE of something you may find in an older machine (performance-wise only). And it does play 1080p fine, but you want more, so you get a decoding-capable card.
BTW, that old i3 was not comparable to the Athlon. The Athlon was going for 80-90 EUR, while that i3 went for 130 EUR, and that's without normalizing for inflation. That was a very significant price increase for not really a lot more. And since I bought it late and for an already-existing system that wasn't intended for anything in particular, yeah, it ended up clearly unplanned and not the most economically efficient. Because I bought parts late I got the Athlon for ~40 EUR, which was not a great deal, but an okay one. And if you want to compare performance, don't use UserBenchmark. It's a meme page of utter incompetence, huge bias and shilling... Using Cinebench alone would be a better reference for CPU performance than UserBenchmark.
I was able to find data about their Cinebench scores. In the R15 multicore test the Athlon X4 845 at stock got 320 points; the i3 3220 got 294 points. You would think the Athlon is better, but nah, it's not. It shares two FPUs between 4 integer cores, so as long as all cores are utilized it does have better multicore performance, but if not, it's quite weak. It also has no L3 cache at all. The i3, on the other hand, has two faster FPU/integer cores and relies on HT to improve multicore performance. So it's faster if you don't need 4 threads or can't utilize them all, but once you do, it overall performs worse than the Athlon. Also, HT can sometimes reduce performance if the scheduling in the software is poor.

The lack of L3 cache means the Athlon can stutter badly in games: there's a higher chance of hitting the downsides of a cache miss, and if your code doesn't fit in L2, you're gonna have a shitty time. The i3 also tends to stutter, but that's because software is now made to use more than 2 cores, and cramming that work onto two cores leads to lag spikes if the code is complicated enough. HT isn't efficient either; it's code-dependent and only helps if the cores aren't already saturated with work (i.e. if their execution resources and pipeline width aren't fully utilized). So their performance is very hard to predict and can be very inconsistent. Still, the i3 is better with older code, code that is difficult to multithread, or FPU-heavy code, while the Athlon is better at more recent code with mostly integer operations. The Athlon may also come out a lot better thanks to its more modern instruction set; the i3 may not even be able to launch some software because it only has the older instructions. The good old K10 architecture literally became obsolete not due to a lack of performance, but mostly because of how quickly software started to require newer versions of SSE or FMA.

In terms of efficiency, it's a tie, because the Athlon X4 845 isn't one of the old Bulldozer desktop parts; it's a Carrizo (Excavator) part taken from laptops with jacked-up clock speeds. It retained quite a lot of the efficiency advantage of its origins, while the i3 was just a decent part from the get-go.

The Athlon can technically be overclocked, but its multiplier is locked, so you need to raise the reference clock without changing the other derived speeds (PCIe bus speed, RAM speed, HyperTransport speed, RAM controller speed, legacy PCI speed and anything else you might have). In practice it's too damn hard to do, and FM2+ boards don't have good locks for clock separation like in the socket 754 days. The i3 isn't tweakable at all. Still, there are people who managed to reach 4.9 GHz with the Athlon, so YMMV. The Athlon also has a lot of room for undervolting; you can reduce the voltage by 0.2-0.3 V on them.
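Purely as an illustration of why reference-clock overclocking gets messy with a locked multiplier, here's a tiny sketch. The 100 MHz reference, x35 multiplier, and DDR3-1866 divider are made-up example values, not actual FM2+ settings:

```python
# Tiny sketch of why raising the reference clock is messy when the CPU
# multiplier is locked: every clock derived from it scales together unless
# the board can lock them separately. All numbers are assumed examples.

REF_MHZ = 100.0  # assumed stock reference clock

# ratios of derived clocks to the reference clock (illustrative)
ratios = {
    "CPU core (x35)": 35.0,
    "Memory clock (DDR3-1866)": 933.0 / REF_MHZ,
    "PCIe": 1.0,
}

def clocks_at(ref_mhz: float) -> dict[str, float]:
    """Every derived clock at a given reference clock."""
    return {name: r * ref_mhz for name, r in ratios.items()}

for ref in (100.0, 110.0, 120.0):
    line = ", ".join(f"{name}: {mhz:.0f} MHz" for name, mhz in clocks_at(ref).items())
    print(f"ref {ref:.0f} MHz -> {line}")
```

The point it shows: a 10-20% bump to the reference clock drags the PCIe and memory clocks up by the same percentage, which is exactly why it's hard to do on boards that can't decouple those clocks.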