The Supreme X is quieter. So is the Strix OC.
The Matrix is 8.7% faster than the FE at 4k.
The Strix and Supreme are 3% faster.
Is that 5.7% uplift worth over $1000?
Then again, the MSI Gaming X is 4% faster than the FE (and as quiet, and quieter than the Matrix). So over that, the Matrix has only a 4.7% perf advantage.
I don't normally get into stats but for this cash grab, it felt important.
I always get into stats, because they're important for context and for realizing that most everything [especially what nVIDIA does] is a cash grab, if not a PR stunt, so I'm glad when somebody else does it.
(It's like explaining that a 2080 Ti overclocks ~20% beyond what you see on the charts; when people realize where it actually sits [remember that it supports DLSS], they are even less pleased with current products.)
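For what it's worth, here's a trivial sketch of how I'd sanity-check the deltas quoted above (Python; the FE-relative figures are just the ones from that post, used as assumed inputs):

```python
# Quick sanity check of the quoted 4k numbers. The FE is the 1.00 baseline and the
# other figures are the "% faster than the FE" values from the post above.
fe_relative = {
    "Matrix": 1.087,
    "Strix / Supreme X": 1.03,
    "MSI Gaming X": 1.04,
}

matrix = fe_relative["Matrix"]
gaming_x = fe_relative["MSI Gaming X"]

# Naive subtraction of the FE-relative figures (the "4.7%" in the quote)...
print(f"difference of FE-relative figures: {(matrix - gaming_x) * 100:.1f} points")

# ...versus the actual ratio of the two cards to each other.
print(f"Matrix over Gaming X: {(matrix / gaming_x - 1) * 100:.1f} %")
```

At these small magnitudes the subtraction shorthand and the true ratio land within a few tenths of a percent of each other (~4.7 vs ~4.5), so the quoted figure holds up either way.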
The important thing to realize about RTX 40 is the voltage limit, which is a conscious choice by nVIDIA. Even dudes like Der8auer have shown this in videos; I believe in one he killed a card by going over the supremely locked-down fail-safe of ~1.07/1.08v (I've seen both figures mentioned and don't know which is correct, though it's pretty much semantics). That's why which card you buy really doesn't matter. Sure, more power/better circuitry/cooling will help hold a higher clock, but the potential itself is limited by things outside of AIB/general-consumer control.
I've theorized this is because nVIDIA altered the 5nm process with TSMC for these products (and nVIDIA then claimed it to be '4nm', which it isn't... it's just a custom 5nm variation).
For instance, take something like Apple's ~3.24GHz on 5nm. nVIDIA may have (and likely did) go to TSMC and say something along the lines of "we need better density/power consumption but only the potential of ~3GHz". Or perhaps their initial process testing showed that performance/leakage/power consumption above about 1.07/1.08v didn't scale well toward the industry standard of 1.2v (read: that limit is a typical mobile [dense] design driving voltage; the base processes are built around mobile chips that used to be tuned for around ~1.05v on a custom process, and TSMC now keeps that mobile aim but will allow 1.2v so more generalized products can use the process or to guarantee better yields), which isn't exactly a huge surprise, and nVIDIA wanted to trim the fat. TSMC then likely 'adjusted the knobs' of PPA (power/performance/area) so that nVIDIA had a very well-tuned version of the process that was more or less tamper-proof. That doesn't make it 'better', it doesn't make it '4nm'; it just means they optimized the trade-offs (along the curve) of what was possible on 5nm around what they wanted to make/allow. One can argue it was successful: 1.15/1.2v has been shown on AMD cards to really only add another 100-150MHz, and only under what I presume is a lighter load (hence the requirement of under-volting), lest power consumption go bananas and often exceed its limit (without a modified BIOS).
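To put rough numbers on why they'd clamp things there, here's a back-of-the-envelope sketch using the textbook dynamic-power relation (P ≈ C·V²·f). The two operating points are ballpark assumptions in line with the figures above, not measurements from any specific card:

```python
# Back-of-the-envelope scaling: dynamic power goes roughly as C * V^2 * f, and the
# capacitance term cancels when we only compare ratios. Both operating points below
# are assumed ballpark values, not measured ones.
def relative_power(v, f, v0, f0):
    """Power at (v, f) relative to a (v0, f0) baseline, assuming P ~ V^2 * f."""
    return (v / v0) ** 2 * (f / f0)

v0, f0 = 1.075, 2800.0   # ~stock point near the ~1.07/1.08v fail-safe (V, MHz)
v1, f1 = 1.20, 2950.0    # hypothetical 1.2v point buying ~150MHz more

clock_gain = (f1 / f0 - 1) * 100
power_gain = (relative_power(v1, f1, v0, f0) - 1) * 100
print(f"~{clock_gain:.1f}% more clock for ~{power_gain:.0f}% more (dynamic) power")
```

That works out to roughly 5% more clock for ~30% more dynamic power, and that's before leakage, which climbs even faster with voltage and is presumably exactly the fat nVIDIA wanted trimmed from the default curve.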
There is a fundamental 'optimization' argument to be made here for what they did (to save themselves money), but one could also argue it's an extension of Green Light: the nVIDIA philosophy that cards will all perform pretty much exactly the same (which screws AIBs on differentiation big-time; it's even scummier that nVIDIA sells cards directly). It was a game-changing philosophy that started over a decade ago, and many don't realize the gigantic impact it had; it probably also killed eVGA. It has made nVIDIA a shitload of money because it allows them not only to keep their products segmented wrt past/current and even future generations, but also to plan obsolescence perfectly, right down to when a user will need an upgrade.

Most people only think of VRAM as a tactic they use, or maybe limiting software voltage adjustment. In reality there is a whole gamut of what I consider extremely dirty tactics (like late adoption of display connector specs, color/bit-rate limitations, etc.) that nVIDIA uses, and they have pretty much killed this (overclocking/enthusiast) hobby and even changed its user base to be more general, if not naive. You can even look back over this entire decade and see the top-end card landing just below the next threshold... like a stock 4090 at sub-120/144fps at 4k. That small (and surmountable) hurdle, along with the move to (in this case) 32GB, is likely the only thing that will force an upgrade from an average user for a very long time, yet it will happen and it will be successful. Oh wait, maybe they'll just want a DisplayPort 2.0/2.1 connector. This is why Huang is an evil genius: for many people it not only hampers potential performance and guarantees segmentation (which savvy consumers used to circumvent by overclocking their way one tier higher) but forces an upgrade cycle earlier than it otherwise might come.
You may have noticed ATi/AMD kept away from this practice for a while, perhaps because it was a differentiation factor that garnered them praise from people like me, but over the last couple of generations they have started to follow suit by locking down fuses/BIOS/clock speeds/configurations etc., likely in an attempt to up-sell people. Not just in GPUs, but also (via clock-speed limits) in CPUs. You can see this in something like the 7700 XT, which could probably be a rad $399 card if pushed to its actual physical limit (and given higher clock/power-limit potential) for old-school overclockers like some of us; instead, the cut-off sits EXACTLY below where it would need to perform for many people to be satisfied. It is, in fact, a somewhat artificial tactic to up-sell people to the 7800 XT. They still aren't as bad as nVIDIA (probably because their engineering budget is smaller and/or they need higher voltage/leakage for better yields), but that doesn't mean the intention isn't there, or that they aren't following nVIDIA in the pursuit of the almighty dollar given diminishing returns in innovation over time and most people's lack of need for something better.
TLDR: nVIDIA is rich, but everything a lot of us loved about the hobby is forever broken because of greed.
AFAIK, it all started here, and has only been 'perfected' (and adapted for the worse wrt AIBs/consumers) over time:
From brightsideofnews.com: "In the world of graphics cards, there is always something quiet brewing underneath the surface. Over the past few months we've been clued into a program that Nvidia has been running since the Fermi days. This program is called Green Light and as you can imagine, it has to do with Nvidia giving a..."