Well, let's not rule out AMD and Nvidia playing a game together, agreeing not to push it too far so both can enjoy easy profits.
That is probably closer to the mark I think. AMD/ATI and Nvidia have colluded in the past, and their distinct lack of interest in initiating any kind of price war (aside from the occasional limited run salvage part) tends to indicate that they are quite happy with their revenue streams at the expense of true competition.
This is the hard truth. Nvidia only provides the bare minimum of performance, most noticeably starting with Kepler, where they began holding back the big chips until later. They used to launch with those.
That is basic strategy for productization and ROI on silicon; it's just that if you are the dominant player in the market, you are under less pressure on product cadence (see Intel).
If they had been more aggressively challenged by AMD these past few years, I think either A) cards now would be twice as powerful, or
Very unlikely. Both vendors are bound by the fabrication process (and its die size limits) and by adherence to a common specification (ATX). Without the latter it is impossible to achieve large-scale commoditization for add-in hardware components.
B) we'd at least have longer to enjoy our top-tier cards being top tier, because if they were launching with the big chips we'd have more of a lull between each architecture release.
If you look back to when we had multiple graphics vendors (ATI, 3dfx, S3, Matrox), and even discounting the low end (Trident, SiS, 3DLabs, VideoLogic/Imagination, Tseng Labs, etc.), that was never really the case either. Admittedly the strides were greater and the product lives shorter, because the evolution of the 3D graphics pipeline and the pace of new memory introductions were faster - something we are revisiting currently. I'd also note that the only reasons last-generation (or earlier) cards aren't deemed competitive are shoddy game coding, fixation on 4K screen resolution, API advancements, and consumers' addiction to the next best thing.
Add-in card sales have fallen consistently over the years. Launching your biggest and best at the beginning of a process node just means you generally have no room for improvement for the 2-4 years the node lasts. That becomes a tough economic sell for companies who tend to rely upon serial upgraders.
Prices may also be better.
Maybe. That used to be the case... but lower prices mean lower margins, and that means a war of attrition (and the deepest pockets). There's a reason there used to be around 50 graphics IHVs and now there are just a little over a handful (including the embedded market).
Anyhow, regarding the actual review topic: an interesting comparison between AIO implementations, as @Fluffmeister noted. Academic interest only for me though. If I wanted a watercooled card I'd just add one to my own loop and avoid all the extra plumbing.