They weren't bad at all...
...on launch day.
This is how tech advancement works.
They're part of the reason the Ampere stack was (and still is) such a mess, and why it got revised almost entirely with higher VRAM capacities later on. Even on launch day, the 10GB 3080 was already short on VRAM in some titles. It's a complete departure from what we're used to getting from an x80-tier product.
So this is how Nvidia's lack of TSMC works, you mean. Because now we're back on TSMC and suddenly we *can* get decent VRAM capacities from the get-go (all on GDDR6X this time, btw, and all but the largest capacities under 300W), alongside sizeable core/transistor-count increases and an overall performance boost.
Stop fooling yourself. This has been clear since launch, and it was then borne out by Nvidia's own release cadence plus what came before and after Ampere. The consensus was, is, and will remain that early Ampere is the all-time low of the last decade in core power relative to VRAM; numbers don't lie. It's also the only generation built on Samsung, mind, and only the consumer line at that; the real stuff got TSMC anyway.
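Just as a rough back-of-the-envelope (launch specs from memory, and using CUDA core count as a crude stand-in for "core power", which ignores clocks and the Ampere FP32-per-SM doubling):

```python
# Launch CUDA cores per GB of VRAM for each x80-class card.
# Core count is a crude proxy for compute, but it shows the trend.
X80_CARDS = {
    "GTX 980 (2014)":  (2048, 4),    # (CUDA cores, VRAM in GB)
    "GTX 1080 (2016)": (2560, 8),
    "RTX 2080 (2018)": (2944, 8),
    "RTX 3080 (2020)": (8704, 10),   # early Ampere
    "RTX 4080 (2022)": (9728, 16),
}

for card, (cores, vram_gb) in X80_CARDS.items():
    print(f"{card}: {cores / vram_gb:.0f} cores per GB")
```

The 3080 lands at roughly 870 cores per GB, more than double any other x80 card in that span, which is exactly the imbalance I'm talking about.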
The only reason Ampere is competitive, in the end, is that it could do DLSS/RT before RDNA2 had a proper FSR. Everything outside its feature set is objectively worse on Ampere. It's less efficient even though it may (should?) have an architectural advantage.