Is there any performance drawback to using 8 chips instead of 16 for 2GB of onboard memory?
I agree to a point. I'm sure you remember the days of 3Dfx, when adding a second GPU meant you doubled your frame rate: 100% scaling, with no real need to keep drivers updated to get it. Sadly, that free cake was eaten a long time ago, as you, I, and everyone else who remembers those days know.
Where I don't agree, though: efficiency at a given performance level has gone through the roof; it's just that the top-end parts make it appear otherwise. A Juniper chip today has more transistors than anything 3Dfx ever made, yet performance has soared while power use has stayed in check. Get a 400W Corsair and you have enough juice for a 5770/GTS 450, an i5 760/Phenom II X4, 4GB of DDR3, and gigabytes of drive space. Even in the late nineties we weren't using much more than (comparatively inefficient) 300W units. Sure, today's parts produce more heat, but we now have roughly two orders of magnitude more transistors running on much the same power budget.

I remember running a 300W unit with an 800MHz Duron, a 3Dfx 5500, 1GB of 133MHz SDRAM, and two 120GB 7200.7 drives in RAID 0, and I thought that was quick. I built that in 2000, when I graduated high school, and didn't build another system until I went to college at Kansas State in 2007. Moving from that to a C2D, a 10K Raptor, and a 1950 Pro was a shock.
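As a rough sanity check on that 400W figure, here's a back-of-envelope sum. All the wattages are approximate TDP or typical-draw estimates I'm assuming, not measurements of a real build:

# Hypothetical power budget for the build described above.
# Wattages are ballpark TDP/typical-draw figures, not measurements.
parts = {
    "Radeon HD 5770 (GPU)": 108,     # AMD's TDP rating
    "Core i5-760 (CPU)": 95,         # Intel's TDP rating
    "4GB DDR3 (2 DIMMs)": 6,         # ~3 W per stick, rough estimate
    "7200 RPM hard drive": 10,       # typical active draw
    "Motherboard, fans, misc.": 40,  # rough allowance
}

total = sum(parts.values())
psu_watts = 400
print(f"Estimated peak draw: {total} W on a {psu_watts} W unit "
      f"({total / psu_watts:.0%} load)")
# -> Estimated peak draw: 259 W on a 400 W unit (65% load)

Even with generous estimates, that system sits well inside a 400W envelope with headroom to spare.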
If we hold power usage and performance steady over the last, say 12 years, the increase in efficiency is quite staggering. I think the real proof of this is that the same games I played when I was a kid (Doom, Quake, Unreal, C&C, etc) are starting to show up again but now played on the internet using your browser. Imo, shocking is an understatement.
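To put a rough number on that efficiency gain, here's a sketch comparing transistors per watt between a VSA-100 (Voodoo5-era, 2000) and Juniper (HD 5770, 2009). Transistor count is only a crude proxy for capability, the counts are published figures, and the VSA-100 power number is my own rough assumption:

# Crude efficiency comparison: transistors per watt.
# VSA-100 per-chip power is an assumption; treat the ratio as ballpark.
vsa100_transistors = 14e6      # 3dfx VSA-100 (2000), published figure
juniper_transistors = 1.04e9   # AMD Juniper (HD 5770, 2009), published figure

vsa100_watts = 30              # rough per-chip estimate (assumption)
juniper_watts = 108            # AMD's TDP for the HD 5770

ratio = (juniper_transistors / juniper_watts) / (vsa100_transistors / vsa100_watts)
print(f"Transistors per watt improved roughly {ratio:.0f}x in about nine years")
# -> Transistors per watt improved roughly 21x in about nine years

And that undersells it, since a modern transistor also switches far faster than a circa-2000 one at the same power.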