Can't say it's really a quarter. It runs significantly higher clocks (675 vs. 576 MHz), though only 32 GB/s of VRAM bandwidth vs. 86.4 GB/s. That works out to roughly 29% of the GPU throughput and 37% of the VRAM bandwidth for 33% of the price. Still terrible, though.
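A quick sanity check on those ratios (assuming the "quarter" refers to shader count, 32 SPs on the 8600 GTS vs. 128 on the 8800 GTX - that part is my reading, not stated above):

# Rough arithmetic behind the 29% / 37% / 33% figures.
gpu_ratio   = (32 * 675) / (128 * 576)   # shader count x core clock
vram_ratio  = 32.0 / 86.4                # GB/s memory bandwidth
price_ratio = 200 / 600                  # rough launch prices
print(f"GPU: {gpu_ratio:.0%}, VRAM: {vram_ratio:.0%}, price: {price_ratio:.0%}")
# -> GPU: 29%, VRAM: 37%, price: 33%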
You're looking at it from a buyer's/performance perspective.
From NV's point of view, once the design is done, they only pay for die manufacturing.
So, G80 vs. G84 would look like this in NV's books: 681M transistors @ 484 mm^2 vs. 210M transistors @ 127 mm^2.
Which means that on a 12" (300 mm) wafer NV can fit roughly 4x more G84s than G80s (depending on the actual die dimensions - closer to square is better). More dies = more GPUs to sell, and the math is simple: 4x $200 > 1x $600.
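A minimal sketch of that wafer math, using the common dies-per-wafer approximation; the 300 mm diameter is the 12" wafer above, the die areas are the ones quoted, and the $200/$600 figures are the rough launch prices, so treat the outputs as ballpark only:

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    # First-order approximation: ignores defects, scribe lines and aspect ratio.
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

g80 = dies_per_wafer(484)   # ~115 candidate dies per wafer
g84 = dies_per_wafer(127)   # ~497 candidate dies per wafer (area as quoted above)
print(g80, g84, g84 / g80)  # roughly a 4x ratio
print(g80 * 600, g84 * 200) # naive revenue per wafer: ~$69k vs. ~$99k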
I doubt the cost of a G80 die to card manufacturers was actually 4x that of a G84* in 2007 (but this is pure speculation).
Sure, binning takes place, but rejects from the 8600 GTS just get binned as 8600 GT in this case (still a $160 card in 2007).
*Note: G84 is an 80 nm part, while G80 is 90 nm.
The manufacturing cost per die WILL differ from the same-process assumption I made above.
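To illustrate that footnote, here is the same per-die math with placeholder wafer prices (entirely made up, not real 2007 figures), where the newer 80 nm wafer is assumed to cost a bit more than the mature 90 nm one; yield and binning are ignored:

def cost_per_die(wafer_cost_usd, dies):
    # Ignores yield/binning; die counts taken from the sketch above.
    return wafer_cost_usd / dies

g80_cost = cost_per_die(4000, 115)  # hypothetical 90 nm wafer price
g84_cost = cost_per_die(4400, 497)  # hypothetical 80 nm wafer price
print(g80_cost, g84_cost, g80_cost / g84_cost)  # ratio lands below the ~4.3x area ratio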