Tests were completed with UNDERCLOCKED memory, and at every memory clock point the percentage gain or loss was significantly smaller than the gain or loss created by changing the core clock.
Is that better? Here is a benchmark run from an actual user, back to back, changing only the memory speed.
I've tested in Unigine Heaven, the SF4 benchmark and the DMC4 benchmark, core always at 950 MHz and memory speed varying from 1000 to 1300 MHz.
Heaven 1680x1050 - 4xAA - 16xAF MAX settings
950/1000 = 31.7fps - score 799
950/1100 = 32.6fps - score 822
950/1200 = 33.6fps - score 845
950/1300 = 34.4fps - score 865
SF4 benchmark 1920x1200 - 4xAA - 16xAF, MAX - Posterization
950/1000 - 134.64
950/1100 - 139.37
950/1200 - 142.01
950/1300 - 145.44
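If anyone wants to sanity-check the scaling, here's a quick Python sketch (just my own back-of-the-envelope, using only the Heaven and SF4 numbers posted above) that works out the memory clock increase vs. the fps increase over the 950/1000 baseline:
[CODE]
# Quick check of the scaling from the numbers above (values copied from my runs).
# Prints the % memory clock increase vs. the % fps increase over the 950/1000 baseline.

heaven_fps = {1000: 31.7, 1100: 32.6, 1200: 33.6, 1300: 34.4}
sf4_fps    = {1000: 134.64, 1100: 139.37, 1200: 142.01, 1300: 145.44}

def scaling(results):
    base_clk = min(results)
    base_fps = results[base_clk]
    for clk, fps in sorted(results.items()):
        mem_gain = (clk / base_clk - 1) * 100
        fps_gain = (fps / base_fps - 1) * 100
        print(f"  {clk} MHz: +{mem_gain:.0f}% memory -> +{fps_gain:.1f}% fps")

print("Heaven:")
scaling(heaven_fps)
print("SF4:")
scaling(sf4_fps)
[/CODE]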
For DMC4 I had to average 3 runs across the 4 scenes and experienced some issues, so I won't clog up my post with those results; suffice to say it followed these results quite closely.
My testing shows:
A 30% difference in memory bandwidth across the 256-bit bus results in a performance difference of ~9%.
I dare make the assumption that even if ATi had paired this card with 6.4 Gbps memory instead of 4.8 Gbps, we'd see a performance gain of around 10%, given my testing between 4.0 and 5.2 Gbps.
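Here's the rough maths behind that guess, assuming the roughly linear Heaven trend from 1000-1300 MHz just keeps going (an assumption on my part, since 1600 MHz wasn't actually tested):
[CODE]
# Rough extrapolation from the Heaven numbers, assuming the 1000-1300 MHz trend stays linear.
# Stock memory is 1200 MHz (4.8 Gbps effective); 1600 MHz would be 6.4 Gbps. A guess, not a measurement.

fps = {1000: 31.7, 1100: 32.6, 1200: 33.6, 1300: 34.4}   # Heaven, core fixed at 950 MHz

# Observed fps gained per MHz of memory clock across the tested range
slope = (fps[1300] - fps[1000]) / (1300 - 1000)

est_1600 = fps[1200] + slope * (1600 - 1200)
print(f"Estimated Heaven fps at 950/1600: {est_1600:.1f} "
      f"(+{(est_1600 / fps[1200] - 1) * 100:.0f}% over stock 950/1200)")
[/CODE]
That comes out at roughly 11% over stock, which is where the "around 10%" figure comes from.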
I'd love to speculate on how the card would perform with a 512-bit bus, but I honestly don't think I could do it justice. I really think the choice of 4.8 Gbps memory came down to how cheap and abundant those chips are compared to faster-clocked stuff, and the fact that performance on this GPU seems to have little to gain from memory speed alone.
http://forums.techpowerup.com/attachment.php?attachmentid=30510&d=1257887705
A simple chart to show the relationship of memory speed to gain.
Blue is the memory speed increase, red is the actual performance increase, and yellow is the net loss of effect that people are proposing. So for every 10% increase, your 3% of win is overbalanced by 7% of fail, netting you over 200% more fail per clock.
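If anyone wants to redraw it themselves, here's a rough matplotlib sketch built only from the Heaven numbers above, so the exact bars won't match the attachment, but the shape should:
[CODE]
# Rough re-creation of the chart from the Heaven numbers above.
# Blue = % memory clock increase, red = % performance increase, yellow = the gap between the two.
import matplotlib.pyplot as plt

fps = {1000: 31.7, 1100: 32.6, 1200: 33.6, 1300: 34.4}   # Heaven, core at 950 MHz
clocks = sorted(fps)
base = clocks[0]

mem_gain  = [(c / base - 1) * 100 for c in clocks]            # blue
perf_gain = [(fps[c] / fps[base] - 1) * 100 for c in clocks]  # red
net_loss  = [m - p for m, p in zip(mem_gain, perf_gain)]      # yellow

width = 25
plt.bar([c - width for c in clocks], mem_gain,  width, color="blue",   label="memory speed increase (%)")
plt.bar(clocks,                      perf_gain, width, color="red",    label="performance increase (%)")
plt.bar([c + width for c in clocks], net_loss,  width, color="yellow", label="net loss of effect (%)")
plt.xlabel("memory clock (MHz)")
plt.ylabel("% vs 950/1000 baseline")
plt.legend()
plt.show()
[/CODE]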