You would need a Fury (or a Tesla) with GDDR5 to test...
And we're going to ignore how Fury X performance doesn't tank when you go GBs over on VRAM usage, eh?
No, but then the Fury also had just 4 GB to work with, so it HAD to not tank when you wanted to use more than that. Again, the *added advantage* of using HBM on the Fury is nonexistent, because the core can't go fast enough to saturate it. So you can whine about Nvidia's tight VRAM budget (which is true, I'm not denying that, but you really need to let go of dropping the fanboy bomb everywhere you go), but you need A (a fast core) to make use of B (all that bandwidth). And the fact remains that AMD's A was too weak to saturate B: the Fury X carried as much imbalance (overcapacity) on the VRAM side as the Nvidia 1080 carries on the core side.
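To make the "A must saturate B" point concrete, here's a minimal roofline-style sketch in Python: achievable throughput is the min of the core limit and the bandwidth limit, so whichever side is oversized just sits idle. The spec-like figures and the 25 FLOPs/byte workload intensity are illustrative assumptions, not measurements.

```python
# Roofline-style sketch of the balance argument: a workload's
# achievable throughput is min(core limit, bandwidth limit), so
# whichever side is oversized simply sits idle.

def perf(core_tflops: float, mem_gbs: float, flops_per_byte: float) -> float:
    """Achievable TFLOPS for a workload of the given arithmetic intensity."""
    # Bandwidth ceiling: (GB/s * FLOPs/byte) / 1000 -> TFLOPS.
    bandwidth_ceiling = mem_gbs * flops_per_byte / 1000.0
    return min(core_tflops, bandwidth_ceiling)

# Fury-X-like figures (illustrative): big HBM pipe, core binds first,
# so ~4 of the 12.8 "TFLOPS worth" of bandwidth go unused.
print(perf(core_tflops=8.6, mem_gbs=512, flops_per_byte=25))   # -> 8.6 (core-bound)

# The inverse imbalance (made-up card): strong core, narrow pipe,
# bandwidth binds first and the core starves.
print(perf(core_tflops=9.0, mem_gbs=256, flops_per_byte=25))   # -> 6.4 (bandwidth-bound)
```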
The funny thing about all this is that, *across the board* on a large benchmark suite, it is STILL better to have a 'too strong' core than to have too much bandwidth. Can you have too much bandwidth? Yes, because a wide memory pipe costs power, and that power comes out of the same TDP budget as everything else; the more of that budget you can reserve for the GPU core, the more performance you can extract from it.
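A back-of-envelope illustration of that budget split. Every number here (board TDP, overhead, watts per GB/s) is a hypothetical placeholder chosen just to show the shape of the trade-off, not a measured figure:

```python
# Back-of-envelope split of a fixed board TDP: every watt the memory
# subsystem burns is a watt the GPU core can't have. ALL figures here
# are hypothetical placeholders, not measured numbers.

BOARD_TDP_W = 250.0   # assumed total board power budget
OVERHEAD_W = 30.0     # assumed fans/VRM/misc losses

def core_budget_w(mem_gbs: float, watts_per_gbs: float) -> float:
    """Watts left over for the GPU core after feeding the memory pipe."""
    memory_w = mem_gbs * watts_per_gbs
    return BOARD_TDP_W - OVERHEAD_W - memory_w

# Hypothetical memory cost of 0.10 W per GB/s of sustained bandwidth:
print(core_budget_w(512, 0.10))   # wide pipe   -> 168.8 W left for the core
print(core_budget_w(336, 0.10))   # narrow pipe -> 186.4 W left for the core
```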
So harp all you want about Nvidia's tight VRAM; as long as the GPU core is still king of the road, which it is by a margin of about 50-60% at this point versus AMD's offerings, that is what counts. Not to mention that only a portion of game engines lean heavily on VRAM, while every game engine wants a fast core. Nvidia simply has a better balance going than AMD, and that has been the case since Kepler.
Again, the resulting performance is what matters; all the rest is irrelevant unless you are looking at very specific engines and situations, such as 4K gaming, where the Fury X excelled but still couldn't really beat a 980 Ti, because the latter had much better OC headroom - a direct result of using a tight VRAM bus.