That's too simplistic IMO. VRAM usage, especially in games with complex, aggressive asset-streaming systems like these, is never reducible to "I took a measurement, here's the number". That's a methodological weakness of these measurements, though an understandable one, as an actual investigation into real-world VRAM usage (rather than just opportunistic allocation) is very, very time-consuming, if it's possible at all.
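To be clear about what a typical measurement captures: overlay tools and driver APIs report memory that has been allocated on the card, not the subset of assets the GPU actually touches in a given frame. A minimal sketch of that kind of readout, assuming an NVIDIA card and the pynvml bindings purely as an illustration (I don't know which tool the reviewer used):

```python
# Sketch: what a typical "VRAM usage" readout actually captures.
# Assumes an NVIDIA GPU and the pynvml bindings (pip install nvidia-ml-py).
# The "used" figure is memory *allocated* on the device, not the working set
# of assets the game actually references each frame - which is the distinction
# this whole post is about.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)

print(f"total:                        {mem.total / 2**30:.1f} GiB")
print(f"allocated (what overlays show): {mem.used / 2**30:.1f} GiB")
print(f"free:                         {mem.free / 2**30:.1f} GiB")

pynvml.nvmlShutdown()
```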
The core of the issue here is that not (even close to) all allocated VRAM is actually put to use in any given play situation, partly because the asset-streaming system needs to cover all eventualities of player movement - do you run straight down a hallway, exit the building, or climb to its roof? Assets for all of these possibilities need to be resident, even though only one of them actually happens, so only one set of that data is ever put to use. And as time passes, unused assets are evicted from VRAM to make room for new possible progressions of play.

The breaking point between covering enough eventualities and the available VRAM is where you start to see storage/IO-based stutter, as the game has to stream in assets as they are needed rather than ahead of time. But since we only have average FPS numbers here and no .1%/.01% lows, we don't know what actual performance looks like. Is the average an even chug, or is it incredibly spiky and wide-ranging? We don't know. That's why I asked the person actually doing the benchmarks for some insight.

It might be down to the game genuinely needing more than 8GB of VRAM, but it might just as easily be the asset-streaming system not keeping up (and thus being bottlenecked by the bus), a driver issue, or a bunch of other factors.
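To make the averages-vs-lows point concrete, here's a quick sketch of how 1%/0.1% lows are usually derived from a frame-time log (the kind of data PresentMon or CapFrameX records). The numbers are made up, just to show how two runs with nearly identical averages can feel completely different:

```python
# Sketch: why an average FPS figure can hide spiky frame delivery.
# frame_times_ms is a hypothetical frame-time log in milliseconds;
# all values below are invented for illustration.

def summarize(frame_times_ms):
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

    def low(pct):
        # "1% / 0.1% lows" as commonly reported: average frame time of the
        # slowest pct% of frames, expressed as an FPS figure.
        n = max(1, int(len(frame_times_ms) * pct / 100.0))
        worst = sorted(frame_times_ms)[-n:]
        return 1000.0 / (sum(worst) / len(worst))

    return avg_fps, low(1.0), low(0.1)

# Two made-up runs with nearly the same average but very different pacing:
smooth = [16.7] * 10_000                    # an even ~60 fps chug
spiky  = [14.9] * 9_900 + [190.0] * 100     # mostly fast, with big hitches

for name, run in (("smooth", smooth), ("spiky", spiky)):
    avg, p1, p01 = summarize(run)
    print(f"{name}: avg {avg:.0f} fps, 1% low {p1:.0f} fps, 0.1% low {p01:.0f} fps")
```

Both runs average roughly 60 fps, but the second one's 0.1% lows collapse into single digits - exactly the kind of storage/IO hitching an average-only chart can't show.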