You're talking about frames over time. I'm talking about framebuffers used in the computation of a single output frame. I don't know, though. I'm just trying to figure out the jump from 8.1 GB to 18.5 GB when going from 4K to 8K. My guess is that it suggests a lot of scratch space being used. Perhaps someone else knows why 8K needs an extra 10.4 GB of VRAM.
Using that many framebuffers for multiple render passes seems unlikely, and if this was rendered with raytracing, the need for such multi-pass "tricks" should decrease, not increase. A ~3-5 pass pipeline is typical, more than 10-12 passes is unusual, and something like ~100 passes would be extraordinary.
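For a sense of scale, here's a rough Python sketch of how many full-resolution render targets it would take to account for the extra 10.4 GB. The RGBA16F format (8 bytes/pixel) is my assumption of a common HDR/G-buffer layout, not anything confirmed about the game:

```python
# Back-of-envelope: how many full-res render targets explain +10.4 GB at 8K?

def target_bytes(width, height, bytes_per_pixel):
    """Uncompressed size of one render target in bytes."""
    return width * height * bytes_per_pixel

GB = 1e9

# One 8K target in RGBA16F (8 bytes/pixel) -- assumed format
per_target = target_bytes(7680, 4320, 8)

print(per_target / GB)          # ~0.265 GB per 8K target
print(10.4 * GB / per_target)   # ~39 full-screen targets to fill 10.4 GB
```

That's roughly 40 full-screen targets, an order of magnitude more than a typical 3-5 pass pipeline would allocate.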
I suspect that either the comparison is not "apples to apples", the game is buggy/"unpolished", or the game is doing something very unusual.
Edit: I also want to point out that the Titan RTX, with its massive 672 GB/s of memory bandwidth, only has about 11.2 GB of memory traffic to spend per frame at 60 FPS (672 / 60), so I doubt that all of this is "scratch space".
Or let me put it another way: if a game is using >10 GB of "scratch space" within a single frame, just writing it once and then reading it back once during the fragment shader part of the rendering is already ~21 GB of memory traffic, which caps you at roughly 32 FPS on a Titan RTX before anything else touches memory; touch that data a few more times, as multi-pass rendering tends to, and you're down around ~10 FPS. So you will be bottlenecked by memory bandwidth long before capacity if you intend to use that much in a single frame.
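To make the arithmetic explicit, here's the same argument as a sketch. The 672 GB/s figure is the Titan RTX's spec bandwidth; the "touched three times each way" multiplier is my illustrative assumption:

```python
# Idealized bandwidth ceiling for >10 GB of per-frame scratch space
BANDWIDTH = 672e9   # bytes/s, Titan RTX peak memory bandwidth
SCRATCH   = 10.4e9  # bytes of hypothetical per-frame scratch space

# Minimum traffic: write the scratch data once, read it back once
print(BANDWIDTH / (2 * SCRATCH))   # ~32 FPS absolute ceiling

# Touch the same data ~3x each way (assumed multi-pass reuse) and:
print(BANDWIDTH / (6 * SCRATCH))   # ~10.8 FPS
```

And that ceiling ignores every other memory access the frame needs (textures, geometry, the final framebuffer itself), so real performance would be lower still.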