The VRS tests in 3DMark are not "maximum possible" scenarios - by default they try to mimic a realistic game scenario and get as close to being "invisible" as possible. The goal for the Tier 2 test was to make the difference invisible in motion unless you specifically knew what to look for.
Are the gains bigger than they most likely will be in games? I don't think so.
Note that the Tier 1 test is much more lightweight (and runs at 1080p vs. 4K in Tier 2) to accommodate the lower end of the hardware spectrum, i.e. the Ice Lake iGPU, and because Tier 1 is a simpler version of the tech, the quality difference there is more apparent.
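For anyone curious what the tiers actually mean on the API side, here is a minimal sketch of how an engine might query VRS support and apply it in D3D12. The function name, the 2x2 rate and the commented-out shading-rate image are just illustrative assumptions on my part, not how the 3DMark test is implemented:

#include <d3d12.h>

// Sketch only: assumes 'device' and 'commandList' already exist, error handling omitted.
void ApplyVrsIfSupported(ID3D12Device* device, ID3D12GraphicsCommandList5* commandList)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options, sizeof(options))))
        return; // no VRS support at all

    if (options.VariableShadingRateTier == D3D12_VARIABLE_SHADING_RATE_TIER_1)
    {
        // Tier 1: one shading rate per draw call, e.g. shade low-detail geometry at 2x2.
        commandList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    }
    else if (options.VariableShadingRateTier == D3D12_VARIABLE_SHADING_RATE_TIER_2)
    {
        // Tier 2: the rate can also vary per screen-space tile via a shading-rate image
        // (a small texture the renderer could update every frame), plus per primitive.
        // 'shadingRateImage' is a hypothetical resource here, hence commented out:
        // commandList->RSSetShadingRateImage(shadingRateImage);
    }
}

The per-tile shading-rate image is what makes the "you can't see it in motion" approach possible - the renderer gets to decide every frame where full-rate shading is actually worth it.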
If you want the maximum possible gains, there are settings for that in the Interactive Mode. You can also save screenshots from it for closer analysis and observe how the framerate changes as you trade quality for performance.
VRS is going to be (eventually) a major feature that actually allows "8K gaming" one day without completely hilarious amounts of GPU power. With that many pixels it is outright stupid to do everything at full res unless you are gaming on a 72" screen or something. Think about a hypothetical 34" ultrawide gaming monitor that actually has ~8K horizontal resolution (well, four times the pixels of current 3440x1440 ultrawides, so 6880x2880) - the pixels are so small that you have to be smart about where you put your pixel processing power, or you just end up wasting most of it on detail that no one can see without a magnifying glass.
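To put rough numbers on that (my own back-of-the-envelope arithmetic, nothing from the benchmark), a quick calculation of pixel count and pixel density for that hypothetical screen:

#include <cmath>
#include <cstdio>

int main()
{
    const double w = 6880.0, h = 2880.0, diagonalInches = 34.0;
    const double megapixels = w * h / 1e6;                        // ~19.8 MP, vs ~4.95 MP at 3440x1440
    const double ppi = std::sqrt(w * w + h * h) / diagonalInches; // ~219 PPI, vs ~110 PPI on today's 34" ultrawides
    std::printf("%.1f MP, %.0f PPI\n", megapixels, ppi);
    return 0;
}

Roughly four times the pixels and double the pixel density of a current 34" ultrawide - exactly the kind of screen where shading every single pixel at full rate buys you nothing visible.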
Yes, it'll be years before such screens are around and common, but underlying tech like this is always years ahead of practical use. This is also an interesting feature because it is completely up to game developers and artists - with smart defaults they can do some really impressive stuff that you can't really tell apart from "real full resolution".
And yes, widespread adoption in games will take years. The fact that this is DX12-only already means it'll be another year or two before most games could even use it (due to Win7 users who cling to their dying OS).