I run synthetic benchmarks more than I actually game, but I've found that the specific title makes a big difference depending on how well it's optimized. I typically run everything at native rendering, no DLSS or the like. A title like Cyberpunk 2077 will run at 4K on 8GB of VRAM, but it's unplayable.

For comparison's sake (and out of sheer curiosity), I ran a Vega 64 LC against a Vega Frontier Edition LC (workstation mode disabled) in Cyberpunk 2077 to see what difference the extra VRAM makes in actual gameplay. At 4K, the Vega Frontier Edition was far better than the Vega 64 (27-32 FPS vs 18-22 FPS) at identical core/memory clocks. Shifting to 1440p made little difference: the V64 was still using all of its VRAM, while the Frontier used about 14GB of its 16GB. At 1080p, the V64 pulled ahead of the Frontier by around 8%, which is more typical when the frame buffer isn't the limitation. Vega can also use HBCC to extend VRAM into system memory, but in my experience that has only made frame rates worse.
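To put a rough number on that 4K gap, here's a quick back-of-the-envelope sketch using the midpoints of the FPS ranges I quoted above (so treat the percentage as approximate, not a measured average):

```python
# Rough comparison of the FPS ranges quoted above, using midpoints only.
# The FPS numbers are from my runs; the percentage math is just illustrative.

def midpoint(lo, hi):
    return (lo + hi) / 2

v64_4k = midpoint(18, 22)   # Vega 64 LC at 4K: 18-22 FPS
fe_4k = midpoint(27, 32)    # Vega Frontier Edition LC at 4K: 27-32 FPS

uplift = (fe_4k - v64_4k) / v64_4k * 100
print(f"Frontier Edition advantage at 4K: ~{uplift:.0f}%")  # ~48%
```

In other words, the extra VRAM is worth roughly a 45-50% uplift at 4K in this particular title, while at 1080p the advantage flips the other way.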
My "everyday" gaming GPU is either the RX 6800 or the Radeon VII, depending on which workstation I'm using, and both of them are easily playable with most titles at 1440P (High or Ultimate), which is where I typically stay. I also try to vsync if possible, mainly just because my monitors are 60hz/75hz respectively. For the 4k testing, I connected up to a 65" Sony flatscreen in order to achieve the native 4k experience. No ray tracing obviously, but still interesting nonetheless. I will be testing a workstation-grade GP100 against a 1080 Ti a bit later to compare the 16GB HBM2 (732 GBs Bandwidth) against 11GB GDDR5X (484 GBs Bandwidth). The 1080 Ti runs much faster, but the GP100 has more ROPs and higher bandwidth (albeit at lower clocks).
My use case is mainly workstation-based (data science and ML), so most of the testing I do is on GPUs that fit this category. Benchmarking/overclocking/modding is my hobby, but I enjoy gaming as well.