You can't handle the truth when 8 GB of VRAM is below the PS5's and XSX's VRAM allocations.
Fact: the Windows 10 OS and non-gaming apps still consume the dedicated GPU's VRAM. Look in the mirror.
For desktop PCs, Windows 10/11's VRAM footprint on the dedicated GPU can be reduced by enabling the IGP and letting the OS drive the dedicated GPU as a secondary co-processor, just like laptop IGP-dGPU hybrids do.
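This per-app routing is what the Settings > System > Display > Graphics page controls, and it boils down to per-executable registry entries. Here is a minimal sketch of setting that preference programmatically, assuming the usual HKCU\Software\Microsoft\DirectX\UserGpuPreferences key Windows uses for per-app GPU preferences; the game path below is a placeholder:

```cpp
#include <windows.h>
#include <iostream>

// Sketch: write the same per-app GPU preference entry that the
// Settings > System > Display > Graphics page writes.
// "GpuPreference=2;" = high-performance GPU (the dGPU),
// "GpuPreference=1;" = power-saving GPU (the IGP),
// "GpuPreference=0;" = let Windows decide.
int main()
{
    const wchar_t* kKey  = L"Software\\Microsoft\\DirectX\\UserGpuPreferences";
    const wchar_t* kGame = L"C:\\Games\\ExampleGame\\game.exe"; // placeholder path
    const wchar_t* kPref = L"GpuPreference=2;";                 // force the dGPU

    LSTATUS rc = RegSetKeyValueW(HKEY_CURRENT_USER, kKey, kGame, REG_SZ, kPref,
                                 (DWORD)((wcslen(kPref) + 1) * sizeof(wchar_t)));
    std::wcout << (rc == ERROR_SUCCESS ? L"Preference set.\n" : L"Failed.\n");
    return rc == ERROR_SUCCESS ? 0 : 1;
}
```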
All of AMD's desktop Zen 4 CPUs (with an RDNA 2-based IGP) and most Intel Alder Lake / Rocket Lake SKUs have IGP capability. The IGP can handle desktop graphics duty while the dedicated GPU's VRAM stays fully allocated to the game.
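From the game's side, any D3D application can explicitly ask DXGI for the high-performance adapter so it lands on the dGPU while the desktop stays on the IGP. A minimal enumeration sketch using IDXGIFactory6::EnumAdapterByGpuPreference (Windows 10 1803 or later):

```cpp
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cwchar>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory))))
        return 1;

    // DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE lists the dGPU first on
    // IGP + dGPU systems; DXGI_GPU_PREFERENCE_MINIMUM_POWER would
    // list the IGP first. A game would take adapter 0 from this order.
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapterByGpuPreference(i, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE,
                                             IID_PPV_ARGS(&adapter)) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        std::wprintf(L"%u: %s (%llu MB dedicated VRAM)\n", i, desc.Description,
                     (unsigned long long)(desc.DedicatedVideoMemory >> 20));
    }
    return 0;
}
```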
AMD's desktop Zen 4 RDNA 2-based IGP, fed by 128-bit DDR5-6000, has up to 96 GB/s of memory bandwidth to its framebuffer, considerably higher than the 32 GB/s limit of a PCIe 4.0 x16 link. Most Intel Alder Lake / Rocket Lake SKUs with an IGP get the same benefit.
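For reference, the arithmetic behind those two figures (PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding):

\[ 128\,\text{bit} \div 8 \times 6000\,\text{MT/s} = 16\,\text{B} \times 6000\,\text{MT/s} = 96\,\text{GB/s} \]
\[ 16\,\text{lanes} \times 16\,\text{GT/s} \times \tfrac{128}{130} \div 8 \approx 16 \times 1.97\,\text{GB/s} \approx 31.5\,\text{GB/s} \]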
Your Intel Core i5-12400F Alder Lake doesn't have IGP capability (the "F" suffix means the IGP is disabled), hence you can't offload desktop graphics to an IGP to conserve the RTX 2070 Super's 8 GB of VRAM for games. A PC IGP is not useless when it comes to conserving the dedicated GPU's VRAM.
Game consoles don't have the PC's spare IGP to conserve the main GPU's VRAM.
Benchmark averages from the attached chart:

RTX 3080 10 GB = 45 fps average
RTX 3080 12 GB = 55 fps average

The 10 GB card's smaller VRAM pool accounts for the frame-rate difference.

RTX 3070 Ti 8 GB = 17 fps average
RTX 3070 8 GB = 17 fps average
RTX 3060 12 GB = 27 fps average

The RTX 3070 Ti 8 GB and RTX 3070 8 GB are slower than the RTX 3060 12 GB.

RTX 2080 Ti 11 GB = 41 fps average

Compared to the RTX 3070 / RTX 3070 Ti, the RTX 2080 Ti was able to maintain good performance thanks to its 11 GB of VRAM.
Without VRAM pressure, the RTX 3070 and RTX 2080 Ti post similar results, i.e. 67 fps at 1440p ultra quality without ray tracing.
The console comparison is relevant once the PC's memory-management inefficiencies are taken into account. Read:
https://www.techpowerup.com/306713/...s-simultaneous-access-to-vram-for-cpu-and-gpu
Microsoft has implemented two new features into its DirectX 12 API - GPU Upload Heaps and Non-Normalized sampling have been added via the latest Agility SDK 1.710.0 preview, and the former looks to be the more intriguing of the pair. The SDK preview is only accessible to developers at the present time, since its official introduction on Friday 31 March. Support has also been initiated via the latest graphics drivers issued by NVIDIA, Intel, and AMD. The Microsoft team has this to say about the preview version of GPU upload heaps feature in DirectX 12: "Historically a GPU's VRAM was inaccessible to the CPU, forcing programs to have to copy large amounts of data to the GPU via the PCI bus. Most modern GPUs have introduced VRAM resizable base address register (BAR) enabling Windows to manage the GPU VRAM in WDDM 2.0 or later."
A shared pool of memory between the CPU and GPU will eliminate the need to keep duplicates of the game scenario data in both system memory and graphics card VRAM, therefore resulting in a reduced data stream between the two locations. Modern graphics cards have tended to feature very fast on-board memory standards (GDDR6) in contrast to main system memory (DDR5 at best). In theory the CPU could benefit greatly from exclusive access to a pool of ultra quick VRAM, perhaps giving an early preview of a time when DDR6 becomes the daily standard in main system memory.
Better late than never.
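From the application side, here is a minimal sketch of the new heap type, assuming the Agility SDK 1.710.0 preview headers: the names used below (D3D12_FEATURE_D3D12_OPTIONS16 with its GPUUploadHeapSupported field, and D3D12_HEAP_TYPE_GPU_UPLOAD) come from that preview, the feature is only reported with resizable BAR enabled, and details such as the required initial resource state may change before it ships:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Feature check: GPU upload heaps are only reported when resizable
    // BAR is enabled and the driver opts in.
    D3D12_FEATURE_DATA_D3D12_OPTIONS16 opts16 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS16,
                                           &opts16, sizeof(opts16))) ||
        !opts16.GPUUploadHeapSupported)
    {
        std::puts("GPU upload heaps not supported on this system.");
        return 0;
    }

    // A buffer placed directly in CPU-visible VRAM: the CPU maps and
    // writes it, the GPU reads it, and no separate upload copy is needed.
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_GPU_UPLOAD;

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = 64 * 1024;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> buffer;
    if (FAILED(device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE,
                                               &desc, D3D12_RESOURCE_STATE_COMMON,
                                               nullptr, IID_PPV_ARGS(&buffer))))
        return 1;

    // CPU writes straight into VRAM over the resizable BAR window.
    void* p = nullptr;
    if (SUCCEEDED(buffer->Map(0, nullptr, &p)))
    {
        std::memset(p, 0, 64 * 1024);
        buffer->Unmap(0, nullptr);
    }
    std::puts("Wrote directly into a GPU upload heap in VRAM.");
    return 0;
}
```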
A CPU + GPU unified memory architecture is nothing new; it just hasn't been exposed to consumer-level Windows software. You can get unified memory with NVIDIA CUDA or AMD HIP on Linux right now.
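For instance, a minimal CUDA unified-memory sketch: one cudaMallocManaged allocation is visible to both the CPU and the GPU, with the driver migrating pages on demand instead of the program keeping two copies (the HIP port is nearly identical with hipMallocManaged):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: the GPU increments each element in place.
__global__ void increment(int* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main()
{
    const int n = 1 << 20;
    int* data = nullptr;

    // Single allocation, shared by CPU and GPU.
    if (cudaMallocManaged(&data, n * sizeof(int)) != cudaSuccess) return 1;

    for (int i = 0; i < n; ++i) data[i] = i;      // CPU writes

    increment<<<(n + 255) / 256, 256>>>(data, n); // GPU reads and writes
    cudaDeviceSynchronize();

    std::printf("data[0] = %d (expected 1)\n", data[0]); // CPU reads again
    cudaFree(data);
    return 0;
}
```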