With new diff-based memory compression algorithms, frame buffer transfers between multiple GPUs could benefit from that...
That would not be nearly enough.
A normal SLI bridge has a bandwidth of 1 GB/s. At 120 FPS, 1 GB/s works out to about 8.5 MB per frame. So using basic math you can see that just transferring the final image over this bus is not an option. In fact, even over PCIe it's pretty slow to transfer between GPUs: (source at ~29:30)
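To make the arithmetic concrete, here is a quick sketch of the per-frame budget. The 1 GB/s and 120 FPS figures are from the post; the 1080p frame size is my own illustrative comparison:

```python
# Back-of-the-envelope check of the per-frame budget over an SLI bridge.
bridge_bw = 1 * 1024**3        # assumed 1 GB/s bridge bandwidth, in bytes
fps = 120                      # target frame rate from the post

budget = bridge_bw / fps       # bytes you can move per frame interval
print(f"per-frame budget: {budget / 1024**2:.1f} MiB")   # ~8.5 MiB

# An uncompressed 1920x1080 frame at 4 bytes per pixel (illustrative):
frame = 1920 * 1080 * 4
print(f"1080p frame:      {frame / 1024**2:.1f} MiB")    # ~7.9 MiB
```

So even at 1080p an uncompressed frame nearly saturates the whole bridge, leaving essentially no headroom for anything else, and at higher resolutions a single frame no longer fits at all.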
This is part of the explanation for why we get microstuttering in multi-GPU configurations. The picture from the second GPU needs to be transferred back to the primary GPU, disturbing it in the process. This is why we get the typical pattern of a long interval between frames, followed by a shorter one, then a longer one, and so on. Even a 1-2 ms difference is quite noticeable.
Even with AFR there is an issue with any post-processing shader that takes multiple frames into account, like temporal anti-aliasing.
True, it's one of several types of dependencies which will limit the multi-GPU scalability of a game.
BTW, people should use proper AA like MSAA or SSAA rather than these poor AA techniques.
With split frame rendering, the open-terrain problem where the sky is easier to render than the terrain could be solved with a vertical split.
It depends on what kind of camera angles and movements you are talking about. I would have no problem creating a terrain where the left part of the screen is 10 times as demanding to render as the right. As mentioned, it's technically possible to do split frame rendering, but you will get very poor utilization with a free camera, barely any gain at all. AFR, on the other hand, can (if your engine is well designed) double, triple or quadruple performance with more GPUs.
And with AFR, having shared data ready before the other GPU starts rendering gets much harder when the frame rate you want to scale reaches triple digits... The question is how much uncompressed data the HB bridge can transfer in ~8 ms (120 FPS).
Even if the HB bridge has double lanes (assuming 2 GB/s total), we are only talking about 17 MB per frame. You'll need at least NVLink to do what you are dreaming about...
Not exactly. While yes, game developers can program it to work better with multi-GPU, it's also up to the graphics companies to make the drivers efficient at it and able to utilize it. Otherwise we would not have to wait for profiles except to fix bugs... That is also the reason why some games show vast improvements with different driver revisions in SLI or CFX.
No, this is a common misunderstanding.
It's up to the game engine developers to design well for multi-GPU support (how queues are built up).
What you are confused about are driver-side tweaks from AMD and Nvidia, which are manipulations of the driver and the game's rendering pipeline to work around issues. These tweaks are limited in scope and can only do minor manipulations, such as working around strange bugs. If the game is not built for multi-GPU scaling, no patch from Nvidia or AMD can fix that properly.