Just duplicate the same geometry on 2 GPUs: one camera renders the left half, the other renders the right half, each GPU computes its own OpenGL camera (an asymmetric half frustum) and renders to a texture. A third CPU thread combines the two textures into the final frame. Easy peasy. Developers are just too lazy to do this easy thing. Even a 1.5x performance gain is a win: cutting frame time to about 67% means smoother fps.
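Here is a minimal sketch (plain C++, no GL calls) of the "each GPU computes its own camera" part: splitting one symmetric perspective frustum into two asymmetric half frusta. The names (FrustumBounds, splitFrustumHalf) and the FOV/near/far values are just illustration; each renderer would feed its half's bounds into glFrustum or build the equivalent projection matrix.

```cpp
#include <cmath>
#include <cstdio>

// Bounds of an asymmetric view frustum at the near plane.
struct FrustumBounds {
    double left, right, bottom, top, nearZ, farZ;
};

// Split a symmetric perspective frustum (fovY in radians, aspect = w/h)
// into the left or right half. Each half keeps the full vertical extent
// but only half of the horizontal extent, so two GPUs can each render
// half of the final image at roughly half the pixel cost.
FrustumBounds splitFrustumHalf(double fovY, double aspect,
                               double nearZ, double farZ, bool leftHalf) {
    const double top   = nearZ * std::tan(fovY * 0.5);
    const double right = top * aspect;
    FrustumBounds b;
    b.bottom = -top;  b.top = top;
    b.nearZ  = nearZ; b.farZ = farZ;
    if (leftHalf) { b.left = -right; b.right = 0.0;   }
    else          { b.left = 0.0;    b.right = right; }
    return b;
}

int main() {
    // Example: 60 degree vertical FOV, 16:9 aspect, typical near/far planes.
    const double kPi  = 3.14159265358979323846;
    const double fovY = 60.0 * kPi / 180.0;
    FrustumBounds l = splitFrustumHalf(fovY, 16.0 / 9.0, 0.1, 1000.0, true);
    FrustumBounds r = splitFrustumHalf(fovY, 16.0 / 9.0, 0.1, 1000.0, false);
    std::printf("left  half: l=%f r=%f b=%f t=%f\n", l.left, l.right, l.bottom, l.top);
    std::printf("right half: l=%f r=%f b=%f t=%f\n", r.left, r.right, r.bottom, r.top);
    // Each GPU would pass its bounds to glFrustum(l, r, b, t, near, far)
    // and render only its half of the image.
    return 0;
}
```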
The problem is that games typically use just one camera, and a single OpenGL context only drives one render target at a time. So use two OpenGL contexts in different threads and render each half to a texture. If OpenGL multi-context doesn't work out, use Vulkan or DX12, which let you target multiple GPUs explicitly.
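A minimal structural sketch of that two-contexts-in-two-threads idea, assuming GLFW for context creation: each thread "renders" its half (here just a clear color standing in for real drawing) and reads it back, and the combining step stitches the halves side by side. Sizes and names like renderHalf are made up, and which physical GPU each context lands on is up to the OS/driver, since OpenGL has no portable per-context GPU selection.

```cpp
#include <GLFW/glfw3.h>
#include <cstdio>
#include <cstring>
#include <functional>
#include <thread>
#include <vector>

static const int kHalfW = 640, kHalfH = 720;   // each context renders 640x720
static const int kFullW = kHalfW * 2;

// Render one half into the hidden window's framebuffer and read it back.
// A real renderer would draw the scene here with its half-frustum camera.
static void renderHalf(GLFWwindow* ctx, float r, float g, float b,
                       std::vector<unsigned char>& pixels) {
    glfwMakeContextCurrent(ctx);               // bind this thread's context
    glClearColor(r, g, b, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);               // stand-in for real drawing
    glReadPixels(0, 0, kHalfW, kHalfH, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glfwMakeContextCurrent(nullptr);            // release before thread exits
}

int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   // offscreen-style contexts
    // GLFW requires window creation on the main thread; the contexts are
    // then handed off to the worker threads.
    GLFWwindow* ctxL = glfwCreateWindow(kHalfW, kHalfH, "L", nullptr, nullptr);
    GLFWwindow* ctxR = glfwCreateWindow(kHalfW, kHalfH, "R", nullptr, nullptr);
    if (!ctxL || !ctxR) return 1;

    std::vector<unsigned char> leftPix(kHalfW * kHalfH * 4);
    std::vector<unsigned char> rightPix(kHalfW * kHalfH * 4);

    std::thread tl(renderHalf, ctxL, 1.0f, 0.0f, 0.0f, std::ref(leftPix));
    std::thread tr(renderHalf, ctxR, 0.0f, 0.0f, 1.0f, std::ref(rightPix));
    tl.join();
    tr.join();

    // Combining step: stitch the two halves side by side, row by row.
    std::vector<unsigned char> frame(kFullW * kHalfH * 4);
    for (int y = 0; y < kHalfH; ++y) {
        std::memcpy(&frame[(y * kFullW) * 4],          &leftPix[y * kHalfW * 4],  kHalfW * 4);
        std::memcpy(&frame[(y * kFullW + kHalfW) * 4], &rightPix[y * kHalfW * 4], kHalfW * 4);
    }
    std::printf("combined %dx%d frame, first pixel R=%u\n", kFullW, kHalfH, frame[0]);

    glfwDestroyWindow(ctxL);
    glfwDestroyWindow(ctxR);
    glfwTerminate();
    return 0;
}
```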
Even Windows 11 has an option to change the renderer GPU for OpenGL.