"t’s not as simple as that. You’re missing a variable. At least one. The software.
The PCIe lanes are the path that data and instructions take from the CPU to the graphics card, and the traffic on them is governed by the program that is running: what it needs to send to the GPU, how much, and how often.
If the program tries to send more than PCIe 2.0 x16’s 8 GB/s limit, it will flood those lanes, no matter what graphics card is at the other end.
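As a quick sanity check on that 8 GB/s figure, here is a minimal Python sketch, assuming the commonly quoted PCIe 2.0 numbers of 5 GT/s per lane with 8b/10b line encoding (roughly 500 MB/s usable per lane, per direction):

```python
# Rough sketch of where the 8 GB/s figure comes from (PCIe 2.0, x16).
# Assumes 5 GT/s raw rate per lane and 8b/10b encoding (80% efficiency).
RAW_TRANSFERS_PER_SEC = 5e9   # 5 GT/s per lane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line coding overhead
LANES = 16

bytes_per_sec = RAW_TRANSFERS_PER_SEC * ENCODING_EFFICIENCY / 8 * LANES
print(f"PCIe 2.0 x16 ~ {bytes_per_sec / 1e9:.0f} GB/s per direction")  # ~8 GB/s
```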
Usually, though, programs are designed not to push the limits of “current” technology. So a game made in 2020 (say, Cyberpunk 2077) might want to send (say) 100 MB per frame, while one from 2011 (say, Skyrim) might want to send only 10 MB per frame.
Then there is the question of what the card has to do with each of those transfers: how long does it take to complete the instructions the program “told” it to perform? That is what governs the maximum frame rate.
Add to this that some programs (games) cap the frame rate themselves. That is why I used Skyrim as an example: it limits the frame rate to 60 FPS, even if the graphics card is fast enough to calculate each frame quicker.
If those arbitrary per-frame data sizes above (chosen just as samples) were correct, you could actually do some calculations. E.g. a PCIe 2.0 x16 connection can handle a maximum of 8 GB/s, meaning it could run Cyberpunk’s 100 MB/frame at a maximum of 8000/100 = 80 FPS, even if the card could calculate each frame quicker, thus causing a PCIe bottleneck. But on that very same card, running Skyrim’s 10 MB/frame gives 8000/10 = 800 FPS, and since the game limits itself to 60 FPS, it would NEVER exceed 10 × 60 = 600 MB/s = 0.6 GB/s.
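Those back-of-the-envelope numbers translate directly into a small Python sketch; the 100 MB and 10 MB per-frame figures are just the placeholder values from above, not measurements:

```python
# Back-of-the-envelope PCIe bottleneck check, using the placeholder
# per-frame transfer sizes from the text (not measured values).
PCIE_2_X16_MBPS = 8000  # ~8 GB/s expressed in MB/s

def pcie_limited_fps(mb_per_frame: float) -> float:
    """Maximum frame rate the link alone allows."""
    return PCIE_2_X16_MBPS / mb_per_frame

def bandwidth_used(mb_per_frame: float, fps: float) -> float:
    """MB/s actually pushed over the link at a given (capped) frame rate."""
    return mb_per_frame * fps

print(pcie_limited_fps(100))   # 100 MB/frame (Cyberpunk-style) -> 80 FPS ceiling
print(pcie_limited_fps(10))    # 10 MB/frame (Skyrim-style)     -> 800 FPS ceiling
print(bandwidth_used(10, 60))  # Skyrim capped at 60 FPS        -> 600 MB/s = 0.6 GB/s
```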
I.e. even an RTX 3090 running Skyrim would not come close to flooding a PCIe 2.0 x16 connection, but it might flood it when running Cyberpunk. Then again, an old GTX 980 might not be able to reach the frame rate needed to flood the 8 GB/s maximum of PCIe 2.0 x16, even when running Cyberpunk."
What were the first GPUs to saturate a PCIe x16 v2.0 slot? - Quora