AMD's new Radeon RX 6500 XT comes with the narrowest PCI-Express interface of any consumer graphics card. It's just four lanes wide, a quarter of the x16 interface we're used to on graphics cards. It's certainly an interesting design choice by AMD: fewer lanes mean a lower GPU package pin-count, a smaller fiberglass substrate, fewer PCB traces, etc. This normally comes in handy for mobile platforms, but if AMD can pull it off on the desktop, it would give them a competitive advantage over companies like NVIDIA, who don't tinker with the PCIe lane counts on their entry-mainstream GPUs.
Looking at the performance data generated by this review, I suspect that even at PCIe 4.0 x4 there's a measurable performance loss for the RX 6500 XT. My estimate would be around 6–10%, which isn't huge, especially in this segment, but it's still performance that could have been reclaimed for only a minimal increase in cost. If the cost difference were substantial, everybody would have slimmed down their PCIe interfaces a long time ago.
Averaged over our game test suite, we found a 13% loss in performance when switching from the PCIe 4.0 interface to PCIe 3.0. This is what you'll get when running the Radeon RX 6500 XT on an Intel platform older than Rocket Lake, i.e. 10th generation and earlier. On the AMD side, PCIe 3.0 is the fastest you'll get with first-generation Zen processors or with lower-end motherboards using cheaper chipsets.
I also ran a full batch of tests with the RX 6500 XT operating in PCI-Express 2.0 mode. This is a fairly unlikely scenario, and I doubt PCs with that bus speed are used for serious gaming, but it's still an interesting data point for science. Here, the performance loss is another 21% vs. PCIe 3.0, and 34% in total compared to the PCI-Express 4.0 baseline.
The differences in individual games are huge and depend on how each game uses various rendering techniques. If every frame is copied back from the GPU to the CPU for additional processing, the performance hit will be big because each frame has to travel across the narrow PCIe x4 bus. This is also why the loss in performance gets smaller at higher resolutions: the amount of data moved per frame stays roughly the same, but fewer frames are rendered per second, so less data has to travel across the PCIe bus each second.
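To put rough numbers on that idea, here's a minimal back-of-the-envelope sketch, not taken from the review's measurements: it assumes a fixed amount of CPU-to-GPU traffic per frame (the 32 MB figure and the frame rates are purely illustrative) and compares the resulting bus load against the usable bandwidth of a x4 link at each PCIe generation.

```python
# Back-of-the-envelope sketch, not taken from the review's measurements: assume the
# CPU<->GPU traffic per frame (command buffers, resource updates, any copy-back) is
# roughly constant regardless of resolution, so total bus traffic scales with frame
# rate. The 32 MB/frame figure and the frame rates are illustrative assumptions.

PER_FRAME_TRAFFIC_MB = 32  # assumed per-frame PCIe traffic, resolution-independent

# Approximate usable one-way bandwidth of a x4 link in GB/s (after encoding overhead)
PCIE_X4_BANDWIDTH_GBS = {"2.0": 2.0, "3.0": 3.94, "4.0": 7.88}

scenarios = [("1080p", 120), ("1440p", 85), ("4K", 45)]  # hypothetical frame rates

for resolution, fps in scenarios:
    traffic_gbs = PER_FRAME_TRAFFIC_MB * fps / 1000
    print(f"{resolution} @ {fps} fps -> ~{traffic_gbs:.2f} GB/s over the bus")
    for gen, bw in PCIE_X4_BANDWIDTH_GBS.items():
        print(f"  PCIe {gen} x4 (~{bw} GB/s): {traffic_gbs / bw:6.1%} of link capacity")
```

The lower the frame rate, the smaller the share of the link the traffic occupies, which is exactly the pattern we see when resolution goes up.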
There are a few notable exceptions to this "resolution rule" in this review. In several games, there's a bigger loss in performance at 4K resolution specifically. The explanation is the small VRAM size of the RX 6500 XT. With just 4 GB, the card will run out of memory in many games at 4K. To ensure those games can still run, DirectX will allocate system memory instead and store assets there. Graphics memory on the graphics card is very fast for the GPU to access: it's just a short trip to the memory chips over a wide interface optimized to shuffle tons of data around. If the data instead has to be fetched from system memory, it has to travel much farther, over much narrower interfaces that are clocked much lower. While this is certainly an acceptable approach to avoid crashing games, it has a big impact on framerates and frametimes.
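To illustrate just how large that gap is, here's a rough bandwidth comparison. The GDDR6 numbers correspond to the RX 6500 XT's 64-bit, 18 Gbps memory configuration; the PCIe figures are the theoretical usable rates of a x4 link, so treat both as upper bounds rather than measured throughput.

```python
# Rough bandwidth comparison illustrating why spilling assets into system memory hurts.
# The GDDR6 numbers match the RX 6500 XT's 64-bit, 18 Gbps memory configuration; the
# PCIe numbers are theoretical usable x4 link rates, so treat both as upper bounds.

gddr6_bus_width_bits = 64
gddr6_data_rate_gbps = 18  # per pin
vram_bw_gbs = gddr6_bus_width_bits * gddr6_data_rate_gbps / 8
print(f"Local VRAM:                   ~{vram_bw_gbs:.0f} GB/s")  # ~144 GB/s

for gen, link_bw in {"4.0": 7.88, "3.0": 3.94, "2.0": 2.0}.items():
    print(f"System RAM over PCIe {gen} x4: ~{link_bw:.2f} GB/s "
          f"({vram_bw_gbs / link_bw:.0f}x slower than VRAM)")
```

Even in the best case, assets pulled from system memory arrive at less than a twentieth of the speed of local VRAM, which is why frametimes fall apart once the 4 GB buffer overflows.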
You also have to consider that very few currently available CPUs actually support Gen 4. For AMD, you need a Ryzen 3000 "Zen 2" or Ryzen 5000 "Zen 3" processor paired with a B550 or X570 motherboard (400-series boards won't help). Intel did not release Core i3 processors based on "Rocket Lake," which leaves the price of entry at around $200 for the processor on either brand for the target buyer of the RX 6500 XT. On the other hand, every platform since 2016 supports PCIe Gen 3. It would've helped if the SKU designers had opted for PCIe 3.0 x8 rather than PCIe 4.0 x4: the bandwidth is identical on a Gen 4 platform, but the x8 link would keep that bandwidth on Gen 3 platforms instead of dropping to half. That would have let this SKU hit its target performance across a much wider range of platforms.
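A minimal sketch of that link-bandwidth math makes the point; the per-lane rates are the standard PCIe figures, and the PCIe 3.0 x8 configuration is hypothetical, not an actual RX 6500 XT SKU.

```python
# Minimal sketch of the link-bandwidth math. Per-lane rates are the standard PCIe
# figures; the PCIe 3.0 x8 configuration is hypothetical, not an actual RX 6500 XT SKU.

PER_LANE_GBS = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.969}  # usable GB/s per lane

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Usable one-directional bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBS[gen] * lanes

for card_gen, lanes in [("4.0", 4), ("3.0", 8)]:
    for host_gen in ["4.0", "3.0"]:
        # the link trains at the lower of the card's and the host's generation
        negotiated = min(card_gen, host_gen, key=float)
        bw = link_bandwidth_gbs(negotiated, lanes)
        print(f"Card wired for PCIe {card_gen} x{lanes} in a Gen {host_gen} slot: ~{bw:.1f} GB/s")
```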
It seems PCIe x4 is one step too far for a desktop PC graphics card if you want to deliver good performance without too much compromise. That is probably why NVIDIA retains x16 throughout its lineup: they want to ensure that customers on older platforms will buy their products, too. Given the insane demand for graphics cards today and the limited volumes of 6 nm GPUs coming out of TSMC, the business decision by AMD makes sense, though: they'll still sell everything they make, with better margins.