Friday, January 7th 2022
AMD Radeon RX 6500 XT Limited To PCIe 4.0 x4 Interface
The recently announced AMD Radeon RX 6500 XT only features a PCIe 4.0 x4 interface according to specifications and images of the card published on the ASRock site. This is roughly equivalent in bandwidth to a PCIe 3.0 x8 link or a PCIe 2.0 x16 connection, and is a step down from the Radeon RX 6600 XT, which features a PCIe 4.0 x8 interface, and the Radeon RX 6700 XT with a PCIe 4.0 x16 interface. This detail is only listed by ASRock; AMD, Gigabyte, ASUS, and MSI do not mention the PCIe interface on their respective product pages. The RX 6500 XT also lacks some of the video processing capabilities of other RX 6000 series cards, omitting H.264/HEVC encoding and AV1 decoding.
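For context, that equivalence follows directly from each generation's per-lane transfer rate and line coding. The sketch below is a rough back-of-the-envelope calculation of theoretical per-direction bandwidth, ignoring packet and protocol overhead; the figures are approximations, not vendor specifications.

```python
# Back-of-the-envelope PCIe link bandwidth, per direction,
# ignoring packet/protocol overhead (approximate figures only).
ENCODING = {"2.0": 8 / 10, "3.0": 128 / 130, "4.0": 128 / 130}  # line-coding efficiency
GT_PER_SEC = {"2.0": 5.0, "3.0": 8.0, "4.0": 16.0}              # transfers per second per lane (GT/s)

def link_bandwidth_gbps(gen: str, lanes: int) -> float:
    """Theoretical bandwidth in GB/s for one direction of a PCIe link."""
    return GT_PER_SEC[gen] * ENCODING[gen] / 8 * lanes  # bits -> bytes, scaled by lane count

for gen, lanes in [("4.0", 4), ("3.0", 8), ("2.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_gbps(gen, lanes):.2f} GB/s")
# PCIe 4.0 x4:  ~7.88 GB/s
# PCIe 3.0 x8:  ~7.88 GB/s
# PCIe 2.0 x16: ~8.00 GB/s
```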
Sources:
ASRock (via VideoCardz), 3DCenter
118 Comments on AMD Radeon RX 6500 XT Limited To PCIe 4.0 x4 Interface
RX 6300 is OEM only, PCIe 4.0 x1
Based on TPU's GPU database, and assuming the 6500 XT has roughly the performance of a GTX 980, it could lose up to 14% with PCIe 2.0 and up to 6% with PCIe 3.0: www.techpowerup.com/review/nvidia-gtx-980-pci-express-scaling/21.html
It would be similar to 1Rx16 vs 2Rx8 SDRAM. Most of the time you won't notice the difference, but it's there if you look hard enough, and some architectures are more sensitive to it than others.
Seriously, let's wait until actual benchmarking... Bandwidth equivalent to 3.0 x8 should be plenty for a 1080p card... What would be preferable is if at least one reviewer throws it into an X370/X470 system with PCIe 3.0 and sees if it has any effect... If in the end it keeps cost down and noticeably expands supply, giving up 5% of the performance is a worthwhile trade-off in my opinion.
I think one was Zotac back in the day for their GT 520 or 710.
Hey, at least it's not a 3090 Ti :D
It could most likely do well enough even on a 4.0 x1 link.
(ASRock also confirms this on the spec sheet; could be just an ASRock thing, but unlikely)
For any card, PCI-E is just too slow on the latency side to be useful for rendering assets if the local memory is overwhelmed. You first have to get past the latency of the bus, then the latency of the destination (memory latency if you access main memory, or SSD latency if you go through storage). That might not be a big issue at very low FPS, but by its nature it prevents average to high FPS. We have been hearing since AGP 1x that you can use the bus to access main memory to extend the amount of RAM the GPU has access to, but it has never really been used this way. Well, not for live rendering at least.
But it is being used to load or swap assets more easily.
Infinity Cache will not cache anything that is not in GPU memory, since it's an L3 cache. So it will not be any help for the traffic that goes over the PCI-E bus.
In reality, this card will run games at lower resolutions (less memory needed for the frame buffer), probably not at max details unless the game is very old, and the frame rate will be average.
I think 4 GB is enough for that kind of card, and since it will run at lower FPS with lower details, the bus will probably carry fewer GPU commands. Previous bus-speed tests seem to demonstrate that bus speed only matters at really high FPS. And we all know that if you go beyond the local GPU memory, you will stutter due to the bus latency.
This chip was designed to be a laptop chip running at low power in low to mid-range gaming laptops, paired with an IGP. The IGP would handle all the video decoding functions. In that situation, while even more limited, the smaller bus would also mean less power usage.
But since every GPU produced is sold and the GeForce GT 710 is still sold at $100+ US, AMD saw an opportunity to sell more GPUs. The 4 GB buffer will keep it away from the miners as it's no longer enough to mine. So for low-end gamers, it might not be such a bad deal in the end. But avoid it for a media box if possible; better to just get a good IGP for that purpose.
Will wait for benchmarks and real prices to see if it's a success or not. But I suspect it's going to be a great success... for AMD's finances.
The PCIe 4.0 x4 link is <7.9 GB/s per direction. That's not much for streaming textures.
The PS5 assumes its NVMe drive can stream textures at >5 GB/s (stored as compressed files on the SSD).
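To put that ceiling in per-frame terms, here is a rough sketch. It assumes the theoretical ~7.88 GB/s x4 limit and that the whole link is free for texture streaming, which it never is in practice, so real budgets would be noticeably smaller.

```python
# Rough per-frame streaming budget over a PCIe 4.0 x4 link,
# assuming the theoretical ~7.88 GB/s per-direction limit and no other bus traffic.
LINK_GBPS = 7.88

for fps in (30, 60, 144):
    budget_mb = LINK_GBPS * 1000 / fps  # MB that could cross the bus per frame
    print(f"{fps} fps: ~{budget_mb:.0f} MB per frame")
# 30 fps:  ~263 MB per frame
# 60 fps:  ~131 MB per frame
# 144 fps: ~55 MB per frame
```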
This card is really designed for non-gamers or only simple games (not FPS, not flight sims).
Just enough to tick the feature boxes in the marketing brochure/packaging/advert.
This gives more of the OEMs a lower entry price to advertise. If some users can move away from the larger cards to this one, then the larger cards could have greater availability.
If they had increased the GPU RAM or the PCIe width, then some crypto miners might be tempted; crippled like this, they're not a problem for this card.