Friday, January 7th 2022

AMD Radeon RX 6500 XT Limited To PCIe 4.0 x4 Interface

The recently announced AMD Radeon RX 6500 XT only features a PCIe 4.0 x4 interface, according to specifications and images of the card published on the ASRock site. This is equivalent in bandwidth to a PCIe 3.0 x8 link or a PCIe 2.0 x16 connection, and is a step down from the Radeon RX 6600 XT, which features a PCIe 4.0 x8 interface, and the Radeon RX 6700 XT with its PCIe 4.0 x16 interface. This detail is only specified by ASRock, with AMD, Gigabyte, ASUS, and MSI not mentioning the PCIe interface on their respective pages. The RX 6500 XT also lacks some of the video processing capabilities of other RX 6000 series cards, including H.264/HEVC encoding and AV1 decoding.
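As a rough sanity check on that bandwidth equivalence, here is a minimal Python sketch using the commonly cited theoretical per-lane transfer rates and line-encoding efficiencies; it is an approximation that ignores packet and protocol overhead.

```python
# Theoretical per-direction PCIe link bandwidth from per-lane rate x encoding x lanes.
GT_PER_S = {"2.0": 5.0, "3.0": 8.0, "4.0": 16.0}                # transfers per second per lane
ENCODING = {"2.0": 8 / 10, "3.0": 128 / 130, "4.0": 128 / 130}  # line-code efficiency

def link_gb_per_s(gen: str, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for one direction of a link."""
    return GT_PER_S[gen] * ENCODING[gen] * lanes / 8  # Gb/s -> GB/s

for gen, lanes in [("4.0", 4), ("3.0", 8), ("2.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{link_gb_per_s(gen, lanes):.1f} GB/s")
# PCIe 4.0 x4:  ~7.9 GB/s
# PCIe 3.0 x8:  ~7.9 GB/s
# PCIe 2.0 x16: ~8.0 GB/s
# Note: only four lanes are physically wired, so on a PCIe 3.0 board the card
# negotiates 3.0 x4 (~3.9 GB/s), the scenario debated in the comments below.
```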
Sources: ASRock (via VideoCardz), 3DCenter

118 Comments on AMD Radeon RX 6500 XT Limited To PCIe 4.0 x4 Interface

#1
ShurikN
I get that they're trying to cheap out here and there, but God damn AMD, how low are you gonna go.
RX 6300 OEM only, PCIe 4.0 x1
Posted on Reply
#2
nguyen
6500XT is so bad that it's good, for people desperate enough :D
Posted on Reply
#3
Fouquin
As if it could use the bandwidth even if they wired it up to x16. People just need another thing to complain about.
Posted on Reply
#4
ShurikN
Fouquin: As if it could use the bandwidth even if they wired it up to x16. People just need another thing to complain about.
x16 is definitely not needed, but x4 is just straight up insulting. Put it in a 3.0 system and you've got yourself a quarter of the PCIe bandwidth of an RX 470. A card launched 4 and a half years ago.
Posted on Reply
#5
Tomorrow
Fouquin: As if it could use the bandwidth even if they wired it up to x16. People just need another thing to complain about.
The point is that people who use it on a PCIe 3.0 or perhaps even 2.0 board will also be limited to an x4 link, but with much less bandwidth than 4.0 x4 would provide. Obviously 4.0 x4 is just fine for this card, but it may not be for 3.0 or 2.0 users.

Based on TPU's GPU database and assuming the 6500 XT has roughly the performance of a GTX 980, it could lose up to 14% with 2.0 and up to 6% with 3.0: www.techpowerup.com/review/nvidia-gtx-980-pci-express-scaling/21.html
Posted on Reply
#6
Fourstaff
Going to wait for W1zzard's numbers before passing judgement. I don't think they purposely gimp it unless it doesn't matter anyway.
Posted on Reply
#7
Fouquin
ShurikN: x16 is definitely not needed, but x4 is just straight up insulting. Put it in a 3.0 system and you've got yourself a quarter of the PCIe bandwidth of an RX 470. A card launched 4 and a half years ago.
The bandwidth of the RX 470 isn't relevant. A card of that caliber never saturates x16, and was wired for x16 for the current draw. The 6500XT doesn't need the extra slot current nor bandwidth, and thus isn't wired for it.
Tomorrow: The point is that people who use it on a PCIe 3.0 or perhaps even 2.0 board will also be limited to an x4 link, but with much less bandwidth than 4.0 x4 would provide. Obviously 4.0 x4 is just fine for this card, but it may not be for 3.0 or 2.0 users.
It won't be a problem on 3.0, and god help anyone still on 2.0. You're going to face UEFI issues on most 2.0 platforms before you ever have the opportunity to face bandwidth problems.
Tomorrow: Based on TPU's GPU database and assuming the 6500 XT has roughly the performance of a GTX 980, it could lose up to 14% with 2.0 and up to 6% with 3.0
The 6500 XT also has a large L3 cache (Infinity Cache) like all other desktop RDNA 2 cards, and thus isn't as susceptible to bus bandwidth. AMD's and NVIDIA's cards handle bandwidth differently as well, and are thus not directly comparable.
Posted on Reply
#8
ExcuseMeWtf
It's probably enough bandwidth for that performance anyways, so would be much ado about nothing.
Posted on Reply
#9
Deeveo
I'd be more worried about the cut video processing capabilities than the PCIe lane count. That makes a huge difference for anyone planning to use this in an HTPC environment.
Posted on Reply
#10
ArdWar
While there might be no discernible difference, I won't say there's no difference between x4, x8 or x16, even if the GPU itself can't sustain the full rate. There has to be some difference from, say, the buffer being loaded in 1/n time instead of 2/n time, however minuscule.

It would be similar to 1Rx16 and 2Rx8 SDRAM. Most of the time you won't notice the difference, but it's there if you look hard enough, and some architectures are more sensitive to it than others.
Posted on Reply
#11
AnarchoPrimitiv
The horror... The horror...

Seriously, let's wait until actual benchmarking... Bandwidth equivalent to 3.0 x8 should be plenty for a 1080p card... What would be preferable is if at least one reviewer throws it into an X370/X470 system with PCIe 3.0 and sees if it has any effect... If in the end it keeps cost down and noticeably expands supply, giving up 5% of the performance is a worthwhile trade-off in my opinion
Posted on Reply
#12
b4psm4m
People in the AMD subreddit are complaining because the card also has a small amount of memory (4 GB), meaning it will have to make more calls to system memory than a card with more RAM; therefore, the limit in PCI Express bandwidth will be felt more. They did provide an example of another AMD card (can't remember which) that suffers badly when the PCI Express bandwidth is limited.
Posted on Reply
#13
nguyen
b4psm4m: People in the AMD subreddit are complaining because the card also has a small amount of memory (4 GB), meaning it will have to make more calls to system memory than a card with more RAM; therefore, the limit in PCI Express bandwidth will be felt more. They did provide an example of another AMD card (can't remember which) that suffers badly when the PCI Express bandwidth is limited.
That would be the 5500 XT 4GB with an x8 PCIe link
Posted on Reply
#14
Shou Miko
This could make partners build graphics cards with a physical PCIe x4 or x8 slot. I remember some partners made a lower-end NVIDIA GeForce GPU with a physical PCIe x1 connector on their card.

I think one was Zotac back in the day for their GT 520 or 710.
Posted on Reply
#15
Chaitanya
AnarchoPrimitiv: The horror... The horror...

Seriously, let's wait until actual benchmarking... Bandwidth equivalent to 3.0 x8 should be plenty for a 1080p card... What would be preferable is if at least one reviewer throws it into an X370/X470 system with PCIe 3.0 and sees if it has any effect... If in the end it keeps cost down and noticeably expands supply, giving up 5% of the performance is a worthwhile trade-off in my opinion
Just wait for @W1zzard or Hardware Unboxed to run their usual PCIe scaling tests when the card is released. Also, if the card really is limited to an x4 link, then I hope some AIB will make a card with a physical x4 slot for SFF PCs.
Posted on Reply
#16
b4psm4m
nguyen: That would be the 5500 XT 4GB with an x8 PCIe link
Correct, thanks! So I guess PCIe bandwidth may matter for this card more than others. Any way you look at it, it's not worth the asking price. It also has the same maximum theoretical TFLOPS (5.8) as the RX 480, which was also a $200 card on release about 5 years ago, so things have gone nowhere in 5 years? I'm an AMD fan (mainly CPU), but it's hard to see how AMD isn't ripping people off here. All I can think of is that the margins on previous GPUs were so low that it was barely worth it for them.
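For what it's worth, the TFLOPS comparison above can be reproduced from the commonly listed shader counts and boost clocks; this is a rough sketch, not a benchmark, and peak FP32 says nothing about architecture, sustained clocks, or memory bandwidth.

```python
# Peak FP32 = 2 FLOPs per shader per clock (one FMA) x shader count x clock.
# Shader counts and boost clocks below are the commonly listed spec-sheet values.
def fp32_tflops(shaders: int, boost_mhz: float) -> float:
    return 2 * shaders * boost_mhz * 1e6 / 1e12

print(f"RX 6500 XT: {fp32_tflops(1024, 2815):.2f} TFLOPS")  # ~5.77
print(f"RX 480:     {fp32_tflops(2304, 1266):.2f} TFLOPS")  # ~5.83
# Similar peak numbers, but RDNA 2 vs. GCN 4 efficiency, real sustained clocks,
# and the Infinity Cache are not captured by TFLOPS alone.
```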
Posted on Reply
#17
trsttte
b4psm4m: Correct, thanks! So I guess PCIe bandwidth may matter for this card more than others. Any way you look at it, it's not worth the asking price. It also has the same maximum theoretical TFLOPS (5.8) as the RX 480, which was also a $200 card on release about 5 years ago, so things have gone nowhere in 5 years? I'm an AMD fan (mainly CPU), but it's hard to see how AMD isn't ripping people off here. All I can think of is that the margins on previous GPUs were so low that it was barely worth it for them.
It's bad, but it could be a lot worse, is my take. Prices on memory and even raw materials are currently high, but they are also clearly taking advantage of the situation, like they did with the 5700G/5600G/5300G, which are still at around the same prices (the 5300G is even an OEM exclusive).

Hey at least it's not a 3090 tie :D
Posted on Reply
#18
Shou Miko
trsttte: It's bad, but it could be a lot worse, is my take. Prices on memory and even raw materials are currently high, but they are also clearly taking advantage of the situation, like they did with the 5700G/5600G/5300G, which are still at around the same prices (the 5300G is even an OEM exclusive).

Hey at least it's not a 3090 tie :D
It will be the most expensive "tie" you've probably ever purchased that you cannot wear, and it needs other parts to function :roll:
Posted on Reply
#19
Dr. Dro
ShurikN: x16 is definitely not needed, but x4 is just straight up insulting. Put it in a 3.0 system and you've got yourself a quarter of the PCIe bandwidth of an RX 470. A card launched 4 and a half years ago.
Sure, but this is a 64-bit bus GPU that even with the fastest memory around will barely crack the 128 GB/s mark. That's about as much bandwidth as the HD 5870 had 13 years ago. Assuming 18 Gbps memory that would amount to 144 GB/s, still slower in raw bandwidth compared to what the GTX 480 had 12 years ago (~177 GB/s).

It most likely could do okay enough on a 4.0 x1 link.
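A quick back-of-the-envelope check of those GDDR6 numbers, a minimal sketch assuming the 64-bit bus discussed above and commonly listed per-pin data rates:

```python
# GDDR6 bandwidth = effective data rate (Gbps per pin) x bus width (bits) / 8.
def vram_gb_per_s(data_rate_gbps: float, bus_bits: int) -> float:
    return data_rate_gbps * bus_bits / 8

print(vram_gb_per_s(16, 64))   # 128.0 GB/s - 16 Gbps GDDR6 on a 64-bit bus
print(vram_gb_per_s(18, 64))   # 144.0 GB/s - the 18 Gbps case mentioned above
# Commonly listed comparisons: HD 5870 ~153.6 GB/s (256-bit GDDR5),
# GTX 480 ~177.4 GB/s (384-bit GDDR5).
```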
Posted on Reply
#20
Blaazen
Missing AV1 decoding is the real issue (and surprise).
Posted on Reply
#21
TheDeeGee
All connections look the same in the picture, how does one even see what is 4x, 8x and 16x?
Posted on Reply
#22
trsttte
TheDeeGee: All connections look the same in the picture, how does one even see what is 4x, 8x and 16x?
Look closer and notice the lack of PCB traces along the entire connector on the 6500 XT, their presence up to the middle of the connector on the 6600 XT, and along the entire connector on the 6700 XT.

(ASRock also confirms this on the spec sheet; it could be just an ASRock thing, but that's unlikely)
Posted on Reply
#23
Punkenjoy
b4psm4m: People in the AMD subreddit are complaining because the card also has a small amount of memory (4 GB), meaning it will have to make more calls to system memory than a card with more RAM; therefore, the limit in PCI Express bandwidth will be felt more. They did provide an example of another AMD card (can't remember which) that suffers badly when the PCI Express bandwidth is limited.
PCIe traffic is almost entirely GPU commands from the processor, asset exchange with the processor, and asset loading/swapping.

For any card, PCIe is just too slow on the latency side to be useful for rendering assets when local memory is overwhelmed. You first have to go through the latency of the bus, then the latency of the destination (memory latency if you access main memory, or SSD latency if you go through storage). That might not be a big issue at very low FPS, but by its nature it prevents average to high FPS. We have been hearing since AGP 1x that you can use the bus to access main memory and extend the amount of RAM the GPU has access to, but it has never really been used that way. Well, not for live rendering at least.

But it is used to load or swap assets more easily.

Infinity Cache will not cache anything that isn't already in GPU memory, since it's an L3 cache. So it will not be any help for the traffic that goes over the PCIe bus.

In reality, this card will run games at lower resolutions (less memory needed for the frame buffer), will probably not run games at max details unless they are very old, and the frame rate will be average.

I think 4 GB for that kind of card is enough, and since it will run at lower FPS with lower details, the bus will probably see a lower number of GPU commands. Previous bus-speed tests seem to demonstrate that bus speed only matters at really high FPS. And we all know that if you go beyond local GPU memory, you will stutter due to bus latency.

This chip was designed to be a laptop chip running at low power in low to mid-range gaming laptops paired with an IGP. The IGP would handle all the video decoding functions. In that situation, while even more limited, the smaller bus would also mean less power usage.

But since every GPU produced is sold and the GeForce GT 710 is still sold at $100+, AMD saw an opportunity to sell more GPUs. The 4 GB buffer will keep it away from the miners, as it's no longer enough to mine with. So for low-end gamers, it might not be such a bad deal in the end. But avoid it for a media box if possible; better to just get a good IGP for that purpose.

Will wait for benchmarks and real prices to see if it's a success or not. But I suspect it's going to be a great success... for AMD's finances.
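To put the spill-over concern in rough numbers, here is a minimal sketch comparing hypothetical asset-swap sizes (made-up illustrative values) against a 60 FPS frame budget, using the theoretical link rates discussed earlier in the thread:

```python
# Time to move a chunk of assets across the PCIe link vs. a 60 FPS frame budget.
FRAME_BUDGET_MS = 1000 / 60  # ~16.7 ms per frame at 60 FPS

def transfer_ms(megabytes: float, link_gb_per_s: float) -> float:
    return megabytes / 1024 / link_gb_per_s * 1000

for link, gbps in [("PCIe 4.0 x4", 7.9), ("PCIe 3.0 x4", 3.9)]:
    for mb in (64, 256):
        t = transfer_ms(mb, gbps)
        print(f"{link}: {mb} MB swap ~{t:.1f} ms "
              f"({t / FRAME_BUDGET_MS:.1f} frame budgets)")
# Even a couple hundred MB of spill-over costs several frames' worth of time on
# a 3.0 x4 link, before any of the latency overheads described above; hence the
# stutter once the 4 GB buffer overflows.
```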
Posted on Reply
#24
Mussels
Freshwater Moderator
If it's OEM only, and only for systems with PCIe 4.0 out of the box, that's more than enough bandwidth.
Posted on Reply
#25
tygrus
So instead of PCIe 4.0 x16 allowing near-RAM-speed (~30 GB/s) access to system RAM (RAM is now >45 GB/s),
the PCIe 4.0 x4 link is <7.9 GB/s per direction. That's not much for streaming textures.
The PS5 assumes NVMe can stream textures at >5 GB/s (saved as compressed files on the SSD).
This card is really designed for non-gamers or only simple game animation (not FPS games, not flight sims).
Just enough to tick enough of the feature boxes in the marketing brochure/packaging/advert.

This gives more of the OEMs a lower entry price to advertise. If some users can move away from the larger cards to use this, then larger cards could have greater availability.
If they increased the GPU RAM or the PCIe width, then some crypto miners might be tempted; crippled like this, miners are not a problem for this card.
Posted on Reply