# AMD Radeon RX 6600 XT PCI-Express Scaling



## W1zzard (Aug 23, 2021)

When the Radeon RX 6600 XT launched with an interface limited to PCI-Express 4.0 x8, lots of discussion emerged about how AMD crippled the bandwidth, and how much it affects the gaming experience. In this article, we're taking a close look at exactly that, comparing 22 titles running at PCIe 4.0, 3.0, 2.0, and even 1.1. Frametimes are included, too.



----------



## _Flare (Aug 23, 2021)

Thank you very much for your article, well done.
It doesn't struggle as much as the 4 GB versions of the 5500 (XT), right?
Infinity Cache helping a lot, but leaving those exotic corner cases, as you mentioned?


----------



## Oberon (Aug 23, 2021)

_Flare said:


> Thank you very much for your article, well done.
> It doesn't struggle as much as the 4 GB versions of the 5500 (XT), right?
> Infinity Cache helping a lot, but leaving those exotic corner cases, as you mentioned?


Infinity Cache doesn't have anything to do with PCIe bandwidth bottlenecks; it's just helping alleviate memory bandwidth pressure due to the smaller memory bus. The 6600 XT doesn't run into the same performance issues as the 5500 XT because it's not running out of VRAM and trying to shuttle data back and forth from main memory.


----------



## Selaya (Aug 23, 2021)

> Probably the most important question for many of you is "How much performance is lost due to AMD cutting the PCIe interface in half, down to x8 from x16?"


Given this previous data:





I am pretty confident that it is safe to assume that the delta would be _nil_. (As PCIe 4.0 x8 = PCIe 3.0 x16.)
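For reference, here's a back-of-the-envelope sketch of the per-generation numbers behind that equivalence (approximate one-way figures, counting only the line-code overhead, so real-world throughput will be a bit lower):

```python
# Rough one-way PCIe bandwidth: raw rate (GT/s) x line-code efficiency / 8 bits x lanes.
# Approximate figures; protocol overhead beyond the line encoding is ignored.
GEN = {
    "1.1": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
    "4.0": (16.0, 128 / 130),
}

def bandwidth_gbs(gen: str, lanes: int) -> float:
    """Approximate one-way link bandwidth in GB/s."""
    rate, eff = GEN[gen]
    return rate * eff / 8 * lanes

for gen, lanes in [("4.0", 8), ("3.0", 16), ("3.0", 8), ("2.0", 8), ("1.1", 8)]:
    print(f"PCIe {gen} x{lanes}: {bandwidth_gbs(gen, lanes):5.1f} GB/s")
```

4.0 x8 and 3.0 x16 come out identical (~15.8 GB/s), which is exactly why I'd expect the delta to be nil.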

Also, small other thing (unrelated) that's been annoying me for awhile now: Currently, GPU reviews are structured like this:
Introduction > Individual Gamespam > Performance Summary > Conclusion.
Given how many games are tested, I'd like to suggest this structure instead:
Introduction > Performance Summary > Conclusion > Appendices for the Individual Gamespam
The individual games are of course relevant and should be published, but it's kinda _really annoying_ to click through them just to reach the Performance Summary and Conclusion pages. For the vast majority of users, that's what they're after, since the grand total is reasonably representative (plus perhaps the few individual titles they themselves play). The individual games should instead be relegated to the appendices, and then each linked individually in a list there, like this:


> * Game A
> * Game B
> * Game C
> et cetera


----------



## Mysteoa (Aug 23, 2021)

Selaya said:


> but it's kinda _really annoying_ to click through them just to reach the Performance Summary and Conclusion pages


There is a drop-down menu where you can select what page you want.


----------



## badmau5 (Aug 23, 2021)

It would be interesting to understand why Techspot/HWUnbox's Doom testing had such a big FPS gap between 4.0 and 3.0. Early drivers, maybe?


----------



## london (Aug 23, 2021)

another SAD attempt to push this  utter crap GPU


----------



## Outback Bronze (Aug 23, 2021)

Hey W1z great review as usual!

Any chance of a 6800/6900 XT scaling review?
I see the latest AMD card tested was the 5700 XT, and from that review there didn't seem to be much variance in the PCI-E scaling compared to, say, the RTX 2080 Ti & 3080. It would be interesting to see what the latest high-end AMD cards deliver. I fully understand if this can't be achieved.
Also, on the opening page you have "RTX 980," which I'm pretty sure should be GTX 980 : )

Thanks.


----------



## Chrispy_ (Aug 23, 2021)

Cool, so at this class of card more bandwidth is always better, but we're getting real close to diminishing returns at 8 GB/s (so PCIe 3.0 x8 and PCIe 4.0 x4), and outside of a few games even PCIe 3.0 x4 isn't leaving too much on the table.



london said:


> another SAD attempt to push this  utter crap GPU


Seems like a pretty decent GPU to be honest - cheap to make, power-efficient, and capable well into 1440p for AAA gaming. 
If you're moaning about the price, join the queue. In 2019 this class of card would have been a $299 product, sure, but _this isn't 2019._


----------



## Selaya (Aug 24, 2021)

Mysteoa said:


> There is a drop-down menu where you can select what page you want.


Oh, interesting. I guess that's one of the things that NoScript/uMatrix stops.


----------



## watzupken (Aug 24, 2021)

Price aside, I think the card is decent in performance. However, what I don't like is that for a card meant for "budget" gamers, who are mostly on PCI-E 3.0, the fact that you may not be able to get the most out of the GPU is quite annoying, even if it's not a common issue. I wonder if the main driver for AMD to cut down the number of PCI-E lanes is cost and power savings.


----------



## Mussels (Aug 24, 2021)

First thought before reading anything:

STICK IT IN A 1x SLOT

Edit: wow, the loss is actually quite small. <20% on the really outdated 1.1 x8 is impressive, and the 2.0 losses are barely noticeable in general use.



london said:


> another SAD attempt to push this  utter crap GPU


How is it crap? It's a great 1080p/1440p budget card, and the prices slaughter Nvidia in many regions.


----------



## Nordic (Aug 24, 2021)

I had to go look at Hitman's results to see the worst case scenario. That is pretty severe in my opinion. @W1zzard how much more FPS would you estimate Hitman 3 would have if it was PCIE 4 x16? I am wondering if the worst case scenario would still be bottlenecked by full PCIE 4.


----------



## nguyen (Aug 24, 2021)

no test done with RT enabled? I guess 6600XT is a "1080p non-RT GPU" then


----------



## RedBear (Aug 24, 2021)

Mussels said:


> How is it crap? It's a great 1080p/1440p budget card, and the prices slaughter Nvidia in many regions.


The prices slaughter Nvidia probably because demand is relatively weak; ergo, the market considers it relatively crappier. Performance is solid at 1080p, but 1440p is a stretch, and even AMD knows it and appropriately markets it as a 1080p card (Cyberpunk particularly stands out at less than 40 fps, but even another relatively recent title like Valhalla can't hit 60 fps). In addition, a relatively worse video encoder (I understand that streaming is very popular among 1080p esports players) and purely theoretical raytracing capabilities make for an overall worse card compared to a 3060 Ti, and even the less performant 3060 can be a more attractive option for some folks. Of course it's a decent option if one is _desperate_ for a replacement, for instance because the old GPU has bought the farm, and can't afford to spend more money, but I wouldn't really consider it otherwise.


----------



## INSTG8R (Aug 24, 2021)

nguyen said:


> no test done with RT enabled? I guess 6600XT is a "1080p non-RT GPU" then


Well, considering the RT performance of the 3060, I think it's pretty pointless at this level…


----------



## Alpha_Lyrae (Aug 24, 2021)

Interesting. Some game engines are definitely streaming in assets on the fly (from CPU/RAM) via PCIe for the GPU to render or display.

PCIe bandwidth has never really been a huge issue. You can get around it by copying data to GPU VRAM during level load. Seamless open-world games don't really have this opportunity, though, especially as you get farther away from the preloaded/cached area; data has to be streamed in. Optimizing texture quad sizes for stream-in can help maximize available PCIe bandwidth without bogging down or hitching. Most engine designers probably expect a minimum of PCIe 3.0 these days. That shows in the small difference between 3.0/4.0, but much larger discrepancies at 2.0/1.1.


----------



## W1zzard (Aug 24, 2021)

Nordic said:


> how much more FPS would you estimate Hitman 3 would have if it was PCIE 4 x16?


Not a lot. If you look at any of our RX 6600 XT reviews and check Hitman 3 at 1080p, you'll see the 6600 XT isn't way down relative to the other cards. Maybe a few percent, if even that much.



nguyen said:


> no test done with RT enabled? I guess 6600XT is a "1080p non-RT GPU" then


6600 XT is not usable with RT at 1080p imo and it's not worth sacrificing other settings just for RT



Alpha_Lyrae said:


> Most engine designers probably expect a minimum of PCIe 3.0 these days.


I'd say most engine designers expect a console plus some slack and are done at this point; others do test on PC.



badmau5 said:


> It would be interesting to understand why Techspot/HWUnbox's Doom testing had such a big FPS gap between 4.0 and 3.0. Early drivers, maybe?


Could be the test scene. Did they specifically search for a worst case, or is that their typical standard test scene?


----------



## btk2k2 (Aug 24, 2021)

Great testing. I'd be interested in whether PCIe 4.0 x4 gets the same results as PCIe 3.0 x8. I expect it would, but you never know.


----------



## Charcharo (Aug 24, 2021)

badmau5 said:


> It would be interesting to understand why Techspot/HWUnbox's Doom testing had such a big FPS gap between 4.0 and 3.0. Early drivers, maybe?



It depends on where you test. For example, I still don't understand how W1zzard doesn't see regressions in DOOM Eternal with 8 GB VRAM cards. My RTX 2080 and RX 5700 XT both had issues at 4K, and even some (though much fewer) at 1440p, due to VRAM amounts.

So apparently W1zzard is testing in an area that is just much, MUCH lighter on memory, HU tests in an area that is much harder on memory, and I test the DLC areas, which are huge and even more demanding on VRAM (also actually fun to play, unlike the easy initial 3 levels). Also, I play for at least 5 minutes, while a short benchmark run may not be nearly as harsh on VRAM by design.

So that really is it. IMHO the biggest mistake ever was doing Witcher 3 CPU tests in White Orchard's forest area... that is like doing a CPU test while staring at the sky in GMOD.


----------



## nguyen (Aug 24, 2021)

W1zzard said:


> 6600 XT is not usable with RT at 1080p imo and it's not worth sacrificing other settings just for RT
> 
> Could be the test scene. Did they specifically search for a worst case, or is that their typical standard test scene?



Ultra vs. High settings is just as arbitrary as RT on/off, though I bet more people could notice RT (reflections) than High vs. Ultra settings during gameplay.

Could you include Doom Eternal in the RT benchmark suite? The game gets insanely high FPS and is already in the benchmark suite anyway. BTW, not only HUB is seeing a big perf drop-off in Doom Eternal with PCIe Gen 3; every other 6600 XT review I have seen says the same thing. Did you disable the Resolution Scaling option in Doom Eternal?


----------



## W1zzard (Aug 24, 2021)

nguyen said:


> Could you include Doom Eternal into the RT benchmark suite?


That's the plan for next full rebench



nguyen said:


> did you disable Resolution Scaling option in Doom Eternal?


disable? you mean enable?


----------



## nguyen (Aug 24, 2021)

W1zzard said:


> disable? you mean enable?



I just checked in-game settings and the option is called Resolution Scaling Mode, which should be set to OFF instead of Dynamic to have a fair comparison between GPUs


----------



## Vayra86 (Aug 24, 2021)

Mussels said:


> yeah but we cant ban people just for having bad opinions, at least anyone seeing his post has a good rebuttal and knows he's incorrect



Bans are not the answer. What the new cultures on the internet need is a straight-up reminder that their opinion is irrelevant and facts, only facts, are relevant.

Every time, in their face, they need to be told they're wrong and idiots for sticking with nonsense (perhaps in a nicer way). Keep bashing it in until there is no escape. That goes for all those new alternative facts we see. And if the facts are wrong, present a fact that proves it. That's how you get proper discussions.


----------



## Viilutaja (Aug 24, 2021)

This is a great eGPU card! Those setups are usually constrained to low PCI-E link speeds because of the adapter.


----------



## Vayra86 (Aug 24, 2021)

W1zzard said:


> disable? you mean enable?


This is a pretty interesting thing to know. Resolution scaling can easily mitigate the bandwidth issue in that game... And if it does, the conclusion of the article might not be as accurate as it looks.


----------



## W1zzard (Aug 24, 2021)

Vayra86 said:


> This is a pretty interesting thing to know. Resolution scaling can easily mitigate the bandwidth issue in that game... And if it does, the conclusion of the article might not be as accurate as it looks.





nguyen said:


> I just checked in-game settings and the option is called Resolution Scaling Mode, which should be set to OFF instead of Dynamic to have a fair comparison between GPUs


I used "Off" of course, otherwise results are useless


----------



## Amaregaz (Aug 24, 2021)

Can you test Horizon Zero Dawn as well? That game seems to like bandwidth.


----------



## Chrispy_ (Aug 24, 2021)

Nordic said:


> I had to go look at Hitman's results to see the worst case scenario. That is pretty severe in my opinion. @W1zzard how much more FPS would you estimate Hitman 3 would have if it was PCIE 4 x16? I am wondering if the worst case scenario would still be bottlenecked by full PCIE 4.


The rebooted Hitman franchise has been a stuttery, poorly-optimised anomaly for lots of reviewers over the years, both in terms of messy frametime plots (useless 99th-percentile scoring) and odd engine limits that get in the way of both CPU and GPU scaling.

Whilst the PCIe scaling does clearly show that it needs a lot of bandwidth, I wouldn't treat this as representative of other games on the market. It's just an edge-case curiosity that shows there are more than zero situations where running at PCIe 3.0 x8 might be sub-optimal.


----------



## london (Aug 24, 2021)

IF you like your  GPUS with SERIOUS native bottlenecks,   LESS -everything-   LESS lanes,  memory,  bandwidth. CERO RT performance,  etc... by all means  BUY IT!!  Me?  HARD PASS AMD !  WAY TO MANY CUTS!  This  thing has short legs!  (wait  for the real next gen Hitman... this will DIE!)    CERO VALUE AND NO REAL MARKET.    

I think  the RADEON division is FLOPPING HARD,  THEY ARE 100% CLUELESS.  AMD took a nice little  GPU perfect for HTPC  and casual gaming at 75w  and $$$$ to hell  INTO  THIS MEGA  JOKE.

WHY??? simple,   at this level of price  and power consumption? there are MUCH BETTER OPTIONS.   Like a 3060TI that CRUSHES THIS    ( btw for all the complainers. SORRY this is not my native language, dont expect fluent or persuasive speaking from me )


----------



## Aretak (Aug 24, 2021)

london said:


> IF you like your  GPUS with SERIOUS native bottlenecks,   LESS -everything-   LESS lanes,  memory,  bandwidth. CERO RT performance,  etc... by all means  BUY IT!!  Me?  HARD PASS AMD !  WAY TO MANY CUTS!  This  thing has short legs!  (wait  for the real next gen Hitman... this will DIE!)    CERO VALUE AND NO REAL MARKET.
> 
> I think  the RADEON division is FLOPPING HARD,  THEY ARE 100% CLUELESS.  AMD took a nice little  GPU perfect for HTPC  and casual gaming at 75w  and $$$$ to hell  INTO  THIS MEGA  JOKE.
> 
> WHY??? simple,   at this level of price  and power consumption? there are MUCH BETTER OPTIONS.   Like a 3060TI that CRUSHES THIS    ( btw for all the complainers. SORRY this is not my native language, dont expect fluent or persuasive speaking from me )


Not TYPING in random caps LOCK might make you come across as slightly less DeRaNgEd no matter what YOUR NAtive language is.

Although I'm not sure about that when you're suggesting it's possible to buy a 3060 Ti for anything like the same price as one of these, unless you happen to be A) in a country that has FE drops and B) online in the 30 seconds per month that they're available.


----------



## london (Aug 24, 2021)

btw... why bother testing in a PERFECT VACUUM??? The Ryzen 7 5800X @ 4.8 GHz IS DOING ALL THE HEAVY LIFTING HERE. THATS CHEATING... Try using a _Ryzen 5 1600_ and see how that goes.... these are the CPUs most folks that game at 1080p use IN THE REAL WORLD. AT WHAT MARKET IS THIS AIMED, AMD? IT DOES NOT EXIST. ( btw i have a 5600x and 3600, STILL I will not touch this, don't expect years of badass PERFORMANCE from this crap)


----------



## sparkyar (Aug 24, 2021)

Today I learned the word "onus" (load, burden, charge). Thanks for the article. #nosarcasm


----------



## Xuper (Aug 24, 2021)

london said:


> another SAD attempt to push this  utter crap GPU


You know, this GPU with a lesser spec is faster than the 5700 XT and equal to the RTX 2080! That's crazy.


----------



## 95Viper (Aug 24, 2021)

Stay on topic.
Read the guidelines/rules before posting.

Here is a sampling:


> All posts and private messages have a "report post" button on the bottom of the post, click it when you feel something is inappropriate. Do not use your report as a "wild card invitation" to go back and add to the drama and therefore become part of the problem.





> If you disagree with moderator actions contact them via PM, if you can't solve the issue with the moderator in question contact a super moderator.
> Under no circumstances should you start public drama.



Thank You and Have a Good (On-Topic) Discussion


----------



## W1zzard (Aug 24, 2021)

Amaregaz said:


> Can you test Horizon Zero Dawn as well? That game seems to like bandwidth.


As mentioned in the conclusion Death Stranding uses the same engine as HZD and is affected by PCIe bandwidth limitations, too. Given the limited popularity of those two games, I have no plans to bench two games using Decima Engine, which would be almost 10% of the games test group


----------



## INSTG8R (Aug 24, 2021)

W1zzard said:


> As mentioned in the conclusion Death Stranding uses the same engine as HZD and is affected by PCIe bandwidth limitations, too. Given the limited popularity of those two games, I have no plans to bench two games using Decima Engine, which would be almost 10% of the games test group


And Death Stranding is a WAY better implementation of the same engine. HZD still has overall performance issues


----------



## Chrispy_ (Aug 24, 2021)

INSTG8R said:


> And Death Stranding is a WAY better implementation of the same engine. HZD still has overall performance issues


This. Death Stranding was designed by Kojima Studios from the ground up for an eventual cross-platform release and Kojima Studios also handled the PC version.

HZD was designed as a PS4 exclusive, with zero consideration given to PC compatibility, and the PC port was outsourced to a third party (Virtuous Studios) who had no affiliation with the original developer. Even now they are still patching bugs in the PC port that are nothing to do with the original PS4 version and exist solely as a result of the third party learning from their mistakes as they go along. Rather than thinking of HZD PC as a PC version of the cross-platform game, imagine that a newbie developer was given the HZD PS4 assets and told to create a new game from scratch that looks like a copy of the PS4 version.


----------



## INSTG8R (Aug 24, 2021)

Chrispy_ said:


> This. Death Stranding was designed by Kojima Studios from the ground up for an eventual cross-platform release and Kojima Studios also handled the PC version.
> 
> HZD was designed as a PS4 exclusive, with zero consideration given to PC compatibility and the PC port was outsourced to a third party (Virtuous Studios) who had no affiliation with the original developer; Even now they are still patching bugs in the PC port that are nothing to do with the original PS4 version and solely as a result of the third party learning from their mistakes as they go along.


Yeah it’s really night and day usage of the same engine tho DS is using a later version as I understand it but both games are pretty equal as far visuals, open world etc. but you would never think they were both the same engine..


----------



## Valantar (Aug 24, 2021)

watzupken said:


> Price aside, I think the card is decent in performance. However, what I don't like is that for a card meant for "budget" gamers, who are mostly on PCI-E 3.0, the fact that you may not be able to get the most out of the GPU is quite annoying, even if it's not a common issue. I wonder if the main driver for AMD to cut down the number of PCI-E lanes is cost and power savings.


This is such a weird take, and makes me wonder if you read the article at all. "The fact that you may not be able to get the most out of the GPU" - how does that align with a 1-2% average performance drop on PCIe 3.0? Yes, there are outliers that are worse than that, as there always will be. But they are highly specific outliers. The overall result from testing this is that you _will_ get a level of performance not perceptibly different from the full 4.0 speed. That is what the conclusion says. Besides, if you're on a PCIe 3.0 platform, chances are you'll be more held back by whatever CPU you are using on that platform than by the PCIe bandwidth. (Unless, that is, you're using a 9900K, 10700K or similar with a new midrange GPU for some reason.)


----------



## Selaya (Aug 24, 2021)

Tbf I feel like AMD cheaping out on the lanes (while understandable) is, like, _really cheap_ for a card of this class (midrange / entry-level midrange). If this were something around a 1650 (i.e., a 6500 or something), or even more budget, I could totally understand it, but given how Nvidia quite consistently gives _all_ their cards down to the x50 series an x16 link (not that the bottommost would benefit, but that's quite beside the point here), I cannot completely shake off the feeling that AMD is cheaping out on us here. Given their track record of being _the budget vendor_, that's not the smartest move they could've pulled imho.


----------



## Chrispy_ (Aug 24, 2021)

Valantar said:


> This is such a weird take, and makes me wonder if you read the article at all. "The fact that you may not be able to get the most out of the GPU" - how does that align with a 1-2% average performance drop on PCIe 3.0? Yes, there are outliers that are worse than that, as there always will be. But they are highly specific outliers. The overall result from testing this is that you _will_ get a level of performance not perceptibly different from the full 4.0 speed. That is what the conclusion says. Besides, if you're on a PCIe 3.0 platform, chances are you'll be more held back by whatever CPU you are using on that platform than by the PCIe bandwidth. (Unless, that is, you're using a 9900K, 10700K or similar with a new midrange GPU for some reason.)


The thing is, most people dropping $600+ on a scalped/marked-up GPU will not be using an ancient motherboard. B550/X570/Z490/Z590 all have PCIe 4.0 anyway.

The 1-2% performance loss (for the most part) on PCIe 3.0 x8 is negligible if it's going to be held back even more than that by an old AMD 2600X or Skylake quad-core, for example. Like you say, who would have spent big bucks on a 9900K only to then pair it up with a crap GPU that's already in need of an upgrade?


----------



## TheinsanegamerN (Aug 24, 2021)

Selaya said:


> Tbf I feel like AMD cheaping out on the lanes (while understandable) is, like, _really cheap_ for a card of this class (midrange / entry-level midrange). If this were something around a 1650 (i.e., a 6500 or something), or even more budget, I could totally understand it, but given how Nvidia quite consistently gives _all_ their cards down to the x50 series an x16 link (not that the bottommost would benefit, but that's quite beside the point here), I cannot completely shake off the feeling that AMD is cheaping out on us here. Given their track record of being _the budget vendor_, that's not the smartest move they could've pulled imho.


Frankly this is what AMD does when they catch up, they immediately kneecap themselves. It's not the first (or third) time in recent history they've done this.


----------



## INSTG8R (Aug 24, 2021)

TheinsanegamerN said:


> Frankly this is what AMD does when they catch up, they immediately kneecap themselves. It's not the first (or third) time in recent history they've done this.


Where anywhere in the review, outside of obvious outliers, was it kneecapped, while still beating the 3060 and its x16 "advantage"?


----------



## TheinsanegamerN (Aug 24, 2021)

INSTG8R said:


> Where anywhere in the review, outside of obvious outliers, was it kneecapped, while still beating the 3060 and its x16 "advantage"?


"Where in this review, outside of cases where it matters, can you find examples of it mattering?"

Well, if you're going to immediately throw out evidence you don't like, this conversation will go nowhere.


----------



## Valantar (Aug 24, 2021)

TheinsanegamerN said:


> Frankly this is what AMD does when they catch up, they immediately kneecap themselves. It's not the first (or third) time in recent history they've done this.


... again, how are they kneecapping themselves? There is no notable performance limitation here. There is a _spec_ deficit with _no real-world consequences worthy of note_. If that amounts to "kneecapping themselves", then you have some rather absurd standards. Or are all GPUs without HBM or a 512-bit memory bus also kneecapped? Yes, there are a couple of outliers. One has a ~7% deficit, the other has a ~15% one. The former is a console port running in an engine primarily developed for consoles and well known for porting issues. The other is a game notorious for buggy performance. If your favourite game genre is "buggy ports", then yes, these are highly relevant. If not, then no, they aren't. They are outliers, and while absolutely true and likely representative of their respective games, they aren't representative of modern games overall - the rest of the tested field demonstrates that. Remember, that 1-2% overall deficit _includes_ those outliers.


Chrispy_ said:


> The thing is, most people dropping $600+ on a scalped/marked-up GPU will not be using an ancient motherboard. B550/X570/Z490/Z590 all have PCIe 4.0 anyway.
> 
> The 1-2% performance loss (for the most part) on PCIe 3.0 x8 is negligible if it's going to be held back even more than that by an old AMD 2600X or Skylake quad-core, for example. Like you say, who would have spent big bucks on a 9900K only to then pair it up with a crap GPU that's already in need of an upgrade?


Exactly. If I were to buy one of these and stick it into my travel PC (an old and heavily modified Optiplex 990 SFF) with its i5-2400 and PCIe 2.0, the PCIe 2.0 _really_ isn't what would be holding me back. That would be the CPU.


----------



## Recus (Aug 24, 2021)

Mussels said:


> First thought before reading anything:
> 
> STICK IT IN A 1x SLOT
> 
> ...



And a $700 "budget" card slaughters customers.



http://imgur.com/a/HzQPR6z


In that store, the 3060 and 3060 Ti have the same prices.


----------



## TheinsanegamerN (Aug 24, 2021)

Valantar said:


> ... again, how are they kneecapping themselves? There is no notable performance limitation here.
> If that amounts to "kneecapping themselves", then you have some rather absurd standards. Or are all GPUs without HBM or a 512-bit memory bus also kneecapped?


Now there's a strawman argument. Where did I say any of that? I didn't. The only thing I said was that AMD has a habit of kneecapping themselves when they start catching Nvidia: rebranding cards, too little memory (4 GB 580), or an x8 bus that impacts performance in some games (6600 XT; the 5500 XT was hit by BOTH of these issues).



Valantar said:


> There is a _spec_ deficit with _no real-world consequences worthy of note_.


You know, outside of software that did show a performance difference. Of course:



Valantar said:


> *Yes, there are a couple of outliers. One has a ~7% deficit, the other has a ~15% one.*


So no real-world consequences. Outside of real-world consequences, but who counts those?


Valantar said:


> The former is a console port running in an engine primarily developed for consoles and well known for porting issues. The other is a game notorious for buggy performance. If your favourite game genre is "buggy ports", then yes, these are highly relevant. If not, then no, they aren't. They are outliers, and while absolutely true and likely representative of their respective games, they aren't representative of modern games overall - the rest of the tested field demonstrates that. Remember, that 1-2% overall deficit _includes_ those outliers.


Right, so any time performance doesn't line up with expectations, there are excuses. Using an x16 bus like Nvidia would fix that problem, but the GPU isn't gimped. Everyone knows that buggy console-port games NEVER sell well or are popular, ever. Right?

If you have to come up with excuses for why examples of an x8 bus hurting performance don't actually matter, you've answered your own question. You've constructed your own argument here that you can never lose, because you immediately discredit anything that goes against your narrative. I don't know what it is about the modern internet where any criticism of a product has to be handwaved away. The 6600 XT is already a gargantuan waste of money; why defend AMD further screwing with it by using an x8 bus, something Nvidia would get raked over the coals for doing?


----------



## Jism (Aug 24, 2021)

Lol, all these threads on the net about how AMD kneecapped its users by providing a PCI-E x8 type of card.

It just depends on the use case, but overall it's still within spec, and twice as fast as a Polaris. @W1zzard > how does PCI-E bus overclocking affect performance with such cards? You could use an older board without an NVMe setup and push the PCI-E bus to 112 MHz or so. Should be perfectly possible.
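Rough sketch of what a 112 MHz bus could buy, assuming the link bandwidth simply scales linearly with the reference clock (nominally 100 MHz) and the card stays stable at that clock:

```python
# PCIe link speed is derived from the 100 MHz reference clock, so (assuming the
# link trains at the same rate) bandwidth scales linearly with the bus clock.
def overclocked_bandwidth_gbs(base_gbs: float, bclk_mhz: float) -> float:
    """Scale a baseline link bandwidth (GB/s) by an overclocked reference clock."""
    return base_gbs * bclk_mhz / 100.0

pcie3_x8 = 7.9  # approx. one-way GB/s for PCIe 3.0 x8
print(f"{overclocked_bandwidth_gbs(pcie3_x8, 112):.2f} GB/s")  # ~8.85 GB/s at 112 MHz
```

So a 12% bus overclock would only claw back about 1 GB/s on a 3.0 x8 link, nowhere near the jump to 4.0.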


----------



## Oberon (Aug 24, 2021)

TheinsanegamerN said:


> Now there's a strawman argument. Where did I say any of that? I didn't. The only thing I said was that AMD has a habit of kneecapping themselves when they start catching Nvidia: rebranding cards, too little memory (4 GB 580), or an x8 bus that impacts performance in some games (6600 XT; the 5500 XT was hit by BOTH of these issues).
> 
> 
> You know, outside of software that did show a performance difference. Of course:
> ...



Since we don't have 4.0 x16 numbers, you can't say that AMD is "kneecapping" themselves with this choice. There is a grand total of ONE performance scenario in this review where the difference between 4.0 and 3.0 x8 matters (9 FPS vs. 7 is irrelevant). As for previous generations, both the 5500 XT and the 480/580 had 8 GB versions available to those with a tiny bit more money. There's just really no basis for your argument.


----------



## Valantar (Aug 24, 2021)

TheinsanegamerN said:


> Now there's a strawman argument. Where did I say any of that? I didn't. The only thing I said was that AMD has a habit of kneecapping themselves when they start catching Nvidia: rebranding cards, too little memory (4 GB 580), or an x8 bus that impacts performance in some games (6600 XT; the 5500 XT was hit by BOTH of these issues).


Not a straw man, just highlighting the flawed logic behind your argument.


TheinsanegamerN said:


> You know, outside of software that did show a performance difference. Of course:
> 
> So no real-world consequences. Outside of real-world consequences, but who counts those?


...sigh. I and several others have argued why considering those outliers precisely as outliers is reasonable. I have yet to see an actual argument for the opposite.


TheinsanegamerN said:


> Right, so any time performance doesn't line up with expectations, there are excuses.


No. Perspective and context is not the same as an excuse.


TheinsanegamerN said:


> Using an x16 bus like Nvidia would fix that problem, but the GPU isn't gimped. Everyone knows that buggy console-port games NEVER sell well or are popular, ever. Right?


Hey, look at that, a first attempt at an argument for why these games matter. However, it is once again deeply flawed. If a port is buggy and performs poorly, on whom is the onus to rectify that situation? The developer, the engine developer/vendor, the GPU (+driver) maker, the OS vendor/platform owner, all of the above? Depending on the issue, I would say some balance of all of the above. You are arguing as if the _only_ responsible party for improving things is AMD. Hitman's highly variable performance is widely documented, as are DS's and HZD's issues. AMD has done work towards improving things with driver updates, as have developers, but at this point, outside of possible effects of unidentified early driver bugs that typically get fixed within 1-2 releases after launch of a new SKU, the remaining responsibility falls squarely on the developer and engine vendor.


TheinsanegamerN said:


> If you have to come up with excuses for why examples of a x8 bus hurting performance dont actually matter you've answered your own question.


But that's the thing: we have no proof of that. We know that a 3.0 x8 bus on a current-gen high-end CPU sees a performance drop. We have no way of knowing whether a 4.0 x16 bus would perform better than an x8 one - it might be identical. A 3.0 x16 bus would likely perform better than a 3.0 x8 one (as it matches 4.0 x8 in bandwidth), but that still leaves another issue with this 'fault': essentially nobody is going to use this GPU on PCIe 3.0 with a CPU as fast in gaming as the 5600X. Nobody who bought a 9900K or 10700K is going to buy a 6600 XT - most likely they already have something equivalent or better. And if your CPU is holding you back, well, then you won't have the performance overhead to actually see that bandwidth bottleneck at all.
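For anyone who wants to sanity-check that bandwidth equivalence, the theoretical numbers are easy to work out (a quick back-of-the-envelope sketch using the standard per-generation signaling rates and line-coding overheads):

```python
# Back-of-the-envelope PCIe bandwidth calculator.
# Signaling rates (GT/s) and line-coding efficiency per generation
# are the standard published figures.
GT_S = {"1.1": 2.5, "2.0": 5.0, "3.0": 8.0, "4.0": 16.0}
ENCODING = {"1.1": 8 / 10, "2.0": 8 / 10, "3.0": 128 / 130, "4.0": 128 / 130}

def bandwidth_gbs(gen: str, lanes: int) -> float:
    """Effective one-way bandwidth in GB/s for a given link."""
    return GT_S[gen] * ENCODING[gen] * lanes / 8  # bits -> bytes

for gen, lanes in [("4.0", 8), ("3.0", 16), ("3.0", 8), ("2.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {bandwidth_gbs(gen, lanes):.2f} GB/s")
```

PCIe 4.0 x8 and 3.0 x16 land on exactly the same effective bandwidth (~15.8 GB/s), while 3.0 x8 and 2.0 x16 both sit around 8 GB/s, which is why those pairs are interchangeable for this comparison.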

So the scope of your issue is progressively shrinking:
- It's not that PCIe 4.0 x8 is a bottleneck; it only shows up on older 3.0 systems.
- It's not a bottleneck on all older systems, only those with CPUs close to the gaming performance of the 5600X.
- It's not even all fast CPUs, only a highly selective range of titles with known issues left unfixed by developers.

The Venn diagram of all of these caveats is _tiny_. Hence the 'issue' is overblown.


TheinsanegamerN said:


> You've constructed your own argument here that you can never lose because you immediately discredit anything that goes against your narrative. I don't know what it is about the modern internet where any criticism against a product has to be handwaved away. The 6600 XT is already a gargantuan waste of money, why defend AMD further screwing with it by doing this x8 bus thing that Nvidia would get raked over the coals for doing?


Because criticism should be considered, relevant, accurate, useful, applicable, and so on. This criticism is none of those things. It is pointing to a spec-sheet deficiency that has negative consequences in an _extremely_ restrictive and unlikely set of circumstances. Making that out to be a wholesale criticism of the product is irrational and deeply flawed logic, which either speaks to bias (implicit or explicit) or just poor reasoning.

You say the 6600 XT is a 'gargantuan waste of money' - in your opinion, is it more so than the 3060? Or the 3060 Ti? At MSRP, or at street prices? I could understand that argument in the current market if leveled against literally every GPU sold, as they _all_ are. But you're singling out one specific card. That requires more reason than "it has the potential to slightly underperform if you play a highly specific selection of titles on a highly specific hardware configuration".

And that's the point here. People aren't defending the 6600 XT specifically as much as we are arguing against any singling out of it. The arguments for doing so do not stand up to scrutiny. Sure, there are (very) niche cases where it will be a poor choice, and if you're in that niche, then it really isn't the GPU you should be looking at buying. But extrapolating that into supposedly generally applicable advice? That's some really, really poor reasoning.


----------



## INSTG8R (Aug 24, 2021)

TheinsanegamerN said:


> "where in this review outside of cases where it matters can you find examples of it mattering"
> 
> Well if you're going to immediately throw out evidence you dont like this conversation will go nowhere.


You're the one trying to ignore the evidence to the contrary and claiming it's "kneecapped" over that 2%. It still beats its competition regardless of your "disability".


----------



## Fast Turtle (Aug 24, 2021)

I'm curious how the card compares against the 5600 XT, as this is supposedly a direct replacement/upgrade for it.


----------



## Valantar (Aug 24, 2021)

Fast Turtle said:


> I'm curious how the card compares against the 5600 XT, as this is supposedly a direct replacement/upgrade for it.


Check one of the many reviews? They all include comparisons to a heap of other GPUs.
ASRock Phantom Gaming D
Asus ROG Strix OC
MSI Gaming X
XFX Speedster Merc 308
Sapphire Pulse XT

Tl;dr: going by the Pulse XT (which is the closest to stock clocks of the cards above), it's about 33% faster at 1080p and 1440p and 22% faster at 2160p (not that you'd use this GPU for 2160p gaming), with ~10 W higher power draw.


----------



## Mescalamba (Aug 25, 2021)

Some time ago I read an article (maybe even here) about whether any card can saturate PCIe 3.0 x16. If I remember right, the answer was that it can't. And since that wasn't too long ago, I'm fairly positive this kind of mainstream GPU is perfectly fine with x8.


----------



## Valantar (Aug 25, 2021)

Mescalamba said:


> Some time ago I read an article (maybe even here) about whether any card can saturate PCIe 3.0 x16. If I remember right, the answer was that it can't. And since that wasn't too long ago, I'm fairly positive this kind of mainstream GPU is perfectly fine with x8.


IIRC the 2080 Ti was the first tested GPU to show significant (i.e. more than 1-2%) performance limitations when going from PCIe 3.0 x16 to 2.0 x16. And the 3080 Ti still doesn't meaningfully differ between 4.0 x16 and 3.0 x16 (and only drops ~4% at 2.0 x16).

If anything, the different PCIe scaling between these two GPUs in outlier games like Death Stranding points to something other than PCIe bandwidth being the cause of the drop: there's no bandwidth-related reason why the 6600 XT should lose more performance moving from 4.0 x8 to 3.0 x8 than the 3080 Ti does from 3.0 x16 to 2.0 x16 - those are the same bandwidth steps, after all. This indicates that the cause of the performance drop in those titles isn't the bandwidth limitation itself, but some other bottleneck (a driver issue? Some convoluted game bug? The engine for some reason transferring far more data to VRAM on RDNA2 than on Ampere?) that can't be identified through this testing. It's too bad the 3080 Ti wasn't tested with Hitman 3, as that would have been another interesting comparison.


----------



## Octopuss (Aug 25, 2021)

Is there any explanation for why the card is limited to x8?


----------



## Chrispy_ (Aug 25, 2021)

Octopuss said:


> Is there any explanation for why the card is limited to x8?


Cost savings.
x8 means fewer traces on the PCB, a simpler PCB layout, less gold and copper used, etc.

I know these cards are being scalped at $600 and have a high MSRP of $380, but realistically this is a budget/entry-level design where cost-effectiveness matters more than outright maximum performance. If they can shave 3% off the price and it only costs 2% of the performance, that's a worthwhile tradeoff.
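To put that tradeoff in numbers (purely illustrative figures, not measurements from the review):

```python
# Toy value comparison: is a 3% price cut worth a 2% performance hit?
# All numbers here are illustrative, not from the review.
def perf_per_dollar(fps: float, price: float) -> float:
    return fps / price

full_fat = perf_per_dollar(100.0, 380.0)        # hypothetical x16 version
cut_down = perf_per_dollar(98.0, 380.0 * 0.97)  # 2% slower, 3% cheaper

print(cut_down > full_fat)  # prints True: the cheaper card wins on value
```

As long as the relative price cut exceeds the relative performance loss, the cut-down design comes out ahead on performance per dollar.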


----------



## Valantar (Aug 25, 2021)

Chrispy_ said:


> Cost savings.
> x8 means fewer traces on the PCB, a simpler PCB layout, less gold and copper used, etc.
> 
> I know these cards are being scalped at $600 and have a high MSRP of $380, but realistically this is a budget/entry-level design where cost-effectiveness matters more than outright maximum performance. If they can shave 3% off the price and it only costs 2% of the performance, that's a worthwhile tradeoff.


Yeah, there are also the (small, but real) die-area savings from a smaller PCIe PHY, and minute power savings from the same. Not an unreasonable thing for what this GPU is clearly designed to be - an upper-midrange budget+ GPU (likely in the ~$300 range like the 5600 XT). Of course the market has made pricing like that unrealistic from many directions (rising material prices, fab capacity shortages, silicon wafer shortages, SMD component shortages, etc., plus crypto, plus demand from gamers sitting on two-generations-old hardware, +++), but this design was likely started _long_ before this situation made itself known in a major way. IIRC the 5500 XT was x8 as well, no? Alongside the RX 460/560 too. It's probably designed to scale downwards with 1-2 cut-down SKUs using the same reference board design (just not fully populated with VRMs etc.), which would make cost savings important, as margins on those cheaper products would inevitably be lower.


----------



## 5 o'clock Charlie (Aug 25, 2021)

Thank you @W1zzard for a very informative review. I was very curious about how much of an impact there would be between PCIe Gen 3 and Gen 4.


----------



## Garlic (Aug 25, 2021)

badmau5 said:


> It would be interesting to understand why Techspot/HWUnboxed's Doom testing had such a big FPS gap between 4.0 and 3.0. Early drivers, maybe?


Nope, it's just Doom needing that much bandwidth.


----------



## badmau5 (Aug 27, 2021)

There is probably nothing tragically wrong with this card. It all comes down to price and availability; MSRP is irrelevant in most cases. Of course this card comes with compromises - its ray tracing capabilities are practically nonexistent - but used 1070s and 980 Tis sell for $350+ on eBay and 6700 XTs go for $850, so I guess if one could easily buy a 6600 XT for $400, it wouldn't be such a bad deal. Again, in current market conditions. If all cards were readily available at MSRP, it would be a whole different ball game. AMD knew they'd sell these like hotcakes regardless of reviews, so they figured they didn't have to design a top-value product. If only availability improved, but we all know it'll take many more months for this nonsense market to stabilize.

I personally will just keep torturing my GTX 980 until I can buy something decent for $400-$450.


----------



## Vayra86 (Sep 3, 2021)

badmau5 said:


> There is probably nothing tragically wrong with this card. It all comes down to price and availability; MSRP is irrelevant in most cases. Of course this card comes with compromises - its ray tracing capabilities are practically nonexistent - but used 1070s and 980 Tis sell for $350+ on eBay and 6700 XTs go for $850, so I guess if one could easily buy a 6600 XT for $400, it wouldn't be such a bad deal. Again, in current market conditions. If all cards were readily available at MSRP, it would be a whole different ball game. AMD knew they'd sell these like hotcakes regardless of reviews, so they figured they didn't have to design a top-value product. If only availability improved, but we all know it'll take many more months for this nonsense market to stabilize.
> 
> I personally will just keep torturing my GTX 980 until I can buy something decent for $400-$450.



This - and let's be honest, it's not like Nvidia is producing top-tier products across its entire lineup right now either. Or at any sort of reasonable price.


----------



## plonk420 (Sep 11, 2021)

yeah, i haven't checked here, but in AU, HUB found that the cheapest Nvidia card at the same price as the 6600 XT is the 1660 Super


----------



## LabRat 891 (Oct 15, 2021)

Only <1% loss at PCIe 3.0 x8; these cards seem perfect to run off PCIe 4.0 x4 if your workstation or multifunction appliance needs the lanes for other devices. Something like a 100 Gbps NIC, HBA, or RAID card might need the bandwidth more.
If it weren't for the signal integrity issues, you could run your GPU off the CPU's M.2 slot, risered to wherever you wanted, and retain your full PCIe x16 slot. Say, for a bifurcated quad-M.2 card?


----------



## MagnuTron (Mar 7, 2022)

So, strangely enough - I am measuring a significant performance loss in none other than CS:GO, on the map Ancient. Very niche, I know, but I am still taking it up with AMD right now...


----------



## Fast Turtle (Mar 7, 2022)

Interesting thought there, LabRat, but I'd go and get a Zotac GT 730 PCIe x1 card for a lot cheaper. Yes, they're out there - originally OEM-only, but Newegg has a listing. Very useful for a server/HTPC and other media playback with very light (Freecell) gaming.


----------



## pexxie (Jul 9, 2022)

> The underlying reason we're seeing these effects in some games is that nearly all titles are developed for consoles first, which have just one kind of memory that's shared between CPU and GPU. This also means that moving data between the CPU and GPU is incredibly fast and doesn't incur the latency or bandwidth penalties we're seeing on the PC platform. Remember, consoles are basically using something similar to IGP graphics, which has CPU and GPU integrated as a single unit, unlike the PC where you have discrete GPUs sitting on the PCI-Express bus. The onus is now on the game developers to make sure that their games not only run the best on consoles, their cash cow, but also on the PC platform.


Thank you for this. Great reasoning. Those PCIe 1.1 x8 numbers are good for naming and shaming. Comparing the AMD numbers to the NVIDIA numbers in terms of the scaling and relative performance is quite revelatory. My guess is it's the AMD hardware GPU scheduler showing off.


----------



## Jokii (Oct 5, 2022)

MagnuTron said:


> So, strangely enough - I am measuring a significant performance loss in none other than CS:GO, on the map Ancient. Very niche, I know, but I am still taking it up with AMD right now...


Any updates? How much was the difference in performance? Did they fix it?


----------



## MagnuTron (Oct 6, 2022)

Jokii said:


> Any updates? How much was the difference in performance? Did they fix it?


They didn't fix jack. They didn't answer - as always. But I found a fix: ReBAR.


----------



## ARF (Oct 6, 2022)

Don't buy.

Lowest prices by type of card:

*RX 6600 XT:*
AMD Radeon RX 6600 XT Grafikkarte (2022) Preisvergleich | Günstig bei idealo kaufen

*RX 6600:*
AMD Radeon RX 6600 Grafikkarte (2022) Preisvergleich | Günstig bei idealo kaufen

*RX 6650 XT:*
AMD Radeon RX 6650 XT Grafikkarte (2022) Preisvergleich | Günstig bei idealo kaufen

Whoever decides these prices is either mad or completely out of touch with the physical world.


----------



## newuser78 (Oct 21, 2022)

@MagnuTron: he pretty much doubles his framerates... really? That's something Trump would say. No benchmark you can find out there gives ReBAR even 10% more performance.

I can even tell you how he faked that performance jump in the video: you enable multicore rendering in the CS:GO settings - on my 5800X that takes you from around 180 FPS to 400 FPS (the default CS:GO cap), and even more with fps_max 0, but the frametimes aren't stable.
You should enable that setting anyway, limit the framerate to your monitor's refresh rate, and run in fullscreen windowed; otherwise you get some kind of weird lag, especially when turning in game.

tested on a 6900 XT


----------



## Mussels (Oct 22, 2022)

newuser78 said:


> @MagnuTron: he pretty much doubles his framerates... really? That's something Trump would say. No benchmark you can find out there gives ReBAR even 10% more performance.
> 
> I can even tell you how he faked that performance jump in the video: you enable multicore rendering in the CS:GO settings - on my 5800X that takes you from around 180 FPS to 400 FPS (the default CS:GO cap), and even more with fps_max 0, but the frametimes aren't stable.
> You should enable that setting anyway, limit the framerate to your monitor's refresh rate, and run in fullscreen windowed; otherwise you get some kind of weird lag, especially when turning in game.
> ...


Bugs exist, dude.

Go look at how Intel's GPUs perform without ReBAR.


----------



## newuser78 (Oct 22, 2022)

Mussels said:


> Bugs exist, dude.
> 
> Go look at how Intel's GPUs perform without ReBAR.


This thread is about the 6000 series.


----------



## Mussels (Oct 22, 2022)

newuser78 said:


> This thread is about the 6000 series.


And? Unless you own the same hardware as the person who posted about the problem, you can't know whether their report was accurate.

A user had a problem and posted a solution, and you've decided to attack them and pretend they made the whole thing up.


----------

