# AMD Radeon Fury X PCI-Express Scaling



## W1zzard (Nov 3, 2015)

In this article, we investigate how the performance of AMD's Radeon Fury X is affected when running at constrained PCI-Express bus widths such as x8 or x4. We also test all PCIe speed settings: 1.1, 2.0, and 3.0. One additional test checks how much performance is lost when using the chipset's PCIe x4 slot.
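For context on what those settings mean in raw numbers: the usable per-direction bandwidth of each link follows directly from the generation's transfer rate and line encoding (8b/10b for PCIe 1.1 and 2.0, 128b/130b for 3.0). A quick sketch using the standard PCI-SIG figures (the helper function itself is just for illustration):

```python
# Theoretical per-direction PCIe bandwidth from transfer rate and encoding.
# Spec figures: 1.1 = 2.5 GT/s (8b/10b), 2.0 = 5.0 GT/s (8b/10b),
#               3.0 = 8.0 GT/s (128b/130b).
GEN = {
    "1.1": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def bandwidth_gbps(gen: str, lanes: int) -> float:
    """Usable GB/s per direction for a given generation and lane count."""
    gt_per_s, efficiency = GEN[gen]
    # Each transfer carries 1 bit per lane; scale by encoding overhead,
    # divide by 8 to get bytes, then multiply by the lane count.
    return gt_per_s * efficiency / 8 * lanes

for gen in GEN:
    for lanes in (4, 8, 16):
        print(f"PCIe {gen} x{lanes}: {bandwidth_gbps(gen, lanes):.2f} GB/s")
```

This is why the article's x4 and 1.1 results are the interesting ones: PCIe 1.1 x4 offers roughly 1 GB/s, against about 15.75 GB/s for 3.0 x16.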



----------



## Ferrum Master (Nov 3, 2015)

Just for fun: does overall system power consumption change when running PCIe 2.0 vs. 3.0?


----------



## Assimilator (Nov 3, 2015)

I was hoping for this test to be run with Fury X CrossFire, to see how much bandwidth XDMA consumes and how badly CF scaling is affected by lower speeds.


----------



## W1zzard (Nov 3, 2015)

Assimilator said:


> I was hoping for this test to be run with Fury X CrossFire, to see how much bandwidth XDMA consumes and how badly CF scaling is affected by lower speeds.


I'll do 970 SLI next; after that, maybe Fury X CF, if I can find a second card...


----------



## FW1374 (Nov 3, 2015)

IIRC Wolfenstein had interesting results last time. Maybe add that game to the 970 test?


----------



## GhostRyder (Nov 3, 2015)

This is a great read; it's good to get answers to the questions many people ask when it comes to PCIe scaling.

It also shows that as long as you have Sandy Bridge or newer (heck, even chips like the i7-920), you can run games maxed out without worrying about PCIe performance scaling.


----------



## Basard (Nov 3, 2015)

Cool... So I'm good to go for another few years with my 770 chipset then. (edit) Seeing GhostRyder's post makes me cry now.

I like the new look of the charts by the way.


----------



## Joss (Nov 3, 2015)

Thank you.
Tests like these define what a tech site is, and this _is_ a tech site.


----------



## heydan83 (Nov 4, 2015)

Thanks for this great article!


----------



## Hayder_Master (Nov 4, 2015)

Great review, W1zzard. Any tests for the GTX 980 Ti? Or maybe you can tell me what you'd guess? I think it would be the same as this.


----------



## Ja.KooLit (Nov 4, 2015)

@W1zzard can you do PCIe scaling on a board that has a PLX chip? I wonder about the impact of PLX latency on CrossFire/SLI on a non-PLX vs. a PLX board.

Anyway, thanks for the test and great review.


----------



## geon2k2 (Nov 4, 2015)

W1zzard said:


> I'll do 970 SLI next; after that, maybe Fury X CF, if I can find a second card...



Please, please, please... I'm very much interested to see what happens now that the CrossFire bridge is gone. There are many users out there who have x16 + x4 slots (on both 2.0 and 3.0).


----------



## Ubersonic (Nov 4, 2015)

> In the coming weeks, we will test GTX 970 SLI using various PCIe settings to investigate what happens in a multi-GPU setup.



Doesn't NVIDIA restrict SLI to x8 and x16 slots only?


----------



## W1zzard (Nov 4, 2015)

Ubersonic said:


> Doesn't NVIDIA restrict SLI to x8 and x16 slots only?


I think so, but with PCIe speeds 1.1, 2.0, and 3.0, there should be enough ways to test this.


----------



## Aquinus (Nov 4, 2015)

I think that, in reality, PCIe scaling matters mostly for CFX/SLI setups. 95% of the time, if someone has a PCIe slot for a single graphics card, it's going to be an x16 slot; the exception is some very low-end boards that only have x4 slots, which would already be held back by the platform and CPU.

What I want to see is the difference for dual GPU at x16/x16, x8/x8, and x8/x4, which seem to be the most common configurations for two GPUs.


----------



## Mussels (Nov 4, 2015)

One thing that stands out: there's a large amount of detail on Intel platforms supporting PCIe 3.0 on the first page, but nothing about AMD supporting it. It seems odd to have such detail on Intel yet forget the competition.

Edit: poor wording. I'm aware that AMD has next to no support for it on the chipset side, but a comment on that seems like it belongs in this type of article.


----------



## Aquinus (Nov 4, 2015)

Mussels said:


> One thing that stands out: there's a large amount of detail on Intel platforms supporting PCIe 3.0 on the first page, but nothing about AMD supporting it. It seems odd to have such detail on Intel yet forget the competition.


AM3/+ doesn't really support PCIe 3.0 in any meaningful way, which only leaves the APUs. On the other hand, my 3820 is currently running my 390 at PCIe 3.0, which gives you an idea of how long Intel has had it available.


----------



## Ferrum Master (Nov 4, 2015)

Aquinus said:


> AM3/+ doesn't really support PCIe 3.0 in any meaningful way, which only leaves the APUs. On the other hand, my 3820 is currently running my 390 at PCIe 3.0, which gives you an idea of how long Intel has had it available.



Yeah, but the funny thing is that AMD was right to hold off, since PCIe 3.0 really isn't needed... especially for their budget and mid-range platform offerings. These tests prove it.


----------



## Aquinus (Nov 4, 2015)

Ferrum Master said:


> Yea but the funny thing is that AMD was right holding that off as really PCIE3 is not needed... especially for their budget-mid platform offerings. And these tests prove it.


AMD didn't hold off; AM3/+ is just a dead platform. A lot of newer APUs have PCIe 3.0, in particular the 7xxx series.


----------



## FourtyTwo (Nov 4, 2015)

Excellent reference article.


----------



## Ferrum Master (Nov 4, 2015)

Aquinus said:


> AMD didn't hold off; AM3/+ is just a dead platform. A lot of newer APUs have PCIe 3.0, in particular the 7xxx series.



Yeah, but it took them a few years to get it at all.


----------



## Legacy-ZA (Nov 4, 2015)

This was a very interesting article, thank you for making it. I have to admit I was expecting much worse performance at the older PCIe speeds, but honestly, it really isn't that bad at all.


----------



## TheDeeGee (Nov 4, 2015)

Still no worries between x8 and x16 3.0, it seems.

In my current setup I'm limited to x8 3.0 because of my dedicated sound card. I could put it in the x1 slot, but then it's crammed right up against my video card, and I'm not really fond of that.


----------



## BiggieShady (Nov 4, 2015)

Most PCIe traffic in games happens during the loading screen, and of course there's no point measuring the performance of a progress bar.
So, during actual gameplay, PCIe traffic spikes when streaming high-res texture mip levels and/or geometry LODs.
Unreal Engine uses texture streaming extensively, and streaming is multi-threaded... one would think it shouldn't be a problem to saturate the PCIe bus this way, even on an old PCIe 1.1 dual-CPU Xeon board.
The thing is, Unreal Engine (and probably others too) has a system in place to monitor the bandwidth used for texture streaming, and a mechanism to make optimal use of the available PCIe bandwidth in order to avoid stuttering.
In Unreal Engine games you can type "*stat StreamingDetails*" in the console and see the bandwidth that streaming uses.
It would be interesting to see whether those values exceed any of the PCIe modes, and whether streaming adjusts to work stutter-free in all of them.
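To put rough numbers on that comparison: given a streaming rate read off a tool like the one above, you could flag which link modes it would press against. A quick sketch, assuming the standard per-lane spec figures (the helper function and the 80% headroom threshold are my own, purely illustrative):

```python
# Per-direction ceiling in MB/s for the x16 modes tested in the article
# (standard per-lane usable rates: 250, 500, ~985 MB/s).
LINK_MBPS = {
    "1.1 x16": 250 * 16,   # 4000 MB/s
    "2.0 x16": 500 * 16,   # 8000 MB/s
    "3.0 x16": 985 * 16,   # ~15760 MB/s
}

def saturated_modes(streaming_mbps: float, headroom: float = 0.8) -> list:
    """Return the modes where the given streaming rate exceeds
    `headroom` (default 80%) of the link's theoretical ceiling."""
    return [mode for mode, cap in LINK_MBPS.items()
            if streaming_mbps > cap * headroom]

# A hypothetical 3500 MB/s streaming burst would press against 1.1 x16
# (80% of 4000 MB/s = 3200 MB/s) but leave 2.0 and 3.0 untroubled.
print(saturated_modes(3500))
```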


----------



## GC_PaNzerFIN (Nov 4, 2015)

How does the bandwidth affect minimum FPS? I remember that in the good old days, at least, there was a more significant change in minimum FPS, not so much in the average: more dips right when you definitely didn't want them.


----------



## EarthDog (Nov 4, 2015)

I love these articles. I can't wait to see what SLI/CFX will bring... but I would see if you can use AT LEAST 980s. Don't go down in the tier of card used.



GhostRyder said:


> It also shows that as long as you have Sandy Bridge or newer (heck, even chips like the i7-920), you can run games maxed out without worrying about PCIe performance scaling.


Sorry, what does the CPU have to do with PCIe scaling? Did he test other CPUs that show different results? Curious as to where you pulled that assessment from...


----------



## W1zzard (Nov 4, 2015)

EarthDog said:


> AT LEAST 980s


I'm thinking about using 980 Tis, and also doing triple SLI... just have to organize the cards.


----------



## EarthDog (Nov 4, 2015)

That would be HUGE! I don't recall anyone ever doing that, and I've always wondered whether it scales with more cards!


----------



## Kissamies (Nov 5, 2015)

Ubersonic said:


> Doesn't NVIDIA restrict SLI to x8 and x16 slots only?


IIRC at least 3-way supports x4 for the third card? Some X58 boards, like my old ASUS P6T, support x16/x16/x4 3-way SLI without the NVIDIA chip.

But officially, I also think x8/x8 is the minimum for (2-way) SLI.


----------



## vargis14 (Nov 6, 2015)

Well, it looks like my 2600K and its x8 PCIe 2.0 won't hold me back at 3440x1440 anytime soon, since I'm using it now with two EVGA GTX 770 Classified 4GB cards, and the performance is right up there with a single 980... maybe even a 980 Ti for the most part.

Luckily, my two Cherry Classified cards run at 1280 MHz with Boost 2.0 and a 111% power target.


----------



## jaggerwild (Nov 7, 2015)

GhostRyder said:


> This is a great read; it's good to get answers to the questions many people ask when it comes to PCIe scaling.
> 
> It also shows that as long as you have Sandy Bridge or newer (heck, even chips like the i7-920), you can run games maxed out without worrying about PCIe performance scaling.





Sandy Bridge-E (X79 chipset) officially had only PCIe 2.0; you had to hack it to get 3.0, even though the box said it had it.


----------



## Aquinus (Nov 7, 2015)

jaggerwild said:


> Sandy Bridge-E (X79 chipset) officially had only PCIe 2.0; you had to hack it to get 3.0, even though the box said it had it.


My 3820 runs my 390 at 3.0 out of the box without any alteration to anything. Just saying.


----------



## escapeclause (Nov 10, 2015)

Excellent article, thank you!

An IB or RAID controller in a second CPU-attached PCIe 3.0 slot at x8 still makes sense.


----------



## manofthem (Nov 13, 2015)

Very awesome review, W1zz: great insight, and much appreciated by all! Looking forward to the next one on SLI.

Yeah, I missed this review until just now.


----------



## Mini0510 (Nov 23, 2015)

geon2k2 said:


> Please, please, please... I'm very much interested to see what happens now that the CrossFire bridge is gone. There are many users out there who have x16 + x4 slots (on both 2.0 and 3.0).



I already tested this. I got about 90-110 FPS with 290X CrossFire in BF4 at x16 + x4. Then I switched to an SLI motherboard and got between 120 and 140 FPS.
So without the bridge, x4 is a major bottleneck.


----------

