# NVIDIA GeForce GTX 1080 PCI-Express Scaling



## W1zzard (Dec 12, 2016)

In this article, we investigate how the performance of NVIDIA's GeForce GTX 1080 is affected by constrained PCI-Express bus widths such as x8 or x4. We also test all PCIe speed settings: 1.1, 2.0, and 3.0. One additional test checks how much performance is lost when using the chipset's PCIe x4 slot.

*Show full review*


----------



## Grings (Dec 12, 2016)

Thanks for running these again.

Every time new cards come out I wonder if we will finally see some form of drop-off, what with PCIe 3.0 releasing 5 or so years ago now. Nope, still good.

Out of interest, have you run any comparisons of regular and HB SLI bridges on the 1070/80? I was thinking of picking up another 1070, but Asus don't have their own bridge out yet, so I was going to try the single-connector one that came with my board until Asus releases one (or find an NVIDIA one in a clearance section; the damn thing is £10 more than the MSI and EVGA ones).


----------



## Gasaraki (Dec 12, 2016)

Thanks for letting me know that my x58 platform is still good.


----------



## Dippyskoodlez (Dec 12, 2016)

I'd like to see a 2 way sli in here too.

@Grings there was an article when it first came out that wizzard ran i think.


----------



## RejZoR (Dec 12, 2016)

Isn't it strange how x8 3.0 on many occasions performs better than x16 3.0? I find that strangely peculiar.


----------



## Ferrum Master (Dec 12, 2016)

RejZoR said:


> Isn't it strange how x8 3.0 on many occasions performs better than x16 3.0? I find that strangely peculiar.



Up for putting some scotch tape on your card?


----------



## arbiter (Dec 12, 2016)

RejZoR said:


> Isn't it strange how x8 3.0 on many occasions performs better than x16 3.0? I find that strangely peculiar.


When you kinda think about it, you are using fewer lanes to move the same data. Not gonna say it's correct, but in theory it's like SLI/CF scaling: fewer lanes to scale over. I doubt that's completely correct, but I think the idea is pretty close.


----------



## Octopuss (Dec 12, 2016)

Anyone else puzzled by the graphs bar colours? What do they mean?


----------



## nickbaldwin86 (Dec 12, 2016)

But then how can I justify spending money on a new system every time a new chipset and PCIe version comes out? Also, how am I supposed to cry that there aren't enough PCIe lanes and my SLI setup only runs at x8 per card? LOL

I am going to link this every time I see those comments or forum posts. Thanks for the ammo.


----------



## erixx (Dec 12, 2016)

Don't you all feel like being cheated by the industry?

They sell each "next step" as revolutionary. Well, a 5 fps difference between PCIe 2.0 and 3.0, or x8 and x16, at 4K is not revolutionary at all!!!


----------



## nickbaldwin86 (Dec 12, 2016)

erixx said:


> Don't you all feel like being cheated by the industry?
> 
> They sell each "next step" as revolutionary. Well, a 5 fps difference between PCIe 2.0 and 3.0, or x8 and x16, at 4K is not revolutionary at all!!!



No, but only because I have never built a system based on a 5 fps gain. LOL, only a sucker would do that.

I am sure a lot of people buy into the marketing hype, see "next revolution in technology", and proceed to open their wallets.


----------



## Fx (Dec 12, 2016)

nickbaldwin86 said:


> But then how can I justify spending money on a new system every time a new chipset and PCIe version comes out? Also, how am I supposed to cry that there aren't enough PCIe lanes and my SLI setup only runs at x8 per card? LOL
> 
> I am going to link this every time I see those comments or forum posts. Thanks for the ammo.



It has become hard to justify. I used to update my rig every year; then that moved to 2 years. These days, I am running a rig with parts varying from 3 to 4 years old. The graphics card usually only goes as far as 2 years before an upgrade.


----------



## erixx (Dec 12, 2016)

^ I have had a 4K monitor with G-Sync for 2 weeks and I have a 980 Ti card. I am getting 60 fps in all games except the monstrous Arma 3 at ultra (can be tweaked in "5 fps seconds", hahaha), which is fine, yet the press and the hype push me to go "next gen", hahaha. Will I be strong enough? haha


----------



## Player (Dec 12, 2016)

Thank you for another great article W1zzard!



Octopuss said:


> Anyone else puzzled by the graphs bar colors? What do they mean?


With the exception of the bar for the _x4 3.0 via Chipset_, which has its own color, all the others are colored by available bandwidth. PCIe x1 3.0 has almost twice the bandwidth of PCIe x1 2.0, which in turn has twice the bandwidth of PCIe x1 1.1/1.0.
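That explanation can be sanity-checked with a quick back-of-envelope sketch. This is a rough illustration only: the figures are the standard per-lane signaling rates and encoding efficiencies, and real-world throughput is lower due to protocol overhead.

```python
# Back-of-envelope PCIe one-way bandwidth per generation and link width,
# illustrating why the review groups bars by available bandwidth.

# (transfer rate in GT/s, encoding efficiency)
GENS = {
    "1.1": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
}

def bandwidth_gbs(gen: str, lanes: int) -> float:
    """Usable one-way bandwidth in GB/s for a given link."""
    rate, eff = GENS[gen]
    return rate * eff / 8 * lanes  # GT/s -> GB/s per lane, times lane count

for gen in GENS:
    for lanes in (1, 4, 8, 16):
        print(f"PCIe {gen} x{lanes}: {bandwidth_gbs(gen, lanes):.2f} GB/s")
```

Note how x16 2.0 (~8 GB/s) lands almost exactly on x8 3.0 (~7.9 GB/s), which is presumably why those two bars share a color.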


----------



## btarunr (Dec 13, 2016)

Octopuss said:


> Anyone else puzzled by the graphs bar colours? What do they mean?



Same color = similar bandwidth.


----------



## buggalugs (Dec 13, 2016)

interesting.


----------



## The Quim Reaper (Dec 13, 2016)

If people would only show a little self-restraint and only build themselves a new PC every 3, 4, or 5 years, they would actually appreciate the performance gains from new standards and architectures.

I came to my current PC in 2014 from a nearly 5-year-old AMD Phenom II 550, Radeon 4850 & slow mechanical OS hard drive.

The first week, getting used to the extra performance I suddenly had at my disposal, was mind-blowing, and it felt like it was actually worth the considerable outlay.

Piecemeal upgrades that may get you an extra 5 or 10% here and there are just a waste of money IMO.

I really don't see any reason, now or in the next 18 months, to upgrade anything on my current rig.


----------



## Nihilus (Dec 13, 2016)

A whopping 3% loss running 4K at x4 PCIe.
It will be interesting to see how many people get the enthusiast Kaby Lake-X once mainstream Cannon Lake is released with 6 cores.
With this article and the extra cores, it makes the enthusiast class a tough sell.


----------



## Aquinus (Dec 13, 2016)

The Quim Reaper said:


> If people would only show a little self-restraint and only build themselves a new PC every 3, 4, or 5 years, they would actually appreciate the performance gains from new standards and architectures.
> 
> I came to my current PC in 2014 from a nearly 5-year-old AMD Phenom II 550, Radeon 4850 & slow mechanical OS hard drive.
> 
> ...


That's how I felt when I went from a Phenom II 940 based system to my current i7 3820 machine. I'm getting ready for an upgrade now that my machine is encroaching upon 5 years of age. I'm probably going to wait for the next HEDT platform that impresses me.


----------



## Fluffmeister (Dec 13, 2016)

Good to see my 8 year old X58/i7 920 combo holding up well.


----------



## D1RTYD1Z619 (Dec 13, 2016)

Octopuss said:


> Anyone else puzzled by the graphs bar colours? What do they mean?


I agree, they don't seem to have any relevance.


----------



## EarthDog (Dec 13, 2016)

There hasn't been a flame war about this in years... because of these articles. 

Thanks again!

Would have been interesting to run it with the fastest card in your stable, the Titan XP... not that it would have changed anything, lol!


----------



## TheinsanegamerN (Dec 13, 2016)

Aquinus said:


> That's how I felt when I went from a Phenom II 940 based system to my current i7 3820 machine. I'm getting ready for an upgrade now that my machine is encroaching upon 5 years of age. I'm probably going to wait for the next HEDT platform that impresses me.


I'm still rocking Ivy Bridge. The only reason I'm thinking of getting a Zen rig is to have a watercooled 8-core for... reasons.

Other than that, I'm confident I could get another 5-6 years out of this thing, no problem. Just upgrade the GPU every 3-4 years (just moved from a 680 to a 480). 64-player Battlefield is probably the only thing that would bottleneck this system CPU-wise, unless I move to a 144 Hz panel.




EarthDog said:


> There hasn't been a flame war about this in years... because of these articles.
> 
> Thanks again!
> 
> Would have been interesting to run it with the fastest card in your stable, the Titan XP... not that it would have changed anything, lol!


Titan XP SLI. Only way to be sure.


----------



## EarthDog (Dec 13, 2016)

TheinsanegamerN said:


> Titan XP SLI. Only way to be sure.


It would be the same thing... they each have their own bandwidth/lanes... if anything, it would show the memory and CPU being the limits there.


----------



## TheinsanegamerN (Dec 13, 2016)

EarthDog said:


> It would be the same thing... they each have their own bandwidth/lanes... if anything, it would show the memory and CPU being the limits there.


A 10-core Skylake with quad-channel 4200 DDR4. Just to be sure.

Legitimately though, I believe the last SLI test showed that dual cards are more sensitive to bandwidth than single cards, and showed some legitimate gains over PCIe 2.0. Something as fast as the TXP in SLI might show some benefit from PCIe 3.0.


----------



## axiumone (Dec 13, 2016)

What about SLI? My own testing revealed that there's a substantial enough difference to go with x16/x16 if you're investing heavily already.

http://www.overclock.net/t/1616578/...8-w-titan-xp-sli-benchmarks-and-results/0_100


----------



## EarthDog (Dec 13, 2016)

axiumone said:


> What about sli? My own testing revealed to me that there's a substantial enough difference to go with x16/x16 if you're investing heavily already.
> 
> http://www.overclock.net/t/1616578/...8-w-titan-xp-sli-benchmarks-and-results/0_100


I'm floored at those results...


----------



## ThomasS31 (Dec 13, 2016)

btarunr said:


> Same color = similar bandwidth.



It would have made more sense to color by version, i.e. 3.0 = color 1, 2.0 = color 2, etc.

Then it would be clear how they compare to each other.
At least for me.


----------



## R0H1T (Dec 13, 2016)

Did anyone else notice that the difference between successive generations of PCIe was smaller at 4K than at, say, 1080p? I figured it'd be the opposite.


----------



## etienne31 (Dec 14, 2016)

What about stuttering ?


----------



## Vayra86 (Dec 16, 2016)

erixx said:


> Don't you all feel like being cheated by the industry?
> 
> They sell each "next step" as revolutionary. Well, a 5 fps difference between PCIe 2.0 and 3.0, or x8 and x16, at 4K is not revolutionary at all!!!



Not really. The fact is, because there is a very real difference, even though it is small, the bandwidth has been growing along with the power of graphics cards. Back in the PCIe 1.1 days we had significantly less powerful cards. If anything, this bench shows the need for PCIe 3.0 on any serious gaming rig. And that standard really isn't that old. The 1080 also isn't the most powerful card today, so a Titan would show us a larger gap.

I have trouble with the conclusion of this review because of that. The picture I saw on Tom's back in 2011 or 2012 was completely different, with powerful cards not even showing a single % difference. Today, as FPS goes up, the bus actually gets saturated. I see some significant differences at 1080p, and let's not forget we don't run most of the games in this bench suite at high fps; it is mostly eye candy and console-port gaming, and we are pushing ultra instead of high, which generally accounts for a major fps drop.

Also keep in mind that VR likes to run at high FPS.


----------



## kn00tcn (Dec 16, 2016)

etienne31 said:


> What about stuttering ?


I forgot to make this comment; this is exactly what matters!

Seeing these really fast cards have fps drops on lower PCIe standards is one thing, but seeing similar results on older, slower cards (in past PCIe articles) means something else is reducing the fps that's NOT peak bandwidth (the old cards wouldn't have saturated the bus if the new cards are much faster).


----------



## a_ump (Dec 18, 2016)

Thanks W1z. My 2 cents: it's rather cool to see that it does make a difference. However, in some of those results I see what looks to me like an indication that 3.0 is already maxed and that more bus bandwidth could improve performance further. I mean, there's a decent crescendo in the FPS in many titles... guess we won't know if even PCIe 3.0 is bottlenecking until 4.0/3.x is released.


----------



## Vayra86 (Dec 18, 2016)

a_ump said:


> Thanks W1z. My 2 cents: it's rather cool to see that it does make a difference. However, in some of those results I see what looks to me like an indication that 3.0 is already maxed and that more bus bandwidth could improve performance further. I mean, there's a decent crescendo in the FPS in many titles... guess we won't know if even PCIe 3.0 is bottlenecking until 4.0/3.x is released.



CPU/GPU bottlenecking is what I'm attributing that to: the CPU at 1080p, with the GPU taking over at higher resolutions.


----------



## Bender (Dec 22, 2016)

Nice PCI-E scaling review. Thanks


----------



## cheddle (Dec 22, 2016)

Sigh... why bother testing so many games!

You should do a test that ACTUALLY MATTERS, like testing SLI and Crossfire scaling on high-end cards across different PCIe bus speeds.


----------



## _larry (Dec 23, 2016)

I got one for you guys that is not tested here! Well... sort of.

I just added another R9 290 to my rig. My mobo is an ASRock Z75 Pro3. It has one PCIe x16 Gen 3.0 slot, and the other is a PCIe slot running at x4 Gen 2.0...

I have Crossfired AMD 7770s before, and the performance was not that bad considering the small memory bandwidth of those cards.
HOWEVER, with the 2x R9 290s, which have MASSIVE amounts of memory bandwidth, I am having stuttering issues and game crashes in a LOT of games...

I kind of knew what I was getting into... but I found the second card, which can be BIOS-modded into a 290X, for cheap on eBay...
I could just sell my old card and mod this one up if I can't tweak my drivers/mobo to cooperate with these beasts of cards.


----------



## AquaeAtrae (Feb 1, 2017)

Thanks for the article. I'm seeing references to this in an attempt to understand the impact that new Thunderbolt 3 eGPU connections might have. Some laptops are wired with all x4 PCIe lanes while others only have x2 lanes. We have yet to see any tests for these.

More importantly, we should keep in mind and test for the fact that *PCIe bandwidth has little to do with the GPU's output speeds (FPS)*. Once the textures, 3D models (map, characters), cameras, and lighting are all loaded into those 8GB of VRAM, very little PCIe bandwidth is required for the CPU to request an updated render with small changes in angles and positions.

However, loading a new map or region into VRAM may temporarily utilize all available bandwidth and take much longer. When traversing open worlds (map regions) like Far Cry, one might see this in-game. In games like Battlefield 1, this means you'll never start a match fast enough to jump in that plane. Certain games and maps may even timeout (e.g. Call of Duty: Advanced Warfare) before starting. 

Any tests comparing PCIe lane scaling should really identify and report these differences by comparing map load times and by testing games in different video modes (DX11 vs DX12 vs Vulkan). With Thunderbolt 3 solutions, we should also expect an additional small loss from its own overhead.
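The load-time point above can be illustrated with a rough sketch. The link bandwidth figures and the 6 GB asset size are assumptions for illustration only; real loads are also limited by disk speed, decompression, and driver behavior, so these are lower bounds on transfer time.

```python
# Rough lower-bound estimate of how long it takes to stream level/texture
# data into VRAM over different links. All figures are illustrative.

LINKS_GBS = {  # approximate usable one-way bandwidth, GB/s (assumed)
    "PCIe 3.0 x16": 15.8,
    "PCIe 3.0 x4": 3.9,
    "TB3 eGPU (~x4 plus overhead)": 3.0,
}

def load_time_s(asset_gb: float, link_gbs: float) -> float:
    """Lower-bound transfer time in seconds, ignoring disk and CPU costs."""
    return asset_gb / link_gbs

for name, bw in LINKS_GBS.items():
    print(f"{name}: {load_time_s(6.0, bw):.2f} s for a hypothetical 6 GB level")
```

The gap only shows up during bulk transfers like map loads, which is why per-frame FPS benchmarks understate the difference between link widths.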


----------



## Dippyskoodlez (Feb 4, 2017)

AquaeAtrae said:


> Thanks for the article. I'm seeing references to this in an attempt to understand the impact that new Thunderbolt 3 eGPU connections might have. Some laptops are wired with all x4 PCIe lanes while others only have x2 lanes. We have yet to see any tests for these.



Plenty of information is available; Thunderbolt 2 GPUs have been working for a long time.


----------

