# Ivy Bridge PCI-Express Scaling with HD 7970 and GTX 680



## W1zzard (Apr 29, 2012)

Today's latest graphics cards come with support for PCI-Express 3.0, which promises twice the bandwidth while remaining compatible with older motherboards and graphics cards. In our article we analyze differences in PCIe performance on Intel's Ivy Bridge with GeForce GTX 680 and Radeon HD 7970, using 20 games at five resolutions, each at all three PCIe generations and x4, x8, and x16 link widths.
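For reference, the theoretical per-direction bandwidth of each tested generation and link width follows from the per-lane transfer rate and encoding overhead; a quick sketch (these are spec numbers, not figures from the article):

```python
# Theoretical per-direction PCIe bandwidth by generation and link width.
# Gen 1.x/2.0 use 8b/10b encoding (80% efficient); Gen 3.0 uses 128b/130b.

GENS = {
    "1.1": (2.5, 8 / 10),     # (GT/s per lane, encoding efficiency)
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def bandwidth_gbs(gen: str, lanes: int) -> float:
    """Usable bandwidth in GB/s for a given generation and lane count."""
    rate, eff = GENS[gen]
    return rate * eff * lanes / 8  # GT/s * efficiency = Gb/s; /8 -> GB/s

for gen in GENS:
    for lanes in (4, 8, 16):
        print(f"PCIe {gen} x{lanes}: {bandwidth_gbs(gen, lanes):.2f} GB/s")
```

So PCIe 3.0 x8 (~7.9 GB/s) offers nearly the same bandwidth as PCIe 2.0 x16 (8.0 GB/s), which is why those configurations land so close together in the charts.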

*Show full review*


----------



## kroks (May 7, 2012)

Why don't you test SLI/Crossfire? 
They require more bandwidth from PCIe, and you can see a real difference between PCIe 3.0 and 2.0 at big resolutions like 5760x1080.


----------



## W1zzard (May 7, 2012)

kroks said:


> Why don't you test SLI/Crossfire?
> They require more bandwidth from PCIe, and you can see a real difference between PCIe 3.0 and 2.0 at big resolutions like 5760x1080.



we'll see about that. given the data in this article i seriously doubt higher resolution in multi gpu will need more bandwidth.

i'm planning to do multi gpu testing once hd 7990 is out (using 680 sli, 690, 7970 cf, 7990)

multi-gpu is only a minority of users so i thought it would be more useful to test single card first


----------



## jaredpace (May 7, 2012)

needs cfx/sli


----------



## repman244 (May 7, 2012)

What about compute benchmarks? I saw a couple that showed quite a big difference between PCI-E 2.0 16x and PCI-E 3.0 16x.


----------



## FreedomEclipse (May 7, 2012)

I always thought the whole PCI-E 3.0 'double the bandwidth' thing was more of a gimmick just to sell more boards than anything else.

Oh well.... since IB is not such a big leap in performance from SB, and PCI-E 3.0 is nothing but a waste of time, there's really no reason to upgrade to IB from a decent SB setup, and this review just proves it.


----------



## raptori (May 7, 2012)

Will a GTX 690 on PCI-E 2.0 x8 perform similarly to two GTX 680s each on 2.0 x4? (Assuming the performance of a GTX 690 = 2x GTX 680 in SLI.)


----------



## ifkopifko (May 7, 2012)

Thanks for the review. Would it be possible to test minimum frame rates (drops) in the next article about scaling with interface speed? Thanks.


----------



## Mech0z (May 7, 2012)

Would be nice if you could add x1 performance as well (maybe just a few of the tests). At http://forum.notebookreview.com/gaming-software-graphics-cards/418851-diy-egpu-experiences.html we are interested in x1 performance until Thunderbolt gear is ready for external GPUs for our laptops.


----------



## nothappy (May 7, 2012)

Thank you for the hard work, it sure is a hassle to test it in detail.


----------



## Hayder_Master (May 7, 2012)

this is really great review wizzard


----------



## Yellow&Nerdy? (May 7, 2012)

Just goes to show that PCIe 3.0 is pure marketing. Maybe it'll have more of a difference when you run quad-SLI/Xfire with 8x/8x on PCIe 2.0 vs. PCIe 3.0.


----------



## Mech0z (May 7, 2012)

Yellow&Nerdy? said:


> Just goes to show that PCIe 3.0 is pure marketing. Maybe it'll have more of a difference when you run quad-SLI/Xfire with 8x/8x on PCIe 2.0 vs. PCIe 3.0.



It will be useful, not now, but later. Would you rather have it beforehand, or have to wait for it and thereby limit your GPU? Another field that will benefit is PCIe-based SSD cards; these are growing massively in speed.


----------



## dj-electric (May 7, 2012)

This review proves once again that AMD > NVIDIA when it comes to lower PCIe bandwidth.


----------



## Fourstaff (May 7, 2012)

FreedomEclipse said:


> I always thought the whole PCI-E 3.0 'double the bandwidth' thing was more of a gimmick just to sell more boards than anything else.
> 
> Oh well.... since IB is not such a big leap in performance from SB, and PCI-E 3.0 is nothing but a waste of time, there's really no reason to upgrade to IB from a decent SB setup, and this review just proves it.





Yellow&Nerdy? said:


> Just goes to show that PCIe 3.0 is pure marketing. Maybe it'll have more of a difference when you run quad-SLI/Xfire with 8x/8x on PCIe 2.0 vs. PCIe 3.0.



I wonder why you guys think it's a pure marketing gimmick; it delivered on its promise to double bandwidth. Personally, I would rather they release 3.0 before we hit a bottleneck on 2.0 (thus showing no performance advantage right now between the two) than wait until 2.0 becomes a bottleneck before making 3.0 a standard (where it would actually show a difference between the two on launch day). They could release it slightly later, so people would not feel "cheated", but why release something tomorrow when you can do it today?


----------



## Fatal (May 7, 2012)

Outstanding review W1zz! I don't think people understand how much time you spend on your reviews. Hell, reviewing one card would be a pain in the ass. Seems like 3.0 is not worth the worry at all.


----------



## Mathragh (May 7, 2012)

Excellent review, greatly appreciated!


----------



## Yellow&Nerdy? (May 7, 2012)

Mech0z said:


> It will be useful, not now, but later. Would you rather have it beforehand, or have to wait for it and thereby limit your GPU? Another field that will benefit is PCIe-based SSD cards; these are growing massively in speed.



No one can afford those, so it's pretty irrelevant.


----------



## Completely Bonkers (May 7, 2012)

FreedomEclipse said:


> I always thought the whole PCI-E 3.0 'double the bandwidth' thing was more of a gimmick just to sell more boards than anything else.



There is another way to look at it... PCIe 3.0 means you can ditch x16 slots. We can start making cheaper MBs, cheaper sockets, and cheaper GPUs by only needing x4 or x8.

For mainstream, 3.0 x4 is all that is needed. For enthusiasts, 3.0 x8 is more than enough. The extra 1% performance from x8 to x16 is far outclassed by the next GPU upgrade, or a better CPU, etc.


----------



## W1zzard (May 7, 2012)

ifkopifko said:


> Thanks for the review. Would it be possible to test minimum frame rates (drops) in the next article about scaling with interface speed? Thanks.



no, sorry



repman244 said:


> What about compute benchmarks? I saw a couple that showed quite a big difference between PCI-E 2.0 16x and PCI-E 3.0 16x.



gpu compute seems to be a waste of time. video encoders have laughable quality. are there any other applications that anyone uses?


----------



## Jurassic1024 (May 7, 2012)

Some seriously questionable comments made here.  
AMD performs better at the new spec than nVIDIA. 
PCIe 3.0 is marketing. 
Make boards with x8 instead of x16 to save money on motherboards and because PCIe 3.0 isn't saturated.

Smh


----------



## Mussels (May 7, 2012)

> "My motherboard supports only x8 for multiple cards, will performance suck?"



love it. everything else seems professional until that crops up.


----------



## BigMack70 (May 7, 2012)

Amazing review to have - thanks!

Good to know that PCI-e 3.0 is just for future tech or for crazy multi-card configs on non-enthusiast motherboards right now.


----------



## badtaylorx (May 7, 2012)

wow.... what a joke.... seems like a real difference is still years away.... i can't believe the PCIe 1.1 numbers.... holy shit


----------



## achk (May 7, 2012)

2000 individual tests, awesome.

i think it will be useful later


----------



## theJesus (May 7, 2012)

Thank you W1z for being so dedicated. 2k individual tests! I think I'd die if I had to do that.


Fourstaff said:


> I wonder why you guys think it's a pure marketing gimmick; it delivered on its promise to double bandwidth. Personally, I would rather they release 3.0 before we hit a bottleneck on 2.0 (thus showing no performance advantage right now between the two) than wait until 2.0 becomes a bottleneck before making 3.0 a standard (where it would actually show a difference between the two on launch day). They could release it slightly later, so people would not feel "cheated", but why release something tomorrow when you can do it today?


Exactly, it's about future-proofing.

That said, I'm surprised to see that PCI-E 1.1 x16 still holds up with high-end GPUs these days.


----------



## Fourstaff (May 7, 2012)

theJesus said:


> Thank you W1z for being so dedicated. 2k individual tests! I think I'd die if I had to do that.



He wrote a script for that; all he needs to do is double-click the run button and go back to his stuff, periodically swapping monitors and graphics cards around.


----------



## theJesus (May 7, 2012)

Fourstaff said:


> He wrote a script for that; all he needs to do is double-click the run button and go back to his stuff, periodically swapping monitors and graphics cards around.


Don't rain on my parade!

It's still dedication!


----------



## W1zzard (May 7, 2012)

Fourstaff said:


> He wrote a script for that; all he needs to do is double-click the run button and go back to his stuff, periodically swapping monitors and graphics cards around.



yup, otherwise it's impossible to do anything. 

2000 x 10 sec to type in a result number = 5.5 hours

oh and getting all those benchmarks automated = hard, lots of hours .. only the gpu manufacturers and tpu do that, no other sites i know of
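The data-entry arithmetic above, as a quick sanity check:

```python
# Sanity-check the estimate: 2000 results at ~10 seconds each to type in.
tests = 2000
seconds_per_result = 10
hours = tests * seconds_per_result / 3600
print(f"{hours:.1f} hours")  # ~5.6 hours of pure data entry, before any benchmarking
```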


----------



## Mindweaver (May 7, 2012)

Kickass review W1zzard!


----------



## darkangel0504 (May 7, 2012)

nice review


----------



## Steevo (May 7, 2012)

W1zzard said:


> no, sorry
> 
> 
> 
> gpu compute seems to be a waste of time. video encoders have laughable quality. are there any other applications that anyone uses?



F@H is about all we really use.


----------



## DaC (May 7, 2012)

Now that was a cool review.... well done Wizz


----------



## jethro (May 7, 2012)

Thanks for the review. I wish a 16x + 4x test had been run as well, in light of the abundant boards out there supporting CrossFire that way.


----------



## stren (May 7, 2012)

W1zzard said:


> we'll see about that. given the data in this article i seriously doubt higher resolution in multi gpu will need more bandwidth.
> 
> i'm planning to do multi gpu testing once hd 7990 is out (using 680 sli, 690, 7970 cf, 7990)
> 
> multi-gpu is only a minority of users so i thought it would be more useful to test single card first



Thanks for doing this - but it's multi-GPU setups that need the bandwidth. The results, while reassuring to see, are nothing unexpected bearing in mind what we've seen in the past. Multi-GPU would have been far more useful. Check out vega's data - it shows that PCIe bandwidth is a huge limitation for multi-GPU. Multi-GPU users are probably as much of a minority as 680 users.

What I'd like to see is a comparison of 2/3/4-way on X79 (modded drivers for PCIe 3) vs. 2/3/4-way on Z77 with the PLX chip, and without the PLX chip for 2-way. Of course you'll need suitably overclocked CPUs too.


----------



## Badelhas (May 7, 2012)

Excellent review.
But I would love to know if x8/x8/x4 PCIe 2.0 would be OK for multi-GPU - does anyone know?

Cheers


----------



## vega22 (May 7, 2012)

w1zz will you be testing 2 690s/7990s?

only they are the cards that will need pcie3 if any do


----------



## W1zzard (May 7, 2012)

marsey99 said:


> w1zz will you be testing 2 690s/7990s?



no plans for that. right now the plan is 2x 680, 2x 7970, 1x 690, 1x 7990.

send me another 690 and 7990


----------



## manofthem (May 7, 2012)

Awesome review with great info!  Glad to know this before planning my next upgrade path. 

Small itty-bitty typo I believe: in conclusion, 6th bullet: "expect" should be "except" I think

Otherwise, just pure goodness


----------



## W1zzard (May 7, 2012)

manofthem said:


> Small itty-bitty typo I believe: in conclusion, 6th bullet: "expect" should be "except" I think



fixed. thanks


----------



## DarkOCean (May 7, 2012)

It will take many years before GPUs need the kind of bandwidth PCIe 3.0 @ x16 has to offer.


----------



## Completely Bonkers (May 7, 2012)

DarkOCean said:


> It will take many years before GPUs need the kind of bandwidth PCIe 3.0 has to offer.


Disagree. With PCIe 3.0, desktops and laptops can be designed smaller, cheaper, and more efficient by using *fewer lanes*. With 3.0 there really is no need for x16 anymore. We can use x8 sockets or even x4. Save space, save costs. And every socket can be an x8, meaning you can stick your GPUs or SLI setups wherever you want. But with x4 we will be getting close to saturating bandwidth.


----------



## vega22 (May 7, 2012)

W1zzard said:


> no plans for that. right now the plan is 2x 680, 2x 7970, 1x 690, 1x 7990.
> 
> send me another 690 and 7990



if i had them dude i would be happy to, so we could all see the results

as it is i'm broke as a joke and can only dream

i look forward to it nonetheless, as it will be good to see your findings with those cards


----------



## swirl09 (May 8, 2012)

Nice to see up to date figures on this.

I always laughed when people argued it's a poor setup to run multi-GPU on a mainstream platform thanks to the "crippling" effect of "only" having x8 lanes. Previous reviews on this topic showed the difference to be negligible with such a setup.

Pleased with my new rig and happy in the knowledge I can throw another gpu in at a later time with no fuss or penalty.


----------



## nothappy (May 8, 2012)

I remember back in about 2010 I bought an HD 5770 from HIS for my MSI K9A Platinum. I plugged in the card and pushed the power button - nothing happened. Changed to the GT 7100 I had lying around and it worked. Bought a new PSU (from a CX 500 to a TX 650), still no joy; banged my head real hard, still no joy.

Wrote an email to MSI and wouldn't you know it, they gave me a new BIOS about 3 hours later, and everything started to work. The question is, I've got a 6950 now - would it only require a BIOS update to run on that old rig of mine, since it was already PCIe 1.1 and x16 on both slots?

My new rig does maximize all components, but it keeps me wondering about the tech race I am in. Any thoughts, guys?


----------



## Bjorn_Of_Iceland (May 8, 2012)

Thanks, now I can throw it in their faces (those that upgrade for only this reason) that 2.0 x16 has no substantial difference vs. 3.0 x16.


----------



## cadaveca (May 8, 2012)

Bjorn_Of_Iceland said:


> Thanks, now I can throw it in their faces (those that upgrade for only this reason) that 2.0 x16 has no substantial difference vs. 3.0 x16.



For a single card, yes, 100% correct. There IS a very small difference, but not one that is really noticeable. Very similar to the performance boosts offered by some pre-overclocked VGAs.

I noticed some larger differences between x16/x8/x8 vs. x16/x8/x16 in trifire with 6950's, to the tune of 1000 points in 3DMark Vantage, just in PCIe 2.0 (clearly, with 6950's). I am not sure exactly why there is a noticeable difference; it could be the extra lanes allowing the PCIe controller a bit more wiggle room for assigning data to each PCIe link, or it could be the lesser overhead of PCIe 3.0 encoding vs. PCIe 2.0... I dunno.

I am also unsure if that difference in Vantage is seen elsewhere, as I ended up RMA'ing the third card before I did more testing. Frankly, because I RMA'd the card, it could have just been the card acting funny.

I've always held the opinion that PCIe bandwidth only matters when the bus has been saturated, and it seems quite obvious that a single card barely makes use of the added 8 lanes of a x16 link vs. a x8 link. It will be very interesting to see W1zz's testing of the multi-GPU cards, and whether that will have a larger impact.

We could also surmise that the driver itself might not take full advantage of the PCIe bandwidth offered - or in other words, the driver may not be optimized to notice the difference in PCIe link, and could be optimized for a x8 link, or something. There is no difference, for sure, but there's not a lot of quantitative info that declares WHY it doesn't matter.


----------



## EarthDog (May 8, 2012)

Thank you for this testing. What would be of particular interest to a lot of people, IMO, would be to test the 690.


----------



## brandonwh64 (May 8, 2012)

Once again THANK YOU WIZ! Your hard work and dedication to this forum is TRULY appreciated!

I see now that going from 2.0 to 3.0 at x16 is kinda worthless, due to a net gain of only 1 FPS or 1%.


----------



## slim142 (May 8, 2012)

I think this was a great review. Thanks a lot for the hard work.

For those asking for more, or trying to prove a point about why IB is not worth upgrading to: most people have realized that by now.

And for those saying PCIe 3.0 is just a gimmick - well, would you like it to be like SATA, where SATA 6 Gb/s is about to become (or already is) a bottleneck, with 550/525 MB/s SSDs rapidly getting into the market? I don't think so. So please let us have bandwidth available before we saturate it to the max.
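For context on that SATA comparison, the ceiling follows directly from the encoding overhead (a quick back-of-the-envelope sketch):

```python
# Why ~550 MB/s SSDs sit right at the SATA 6 Gb/s ceiling:
# SATA uses 8b/10b encoding, so only 80% of the line rate carries data,
# and protocol overhead eats a bit more on top of that.
line_rate_gbps = 6.0
usable_mbs = line_rate_gbps * (8 / 10) * 1000 / 8  # Gb/s -> MB/s after encoding
print(f"Max usable SATA 6 Gb/s throughput: {usable_mbs:.0f} MB/s")
```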


----------



## jihadjoe (May 10, 2012)

Some folks over at xtremesystems actually did a similar scaling test, but with all sorts of configs all the way up to quad sli/cfx. It seems once you go multi-card, bandwidth is VERY important.

Here's their result for 680 SLI at 2.0 x16/x16 vs. 3.0 x16/x16 (chart image not preserved).

I'll post their link up once I get a chance to quickly rummage through my bookmarks when I get home from work.


----------



## Darkrealms (May 15, 2012)

Thanks W1zz as always great review.

Can't wait to see the multi-card/GPU tests.... if AMD ever releases that 7990....


----------



## plundra (May 18, 2012)

Nice test, although I have a suggestion!

For future tests, it would be very nice if x1 (2.0, maybe 1.1) were tested too!
That way the results would be useful for eGPU configurations in laptops, where you use the ExpressCard slot as the connection and only 1 lane is available.

Haven't checked if new Ivy Bridge laptops have 3.0 in the ExpressCard slot; if that's the case, it should be on par with x4 1.1 from this test, I suppose.
Although I'm on Sandy Bridge with 2.0 only, so it wouldn't help that much.

(I'm really looking for some benchmarks to push me over the edge and purchase a ViDock with some sort of modern graphics card. Interesting to see the Radeon scaling better than Nvidia!)


----------



## Nite_Owl (Mar 24, 2015)

This is one fantastic article! 

Thank you very much for information that is still relevant almost three years later!


----------



## Mussels (Mar 24, 2015)

plundra said:


> Nice test, although I have a suggestion!
> 
> For future tests, it would be very nice if x1 (2.0, maybe 1.1) were tested too!
> That way the results would be useful for eGPU configurations in laptops, where you use the ExpressCard slot as the connection and only 1 lane is available.
> ...



you can generally do some basic math and figure out where performance would be; there may be individual variance, but most of it will be broadly applicable.
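A rough sketch of that "basic math", assuming (crudely) that frame rate varies linearly with link bandwidth between two measured points; the sample numbers below are hypothetical, not from the review:

```python
# Ballpark an untested link width from two measured ones by treating fps as
# linear in link bandwidth. This is a crude model (real scaling is sub-linear
# and game-dependent), so treat the output as a rough lower-bound guess.

def estimate_fps(fps_wide, fps_narrow, bw_wide, bw_narrow, bw_target):
    """Linear interpolation/extrapolation of fps over link bandwidth (GB/s)."""
    slope = (fps_wide - fps_narrow) / (bw_wide - bw_narrow)
    return fps_narrow + slope * (bw_target - bw_narrow)

# Hypothetical example: 60 fps at 2.0 x16 (8 GB/s), 54 fps at 2.0 x4 (2 GB/s);
# extrapolate down to 2.0 x1 (0.5 GB/s).
print(f"estimated fps at 2.0 x1: {estimate_fps(60, 54, 8, 2, 0.5):.1f}")
```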


----------



## Aquinus (Mar 24, 2015)

Not that I endorse posting on a necro'ed thread, but there is a more up to date version of the PCI-E scaling review.



Mussels said:


> you can generally do some basic math and figure out where performance would be; there may be individual variance, but most of it will be broadly applicable.


They got you! (Check the date.)


----------

