# NVIDIA GeForce GTX 1080 SLI



## W1zzard (Jun 21, 2016)

Crysis 4K at more than 60 FPS! We take a close look at multi-GPU SLI with two GeForce GTX 1080 cards in 16 games and at 4 resolutions. We used an SLI HB bridge for all testing, but also have numbers with the old bridge to find out whether a high-bandwidth bridge is absolutely necessary for SLI on Pascal.

*Show full review*


----------



## Ferrum Master (Jun 21, 2016)

Ughr... The scaling isn't really good.


----------



## Aquinus (Jun 21, 2016)

In other words, SLI'ing the 1080 only makes sense if you're running 4k, otherwise the improvement is pretty minor, probably due to a CPU bottleneck.


----------



## Eroticus (Jun 21, 2016)

Aquinus said:


> In other words, SLI'ing the 1080 only makes sense if you're running 4k, otherwise the improvement is pretty minor, probably due to a CPU bottleneck.



CPU bottleneck in 900p/1080p, but everything fine in 4K?

Sorry, but what? Did I miss something?


----------



## TheLostSwede (Jun 21, 2016)

Looks like Nvidia has a lot more work to do on their drivers...


----------



## Aquinus (Jun 21, 2016)

Eroticus said:


> CPU bottleneck in 900p/1080p, but everything fine in 4K?
> 
> Sorry, but what? Did I miss something?


Yeah, CPU load tends to scale with frame rate, not resolution. It's easy for the GPU to churn out frames at lower resolutions, which puts more burden on the CPU. At higher resolutions the GPU spends more time per frame, which reduces the frame rate and, in turn, the CPU power needed for rendering over time.
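
A toy model makes this concrete: if frame time is whichever is longer of a roughly resolution-independent CPU cost and a GPU cost that grows with pixel count, FPS plateaus at low resolutions and only drops once the GPU becomes the bottleneck. Both constants below are made up purely for illustration:

```python
# Toy bottleneck model: frame rate is limited by whichever stage
# (CPU or GPU) takes longer per frame. CPU time per frame is roughly
# constant (game logic, draw calls); GPU time scales with pixel count.
# Both constants are hypothetical, chosen only to illustrate the shape.

CPU_MS_PER_FRAME = 6.0        # assumed CPU cost, independent of resolution
GPU_MS_PER_MEGAPIXEL = 1.2    # assumed GPU cost per million pixels

def fps(width, height):
    gpu_ms = (width * height / 1e6) * GPU_MS_PER_MEGAPIXEL
    return 1000.0 / max(CPU_MS_PER_FRAME, gpu_ms)

for w, h in [(1600, 900), (1920, 1080), (2560, 1440), (3840, 2160)]:
    print(f"{w}x{h}: {fps(w, h):.0f} FPS")
```

In this sketch, 900p through 1440p all land on the same CPU-limited frame rate, and only at 4K does GPU time exceed the CPU's 6 ms, which is why a second GPU helps most there.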


----------



## ZoneDymo (Jun 21, 2016)

Well, I guess we can now estimate it will be a generation or two past this one before we can all do 4K for a reasonable price.


----------



## Jack1n (Jun 21, 2016)

Well, what a surprise: Nvidia lied about the necessity of their new SLI bridge.


----------



## Chaitanya (Jun 21, 2016)

Jack1n said:


> Well, what a surprise: Nvidia lied about the necessity of their new SLI bridge.


Plus you can always use two "classic" bridges in case extra "bandwidth" is needed.


----------



## Frick (Jun 21, 2016)

When it works it works seemingly fine. Look at BF4 and GTA V.


----------



## AHMAD_ESLIM (Jun 21, 2016)

Why the hell is Rise of the Tomb Raider not scaling well on DirectX 12??
I guess Nvidia is ruining DirectX 12 performance again to make AMD's asynchronous compute look useless for DirectX 12, like what they did by putting heavy tessellation in Crysis 2.


----------



## techy1 (Jun 21, 2016)

A clear step back in SLI scaling... yeah, I know it's always "the drivers aren't there yet", but that was a factor before too, and previous-gen cards in SLI offered bigger improvements.


----------



## TheLostSwede (Jun 21, 2016)

Chaitanya said:


> Plus you can always use two "classic" bridges in case extra "bandwidth" is needed.



Have you tested that? I'm pretty sure I read somewhere that it doesn't work properly.


----------



## W1zzard (Jun 21, 2016)

TheLostSwede said:


> Have you tested that? I'm pretty sure I read somewhere that it doesn't work properly.


I did some very quick testing and it looks like only one bridge is active when two are connected (checked with a scope), but the scope is kinda slow with 100 MHz bandwidth, so no conclusive evidence.


----------



## Ferrum Master (Jun 21, 2016)

Frick said:


> GTA V.



You sure? Look at the Strix and Strix SLI, not the plain 1080 FE.


----------



## Enterprise24 (Jun 21, 2016)

The HB bridge scales well in the Fire Strike benchmark.

http://www.vmodtech.com/th/article/nvidia-hb-bridge-for-pascal-gpus-review/page/2


----------



## bug (Jun 21, 2016)

ZoneDymo said:


> Well, I guess we can now estimate it will be a generation or two past this one before we can all do 4K for a reasonable price.


I think you're optimistic; the next gen won't offer playable 4K at a reasonable price. In my case, a reasonable price is ~$200. I'm betting on 2-3 more generations, but that depends on how 10-bit and HDR catch on, because those would incur further performance penalties.
I would love to play games and edit my photos at 4K, but I'm not spending $599 on a video card. Hell, I'm not spending $400.


----------



## enya64 (Jun 21, 2016)

It makes you wonder how the Xbox Scorpio and PlayStation Neo are going to tout full 4K gaming when even two of the fastest current PC cards on the planet can barely do it. I bet 4K at 20-30 FPS will be the norm in a year for console gaming. Let's bring on the dual 1070 SLI review and see how close the performance is!!!


----------



## Frick (Jun 21, 2016)

Ferrum Master said:


> You sure? Look at the Strix and Strix SLI, not the plain 1080 FE.









? Maybe I should have added "at 4K".


----------



## TheLostSwede (Jun 21, 2016)

bug said:


> I think you're optimistic; the next gen won't offer playable 4K at a reasonable price. In my case, a reasonable price is ~$200. I'm betting on 2-3 more generations, but that depends on how 10-bit and HDR catch on, because those would incur further performance penalties.
> I would love to play games and edit my photos at 4K, but I'm not spending $599 on a video card. Hell, I'm not spending $400.



In 2-3 generations we'll have 8k screens...
"A new 31.5" IPS panel (part number tbc) with "8K4K" resolution of 7680 x 4320."
http://www.tftcentral.co.uk/news_archive/35.htm#panels_update


----------



## Nihilus (Jun 21, 2016)

Rather crummy results. I would like to see results with 6-core CPUs and up, but we shouldn't need a $1,000 platform to run SLI BEFORE we even buy the cards.

It seems the 980 Ti/Fury did much better with less CPU, even when going to TriFire/QuadFire.


----------



## okidna (Jun 21, 2016)

AHMAD_ESLIM said:


> Why the hell is Rise of the Tomb Raider not scaling well on DirectX 12??
> I guess Nvidia is ruining DirectX 12 performance again to make AMD's asynchronous compute look useless for DirectX 12



There's no official SLI or CrossFire support for the DirectX 12 version of Rise of the Tomb Raider.


----------



## azngreentea01 (Jun 21, 2016)

Would TechPowerUp be able to do a 1070 SLI review?

Also, it seems a waste of money to buy the new HB bridge; I don't see any difference.


----------



## mroofie (Jun 21, 2016)

Aquinus said:


> In other words, SLI'ing the 1080 only makes sense if you're running 4k, otherwise the improvement is pretty minor, probably due to a CPU bottleneck.


Another reason why we need DX12.


----------



## SpAwNtoHell (Jun 21, 2016)

OK, as W1zzard stated, and as the graphs show: SLI is worth it only at 4K, as long as games are made to scale in SLI, and when they do, the gain can be up to roughly 90%... And I have a feeling the HB bridge is useful only above 4K.

Not a fan of SLI, and like many I prefer a single-GPU solution, but I have to ask, W1zzard: how does the minimum frame rate scale in SLI? I had a look at some other SLI benchmarks with the 1080, and frankly I was disappointed, as the minimum was only a few FPS above a single card?!?! Though the averages pretty much reflected what you posted, obviously in the titles that match. I know some will argue it's not the same test system, but it was still pretty similar and shouldn't have affected things that much.

Another question: was the typical SLI micro-stuttering fixed or improved somehow, or is it the same with the 1080?

PS: the 1080 is not overkill for FHD if above 100 FPS is necessary as a minimum on a 144 Hz monitor, but yes, it will be for any lower resolution and refresh rate.


----------



## efikkan (Jun 21, 2016)

techy1 said:


> A clear step back in SLI scaling... yeah, I know it's always "the drivers aren't there yet", but that was a factor before too, and previous-gen cards in SLI offered bigger improvements.


It's more due to poorly coded games rather than poor drivers. Multi-GPU support is simply not a priority among developers.


----------



## jsfitz54 (Jun 21, 2016)

Chaitanya said:


> Plus you can always use two "classic" bridges in case extra "bandwidth" is needed.





W1zzard said:


> I did some very quick testing and it looks like only one bridge is active when two are connected (checked with a scope), but the scope is kinda slow with 100 MHz bandwidth, so no conclusive evidence.



*What is the take-away on this?*

Is the new HB bridge better, or just a superfluous *up-sell* over using two classic bridges?
Is it just "pretty" with lights?

*Only a single bridge is required?*

W1zzard said:
"You can still use the classic SLI bridge that came with your motherboard, though."

"As mentioned earlier, you can use the classic 2-way bridge included with your motherboard, and NVIDIA's driver will enable SLI for you, but with a reminder that you can improve your experience with a "higher performance SLI bridge." This reminder goes away as soon as a SLI HB bridge is installed."
[Page 2 of review.]

"So I picked a few scenarios and ran them with *one* flexible "old" bridge as well."
[Page 22 of review]


----------



## truth teller (Jun 21, 2016)

W1zzard said:


> The bridge isn't just two classic bridges fused into one, though.





W1zzard said:


> I did some very quick testing and it looks like only one bridge is active when two are connected (checked with a scope), but the scope is kinda slow with 100 MHz bandwidth, so no conclusive evidence.


Does the second part/side of that 40-western-euros contraption even become active too (signaling)? Or does it only provide power to the LED (lol)?
Couldn't you just trace the bridge PCB (continuity/ohm with a multimeter should do, if there are no visible traces on the PCB sides [an inner layer holds the traces]) and check whether it's any different from two bridges slammed together? Maybe, if the need for this bridge ever becomes a thing in the future (I really doubt that), people can just bodge two $2.40 SLI bridges together with some minor patches.
No performance improvement, what a rip-off, but then again, the people who buy it won't really care...


----------



## Sah7d (Jun 21, 2016)

So... in a few words: there is NO POINT in getting SLI or the HB bridge...

Nice, Nvidia... nice move!

I just received my GTX 1080. I had a GTX 980 Ti SLI setup, and all I can say is that there is less noise, the system runs cooler, and games barely manage good performance at 2560x1440 on ULTRA.
Yes, this card is better than SLI, because game support for SLI is HORRIBLE, but let's be honest: we are not ready for 4K gaming yet.

GTX 980 Ti SLI barely moves games at 40 FPS, and the GTX 1080 at 4K has no better performance at this stage either.
Sorry, but we need at least 75 FPS to get a smooth experience at 4K on ULTRA settings, and that is not going to happen soon; probably in 3 years, but it's not possible now. And this gen is not good with SLI at all, so draw your conclusions.


----------



## GreiverBlade (Jun 21, 2016)

They need to stop using MSRP... definitely.

"As low as $1,200" given the $599 base price, or $1,400 with the $699 Founders Edition... not likely...

OK, again: a 1080 SLI setup is more like $1,826 and up than $1,200 (considering custom-model prices of 899 CHF, or the actual Founders Edition price of 849 CHF: respectively $936.87 and $884.76).

Blessed are those who live where the actual price is close to the announced price (AKA: probably no one...).


----------



## ZoneDymo (Jun 21, 2016)

bug said:


> I think you're optimistic; the next gen won't offer playable 4K at a reasonable price. In my case, a reasonable price is ~$200. I'm betting on 2-3 more generations, but that depends on how 10-bit and HDR catch on, because those would incur further performance penalties.
> I would love to play games and edit my photos at 4K, but I'm not spending $599 on a video card. Hell, I'm not spending $400.



Well, keeping in mind the prices of years past, 200 bucks is quite cheap for a new video card (good job, AMD).
But I'm thinking: this takes two of these cards, and we still need to see the GTX 1080 Ti.
The GTX 1170 will probably have the performance of the GTX 1080 Ti whenever it's released.
The GTX 1260 will probably have the performance of the GTX 1170 whenever it's released.

That's why I think it comes down to about 2 gens.


----------



## the54thvoid (Jun 21, 2016)

ZoneDymo said:


> That's why I think it comes down to about 2 gens.



What about big Vega?

Also, 4K is already playable on a 980 Ti (which has dropped in price below the Fury X). It's also down to devs how well a game runs. BF4 still looks good after years, and Battlefront is, well, pretty (if boring).


----------



## jabbadap (Jun 21, 2016)

Hmm, does DSR work with GTX 1080 SLI? I wonder if you could test it at 5K or even 8K... Nonetheless, great review as always.


----------



## jihadjoe (Jun 21, 2016)

> Crysis 3 60fps @4k

It can play Crysis!


----------



## trog100 (Jun 21, 2016)

W1zzard said:


> I did some very quick testing and it looks like only one bridge is active when two are connected (checked with a scope), but the scope is kinda slow with 100 MHz bandwidth, so no conclusive evidence.




I tried it with my pair of 980 Ti cards.. it produced strange horizontal interference lines, so I gave up on the idea and went back to a single bridge..

So for me it didn't work.. I tried it for cosmetic reasons.. nothing else.. it looks better.. he he

trog


----------



## SimpleTECH (Jun 21, 2016)

Enterprise24 said:


> The HB bridge scales well in the Fire Strike benchmark.
> 
> http://www.vmodtech.com/th/article/nvidia-hb-bridge-for-pascal-gpus-review/page/2



Different drivers though.

TPU: 365.10
Vmodtech: 368.19


----------



## 2big2fail (Jun 21, 2016)

@W1zzard can you add 980ti SLI to the performance summary?


----------



## SithLord (Jun 21, 2016)

Good review. I was wondering about that SLI HB bridge; good to know it's not really needed, contrary to Nvidia's claims. Not that I was going to buy a 1080 (or two, for that matter), since I game at 1080p and a single 970 does just fine.


----------



## Rahmat Sofyan (Jun 21, 2016)

So in other words, the HB SLI bridge = the old SLI bridge, but with more style and blinking lights, plus extra wires inside?

I don't get it... it's already 2016, and we're still stuck with SLI bridges...


----------



## Cataclysm_ZA (Jun 21, 2016)

Wow, that's really disappointing. No wonder they won't put much stock into triple and quad SLI. There's just nothing that takes advantage of it now.


----------



## efikkan (Jun 21, 2016)

The SLI bridge will still be useful in the future. The multi-GPU support in Direct3D 12 doesn't change that, that's only the front-end changing. It will still be advantageous to use the bridge when using two Nvidia cards, and due to the latency problems without it this is what most multi-GPU users will continue to do.


----------



## Beast96GT (Jun 21, 2016)

Um, seems a lot of you guys didn't read the article.


> Right now, SLI HB bridges cost $39.99 and are in limited supply, especially the popular three-slot bridge most motherboards use is scarce. So as long as you are gaming at 4K@60Hz or lower, you will be fine with a classic bridge.


The HB SLI bridge is obviously not useless, it just fits a need that many of you may or may not have.  I don't game in 4K@60Hz or lower, I game in nVidia Surround (6000x1080) G-Sync 144hz.   Many of you may not believe it, but there is a large group of people that will buy two GTX 1080s, other than me, and put together far more tricked-out rigs than I've got.  I'm just really glad that TechPowerUp wrote such an awesome article to discern the difference.   Now game on.


----------



## iLLz (Jun 21, 2016)

In BF3 and BF4, you seem to be hitting the FPS cap of 200 that's set in the default config file for the game, considering the frame rate is equal at both 1080p and 1440p. Have you tried to raise the limit above 200? Is it even possible? Genuinely curious, as this could have an effect on some of the performance summary charts, since BF3 and BF4 make up a decent chunk of the games tested here.
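
For what it's worth, Frostbite titles expose the cap as a console variable that can also go in a user.cfg; the exact name below is from memory and worth double-checking:

```
GameTime.MaxVariableFps 200
```

Setting it to 0 reportedly removes the cap entirely.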


----------



## jsfitz54 (Jun 21, 2016)

Beast96GT said:


> Um, seems a lot of you guys didn't read the article.
> The HB SLI bridge is obviously not useless, it just fits a need that many of you may or may not have. I don't game in 4K@60Hz or lower, I game in nVidia Surround (6000x1080) G-Sync 144hz. Many of you may not believe it, but there is a large group of people that will buy two GTX 1080s, other than me, and put together far more tricked-out rigs than I've got. I'm just really glad that TechPowerUp wrote such an awesome article to discern the difference. Now game on.



The point is, I have two candy jars on the counter, each with the same candy in it; one jar says 5 cents, the other says 20 cents.
Would you buy the 20-cent candy?
If eye bling is the only criterion, then say so, and market it as such.
It appears there is no special magic in this new HB SLI bridge. Wow, it lights up...
If that's the case, shame on Nvidia for the money grab.


----------



## TirNaNog (Jun 21, 2016)

It seems that it scales quite well, unless there is no SLi support... Nice card though!


----------



## rainzor (Jun 22, 2016)

Well, I thought Nvidia (Petersen?) said the point of the HB bridge is to improve frame times, i.e., provide additional smoothness to an SLI setup. Also, there seems to be some CPU bottlenecking in FO4 and WoW, maybe even in Hitman as well, since the FPS is pretty much the same from 900p to 4K.


----------



## MxPhenom 216 (Jun 22, 2016)

AHMAD_ESLIM said:


> Why the hell is Rise of the Tomb Raider not scaling well on DirectX 12??
> I guess Nvidia is ruining DirectX 12 performance again to make AMD's asynchronous compute look useless for DirectX 12,
> like what they did by putting heavy tessellation in Crysis 2.
> ...



Yes, because we should totally blame Nvidia for a game developer's garbage implementation of a feature in an API.


----------



## Beast96GT (Jun 22, 2016)

> The point is, I have two candy jars on the counter, each with the same candy in it; one jar says 5 cents, the other says 20 cents.
> Would you buy the 20-cent candy?
> If eye bling is the only criterion, then say so, and market it as such.
> It appears there is no special magic in this new HB SLI bridge. Wow, it lights up...
> If that's the case, shame on Nvidia for the money grab.



That's _not_ the point.  And not a valid analogy at all.

This is from the article: 


> A single frame at 4K resolution is 32 MB (3840x2160x4 bytes per pixel). At 30 frames per second, that's 960 MB, which is just shy of the 1 GB/s bandwidth classic SLI provides. This means that SLI HB provides no benefit until you exceed that limit, which happens at higher resolutions, like 5K or 4K@120.



It does you no good unless you're running greater than 4K @ 60 Hz; that's what's written in the article. As I said, I plan to exceed the 1 GB/s bandwidth limitation of classic SLI. Nvidia Surround at 6000 x 1080 x 4 bytes per pixel at 144 Hz is about 3.7 GB/s. That said, I doubt I'll get 144 FPS. I need myself an HB SLI bridge; you may not.
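
The arithmetic quoted above generalizes to any display setup; a quick sketch (frame size times target frame rate, assuming 4 bytes per pixel as in the article's example):

```python
# Back-of-the-envelope SLI bridge bandwidth: bytes per frame
# (width x height x 4 for 32-bit color) times the target frame rate.

def bridge_gbps(width, height, fps, bytes_per_pixel=4):
    """Required transfer rate in GB/s to move one full frame per frame."""
    return width * height * bytes_per_pixel * fps / 1e9

# 4K at 30 FPS: just under the ~1 GB/s a classic bridge provides
print(bridge_gbps(3840, 2160, 30))    # ~0.995 GB/s
# Surround 6000x1080 at 144 Hz: several times a classic bridge
print(bridge_gbps(6000, 1080, 144))   # ~3.73 GB/s
```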


----------



## jsfitz54 (Jun 22, 2016)

Beast96GT said:


> That's _not_ the point. And not a valid analogy at all.



You aren't seeing things clearly. The community here understands you need a bridge for SLI. The point is that, according to the review, you don't need this particular bridge; there is no clear benefit in paying extra for this component over what is supplied by your motherboard manufacturer.

To that end, my analogy is spot on.

"The SLI HB bridge, which uses a rock-solid metal case, reveals a rigid fiberglass PCB with two SLI slots wired along its length when taken apart. Each slot is reinforced with a metal sheath, much like PCI-Express x16 slots are in some of the newer motherboards. *The bridge isn't just two classic bridges fused into one, though. It has a green LED that lights up only when SLI is enabled at the driver-level*. On custom-design SLI HB bridges by NVIDIA partners, LEDs of different colors are available. Some even have RGB LEDs with manual color selection."

The magic is in the driver, not the bridge. SEE PAGE 22 of the review. I am not blinded, impressed, or willing to pay more for the shiny green LED that lights up.

If Nvidia wants to say, "Hey, we redesigned our FE shroud, and here is a bridge that complements our new design," that's OK.
But to claim that connecting two points of metal together with this particular HB bridge implies some sort of advanced magic is false.
This HB bridge doesn't have anything over any previous designs that use ribbon cables to connect two points together.
I did not see any advanced circuit design on the HB bridge pictured; the only chips controlled the LED lighting.
Any old bridge, or two, will do.
I never said bridges aren't required.


----------



## Beast96GT (Jun 22, 2016)

> I did not see any advanced circuit design on the HB bridge pictured; the only chips controlled the LED lighting.
> Any old bridge, or two, will do.
> I never said bridges aren't required.



I'm glad you can tell 'advanced circuit design' by looking at a picture. This is crazy, and um, I think I'll go with Nvidia and the author of the review, who says the HIGH BANDWIDTH SLI bridge has HIGH BANDWIDTH, instead of listening to you.


----------



## Siman00 (Jun 22, 2016)

The article is also a bit off about how PCIe works... DMA allows a PCIe device to act as bus master, so traffic never has to "go to the CPU". I'm surprised, to say the least, that this article came out on a tech site without editing... There is a little overhead in a PCIe setup, as it operates more like a packet-switched network with lower-end routing capabilities...


----------



## Frick (Jun 22, 2016)

jsfitz54 said:


> You aren't seeing things clearly. The community here understands you need a bridge for SLI. The point is that, according to the review, you don't need this particular bridge; there is no clear benefit in paying extra for this component over what is supplied by your motherboard manufacturer.



You're actually agreeing with each other. You don't need the bridge until you go 4K 120 Hz, or Surround 144 Hz like @Beast96GT. At least according to the specs; I would love to see it tested properly. And no, W1z's tests are not enough, as he probably doesn't exceed that bandwidth, and as said on the last page, exceeding it wouldn't result in FPS drops but in stuttering.


----------



## TirNaNog (Jun 22, 2016)

Anyway, a 1080 SLI setup would not run 4K 120 Hz at 120 FPS, nor 1080p/1440p Surround 144 Hz at 144 FPS, in any modern game...
Maybe a couple of OC'ed 1080 Tis will do, though!


----------



## Tatty_One (Jun 22, 2016)

Beast96GT said:


> I'm glad you can tell 'advanced circuit design' by looking a picture.  This is crazy and um, I think I'll go with Nvidia and the author of the story that says the HIGH BANDWIDTH SLI bridge has HIGH BANDWIDTH instead of listening to you.


And of course you can choose to believe whoever you wish; in this case you choose to go with someone who has something to gain from sales of the bridge, as opposed to someone who doesn't.


----------



## Beast96GT (Jun 22, 2016)

Tatty_One said:


> And of course you can choose to believe whoever you wish; in this case you choose to go with someone who has something to gain from sales of the bridge, as opposed to someone who doesn't.



I bet NVidia just rolls in hundreds of dollars from HB SLI bridges!  What a great money making deal!  That's what you believe?  Really?     No getting anything past you...


----------



## Breit (Jun 22, 2016)

AHMAD_ESLIM said:


> Why the hell is Rise of the Tomb Raider not scaling well on DirectX 12??
> I guess Nvidia is ruining DirectX 12 performance again to make AMD's asynchronous compute look useless for DirectX 12,
> like what they did by putting heavy tessellation in Crysis 2.
> ...



Rise of the Tomb Raider disables the second card when run in DX12 mode, and it always has. There's no point, really, in benching SLI under DX12. The review should at least mention that and also provide a DX11 score!


----------



## BiggieShady (Jun 22, 2016)

Siman00 said:


> DMA allows a PCIe device to act as bus master, so traffic never has to "go to the CPU"


Yes, every PCIe device can act as a master and use DMA to transfer data without having to go through the CPU, but the CPU (driver thread) initiates the transfer, and that happens at the end of every frame, after all the draw calls have been issued (to correctly sync the VRAM of both GPUs).

As I see it, the big difference here is in hiding latency to better control SLI frame pacing: with a direct link between the two GPUs, while the CPU is busy issuing draw calls, the SLI link can be busy syncing the data progressively as the draw calls arrive. Modern engines use many frame buffers and g-buffers that get composited into the final frame buffer, so the bandwidth requirements are getting higher... meaning a good PCIe DMA transfer implementation would require batching all transfers into one so that the CPU can fire and forget and go back to the draw calls. Other than saturating the PCIe bus at that particular moment, that level of granularity goes against the effort to hide latency and do proper frame pacing.

Of course, I'm just guessing, as this is all speculation about Nvidia's secret-sauce recipe.
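
To put a number on the batching point: pay a fixed setup cost per transfer, and one batched transfer amortizes it while per-buffer transfers multiply it. Everything here is a made-up latency model, not measured PCIe behavior:

```python
# Hypothetical cost model for syncing several render targets between GPUs.
# Each DMA transfer pays a fixed setup overhead; batching all buffers into
# one transfer pays it once, at the cost of a single big burst on the bus.

SETUP_US = 50.0     # assumed per-transfer setup overhead (microseconds)
US_PER_MB = 100.0   # assumed transfer cost per megabyte

def sync_time_us(buffer_sizes_mb, batched):
    transfers = 1 if batched else len(buffer_sizes_mb)
    return transfers * SETUP_US + sum(buffer_sizes_mb) * US_PER_MB

buffers = [32, 32, 16, 16, 8]   # final frame plus a few g-buffers, in MB
print(sync_time_us(buffers, batched=False))  # five setup costs
print(sync_time_us(buffers, batched=True))   # one setup cost
```

The same total data moves either way; batching only saves the repeated setup overhead, which is exactly the fire-and-forget trade-off described above.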


----------



## Tatty_One (Jun 22, 2016)

Beast96GT said:


> I bet NVidia just rolls in hundreds of dollars from HB SLI bridges!  What a great money making deal!  That's what you believe?  Really?     No getting anything past you...


Well, I would assume that they are not in it to lose money? But of course you know very well that was not my point, which was that these companies have a lengthy history of misleading consumers. The fact that some would choose to believe what they say is, as I said earlier, up to them; if I adopted the same stance, I would not read any reviews, but just pop over to AMD's or Nvidia's (or anyone else's) site and absorb the marketing.


----------



## Breit (Jun 22, 2016)

Siman00 said:


> The artical is also a bit off about how pcie works... Dma allows pcie to dynamically allow a bus master controller so it never has to "go to the cpu". Im surprised to say the least this artical came out on a tech site without editing... There is a little overhead on a pcie set up as it operates more like a packet switched network with lower end routing capabilities...


I'm guessing he meant that the PCIe root complex sits in the CPU and since PCIe is a point-to-point connection and not a common bus, the data has to go through the CPU (the root complex) in order to reach the second card. It however does not mean that the data from GPU1 is being processed in the CPU (introducing additional latency) before it is on its way to GPU2.


----------



## Beast96GT (Jun 22, 2016)

Tatty_One said:


> Well, I would assume that they are not in it to lose money? But of course you know very well that was not my point, which was that these companies have a lengthy history of misleading consumers. The fact that some would choose to believe what they say is, as I said earlier, up to them; if I adopted the same stance, I would not read any reviews, but just pop over to AMD's or Nvidia's (or anyone else's) site and absorb the marketing.



You're implying that the HB SLI bridge was created just so that Nvidia can mislead consumers and make money, because you think there's no difference between the HB bridge and the classic bridge. I'm saying you have no reason to believe that at all, but you do. To be fair, it would be better if the author of this article included more extensive testing, such as Surround and 4K @ 120 Hz, etc. For now, it shows that you can use a classic bridge if you run lower resolutions with slower FPS. Have at it.


----------



## Tatty_One (Jun 22, 2016)

Beast96GT said:


> You're implying that the HB SLI bridge was created just so that Nvidia can mislead consumers and make money, because you think there's no difference between the HB bridge and the classic bridge. I'm saying you have no reason to believe that at all, but you do. To be fair, it would be better if the author of this article included more extensive testing, such as Surround and 4K @ 120 Hz, etc. For now, it shows that you can use a classic bridge if you run lower resolutions with slower FPS. Have at it.



You are wrong; I am not implying anything about the HB bridge at all. I am implying that sometimes we consumers get misled, and therefore it's not always a good thing to take what they tell us at face value. I didn't disagree with your original comment; I merely commented on the fact that you would prefer to believe the company, in the context of my first sentence here. You could have been talking about the 4 GB of memory on the GTX 970 and I would have said the same, or, just to keep things fair, the performance of Bulldozer.


----------



## cadaveca (Jun 22, 2016)

Beast96GT said:


> You're implying that the HB SLI bridge was created just so that Nvidia can mislead consumers and make money, because you think there's no difference between the HB bridge and the classic bridge. I'm saying you have no reason to believe that at all, but you do. To be fair, it would be better if the author of this article included more extensive testing, such as Surround and 4K @ 120 Hz, etc. For now, it shows that you can use a classic bridge if you run lower resolutions with slower FPS. Have at it.


What you've said, in my eyes, is "here, we got new cards, but if you want the best, buy this additional hardware."

It's hard not to take it like that. I'm the guy who has one of the MSI advanced bridges that came out with the 9-series GPUs, which I paid $50 for. But I bought it for looks, not performance. I also went from Tri-SLI 780 Ti (which I paid about $700 a card for) to a single GTX 980. I had one GTX 980 that I was given by MSI to use in my motherboard and memory reviews, and was so happy with it that I bought a second to replace those 780 Tis for $650. I then bought the MSI bridge for when I wanted to SLI the two cards.








Now, If I want to get a couple of 1080's, which I do want, I gotta pay these prices:








I want dual 1080s because I want to upgrade my 2560x1600 Dell 3008WFP (that I've had since, like, 2008) to a 4K panel, which is also a considerable cost, since I don't want to downsize from that 30-inch size...

And I'm left asking myself: since the "normal" bridge comes free with motherboards, how come there isn't a coupon in the box with the 1080s for this bridge? I gotta spend another "X" dollars when I'm already looking at roughly $3,000 for a video solution? Peanuts! But dammit, where's the enticement to go that route? Like, I'm waiting for MSI GAMING-Z cards at least. I'd like a bridge that matches the cards, and I want a decent panel, too, so I'm definitely being picky about what I spend this cash on, and rightly so when it's a considerable chunk of change we are talking about. I'm not broke, either; my other hobby is RC vehicles, and I can get some damn good stuff and a heck of a lot of hours of enjoyment out of that $3,000+. If I spend it on my other hobby, I'll get rebates at the least! I won't even get a free game with these VGAs...


----------



## Beast96GT (Jun 22, 2016)

Tatty_One said:


> You are wrong; I am not implying anything about the HB bridge at all. I am implying that sometimes we consumers get misled, and therefore it's not always a good thing to take what they tell us at face value. I didn't disagree with your original comment; I merely commented on the fact that you would prefer to believe the company, in the context of my first sentence here. You could have been talking about the 4 GB of memory on the GTX 970 and I would have said the same, or, just to keep things fair, the performance of Bulldozer.



The original poster, jsfitz54, was the one who said there was no difference between the bridges at all, other than maybe an LED. So when you jump in and support the idea behind his logic (that Nvidia was simply creating the same product with a different name and look to mislead consumers), it would appear you feel the same way. If not, I'm glad you're open to the idea that the HB bridge actually _is_ different from the classic bridge. But we need more tests to determine the differences, as this article doesn't answer the question.



cadaveca said:


> What you've said, in my eyes, is "here, we got new cards, but if you want the best, buy this additional hardware."
> 
> Its hard to not to take it like that. I'm the guy that has one of the MSI advanced bridges that came out with the 9-series GPUs that I paid $50 for. But I bought it for looks, not performance. I also went from Tri-SLI 780 TI (that I paid about $700 a card for) to a single GTX980. I had one GTX980 that I was given by MSI to use in my motherboard and memory reviews, and was so happy with it that I bought a second to replace those 780 TIs for $650. I then bought the MSI bridge for when I wanted to SLI the two cards.



Yep.  Just like when you buy a new DDR5 motherboard, you need DDR5 memory.  At least in the case of the bridge, you don't _have_ to upgrade; the old one will work, albeit with possibly less throughput.  Ah, the price of having the best.


----------



## yogurt_21 (Jun 22, 2016)

So when SLI works, 4K scaling is epic. Clearly a solution only meant for high-end enthusiasts, hence the $1,200-1,800 price point. Still, for that group, several games saw the 2nd card scale above 80% at 4K. So there is something there for the money.

Still, it's odd that so many games would shelve SLI support. The argument of "well, consoles are only single GPU" can only go so far. Consoles are all AMD; I'm sure it would be much easier to just go all AMD and call it a day if that were the case. So they are already spending money on NV support, why not add in SLI? Given the way NV has always catered to devs and handed out free support for optimization, you would think it wouldn't be too hard. Unless of course it's Nvidia itself that is shelving SLI, or at least backing away from it.
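The "scale above 80%" figure works out from the single- and dual-GPU frame rates. A minimal sketch of that calculation (hypothetical helper and example numbers, not figures from the review):

```python
# Hypothetical helper: SLI scaling efficiency of the second card,
# computed from single-GPU and dual-GPU average FPS.
def sli_scaling(fps_single: float, fps_sli: float) -> float:
    """Return the second card's contribution as a percentage of a
    perfect 2x speedup (100% = ideal scaling, 0% = no gain)."""
    return (fps_sli / fps_single - 1.0) * 100.0

# Illustrative numbers only: a game going from 40 to 74 FPS at 4K
# means the second GPU adds 85% of another card's worth of performance.
print(round(sli_scaling(40, 74)))  # 85
```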


----------



## Tatty_One (Jun 22, 2016)

Beast96GT said:


> The original poster, jsfitz54, was who said there was no difference in the bridges at all, maybe other than an LED.  So when you jump in and support the idea behind his logic--that Nvidia was simply creating the same product with a different name and look to mislead consumers--it would appear you feel the same way.  If not, I'm glad you're open to the idea that the HB Bridge actually _is_ different from the classic bridge.   But we need more tests to determine the differences, as this article doesn't answer the question.



I quoted your post in my first post in this thread, not his (the OP's), so how on earth am I "jumping in and supporting his logic"?  You commented on how he could know by looking at a picture and that you would therefore prefer to go with Nvidia's statement/claim. I merely pointed out that in my opinion their claims are sometimes misleading, so I will say one more time: my comment has nothing to do with the bridge, just with who we choose to believe.


----------



## truth teller (Jun 22, 2016)

Beast96GT said:


> You're implying that the HB SLI Bridge was created just so that Nvidia can mislead consumers and make money, because you think there's no difference between the HB Bridge and the Classic Bridge.  I'm saying you have no reason to believe that at all, but you do.  To be fair, it would be better if the author of this article had included more extensive testing, such as Surround and 4K @ 120 Hz, etc.  For now, it shows that you can use a classic bridge if you run lower resolutions with slower fps.  Have at it.





Siman00 said:


> The article is also a bit off about how PCIe works... DMA allows PCIe to dynamically assign a bus master, so it never has to "go to the CPU". I'm surprised, to say the least, that this article came out on a tech site without editing... There is a little overhead on a PCIe setup, as it operates more like a packet-switched network with lower-end routing capabilities...



"new posters" defending nbadia ripping off customers with two $2.5 bridges and 1 led slammed together for 40 bucks. how much are you getting paid for this?


----------



## Beast96GT (Jun 22, 2016)

truth teller said:


> "new posters" defending nbadia ripping off customers with two $2.5 bridges and 1 led slammed together for 40 bucks. how much are you getting paid for this?



Defending?  No.  Ripping off customers?  Hah, now THAT'S a statement!  They holding you hostage and stealing your money, truth??     Ooo, it's all conspiracy, huh!  People not trashing Nvidia--they must be getting paid!     No, I'm new because I wanted to read about the SLI tests just like everyone else.  EAD.


----------



## GhostRyder (Jun 22, 2016)

Well, I don't think it's any surprise the new bridge makes no distinguishable difference (at least at the moment).  SLI has not been held back by the old one, so why would the new one make it any better?  The difference may appear later when we get ridiculous 120 Hz setups at 4K, and maybe even surround 1440p/2160p setups, but for now it's only for looks (heck, if I go the 1080 or 1080 Ti this round, I may grab one for the looks alone).

The only thing I find disappointing is the scaling in SLI... SLI scaling has been pretty good in the past, and this seems unusually low even when removing the games that don't scale.  We probably just need to give it some time, as drivers are still premature on cards with the new architecture.  Oh well, at least we know using the old bridges won't harm our experience in any way!


----------



## efikkan (Jun 22, 2016)

BiggieShady said:


> Yes, every PCIe device can act as a master and use DMA to transfer data without having to go through cpu


A GPU still can't transfer directly to another GPU without the CPU in the current implementation.



BiggieShady said:


> but the cpu (driver thread) initiates the transfer and that happens at the end of every frame after all the draw calls have been issued (to correctly sync the vram of both gpus) ... as I see it, the big difference here is in hiding the latency to better control SLI frame pacing: with direct link between two gpus, as the cpu is busy issuing the draw calls, direct sli link can be busy syncing the data progressively as the draw calls arrive.


The SLI bridge has been used up to this day for the purpose of synchronizing the GPUs, which works much better than CrossFireX. Still, all texture data etc. is transferred over PCIe. It's possible that Nvidia would want to transfer something over the SLI HB, but that is still going to be limited to a few MB per frame in order to not cause too much latency.

Any game which needs to synchronize a lot of data in a multi-GPU configuration is going to scale very badly, basically rendering the multi-GPU configuration pointless. E.g., if your game should scale to multiple GPUs, then any shared data should be complete before the next GPU starts, or your game is going to suffer. Transferring hundreds of MB is basically out of the question, since it's too costly.



BiggieShady said:


> Modern engines use many frame buffers and g-buffers that get composited into the final frame buffer so the bandwidth requirements are getting higher ... meaning good pcie dma transfer implementation would require batching all transfers into one so that cpu can fire and forget and go back to the draw calls. Other than saturating the pcie bus at that particular moment, that level of granularity goes against the effort to hide the latency and do proper frame pacing.
> Of course, I'm just guessing as this is all speculation for the nvidia secret sauce recipe.


Although split frame rendering is still technically possible, there are basically no games which utilize it, since it scales very badly and has a high overhead cost. I only see this used in professional applications, because it _can_ work very well when the scene is "static" and has super-high details, but it works very badly in a game where a camera moves through a landscape. Splitting regions or rendering passes between GPUs is possible but scales badly and greatly limits any changes in data.

This is why AFR is the only relevant multi-GPU rendering for games, and provided that the engine is designed accordingly this can scale well on 2-4 GPUs.
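The AFR scheme described here amounts to a trivial round-robin schedule. A minimal illustration (my sketch, not NVIDIA's actual scheduler):

```python
# Alternate frame rendering, reduced to its core idea: with n GPUs,
# frame i is handed to GPU i mod n, so each GPU renders only every
# n-th frame and, dependencies permitting, throughput scales ~n-fold.
def afr_assignment(num_frames: int, num_gpus: int) -> list[int]:
    return [frame % num_gpus for frame in range(num_frames)]

print(afr_assignment(6, 2))  # [0, 1, 0, 1, 0, 1]
```

Any data a frame needs from the previous frame lives on the *other* GPU, which is exactly the synchronization cost discussed above.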



GhostRyder said:


> The only thing I see disappointing is the scaling in SLI... SLI scaling has been pretty good in the past and this seems unusually low even when removing the games that don't scale.  Probably need to just give it some time as drivers are still premature on the cards with the new architecture.  Oh well, at least we know using the old bridges won't harm our experience in anyway!


No, multi-GPU scaling is up to the game engines, and there are simply far too few games prioritizing this today. On the other hand, microstuttering between frames in multi-GPU rendering is up to Nvidia/AMD to solve.


----------



## BiggieShady (Jun 22, 2016)

efikkan said:


> A GPU still can't transfer directly to another GPU without the CPU in the current implementation.
> 
> 
> The SLI bridge has been used up to this day for the purpose of synchronizing the GPUs, which works much better than CrossFireX. Still, all texture data etc. is transfered over PCIe. It's possible that Nvidia would want to transfer something over the SLI HB, but that is still going to be limited to a few MB per frame in order to not cause too much latency.
> ...



With new diff-based memory compression algorithms, frame buffer transfer between multiple GPUs could benefit from that... even with AFR, there is an issue with any post-processing shader that takes many frames into account, like temporal anti-aliasing.
With split frame rendering, the open-terrain graphics issue where the sky is easier to render than the terrain can be solved with a vertical split. I don't see how much overhead there is for the CPU to batch 2 sets of draw calls, one per GPU (each half of the frustum) - the resources are the same on both cards.
And with AFR, having shared data ready before the other GPU starts rendering gets much harder when the fps to scale to hits triple digits... the question is how much uncompressed data the HB bridge can transfer in 8 ms (120 fps).
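That closing question can be answered back-of-envelope. A rough check under assumed figures (an uncompressed 4K RGBA8 frame, and the ~2 GB/s HB bridge bandwidth quoted later in the thread):

```python
# Assumed numbers, not measured: uncompressed 4K frame at 4 bytes/pixel
# vs. what a ~2 GB/s SLI HB bridge can move within one 120 Hz frame.
frame_bytes = 3840 * 2160 * 4      # 4K RGBA8 ≈ 33.2 MB
budget_bytes = 2e9 * (1 / 120)     # 2 GB/s over an 8.3 ms frame ≈ 16.7 MB

print(f"frame:  {frame_bytes / 1e6:.1f} MB")   # frame:  33.2 MB
print(f"budget: {budget_bytes / 1e6:.1f} MB")  # budget: 16.7 MB
print("fits:", frame_bytes <= budget_bytes)    # fits: False
```

So even an ideal HB bridge would move only about half an uncompressed 4K frame per 120 Hz interval, under these assumptions.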


----------



## GhostRyder (Jun 22, 2016)

efikkan said:


> No, multi-GPU scaling is up to the game engines, and it's simply far too few games prioritizing this today. On the other hand, microstuttering between frames in multi-GPU rendering is up to Nvidia/AMD to solve.


Not exactly. While yes, they can program it better to work with multi-GPU, it's also up to the graphics companies to make it efficient and able to utilize it.  Otherwise we would not have to wait for profiles except to fix bugs...  That is also the reason why some games show vast improvements with different driver revisions in SLI or CFX.


----------



## efikkan (Jun 22, 2016)

BiggieShady said:


> With new diff based memory compression algorithms frame buffer transfer between multiple gpus could benefit from that ...


That would not be nearly enough.

A normal SLI bridge has a bandwidth of 1 GB/s. At 120 FPS 1 GB/s is 8.5 MB per frame. So using basic math you can see that just transferring the final image over this bus is not an option. In fact, even over PCIe it's pretty slow to transfer between GPUs:


 
(source at ~29:30)
This is part of the explanation for why we get microstuttering in multi-GPU configurations. The picture from the second GPU needs to be transferred back to the primary GPU while also disturbing it. This is why we get the typical pattern of a long interval between frames, followed by a shorter interval, followed by a longer one, and so on. Even a 1-2 ms difference is quite noticeable.
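That long/short cadence falls straight out of a toy pacing model (assumed numbers, purely illustrative, not measurements):

```python
# Toy AFR pacing model: both GPUs finish a frame every 16 ms, but every
# second frame (from the secondary GPU) lands 4 ms late because of the
# transfer back to the primary. Presentation intervals then alternate
# long/short instead of holding a steady 16 ms -- the microstutter pattern.
render_ms, transfer_ms, frames = 16.0, 4.0, 6

present = [i * render_ms + (transfer_ms if i % 2 else 0.0) for i in range(frames)]
intervals = [round(b - a, 1) for a, b in zip(present, present[1:])]
print(intervals)  # [20.0, 12.0, 20.0, 12.0, 20.0]
```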



BiggieShady said:


> even with AFR there is an issue with any postprocessing shader that takes many frames into account like temporal anti-aliasing.


True, it's one of several types of dependencies which will limit the multi-GPU scalability of a game.
BTW; people should use proper AA like MSAA or SSAA rather than these poor AA techniques.



BiggieShady said:


> With split frame rendering, open terrain graphics issue where sky is easier to render than terrain can be solved with vertical split.


It depends on what kind of camera angles and movements you are talking about. I would have no problem creating a terrain where the left part of the screen is 10 times as demanding to render as the right. As mentioned, it's technically possible to do split frame rendering, but you will get very poor utilization for a free camera, barely any gain at all. AFR, on the other hand, can (if your engine is well designed) double, triple, or quadruple performance with more GPUs.



BiggieShady said:


> And with AFR shared data being ready before other gpu starts rendering is getting much harder when fps to scale has triple digits ... the question is how much uncompressed data can HB bridge transfer in 8 ms (120 fps)


Even if the HB bridge has double the lanes (assuming 2 GB/s total), we are only talking about 17 MB per frame. You'll at least need NVLink to do what you are dreaming about...



GhostRyder said:


> Not exactly, while yes they can program it better to work with multi-GPU its also up to the graphics companies to make it efficient at it and be able to utilize it.  We would not have to wait for profiles except to fix bugs otherwise...  That is also the reason why some games show vast improvements with different driver revisions in SLI or CFX.


No, this is a common misunderstanding.
It's up to the game engine developers to design well for multi-GPU support (how queues are built up).

What you are confused about are driver-side tweaks from AMD and Nvidia, which are manipulations of the driver and the game's rendering pipeline to work around issues. These tweaks are limited in scope and can only do minor manipulations, such as working around strange bugs. If the game is not built for multi-GPU scaling, no patch from Nvidia or AMD can fix that properly.


----------



## BiggieShady (Jun 23, 2016)

efikkan said:


> The picture from the second GPU needs to be transferred back to the primary GPU while also disturbing it. This is why we get the typical pattern of a long interval between frames, followed by a shorter interval, followed by a longer, ... Even a 1-2 ms difference is quite noticable.


Right, all things I'm "dreaming about" assume some hypothetical new architecture ... in this case specifically, a data sync for the next frame that doesn't disturb the rendering of the current frame - sync that happens independently while the current frame is being rendered to hide at least the part of that latency ... possibly through nvlink


efikkan said:


> BTW; people should use proper AA like MSAA or SSAA rather than these poor AA techniques.


I don't know about that, going to 4k seems to need less AA and more temporal AA to reduce shimmering with camera movement


efikkan said:


> It depends on what kind of camera angles and movements you are talking about. I would have no problems creating a terrain which the left part of the screen is 10 times as demanding to render as the right.


Oh yes, to successfully cover all scenarios, you'll need to split the frame buffer into tiles (similar to what DICE did in the Frostbite engine for the Cell implementation on PS3) and distribute tile sets of similar total complexity to different GPUs... which would increase CPU overhead, but I think it's worth investigating for engine makers... maybe even use the integrated GPU (or one of the multiple GPUs asymmetrically, using simultaneous multi-projection) in a preprocessing step only, to help speed up dividing jobs between GPUs based on geometry and shading complexity (on a screen-tile level, not the pixel level).


efikkan said:


> You'll at least need NVLink to do what you are dreaming about...


Agreed


----------



## truth teller (Jun 23, 2016)

Beast96GT said:


> Defending?  No.


right...



Beast96GT said:


> Ripping off customers?  Hah, now THAT'S a statement!  They holding you hostage and stealing your money, truth??


what is physx
what is gsync
what is a "new and improved" sli bridge "requirement" for a "how its meant to be played" experience, that _costs 8 times more_ than the old "alternative"



Beast96GT said:


> People not trashing Nvidia--they must be getting paid!     No, I'm new because I wanted to read about the SLI tests just like everyone else.


if you just wanted to read about it, then why did you bother to create that account and post all that "not defending" stuff?



Beast96GT said:


> EAD.


resorting to profanity and insults already? what the matter, running out of arguments already? lol


----------



## qubit (Jun 23, 2016)

Tatty_One said:


> if I adopted the same stance I would not read any reviews but just pop over to AMD or NVidia (or anyone else) site and absorb the marketing.


mmmmm, marketing. Tasty and nutritious! Gonna go get me some press release infusions...


----------



## efikkan (Jun 23, 2016)

BiggieShady said:


> Right, all things I'm "dreaming about" assume some hypothetical new architecture ... in this case specifically, a data sync for the next frame that doesn't disturb the rendering of the current frame - sync that happens independently while the current frame is being rendered to hide at least the part of that latency ... possibly through nvlink


The thing that bothers me about multi-GPU is not only that we actually need it, but also that it's something that should be solvable. As mentioned, multi-GPU scaling is up to the games, but all the stuttering problems are up to the drivers/hardware. NVLink would be nice; a big NVLink SLI bridge would be awesome. Still, we would need more speed than both 1st-generation NVLink and PCIe 3.0 provide in order to push the transfer time of the final frame down to <0.1 ms, so that won't happen any time soon.

I do have a "solution" that will work though without any hardware changes; create a new "AFR mode" which uses the primary GPU only for display, and all the rendering is done by "slaves". At least then the latency will be constant, but there still will be some latency.

Back to my previous point that we need multi-GPU. Over the last ~3 years or so, the gaming market has changed from gamers targeting 1080p/60 Hz to 1440p/120 Hz, and soon 2160p/144+ Hz. Even with the great improvements of Kepler, Maxwell and Pascal, gamers are not able to get the performance they want. We are not going to get to the point where a single GPU is "fast enough" in the next two GPU generations (Volta/post-Volta), so there is a place for multi-GPU. The problem with multi-GPU is essentially a chicken-and-egg problem, but I'm pretty sure that if multi-GPU were "stutter free" and scaled well in most top games, then a lot of people would buy it. At this point it's more or less a "benchmarking feature"; people who want smooth gaming stay away from it. (Multi-GPU works fine, though, for professional graphics where microstutter doesn't matter...)



BiggieShady said:


> I don't know about that, going to 4k seems to need less AA and more temporal AA to reduce shimmering with camera movement


You are right that shimmering/flicker is a problem with camera movement, especially when there are a lot of objects moving only slightly (like grass waving in the wind). TXAA combines MSAA with temporal filtering and previous pixels, which essentially blurs the picture; that kind of defeats the purpose of higher resolution in the first place, since it removes the sharpness and details. The reason for adopting this technique is that proper AA is too expensive at higher resolutions.

The advantage of SSAA (supersampling) and MSAA is that they reduce aliasing while retaining sharpness. The problem with all types of post-process (or semi post-process) AA techniques (incl. TXAA) is that they work on the rendered image and can essentially just run different filters on existing data. Proper AA, on the other hand, samples the data at a higher resolution and then averages it out, essentially just rendering at a higher resolution. SSAA is the best and most demanding: even 2x SSAA almost quadruples the load, essentially rendering a 1080p image in 4K and scaling down the result. You might understand why this gets expensive at 1440p, 2160p and so on...
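The "2x SSAA quadruples the load" arithmetic is easy to verify (a sketch under the usual assumption that 2x SSAA means 2x per axis, with shading cost proportional to sample count):

```python
# Assumed cost model: SSAA at factor k per axis shades k*k samples per
# output pixel, so shading cost scales with width * height * k^2.
def ssaa_samples(width: int, height: int, k: int) -> int:
    return width * height * k * k

# 2x SSAA of a 1080p frame shades exactly as many samples as native 4K:
print(ssaa_samples(1920, 1080, 2) == 3840 * 2160)   # True
# ...and is ~4x the load of rendering the same frame without SSAA:
print(ssaa_samples(2560, 1440, 2) / (2560 * 1440))  # 4.0
```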



BiggieShady said:


> Oh yes, to successfully cover all scenarios, you'll need to split frame buffer into tiles (similar to what Dice did in Frostbite engine for cell implementation in PS3) and distribute tile sets with similar total complexity to different gpus ... which would increase cpu overhead but I think it's worth investigating for engine makers ...


I'm quite serious; I'm very familiar with landscape rendering. Most games render the terrain from a first-person perspective, which makes some regions quick to render and some slow. If you split vertically down the middle, you'll end up in a situation where both GPUs are never well saturated, and continuously adjusting the split is not a good solution either. This is the reason why split frame rendering has been abandoned for games. Why use it when AFR scales so much better?



BiggieShady said:


> maybe even use integrated gpu (or one of the multi gpus asymetrically using simultaneous multi-projection) in a preprocessing step only to help speed up dividing jobs for multi gpus based on geometry and shading complexity (on a screen tile level, not on the pixel level).
> Agreed


I'm just going to address this quickly. Offloading some simple stuff to an integrated GPU is of course possible, but the latency would not make it worth the effort. The integrated GPU will only perform at like 2-5% of a powerful one anyway, and you have to remember that every MB of transfer between them is expensive. If you want to offload some preprocessing, physics, etc., then the result needs to be as compact as possible. If you need to transfer something like 250 MB each frame, then you'll get like 20 FPS and huge latency problems.


----------



## Steevo (Jun 23, 2016)

Enterprise24 said:


> HB bridge scale well in firestrike benchmark.
> 
> http://www.vmodtech.com/th/article/nvidia-hb-bridge-for-pascal-gpus-review/page/2




That game is awesome to play.......


But I prefer games with more play time and interaction than sitting and stroking my penis and looking at numbers.


----------



## Enterprise24 (Jun 24, 2016)

Steevo said:


> That game is awesome to play.......
> 
> 
> But I prefer games with more play time and interaction than sitting and stroking my penis and looking at numbers.



I just show that in synthetic benchmark it has some scaling. No need to troll.


----------



## BiggieShady (Jun 24, 2016)

efikkan said:


> I'm quite serious, I'm very familiar with landscape rendering.


So far there is nothing to prove otherwise, so no worries.
Consequently, you should quite seriously read how DICE cleverly divided deferred rendering jobs among all the SPUs in the Cell processor in the old PS3 version of the Frostbite engine: http://www.slideshare.net/DICEStudio/spubased-deferred-shading-in-battlefield-3-for-playstation-3
They did it for different reasons... to help the weak GPU with the lighting pass (multiple lights + global illumination light probes).
I'm suggesting research following the same principle, not the exact same solution.
No need to get stuck on either a horizontal or a vertical split of the frame into two pieces, when the engine workload for a single frame is already split into passes and a tile-based approach may offer better granularity for balanced job division, for example:



 
these are the tiles for the lighting pass, rated by calculation complexity. Every pass could work similarly. Essentially doing split frame rendering by splitting the workload rather than the frame.


----------



## enya64 (Jun 24, 2016)

If the new sli bridges eliminated microstutter, then I wouldn't care if there was no improvement in frame rate. But reading other forums, sli users are still dealing with the same level of microstutter as before.


----------



## Breit (Jun 24, 2016)

To end all the discussion about when those bridges are needed and when they're not, Nvidia should make the connection info for those bridges available through the driver, as they do with all the other stats that can be tracked in real time, like core/mem clock speed or GPU utilisation. I'm thinking of clock speed, bandwidth and utilisation of the SLI bridge, and maybe even a setting in the driver to change the frequency of the SLI connectors manually.


----------



## Aquinus (Jun 24, 2016)

So, even if you solve the bandwidth problem of sending frame data to the GPU actually displaying the frame, you'll still have this weird latency problem where the frame coming from the local GPU will always be ready to be displayed before the secondary GPU's. The funny thing is (when it works), running a third GPU can sometimes reduce the amount of micro-stutter, probably because the two non-primary GPUs have the same amount of latency to communicate with the primary. So consider this: getting a frame from one GPU to another is measured on the scale of milliseconds, whereas the transfer of a frame on the local GPU is literally measured in nanoseconds. The primary will *always* get its frame to the active frame buffer for the display before a secondary or tertiary GPU will. This latency alone is the source of most of the woes with micro-stutter. The only way to solve that problem is to normalize the rate at which frames are produced and received.

Micro-stutter gets worse when you start pushing a GPU to its limits, because the GPU can't get the frame over to the primary as quickly as if it were running at 50% load and delivering the frame early. So, when you're running at well over 60 FPS, using something like v-sync makes it possible to buffer frames and essentially smooth out the latency in the frame times.

With that said, a device that didn't do the rendering but handled only the display output and buffering for a monitor or monitors would be the solution. Equal latency between the devices that produce the frames and the thing that consumes them is what will give us the best experience. But so long as one GPU is responsible for outputting the frame, they're going to have real fun trying to get the two sources of frame data to maintain a consistent latency between rendered frames to be displayed.

tl;dr: The less you tax a multi-GPU setup, the less likely it's going to cause micro-stutter issues because the latency between frames will become more consistent.


----------



## Steevo (Jun 24, 2016)

Enterprise24 said:


> I just show that in synthetic benchmark it has some scaling. No need to troll.




It was more of a jab at the fact that the only reasons to have/use it are e-peen and benchmarking, so it's essentially useless for 99.999999999999999999999999999% of users.


----------



## enya64 (Jun 25, 2016)

Okay, I drove to Newegg yesterday and picked up my two GTX 1070s and an MSI HB SLI bridge, just in case there was some future benefit. Every game that I tried (Witcher 3, Project Cars, GTA 5, Mortal Kombat X, The Evil Within, Mad Max, Fallout 4) ran ABOVE 60 fps in 4K at max/ultra settings. The exceptions were Crysis 3, which hovered around 53 fps, and GTA 4, which averaged 53 fps in 4K.

The 1070s are Gigabyte G1 Gaming cards, and the overclock software I used to push these in SLI is EVGA Precision X. To overclock, I simply increased the power target to the max of 111%, raised the core clock in increments of +25 to land at +75 (+100 caused the Heaven Valley Benchmark to crash), set a max temp of 81 degrees, and I haven't touched the memory clock yet.  This lands me at a stable clock speed of 2078 with no fluctuation during gameplay, at a max temp of 72 degrees.  I'm still testing because I've only had these cards for a couple of hours, but it is a huge improvement over my two 970s, and it so happens that the games I play have SLI profiles.

I didn't notice any microstutter, but because I was testing with vsync off in all the games, there was slight screen tearing when panning the camera too quickly in a few of them. It happened far less than with 970 SLI and was less noticeable in comparison, due to the frame rate being above 60 compared to the previous 35 fps average of the 970 SLI. I've noticed through my different builds over the last 2 years that gaming in 4K introduces a higher probability of screen tearing, more pronounced the lower the frame rate, which is solved by vsync at the cost of performance. Staying above 60 fps drastically lowers the visibility and occurrence of screen tearing without the need for vsync.

I'll keep testing and post any updates.


----------



## InquisitorDavid (Jun 27, 2016)

Long time reader, first time member here.

Wanted to share some very interesting findings in this vid by Hardware Unboxed:


Big gains can apparently be had from the HB Bridge vs old Flex Bridge depending on the title (Fallout 4, Black Ops 3, Witcher 3). Thing is, the HB results could be replicated by putting 2 Flex Bridges together, so the gain is not from the HB Bridge itself, but from the linking of both SLI connectors. Can TPU confirm their findings by running those games under similar conditions? Noted the review had BO3, but only for 1600x900.


----------



## Caring1 (Jun 27, 2016)

InquisitorDavid said:


> Big gains can apparently be had from the HB Bridge vs old Flex Bridge depending on the title (Fallout 4, Black Ops 3, Witcher 3). Thing is, the HB results could be replicated by putting 2 Flex Bridges together, so the gain is not from the HB Bridge itself, but from the linking of both SLI connectors.


It's already been stated that flexible bridges do not have the same gains, even if two are used.


----------



## InquisitorDavid (Jun 27, 2016)

Stated where?

It seemed to be game-specific, so I wanted to get verification on this.


----------



## Caring1 (Jun 27, 2016)

InquisitorDavid said:


> Stated where?
> 
> It seemed to be game-specific, so I wanted to get verification on this.


Pretty sure W1zz confirmed it in one of the threads discussing the 1080's.


----------



## trog100 (Oct 1, 2016)

Caring1 said:


> It's already been stated flexible bridges do not have the same gains, even if two are used.




i tried using two flexible bridges.. more for cosmetic reasons than anything else.. two look better than one.. he he

sadly it didnt work.. it produced very pronounced interference lines across the image.. i am guessing some kind of cross chatter between the two cables was taking place.. i had to revert back to just the one..

its a shame about the lack of game support.. having said that i have two of the games listed as not using the second card.. just cause and primal.. i didnt know they lacked sli support until i read about it.. they played just fine.. mind you i dont run 4K.. 

trog


----------



## Silence48 (Mar 18, 2017)

I am wondering if I need to invest in a high bandwidth SLI bridge. I ran Unigine heaven 4.0 benchmark on my 1080 ti SLI setup and the results seem to leave something to be desired even though they are pretty awesome.
Can you tell me what you think? You can see the video here.


----------



## Caring1 (Mar 19, 2017)

Silence48 said:


> I am wondering if I need to invest in a high bandwidth SLI bridge. I ran Unigine heaven 4.0 benchmark on my 1080 ti SLI setup and the results seem to leave something to be desired even though they are pretty awesome.
> Can you tell me what you think? You can see the video here.


You may get a slight increase, but you're better off waiting for drivers to be optimised first, as the 1080 Ti is still in its infancy as a card, especially in SLI, where you might only see a small increase in performance.


----------



## GhostFace0621 (May 6, 2017)

I have no complaints with my GTX 1080 FE SLI setup. I've got a 1440p monitor, and playing games at a higher refresh rate with higher resolution just looks amazing. Also, I think the sweet spot for gaming right now is still 1440p rather than 4K.

Also, I can't wait for explicit mGPU support to start rolling out on a lot of DX12 games. I hear that it'll utilize your GPUs better than your traditional SLI/Crossfire setup. I mean it would make sense, with explicit mGPU not needing the drivers to utilize both GPUs.


----------



## trog100 (May 7, 2017)

Silence48 said:


> I am wondering if I need to invest in a high bandwidth SLI bridge. I ran Unigine heaven 4.0 benchmark on my 1080 ti SLI setup and the results seem to leave something to be desired even though they are pretty awesome.
> Can you tell me what you think? You can see the video here.



turn one of the cards off and see what happens.. quite what part of the video you think looks awesome has me a bit baffled.. it all looks like a jerky load of crap to me..

trog


----------

