
Gigabyte GTX 980 Ti Waterforce Xtreme Gaming 6 GB

W1zzard

Gigabyte's GTX 980 Ti Xtreme Gaming comes with a watercooling solution onboard, which provides excellent temperatures and low noise. In our testing, the card turns out to be the fastest GTX 980 Ti we have ever tested, and at $720, it's not as expensive as the MSI GTX 980 Ti Lightning either.

 
Did you say you only expect Pascal to be 20% faster than Maxwell???

Compared to the GTX 980 Ti reference design, the increase is 21% at 4K, which is probably similar to the performance uplift that we can expect from NVIDIA's next-generation Pascal cards

That would be disappointing.....
 
OK, +1 to that. ^

What? The biggest jump in lithography and memory bandwidth in decades is only going to result in a 20% performance increase? How?
 
Did you say you only expect Pascal to be 20% faster than Maxwell???
That would be disappointing.....
Also seems to go against Nvidia's own presentation, unless Nvidia are adopting half-rate double precision for Pascal*. Common theory points to a 1:3:6 ratio (FP64:FP32:FP16), even with 128 cores per SM rather than 192. (A 1:3:6 ratio would put Pascal in line with Nvidia's previous estimates: 4 TFLOPS FP64, 12 TFLOPS single precision - see the quick sanity check at the end of this post.)



...which (if W1zzard is correct) would point to a relatively small GPU as Nvidia's flagship, or to the flagship HPC Pascal differing from the gaming flagship. The CUDA DLL does seem to list a GP102 in addition to the more usual 100/104/106/107/108 naming scheme.

* (As indicated by the SC15 presentation)

@the54thvoid
I just re-read the quote you posted. It could also be interpreted as Pascal having the same overclocking headroom as Maxwell (!). Given the transistor density of 16nm FF+, I'd take that as good news.
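For anyone wanting to check the ratio math above: a minimal sketch of how those TFLOPS estimates fall out of the usual peak-throughput formula (2 FLOPs per core per clock via FMA). The core count and clock below are hypothetical placeholders for illustration, not leaked Pascal specs.

```python
# Sanity check of the 1:3:6 (FP64:FP32:FP16) theory against the quoted
# ~4 TFLOPS FP64 / ~12 TFLOPS FP32 estimates. Core count and clock are
# hypothetical placeholders, not leaked specs.

def tflops(cores, clock_ghz):
    """Peak throughput: each core retires one FMA (2 FLOPs) per clock."""
    return 2 * cores * clock_ghz / 1000.0

fp32_cores = 4608   # hypothetical ALU count
clock_ghz = 1.3     # hypothetical boost clock

fp32 = tflops(fp32_cores, clock_ghz)  # ~12.0 TFLOPS
fp64 = fp32 / 3                       # 1:3 ratio -> ~4.0 TFLOPS
fp16 = fp32 * 2                       # 6:3 ratio -> ~24.0 TFLOPS

print(f"FP64 {fp64:.1f} / FP32 {fp32:.1f} / FP16 {fp16:.1f} TFLOPS")
```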
 
[Chart: relative performance summary at 3840x2160]


  • 21% faster than the GTX 980 Ti reference at 4K

What I see in this chart is that the 980 Ti has only 79% of the performance of the GB 980 Ti WF, which translates into a ~26.6% boost for the GB card relative to the 980 Ti reference.
Or am I wrong?
 
Did you say you only expect Pascal to be 20% faster than Maxwell???



That would be disappointing.....
NVIDIA is smart; they will milk us with small performance increments. Look at history and see how big the changes were between generations.
 
Interesting card, it's like Fury X... but good.

It's not like Maxwell needs to go under water anyway; the Palit JetStream is still my fave.
 
NVIDIA is smart; they will milk us with small performance increments. Look at history and see how big the changes were between generations.
This is the hard truth. Nvidia only provides the bare minimum of performance, most noticeably starting with Kepler, where they started holding back the big chips till later. They used to launch with those.

If they were being more aggressively challenged by AMD these past few years, I think either A) cards now would be twice as powerful, or B) we'd at least have a longer time to enjoy our top-tier cards being top-tier, because if they were launching with the big chips we'd have more of a lull between each architecture release. Prices may also be better. Either way, we've suffered from this lack of technological competition.
 
This is the hard truth. Nvidia only provides the bare minimum of performance, most noticeably starting with Kepler, where they started holding back the big chips till later. They used to launch with those.

If they were being more aggressively challenged by AMD these past few years, I think either A) cards now would be twice as powerful, or B) we'd at least have a longer time to enjoy our top-tier cards being top-tier, because if they were launching with the big chips we'd have more of a lull between each architecture release. Prices may also be better. Either way, we've suffered from this lack of technological competition.

It's funny, isn't it? All those leaked graphs apparently showing Fiji was going to destroy Nvidia and finally crush them under the shining light that is AMD goodness.

Yet Nvidia released the slightly cut-down, GM200-based GTX 980 Ti and basically offered better value from day one.

Wonders never cease.
 
This is the hard truth. Nvidia only provides the bare minimum of performance, most noticeably starting with Kepler, where they started holding back the big chips till later. They used to launch with those.

If they were being more aggressively challenged by AMD these past few years, I think either A) cards now would be twice as powerful, or B) we'd at least have a longer time to enjoy our top-tier cards being top-tier, because if they were launching with the big chips we'd have more of a lull between each architecture release. Prices may also be better. Either way, we've suffered from this lack of technological competition.


Well, let's not rule out AMD and Nvidia playing a game together, agreeing not to push it too far so both can enjoy easy profits.
 
Well, let's not rule out AMD and Nvidia playing a game together, agreeing not to push it too far so both can enjoy easy profits.
That is probably closer to the mark, I think. AMD/ATI and Nvidia have colluded in the past, and their distinct lack of interest in initiating any kind of price war (aside from the occasional limited-run salvage part) tends to indicate that they are quite happy with their revenue streams at the expense of true competition.
This is the hard truth. Nvidia only provides the bare minimum of performance, most noticeably starting with Kepler, where they started holding back the big chips till later. They used to launch with those.
That is basic strategy for productization and ROI on silicon; it's just that if you are the dominant player in the market, you are under less pressure on product cadence (see Intel).
If they were being more aggressively challenged by AMD these past few years, I think either A) cards now would be twice as powerful, or
Very unlikely. Both vendors are bound by the fabrication process (and its die size limits) and by adherence to a common specification (ATX). Without the latter, it is impossible to achieve large-scale commoditization for add-in hardware components.
B) we'd at least have a longer time to enjoy our top-tier cards being top-tier, because if they were launching with the big chips we'd have more of a lull between each architecture release
If you look back to when we had multiple graphics vendors (ATI, 3dfx, S3, Matrox), and even discounting the low end (Trident, SiS, 3DLabs, VideoLogic/Imagination, Tseng Labs, etc.), that was never really the case either. Admittedly, the strides were greater and the product lives shorter, because the evolution of the 3D graphics pipeline and the pace of memory introduction were faster - something we are revisiting currently. I'd tend to note that the only reasons last-generation (or earlier) cards aren't deemed competitive are shoddy game coding, people fixated on 4K screen resolution, API advancements, and the consumer's addiction to the next best thing.

Add-in card sales have fallen consistently over the years. Launching your biggest and best at the beginning of a process node just means you generally have no room for improvement for the 2-4 years the node lasts. That becomes a tough economic sell for companies who tend to rely upon serial upgraders.
Prices may also be better.
Maybe. That used to be the case... but lower prices mean lower margins, and that means a war of attrition (and deepest pockets). There's a reason that there used to be around 50 graphics IHVs and now there are just a little over a handful (including the embedded market).

Anyhow, regarding the actual review topic: an interesting comparison between AIO implementations, as @Fluffmeister noted. Academic interest only for me, though. If I wanted a watercooled card, I'd just add one to my loop and avoid all the extra plumbing.
 
720 USD doesn't seem like much, although it's really tempting to just get a couple of cheaper GTX 970s (like the Zotac Omega ones) instead; they seem to be handling this competition pretty well, and since they're Maxwell-based, power consumption isn't going to be a problem... That is, of course, if you've got a capable motherboard and a 600W+ PSU, which is not my case, apparently.

Also seems to go against Nvidia's own presentation, unless Nvidia are adopting half-rate double precision for Pascal
There's a presentation on that? I'm looking for any materials related to Pascal's internals (handling of arithmetic ops, driver API, pipeline); do you happen to know where I can find them? I don't really follow the events that take place in the USA, so... I finally have some spare time to spend on watching presentations and such (TPU, gaming), but I don't really know where to start, so I figured you could help? Thanks in advance.
 
Just a suggestion on the reviews. I notice the performance charts for each game are ordered 1600x900, 1920x1080, 2560x1440, 3840x2160, but then the order flips when you get to the Performance Summary page. I think it might throw people off if they aren't paying attention. It might be better to have all the performance charts in the same order.
 
VRAM cooling, where??? I have a Palit GTX 980 Ti Super JetStream; under heavy load I can't hold my hand on the backplate. I think it's 85°C+.
 
There's a presentation on that? I'm looking for any materials related to Pascal's internals (handling of arithmetic ops, driver API, pipeline); do you happen to know where I can find them? I don't really follow the events that take place in the USA, so... I finally have some spare time to spend on watching presentations and such (TPU, gaming), but I don't really know where to start, so I figured you could help? Thanks in advance.
Well, I'm not in the U.S. either, but I do have more than a passing interest in architectures. I don't think there is much in the way of publicly disseminated info on Pascal - hence the speculation. Nvidia's most recent available info on Maxwell (the CUDA toolkit and Tegra X1) probably provides a good baseline measure of FP16 per SMM, and we know from an earlier presentation that Nvidia is quoting Pascal's FP16 rate as being four times that of Maxwell...
[Slide: Nvidia presentation, quoted FP16 throughput figures]

...then the equation comes down to clock speed and ALU count/ALUs per module (rough numbers sketched at the end of this post), which is where it all breaks down. Do Nvidia go for broke on the biggest GPU they can put together on a new (and late) process node, or do they scale back (maybe the GP102 listed earlier) for the first iteration - say, upping ROP:TAU:core counts by a third or so from GM200 and keeping the GPU to a reasonable size? I haven't heard anything reputable regarding this. All the talk of 17 billion transistors for Pascal seems to originate from clickbait fantasy writers at such industry leaders as wccftech and fudzilla. :rolleyes:
WRT the graphics pipeline, your guess is as good as mine. Risk silicon for GP100 has already been shipped for validation and verification, but who knows whether it is representative of final silicon (extremely unlikely) or just a proof of concept with further revisions to take place. I presume you already have the SC15 presentation (>>here (pdf)<<). Nothing to get excited about on the Pascal front, unfortunately.
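To make the "clock speed and ALU count" equation concrete, here's a minimal sketch inverting the same peak-throughput formula as before (2 FLOPs per ALU per clock via FMA) to show the trade-off; the clock values are illustrative, not rumored figures.

```python
# Invert peak FLOPS (2 FLOPs per ALU per clock, via FMA) to see how
# many ALUs a 12 TFLOPS FP32 target needs at a few plausible clocks.
# All clock values here are illustrative, not rumored specs.

def alus_needed(target_tflops, clock_ghz):
    return target_tflops * 1e12 / (2 * clock_ghz * 1e9)

target = 12.0  # TFLOPS FP32, per the presentation estimate above
for clock in (1.0, 1.2, 1.4):
    print(f"{clock:.1f} GHz -> ~{alus_needed(target, clock):,.0f} ALUs")

# 1.0 GHz -> ~6,000 ALUs
# 1.2 GHz -> ~5,000 ALUs
# 1.4 GHz -> ~4,286 ALUs
```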
 
I was lucky enough to get this card yesterday from Amazon. It is a decent card. I was not able to overclock as well as this review; I got about +50 MHz, and +400 in Fire Strike for a Graphics score of 20,500. My card has a 70% ASIC quality.

My biggest issues are the fan noise (anything past 30% is quite noticeable), the short hoses at about 11", and the horrible OC Guru software. At stock clocks the fan doesn't get over 30%, so that isn't a big deal, and the hose issue just required me to move my H220X over so that this radiator could go in the top back spot in my case. The OC Guru software is needed to change the LED color, and boy is that software the biggest pile of you-know-what. Fortunately the color change seems to stick between reboots, so I am able to use Afterburner. If anyone has questions, I am happy to answer.
 
My biggest issues are the fan noise (anything past 30% is quite noticeable) ...
Did you try setting a custom fan profile in Afterburner that maxes out at 30% fan speed? Temps never go past 50°C anyway.
 
Thanks very much for the review!
You mentioned that the uP1983A voltage controller is also used on Gigabyte's GTX 980 G1 Gaming card? In that review you say: "OnSemi NCP81174 voltage controller, the same as on the NVIDIA GTX 980 reference design."
 
Did you try setting a custom fan profile in Afterburner that maxes out at 30% fan speed? Temps never go past 50°C anyway.
Yes, I found that setting the curve to a constant 30% works pretty well. The fan can't be heard, and cooling seems sufficient in any of the games I have played.
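For anyone curious what such a profile actually does, here's a toy sketch of the piecewise-linear temperature-to-fan-speed mapping a custom curve implements: flat at 30% through the ~50°C range this card sits in, only ramping if things somehow get hot. The breakpoints are illustrative, not Afterburner's defaults.

```python
# Toy model of a custom fan curve: linear interpolation between
# (temperature, fan %) breakpoints. Breakpoints are illustrative.

CURVE = [(0, 30), (50, 30), (70, 60), (85, 100)]

def fan_speed(temp_c, curve=CURVE):
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, f0), (t1, f1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            # Linear interpolation within this segment
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]

for t in (40, 50, 60, 80):
    print(f"{t} C -> {fan_speed(t):.0f}% fan")
# 40 C -> 30%, 50 C -> 30%, 60 C -> 45%, 80 C -> 87%
```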
 
"The latest installment of Activision's Call of Duty Series..."
"The latest entry to Ubisoft's smash-hit stealth sandbox franchise...."
These two lines need to be updated. Great review.
 
What I see in this chart is that the 980 Ti has only 79% of the performance of the GB 980 Ti WF, which translates into a ~26.6% boost for the GB card relative to the 980 Ti reference.
Or am I wrong?
You're right. It bothers me when he does this too. What he should be saying is that it's 26.6% higher performance. Or alternatively he could say it is 21 percentage points higher.
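A quick worked version of the two readings, since the chart normalizes the Gigabyte card to 100% (the 79% figure is taken straight from the summary chart above):

```python
# The summary chart puts the reference 980 Ti at 79% of the Gigabyte
# card. "How much faster" requires inverting the ratio, not
# subtracting the two chart values.

reference = 79.0   # reference 980 Ti, as % of the Gigabyte card
gigabyte = 100.0

speedup = gigabyte / reference - 1   # relative gain: ~26.6%
points = gigabyte - reference        # chart difference: 21 points

print(f"{speedup:.1%} higher performance")      # 26.6% higher performance
print(f"{points:.0f} percentage points apart")  # 21 percentage points apart
```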
 
You're right. It bothers me when he does this too. What he should be saying is that it's 26.6% higher performance. Or alternatively he could say it is 21 percentage points higher.
Yup, that is the correct way to phrase it.
 
Hi, I have a question about your overall performance charts: what kind of 970 do you use, and what clock speed does it run at while testing?

I was thinking of moving from dual 970s to a 980 Ti because I'm tired of SLI. But I see this highly OCed 980 Ti cannot surpass 970 SLI at 1440p... if you're showing results for stock-clocked 970s, then I think I'd be losing a lot more performance than I thought going from OC 970s to an OC 980 Ti. I was expecting maybe 10% less from the 980 Ti compared stock to stock, and instead I find nearly 25%, which seriously makes me reconsider.
 
Looks cool, but not so worth the money IMO... I'd rather grab a reference GTX 980 Ti, Corsair's HG10 N980 GPU bracket, and a Hydro H60 AIO kit, then bump up both core & memory while saving money at the same time.
 