
Nvidia Pascal - GT / GTX10xx Owners Club [With Poll]

Which card(s) do you own?

  • GTX 1070: 66 votes (33.2%)
  • GTX 1080: 60 votes (30.2%)
  • Neither: 32 votes (16.1%)
  • GTX 1080 Ti: 49 votes (24.6%)
  • GTX 1070 Ti: 7 votes (3.5%)
  • GTX 1060: 4 votes (2.0%)
  • GTX 1050: 1 vote (0.5%)
  • GTX 1050 Ti: 1 vote (0.5%)
  • GT 1030: 1 vote (0.5%)

  Total voters: 199
It's a shame these were voltage locked so conservatively. I got mine up to 2177 / 5500 (haven't really tuned the memory much) at only 120% power draw and +0 mV.
Makes you wonder, though, whether a voltage unlock would really do any good. There is a point of diminishing returns with this kind of thing.

On the plus side, this is the first GPU in many years where I've seen significant real-world gains from overclocking.
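If anyone wants to script the power-limit side of this instead of using the Afterburner slider, here's a minimal sketch using NVIDIA's NVML library through the pynvml Python bindings. Big caveats: it assumes GPU index 0, needs admin/root rights, and the driver may refuse the set call on consumer GeForce boards (NVML is mostly read-only there); the core clock offset itself isn't exposed through NVML at all, which is why tools like Afterburner go through NVAPI instead.

```python
# Sketch: inspect and raise the board power limit via NVML (pynvml bindings).
# GPU index 0 is an assumption; setting the limit needs admin/root and may
# simply be rejected by the driver on consumer GeForce cards.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerManagementLimit,
    nvmlDeviceGetPowerManagementLimitConstraints,
    nvmlDeviceSetPowerManagementLimit,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)
    current = nvmlDeviceGetPowerManagementLimit(gpu)              # milliwatts
    min_mw, max_mw = nvmlDeviceGetPowerManagementLimitConstraints(gpu)
    print(f"power limit: {current / 1000:.0f} W "
          f"(board range {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")
    # Push the limit to the board maximum -- the equivalent of dragging the
    # power slider to 120%. This only covers the power side; the clock
    # offset has to come from NVAPI-based tools.
    nvmlDeviceSetPowerManagementLimit(gpu, max_mw)
finally:
    nvmlShutdown()
```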
 
I'm surprised there aren't really any reviews out for the Palit GTX 1080 Super Jetstream. I guess it's too similar to the GameRock ones. Anyway, I've had this card for a week now and I am super happy with it.
 
Do you have a monitor capable of higher than 60 Hz? Does the cooling of the Jetstream help in those situations? I am curious how the air coolers do on these cards. Even with liquid cooling, the heat difference between 60, 96 and 120 fps/Hz covers around 10-12 °C as the GPU usage goes from (varies by program) 30% up to 100%. You get the same scenario going from 1080p > 1440p > 2160p. 120 fps at 2160p must be a nightmare on air cooling.

I don't think many reviewers properly talk about heat, power, voltage and GPU activity when playing at over 60 fps/Hz, and consumers end up buying products wondering why the temperatures are so different.
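If you want to capture exactly those numbers while flipping between 60/96/120 fps caps, a rough polling loop like this (pynvml bindings again; GPU index 0 and the one-second interval are arbitrary assumptions) will log core clock, GPU usage, board power and temperature so you can see where the extra heat comes from:

```python
# Sketch: log core clock, GPU utilisation, power draw and temperature once a
# second while you change fps caps / resolutions in-game.
import time
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetClockInfo, nvmlDeviceGetTemperature,
    nvmlDeviceGetPowerUsage, nvmlDeviceGetUtilizationRates,
    NVML_CLOCK_GRAPHICS, NVML_TEMPERATURE_GPU,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)     # first GPU -- an assumption
    print("core_mhz  gpu_util%  power_w  temp_c")
    while True:
        core = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS)
        util = nvmlDeviceGetUtilizationRates(gpu).gpu
        watts = nvmlDeviceGetPowerUsage(gpu) / 1000   # NVML reports mW
        temp = nvmlDeviceGetTemperature(gpu, NVML_TEMPERATURE_GPU)
        print(f"{core:8d}  {util:9d}  {watts:7.1f}  {temp:6d}")
        time.sleep(1)
finally:
    nvmlShutdown()
```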
 
It might make a difference for FE cards, but with many semi-passive air-cooled cards, the only difference is that the fan starts up sooner / spins faster and keeps the GPU core at what is essentially the maximum temperature reached under most load scenarios. (I still remember GCN 1.0 having a terrible time with FurMark, reaching temps that no other load scenario would match, but I don't think that's applicable anymore.) And under 120 fps / 4K, well, the cooler isn't going to work any harder, as the card is definitely at its performance ceiling and is going to struggle with that.

Reviewers do, however, use test benches and well-ventilated cases; this pisses me off as an ITX builder who buys air-cooled cards, since it's not a very realistic result in just about all reasonably sized ITX cases.
 
Did some Heaven benchmarks and was kinda confused with this new 1070. Stock I get 4398, which is almost double my 970. I then OC'ed the 1070 to 2100 MHz core with the voltage limit boosted up, but my score actually dropped to 4143. Any idea why this happened?
 
Have you put the power limit to maximum as well? If you have not and you just maxed the voltage, then your card is probably throttling because it is hitting the power limit.
 
Yes, power limit and temp limit are maxed out on the sliders.
 
Do you have a monitor capable of higher than 60 Hz? Does the cooling of the Jetstream help in those situations? […]

Sorry, I play at 1440p 60 Hz, mostly with vsync or a frame limiter on. Yesterday I hit a max of 69 °C with an average of 65 °C in Overwatch on max settings (100% render scale, 70 fps limit).
 
Yes, power limit and temp limit are maxed out on the sliders.

Keep GPU-Z running on the sensors tab while benching and see if there is a performance cap reason, then report back.
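For what it's worth, GPU-Z's "PerfCap Reason" maps onto NVML's clocks-throttle-reasons bitmask, so a short script can print the same thing. A sketch (pynvml bindings assumed; very old binding versions may not export every constant):

```python
# Sketch: decode which limiter (if any) is currently capping the clocks.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetCurrentClocksThrottleReasons,
    nvmlClocksThrottleReasonGpuIdle, nvmlClocksThrottleReasonSwPowerCap,
    nvmlClocksThrottleReasonHwSlowdown,
)

REASONS = {
    nvmlClocksThrottleReasonGpuIdle: "idle",
    nvmlClocksThrottleReasonSwPowerCap: "power limit (the usual Pascal cap)",
    nvmlClocksThrottleReasonHwSlowdown: "hardware slowdown (thermal/power brake)",
}

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)   # GPU index 0 assumed
    mask = nvmlDeviceGetCurrentClocksThrottleReasons(gpu)
    active = [name for bit, name in REASONS.items() if mask & bit]
    print("throttle reasons:", ", ".join(active) or "none")
finally:
    nvmlShutdown()
```

If it reports the power limit while the benchmark runs, that would explain the lower score: the extra voltage raises power draw, the card hits its cap, and the boost algorithm pulls clocks below what you had at stock voltage.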
 
So I hear that some of the newer 1070s with Micron GDDR5 chips are having issues?
 
A while ago, yes. Luckily mine has Samsung chips; no problems here.
 
I have one with Micron chips, and apart from OCing a little less, I have no problems.
 
I see you flashed the vBIOS; what are you getting in Valley Extreme HD, or as a 3DMark graphics score?
 
[screenshot: Valley.png]
and this is 3dmark:

http://www.3dmark.com/3dm/15296246?
 
New OC, tried to OC the core some more:

[screenshots: gpu-z OC new1.JPG, 00014.jpg]
 
So with no voltage increase and 120% power, I'm getting artifacts around 2176 MHz on the core clock; 2141 seems to be a safer position.

I wonder if the memory speed affects the stability of the core on newer GPUs?
 
Seems to show up more in windowed mode, but in fullscreen, it's rare.

Depends. Have you waited a bit?

I have my 'show OSD' button set on numpad / and whenever I press it, especially when a game is in a load sequence or has just launched, it takes a while, up to 5-8 seconds, for the OSD to come up. Most notably in DOTA 2 and TW3. It seems to vary wildly per game.
 
I believe I resolved it by ticking the 64-bit support option in the RTSS settings. I haven't had a problem since.
Though you are right, with some programs it takes a while.
 
Picked up on something more and more lately, and it's bothering me.

The stock boost clock for the FE cards is 1911/5006 MHz.

When you start up a fullscreen program, load into a world, etc., you get the aforementioned clocks. Then shortly after, that 1911 turns into 1898. It seems to be (and this is the only 'pattern') that as the % of GPU activity gets higher, say 60-70%, it acts as a threshold and the clocks fall to 1898 MHz. This becomes the more consistent average speed that you experience. Yet there are times where the core goes as low as 1828 MHz, again with seemingly no pattern except that the higher the % of GPU usage, the lower the clocks.

That was my initial observation.

Then I loaded up some games and realized that it wasn't necessarily the GPU %; it was more a matter of whether the GPU was being taxed.

For example, if I look at the sky or the ground (which generally results in higher FPS, as the GPU is technically working less [despite the GPU % being higher]), the clock pegs at its maximum capability, i.e. 1911 MHz.

When I look straight ahead into a sandstorm, with all the hardcore particles going and the framerate dropped, the clocks start to diminish and bounce around.

I tested this with my overclock as well (2168/5537).
As soon as I look into the distance and/or the card is being taxed, it drops as far down as the 2040-2050 range.

This is the opposite of what we'd expect with dynamic clocks, where the lower the % of GPU activity and/or the less stress on the card, the more the clock speed lowers itself intentionally, as the higher clocks/performance are not needed to keep up the frame rate.

I tried compensating with an increased power limit and then increased core voltage. It appeared to help the card keep a more stable upper-1800s clock (i.e. 1898), though it eventually started dropping down again.

This wouldn't normally be an issue, except the Pascal cards show really significant real-world gains from overclocking compared to their ancestors. And even 100 MHz can be a crucial 5 fps between 55 and 60, or 91 and 96, etc.

I am struggling to understand why a card that's not reaching its thermal limits would be downclocking instead of staying pegged.
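For what it's worth, this looks like normal GPU Boost 3.0 behaviour rather than a fault: Pascal moves the core in small (roughly 13 MHz) bins, and it will pull clocks for power draw and voltage headroom, not just temperature. Heavy scenes draw more power per frame, so the power cap bites precisely when the card is most taxed, even with temps in check. One way to confirm it would be to log clock, power and the NVML throttle-reason mask to a CSV and line them up afterwards; a sketch (pynvml bindings; the file name, sample rate and duration are arbitrary choices):

```python
# Sketch: timestamped CSV of core clock, power draw and the raw
# throttle-reason bitmask, to correlate clock drops with the power limit.
import csv
import time
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetClockInfo, nvmlDeviceGetPowerUsage,
    nvmlDeviceGetCurrentClocksThrottleReasons, NVML_CLOCK_GRAPHICS,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)   # GPU index 0 assumed
    with open("boost_log.csv", "w", newline="") as f:
        log = csv.writer(f)
        log.writerow(["time_s", "core_mhz", "power_w", "throttle_mask"])
        start = time.time()
        for _ in range(600):              # ~5 minutes at 0.5 s per sample
            log.writerow([
                round(time.time() - start, 1),
                nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS),
                nvmlDeviceGetPowerUsage(gpu) / 1000,   # mW -> W
                nvmlDeviceGetCurrentClocksThrottleReasons(gpu),
            ])
            time.sleep(0.5)
finally:
    nvmlShutdown()
```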
 
Hi guys,

I got my Palit GTX 1080 GameRock running at 2070/5250 MHz; the power limit is at 110%, temps are around 60-63 °C, and fan speed is around 60% under full load.
 
Picked up on something more and more lately, and it's bothering me. […] I am struggling to understand why a card that's not reaching its thermal limits would be downclocking instead of staying pegged.

My EVGA Classified's default clock is 1721 MHz and boost should be 1860 MHz, but for some reason boost has always been a constant 1987 MHz without dropping. It doesn't matter if I play games or stress the GPU with stability-test software; it doesn't drop.

Without OC: max boost, no drop.
[screenshot: IMG-20161012-WA0006.jpeg]

Four days ago I OC'ed my GPU to 2136 MHz without touching voltages, but when playing games or running a stability test it drops from 2136 MHz to 2126 MHz.
 
I don't think many reviewers properly talk about heat, power, voltage and GPU activity when playing at over 60 fps/Hz […]

This is because so many reviewers are bought and paid for.
 