Wednesday, May 18th 2016
NVIDIA GeForce GTX 1070 Clock Speeds Revealed
NVIDIA posted the product page of its upcoming GeForce GTX 1070 graphics card, confirming its clock speeds and related specifications. The card features a nominal GPU clock speed of 1506 MHz, with a maximum GPU Boost frequency of 1683 MHz. The memory is clocked at 2000 MHz (actual), or 8 GHz (GDDR5-effective), working out to a memory bandwidth of 256 GB/s. The company also rates the card's single-precision floating-point performance at 6.45 TFLOP/s. Other key specs include 1,920 CUDA cores, 120 TMUs, and 64 ROPs. The GeForce GTX 1070 goes on sale on the 10th of June, 2016.
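The quoted figures are internally consistent. A quick sanity check, assuming the 1070's standard 256-bit memory bus (which is not stated in the excerpt above) and counting two FLOPs per CUDA core per cycle for fused multiply-add:

```python
# Sanity-check the GTX 1070 specs quoted above.
cuda_cores = 1920
boost_clock_mhz = 1683
mem_effective_gbps = 8.0    # 8 GHz GDDR5-effective
bus_width_bits = 256        # assumed: standard 256-bit bus for this card

# FP32 throughput: 2 FLOPs per core per cycle (one FMA)
tflops = cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12
print(f"{tflops:.2f} TFLOP/s")      # -> 6.46 TFLOP/s, matching the ~6.45 quoted

# Memory bandwidth: effective transfer rate x bus width / 8 bits per byte
bandwidth_gbs = mem_effective_gbps * bus_width_bits / 8
print(f"{bandwidth_gbs:.0f} GB/s")  # -> 256 GB/s, as quoted
```

Note the 6.45 TFLOP/s figure is computed at the boost clock, not the 1506 MHz base clock.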
123 Comments on NVIDIA GeForce GTX 1070 Clock Speeds Revealed
I'm not talking about the artificial BIOS limitations. Kepler had the same voltage limit (1.21 V) but hit its temperature limit before the voltage one.
Jump from 28nm to 16nm and you are right there on air lol
Maybe Pascal will be different, but currently, everything points to the opposite. If all cards cap out at 2 GHz, which now seems to be the case on either water or air from the numbers we have, that's Maxwell v2. And let's not forget Nvidia needed those numbers to make their claims - while their stock clocks are a good 300 MHz lower. In the meantime, their architecture talks speak of removing GPU functionality that wasn't strictly needed for gaming, or streamlining it to achieve higher clocks (enter the GP100 + NVLink release for the pro market). All of this points not to a big gain from going 28 nm > 16 nm alone, but rather to a combination of these efforts.
So far the only real gain we see from 16 nm is the vastly reduced leakage, which results in lower power draw - the actual performance still requires roughly the same power envelope, because it needs higher clocks to get there.
I really thought Nvidia would have dumped the two-flagship-card strategy and just gone with one flagship and one extreme (or flagship+). I guess their manufacturing still isn't up to snuff, since they keep pumping out partially defective chips and have to downgrade them into cut-down products.
The 1080 has much to offer, and they will sell like hotcakes for $600 until Christmas, when the price will be reduced with the launch of AMD's Vega series.
I've been treating the x70/x80/Titan as the flagships, with three variants. Anything under that is midrange or lower.
I'll say, then, that I wish they'd dump the x70 iteration of enthusiast products and only have the high-end x80 model and then the big-daddy flagship.
However, because little brothers are just broken big brothers, I can't see a way around that. Unless they make the flagships first, treat those as the big brother, and use the broken ones as little brothers.
You know, if we just replace 'brother' with 'sister', people will think we're talking about BioShock and not graphics cards. And then there's the release of the Ti (or whatever they call the next flagship). Who here is going to feel silly when the Ti launches for the same price the standard 1080 did?
Come on board partners, throw a wrench in here somewhere and stop the madness.
Nvidia made GK104 as a midrange GPU, but it ended up as fast as or faster than the HD 7970, AMD's high end, so they turned it into an x80 part instead of the x60 part their Gx04 codenames usually fill for midrange.
And more rumoured/guessed specs for their "true" flagship and the mid-mainstream part - I could see that happening with the 1060, though, heh.
It was such a big deal before... now nobody wants to/can comment??
videocardz.com/60265/nvidia-geforce-gtx-1070-3dmark-firestrike-benchmarks
Thankfully I have good video cards in my case - two GTX 560 Ti 2GB in SLI - and I have a GT 740 spare lying around just in case my video cards ever fail.
Oh wow, with a beefy OC the 1070 can *match* a 980 Ti at stock.
Who's not running circles around what?? @EarthDog, I had expected more of you mate.
2. Did I miss where they overclocked in that chart after they said they didn't? They said what 3DMark is REPORTING, but with new cards, particularly unreleased cards, that typically isn't accurate. It also shows it's about 5% faster. So if you take this supposed overclock you are talking about away, it's likely right around 980 Ti speeds... a far cry from 'running circles' around it... whatever you define 'running circles around it' to be...
EDIT: If you will note, in W1zzard's 1080 review a 453 MHz overclock yielded a 12.8% increase in performance. So even if that result is overclocked (which I don't believe it is), a 200 MHz overclock would yield right around 5-6% gains, putting it at 980 Ti speeds. Again, a far cry from 'running circles' around it.
www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080/30.html
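The back-of-the-envelope estimate in that EDIT can be written out. It assumes performance scales roughly linearly with core clock over this range, which the cited review numbers only approximately support:

```python
# Linear scaling estimate from the cited 1080 review:
# a 453 MHz overclock produced a 12.8% performance gain.
gain_per_mhz = 12.8 / 453          # % performance gain per MHz of core clock

# If the leaked 1070 result were running a ~200 MHz overclock (assumed
# figure from the thread), removing it would cost roughly:
estimated_gain = gain_per_mhz * 200
print(f"{estimated_gain:.1f}%")    # -> 5.7%, i.e. "right around 5-6%"
```

This linearity is only a first-order approximation; real gains taper off once memory bandwidth or power limits become the bottleneck.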
Nvidia needed 2.1 GHz clocks on the 1080 to produce a snappy marketing line. Let's not get ahead of ourselves :) With regards to the question posed by the OP, I think it's a safe bet that the 980 Ti will be at the same perf level, if not a slightly higher one, taking the end results of OCs on both cards into account.
It is quite common for 3DMark to report wrong clocks... I just said that. Now, am I sure...? Of course not. However, when it matches a 1080 exactly, and knowing 3DMark has a history of not reporting the correct clocks (particularly on new cards), it isn't a leap to think the clocks are being reported wrong. I also mentioned the overclocking with the 1080 and what that yielded... so...
Only time will tell, but they were spot on with the 1080 in their leaked benchmarks... I wouldn't bet my life on it, but it's going to be damn close to the 980 Ti, contrary to your assertion (which you are backing off of now).