Thursday, May 5th 2016

NVIDIA GeForce GTX 1080 Put Through 3DMark

Some of the first 3DMark performance numbers of NVIDIA's upcoming GeForce GTX 1080 graphics card have made it to Futuremark's online database. The results pages hint at samples of the GTX 1080 running on early drivers, on two separate machines (likely from two different sources). The first source, who ran the card on a machine with a Core i7-5820K processor, scored P19005 in 3DMark 11 (performance preset). The second source, who ran the card on a machine with a Core i7-3770K processor, scored 8959 points in 3DMark Fire Strike Extreme. Both scores point to the GTX 1080 being faster than the GTX 980 Ti.
Source: VideoCardz

163 Comments on NVIDIA GeForce GTX 1080 Put Through 3DMark

#26
EarthDog
True... but not sure what your point is in regard to the previous context (dude just said the Ti is an overclocked version... it's NOT from 980 to 980 Ti... it's a different GPU).
Posted on Reply
#27
efikkan
newtekie1The only Ti I remember that wasn't high end was the GTX560Ti, and it was a different GPU than the GTX560.
What about GTX 750 Ti then? (Low end)
Posted on Reply
#28
MagnuTron
.. So it can run games in 1080P? .. he.. he.. my humor is amazing *cries in solitude*
Posted on Reply
#29
Toothless
Tech, Games, and TPU!
MadsMagnus.. So it can run games in 1080P? .. he.. he.. my humor is amazing *cries in solitude*
1080i and struggling
Posted on Reply
#30
EarthDog
efikkanWhat about GTX 750 Ti then? (Low end)
That was the first Maxwell GPU... I think the 750 is a different GPU as well?

There was also the 660Ti and I believe it was overclocked.
Posted on Reply
#31
xvi
EarthDogBut yeah, 980 to 980Ti are two different cores under the hood. 980Ti and Titan X have the same GPU. They do NOT share it with the 980Ti.
Oh, my head.

Makes sense in a weird nVidia kind of way, since they release the Ti a bit later. From a marketing perspective, they wouldn't want the x80 slot in their lineup empty, only to be filled in a while later. Instead, I'm guessing they'll advertise the 1080 as top dog until enough competition comes out for them to release their actual top dog, the 1080 Ti.

Edit: Seems like that's going to put a decent performance gap between the 1080 and 1080 Ti though. Somewhere for AMD to sneak in?
Posted on Reply
#32
efikkan
EarthDogThat was the first Maxwell GPU... I think the 750 is a different GPU as well?

There was also the 660Ti and I believe it was overclocked.
No, it was different binnings of the same: GM107-400-A2 and GM107-300-A2.
GTX 660 was GK106-400-A1 and GTX 660 Ti was GK104-300-KD-A2.
"GTX 650 Ti boost" as an overclocked GTX 650 Ti.
Posted on Reply
#33
idx
I think the GTX 1080, or whatever the name will be, will have a low number of shaders and a really high clock speed. This is exactly what Nvidia did with GM204 and GM200; they were boosting up to 1500 MHz in games.
In general, a low number of shaders means much less power consumption, and they make up for the fewer shaders with higher clocks.

If AMD ends up with lower GPU clocks again... it will be really bad for the red team. I really hope AMD is also going down the same path.
Posted on Reply
#34
efikkan
idxI think the GTX1080 or whatever the name will be .. will have a low number of shaders and a really high clock speed. this is exactly what nvidia did with GM204 and GM200 it was boosting up to 1500Mhz in game.
In general low number of shaders means much less power consumption and they do make up for the low shaders with the high clocks.

if AMD end up again with a lower GPU clocks ... it will be really bad for the red team. I really hope AMD also going the same path.
The ~200mm² GPU from AMD will not be able to compete with the GTX 1080; the closest thing from AMD will be the Fury X. AMD will also have a hard time with the GTX 1070, but will probably be able to match a GTX 1060 (Ti).
Posted on Reply
#35
ppn
Well, the 1080 Ti isn't going to impress either. So the 330mm² chip has 2560 CUDA cores, and the 610mm² one sports only 3584 out of 3840. If it isn't at least a full 5120 CUDA cores, it's a joke. Better release the 400mm² chip with a full 3072 now, or just forget about it. This generation begins and ends with the 1060 Ti.
Posted on Reply
#36
Sah7d
Well... no point in upgrading my 980 Ti SLI.
So much hype for the "next gen" when actually Nvidia has ALWAYS managed it this way:
the GREAT new generations of 10% more efficiency for 20% more cost.

In other words, if you want the performance of a 980 Ti in the mid-range,
wait for the GTX 2070 line in 2018, and that's optimistic; but it's a sure thing for the GTX 3070 in 2020.
Posted on Reply
#37
idx
efikkanThe ~200m² GPU from AMD will not be able to compete with GTX 1080, the closest thing from AMD will be the Fury X. AMD will also have a hard time with GTX 1070, but probably be able to match GTX 1060(Ti).
Nothing is for sure yet; we will just have to wait and see. There is still one thing AMD might be doing this time: the number of ROPs. If they go for something really high, like 128 ROPs, then they may really pull off something special this time.
I don't think Nvidia is going for more than 80 ROPs in GP10x.
Posted on Reply
#38
Space Lynx
Astronaut
KursahThat's what the 1080Ti is for. :D
16nm was supposed to be a HUGE leap though; the 1080 non-Ti should have smashed a 980 Ti... 1080 Ti 15% increase max if we're lucky, it is sad honestly. All the hype saying 50% increase because we're finally going to a new die shrink, same hype they said about Skylake, and my 4.8GHz 2500K still owns stock Skylake in games, lulz..
Posted on Reply
#39
efikkan
ppnWell 1080Ti isn't going to impress either. So 330mm² chip has 2560 Cuda, and 610mm² sporting only 3540 out of 3840. If it isn't at least full 5120 Cuda. It's a joke. Better release the 400mm² chip with full 3072 Now! or just forget about it. this generation begins and ends with 1060 Ti.
GTX 1080 Ti isn't going to be based on GP104, and won't arrive for many months yet.

Which 400mm² chip are you talking about?
Sah7dWell... no point for upgrading my 980Ti SLI
So much hype for the "next gen" when actually Nvidia has ALWAYS managed this way
the GREAT new generations of 10% more efficiency for 20% more cost
You are right, there's no sense in upgrading your GTX 980 Tis, but we all knew that already, right? It has been known for a while that the first Pascal would be GP104, so it wouldn't push the high end just yet.
idxNothing for sure yet. we will just have to wait and see. there is still one thing that amd might be doing this time. The number of ROPs if they go for something really high like 128ROPs? Then they may really pull off something special this time.
I dont think Nvidia is going for more than 80ROPs in GP10x.
Neither AMD nor Nvidia is limited by ROP performance at this time, and AMD will only compete in the lower mid-range and low-end with Polaris, so I don't see why ROPs are going to matter.
Posted on Reply
#40
EarthDog
efikkanNo, it was different binnings of the same: GM107-400-A2 and GM107-300-A2.
GTX 660 was GK106-400-A1 and GTX 660 Ti was GK104-300-KD-A2.
"GTX 650 Ti boost" as an overclocked GTX 650 Ti.
Excellent! Thanks for the clarity! :)
Posted on Reply
#41
Tsukiyomi91
I'll keep my GTX 970s for another two years since Pascal hasn't really taken off, despite promising benchmarks stating it can more or less keep up with a GTX 980 Ti. Seems we need to give Pascal some time to mature, I guess...
Posted on Reply
#42
ppn
A hypothetical ~400mm² GP102 would be nice. There is no reason for a lower-clocked, 3584-core GP100-based card if 3072 CUDA cores can do the same job. So we don't know what the Ti is going to be, but 610mm² surely seems inefficient when a quarter of it represents dead weight.
Posted on Reply
#43
DarkOCean
idxNothing for sure yet. we will just have to wait and see. there is still one thing that amd might be doing this time. The number of ROPs if they go for something really high like 128ROPs? Then they may really pull off something special this time.
I dont think Nvidia is going for more than 80ROPs in GP10x.
Higher clocks = more Gpixel/s, so I doubt it will have more than 64.
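Rough back-of-envelope (pixel fillrate = ROPs × core clock; the Pascal clock here is just a guess based on the rumors, not a confirmed spec):

# Pixel fillrate in Gpixel/s = ROP count * clock in GHz
def fillrate_gpixels(rops, clock_ghz):
    return rops * clock_ghz

print(fillrate_gpixels(96, 1.1))  # GM200 (980 Ti): 96 ROPs at ~1.1 GHz -> ~105.6 Gpixel/s
print(fillrate_gpixels(64, 1.6))  # hypothetical GP104: 64 ROPs at ~1.6 GHz -> ~102.4 Gpixel/s

So at the rumored clocks, 64 ROPs would already land in GM200 territory.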
Posted on Reply
#44
Casecutter
If it's priced appropriately below the last "top" mid-range offering, that might offset the merely commensurate performance increase. To me it feels like a $400 MSRP.
Posted on Reply
#45
jabbadap
efikkanNo, it was different binnings of the same: GM107-400-A2 and GM107-300-A2.
GTX 660 was GK106-400-A1 and GTX 660 Ti was GK104-300-KD-A2.
"GTX 650 Ti boost" as an overclocked GTX 650 Ti.
The GTX 650 Ti Boost has base and boost clocks, which the GTX 650 Ti does not have. The other difference is that the Boost has GK106's full 192-bit memory interface, while the non-Boost has 128-bit. All in all, the Boost is well faster than the non-Boost.
Posted on Reply
#46
idx
efikkanGTX 1080 Ti isn't going to be based on GP104, and wouldn't arrive for many months yet.

Which 400mm² chip are you talking about?


You are right, it's no sense upgrading your GTX 980 Tis, but we all knew that already? It has been known for a while that the first Pascals will be the GP104, so it wouldn't push the high-end just yet.


Neither AMD nor Nvidia is limited by ROP performance at this time, and AMD will only compete in the lower mid-range and low-end with Polaris, so I don't see why ROPs are going to matter.
ROPs really do matter, especially if you have a high number of shaders: the more shaders you have, the more pixels can be pushed through (actually processed). However, there is other work to be done along the pipeline with each pixel, like vertices (polygons), and this is where ROPs come in. Yes, with more shaders you can run at high res, but you also need to push a huge number of polygons every frame.
Notice that Fiji cards were actually doing great when it came to running games at high res, but they didn't seem to go that much faster at lower res. That's because the GPU was limited by its relatively small ROP count (only 64); for such a huge GPU with 4K SPs, that was really a bottleneck. If Fiji GPUs had more ROPs and TMUs, things would have been much better for AMD.
AMD's Raja did actually say something about this, and that the reason was related to 28nm limitations with such a huge number of shaders (honestly I didn't understand part of what he said, but it was something along those lines).

sorry for getting too nerdy.. lol
Posted on Reply
#47
efikkan
idxROPs really does matter specially if you have high number of shaders, the more shaders you got the more pixels that can be pushed through (actually processed), however there are other stuff to be done along the pipeline with each pixel, like vertices (polygons) and here where ROPs comes in.
ROPs only matter when you increase the amount of rasterization: higher resolution, higher framerate, higher AA, higher-resolution temporary framebuffers, etc. More GFlop/s increases the throughput and of course increases ROP load if there are no other bottlenecks, but that throughput increase would have to come first. AMD is currently struggling a lot with their architecture, despite having up to 50% more GFlop/s than Nvidia, so the other bottlenecks will have to be addressed first.
idxnotice that Fiji cards where actually doing great when it comes to running games at high res but it didnt seem to go that much faster at lower res, thats because the gpu was limited by its much smaller ROPs (only 64) for such a huge GPU with 4k SP that was really a bottleneck.
As stated above, that's not true at all. Higher resolution increases the load on the ROPs, so that's irrelevant in this case. The problem with Fury X is the scheduler, which is unable to feed the massive number of cores.
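To put rough numbers on that GFlop/s gap (peak FP32 = 2 × shaders × clock; reference clocks are used here, real boost clocks vary, so treat it as approximate):

# Peak FP32 throughput in GFLOP/s = 2 * shader_count * clock in GHz
def peak_gflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz

print(peak_gflops(4096, 1.05))  # Fury X: ~8601 GFLOP/s
print(peak_gflops(2816, 1.00))  # GTX 980 Ti: ~5632 GFLOP/s, i.e. roughly 50% less raw throughput

Yet the 980 Ti still trades blows with the Fury X in games, which is exactly why the bottleneck isn't raw shader throughput or ROPs.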
Posted on Reply
#48
truth teller
truth telleryou guys are expecting way too much from this, its just a rehash from maxwell, there are no architectural changes. performance gains will be marginal in most cases. and in case of improvement in performance it will most likely be related to ram/bus saturation of the previous gen (inb4 3.5).
i dont really get why people are so surprised of the outcome, we already _knew_ this would happen.

the sad part is nbadia is still going to sell a lot of these "best" gpus to their religious flock...
"hey look im part of that orderof10tacles secrect club stuff! is awesum! i pity u lame poor outsiders"
Posted on Reply
#49
EarthDog
truth telleri dont really get why people are so surprised of the outcome, we already _knew_ this would happen.

the sad part is nbadia is still going to sell a lot of these "best" gpus to their religious flock...
"hey look im part of that orderof10tacles secrect club stuff! is awesum! i pity u lame poor outsiders"
Is it just a rehash though? Before we go down this rabbit hole, how do you define 'rehash'? I define it as AMD does it... reBRAND the same EXACT GPU (with more vRAM and different clocks). THAT is a rehash/rebrand to me.

I am pretty sure Pascal has some architectural differences over Maxwell as well as being more efficient per clock.
Posted on Reply
#50
HumanSmoke
the54thvoidWhat the videocardz piece says as well is that the LN2 cooled 980ti FM results at similar clocks destroy the 1080. So perhaps Pascal isn't very good until we see daddy Pascal. Same as 980 Maxwell wasn't really much better than a 780ti - this is the same all over again. It's only because of it's high clock speed that it's improving...
That is at least something. A theoretical clock-for-clock "loss" to GM200 doesn't mean too much in the real world if 99.999% of 980Ti/Titan X users are capped at <1500-1550MHz. Like the previous generation, big die to small die isn't the natural upgrade path, but it does take the shine off the previous kings of the hill.
I think more interest (at least for me) will be in:
ROP count - although Maxwell wasn't particularly ROP limited - just as an indicator of what we might see with GP102/GP100.
TMU throughput - something that has limited Maxwell to a degree. It still wouldn't surprise me to see the 980 Ti/Titan X shade GP104 in texture fill and narrow any gap to the new Pascal at high resolutions/DSR/FSAA.
Board power limits and overclock headroom.
truth tellerthe sad part is nbadia is still going to sell a lot of these "best" gpus to their religious flock...
"hey look im part of that orderof10tacles secrect club stuff! is awesum! i pity u lame poor outsiders"
Because that is solely a Nvidia trait obviously...

Demand the best.......get a neutered Tonga....no hyperbole there then
Posted on Reply