Friday, March 18th 2016

NVIDIA's Next Flagship Graphics Cards will be the GeForce X80 Series

With the GeForce GTX 900 series, NVIDIA has exhausted its GeForce GTX nomenclature, according to a sensational scoop from the rumor mill. Instead of going with a GTX 1000 series, which has one digit too many, the company is turning the page on the GeForce GTX brand altogether. Its next-generation high-end graphics card series will be the GeForce X80 series. Based on the performance-segment "GP104" and high-end "GP100" chips, the GeForce X80 series will consist of the performance-segment GeForce X80, the high-end GeForce X80 Ti, and the enthusiast-segment GeForce X80 TITAN.

Based on the "Pascal" architecture, the GP104 silicon is expected to feature as many as 4,096 CUDA cores. It will also feature 256 TMUs, 128 ROPs, and a GDDR5X memory interface with 384 GB/s of memory bandwidth. 6 GB could be the standard memory amount. Its texture and pixel fill rates are rated to be 33% higher than those of the GM200-based GeForce GTX TITAN X. The GP104 chip will be built on the 16 nm FinFET process, and its TDP is rated at 175 W.
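The 33% figure is at least internally consistent with the known GM200 specs, since fill rate scales as unit count times clock. A quick back-of-the-envelope sketch; the 1000 MHz clock is an assumption for illustration (the rumor gives no clock speeds):

```python
# Back-of-the-envelope check of the rumored 33% fill-rate uplift.
# GM200 (GTX TITAN X) unit counts are known specs; the GP104 figures
# are from the rumor. The 1000 MHz clock is an assumed placeholder.

def texture_fillrate_gts(tmus: int, clock_mhz: float) -> float:
    """Texture fill rate in GTexel/s: TMUs x clock."""
    return tmus * clock_mhz / 1000

def pixel_fillrate_gps(rops: int, clock_mhz: float) -> float:
    """Pixel fill rate in GPixel/s: ROPs x clock."""
    return rops * clock_mhz / 1000

titan_x_tex = texture_fillrate_gts(192, 1000)  # GM200: 192 TMUs
gp104_tex = texture_fillrate_gts(256, 1000)    # rumored GP104: 256 TMUs

uplift = gp104_tex / titan_x_tex - 1
print(f"{uplift:.0%}")  # 33%, matching the rumored figure at equal clocks
```

At equal clocks, 256 TMUs over GM200's 192 is exactly a one-third uplift, so the rumored 33% implies GP104 would not need to clock higher than TITAN X to hit it.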
Moving on, the GP100 is a whole different beast. It's built on the same 16 nm FinFET process as the GP104, and its TDP is rated at 225 W. A unique feature of this silicon is its memory controllers, which are rumored to support both GDDR5X and HBM2 memory interfaces. There could be two packages for the GP100 silicon, depending on the memory type. The GDDR5X package will look simpler, with a large pin count to wire out to the external memory chips, while the HBM2 package will be larger, to house the HBM stacks on the package, much like AMD "Fiji." The GeForce X80 Ti and the X80 TITAN will hence be two significantly different products, differing in more than just their CUDA core counts and memory amounts.

The GP100 silicon physically features 6,144 CUDA cores, 384 TMUs, and 192 ROPs. On the X80 Ti, you'll get 5,120 CUDA cores, 320 TMUs, 160 ROPs, and a 512-bit wide GDDR5X memory interface holding 8 GB of memory, with a bandwidth of 512 GB/s. The X80 TITAN, on the other hand, features all the CUDA cores, TMUs, and ROPs present on the silicon, and a 4096-bit wide HBM2 memory interface holding 16 GB of memory, at a scorching 1 TB/s of memory bandwidth. Both the X80 Ti and the X80 TITAN double the pixel and texture fill rates of the GTX 980 Ti and GTX TITAN X, respectively.
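Those bandwidth figures line up with the quoted bus widths: bandwidth in GB/s is the bus width in bits times the per-pin data rate in Gb/s, divided by 8 bits per byte. A minimal sanity check; the per-pin rates assumed below (8 Gb/s for GDDR5X, 2 Gb/s for HBM2) are not stated in the rumor:

```python
# Sanity check of the quoted memory bandwidths.
# GB/s = bus width (bits) x per-pin data rate (Gb/s) / 8 bits per byte.
# Bus widths come from the rumor; the per-pin data rates are assumed.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

x80_ti = bandwidth_gbs(512, 8.0)      # 512-bit GDDR5X, assumed 8 Gb/s/pin
x80_titan = bandwidth_gbs(4096, 2.0)  # 4096-bit HBM2, assumed 2 Gb/s/pin

print(x80_ti)     # 512.0 GB/s, as quoted for the X80 Ti
print(x80_titan)  # 1024.0 GB/s, i.e. the quoted ~1 TB/s for the X80 TITAN
```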
Source: VideoCardz

180 Comments on NVIDIA's Next Flagship Graphics Cards will be the GeForce X80 Series

#151
Prima.Vera
InVasMani: It's better than any of the Diablo games really, and it doesn't matter which one, seeing as it's got better game mechanics, graphics, story, humor, and overall fun factor. The game has literally improved every time I've played it over the last year and a half or so. Diablo 2 wasn't a bad game by any means, but even in comparison to that, Grim Dawn is still head and shoulders the superior ARPG; it has so many great things going for it. In fact, I wish your typical EQ/WoW class-based MMOs were more like it.
Listen man. Going off topic now, but the only thing Grim Dawn has better than Diablo 2 is the graphics. There is no humanly possible way to even consider comparing the gameplay, story, music, overall atmosphere and the rest with any of the clones released afterwards... including the PIECE OF SHIT called Diablo 3.
#152
Frick
Fishfaced Nincompoop
It's probably good though. It's supposed to be made by many of the people who worked on Titan Quest, which is the best Diablo clone after Diablo 2, IMO.
#153
PP Mguire
medi01: Seriously?

I like how you keep quoting this image like anybody gives a shit. TPU's own benchmarks put the Fury X behind the Titan X and 980 Ti in all 3 top resolutions. So you can quit spamming the FUD.
#154
medi01
PP Mguire: TPU's own benchmarks
Include "we didn't get a penny from nvidia (c) gameworks" games (the marvelous PrCa in particular)
PP Mguire: ...put the Fury X behind the Titan X and 980 Ti in all 3 top resolutions...
ORLY? Ah, I see: 102% vs 102%. Still, one is behind the other, true.



that chart surely debunks this as FUD, doesn't it? I mean, find one difference:



And clearly, Fury X is, to quote the original statement you had no problem with, "so far behind".
#155
EarthDog
PP Mguire: Blops 3 actually scales to your VRAM and system RAM and then uses all that it can. It can run smooth on a 4 GB card, but on my machine it will utilize around 11.5 GB.
Just curious where you see 11.5 GB... is it MSI AB? The reason I ask is because in BF4 with a 295X2, I was seeing RAM use at almost 7 GB (Ultra 2560x1440 + 30% resolution scaling). That said, that was 3.5 GB per card... I am wondering if you are sitting at 5.75 GB for each card. ;)
#156
PP Mguire
medi01: Include "we didn't get a penny from nvidia (c) gameworks" games (the marvelous PrCa in particular) and don't cover multi-GPU configurations.

And even if not, the original statement was that there was no competition in the GPU market ("so far behind"), which itself is FUD regardless of which card has a several-percent advantage, yet that wasn't a problem for you, but you absolutely had to call legit results "FUD". Which item in that list is "FUD"? Pretty please? Can't say? Oh, how freaking surprising...

Which part of this picture contradicts anything in the link that you referred to as "FUD", pretty please?



The single-GPU results match exactly.
Oh wow, so you're trying to say TPU gets paid for their results?? :roll::roll::roll::roll:

Firstly, you're looking at a benchmark put up on WCCF, which is known to spread all kinds of FUD. So yea, calling that right there. Second, the single doesn't even match up either, meaning the others won't, and sites like Hard, Guru, and TechSpot also support that fact.

Third, you didn't address the initial post properly. I said, and I quote, "you keep spamming this like anybody gives a shit", because anybody worth their salt looking at these threads knows the card lineup AND knows the picture you keep posting constantly is, in fact, FUD.
EarthDog: Just curious where you see 11.5 GB... is it MSI AB? The reason I ask is because in BF4 with a 295X2, I was seeing RAM use at almost 7 GB (Ultra 2560x1440 + 30% resolution scaling). That said, that was 3.5 GB per card... I am wondering if you are sitting at 5.75 GB for each card. ;)
MSI AB and EVGA OSD, and not with each card; that was running a single card. I only recently put the second card back in my machine, and Blops 3 came out a while ago.
#157
medi01
PP Mguire: TPU gets paid
No, I thought you knew the favourite quote by the PrCa dev.
PP Mguire: Second, the single doesn't even match up either
It perfectly matches up with TPU results. (Were you bad at math at school?)
PP Mguire: like anybody gives a shit
That's just your opinion, and I value it. Very much.
PP Mguire: ...picture you keep posting constantly is in fact, FUD.
Except you failed to support that statement and failed to comprehend TPU charts showing the opposite.
#158
PP Mguire
medi01: No, I thought you knew the favourite quote by the PrCa dev.



It perfectly matches up with TPU results. (were you bad at math at school?)


That's just your opinion, and I value it. Very much.


Except you failed to support that statement and failed to comprehend TPU charts showing the opposite.
Maybe you need glasses bud.

TPU shows the Fury X at 100 and the 980 Ti at 102 without exclusions, while the FUD shows 97% for both. Now, let's go out on a limb here and say exclusions are OK, and that the 102/102 and 97/97 are matching, sure, but what about Titan X? Can you not count, chief? Titan X is 4% higher than the matching figure, and 6% higher than the actual Fury X figure. Oh, those FUD numbers also differ from the above-mentioned Crossfire benchmarks, or do I need to quote those for you so you know what to Google? I'm not doing the work for you. And it still doesn't refute my original statement. It's not my opinion; you got ignored practically every time you posted that chart until now.
#159
medi01
PP Mguire: Maybe you need glasses bud.
Maybe you need to work on your reading comprehension.

The TPU site shows both the 980 Ti and Fury X at 102%, in case "we didn't get a penny from nVidia (c) Project Cars" is excluded. Oh, so it is in the DGLee review (which lists exactly which games it tested, and "we didn't get a penny from nVidia" clearly isn't one of them), with the same results too.

Now, Titanium X (single card vs single card) is merely 4% faster than Fury X on the TPU site, and 3% faster in the DGLee review, which is well within error margins, especially taking into account that the set of tested games is actually different.

Now, the fact that CrossFire scales better than SLI should be known to anyone who visits tech review sites regularly (unless most of their time is spent sharing their valuable opinion on who gives a flying f*ck about what), so it looks not even remotely suspicious.

PS
Good Lord, I just realized what you meant by "ahaaaa, on one site both are 97% while on the other they are at 102%". Jeez. Think about what 100% is in both cases.
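The baseline point above is easy to demonstrate: relative-performance charts rescale the same raw numbers to whichever card is pinned at 100%, so "97% vs 97%" and "102% vs 102%" can describe identical data. A toy illustration with invented fps values:

```python
# Toy illustration of chart normalization: the same raw results read
# differently depending on which card the chart uses as its 100% line.
# The fps numbers below are made up for the example.

def normalize(results: dict, baseline: str) -> dict:
    """Rescale raw fps so that `baseline` sits at 100%."""
    base = results[baseline]
    return {card: round(100 * fps / base) for card, fps in results.items()}

raw_fps = {"Fury X": 61.2, "980 Ti": 61.2, "Titan X": 63.0}

print(normalize(raw_fps, "Fury X"))   # both contenders at 100, Titan X at 103
print(normalize(raw_fps, "Titan X"))  # same data: the contenders now read 97
```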
#160
EarthDog
Jesus, and Bill Bright isn't even a part of this thread...?!!!

Take it to PM, guys... if I wanted to see/read/hear incessant bitching, I would go home to your wives. o_O :laugh:


BOOM.
#161
rtwjunkie
PC Gaming Enthusiast
EarthDog: Jesus, and Bill Bright isn't even a part of this thread...?!!!

Take it to PM, guys... if I wanted to see/read/hear incessant bitching, I would go home to your wives. o_O :laugh:


BOOM.


:laugh::laugh:
Maybe that will work better than my plea earlier.
#162
EarthDog
I tried to lighten the mood... even if that garners me an infraction for busting their chops. Worse than me... an incredible feat! :)
#163
arbiter
medi01: Seriously?

medi01: And clearly, Fury X is, to quote the original statement you had no problem with, "so far behind".
The problem with that graph is it's pretty deceptive. A good chunk of the percentage that puts Fury ahead comes from one game, and that one game was an AMD-sponsored title, so take that out of the mix and they are pretty close to dead even.
#164
medi01
arbiter: The problem with that graph is it's pretty deceptive. A good chunk of the percentage that puts Fury ahead comes from one game, and that one game was an AMD-sponsored title, so take that out of the mix and they are pretty close to dead even.
Which one game is that? (and dead even in which combination?)
#165
EarthDog
OK, not to jump into this mess, but I do have a question about that graph... how can one tell from it the performance of individual games, particularly at the single- and dual-card level? It seems like the point of this graph is to show GPU scaling overall, and it's awfully difficult to tell who is winning what at dual and below.

With that, across their testing, the Titan X is 3% faster than the 980 Ti and Fury X with a single card...
#166
medi01
EarthDog: how can one tell from it the performance of individual games?
How can you tell the performance of individual games on the TPU chart?

There is only so much information you can squeeze into a bar. In DGLee's chart I can see that, say, Fury X from Crossfire onward is faster than Titanium X in Alien Isolation, or Metro 2033, or Metro Last Light.
#167
EarthDog
medi01: How can you tell the performance of individual games on the TPU chart?

There is only so much information you can squeeze into a bar. In DGLee's chart I can see that, say, Fury X from Crossfire onward is faster than Titanium X in Alien Isolation, or Metro 2033.
By looking at the individual games on the pages prior? Did you link the Asian website you sourced that graph from earlier in your back and forth, which shows individual game performance?

Just saying that, in single card, it's nearly impossible to tell. As you use more cards, you can see more of a difference between games. In the end, it's a terrible way, particularly with a single card, to discern that data from the graph.
#168
medi01
EarthDog: By looking at the individual games on the pages prior?
No, from the summary chart alone. For some games it's easy to tell, while others are hard.
EarthDog: its a terrible way..
Providing more information (you don't see anything at all on the TPU summary chart) is definitely not "a terrible way", and if you thought that the individual charts are hidden, scroll down:
wccftech.com/amd-radeon-r9-fury-quad-crossfire-nvidia-geforce-gtx-titan-quad-sli-uhd-benchmarks/
#169
EarthDog
I didn't think anything, medi... I saw this single graph and no links until now. o_O

That link shows the different resolutions, not games (the TPU reviews DO show individual game performance).

Again, with a single card it's nearly impossible to tell, but as the number of cards goes up, you can tell.
#170
xfia
Just be smart and never read anything from them... they lie all the time, and it starts crap a lot worse than this.
#171
medi01
EarthDog: that link shows the different resolutions, not games
No, it shows games (and benchmarks; 3DMark isn't a game, right) at different resolutions:




etc.
EarthDog: I saw this single graph and no links until now....
I posted the link earlier, in post number 125 in this thread.
xfia: just be smart and never read anything from them
FFS, the bloody results MATCH THE RESULTS OF TPU...
#172
Legacy-ZA
Found a nice video on Linus Tech Tips for people who say VRAM doesn't matter. ;-)

Just an example: I have a 4 GB card. When I run Rise of the Tomb Raider, can my GPU handle it? Yes. Can my amount of VRAM handle it? No. When I run the benchmark, my FPS drops from 50 to 1 depending on what new sections need to be loaded, not to mention the texture pop-ins. I suppose there are people who haven't run into these problems because they can always afford the best; perhaps they will stay unconvinced, since they won't run into the problem, or only rarely. Exactly the same problem that Linus had with Shadow of Mordor.

Perhaps some of you will remember when Battlefield 4 launched: most people who still had 2 GB VRAM cards complained that they had a lot of stuttering and weird graphical pop-in issues, etc. When they replaced them with the newly released nVidia GFX cards with 4 GB VRAM, no more complaints. How weird, huh?

I hope this video will also help convince those who weren't convinced and those who had doubts that they really should understand that VRAM MATTERS!
#173
InVasMani
You can generally lower settings or resolution, up to a point, if you have less VRAM. It's one of those things where it most definitely can matter and impact frame rates, depending on the former. If you run out of VRAM, having enough system memory, and faster memory, can help minimize the negative impact of inadequate VRAM, up to a point. It's all a system of balance, from high to low and fast to slow; it's harder to juggle more balls at once.
#174
Prima.Vera
Legacy-ZA: I hope this video will also help convince those that weren't and those that had doubts that they really should understand, that VRAM MATTERS!
Actually, that video just clearly explains that there is no clear answer, and that it mostly depends on the game and resolution...

P.S.

I still firmly believe that 3 GB of VRAM is more than enough for running a game at 1080p with Ultra (Max) details and SMAA antialiasing. So far there is no game I've had issues with...
#175
Legacy-ZA
Prima.Vera: Actually, that video just clearly explains that there is no clear answer, and that it mostly depends on the game and resolution...

P.S.

I still firmly believe that 3 GB of VRAM is more than enough for running a game at 1080p with Ultra (Max) details and SMAA antialiasing. So far there is no game I've had issues with...
It's true, it depends on the situation. All I am saying is that I have run into the problem way more often than I would like, and it's better to have more than less. :)