# NVIDIA's Next Flagship Graphics Cards will be the GeForce X80 Series



## btarunr (Mar 18, 2016)

With the GeForce GTX 900 series, NVIDIA has exhausted its GeForce GTX nomenclature, according to a sensational scoop from the rumor mill. Instead of going with the GTX 1000 series that has one digit too many, the company is turning the page on the GeForce GTX brand altogether. The company's next-generation high-end graphics card series will be the GeForce X80 series. Based on the performance-segment "GP104" and high-end "GP100" chips, the GeForce X80 series will consist of the performance-segment GeForce X80, the high-end GeForce X80 Ti, and the enthusiast-segment GeForce X80 TITAN. 

Based on the "Pascal" architecture, the GP104 silicon is expected to feature as many as 4,096 CUDA cores. It will also feature 256 TMUs, 128 ROPs, and a GDDR5X memory interface, with 384 GB/s memory bandwidth. 6 GB could be the standard memory amount. Its texture- and pixel-fillrates are rated to be 33% higher than those of the GM200-based GeForce GTX TITAN X. The GP104 chip will be built on the 16 nm FinFET process. The TDP of this chip is rated at 175W. 






Moving on, the GP100 is a whole different beast. It's built on the same 16 nm FinFET process as the GP104, and its TDP is rated at 225W. A unique feature of this silicon is its memory controllers, which are rumored to support both GDDR5X and HBM2 memory interfaces. There could be two packages for the GP100 silicon, depending on the memory type. The GDDR5X package will look simpler, with a large pin-count to wire out to the external memory chips, while the HBM2 package will be larger, to house the HBM stacks on the package, much like AMD "Fiji." The GeForce X80 Ti and the X80 TITAN will hence be two significantly different products, beyond just their CUDA core counts and memory amounts. 

The GP100 silicon physically features 6,144 CUDA cores, 384 TMUs, and 192 ROPs. On the X80 Ti, you'll get 5,120 CUDA cores, 320 TMUs, 160 ROPs, and a 512-bit wide GDDR5X memory interface, holding 8 GB of memory, with a bandwidth of 512 GB/s. The X80 TITAN, on the other hand, features all the CUDA cores, TMUs, and ROPs present on the silicon, plus features a 4096-bit wide HBM2 memory interface, holding 16 GB of memory, at a scorching 1 TB/s memory bandwidth. Both the X80 Ti and the X80 TITAN double the pixel- and texture- fill-rates from the GTX 980 Ti and GTX TITAN X, respectively. 
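The rumored bandwidth figures are at least internally consistent with the usual formula. A quick back-of-the-envelope check (the per-pin data rates and the 256-bit GP104 bus below are back-calculated assumptions, not part of the rumor):

```python
def bandwidth_gbs(bus_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width in bits divided by 8
    (bits -> bytes), times the effective per-pin data rate in Gbps."""
    return bus_bits / 8 * data_rate_gbps

# X80 Ti rumor: 512-bit GDDR5X at 512 GB/s implies 8 Gbps per pin
print(bandwidth_gbs(512, 8))    # 512.0
# X80 TITAN rumor: 4096-bit HBM2 at 1 TB/s implies 2 Gbps per pin
print(bandwidth_gbs(4096, 2))   # 1024.0
# GP104 rumor: 384 GB/s of GDDR5X; a 256-bit bus would need 12 Gbps per pin
print(bandwidth_gbs(256, 12))   # 384.0
```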







----------



## Slizzo (Mar 18, 2016)

Well shit, to get HBM2 you gotta shell out for the Titan...

Hmmm... what to do...


----------



## jchambers2586 (Mar 18, 2016)

time to look at AMD for a new GPU if wattage goes down.


----------



## zedn (Mar 18, 2016)

It finally looks like a Titan.


----------



## the54thvoid (Mar 18, 2016)

Guru3D almost didn't want to post this 'rumour', it's that unsubstantiated...


----------



## msamelis (Mar 18, 2016)

16 GB of HBM2? This sounds unreal... Plus, it will probably cost much more than the current Titan. It also seems like quite a jump in performance from the previous generation, which is not something we are accustomed to - not that I will be complaining if it turns out to be true.


----------



## RejZoR (Mar 18, 2016)

Slizzo said:


> Well shit, to get HBM2 you gotta shell out for the Titan...
> 
> Hmmm... what to do...



AMD Fury X2? Yeah, it's stupid to have to buy a Titan to get HBM.


----------



## purplekaycee (Mar 18, 2016)

Hopefully when the new line of cards is released it will drive down the price of the great GTX 980 Ti.


----------



## oinkypig (Mar 18, 2016)

Double the floating point, so I wouldn't expect AMD to close the performance gap for another year or so after the release of their Polaris. Should 16 nm be quite as impressive as it is on paper, we'll be in for a real treat.


----------



## matar (Mar 18, 2016)

So we are gonna miss the old GTX name


----------



## oinkypig (Mar 18, 2016)

If you think anyone ever missed anti-aliasing... Crytek's CryEngine 1, which supported DX10, wreaked havoc on the first-gen GTXs for their lack of driver support, and one year later DX11 was released. DirectX 12 should be plenty of fun.


----------



## Legacy-ZA (Mar 18, 2016)

There are already games out that use more than 6 GB of VRAM; it will be laughable as well as pathetic if these new-generation graphics cards don't have more than 8 GB of VRAM for things like higher resolutions, anti-aliasing, HD texture mods for games, etc.

Well, it is only rumors, but if it turns out to be true, I will both laugh and cry at the same time. I still don't know which.


----------



## BiggieShady (Mar 18, 2016)

So what's after X80, X80 Ti and X80 Titan ... does that mean that dual gpu card would be x90 and next gen silicon will go with X180, X180 Ti, X180 Titan and X190 for dual volta gpu?


----------



## Gungar (Mar 18, 2016)

THIS IS FAKE! The "GTX 1080" has 8 GB of VRAM, and it has only 6 here!!!


----------



## RejZoR (Mar 18, 2016)

BiggieShady said:


> So what's after X80, X80 Ti and X80 Titan ... does that mean that dual gpu card would be x90 and next gen silicon will go with X180, X180 Ti, X180 Titan and X190 for dual volta gpu?



Yeah, I was wondering that too. I guess it's the only way to progress the model numbers like this.


----------



## P4-630 (Mar 18, 2016)

ATi X80.... X80TI


----------



## medi01 (Mar 18, 2016)

I missed what was the source of this information.

Is it "*some TPU guy just made this up*"? =/


----------



## RejZoR (Mar 18, 2016)

Well, knowing NVIDIA, this sounds more legit than them going full HBM on everything with new series as initially suggested.


----------



## Frick (Mar 18, 2016)

BiggieShady said:


> So what's after X80, X80 Ti and X80 Titan ... does that mean that dual gpu card would be x90 and next gen silicon will go with X180, X180 Ti, X180 Titan and X190 for dual volta gpu?



Then X800 and then they HAVE to make a X850XT PE for the lulz.


----------



## uuuaaaaaa (Mar 18, 2016)

Frick said:


> Then X800 and then they HAVE to make a X850XT PE for the lulz.



I have a Radeon X850XT PE AGP (R481 chip) and this sounds offensive xD It still stands as the best GPU I have ever owned!


----------



## efikkan (Mar 18, 2016)

GP100 with 512-bit GDDR5 *and* HBM2? It's not going to happen.

This is basically just some guys "randomly" guessing what a new generation could look like by increasing every spec by 50-100%, just for the attention. Almost everyone who has been following the news could probably guess the next generation's specs with about 80% accuracy, unless there is a huge change in architecture like Kepler was. These early wild guesses have been the norm for every generation, e.g. Fermi, Kepler, Maxwell... and they've never been right. The actual specs are usually leaked within a month or so before the product release.


----------



## Vayra86 (Mar 18, 2016)

medi01 said:


> I missed what was the source of this information.
> 
> Is it "*some TPU guy just made this up*"? =/



No, some random source made it up, and not a very bright one either. They just picked old information and reapplied it to the upcoming releases, with a pinch of salt and a truckload of wishful thinking. The list of issues with this chart is endless.

- X80 does not really work when you think of cut-down chips. X75?? For the Pascal 970? Nahhh
- X does not fit the lower segments at all. GTX in the low end becomes a GT. So X becomes a... Y? U? Is Nvidia going Intel on us? It's weird.
- X(number) makes every card look unique and does not denote an arch or a gen. So the following series is... X1? That would be an exact repeat of the previous naming scheme with two letters omitted. Makes no sense.
- They are risking exact copies of previous card names, especially the competitor's.

Then on to the specs.

- Implementing both GDDR5 and HBM controllers on one chip is weird
- How are they differentiating beyond the Titan? Titan was never the fastest gaming chip. It was and is always the biggest waste of money with a lot of VRAM. They also never shoot all out on the first big chip release. X80ti will be 'after Titan' not before.
- HBM2 had to be tossed in here somehow, so there it is. Right? Right.
- How are they limiting bus width on the cut down versions? Why 6 GB when the previous gen AMD *mid range* already hits 8?


----------



## Caring1 (Mar 18, 2016)

The picture is from a two-year-old article about stacked memory.
https://devblogs.nvidia.com/parallelforall/tag/memory/


----------



## Prima.Vera (Mar 18, 2016)

Legacy-ZA said:


> There are already games out that use *more than* 6GB VRAM..



Which Games, on what settings and resolution please? Otherwise I'm calling this a BS


----------



## Frick (Mar 18, 2016)

Prima.Vera said:


> Which Games, on what settings and resolution please? Otherwise I'm calling this a BS



Some actually do, but I have no idea if it impacts performance


----------



## Kissamies (Mar 18, 2016)

I guess they invented the name in a couple of seconds.


----------



## okidna (Mar 18, 2016)

Prima.Vera said:


> Which Games, on what settings and resolution please? Otherwise I'm calling this a BS



Rise of The Tomb Raider @ 1600x900 with Very High preset.


----------



## geon2k2 (Mar 18, 2016)

They forgot to mention the prices I think.

Let me correct that:

X80 - $1000
X80 Ti - $1500
X80 Titan - $3000


----------



## buildzoid (Mar 18, 2016)

okidna said:


> Rise of The Tomb Raider @ 1600x900 with Very High preset.


You do realize that many games will hog available VRAM just because they can, not because they have to?

After all, the Fury X manages to beat the TITAN X in TR at 4K even though it only has 4 GB of RAM compared to the TITAN X's 12 GB.

So TR doesn't need 7.5 GB of VRAM; it will just use it if it's provided.


----------



## Legacy-ZA (Mar 18, 2016)

Prima.Vera said:


> Which Games, on what settings and resolution please? Otherwise I'm calling this a BS



I fear some articles on this website go to waste on some people. Just do a search.

Rise of the Tomb Raider
Shadows of Mordor
GTA V

There might be other titles that also use more, however, these are the games I am interested in and have noticed use a lot of VRAM. 

With DX12 games coming out, I am sure we will see even more VRAM used in the near future. Games like Deus Ex - Mankind Divided and so forth. 

Then there is the modding community, for games, which add extra details to textures, more objects, more landscapes. I am sure when Fallout 4's Creation Kits comes out, that will also go over the 6GB VRAM usage for some of us.

Things like anti-aliasing and resolution also come into play for people like me who don't like jaggies. Downside? More VRAM usage.

Then there are all the monitors with their refresh rates... 60 Hz, 75 Hz, 85 Hz, 100 Hz, 120 Hz, 144 Hz+. I prefer mine stable at 120 FPS; if you run out of VRAM, you can be sure to see those numbers dip more often than you like... loosely translated: in-game stuttering.

And last but not least. It's always a good thing to have that little extra head room when it comes to VRAM to make things a little future proof, especially when you are someone like me that doesn't enjoy upgrading his GPU once a year and being wallet raped while doing it.

Many factors are involved, more than people would like to realize.



Frick said:


> Some actually do, but I have no idea if it impacts performance



The performance impact translates into what people refer to as "stuttering". This happens when new textures have to be loaded into VRAM and older textures have to be evicted. When there is enough VRAM headroom for all the textures, there is no problem, especially when you move back and forth between prior locations within the game world.

Note: other components also have to work in harmony and conjunction with each other for everything to run smoothly. For example, a PC using an SSD won't struggle as much as one that still uses an HDD.

This is often referred to as a bottleneck.
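A toy sketch of that headroom effect, treating VRAM as an LRU texture cache (purely illustrative names and sizes; no real engine or driver works exactly like this):

```python
from collections import OrderedDict

class VramCache:
    """Toy model of VRAM as an LRU texture cache. A miss means a texture
    upload, which is the stall players perceive as stuttering."""

    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0
        self.resident = OrderedDict()  # texture name -> size in MB
        self.uploads = 0               # each miss = one potential stutter

    def request(self, name, size_mb):
        if name in self.resident:          # hit: texture already in VRAM
            self.resident.move_to_end(name)
            return
        self.uploads += 1                  # miss: upload needed
        # evict least-recently-used textures until the new one fits
        while self.used + size_mb > self.capacity and self.resident:
            _, freed = self.resident.popitem(last=False)
            self.used -= freed
        self.resident[name] = size_mb
        self.used += size_mb

# Walk the same 3 GB scene twice: with headroom the second pass is all
# hits; with a tight budget every texture gets evicted and re-uploaded.
roomy, tight = VramCache(8192), VramCache(2048)
scene = [("tex%d" % i, 256) for i in range(12)]  # 12 x 256 MB textures
for cache in (roomy, tight):
    for name, size in scene * 2:
        cache.request(name, size)
print(roomy.uploads, tight.uploads)  # 12 vs 24: headroom kills the re-uploads
```

Revisiting a prior location with the roomy card costs nothing; the tight card re-uploads everything, which is exactly the back-and-forth stutter described above.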


----------



## RejZoR (Mar 18, 2016)

Games use such huge amounts of memory just because they can, not because it's necessary. Unreal Engine already has texture streaming, a system that dynamically loads textures as they become necessary, based on field of vision, object range, scene radius around the player, and other heuristic parameters.

I've tested this well in Killing Floor 2 while making a tweaker for it. With texture streaming enabled on a 4 GB card, memory usage can go up to 2 GB with everything set to Ultra at 1080p. But if I disable texture streaming, memory usage goes to 3.9 GB. The game doesn't look any better. So, there's that. If I had 12 GB, the game could fit all of it in memory. Would there be any actual benefit? Not really. Maybe less stuttering on systems with a crappy storage subsystem (slow HDD). With an SSD or SSHD like I have, I frankly don't notice any difference.
I don't know how clever other engines are, but texture streaming does wonders when you have tons of textures but are limited by VRAM capacity.
If you want to target as many users as possible while still delivering high image quality, this is an excellent way to do it.
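A rough sketch of the kind of distance heuristic described above (hypothetical numbers and formula, not Unreal's actual streamer, which also weighs screen size, FOV, and per-texture priority):

```python
import math

def streamed_mip(distance_m, full_detail_dist=4.0):
    """Toy distance-based mip selection: each doubling of distance
    beyond full_detail_dist drops one mip level (halves resolution)."""
    if distance_m <= full_detail_dist:
        return 0
    return int(math.log2(distance_m / full_detail_dist)) + 1

def mip_bytes(base_res, mip, bytes_per_texel=4):
    """Memory for a single mip level of a square RGBA8 texture."""
    res = max(base_res >> mip, 1)
    return res * res * bytes_per_texel

# Objects scattered through a scene, base textures all 4096x4096:
distances = [2, 6, 15, 40, 90, 200]
streamed = sum(mip_bytes(4096, streamed_mip(d)) for d in distances)
full = sum(mip_bytes(4096, 0) for _ in distances)
print("%.0f MiB streamed vs %.0f MiB at full resolution"
      % (streamed / 2**20, full / 2**20))
```

Even this crude version keeps only a fraction of the full-resolution data resident, which is the roughly 2 GB vs 3.9 GB gap observed in Killing Floor 2.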


----------



## Vayra86 (Mar 18, 2016)

Legacy-ZA said:


> I fear some articles on this website go to waste on some people. Just do a search.
> 
> Rise of the Tomb Raider
> Shadows of Mordor
> ...



Great story, but largely untrue in practice.

VRAM is filled because it is possible, not because it is required. When you increase IQ, you decrease the GPU's ability to push out 120/144 FPS, which means the 'stutter' is masked by the lower framerate. Frametimes are never stable and neither is FPS. The real limitation is the GPU core, not the VRAM. Memory overclocking gains are always lower than core overclocking gains.

There are almost zero situations in which the VRAM starts to complain before the GPU is out of horsepower. High-end GPUs are balanced around core versus VRAM usage.

Now, the real issues with having 'more GPU core horsepower' than the VRAM can keep up with can occur in SLI/Crossfire. But when you go into that realm, you are already saying goodbye to the lowest input-lag numbers and to a fully stutter-free gaming experience. GPUs in SLI are never in perfect sync 100% of the time. Any deviation from that sync will result in either a higher frametime OR higher input lag.

You can't just say 'a game can use 8 GB, so I need to have 8 GB GPUs'.


----------



## 64K (Mar 18, 2016)

VRAM used doesn't always mean VRAM needed for smooth play.

From the review here on the Titan X






W1zzard's conclusion after testing

"Our results clearly show that 12 GB is overkill for the Titan X. Eight GB of memory would have been more than enough, and, in my opinion, future-proof. Even today, the 6 GB of the two-year-old Titan are too much as 4 GB is enough for all titles to date. Both the Xbox One and PS4 have 8 GB of VRAM of which 5 GB and 4.5 GB are, respectively, available to games, so I seriously doubt developers will exceed that amount any time soon.

Modern games use various memory allocation strategies, and these usually involve loading as much into VRAM as fits even if the texture might never or only rarely be used. Call of Duty: AW seems even worse as it keeps stuffing textures into the memory while you play and not as a level loads, without ever removing anything in the hope that it might need whatever it puts there at some point in the future, which it does not as the FPS on 3 GB cards would otherwise be seriously compromised."

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan_X/33.html

Does COD AW use 7.3 GB of VRAM if available? Yes.
Does COD AW need 7.3 GB of VRAM? No.


----------



## rtwjunkie (Mar 18, 2016)

medi01 said:


> I missed what was the source of this information.
> 
> Is it "*some TPU guy just made this up*"? =/


Apparently you missed the source link in btarunr's OP, and you missed @the54thvoid's post about Guru3D being hesitant to post anything on it since it is not substantiated yet. FTR, bta's link is always at the bottom left of his OP.


----------



## bogami (Mar 18, 2016)

Priced high enough to give us hernias!
Again they will sell an incomplete, cut-down chip as the best gaming product (the X80 Ti is a salvaged chip with 5,120 cores, and its GDDR5X is cut down too!). The TITAN is another class (for DP and CUDA use). I hope Polaris will be better. 12.6 TFLOPS on the TITAN sounds a little low in relation to the predicted 300%. 10.5 TFLOPS for the X80 Ti also does not promise a 100% jump. We will see; keep waiting and saving money! I hope they will be wise enough to quickly issue a dual-GPU product.


----------



## L.ccd (Mar 18, 2016)

rtwjunkie said:


> Apparently you missed the source link in btarunr's OP


Regarding this, the source article itself says:


> the so called leakers just can’t live without speculating and creating their own charts, because I suppose that’s exactly what we are seeing on this graph.


And yet the same unsubstantiated crap is linked and talked about on a number of hardware sites. Why does TPU relay that made-up crap? I'd rather they not post anything on Nvidia's next gen before they have something solid to talk about. I come here to read solid info, not the same bullshit so many sites publish.

Sorry for the rant.


----------



## rtwjunkie (Mar 18, 2016)

L.ccd said:


> Regarding this, the source article itself says:
> 
> And yet the same unsubstantiated crap is linked and talked about on a number of hardware sites. Why does TPU relay that made-up crap? I'd rather they not post anything on Nvidia's next gen before they have something solid to talk about. I come here to read solid info, not the same bullshit so many sites publish.
> 
> Sorry for the rant.



True enough!  I agree completely.

Your position, however, is a lot different from saying bta or someone else on TPU made it up.


----------



## MxPhenom 216 (Mar 18, 2016)

Frick said:


> Some actually do, but I have no idea if it impacts performance



The only one i can think of off the top of my head is Shadow of Mordor.


----------



## GhostRyder (Mar 18, 2016)

Well, this is just a rumor, so I'll just think of it that way until we have substantial proof. However, if the name series is the GeForce X80 (or similar), that's going to take some getting used to lol.


----------



## EarthDog (Mar 18, 2016)

I love how we have seen GTX 1080 spoken of as if it was The Gospel when people have nooooooooooooooooooooooooooooooooo idea. Thanks!

LOL @ the VRAM discussion... SOME games will use more RAM for giggles and not show stuttering (until something else needs the space). BF4 did this... but make no mistake about it... there was stuttering with 2 GB cards at 1080p Ultra settings... regardless of whether it used 3.5 GB on a 4 GB card or 2.5 GB on a 2 GB card.


----------



## nickbaldwin86 (Mar 18, 2016)

I see what they did there.

X80 = X = 10 = 1080, so they thought X sounded "cooler" because marketing!

Time for a set of X80 Tis...


----------



## PP Mguire (Mar 18, 2016)

Lol at the convo. This would actually make a lot of sense.

Leaks, PR, and Nvidia have all said Pascal is bringing back compute performance for deep learning, which could limit all but the Titan GPU to GDDR5X. The reasoning is that the compute cards were pretty much confirmed to have HBM2, and a minimum of 16 GB at that; if that's the case, it could also mean the Titan will have the double-duty classification brought back with HBM2, leaving the rest with GDDR5X. This really isn't an issue, but I suppose people will argue otherwise.


----------



## TheDeeGee (Mar 18, 2016)

geon2k2 said:


> They forgot to mention the prices I think.
> 
> Let me correct that:
> 
> ...



Fixed it for ya.


----------



## trog100 (Mar 18, 2016)

can we assume there will be an  X60 and an X70 series.. 

the naming makes sense.. the rest of it sounds a tad on the optimistic side.. but who knows.. he he..

6 gigs of memory is a realistic "enough".. the fact that some amd cards have 8 is pure marketing hype.. no current game needs more than 4 gigs.. people who think otherwise are wrong..

i will be bit pissed off if my currently top end cards become mere mid range all in one go but such is life.. 

trog


----------



## nickbaldwin86 (Mar 18, 2016)

trog100 said:


> can we assume there will be an  X60 and an X70 series..



Of course they will make an x6x and x7x, but no one cares about those as far as rumors go. This is the future state of dreams and hopes for the big cards. But again, rumors.


----------



## AuDioFreaK39 (Mar 18, 2016)

I do not like the name "X80." It seems like a backwards step for such a powerful architecture, especially from "GTX 980," "GTX 780," "GTX 680," etc. The other issue is that many people may be inclined to believe that "GTX 980" is more powerful simply because it sounds as such.

It honestly looks like anyone could have typed this up in Excel and posted to Imgur, as the table is very generic and does not have indicators of Nvidia's more presentation-based "confidential" slides.

History of notable GPU naming schemes:
GeForce 3 Ti500 (October 2001)
GeForce FX 5800 (March 2003)
GeForce 6800 GT (June 2004)
GeForce GTX 680 (March 2012)

GeForce PX sounds more appropriate for Pascal, considering they already branded the automotive solution Drive PX 2. I would have no problem purchasing a GPU called "GeForce PX 4096," "GeForce PX 5120 Ti" or "GeForce PX Titan," where 4096 and 5120 indicate the number of shaders.

They could even do the same with Volta in 2018: GeForce VX, GeForce VX Ti, and GeForce VX Titan, etc.


----------



## 64K (Mar 18, 2016)

trog100 said:


> can we assume there will be an  X60 and an X70 series..
> 
> the naming makes sense.. the rest of it sounds a tad on the optimistic side.. but who knows.. he he..
> 
> ...



Yes the 980 Ti will be relegated to mid range in comparative performance to the Pascal and Polaris lineup once the flagships are released. It's expensive to stay on the cutting edge if that's what you want.


----------



## nickbaldwin86 (Mar 18, 2016)

AuDioFreaK39 said:


> I do not like the name "X80." It seems like a backwards step for such a powerful architecture, especially from "GTX 980," "GTX 780," "GTX 680," etc. The other issue is that many people may be inclined to believe that "GTX 980" is more powerful simply because it sounds as such.
> 
> It honestly looks like anyone could have typed this up in Excel and posted to Imgur, as the table is very generic and does not have indicators of Nvidia's more presentation-based "confidential" slides.
> 
> ...



You should apply for their "Lead of naming things in marketing Director of name stuff" position. I hear they have a new opening at the launch of every new card series. Bit of a long title, but with a small font you can get it on a business card 

Sorry, I just love how people get hung up on the name!

You must remember that marketing names these, and it is based on feel-good, not technical, reasons. However they can sell more! An X in front of anything sells!!!!!


----------



## rtwjunkie (Mar 18, 2016)

trog100 said:


> i will be bit pissed off if my currently top end cards become mere mid range all in one go but such is life..



This is always how it ends up.


----------



## EarthDog (Mar 18, 2016)

rtwjunkie said:


> This is always how it ends up.


+1... That is just the way it works.

But look at the bright side... you neuter yourself (limit FPS) so when you need the horsepower, you have some headroom built in from your curious methods.


----------



## nickbaldwin86 (Mar 18, 2016)

trog100 said:


> i will be bit pissed off if my currently top end cards become mere mid range all in one go but such is life..
> 
> trog




Yes we should just go back to hitting things with rocks!



rtwjunkie said:


> This is always how it ends up.



course


----------



## GhostRyder (Mar 18, 2016)

trog100 said:


> can we assume there will be an  X60 and an X70 series..
> 
> the naming makes sense the rest of it sounds a tad on the optimistic side.. but who knows.. he he..
> 
> ...


I don't get that; it's better that the next cards put the old cards to shame, because that means they are worth upgrading to. This round, with Maxwell and the R9 3XX, the upgrades were very minimal compared to the previous generation, which to me was poor because it made me wonder why I would want to upgrade. We need reasons to upgrade, and a card that puts the old generation flagship to shame is the perfect thing in my book, as long as it's done on the pure performance of the cards (and not by gimping performance on older-generation cards).

As for the 8 GB part, it is overkill in most cases, but going beyond 4 GB itself is not. It just means that if there is a need for more than 4 GB, we will still be OK. I do agree it's overkill, but I would say better safe than sorry.


----------



## PP Mguire (Mar 18, 2016)

trog100 said:


> can we assume there will be an  X60 and an X70 series..
> 
> the naming makes sense the rest of it sounds a tad on the optimistic side.. but who knows.. he he..
> 
> ...


My cards have 12GB, it's definitely not marketing hype.

People who think future titles and 4K gaming are realistic with 4 GB are wrong. Actually, if people thought 4 GB was enough for 1080p gaming, there wouldn't be people still complaining about the 970 like it's still a way to troll.


----------



## AuDioFreaK39 (Mar 18, 2016)

nickbaldwin86 said:


> You should apply for their "Lead of naming things in marketing Director of name stuff" position. I hear they have a new opening at the launch of every new card series. Bit of a long title, but with a small font you can get it on a business card
> 
> Sorry, I just love how people get hung up on the name!
> 
> You must remember that marketing names these, and it is based on feel-good, not technical, reasons. However they can sell more! An X in front of anything sells!!!!!



How about this instead:

GeForce PX 4000
GeForce PX 5500 Ti
GeForce PX 6500
GeForce PX 7500
GeForce PX 8500 Ti
GeForce PX2 9500 (dual-GPU)
GeForce PX Titan
GeForce PX Titan M


----------



## overclocking101 (Mar 18, 2016)

This just doesn't seem legit to me. One would think that, with the progression of cards now, the new Nvidia cards would come with 8 GB of VRAM. I'll believe it when I see it, though the new naming scheme would make sense, so that part of it may be true. We will see, I guess.


----------



## rtwjunkie (Mar 18, 2016)

AuDioFreaK39 said:


> How about this instead:
> 
> GeForce PX 4000
> GeForce PX 5500 Ti
> ...



I feel this gets too close to their prior naming schemes in the 4000, 5000, 6000, 7000, and 8000 series.  I'm guessing Nvidia thought so too.

The very first thing I thought of when I saw this were the Ti 4200 and Ti 4400.


----------



## PP Mguire (Mar 18, 2016)

overclocking101 said:


> this just doesn't seem legit to me. one would think that with the progression of cards now the new nvidia cards would come with 8gb vram. i'll believe it when I see it though the new naming scheme would make sense so that part of it may be true. we will see I guess


Nah. The GP104 chip is midrange, à la the 980. In other words: upping the high-end midrange to 6 GB, 8 GB for the enthusiast card, and then 16 GB of HBM2 for the Titan-level card. It falls into their typical strategy, really. Don't expect huge amounts of memory until midrange cards start going HBM, due to stacking. Getting high amounts of GDDR5 on a board means lots of chips.


----------



## nickbaldwin86 (Mar 18, 2016)

AuDioFreaK39 said:


> How about this instead:
> 
> GeForce PX 4000
> GeForce PX 5500 Ti
> ...



I hope you don't get the job


----------



## trog100 (Mar 18, 2016)

EarthDog said:


> +1... That is just the way it works.
> 
> But look at the bright side... you neuter yourself (limit FPS) so when you need the horsepower, you have some headroom built in from your curious methods.



not quite the same thing though is it.. my gpu power will be more than adequate for quite some time to come.. but more than adequate aint quite the same as top end though.. he he

but as other have said keeping at the leading edge is f-cking expensive and as soon as you catch up the goal posts get f-cking moved.. he he

as old as i am e-peen still plays a part in my thinking.. maybe it shouldnt but at least i am honest enough to admit that it does. .

years ago when Rolls Royce were ask the power output of their cars they had one simple reply.. they used to say "adequate".. i aint so sure that "adequate" is good enough now though.. 

trog


----------



## trog100 (Mar 18, 2016)

PP Mguire said:


> My cards have 12GB, it's definitely not marketing hype.
> 
> People who think future titles and 4k gaming is realistic with 4GB is wrong. Actually, if people thought 4GB was enough for 1080p gaming there wouldn't be artards complaining about the 970 still like it's still a way to troll.



i think it is which is why i bought a 6 gig 980 TI instead of a titan x.. at least for playing games on 12 gig  isnt of much use.. but i was really thinking of certain lower power amd cards when i made the market hype comment..

but back on topic i recon 6 gigs is okay for the next generation of graphics cards.. there will always be something higher for those with the desire and the cash for more..

in this day and age market hype aint that uncommon.. he he.. who the f-ck needs 4K anyway.. 4K is market hype.. maybe the single thing that drives it all.. i would guess the next thing will be 8K.. the whole game will start again then.. including phones with 6 inch 8K screens.. he he

i like smallish controlled jumps with this tech malarkey.. that way those who bought last year aint that pissed off this year.. i recon its all planned this way anyway.. logically it has to be..

trog

ps.. if one assumes the hardware market drives the software market which i do 6 gigs for future cards is about right.. the games are made to work on mass market hardware.. not the other way around..


----------



## 64K (Mar 18, 2016)

6 GB should be plenty for even the high-end midrange Pascal card for the next few years. I would like an X80 Ti with 8 GB, though. That quote from W1zzard in my post #33, that 4 GB of VRAM is enough for all games to date, was from a year ago, and VRAM demands will continue to go up, especially for people running DSR and ultra settings.


----------



## xfia (Mar 18, 2016)

Pretty speculative on VRAM amounts too, seeing the difference that DX12 can make in an actual game.

Lower system latency means faster rendering..


----------



## PP Mguire (Mar 18, 2016)

trog100 said:


> i think it is which is why i bought a 6 gig 980 TI instead of a titan x.. at least for playing games on 12 gig  isnt of much use.. but i was really thinking of certain lower power amd cards when i made the market hype comment..
> 
> but back on topic i recon 6 gigs is okay for the next generation of graphics cards.. there will always be something higher for those with the desire and the cash for more..
> 
> ...


12GB is great because I literally have never capped my VRAM usage even at 4k. I have seen 11GB used at the highest, and that's with the newer engines barely coming out and DX12 is barely making it's way onto our PCs. It'll be necessary later on, trust me. 

4k is not market hype, it literally looks a ton crisper than 1080p and 1440p. You can't sit here and say 4x the average resolution is "market hype", it's the next hump in the road whether some people want to admit it or not. I got my TV lower than most large format 4k monitors so I took the dive knowing Maxwell performance isn't to par, but with 4k I don't need any real amounts of AA either. 2x at most in some areas depending on the game. That being said, I'd rather not have small incremental jumps in performance because some either can't afford it or find a way to afford it. That's called stagnation, and nobody benefits from that. Just look at CPUs for a clean cut example of why we don't need stagnation. 

Games are made to work on consoles, not for PCs unless specified. That hardly happens, because the money is in the console market. That doesn't mean that shit ports won't drive up the amount of VRAM needed regardless of texture streaming. I agree though, 6GB for midrange next gen should be plenty, as I doubt the adoption of 4k will be great during the Pascal/Polaris generation. That'll be left to Volta.


----------



## nickbaldwin86 (Mar 18, 2016)

PP Mguire said:


> 12GB is great because I literally have never capped my VRAM usage even at 4k. I have seen 11GB used at the highest, and that's with the newer engines barely coming out and DX12 is barely making it's way onto our PCs. It'll be necessary later on, trust me.
> 
> 4k is not market hype, it literally looks a ton crisper than 1080p and 1440p. You can't sit here and say 4x the average resolution is "market hype", i



It isn't; he has just never played on a 4K screen and he is just jelly. The only person that would say 4K sucks is someone that hasn't experienced it.

I don't "like"/"want" 4K right now, only because of 60Hz, but that is another subject completely!

When 4K hits 144Hz I will move up, but going from 144Hz to 60Hz hurts my eyes in a real way. 4K is amazing to look at, just don't move fast LOL


----------



## PP Mguire (Mar 18, 2016)

nickbaldwin86 said:


> it isn't he has just never played on a 4K and he is just jelly. the only person that would say 4K sucks is someone that hasn't experienced it.
> 
> I don't "like"/"want" 4K right now only because of 60hz but that is another subject completely!
> 
> When 4K hits 144hz I will move up but going from 144hz to 60hz hurts my eyes in a real way. 4K is amazing to look at just don't move fast LOL


I have no issues with 60 and I came from over a year's use of 100+. Then again, my Battlefield playing days are basically over. It's all boring to me now.


----------



## xfia (Mar 18, 2016)

I feel the same.. when I play a lot of games now: zzz... I get the people that like consoles for the two-player and having fun with friends. Some would say that dreamers live another reality when they sleep, because of how long they spend doing so. If I have multiple realities to live, I dont want one of them to be zeros and ones.


----------



## arbiter (Mar 18, 2016)

geon2k2 said:


> They forgot to mention the prices I think.
> 
> Let me correct that:
> 
> ...


Let me guess, you're an AMD fanboy who conveniently forgot that the last 2 price drops were the result of Nvidia setting prices lower than AMD? The 290X was a $500+ card; here comes the GTX 970, just as fast, for $330. Then the 980 Ti for $650, and oh look, Fury was priced the same to compete.

Besides the AMD red fanboy claims about pricing, GP100 having both HBM and GDDR5(X) on that chart is a bit suspect as to how close to true those specs are. Could likely be the X80 Ti using HBM, but it's not gonna be out until later this year, like the current Ti was.


----------



## AuDioFreaK39 (Mar 18, 2016)

nickbaldwin86 said:


> I hope you don't get the job



Well, I posted it anyway as a suggestion. We will see what they say, and hopefully I will have some solid info during GTC next month.

http://fudzilla.com/news/graphics/40261-nvidia-leak-suggests-rather-dull-pascal-gpu-naming-scheme


----------



## xorbe (Mar 18, 2016)

TPU should be ashamed of the headline, if they read the original post.


----------



## Brusfantomet (Mar 18, 2016)

AuDioFreaK39 said:


> I do not like the name "X80." It seems like a backwards step for such a powerful architecture, especially from "GTX 980," "GTX 780," "GTX 680," etc. The other issue is that many people may be inclined to believe that "GTX 980" is more powerful simply because it sounds as such.
> 
> It honestly looks like anyone could have typed this up in Excel and posted to Imgur, as the table is very generic and does not have indicators of Nvidia's more presentation-based "confidential" slides.
> 
> ...



There was never a GeForce FX 4000 series, but an FX 5000 series.


----------



## AuDioFreaK39 (Mar 18, 2016)

Brusfantomet said:


> There was never a GeForce FX 4000 series, but a FX 5000 series.


Ah yes, thank you. I had it confused with the Quadro FX 4800 from 2008.


----------



## HumanSmoke (Mar 18, 2016)

AuDioFreaK39 said:


> It honestly looks like anyone could have typed this up in Excel and posted to Imgur, as the table is very generic and does not have indicators of Nvidia's more presentation-based "confidential" slides.


I actually thought that was exactly what had happened. Slow news week, especially after the Capsaicin non-reveal, so why not plant some guesswork and watch the page hits flow.


AuDioFreaK39 said:


> GeForce PX sounds more appropriate for Pascal


Too close to Nvidia's (+ ex-3Dfx's GPU design team) NV30 series, I suspect. When PCI-Express first launched, Nvidia differentiated models from AGP with the PCX name. I suspect PX might not have the marketing cachet. 


PP Mguire said:


> Leaks, PR, and Nvidia have all said Pascal is bringing back compute performance for deep learning which could limit all but the Titan GPU to GDDR5x. Reasoning behind this is because the compute cards were pretty much confirmed already to have HBM2 and a minimum of 16GB at that, and if that's the case it could also mean the Titan will have the double duty classification brought back with HBM2, leaving the rest with GDDR5x. This really isn't an issue, but I suppose people will argue otherwise.


IMO I'd think GP100 would be the only real "compute" (FP64/double precision) chip. I'd really expect GP104 to pull double duty as a high-end gaming GPU and enthusiast-level mobile option, so adding power-hungry SFUs might not be an option. As far as I'm aware, all Pascal chips will feature mixed compute allowing for half-precision (FP16) ops (as will AMD's upcoming chips), since game engines as well as other applications can utilize it effectively. I really think the days of a full-compute second (and lower) tier gaming GPU are well and truly behind us - both Nvidia and AMD have sacrificed double precision of late in the name of retaining a more balanced approach to power, die size, and application.


----------



## arterius2 (Mar 18, 2016)

Vayra86 said:


> No, some random source made it up and not a very bright one either. They just picked old information and reapplied it to the upcoming releases, with a pinch of salt and a truckload of wishful thinking. The amount of issues with this chart is endless.
> 
> - X80 does not really sound well when you think of cut down chips. X75?? For the Pascal 970? Nahhh
> - X does not fit the lower segments at all. GTX in the low end becomes a GT. So X becomes a.... Y? U? Is Nvidia going Intel on us? It's weird.
> ...



No, the names sound believable, as Nvidia probably wants to simplify its naming schemes. I always thought the whole GTX/GT/GTS shenanigans were stupid from the start; this is a move in the right direction.

X means 10 in Roman numerals, and it looks even more stupid when you write shit like gtx1080.

Nobody is copying from competitors; AMD does not have any cards named X80.


----------



## Ithanul (Mar 18, 2016)

Prima.Vera said:


> Which Games, on what settings and resolution please? Otherwise I'm calling this a BS


Unless the person was talking about a heavily modded Skyrim or other game.  Then again, I have not touched any of the past year's new AAA games (none interest me at all).

Though, finally!  Compute is back, so we can have workhorses again for other things.   

It will be interesting though whether the X80 and/or X80 Ti get GDDR5 or GDDR5X.  I would like to see a comparison of those overclocked against an overclocked Titan with HBM2, just to see if there is a difference in performance from the RAM.  Though, I couldn't care less about RAM speed most of the time, since I don't even game at high resolutions.  *sits in corner with 1080p 60Hz screen*


----------



## arterius2 (Mar 18, 2016)

Ithanul said:


> Unless the person was talking about a heavily modded Skyrim or other game.  Then again, I have not touched any of the past year's new AAA games (none interest me at all).
> 
> Though, finally!  Compute be back, be able to have work horses again for other things.
> 
> Be interesting though if the X80 and/or X80 Ti have the GDDR5 or GDDR5X.  I would like to see comparison with those overclocked to a Titan with HBM2 that is overclocked.  Just see if there be a difference in performance with the RAM.  Though, I could care less about RAM speed most of the time.  Since I don't even game at high resolutions.  *sites in corner with 1080P 60Hz screen*


You should seriously upgrade your screen before anything else in your case.


----------



## rtwjunkie (Mar 18, 2016)

There was the ATI/AMD X1000 series in 2004 or 2005 IIRC, so it definitely would have been confusing had NVIDIA gone with the 1080, etc.


----------



## ppn (Mar 18, 2016)

There are 8 GHz chips only in 8 Gb density, so 384-bit makes 12 GB, and 1.5 V will run them very hot. Better to avoid that and definitely go for HBM2.
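The arithmetic behind that 12 GB figure is easy to check - a rough sketch, assuming the usual wiring of one GDDR5/5X chip per 32-bit channel:

```python
# Each GDDR5/5X chip sits on its own 32-bit channel, so bus width fixes the chip count.
def vram_gb(bus_width_bits, chip_density_gbit):
    chips = bus_width_bits // 32           # one chip per 32-bit channel
    return chips * chip_density_gbit / 8   # gigabits -> gigabytes

print(vram_gb(384, 8))   # 384-bit bus, 8 Gb chips -> 12.0 GB
print(vram_gb(512, 4))   # the rumored X80 Ti's 8 GB on 512-bit implies 4 Gb chips
```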


----------



## PP Mguire (Mar 18, 2016)

HumanSmoke said:


> I actually thought that was exactly what has happened. Slow news week especially after the Capsaicin non-reveal, so why not plant some guesswork and watch the page hits flow.
> 
> To close to Nvidia's (+ ex-3Dfx's GPU design team) NV30 series I suspect. When PCI-Express first launched, Nvidia differentiated models from AGP with the PCX name. I suspect PX might not have the marketing cachet.
> 
> IMO I'd think GP100 would be the only real "compute" (FP64/double precision) chip. I'd really expect GP104 to pull double duty as a high end gaming GPU and enthusiast level mobile option so adding power hungry SFU's might not be an option. As far as I'm aware, all Pascal chips will feature mixed compute allowing for half-precision (FP16) ops (as will AMD's upcoming chips), since game engines as well as other applications can utilize it effectively. I really think the days of the a full compute second (and lower) tier gaming GPU are well and truly behind us - both Nvidia and AMD have sacrificed double precision in the name of retaining a more balanced approach to power, die size, and application of late.


By compute I was really only referencing the Titans because they should be the only ones with HBM2 like the Teslas.


----------



## 64K (Mar 18, 2016)

arterius2 said:


> Should seriously upgrade your screen before anything in your case.



@Ithanul mostly uses GPUs for Folding.


----------



## AuDioFreaK39 (Mar 18, 2016)

Ithanul said:


> Since I don't even game at high resolutions.  *sites in corner with 1080P 60Hz screen*



http://slickdeals.net/f/8560302-upstar-28-4k-uhd-monitor-189-00-member-s-price-samsclub-com


----------



## Frick (Mar 18, 2016)

rtwjunkie said:


> There was the ATI/AMD x-1000 series in 2004 or 2005 IIRC, so it definitely would have been confusing had NVIDIA gone with the 1080, etc.



Not really confusing, because those GPUs don't exist outside collector basements, and it was the X1800/X1900, so some ways off numerically speaking.

Edit: No man, it's coming back to me: they went down to X12xx, but those were integrated graphics; the dedicated GPUs started at X1300.


----------



## Ikaruga (Mar 18, 2016)

Another quality nonsense....


----------



## efikkan (Mar 18, 2016)

To my knowledge neither HBM2 nor GDDR5X will ship in volume products until Q4, so don't expect any high-end models any time soon.

I wouldn't give any credence to these suggested specs.


----------



## PP Mguire (Mar 18, 2016)

efikkan said:


> To my knowledge neither HBM2 nor GDDR5X will ship in volume products until Q4, so don't expect any high-end models any time soon.
> 
> I wouldn't give any credence to these suggested specs.


GDDR5x goes into mass production this summer.


----------



## Ikaruga (Mar 18, 2016)

efikkan said:


> To my knowledge neither HBM2 nor GDDR5X will ship in volume products until Q4, so don't expect any high-end models any time soon.
> 
> I wouldn't give any credence to these suggested specs.


Who would? That table says the same chip (GP100) will be available with both GDDR5X and HBM2 memory configurations. That alone is a dead giveaway, and let's not mention the rest.


----------



## MxPhenom 216 (Mar 18, 2016)

PP Mguire said:


> My cards have 12GB, it's definitely not marketing hype.
> 
> People who think future titles and 4k gaming is realistic with 4GB is wrong. Actually, if people thought 4GB was enough for 1080p gaming there wouldn't be artards complaining about the 970 still like it's still a way to troll.



Because a $1k GPU with 12GB means it's not marketing hype... when it's ENTIRELY marketing surrounding that product.


----------



## MxPhenom 216 (Mar 18, 2016)

xorbe said:


> TPU should be ashamed of the headline, if they read the original post.


There is nothing wrong with it.


----------



## PP Mguire (Mar 18, 2016)

MxPhenom 216 said:


> Because a $1k GPU with 12GB means its not marketting hype...When its ENTIRELY marketting surrounding that product.


I'm talking about the VRAM amount, not the card.


----------



## MxPhenom 216 (Mar 19, 2016)

PP Mguire said:


> I'm talking about the VRAM amount, not the card.



No shit...


----------



## Fouquin (Mar 19, 2016)

AuDioFreaK39 said:


> History of notable GPU naming schemes:
> GeForce 3 Ti500 (October 2001)
> GeForce FX 4800 (March 2003)
> GeForce 6800 GT (June 2004)
> GeForce GTX 680 (March 2012)



GeForce FX 5800*

Also not sure why you picked up on the GTX 680 as being notable, when that naming system began with GT200 and the GTX 280. (Or the GTS 150, depending on who you ask.)



AuDioFreaK39 said:


> GeForce PX sounds more appropriate for Pascal



While that's cool, it seems weird to have put so much time and money into the build-up of the "GeForce GTX" brand only to kill it off just because they need to figure out some different numbers to put in the name. It seems more likely they'd just decide on some new numbers, keep the decade-old branding that everyone already recognizes, and move on.


----------



## Caring1 (Mar 19, 2016)

arterius2 said:


> Nobody is copying from competitors, AMD does not have any cards named X80.


But they did have the All-In-Wonder series, the X800.


----------



## kiddagoat (Mar 19, 2016)

I miss the All-In-Wonder series cards... I wish they'd bring those back. AMD/ATi always did a top-notch job on the multimedia side of their cards, and those media center remotes some of those cards came with were amazing for their time.


----------



## Octopuss (Mar 19, 2016)

PP Mguire said:


> 4k is not market hype, it literally looks a ton crisper than 1080p and 1440p. You can't sit here and say 4x the average resolution is "market hype", it's the next hump in the road whether some people want to admit it or not. I got my TV lower than most large format 4k monitors so I took the dive knowing Maxwell performance isn't to par, but with 4k I don't need any real amounts of AA either. 2x at most in some areas depending on the game. That being said, I'd rather not have small incremental jumps in performance because some either can't afford it or find a way to afford it. That's called stagnation, and nobody benefits from that. Just look at CPUs for a clean cut example of why we don't need stagnation.


Some people are more than happy with 1080/1200p and don't intend to buy larger monitors, you know?


----------



## Prima.Vera (Mar 19, 2016)

Personally I am waiting for the new gen cards as a reason to buy those 3440x1440 21:9 curved monitors...


----------



## trog100 (Mar 19, 2016)

i expect a 25% to 30% performance increase for the same money from the next generation of cards..

which in reality means not all that much.. 80 fps instead of 60 fps or 40 fps instead of 30 fps..

my own view is that once you add in g-sync or free-sync anything much over 75 fps doesnt show any gains..

4K gaming for those that must have it will become a little more do-able but not by much.. life goes on..

affordable VR will also become a bit more do-able..

trog


----------



## Kissamies (Mar 19, 2016)

nickbaldwin86 said:


> I see what they did there.
> 
> X80 = X = 10 = 1080  so they thought X sounded "cooler" because marketing!
> 
> Time for a set of X80 Ti,s ...


That's what ATI did 12 years ago. After the 9800 came the X800 = 10800; nothing new in this.


damn, should have read the other comments too..


----------



## RejZoR (Mar 19, 2016)

AMD's current naming scheme is also good. The R9, R7, and R5 designators tell you what class a card belongs to, and then the actual model digits follow. Plus it sounds good. R9 390X. Or R9 Fury X.


----------



## the54thvoid (Mar 19, 2016)

I think we'll all find the next Nvidia flagship card is called:

1) "That's so too super expensive - you must be fanboyz to buy it", or
2) "Nvidia crushed AMD, Team red is going bust lozers", or
3) some other alpha numerical variant with a possible 'Titan' slipped in.

And yes, I'm pushing this pointless 'news' piece to get to the 100-post mark.  Come on everyone, chip in to make futility work harder.


----------



## PP Mguire (Mar 19, 2016)

Octopuss said:


> Some people are more than happy with 1080/1200p and don't intend to buy larger monitors you know?


The same people said the same thing about their 19" 1280x1024 Dell monitors, and now I bet they're all running 1080p or 1440p. I too once said 24" 1200p was all I needed for great gaming in '09, and now I'm running a 48" 4k TV. Times change, people change, some faster than others. Even Sony/Microsoft are releasing revamped consoles to support 4k. Then again, nobody has a gun to your head saying upgrade, either. You like 1080p? Cool, a 980ti is an absolute monster for 1080p.


----------



## trog100 (Mar 19, 2016)

printers are rated in dots per inch.. DPI..

maybe monitors need rating the same way.. PPI.. pixels per inch.. your 4K 48 inch TV makes sense to me but when i see 4K on a 17 inch laptop it just makes a nonsense of it..

its like the megapixel race with still cameras.. for web viewing you dont need that many.. to make errr 48 inch prints you do though..

4K is 8 million pixels.. at what size point (or viewing distance) it simply becomes unnoticeable i havent a clue but there must be one..

my 1080 24 inch monitor at my normal viewing distance looked okay to me.. my 1440 27 inch monitor at the same viewing distance still looks okay to me..

however quite what sticking my nose 12 inches away from a 48 inch TV would make of things i dont know.. 

4K doesnt come free.. at a rough guess i would say it takes 4 x the gpu power to drive a game than 1080 does..

i would also guess that people view a 48 inch TV from a fair distance away.. pretty much like they do with large photo prints..

but unless viewing distances and monitor size are taken into account its all meaningless..

trog
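The PPI idea above is easy to put numbers on; a quick sketch using the standard diagonal pixels-per-inch formula (the example sizes are the ones mentioned in this thread):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    # pixels along the diagonal divided by the diagonal length in inches
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1920, 1080, 24)))   # 24" 1080p monitor: ~92 PPI
print(round(ppi(3840, 2160, 48)))   # 48" 4K TV: ~92 PPI -- the same density
print(round(ppi(3840, 2160, 17)))   # 17" 4K laptop: ~259 PPI
```

So a 48" 4K TV has roughly the same dot pitch as a 24" 1080p monitor; the 17" 4K laptop is where the density (and arguably the diminishing returns) piles up.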


----------



## 64K (Mar 19, 2016)

Tech relentlessly moves forward. It always does. Always has. Always will. What is considered adequate this year is considered obsolete in a few years. 

I remember when 320 X 200 resolution was considered cool on my C64. I used Display List Interrupts (Raster Interrupts) using Machine Language to put more Sprites on screen at once than would have normally been possible for the hell of it. Rose colored glasses? Yes, but it was good times for us.


----------



## xfia (Mar 19, 2016)

102


----------



## PP Mguire (Mar 19, 2016)

trog100 said:


> printers are rated in dots per inch.. DPI..
> 
> maybe monitors need rating the same way.. PPI.. pixels per inch.. your 4K 48 inch TV makes sense to me but when i see 4K on a 17 inch laptop it just makes a nonsense of it..
> 
> ...


You're forgetting what a higher rendering resolution does too; it's not all just screen and PPI. My 4k TV is at the same distance as I had my 27" ROG Swift. The clarity in gaming is a substantial difference between even 1440p and 4k, regardless of screen size. Before, I was using copious amounts of AA, and now the most I use is 2x for aggressive lines like fences or rooftops. Naturally it depends on the game in the end, but even silly things like Rocket League show a major improvement. To kind of keep it on topic, this is where my whining about wanting a newer GPU architecture comes into play. Yes, it does take more power. I wouldn't say 4x the GPU power, but a bit more. Tbh I didn't expect to keep the TV, as it was supposed to be handed off to my gf, but I liked it so much I sold my Swift and now I'm paying the consequences. 
That's ok. I'm on an old-school gaming kick again, so most of my time is spent on my 98 box lately. It'll tide me over enough, and 2x Titan X under water is more than plenty to max the frame cap in Rocket League, even with INI mods.


----------



## xfia (Mar 19, 2016)

ya know the other day I held a galaxy s4 and s6 next to each other and took some pics.. even at such a small screen size the winner was clear.


----------



## Dethroy (Mar 19, 2016)

trog100 said:


> [...]
> 4K dosnt come free.. at a rough guess i would say it takes 4 x the gpu power to drive a game at than 1080 does..
> [...]
> trog


3840/1920 = 2
2160/1080 = 2
2*2 = 4

Good guess. But it only takes one glance at the numbers to see it is 4x the pixels 

But the 16:9 aspect ratio is so yesteryear. Can't believe that the adoption of 21:9 is taking so long, and that even tech-savvy people like here on tpu still don't have it on their radar.
I'd prefer 3440x1440 over 3840x2160 any day. Can't wait for the arrival of 1440p Ultrawide 144Hz HDR. 4K is dogshit and needs 67.44% more horsepower - but anyone foolish enough to believe all that marketing crap deserves to pay the price...
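The pixel counts behind both percentages check out (note the 67.44% figure compares 4K against 3440x1440 ultrawide, not against 1080p):

```python
# Pixel counts for the resolutions discussed in this thread
res = {"1080p": (1920, 1080), "1440p": (2560, 1440),
       "UW1440": (3440, 1440), "4K": (3840, 2160)}
px = {name: w * h for name, (w, h) in res.items()}

print(px["4K"] / px["1080p"])                          # 4.0 -> exactly 4x the pixels
print(round(100 * (px["4K"] / px["UW1440"] - 1), 2))   # 67.44 -> % more than ultrawide
```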


----------



## PP Mguire (Mar 19, 2016)

Dethroy said:


> 3.840/1.920 = 2
> 2.160/1.080 = 2
> 2*2 = 4
> 
> ...


I have a 34" 3440x1440 Dell at work and I don't like it. I also owned an LG Freesync 29" ultrawide and didn't like that either. Vertical size is way too small for me. Most of my PC buds have the same issue with it.


----------



## Frick (Mar 19, 2016)

To the 4K users: how does old games look on it? Say Diablo 2 unmodded.


----------



## Prima.Vera (Mar 20, 2016)

Frick said:


> To the 4K users: how does old games look on it? Say Diablo 2 unmodded.


LOL. Saw that, actually. 800x600 stretched on a 4K monitor... blur pr0n fest. Like watching a very low-res video on your 1080p monitor.... ))

However, if you use a hack, it looks like this ))


----------



## trog100 (Mar 20, 2016)

PP Mguire said:


> You're forgetting what a higher rendering resolution does too, it's not all just screen and PPI. My 4k TV is the same distance as I had my 27" ROG Swift. The clarity in gaming is a substantial difference between even 1440p to 4k regardless of screen size. Before I was using copious amounts of AA and now the most I use is 2x for aggressive lines like fences or roof tops. Naturally it depends on the game in the end, but even silly things like Rocket League show a major improvement. To kind of keep it on topic, this is where me whining about wanting a newer GPU architecture comes into play. Yes, it does take more power. I wouldn't say 4x the GPU power but a bit more. Tbh I didn't expect to keep the TV as it was supposed to be handed off to my gf but I liked it so much I sold my Swift and now I'm paying the consequences
> That's ok. I'm on an old school gaming kick again so most of my time is spent on my 98 box lately. It'll tide me over enough and 2x Titan X under water is more than plenty to max the frame cap in Rocket League, even with INI mods.



oddly enough i think using copious amounts of AA is more habit than anything else..

i have been experimenting with turning it off at 1440.. running the heaven or valley benchmarks and looking very carefully is a good test to use.. 8 x AA creates a blurry image.. turning it off creates a sharper more detailed image.. can i see jaggies.. maybe in certain situations but as a general rule the image just looks sharper with better defined edges..

but i am currently running my games at 1440 with AA either off or very low.. i am into a power saving mode which is why i dont just run everything balls out on ultra settings..

again this all comes down to screen size and viewing distances.. i know for a fact it does with high quality still images.. so far i cant see any reason the same principle should not apply to moving images..

currently i am lowering settings to see just what i can get away with before it noticeably affects game play visuals..

trog


----------



## Prima.Vera (Mar 20, 2016)

Dethroy said:


> But the 16:9 aspect ratio is so yesteryear. Can't believe that the adoption of 21:9 takes so long that even tech savvy people like here on tpu still don't have it on their radar.
> I'd prefer 3.440 * 1.440 over 3.840x2.160 anyday. Can't wait for the arrival of 1.440p Ultrawide 144Hz HDR. 4K is dogshit and needs 67.44% more horsepower - but anyone foolish enough believing all that marketing crap deserves to pay the price...



Completely agree. Personally I am waiting for this card to see if it can push games on 3440x1440 without any issues, because the 34 21:9 incher is already in my plans.


----------



## Ithanul (Mar 20, 2016)

arterius2 said:


> Should seriously upgrade your screen before anything in your case.


Have no interest in upgrading my screen.  I don't game with both my Tis anyway.  Note, I do have two higher-res screens, but neither of them is ever going to see gaming, since they are for other things. *Dell UltraSharp 30-inch 1600p and a Wacom Cintiq 24HD that weighs a ton*

Plus, I have not gamed on my rig for the past several months.  Been busy playing the Wii U instead.  



64K said:


> @Ithanul mostly uses GPUs for Folding.


Yep, 24/7 to the WALL!  Plus, I abuse the crap out of GPUs.  I have a dead GTX Titan to my name (good thing for the EVGA warranty).  Though, I do more than just fold.  My main rig is my powerhouse jack-of-all-trades: folding, BOINCing, 3D rendering, RAW photos, etc.  Also, I just got two old IBM servers.  Woot, now to get some RAM and SCSI HDDs for them and get them working.


----------



## Katanai (Mar 20, 2016)

Well, personally I'm waiting for the next generation of GPUs to be released because I have to upgrade soon. I don't know if it will be exactly like this article says, but I know Nvidia will be superior once again, and I'll have to read countless posts on TPU about how AMD is better although it isn't, how Nvidia is evil, bla bla bla. All I have to say is: see you in 2017, AMD fanboys, when Nvidia will be superior to AMD like it was in 2016, 2015, 2014, etc... and I will be really sad about it.


----------



## PP Mguire (Mar 20, 2016)

trog100 said:


> oddly enough i think using copious amounts of AA is more habit than anything else..
> 
> i have been experimenting with turning it off at 1440.. running the heaven  or valley benchmarks and looking very carefully is a good test to use.. 8 x AA creates a blurry image.. turning it off creates a sharper more detailed image.. can i see jaggies.. maybe in certain situations but as a general rule the image just looks sharper with better defined edges..
> 
> ...


How AA affects your image quality entirely depends on the type of AA being used. Multi-sampling in general only smooths geometry edges, super-sampling basically renders the entire scene at a higher resolution (which is why Metro killed many rigs with it enabled), and techniques like FXAA use a blurring method to reduce jaggies at a much lower cost to performance. Of course this is really simplified, but in general MS and SS shouldn't blur your image. Moving from 1080p/1440p to 4k is basically SS in itself, which is why I now prefer it. I use 2x MS if the situation calls for it, which is hardly ever.
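For the curious, super-sampling in its simplest form is just rendering at a multiple of the target resolution and box-filtering back down. A toy sketch of that resolve step on a grayscale image (not any engine's actual implementation):

```python
def downsample_2x(img):
    """Average each 2x2 block of a 2x-supersampled grayscale image."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

# A hard edge rendered at 2x resolution...
hi = [[0, 0, 1, 1],
      [0, 0, 1, 1],
      [0, 1, 1, 1],
      [0, 1, 1, 1]]
# ...resolves to a softened edge: the partially covered pixel becomes 0.5
print(downsample_2x(hi))   # [[0.0, 1.0], [0.5, 1.0]]
```

That intermediate 0.5 is the anti-aliasing: the jagged stair-step is traded for a blend weighted by how much of the pixel the edge covers.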


----------



## Caring1 (Mar 20, 2016)

the54thvoid said:


> And yes, I'm, pushing this pointless 'news' piece to get to the 100 post mark.  Come on everyone, chip in to make futility work harder.


Futility seems pointless, I'm still thinking about procrastinating.


----------



## medi01 (Mar 20, 2016)

L.ccd said:


> And yet the same unsubstantiated crap is linked and talked about on a number of hardware sites. Why does TPU relay that made-up crap?


Cause, moar clicks and most active comments section, I guess...


----------



## Frick (Mar 20, 2016)

Prima.Vera said:


> LOL. Saw that actually. 800x600 stretched on a 4K monitor....Blur pr0n fest. Like watching a very low res video on your 1080p monitor.... ))
> 
> However, if you use a hack, it looks like this ))



And that is just daft IMO.  Higher res is good up to a point, but in most old games I've found it best to stay on the low side. Everything becomes so tiny.


----------



## the54thvoid (Mar 20, 2016)

Katanai said:


> Well, personally I'm waiting for the next generation of GPU's to be released because I have to upgrade soon. I don't know if it will be exactly like this article says but I know Nvidia will be superior once again and I'll have to read countless posts on TPU about how AMD is better although it isn't, how Nvidia is evil bla bla bla. All I have to say is: see you in 2017 AMD fanboys when Nvidia will be superior to AMD like it was in 2016, 2015, 2014, etc... and I will be really sad about it:



Spoken like a true fanboy. FWIW, if Fiji clocked 10-15% higher, it would be faster than the 980ti.  The only reason the 980ti is favoured is that it was designed to be more energy efficient and therefore clocks way higher.  And before you say I'm a fanboy, my avatar is a 75.9% ASIC 980ti Kingpin, with a hideously expensive US-imported Bitspower WB.

Fury X is a good card - it pretty much equals (or exceeds) an out-of-the-box 980ti.  It just doesn't match it when overclocked, and as a lot of people point out, most non-techy folk don't overclock.



Caring1 said:


> Futility seems pointless, I'm still thinking about procrastinating.



Wow!  That's a metaphysical state of knowing you're unsure about even knowing if you want to start thinking about it.  I bow to your supreme state of mental awareness.


----------



## BiggieShady (Mar 20, 2016)

Prima.Vera said:


> However, if you use a hack, it looks like this ))


Mindboggling. I couldn't resist overlaying the original Diablo 2 game view on your image to get a sense of scale.
Funny coincidence: what you see on screen at 4K is roughly the area of the minimap in the original:


----------



## Katanai (Mar 20, 2016)

the54thvoid said:


> Spoken like a true fanboy. FWIW, if Fiji clocked 10-15% higher, it would be faster than the 980ti.  Only reason the 980ti is favoured is because it was designed to be more energy efficient, therefore clocks way higher.  And before you say I'm a fanboy my avatar is a 75.9% ASIC 980ti Kingpin.  With a hideously expensive US imported Bitspower WB.



Yeah, yeah, IF Fiji clocked higher and IF the 980ti clocked lower and IF we lived on another planet, then it would be faster than the 980ti. And IF on that other planet Intel didn't exist, then AMD would have the best CPUs in the world. In the meantime, on planet Earth, TPU is one of the last corners of the internet where people still believe that AMD is worth anything...


----------



## trog100 (Mar 20, 2016)

there is a very simple reason one team's chip can overclock more than the other team's.. the winning team sets the pace..

one chip is cruising (can go faster).. the other chip is running close to balls out (cant go faster) trying to keep up..

this is also why the winning team's products (at stock) will use less power, generate less heat and make less noise..  its a three way win..

in a way its a rigged race with one team running just fast enough to stay in the lead..

the winning teams currently are intel and nvidia.. not a lot else to say..


trog


----------



## EarthDog (Mar 20, 2016)

What??? 

You do know that AMD processors overclock more, percentage-wise, on average, right? Also, non-Fury products (read: Rx series) tend to overclock quite well... there are exceptions on both sides. 

But there are other reasons why performance, power use, and overclocking differ. 

Creative, that thinking... I will give you that!


----------



## the54thvoid (Mar 20, 2016)

Katanai said:


> Yeah, yeah, IF Fiji clocked higher and IF the 980ti clocked lower and IF we lived on another planet, then it would be faster than the 980ti. And IF on that other planet Intel didn't exist, then AMD would have the best CPUs in the world. In the meantime, on planet Earth, TPU is one of the last corners of the internet where people still believe that AMD is worth anything...



I own a 980ti for a reason - it is the fastest single gpu card you can buy.  I know Fiji's core is close to the top end while Maxwell is running easy but the next chips are unknowns.  If AMD get the shrink right they will have a formidable card.  Nvidia ripped out Kepler's compute component with Maxwell; that's why they still sell a Kepler chip as one of their top-end HPC parts.  

https://www.techpowerup.com/207265/nvidia-breathes-life-into-kepler-with-the-gk210-silicon.html

That sacrifice on Maxwell allowed the clocks to go higher with a lower power draw.  Fiji kept a relevant chunk of compute and in the next process shrink it may turn out quite well for them.

I had a 7970 that clocked at 1300 MHz (huge back then but a lot of them did it).  My Kepler card didn't clock so well (compute is hot to clock) but with Maxwell and lower compute, the clocks are good.  The base design of Fiji, which Polaris will be based on(?), has good pedigree for DX12.  If Nvidia drop the ball (highly unlikely) AMD could take a huge lead in next gen gfx.

As it is, I don't believe Nvidia will drop the ball.  I think they'll bring something quite impressive to the 'A' game.  At the same time, I think AMD will also be running with something powerful.  We can only wait and see.

FTR - there are many forums out there that believe AMD are worth something.


----------



## trog100 (Mar 20, 2016)

EarthDog said:


> What???
> 
> You do know that AMD processors overclock more % wise on average, right? Also, non-Fury products (read: RX series) tend to overclock quite well... there are exceptions on both sides.
> 
> ...



its pretty logical thinking.. one team will always have the edge.. that team will also hold back something in reserve.. all it has to do is beat the other team..

the team that is struggling to keep up doesn't have this luxury.. in simple terms it has to try harder.. trying harder means higher power usage, more heat and more noise..

from a cpu point of view amd have simply given up trying to compete.. they are still trying with their gpus..

when you make something for mass sale it needs a 5 to 10 percent safety margin to avoid too many returns..

back in the day when i moved from an amd cpu to an intel.. i bought a just released intel chip.. it came clocked at 3 gig.. at 3 gig it pissed all over the best amd chip.. within a few days i was benching it at 4.5 gig without much trouble..

a pretty good example of what i am talking about.. a 50% overclock is only possible if the chip comes underclocked in the first place..

trog


----------



## EarthDog (Mar 20, 2016)

trog100 said:


> a pretty good example of what i am talking about.. a 50% overclock is only possible if the chip comes underclocked in the first place..


That is how things work though, trog. It all depends on the quality of the silicon, binning, and sales, actually.


----------



## medi01 (Mar 20, 2016)

Katanai said:


> Yeah, yeah, IF Fiji



F**k ifs:






http://wccftech.com/amd-radeon-r9-f...ia-geforce-gtx-titan-quad-sli-uhd-benchmarks/


----------



## FRAGaLOT (Mar 20, 2016)

I don't know why nvidia won't name the products after the cores they have in them, "GeForce GP104" sounds like a fine product name along with "GeForce GP104 Ti" and "GeForce GP104 Titan" and whatever other gimped versions of the "GP10x" they make of this core for cheap cards.


----------



## PP Mguire (Mar 20, 2016)

Frick said:


> To the 4K users: how does old games look on it? Say Diablo 2 unmodded.


A little later today I'm going to try putting my 98 box on the 4k screen


----------



## FRAGaLOT (Mar 20, 2016)

BiggieShady said:


> So what's after X80, X80 Ti and X80 Titan ... does that mean that dual gpu card would be x90 and next gen silicon will go with X180, X180 Ti, X180 Titan and X190 for dual volta gpu?


I don't think nvidia has made a dual GPU single-card in ages, reference or 3rd party. That's more something AMD tends to do; nvidia has never had the need to make such a product since the competition is so far behind, plus it would probably be way more expensive than just SLIing two cards.


----------



## FRAGaLOT (Mar 20, 2016)

Frick said:


> To the 4K users: how does old games look on it? Say Diablo 2 unmodded.


Wasn't Diablo 2 just updated recently by Blizzard to run on modern PCs? Doubt it supports 4k tho.


----------



## PP Mguire (Mar 20, 2016)

FRAGaLOT said:


> I don't think nvidia has made a dual GPU single-card in ages, reference or 3rd party. That's more something AMD tends to do; nvidia has never had the need to make such a product since the competition is so far behind, plus it would probably be way more expensive than just SLIing two cards.


Titan Z was last gen and Nvidia. They allowed AIB partners to sell the card but like all Titan cards they couldn't modify it. The last dual GPU that was modified to any extent was the 590.



FRAGaLOT said:


> Wasn't Diablo 2 just updated recently by Blizzard to run on modern PCs? Doubt it supports 4k tho.


I believe so, but idk what the update all has. Was never really a Diablo fan.


----------



## FRAGaLOT (Mar 20, 2016)

EarthDog said:


> You do know that AMD processors overclock more % wise on average, right?


That means nothing since Intel CPUs still outperform AMD CPUs. If overclocking an AMD CPU gained you that much more performance than what you can squeeze out of an Intel CPU, then everyone would be using AMD CPUs like it was the early 2000s again.


----------



## FRAGaLOT (Mar 20, 2016)

PP Mguire said:


> Titan Z was last gen and Nvidia. They allowed AIB partners to sell the card but like all Titan cards they couldn't modify it. The last dual GPU that was modified to any extent was the 590. I believe so, but idk what the update all has. Was never really a Diablo fan.



Yeah i just couldn't recall the last time nvidia ever did a dual GPU card, which AMD does all the time at every revision.

As far as i know the Diablo 2 update allows the game to run on modern PCs and operating systems (64-bit OSes), considering most people were playing it on Windows 9x back in the day.


----------



## trog100 (Mar 20, 2016)

EarthDog said:


> That is how things work though, trog. It all depends on the quality of the silicon, binning, and, sales, actually.



the chip i mentioned was on pre-order.. i had one of the first in the country.. it was a brand new line.. intel knew full well what it was capable of.. but knowing this they chose to clock it at 60% of what it could have been clocked at.. fast enough to clearly beat the opposition but not fast enough to make fools of them..

8 years later my current end of the line intel cpu is running at 4.5 gig.. very similar to what my 8 year old intel chip could do.. and very similar to what the latest generation intel chips can do..

it does make one wonder just what would have happened if that chip had not been deliberately down clocked.. where would we be now.. he he

one thing is for sure.. intel have had a pretty easy 8 years.. 

trog


----------



## EarthDog (Mar 20, 2016)

FRAGaLOT said:


> That means nothing since Intel CPUs still outperform AMD CPUs. If overclocking an AMD CPU gained you that much more performance than what you can squeeze out of an Intel CPU, then everyone would be using AMD CPUs like it was the early 2000s again.


while that is true.. that wasn't remotely a part of what we were discussing. Look at the post I quoted.


----------



## arbiter (Mar 20, 2016)

EarthDog said:


> while that is true.. that wasn't remotely a part of what we were discussing. Look at the post I quoted.


Ok lets get back on topic of Nvidia gpu and drop the whole AMD cpu discussion as that is probably some other thread.


----------



## Frick (Mar 20, 2016)

FRAGaLOT said:


> As far as i know the Diablo 2 update allows the game to run on modern PCs and operating system (64bit OSes), considering most people were playing it on Windows 9x back in the day.



This is getting way off topic, but it ran just fine on modern OSes before.


----------



## Ubersonic (Mar 21, 2016)

There's a certain comedy to this: after the 9000 series they went to the 200 series instead of an X000 series, and now after the 900 series they are going to the X00 series lol.

No points for consistency, Nvidia 

(Yeah, I know there was a 100 series before the 200, but it was OEM-only rebrands.)


----------



## Basard (Mar 21, 2016)

Prima.Vera said:


> LOL. Saw that actually. 800x600 stretched on a 4K monitor....Blur pr0n fest. Like watching a very low res video on your 1080p monitor.... ))
> 
> However, if you use a hack, it looks like this ))


That's the most beautiful thing I've ever seen!


----------



## RejZoR (Mar 21, 2016)

Then again they maybe want to differentiate the series more by making drastic name changes instead of logical ones...


----------



## laszlo (Mar 21, 2016)

as nvidia like green they shall name their cards :
-leprechaun
-goblin
-grinch
-ipkiss
-lantern
-t.m.n.t.
-wazowski
-hulk
-kermit
-yoda
-shrek
.....

plenty of green characters to choose from....

i couldn't resist


----------



## 64K (Mar 21, 2016)

FRAGaLOT said:


> I don't know why nvidia won't name the products after the cores they have in them, "GeForce GP104" sounds like a fine product name along with "GeForce GP104 Ti" and "GeForce GP104 Titan" and whatever other gimped versions of the "GP10x" they make of this core for cheap cards.



Cool names sell hardware. Look at all the products that carry the "Gaming" logo. There is a reason manufacturers do that. Calling a card by the engineering term won't help to sell cards.


----------



## InVasMani (Mar 21, 2016)

Prima.Vera said:


> Which Games, on what settings and resolution please? Otherwise I'm calling this a BS


Rainbow Six Siege on Ultra can easily surpass 6 GB of VRAM, same with the latest CoD, and even more so depending on how much system memory you have installed. From what I've seen, higher VRAM can offset the need for and impact of more system memory, up to a point at least, for gaming.


----------



## EarthDog (Mar 21, 2016)

That makes little sense considering what is in VRAM is different than what is in the system RAM...


----------



## InVasMani (Mar 21, 2016)

FRAGaLOT said:


> Wasn't Diablo 2 just updated recently by Blizzard to run on modern PCs? Doubt it supports 4k tho.


Just play Grim Dawn at 4K instead; it's a vastly better game and the price is pretty cheap all things considered, plus it's getting modding tools on top of an already brilliant ARPG that doesn't feature Korean spam bots selling gold and items like every Blizzard game, because they require bnet and they even spam you on that outside of the games themselves.



EarthDog said:


> That makes little sense considering what is in vRAM is different than what is in the System Ram...


 http://www.techpowerup.com/217308/black-ops-iii-12-gb-ram-and-gtx-980-ti-not-enough.html

What's in storage goes into system memory and CPU cache and then into VRAM; where do you think it fetches and accesses those textures from? If you run out of VRAM, your next logical fast-access, high-density fallback is your system memory, followed by your actual storage, which is more of a non-volatile cache for memory. 

Try playing a game that loads textures on the fly, like World of Warcraft or Diablo 3, with a traditional HD and see what happens: stutter stutter stutter. When you run out of available faster-access VRAM on the GPU, what do you think is going to happen? Stutter stutter stutter. 

That's also where higher-bandwidth, lower-latency RAM helps as well, and explains why it drastically impacts APUs, as they don't have dedicated VRAM and rely on system memory.
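The fallback chain described here can be sketched as a toy model (the tier latencies below are made-up placeholders, not measurements, and `fetch_cost` is a hypothetical helper, purely to illustrate why each spill-over tier hurts more):

```python
# Toy model of the texture-fetch hierarchy described above:
# VRAM -> system RAM -> disk, each fallback dramatically slower.
# The per-MB costs are illustrative placeholders, not real numbers.

TIER_COST_PER_MB = {"vram": 1, "ram": 10, "disk": 1000}

def fetch_cost(texture_mb, vram_free_mb, ram_free_mb):
    """Return a rough relative cost for streaming one texture."""
    if texture_mb <= vram_free_mb:
        return texture_mb * TIER_COST_PER_MB["vram"]
    if texture_mb <= ram_free_mb:
        # Spills to system RAM: stutter.
        return texture_mb * TIER_COST_PER_MB["ram"]
    # Spills all the way to storage: stutter stutter stutter.
    return texture_mb * TIER_COST_PER_MB["disk"]

# Plenty of free VRAM: cheap.
assert fetch_cost(64, vram_free_mb=2048, ram_free_mb=8192) == 64
# VRAM exhausted, falls back to system RAM: 10x the cost.
assert fetch_cost(64, vram_free_mb=0, ram_free_mb=8192) == 640
# Both exhausted, falls back to disk: 1000x the cost.
assert fetch_cost(64, vram_free_mb=0, ram_free_mb=0) == 64000
```

The absolute numbers are fiction, but the shape is the point: each tier you fall back to is an order of magnitude (or more) slower, which is why more system RAM can soften, but never eliminate, a VRAM shortfall.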


----------



## EarthDog (Mar 21, 2016)

Ahhhhhhhhhhh, I see...after thinking a bit more critically about it.... it makes sense. If you are using more vRAM than what your GPU allows, it 'pages' out to the System RAM....this also depends on the settings and the card so it is not a hard/fast rule. 

Please edit posts instead of double posting... they don't like that 'round these parts.


----------



## PP Mguire (Mar 21, 2016)

Blops 3 actually scales to your VRAM and system RAM then uses all that it can. It can run smooth on a 4GB card but on my machine will utilize around 11.5GB.


----------



## anubis44 (Mar 22, 2016)

oinkypig said:


> double the floating point so i wouldnt expect AMDs to close the performance gap for another year or so after release of their polaris  "
> 
> 
> 
> ...



So you think AMD's Polaris will be like chocolate soft-serve ice cream with eyeballs? Very interesting.


----------



## Frick (Mar 22, 2016)

InVasMani said:


> Just play Grim Dawn at 4K instead; it's a vastly better game and the price is pretty cheap all things considered, plus it's getting modding tools on top of an already brilliant ARPG that doesn't feature Korean spam bots selling gold and items like every Blizzard game, because they require bnet and they even spam you on that outside of the games themselves.



I was about to write something but realized you're probably talking about Diablo 3. Crisis averted.


----------



## InVasMani (Mar 22, 2016)

Frick said:


> I was about to write something but realized you're probably talking about Diablo 3. Crisis averted.


It's better than any of the Diablo games, really; it doesn't matter which one, seeing as it's got better game mechanics, graphics, story, humor and overall fun factor. The game has literally improved every time I've played it for the last year and a half or so. Diablo 2 wasn't a bad game by any means, but even in comparison to that, Grim Dawn is still head and shoulders a superior ARPG. It has so many great things going for it; I wish your typical EQ/WoW class-based MMOs were more like it, in fact.


----------



## medi01 (Mar 22, 2016)

FRAGaLOT said:


> ...since the competition is so far behind...


Seriously?


----------



## Frick (Mar 22, 2016)

InVasMani said:


> It's better than any of the Diablo games, really; it doesn't matter which one, seeing as it's got better game mechanics, graphics, story, humor and overall fun factor. The game has literally improved every time I've played it for the last year and a half or so. Diablo 2 wasn't a bad game by any means, but even in comparison to that, Grim Dawn is still head and shoulders a superior ARPG. It has so many great things going for it; I wish your typical EQ/WoW class-based MMOs were more like it, in fact.



I won't take your word for it and I make no apologies for it.


----------



## Prima.Vera (Mar 22, 2016)

InVasMani said:


> It's better than any of the Diablo games, really; it doesn't matter which one, seeing as it's got better game mechanics, graphics, story, humor and overall fun factor. The game has literally improved every time I've played it for the last year and a half or so. Diablo 2 wasn't a bad game by any means, but even in comparison to that, Grim Dawn is still head and shoulders a superior ARPG. It has so many great things going for it; I wish your typical EQ/WoW class-based MMOs were more like it, in fact.


Listen man. Going off topic now, but the only thing Grim Dawn has better than Diablo 2 is the graphics. There is no humanly possible way to even compare the gameplay, story, music, overall atmosphere and the rest with any other clones released afterwards... Including the *PIECE OF SHIT* called Diablo 3.


----------



## Frick (Mar 22, 2016)

It's probably good though. It's supposed to be made by many of the people working on Titan Quest which is the best Diablo clone after Diablo 2 IMO.


----------



## PP Mguire (Mar 22, 2016)

medi01 said:


> Seriously?


I like how you keep quoting this image like anybody gives a shit. TPU's own benchmarks put the Fury X behind the Titan X and 980ti in all 3 top resolutions. So you can quit spamming the FUD.


----------



## medi01 (Mar 22, 2016)

PP Mguire said:


> TPU's own benchmarks


Include "we didn't get a penny from nvidia (c) gameworks" games (the marvelous PrCa in particular)



PP Mguire said:


> ...put the Fury X behind the Titan X and 980ti in all 3 top resolutions...



ORLY? Ah, I see, 102% vs 102%, still, one is behind the other, true.






that chart surely debunks this as FUD, doesn't it? I mean, find one difference:






And clearly, Fury X is, to quote the original statement you had no problem with "so far behind".


----------



## EarthDog (Mar 22, 2016)

PP Mguire said:


> Blops 3 actually scales to your VRAM and system RAM then uses all that it can. It can run smooth on a 4GB card but on my machine will utilize around 11.5GB.


Just curious where you see 11.5GB... is it MSI AB? The reason I ask is because in BF4 with a 295x2, I was seeing RAM use at almost 7GB (Ultra 2560x1440 + 30% resolution scaling). That said, that was 3.5GB per card... I am wondering if you are sitting at 5.75GB for each card.


----------



## PP Mguire (Mar 22, 2016)

medi01 said:


> Include "we didn't get a penny from nvidia (c) gameworks" games (the marvelous PrCa in particular) and don't cover multi GPU configurations.
> 
> And even if not, original statement was there was no competition in GPU market ("so far behind"), which itself is FUD regardless of which card has several % advantage, yet that wasn't a problem for you, but you absolutely had to call legit results a "FUD". Which item in that list is "FUD"? pretty please? Can't say? Oh, how freaking surprising...
> 
> ...


Oh wow, so you're trying to say TPU gets paid for their results?? 

Firstly, you're looking at a benchmark put on WCCF, which is known to spread all kinds of FUD. So yeah, calling that right there. Second, the single-card results don't even match up either, meaning the others won't, and sites like Hard, Guru, and TechSpot also support that fact. 

Third, you didn't address the initial post properly. I said, and I quote, "you keep spamming this like anybody gives a shit" because anybody worth their salt looking at these threads knows the card lineup AND knows the picture you keep posting constantly is, in fact, FUD. 



EarthDog said:


> Just curious where you see 11.5GB... is it MSI AB? The reason I ask is because in BF4 with a 295x2, I was seeing RAM use at almost 7GB (Ultra 2560x1440 + 30% resolution scaling). That said, that was 3.5GB per card... I am wondering if you are sitting at 5.75GB for each card.


MSI AB and eVGA OSD, not with each card, that was running single. I only recently put the second card back in my machine and Blops 3 came out a while ago.


----------



## medi01 (Mar 22, 2016)

PP Mguire said:


> TPU gets paid


No, I thought you knew the favourite quote by PrCa dev.




PP Mguire said:


> Second, the single doesn't even match up either


It perfectly matches up with the TPU results. (Were you bad at math at school?)



PP Mguire said:


> like anybody gives a shit


That's just your opinion, and I value it. Very much.



PP Mguire said:


> ...picture you keep posting constantly is in fact, FUD.


Except you failed to support that statement and failed to comprehend TPU charts showing the opposite.


----------



## PP Mguire (Mar 22, 2016)

medi01 said:


> No, I thought you knew the favourite quote by PrCa dev.
> 
> 
> 
> ...


Maybe you need glasses bud.

TPU shows the Fury X at 100 and the 980ti at 102 without exclusions, while the FUD shows 97% for both. Now, let's go out on a limb here and say exclusions are ok and that the 102/102 and 97/97 are matching, sure, but what about Titan X? Can you not count, chief? Titan X is 4% higher than the matching, and 6% higher than the actual Fury X figure. Oh, those FUD #s also differ from the above-mentioned Crossfire benchmarks, or do I need to quote those for you so you know what to Google? I'm not doing the work for you. And it still doesn't refute my *original* statement. It's not my opinion; you get ignored practically every time you post that chart, until now.


----------



## medi01 (Mar 22, 2016)

PP Mguire said:


> Maybe you need glasses bud.


Maybe you need to work on reading comprehension.

The TPU site shows both the 980Ti and Fury X at 102% if "we didn't get a penny from nVidia (c) Project Cars" is excluded. Oh, and so it is in the DGLee review (which lists exactly which games it tested, and "we didn't get a penny from nVidia" clearly isn't one of them); same results too.

Now Titanium X (single card vs single card) is merely 4% faster than Fury X on the TPU site, 3% faster in the DGLee review, which is well within error margins, especially taking into account that the set of tested games is actually different.

Now, the fact that Crossfire scales better than SLI should be known to a person who visits tech review sites regularly (unless most time is spent sharing with people your valuable opinion on who gives a flying f*ck about what), so it looks not even remotely suspicious.

PS
Good Lord, just realized what you meant by "ahaaaa, on one site both are 97% while on the other they are at 102%". Jeez. Think about what 100% is in both cases.
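The baseline point is plain arithmetic. A quick sketch (the FPS figures below are hypothetical, chosen only to show how the same raw results produce different percentages under different 100% baselines):

```python
# Two charts can disagree on every percentage yet agree on the ranking:
# each one normalizes the same raw FPS averages to a different 100% card.
# All FPS numbers here are hypothetical, purely to illustrate the arithmetic.

fps = {"980": 50.0, "Fury X": 51.0, "980 Ti": 51.0, "Titan X": 53.0}

def relative(scores, baseline):
    """Express every score as a rounded percentage of the chosen baseline card."""
    return {card: round(100 * v / scores[baseline]) for card, v in scores.items()}

# Normalized to the 980, both contenders land at 102%...
chart_a = relative(fps, "980")
# ...normalized to the Titan X, the very same cards both show 96%.
chart_b = relative(fps, "Titan X")

assert chart_a["Fury X"] == chart_a["980 Ti"] == 102
assert chart_b["Fury X"] == chart_b["980 Ti"] == 96
```

Different percentages, identical underlying data; the only thing that changed is which card counts as 100%.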


----------



## EarthDog (Mar 22, 2016)

Jesus, and Bill Bright isn't even a part of this thread...?!!!

Take it to PM guys... if I wanted to see/read/hear incessant bitching, I would go home to your wives.  


BOOM.


----------



## rtwjunkie (Mar 22, 2016)

EarthDog said:


> Jesus, and Bill Bright isn't even a part of this thread...?!!!
> 
> Take it to PM guys... if I wanted to see/read/hear incessant bitching, I would go home to your wives.
> 
> ...








 
Maybe that will work better than my plea earlier.


----------



## EarthDog (Mar 22, 2016)

I tried to lighten the mood... even if that garners me an infraction for busting their chops. Worse than me........ an incredible feat!


----------



## arbiter (Mar 22, 2016)

medi01 said:


> Seriously?





medi01 said:


> And clearly, Fury X is, to quote the original statement you had no problem with "so far behind".


Problem with that graph is it's pretty deceptive. A good chunk of the % that puts Fury ahead is one game, and that one game was an AMD-sponsored title, so take that outta the mix and they are pretty close to dead even.


----------



## medi01 (Mar 23, 2016)

arbiter said:


> Problem with that graph is it's pretty deceptive. A good chunk of the % that puts Fury ahead is one game, and that one game was an AMD-sponsored title, so take that outta the mix and they are pretty close to dead even.


Which one game is that? (and dead even in which combination?)


----------



## EarthDog (Mar 23, 2016)

Ok, not to jump into this mess, but I do have a question about that graph... how can one tell from it the performance of individual games, particularly at the single and dual card level? It seems like the point of this graph is to show GPU scaling overall, and it's awfully difficult to tell who is winning what at dual and below. 

With that, across their testing, the Titan X is 3% faster than the 980Ti and Fury X with a single card...


----------



## medi01 (Mar 23, 2016)

EarthDog said:


> how can one tell from it the performance of individual games?



How can you tell the performance of individual games on the TPU chart?













There is only so much information you can squeeze into a bar. In DGLee's chart I can see that, say, the Fury X from Crossfire onward is faster than Titanium X in Alien Isolation, or Metro 2033 and Metro Last Light.


----------



## EarthDog (Mar 23, 2016)

medi01 said:


> How can you tell the performance of individual games on the TPU chart?
> 
> 
> Spoiler
> ...


By looking at the individual games on the pages prior? Did you link this Asian website you sourced that graph from earlier in your back and forth, which shows individual game performance?

Just saying that, with a single card, it's nearly impossible to tell. As you use more cards, you can see more of a difference between games. In the end, it's a terrible way, particularly with a single card, to discern that data from the graph.


----------



## medi01 (Mar 23, 2016)

EarthDog said:


> By looking at the individual games on the pages prior?


No, from the summary chart alone. For some games it's easy to tell, while others are hard.



EarthDog said:


> its a terrible way..


Providing more information (you don't see anything at all on TPU summary chart) is definitely not "a terrible way", and if you thought that individual charts are hidden, scroll down:
http://wccftech.com/amd-radeon-r9-f...ia-geforce-gtx-titan-quad-sli-uhd-benchmarks/


----------



## EarthDog (Mar 23, 2016)

I didn't think anything, medi... I saw this single graph and no links until now.

...that link shows the different resolutions, not games (the TPU reviews DO show individual game performance).

Again, with a single card it's nearly impossible to tell, but as the number of cards goes up, you can tell.


----------



## xfia (Mar 23, 2016)

just be smart and never read anything from them.. they lie all the time and it starts crap a lot worse than this.


----------



## medi01 (Mar 24, 2016)

EarthDog said:


> that link shows the different resolutions, not games



No, it shows games (and benchmarks, 3DMark isn't a game, right) at different resolutions:










etc.



EarthDog said:


> I saw this single graph and no links until now....


I posted link earlier, post number 125 in this thread.




xfia said:


> just be smart and never read anything from them


FFS, bloody results MATCH RESULTS OF TPU...


----------



## Legacy-ZA (Mar 24, 2016)

Found a nice video on Linustechtips for people that say VRAM doesn't matter. ;-)









Just an example; I have a 4GB card. When I run Rise of the Tomb Raider, can my GPU handle it? Yes. Can my amount of VRAM handle it? No. When I run the benchmark, my FPS drops from 50 to 1 depending on what new sections need to be loaded, not to mention the texture pop-ins. I suppose there are people that haven't run into these problems because they can always afford the best; perhaps they will stay unconvinced since they won't run into the problem, or only rarely. Exactly the same problem that Linus had with Shadow of Mordor.

Perhaps some of you will remember when Battlefield 4 launched: most people that still had 2GB VRAM cards complained that they had a lot of stuttering and weird graphical pop-in issues etc. When they replaced them with newly released nVidia GFX cards with 4GB VRAM, no more complaints. How weird, huh?

I hope this video will also help convince those that weren't convinced and those that had doubts that they really should understand that *VRAM MATTERS!*


----------



## InVasMani (Mar 28, 2016)

You can generally lower settings or resolution to a point if you have less VRAM. It's one of those things where it most definitely can matter and impact frame rates, depending on the former. If you run out of VRAM, having enough system memory at a faster speed can help minimize the negative impact of inadequate VRAM, up to a point. It's all a system of balance, from high to low and fast to slow; it's harder to juggle more balls at once.


----------



## Prima.Vera (Mar 28, 2016)

Legacy-ZA said:


> I hope this video will also help convince those that weren't and those that had doubts that they really should understand, that *VRAM MATTERS!*



Actually that video just clearly explains that there is no clear answer and it all depends on the game and resolution mostly...

P.S.

I still firmly believe that 3GB of VRAM is more than enough for running a game in 1080p with Ultra (Max) details and SMAA antialiasing. So far there is no game I've had issues with...


----------



## Legacy-ZA (Mar 28, 2016)

Prima.Vera said:


> Actually that video just clearly explains that there is no clear answer and it all depends on the game and resolution mostly...
> 
> P.S.
> 
> I still firmly believe that 3GB of VRAM is more than enough for running a game in 1080p with Ultra (Max) details and SMAA antialiasing. So far there is no game I've had issues with...



It's true, it depends on the situation. All I am saying; I have run into the problem way more often than I would like and it's better to have more than less.


----------



## medi01 (Apr 5, 2016)

"up to 70% faster in....  in CUDA Deep Neural Network Workloads"


http://wccftech.com/nvidia-pascal-gpu-deep-neural-ai-performance/


----------



## oinkypig (Nov 14, 2017)

When does Volta get released?


----------



## EarthDog (Nov 14, 2017)

Dead thread... There is a Volta thread around here... but it should be out in 2H+ 2018. 

.... and funny how off the mark the news was too.


----------



## Fluffmeister (Nov 14, 2017)

I love bumped necros, seems not everyone got X = 10.


----------



## EarthDog (Nov 14, 2017)

btarunr said:


> Instead of going with the GTX 1000 series that has one digit too many


----------

