# New GTX TITAN-Z Launch Details Emerge



## btarunr (May 1, 2014)

NVIDIA's GeForce GTX TITAN-Z missed its earlier launch date of 29th April, 2014, which had been confirmed to the press by several retailers, forcing some AIC partners to make do with paper launches of cards bearing their brand. It turns out the delay will be just a little over a week. The GeForce GTX TITAN-Z is now expected to be available on the 8th of May, 2014. That is when you'll be able to buy the US $3,000 graphics card off the shelf.

A dual-GPU graphics card based on a pair of 28 nm GK110 GPUs, the GTX TITAN-Z features a total of 5,760 CUDA cores (2,880 per GPU), 480 TMUs (240 per GPU), 96 ROPs (48 per GPU), and a total of 12 GB of GDDR5 memory, spread across two 384-bit wide memory interfaces. Although each of the two GPUs is configured identically to a GTX TITAN Black, the card runs at lower clock speeds. The core is clocked at 705 MHz (889 MHz on the GTX TITAN Black), with GPU Boost frequencies of up to 876 MHz (up to 980 MHz on the GTX TITAN Black), while the memory stays at 7.00 GHz. The card draws power from a pair of 8-pin PCIe power connectors, and its maximum power draw is rated at 375 W. It will be interesting to see how it stacks up against AMD's Radeon R9 295X2, which costs half as much, at $1,500.
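As a rough sanity check, the headline throughput implied by these specs follows from cores × clock × 2 FLOPs per cycle (one fused multiply-add). The sketch below uses only the numbers quoted above, plus GK110's 1/3-rate FP64, and is illustrative rather than a vendor figure:

```python
def tflops(cores, clock_mhz, ops_per_clock=2):
    """Theoretical throughput: cores * clock * FLOPs per clock (FMA counts as 2)."""
    return cores * clock_mhz * 1e6 * ops_per_clock / 1e12

base_sp  = tflops(5760, 705)   # ~8.12 TFLOPS FP32 at the 705 MHz base clock
boost_sp = tflops(5760, 876)   # ~10.09 TFLOPS FP32 at maximum boost
base_dp  = base_sp / 3         # GK110 runs FP64 at 1/3 the FP32 rate -> ~2.71 TFLOPS
```

The ~8.1 TFLOPS base-clock result lines up with the commonly quoted 8 TFLOPS single-precision figure for the card.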





*View at TechPowerUp Main Site*


----------



## matar (May 1, 2014)

I am an NVIDIA fan, but not this time: $3,000 for 3 slots, so basically $1,000 per slot.


----------



## johnspack (May 1, 2014)

Again,  why...  I know we are a performance-minded community...  why not just two 780 Tis?  I know,  no high-end DP performance for CAD work.......


----------



## MxPhenom 216 (May 1, 2014)

I would much rather have seen Nvidia release a GTX 790 to take on the 295X2 than this card, unless Nvidia is working on one too. In gaming I expect this card to be killed by the 295X2, but anyone who buys this card strictly for gaming might want to reevaluate their life.


----------



## GreiverBlade (May 1, 2014)

MxPhenom 216 said:


> I would much rather have seen Nvidia release a GTX 790 to take on the 295X2 than this card, unless Nvidia is working on one too. In gaming I expect this card to be killed by the 295X2, but anyone who buys this card strictly for gaming might want to reevaluate their life.



well if you expect that card to be killed by the 295X2, then the 790 would have been the same ... afaik a Titan Black is equal to a 780 Ti plus DP compute (and the Titan Black has a higher clock, while the Z will have a lower one). buying 2 or 3 780 Tis is more efficient (price and frequencies), even on air cooling (for gaming ofc), unless the 790 were priced under $1,500.

but for the last part i am totally following you





matar said:


> I am an NVIDIA fan, but not this time: $3,000 for 3 slots, so basically $1,000 per slot.


exactly what i thought when they released the info on the price


----------



## MxPhenom 216 (May 1, 2014)

GreiverBlade said:


> well if you expect that card to be killed by the 295X2, then the 790 would have been the same ... afaik a Titan Black is equal to a 780 Ti plus DP compute (and the Titan Black has a higher clock, while the Z will have a lower one). buying 2 or 3 780 Tis is more efficient (price and frequencies), even on air cooling (for gaming ofc), unless the 790 were priced under $1,500.
> 
> but for the last part i am totally following you
> ...



Maybe, but the clock speeds are butchered on this Titan Z. I was hoping they would change some things during the delay to get higher clock speeds.


----------



## GreiverBlade (May 1, 2014)

MxPhenom 216 said:


> Maybe, but the clock speeds are butchered on this Titan Z. I was hoping they would change some things during the delay to get higher clock speeds.


indeed, that's what i said in my post, but a 790 would also be on 2x 8-pin, and if they went with 3-slot air cooling for a hypothetical 790, the 780 Ti clocks would also be butchered.

i guess lower clocks are what you get with an air cooling solution and a TDP of 375 W; maybe with 3x 8-pin ... but again the price point of the 295X2 is the main force, and if you take an OC'd 295X2 like the Sapphire, even in GFLOPS the Z is behind... i would love to see a 790 (Ti), but for the moment the 295X2 is my favorite, even if the hybrid cooler is a little hindrance due to the space needed for the radiator; that only matters for users who want two of them, and with only one i would be happy
 
500 W versus 375 W, well ... for power efficiency the Z holds the lead, but at what price ...


----------



## Suka (May 1, 2014)

MxPhenom 216 said:


> Maybe, but the clock speeds are butchered on this Titan Z. I was hoping they would change some things during the delay to get higher clock speeds.


Apparently they aren't going after the 295X2's performance crown with the Titan Z. Maybe that would be a 790, but then if they have to cool two GK110 chips, wouldn't they need water cooling as well, or would they go with a dual- or triple-fan cooler? I'm assuming they would clock the chips at ~1 GHz boost.


----------



## dj-electric (May 1, 2014)

So guys... What happened to the ASUS Launch PR?


----------



## HumanSmoke (May 1, 2014)

GreiverBlade said:


> ... i would love to see a 790(Ti)


Not me.
If the whole idea is a PR stunt for the halo, then Nvidia should just go nuts and pay no heed to the PCI-SIG. Supply enough power (3x 8-pin), add in some binned chips to guarantee 1150-1200 MHz boost, and be done with it. Maybe take a leaf out of AMD's book and bundle the card with a couple of 740QCs and a 240 mm rad just for sh*ts and giggles.
Personally, I find dual cards a waste of time and resources. Benchmark results (and subsequent conclusions) are heavily dependent upon a working driver profile for SLI/CrossFireX in the chosen games, and when it doesn't work, disabling one GPU is a limited fix for an expensive cash outlay. If you're looking at productivity/content creation applications that don't leverage SLI/CFX, then that's all good, but it still represents a minuscule number of potential users that require a precise feature set.

I'm guessing overclocking two 6 GB 780s will net a better experience, more so when you factor in the $1,860 saving over the Titan Z.


----------



## GreiverBlade (May 1, 2014)

HumanSmoke said:


> Not me.
> If the whole idea is a PR stunt for the halo, then Nvidia should just go nuts and pay no heed to the PCI-SIG. Supply enough power (3x 8-pin), add in some binned chips to guarantee 1150-1200 MHz boost, and be done with it. Maybe take a leaf out of AMD's book and bundle the card with a couple of 740QCs and a 240 mm rad just for sh*ts and giggles.
> Personally, I find dual cards a waste of time and resources. Benchmark results (and subsequent conclusions) are heavily dependent upon a working driver profile for SLI/CrossFireX in the chosen games, and when it doesn't work, disabling one GPU is a limited fix for an expensive cash outlay. If you're looking at productivity/content creation applications that don't leverage SLI/CFX, then that's all good, but it still represents a minuscule number of potential users that require a precise feature set.
> 
> I'm guessing overclocking two 6 GB 780s will net a better experience, more so when you factor in the $1,860 saving over the Titan Z.



i rephrase ... i would love to see a 790 (Ti) under $1,500, otherwise it's a no-go!

i can't help but agree that dual-GPU cards are a waste... even if i like to see them; ofc i would be happy with one 295X2 in my SG09B, but an R9 290/290X would be more than enough already.


----------



## LAN_deRf_HA (May 1, 2014)

Between the clock speed, deflated cooler, and price, this has turned out to be a pretty shitty product. Hopefully it does poorly enough that they don't try this again any time soon. Of course, people had hoped that would be the case for the Titan, but everybody bought that up, so now we're stuck with $1,000 cards being considered acceptable.


----------



## 64K (May 1, 2014)

GreiverBlade said:


> i rephrase ... i would love to see a 790 (Ti) under $1,500, otherwise it's a no-go!
> 
> i can't help but agree that dual-GPU cards are a waste... even if i like to see them; ofc i would be happy with one 295X2 in my SG09B, but an R9 290/290X would be more than enough already.



You probably will see a GTX 790 but I doubt it will be under $1,500.


----------



## lZKoce (May 1, 2014)

Come on, people, can't you just enjoy it already? It's not like it's your investment to release the card. Judging by the amount of comments the Titan Z is causing on the net, it has already paid for itself.


----------



## Sihastru (May 1, 2014)

So much hate.

I understand the arguments.

CON *#1*: It creates a precedent. A really expensive card, like we never had before. It creates a new (new-new) price bracket for very (very-very) high end cards. We don't want that.

REPLY to CON #1: For every Koenigsegg, there are millions of affordable cars of all sizes and for all purposes. You're not required to buy the Koenigsegg. You can buy the half price Nissan GTR and still go as fast as the speed limit.

CON *#2*: It's stupid. It makes no sense. I can get the INSERT-NAME-HERE card for half the price, or I can get two of the INSERT-ANOTHER-NAME-HERE for even less.

REPLY to CON #2: So what? If we don't push the limits, how will we ever get ahead? You're not required to buy into the crazy-bonkers ragged-edge products.

CON *#3*: It's not as great for mining as the INSERT-NAME-HERE! AMD FTW!

REPLY to CON #3: The world is not just about mining. NVIDIA created an ecosystem in the professional market and they can now get a nice return on that investment. For the professional market, it's a bargain card. If AMD wants in, they must work at it. It's irrelevant that an AMD card is good at certain compute tasks if the companies that write the software for the professional market do not care. And there are good reasons for them not to care, one of the most important ones is that AMD always offloads almost everything to INSERT-COMPANY-NAME-HERE.

The most recent example of this is Mantle. AMD created a lot of buzz, but in reality it offloaded the actual work to the game developers. The same with 3D display technology: AMD offloaded the work to some company and then buzzed it up with the words "free" and "open source". And there are many other examples...

CON *#4*: Nobody will buy the card. If you buy the card, you're stupid! NVIDIA is stupid! Titan-Z is stupid!

REPLY to CON #4: I hear this argument a lot. People who can afford expensive things are for some reason all stupid. Well... maybe some of them are, but most of them are smarter than most people. And a lot of people will buy this card. They will.

Crazy and stupid are not the same thing.

CON *#5*: Why would NVIDIA create such a product? The same was asked when Titan came along.

REPLY to CON #5: Because people will buy it. This kind of purchase can be justified in many ways. If you have the money, 'because I can' is enough. NVIDIA thanks you.

CON *#6*: Whatever.

REPLY to CON #6: Whatever.


----------



## Sony Xperia S (May 1, 2014)

Oh, come on, don't get silly. Nvidia will NEVER thank you; they are the big evil laughing in the corner at anyone who is not smart enough to value their money and throws it away instead. Nvidia DOESN'T need your money, it has enough!

The car comparison is nonsense too. I would call the R9 295X2 a Bugatti Veyron, and this one would be a cheaper Nissan in real value.

That said, I will buy the card only if it is fairly priced at around 1000 bucks. If you are greedy and want more, no deal.


----------



## 64K (May 1, 2014)

Sihastru said:


> So much hate.
> 
> 
> REPLY to CON #3: The world is not just about mining. NVIDIA created an ecosystem in the professional market and they can now get a nice return on that investment. For the professional market, it's a bargain card. If AMD wants in, they must work at it. It's irrelevant that an AMD card is good at certain compute tasks if the companies that write the software for the professional market do not care. And there are good reasons for them not to care, one of the most important ones is that AMD always offloads almost everything to INSERT-COMPANY-NAME-HERE.



Nvidia is actively causing a lot of this uproar on the net concerning the Titan Z. This is a GeForce card (a gaming card) and look at what they are saying on their GeForce website.....

http://www.geforce.com/whats-new/articles/announcing-the-geforce-gtx-titan-z

If this card makes more sense than 2x Titan Black to professionals, then spend the extra money; but as long as Nvidia aims this card at gamers for $3,000, you will continue to see comments like the above posts all over the net from gamers.


----------



## HM_Actua1 (May 1, 2014)

The dumbest, most nonsensical ploy by Nvidia I've ever seen.......

This card and its price tag make ZERO SENSE!

What people should really understand and know is that the "12 GB" of VRAM is really 6 GB.

They're doing what they did with the 690, splitting its advertised 4 GB across the two GPUs.

So in the Z's case, 6 GB goes to each GPU, which means you really only get 6 GB of usable VRAM.

I'm a stout Nvidia/Intel user, but this saddens me to see such stupidity.


----------



## MxPhenom 216 (May 1, 2014)

Hitman_Actual said:


> The dumbest, most nonsensical ploy by Nvidia I've ever seen.......
> 
> This card and its price tag make ZERO SENSE!
> 
> ...



I think everyone here, and everyone even remotely interested in the card, knows that. It's SLI on the same PCB. Each GPU gets its own 6 GB pool of memory and 384-bit bus. 

They are doing what they HAVE done with every dual-GPU card they have made; AMD does the same thing. 

Found a screenshot of the PCB. One crowded PCB; scares me like the GTX 590 PCB did.


----------



## HM_Actua1 (May 1, 2014)

MxPhenom 216 said:


> I think everyone here, and everyone even remotely interested in the card, knows that. It's SLI on the same PCB. Each GPU gets its own 6 GB pool of memory and 384-bit bus.
> 
> They are doing what they HAVE done with every dual-GPU card they have made; AMD does the same thing.
> 
> Found a screenshot of the PCB. One crowded PCB; scares me like the GTX 590 PCB did.


I wouldn't go as far as to say everyone knows that, but who knows.

Yeah, dual-GPU cards put WAY too much heat on a single PCB.


----------



## Scrizz (May 1, 2014)

maybe they should've gone the 9800 GX2 route?
LOL


----------



## xorbe (May 1, 2014)

2x Titan Black is 26.1% faster at base clock, but the Titan Z is 50% more expensive.  Does not compute for a gamer.
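xorbe's two percentages can be checked against the numbers in the article; the $999 Titan Black launch price is an assumption here, since it isn't stated in this thread:

```python
# Base-clock ratio, per-GPU clocks from the article: 889 MHz (Titan Black) vs 705 MHz (Titan Z)
speedup = 889 / 705 - 1           # ~0.261: two Titan Blacks are ~26.1% faster at base clock

# Price ratio, assuming the Titan Black's $999 launch price (not stated in the thread)
premium = 2999 / (2 * 999) - 1    # ~0.50: one Titan Z costs ~50% more than two Titan Blacks
```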


----------



## Fluffmeister (May 1, 2014)

If people choose to buy two Titan Blacks over a single Titan Z, then I really don't think Nvidia cares.


----------



## radrok (May 1, 2014)

MxPhenom 216 said:


> I think everyone here, and everyone even remotely interested in the card knows that. Its SLI on same PCB. Each GPU gets their own 6GB pool of memory, and 384 bit bus.
> 
> They are doing what they HAVE done with every dual GPU they have made, AMD does the same thing.
> 
> Found a screen shot of the PCB. One crowded PCB, scares me like the GTX590 PCB.



Oh wow, that power delivery is almost worse than the original Titan's, ha ha ha, sorry, can't stop laughing.

For that price one would have hoped they'd equip it with a decent PCB. I'm sorry, but ATI has always been superior; look at the "overkillness" they slapped onto the 295X2.


----------



## MxPhenom 216 (May 1, 2014)

radrok said:


> Oh wow, that power delivery is almost worse than the original Titan's, ha ha ha, sorry, can't stop laughing.
> 
> For that price one would have hoped they'd equip it with a decent PCB. I'm sorry, but ATI has always been superior; look at the "overkillness" they slapped onto the 295X2.



It's not entirely about the number of phases, but also the capacity each phase is rated for.

I expect that with dropping one phase per GPU, the rest are rated a bit higher to compensate, but who knows.


----------



## Ferrum Master (May 1, 2014)

radrok said:


> ... I'm sorry, but ATI has always been superior; look at the "overkillness" they slapped onto the 295X2.



Be more mature...

Nevertheless, both designs look underpowered. The Titan Too-Many-Zeroes and the 295X2 Celsius are failures design-wise... they are utterly useless for the given price, R&D cost and the rest... each is just a box ticked, like the ones we've had before...


----------



## radrok (May 1, 2014)

Ferrum Master said:


> Be more mature...
> 
> Nevertheless, both designs look underpowered. The Titan Too-Many-Zeroes and the 295X2 Celsius are failures design-wise... they are utterly useless for the given price, R&D cost and the rest... each is just a box ticked, like the ones we've had before...



Can't see any childish remark in my post; I think it's genuinely positive for a customer to ask for a decent power section on a graphics card of this caliber.

For 3K USD I expect NOTHING less than overkill.



MxPhenom 216 said:


> It's not entirely about the number of phases, but also the capacity each phase is rated for.
> 
> I expect that with dropping one phase per GPU, the rest are rated a bit higher to compensate, but who knows.



I honestly think they won't be rated any higher than what Nvidia has been using on the reference 780/Titan/780 Ti; they are almost all the same.

I bet this GPU will blow when pushed past 110% TDP, but hey, we shouldn't overclock our GPUs, right?


----------



## Ferrum Master (May 1, 2014)

radrok said:


> For 3K USD I expect NOTHING less than overkill.



Well mate... it reminds me of this. And the second thing... it is just the way things work... They do it because they CAN.


----------



## lZKoce (May 2, 2014)

radrok said:


> Can't see any childish remark in my post; I think it's genuinely positive for a customer to ask for a decent power section on a graphics card of this caliber.



I do. I don't know if you haven't noticed or pretend not to notice, but nVidia has been smacking some pretty impressive power efficiency numbers in AMD's face. Nothing personal; just pick any Maxwell-based card (the 750 Ti, for example: 4 W idle, 5 W multi-monitor, etc., etc.). I think they know what they are doing with power; at least they have some pretty serious testimony for it. Of course, nobody is bullet-proof against error, but I personally trust these guys.


----------



## radrok (May 2, 2014)

lZKoce said:


> I do. I don't know if you haven't noticed or pretend not to notice, but nVidia has been smacking some pretty impressive power efficiency numbers in AMD's face. Nothing personal; just pick any Maxwell-based card (the 750 Ti, for example: 4 W idle, 5 W multi-monitor, etc., etc.). I think they know what they are doing with power; at least they have some pretty serious testimony for it. Of course, nobody is bullet-proof against error, but I personally trust these guys.



Do you realize that we are talking about the power delivery section, not power consumption? Those are two completely different things.


----------



## HumanSmoke (May 2, 2014)

lZKoce said:


> I do. I don't know if you haven't noticed or pretend not to notice, but nVidia has been smacking some pretty impressive power efficiency numbers in AMD's face. Nothing personal; just pick any Maxwell-based card (the 750 Ti, for example: 4 W idle, 5 W multi-monitor, etc., etc.). I think they know what they are doing with power; at least they have some pretty serious testimony for it. Of course, nobody is bullet-proof against error, but I personally trust these guys.


Unfortunately this board (the Titan Z) isn't Maxwell... and people looking at the top of the performance hierarchy tend to be happy for efficiency to play second fiddle to outright performance.

The Titan Z seems to fall into the chasm between usability and outright performance. Nvidia obviously tried to squeeze as much into a conventional air-cooled card as possible, but it falls short against the competition. AMD have shown in the past that they don't have any qualms about ignoring the PCI-SIG (the HD 6990 and 7990), but it's unlikely that Nvidia expected AMD to put out the first 500-watt reference card, or the first water-cooled reference card for that matter. In this instance (the top of the model line) brute force trumps efficiency, and Nvidia will be pilloried for being too conservative even if they relaunch the card sans FP64 as a GTX 790. Having said that, I fully expect both cards to see the short, intermittent production runs and free-falling depreciation enjoyed by their dual-GPU predecessors.

The sad thing is that one camp has a $3,000 card, and the other camp has a 500-watt card. I'm not entirely sure we're heading in the right direction.


----------



## Prima.Vera (May 2, 2014)

Can I ask a stupid question? Isn't it just 10 times better to buy 2x GTX 780 Ti cards for $1,500 and have 4 slots taken, instead of buying one card for $3,000 and having 3 slots taken, but with 20% LESS performance?!? I mean, seriously, what's the deal with this card???? For professional use there are better cards for the same price. I mean, is it only me, or does this card seem an abomination!?


----------



## radrok (May 2, 2014)

To be fair, your question is not stupid at all; I feel the same about it.

Two Titan Blacks for DP make it obsolete; two 780 Tis for gaming make it obsolete.

It's just a hype/halo w/e product.


----------



## 64K (May 2, 2014)

Prima.Vera said:


> Can I ask a stupid question? Isn't it just 10 times better to buy 2x GTX 780 Ti cards for $1,500 and have 4 slots taken, instead of buying one card for $3,000 and having 3 slots taken, but with 20% LESS performance?!? I mean, seriously, what's the deal with this card???? For professional use there are better cards for the same price. I mean, is it only me, or does this card seem an abomination!?



No, your question is not in any way stupid. Gamers are trying to figure out why Nvidia is calling this a GeForce card and aiming it at gamers, and though I have no experience with professional cards, I have to wonder: why wouldn't 2x Titan Black, for less money and more performance, be better? As a gamer, the Titan Z would never have come across my radar if Nvidia hadn't labeled it GeForce and aimed it at gamers.

http://www.geforce.com/whats-new/articles/announcing-the-geforce-gtx-titan-z

http://blogs.nvidia.com/blog/2014/03/25/titan-z/


----------



## BiggieShady (May 3, 2014)

Prima.Vera said:


> I mean, seriously, what's the deal with this card???? For professional use there are better cards for the same price. I mean, is it only me, or does this card seem an abomination!?



Having 2 GPUs on a single PCB is actually a good thing for compute professionals who need double-precision performance but want a cheaper alternative to an array of Tesla cards. 
There is no requirement for SLI in compute work, so with PCIe risers one can build a massive GPU array while reducing the overall number of systems (fewer CPUs, motherboards and PSUs needed) on site.
This card is for companies that are building their own supercomputer.
Additionally, marketing it as a GeForce product because it runs games beautifully: nvidia would be crazy not to. Promote synergy like a boss and all that.


----------



## Xzibit (May 3, 2014)

BiggieShady said:


> Having 2 GPUs on a single PCB is actually a good thing for compute professionals who need double-precision performance but want a cheaper alternative to an array of Tesla cards.
> There is no requirement for SLI in compute work, so with PCIe risers one can build a massive GPU array while reducing the overall number of systems (fewer CPUs, motherboards and PSUs needed) on site.
> This card is for companies that are building their own supercomputer.
> Additionally, marketing it as a GeForce product because it runs games beautifully: nvidia would be crazy not to. Promote synergy like a boss and all that.



Not a very smart company if they're building a supercomputer from a gaming product with no ECC and half the memory per GPU.  Kind of takes the super out of it altogether.


----------



## BiggieShady (May 3, 2014)

Xzibit said:


> Not a very smart company if they're building a supercomputer from a gaming product with no ECC and half the memory per GPU.



Well, you do get what you pay for ... a single Tesla K40 is 5.5K USD ... and that's one Kepler GPU.


----------



## DrunkMonk74 (May 3, 2014)

Anyone think that when nVidia's partners get their hands on this card, they'll be able to squeeze some more juice out of it? Maybe along the lines of the ACX cooler you find on EVGA's version of the 780 Ti? 

If you look at EVGA's K|NGP|N edition of the 780 Ti, its base clock is 1072 MHz!! Rumour is that EVGA is also going to bring out a 6 GB version of that card. The current 3 GB version sells for $859.99 from EVGA themselves. A 6 GB version might be closer to $1,000; buy two of those, SLI them, and you'll have all the power you need for quite some time, and probably save yourself $1,000 along the way.


----------



## HumanSmoke (May 3, 2014)

DrunkMonk74 said:


> Anyone think that when nVidia's partners get their hands on this card, they'll be able to squeeze some more juice out of it? Maybe along the lines of the ACX cooler you find on EVGA's version of the 780 Ti?


If it gets any kind of treatment, then it should be a HydroCopper Classified solution. A custom air-cooled card along the lines of the KPE would certainly be possible, but would the sales and PR justify the development expenditure?
Judging by the initial testing, the card has some headroom. A 1050 MHz boost on an unheated board isn't _too_ bad, so either a waterblock or a reworked air cooler with larger fans, such as the ACX, would be a better bet for maintaining that kind of level without having to ramp the fan speed to max.


Xzibit said:


> Not very smart company if their building a supercomputer from a gaming product with no ECC and half the memory per GPU


ECC for GDDR5 isn't really needed unless the workload is of critical importance. GDDR5 already has EDC (Error Detection Code) built in, which detects errors across the memory bus. The only errors it can't check for are memory IC faults and GPU memory controller errors, both of which (along with GPU runtime validation) are stringently binned for to produce pro cards. It's why a K40 (as BiggieShady mentioned) costs 4-5 times the price of a Titan Black, and a W9100 costs 6 times as much as a 290X.


----------



## Xzibit (May 4, 2014)

HumanSmoke said:


> ECC for GDDR5 isn't really needed unless the workload is of critical importance. GDDR5 already has EDC (Error Detection Code) built in, which detects errors across the memory bus. The only errors it can't check for are memory IC faults and GPU memory controller errors, both of which (along with GPU runtime validation) are stringently binned for to produce pro cards. It's why a K40 (as BiggieShady mentioned) costs 4-5 times the price of a Titan Black, and a W9100 costs 6 times as much as a 290X.



I believe this is what he said



BiggieShady said:


> Having 2 GPUs on a single PCB is actually a good thing for compute professionals who *need double-precision performance but want a cheaper alternative to an array of Tesla cards*.
> There is no requirement for SLI in compute work, so with PCIe risers one can build a massive GPU array while *reducing the overall number of systems* (fewer CPUs, motherboards and PSUs needed) on site.
> *This card is for companies that are building their own supercomputer.*
> Additionally, marketing it as a GeForce product because it runs games beautifully: nvidia would be crazy not to. Promote synergy like a boss and all that.



Nvidia Titan Z 12 GB (6 GB per GPU)
single precision = 8.0 TFLOPS
2.66 TFLOPS per slot
*double precision = 2.6 TFLOPS
0.86 TFLOPS per slot*
375 W TDP
$2,999

I'll save you some money on Nvidia at the same site:

Nvidia Tesla K40 12 GB
single precision = 4.29 TFLOPS
2.14 TFLOPS per slot
*double precision = 1.43 TFLOPS
0.71 TFLOPS per slot*
235 W TDP
$4,245

Nvidia Quadro K6000 12 GB
single precision = 5.2 TFLOPS
2.6 TFLOPS per slot
*double precision = 1.7 TFLOPS
0.85 TFLOPS per slot*
225 W TDP
$4,235

AMD FirePro W9100 16 GB
single precision = 5.24 TFLOPS
2.62 TFLOPS per slot
*double precision = 2.62 TFLOPS
1.31 TFLOPS per slot*
275 W TDP
$3,499

You don't build supercomputers with a card from a gaming stack, unless uptime, errors and stability aren't a concern.
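For what it's worth, the double-precision-per-dollar figures implied by the list above can be tallied mechanically. The TFLOPS and prices are taken verbatim from the post, so this is a sketch of the quoted numbers, not of street pricing:

```python
# FP64 TFLOPS and quoted price (USD) from the list above
cards = {
    "Titan Z":       (2.6,  2999),
    "Tesla K40":     (1.43, 4245),
    "Quadro K6000":  (1.7,  4235),
    "FirePro W9100": (2.62, 3499),
}

# Double-precision GFLOPS per dollar, the metric this comparison hinges on
per_dollar = {name: round(tf * 1000 / usd, 2) for name, (tf, usd) in cards.items()}
# Titan Z comes out ahead at these list prices; street prices can shift the ranking
```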


----------



## HumanSmoke (May 4, 2014)

Xzibit said:


> I believe this is what he said


I was answering your post, not anyone else's; the fact that I quoted your post should have been a dead giveaway. Why you answered a post concerning double precision with a supposed need for ECC, I have no idea; they aren't inextricably linked.
As for whatever vague point you're making, there are plenty of instances where FP64 could be useful to a prosumer (mixed single- and double-precision workloads such as 3D modelling).


Xzibit said:


> Nvidia Titan Z 12 GB
> single precision = 8.0 Tflops
> *double precision = 2.6 Tflops*
> $2,999
> ...


So, judging by the bolding and price inclusion, you're saying that on double precision:
Titan Z...0.87 GFlop/$
W9100..0.75 GFlop/$
K6000...0.45 GFlop/$  (the card is available for $3,800)
K40.......0.34 GFlop/$

Not sure how the Tesla, Quadro, or FirePro are supposed to be "saving some money".

Of course, it's still an apples vs oranges scenario. Professional drivers, software (Nvidia's OptiX, SceniX, CompleX etc.), support, binning, and a larger frame buffer (the Titan Z isn't a 12 GB card, it's a 2x 6 GB card) should all add value to the pro boards regardless of vendor.

A further point to note is that Nvidia's FLOPS numbers are calculated at base clock (which is correct for double precision, since boost is disabled), not at boost, whether guaranteed minimum or maximum sustained, for single precision. The FLOPS for AMD's cards are calculated at maximum boost, whether that is attainable/sustainable or not.
Case in point: the GTX 780 is quoted as having a 3977 GFLOPS FP32 rate (863 MHz base clock * 2304 cores * 2 ops/clock). But GPGPU apps can be as intensive as games. The GTX 780 I have here at the moment has a rated boost of 967 MHz; on that basis the usual calculation would give 967 * 2304 * 2 = 4456 GFLOPS. In reality the card sustains a boost of 1085 MHz at stock settings (no extra voltage, no OC above factory, no change in the stock fan profile), so the actual FP32 rate would be 1085 * 2304 * 2 = 5000 GFLOPS.
A quick Heaven run shows how meaningless the base clock (and its associated numbers) is, and why it generally isn't worth the time to record.







Xzibit said:


> You don't build supercomputers with a card from a gaming stack.


Jesus, how many times are you going to edit a post?

It probably depends upon your definition of a supercomputer. If it's an HPC cluster, then no, you wouldn't... but that's a very narrow association used by people with little technical knowledge of the range of compute solutions.
Other examples:
The *Fastra II* is a desktop supercomputer designed for tomography.
Rackmount GPU servers also generally come under the same heading, since big iron generally tends to be made up of the same hardware... just add more racks to a cabinet... and more cabinets to a cluster... etc. etc.
I'd also note that they aren't "one-offs" as you opined once before, as explained here: "We build and ship at least a few like this every month".
Go nuts, configure away.


----------



## Suka (May 4, 2014)

matar said:


> I am a nVidia Fan but not this time $3000 for 3 slots so basically $1000 for each slot.


How much performance per slot?


----------



## radrok (May 4, 2014)

The GTX Titan also shines at CUDA; untapped CUDA power is nothing to overlook.

I've been abusing my two graphics cards for rendering, work and fun, especially now that you can purchase multiple render platforms that support CUDA rendering.

I personally use Octane CUDA render plugins (3ds Max and Poser) for my personal enjoyment when I'm free, and V-Ray CUDA for work.

You don't need a Tesla/Quadro card for CUDA rendering.

I would still purchase two separate GTX Titans (not Black, cause the difference is minimal) over this one. Save one slot for what?


----------



## BiggieShady (May 4, 2014)

Xzibit said:


> You don't build supercomputers with a card from a gaming stack. Unless uptime, errors and stability isn't a concern.



Look at it from the standpoint of a testing environment versus a production environment. If you own a software company and want to offer a solution for CUDA-based supercomputers, it is economically more feasible to develop on Titans and deploy on the customer's Tesla array.


----------



## sweet (May 4, 2014)

HumanSmoke said:


> I was answering your post, not anyone elses. The fact that I quoted your post should have been a dead giveaway. Why you answered a post concerning double precision with a supposed need for ECC I have no idea- they aren't inextricably linked.
> As for whatever vague point you're making, there are plenty of instances where FP64 could be useful to a prosumer (mixed single+ double precision workloads such as 3D modelling)
> 
> So, judging by the bolding and price inclusion, you're saying double precision :
> ...


Stop defending the Titan's price with DP. The 7970 is $200 on eBay nowadays, and it has 947 GFLOPS of double precision. Sooooo....
*7970... 4.735 GFlop/$*
That makes all your numbers look like a rip-off. Not to mention the 7970 can easily OC beyond its default 925 MHz.

Few people bought the first Titan for CUDA work; most simply bought it because it was the best of its time, and they were f*cked really hard by nVidia with the release of the 780 and 780 Ti. Hopefully smart gamers can learn from their demise and stay away from these stupidly overpriced cards.


----------



## HumanSmoke (May 4, 2014)

sweet said:


> Stop defending the Titan's price with DP. 7970 is 200$ on eBay nowaday, and it has 947 GFlop double precision. Sooooo....
> *7970... 4.735 GFlop/$*
> It makes all your number look like a rob. Not to mention 7970 can easily OC more than its default 925 MHz


Newsflash, genius: I used the models and numbers *provided by Xzibit* in his comment to me. Do I care that you can get a 7970 on eBay for $200? Not really, since it isn't relevant - it wasn't part of the original data set Xzibit provided; if he'd included a bunch of other cards, I'd have extrapolated their numbers too. All it proves is that the 7970 is a good secondhand deal (if it hasn't been half fried by a miner) and suffers from horrendous depreciation (a loss of 64% of its initial value in 28 months). Delve into the secondhand market and you can find comparable deals everywhere, since - shocking as it may seem to you - new card prices don't compare particularly well with pre-owned. How about a Quadro at *8.68 GFlops/$* with pro driver support thrown in? Or an HD 5970 at *16.87 GFlops/$*, or a $20 desktop 8800 GTS that works out to *31.2 GFlops/$*?
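The GFlops-per-dollar figures being traded in this thread are just peak throughput divided by price. A minimal sketch using the 7970 numbers quoted above (forum figures, not measurements; the helper name is mine):

```python
# Crude "compute value" metric: peak GFLOPS per dollar spent.

def gflops_per_dollar(gflops: float, price_usd: float) -> float:
    return gflops / price_usd

# HD 7970: 947 DP GFLOPS, at the two prices cited in the thread.
print(gflops_per_dollar(947, 200))  # secondhand eBay price -> 4.735
print(gflops_per_dollar(947, 549))  # launch price
```

The same one-liner reproduces any of the other per-dollar figures in the thread once you pick a throughput number and a price, which is exactly why the metric swings so wildly between new and secondhand cards.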


sweet said:


> Few people bought the first Titan for their work with CUDA.


Got a link for that? Maybe some sales numbers? Even some anecdotal evidence would suffice... really.


----------



## sweet (May 4, 2014)

HumanSmoke said:


> Newsflash genius, I used the models and numbers *provided by Xzibit* in his comment to me.....Do I care that you can get a 7970 on eBay for $200 ? Not really since it isn't relevant as it wasn't part of the original data set provided by Xzibit- if he'd included a bunch of other cards I'd have extrapolated their numbers also. All it proves is that the 7970 is a good secondhand deal (if it hasn't been half fried by a miner) and suffers from horrendous depreciation (a loss of 64% of its initial value in 28 months).  Delve into the second hand market and you can find comparable deals everywhere since it seems to come as a shock to you that new card prices don't compare particularly well with pre-owed. How about a Quadro at *8.68 GFlops/$* + pro driver support thrown in? or a HD 5970 at *16.87 GFlops/$*, or a desktop  $20 8800 GTS that works out at *31.2 GFlops/$*
> 
> You got a link for that ? Maybe some sales numbers?......even some anecdotal evidence would suffice....really.


I can remind you that the 7970's release price was $549. But that's not my main point here.

Most people bought the Titan for GAMES, and Nvidia's official drivers for this card have always been optimized for GAMES.

And that price for a gaming card is stupidly high.

Sales numbers can't determine the buying purpose, unfortunately. However, you can go to any tech forum and see how those Titan buyers bragged about their FPS, just like AMD buyers have recently been talking about the hashrates of their cards.


----------



## BiggieShady (May 4, 2014)

sweet said:


> Most of people bought Titan for GAMES, and the official drivers of nVidia for this card have been always optimized for GAMES.



The graphics driver should be optimized for all applications (including games), but this is about the CUDA libraries and the CUDA driver (they are part of the standard GeForce driver package but are also independent). Also, the whole point is that the TITAN is not marketed only as a gaming card: https://developer.nvidia.com/ultimate-cuda-development-gpu


----------



## Xzibit (May 4, 2014)

BiggieShady said:


> Graphics driver should be optimized for all applications (including games) but this is about CUDA libraries and CUDA driver (they are part of a standard geforce driver package but are also independent). Also the whole point is that TITAN is not only marketed as a gaming card: https://developer.nvidia.com/ultimate-cuda-development-gpu






> Nvidia just let us know that Titan supports Dynamic Parallelism and Hyper-Q for CUDA streams, and does not support ECC, the RDMA feature of GPU Direct, or Hyper-Q for MPI connections



The Titan brand exists to suck you into the CUDA ecosystem. Once they've got you there, it's not like you can buy Intel or AMD hardware to run it.


----------



## radrok (May 4, 2014)

^ Kinda what my point has always been.

Fingers crossed for Maxwell's Titan to have 12 GB of VRAM; fully GPU-rendered scenes can easily fill a 6 GB framebuffer, since you have to load all textures onto the GPU.


----------



## OneCool (May 4, 2014)

I smell smoke. Thick green smoke.


----------



## HumanSmoke (May 4, 2014)

sweet said:


> I can remind you that 7970's release price is 549$.


Why would I need reminding when you're the one replying to my post where I made a calculation based on that $549 price?


HumanSmoke said:


> All it proves is that the 7970 is a good secondhand deal (if it hasn't been half fried by a miner) and suffers from horrendous depreciation (*a loss of 64% of its initial value in 28 months*).


Based on your $200 current estimate...


sweet said:


> . 7970 is 200$ on eBay nowaday.



$200 / $549 = 0.36 = *64%* decrease

/notrocketscience
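Spelled out, the depreciation arithmetic above is (prices are the ones quoted in this exchange):

```python
# Retained value = current price / launch price; the loss is the remainder.

launch_price = 549   # HD 7970 at release (USD)
resale_price = 200   # eBay price cited in the thread (USD)

retained = resale_price / launch_price   # ~0.36 of original value
loss_pct = round((1 - retained) * 100)   # ~64% decrease over 28 months

print(f"retained {retained:.0%}, lost {loss_pct}% in 28 months")
```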


----------



## sweet (May 5, 2014)

HumanSmoke said:


> Why would I need reminding when you're the one replying to my post where I made a calculation based on the $549 price????
> 
> Based on your $200 current estimate...
> 
> ...


Mate, you've totally missed my point here. I quoted the 7970's release price to show that even at launch, the 7970 was a better choice for DP than any card in the Titan line. Therefore, considered as a compute card, the Titan is still not a valid option.

So please find a better excuse to defend the stupid price of these GAMING cards.

As mentioned above, only people working with CUDA can find benefit in these cards. Everyone else should learn the lesson the first Titan buyers took and stay away from these shiny money suckers.


----------



## HumanSmoke (May 5, 2014)

sweet said:


> Mate, you totally miss my point here. I just quote the release price of 7970 to show that even at the first release, 7970 was still a better choice for DP than any card of Titan branch.


No one is denying that the 7970 is a good deal at $200 (unless it's been thrashed by mining), so you're basically arguing with yourself.

You also seem to be another person hung up on numbers. AMD/ATI cards have had the edge over Nvidia boards in double precision (and single, for that matter) for some time, yet it still isn't reflected in the wider community. Why? Because AMD's architectures are reliant upon OpenCL for the most part, and OpenCL support is spotty at best.
Say what you will about CUDA, but the software ecosystem is in place and it works.
Blender, from their own FAQ:


> Currently NVidia with CUDA is rendering faster. There is no fundamental reason why this should be so—we don't use any CUDA-specific features—but the compiler appears to be more mature, and can better support big kernels. OpenCL support is still being worked on and has not been optimized as much, because we haven't had the full kernel working yet.


AFAIK, working OpenCL support isn't overly prevalent. Even Lux, which is touted as a poster child for OpenCL, has ongoing issues, and where both CUDA and OpenCL are supported, it is generally the former that is more mature. The Blender sentiment isn't a lone voice (pdf):


> In our tests, CUDA performed better when transferring data to and from the GPU. We did not see any considerable change in OpenCL's relative data transfer performance as more data were transferred. *CUDA's kernel execution was also consistently faster than OpenCL's, despite the two implementations running nearly identical code.*
> CUDA seems to be a better choice for applications where achieving as high a performance as possible is important. Otherwise the choice between CUDA and OpenCL can be made by considering factors such as prior familiarity with either system, or available development tools for the target GPU hardware.



So, basically, hardware performs only as well as the coding allows. AMD is tied to OpenCL, and OpenCL is tied to third parties for its advancement, which makes it very much a case of YMMV. What clouds the issue further is that mainstream (gaming) sites use only OpenCL apps to compare AMD and Nvidia cards, which distorts the overall picture, since using the CUDA path for Nvidia hardware invariably means a better result. Note this render test using Premiere, where the GTX 670 (using CUDA) and the R9 290X (using OpenCL) are basically equal in time to render. On raw numbers the 290X should have it all over the GTX 670; after all, the AMD card has *5.8 TFlops* of processing power to the 670's *2.46 TFlops* - well over double!

So the 7970 might represent great value with the right application, and it certainly is affordable at $200. On the other side of the ledger, the GTX 580 - also because of its falling price (and its ability to run CUDA apps) - makes a compelling buy for people who want to use CUDA-coded render/CAD apps. Is the Titan the be-all and end-all? Of course not, and I don't see anyone saying it is. What I see is people comparing two current top-tier GPUs because... well, because they are the flavour of the week.

Numbers on the page don't always translate well to real-life scenarios. Harking back to the point regarding double precision: its use is governed by the same coding environment. I can't say I've seen many FP64-only benchmarks outside of HPC; most consumer apps involve both single- and double-precision calculation rather than FP64 alone, and those that do find their way into benchmarks again avoid the CUDA path. HPC is the environment for widespread use of double precision, and the ratio of Nvidia to AMD GPUs there tells a pretty clear story.

Now, you realise that these two statements you made are contradictory:


sweet said:


> Therefore, when considered as a compute card, Titan cards are still not a valid option.





sweet said:


> As mentioned above, only people working with CUDA could find benefit in these cards.


The majority of compute applications for Nvidia cards *ARE* CUDA code. These machines, this machine, this machine, these machines are all geared for content creation. You say it's a waste of money; these people beg to differ. Different requirements mean different usage patterns.


----------



## radrok (May 5, 2014)

^
If I may add, we are almost forced to use CUDA devices; as you posted, their development is more mature and offers far more than the OpenCL-based solutions currently on the market.

Nvidia has done its homework with CUDA and is now reaping what it planted in the past.

Sadly CUDA is proprietary, but being proprietary is exactly what allowed it to thrive compared to the hot mess OpenCL is nowadays, at least for what I do.

I also challenge you to use Blender with an AMD-based graphics solution; you'll run away with nightmares. There are so many cursor bugs I can't even begin to enumerate them.


----------



## sweet (May 5, 2014)

HumanSmoke said:


> No one is denying the 7970 isn't a good deal at $200 (unless it's been thrashed by mining), so you're basically arguing with yourself.
> 
> You also seem to be another person hung up on numbers. AMD/ATI cards have had the edge over Nvidia boards in double precision (and single for that matter) on some time, yet it still isn't reflected in the wider community. Why? Because AMD's architectures are totally reliant upon OpenCL for the most part, and OpenCL support is spotty at best.
> Say what you will about CUDA, but the software ecosystem is in place and it works.
> ...



CUDA applications dominate the content creation field, but "compute" is not only content creation, mate. GPGPU covers a lot of fields, such as GPU mining, folding, password cracking...

Leaving that aside, my statement still stands: "Only people working with CUDA could find benefit in these cards."

Even considering only the Titan line as compute cards, the Titan Z is still stupidly overpriced. A pair of Titan Blacks with blower coolers is clearly better than this 3-slot axial-fan card in terms of performance as well as compatibility with server racks. And they're even cheaper!!

Titan cards make little sense already, and the Titan Z makes absolutely no sense at all.


----------



## radrok (May 5, 2014)

Why do you insist on saying the Titan makes no sense, or little? I am curious.


----------



## GhostRyder (May 5, 2014)

sweet said:


> CUDA applications are dominating content creation field, but "compute" is not only content creation, mate. GPGPU covers a lot of fields, such as GPU mining, folding, password crack, ...
> 
> Leaving that aside, my statement is still valid "Only people working with CUDA could find benefit in these cards"
> 
> ...



He tried to make the same argument on another site under a different name. I'm glad to see someone else who understands why the Titan Z will not work in a rackmount environment because of its bidirectional axial fan (something the Titan and Titan Black don't have; they use the usual blower). You're 100% correct: the card has no real purpose right now, because on the gaming front there are better, cheaper options; on the professional front there are better, cheaper options; and the lack of professional drivers, build quality and support for 24/7 work really holds it back. At three times the price of a standard Titan Black, and as a 3-slot device, it's hard to see it being worthwhile.

I'm guessing they will either bump the boost clocks or lower the price to compensate. Right now, I don't see it as a fruitful investment, since anyone who really needs CUDA dev is going to buy three Titan Blacks for the same price.



Hitman_Actual said:


> Thee dumbest, non-sense making deploy by Nvidia I've ever seen.......
> 
> This card and it's price tag make ZERO SENSE!
> 
> ...



You're completely right, but in the small CUDA render area that 12 GB might shine. However, as stated before, being a tri-slot, non-professional card at that price makes it a near-impossible sell. Its gaming aspects are abysmal and its professional aspects are limited, which undermines the whole idea of the card.



radrok said:


> Why do you insist in saying Titan makes no sense? Or little, I am curious


It mostly comes down to branding and the whole idea/design of the Titan. The original Titan cost $1K, but it was the fastest single GPU, carried a hefty 6 GB of RAM, and had some CUDA dev applications that made use of it. Even back then, though, the price, the lack of a 24/7 rating, the missing professional support, and being beaten fairly easily in gaming by its younger brothers made it a hard sell. Some people took advantage of the 6 GB of RAM, but in reality the card comes down to the question of where it actually fits. Its branding says gaming, Nvidia advertises it as a professional card, but its lack of support says it's not. It misses on all fronts, which is why I, along with many (if not most), find it a hard sell.

The Titan makes sense on a small front, but it's a very limited front.


----------



## HumanSmoke (May 5, 2014)

GhostRyder said:


> Im glad to see someone else who understands why Titan-Z will not work in a rack mount environment because of its 2 direction axial fan (Something Titan and Titan black don't have, instead the usual blower).


Weirdly enough, the guy you're quoting mentioned *nothing* about rackmount. A pretty lame attempt at histrionics on your part, as per usual. I've never met another poster who tries so hard to convince others of his immaturity.
BTW and OT: the GTX 690 also has the same fan/cooler arrangement. Maybe you could take time out from composing poorly spelled and punctuated posts to write some poorly spelled and punctuated emails to the GPU server builders - they clearly don't have your technical expertise, since they seem happy to sell and warranty the GTX 690 in a 4U form factor:


----------



## GhostRyder (May 5, 2014)

HumanSmoke said:


> Weirdly enough the guy you're quoting mentioned *nothing* about rackmount. Pretty lame attempt at histrionics on your part as per usual. I've never met any other poster that tries so hard to convince others of his immaturity.
> BTW and OT: The GTX 690 also has a the same fan/cooler arrangement, maybe you take time out from composing poorly spelled and punctuated posts to write some poorly spelled and punctuated emails to the GPU server builders - they clearly don't have your technical expertise, since they seem happy to sell and warranty the GTX 690 in a 4U form factor:


First of all, apparently you can't read at all, and you have just made another lame excuse to try to appear smart.


sweet said:


> Even if considering only Titan branch, as compute cards, Titan Z is still stupidly overpriced. A couple Titan Black with blower coolers are clearly better than this 3 slot co-axial card in term of performance as *well as compatibility with server racks*. And they are even cheaper!!




Just because a site will sell it to you does not mean it works well - something you spoke about in the past with HD 7990s in quad setups.

I have never seen someone try so hard to convince others of experience in fields he's never even touched in real life. Googling something and actually working on something are two totally different things. I like your name better on this site because it sums you up pretty well: just blowing smoke in people's faces.


----------



## Xzibit (May 5, 2014)

HumanSmoke said:


> Jesus, how many times are you going to edit a post.
> 
> It probably depends upon your definition of a supercomputer. If its an HPC cluster, then no, you wouldn't...but that's a very narrow association used by people with little technical knowledge of the range of compute solutions.
> Other examples:
> ...



The edit button is there, therefore I use it.

K.I.S.S.

750w, 1500w & Max

Nvidia Titan Z 12 GB (6GB per GPU) / 2.6 Tflops / 3 slot / 375w
750w = 1 / 2.6 Tflops
1500w = 3 / 7.8 Tflops
*Max = 5 / 13 Tflops*

Nvidia Quadro K6000 12 GB / 1.7 Tflops / 2 slot / 225w
750w = 3 / 5.1 Tflops
1500w = 6 / 10.2 Tflops
*Max = 8 / 13.6 Tflops*

AMD FirePro W9100 16 GB / 2.62 Tflops / 2 slot / 275w
750w = 2 / 5.24 Tflops
1500w = 5 / 13.1 Tflops
*Max = 8 / 20.96 Tflops*
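The card counts in the table above can be sketched with a small helper (a Python sketch; the function and its `system_overhead` parameter are mine). Note that a naive budget-divided-by-TDP count gives slightly higher numbers for the Titan Z than Xzibit's (2 at 750 W and 4 at 1500 W, versus his 1 and 3), so his table presumably reserves some headroom for the rest of the system:

```python
# How many cards fit a PSU power budget, and what DP throughput that buys.

def cards_in_budget(psu_watts: int, card_tdp: int, system_overhead: int = 0) -> int:
    """Whole cards that fit in the PSU budget after system overhead."""
    return max((psu_watts - system_overhead) // card_tdp, 0)

# Name, TDP (W), DP TFLOPS per card - figures as quoted in the post above.
CARDS = [("Titan Z", 375, 2.6),
         ("Quadro K6000", 225, 1.7),
         ("FirePro W9100", 275, 2.62)]

for name, tdp, dp_tflops in CARDS:
    n = cards_in_budget(1500, tdp)
    print(f"{name}: {n} cards, {n * dp_tflops:.2f} DP TFLOPS in 1500 W")
```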


----------



## cadaveca (May 5, 2014)

GhostRyder said:


> Googling something and actually working on something are two totally different things.



You're kidding, right? like... man... Google FTW!!!


Not really though, I just like that bit of your post. I might have to put it in my sig, if you don't mind. More people need to read that.


----------



## GhostRyder (May 5, 2014)

cadaveca said:


> You're kidding, right? like... man... Google FTW!!!
> 
> 
> Not really though, I just like that bit of your post. I might have to put it in my sig, if you don't mind. More people need to read that.


Well you can't believe everything you read on the internet


----------



## cadaveca (May 5, 2014)

GhostRyder said:


> Well you can't believe everything you read on the internet


That was a big part of why I wanted to do hardware reviews in the first place, and still do, although it's really something that costs me money rather than makes me money.


----------



## GhostRyder (May 5, 2014)

cadaveca said:


> That was a big part of why I wanted to do hardware reviews in the first place, and still do, although it's really something that costs me money rather than makes me money.


Yeah, I know what you mean; I've always wanted to do reviews as well but have never bitten the bullet. Part of the problem for me is being on call 12 hours a day, 5 days a week, most of the time, to fix servers. You can use it as part of your sig, I don't mind.


----------



## Xzibit (May 6, 2014)

HumanSmoke said:


> Yeah ? Glad I don't listen to your system build advice.



Your advice isn't that great, even if you're just advocating for Nvidia & CUDA. The Titan Z makes no sense other than to feed someone's lack of e-peen. I guess it's easier for some when it's not their money.

Nvidia Titan Z 12 GB (6GB per GPU) / 2.6 Tflops / 3 slot / 375w / $2,999
750w = 1 / 2.6 Tflops / $2,999
1500w = 3 / 7.8 Tflops / $8,997
Max = 5 / 13 Tflops / $14,995

Nvidia Titan Black 6 GB / 1.7 Tflops / 2 slot / 250w / $1,299
750w = 2 / 3.4 Tflops / $2,598
1500w = 5 / 8.5 Tflops / $6,495
Max = 8 / 13.6 Tflops / $10,392
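Another way to read the two tables is dollars per DP TFLOPS for a single card, using the list prices and throughput figures quoted above (a rough sketch; the helper name is mine):

```python
# Price efficiency: how many dollars each DP TFLOPS costs, per card.

def usd_per_tflop(price_usd: float, dp_tflops: float) -> float:
    return price_usd / dp_tflops

print(round(usd_per_tflop(2999, 2.6)))  # Titan Z: ~$1153 per DP TFLOPS
print(round(usd_per_tflop(1299, 1.7)))  # Titan Black: ~$764 per DP TFLOPS
```

On these quoted figures, the Titan Black delivers a DP TFLOPS for roughly two-thirds of what the Titan Z charges, which is the core of the complaint being made here.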


----------



## HumanSmoke (May 6, 2014)

Xzibit said:


> Your advise isn't that great.  Even if your just advocating for Nvidia & CUDA.  Titan Z makes no sense other then to feed some ones lack of e-Peen.  I guess its easier for some when its not their money.
> 
> Nvidia Titan Z 12 GB (6GB per GPU) / 2.6 Tflops / 3 slot / 375w / $2,999
> 750w = 1 / 2.6 Tflops / $2,999
> ...


You're right... but then again, I don't believe I advocated the Titan Z over the Titan Black. So what's your point? (Apart from trolling, and learning to use a calculator app, that is.)
A lot of CG render artists actually snap up GTX 580s (it's where my two cards ended up). They can utilise CUDA, are reasonably well suited to CG work, relatively cheap for the performance, and available with a 3 GB framebuffer.


----------



## Deleted member 24505 (May 6, 2014)

I could buy a whole lot of drugs 'n' whores for $3000.


----------



## HumanSmoke (May 6, 2014)

tigger said:


> I could buy a whole lot of drugs 'n' whores for $3000.


Probably a morning's budget for Keith Richards!


----------



## Tatty_One (May 6, 2014)

This total thread hijack is becoming tiresome and embarrassing, at least for one or two of you. Let's stop the tit-for-tat childish behaviour; thread cleaned up.


----------



## Am* (May 6, 2014)

Sihastru said:


> So much hate.
> 
> I understand the arguments.
> 
> ...



Comparing GPUs to cars is nonsense. Cars do not get replaced by models 40-50% faster every 1-2 years. They can also gain a lot of resale value over time, especially those with discontinued but iconic designs. For example, the late-90s VW Golf that I could've bought 10 years ago for about £500 now goes for about £3,000 just because of its looks (and it's only a budget car).

It pushes the limits of nothing, besides the level of stupidity of people with more money than sense.

As for the pro-market "bargain card" statement, sorry, but that is a bunch of bullshit that either misinformed people chant or blind Nvidia fanboys use as an excuse. Anyone who thinks spending $3,000 on a GeForce card is a good idea for development, when a Quadro K6000 can be had for just $2,000 more, is brain-dead, to say the least. This is still a GeForce card: no ECC support, no compatibility certification in any non-gaming, pro-oriented program (Adobe, Autodesk, etc.), and none of the compute-oriented support of the Tesla cards either, meaning those cards will slaughter this one in their own markets even more thoroughly than other cards will in the gaming market as far as value for money goes. The last time GeForce cards were comparable with their pro counterparts was back in the Fermi days, when cards like the GTX 580 had Adobe certification. That is also why so many people who upgraded to Kepler were pissed off when they realised how much worse the GTX 680 was at the same tasks: Nvidia crippled those GeForce cards at the hardware level, unlike the vBIOS/software and driver limits it used to put in place, which clever people bypassed on old cards with a custom vBIOS and modded drivers. This card is a huge waste of time, money and resources for Nvidia, and I hope it loses them a ton of cash (though I very much doubt it will).


----------



## GhostRyder (May 6, 2014)

Am* said:


> Comparing GPUs to cars is nonsense. Cars do not get replaced by ones 40%-50% faster every 1-2 years. They also gain a lot of resale value over time, especially those with a discontinued but iconic designs. For example, the late 90s VW Golf that I could've bought 10 years ago for about £500 now goes for about £3000 just because of its looks, (and it's only a budget car).
> 
> It pushes limits of nothing, besides the level of stupidity of people who have more money than sense.
> 
> As for the pro-market "bargain card" statement, sorry but that is a bunch of bullshit that either misinformed people chant or just blind Nvidia fanboys use as an excuse. Anyone who thinks spending $3000 on a GeForce card is a good idea compared to say a Quadro K6000 that can be had for just $2000 more for development -- is brain-dead, to say the least. This is still a Geforce card, meaning no ECC support, no compatibility certification in any non-gaming/pro-oriented program (Adobe, AutoDesk etc) and no compute-oriented support of Tesla cards either, meaning they will slaughter this card in their own markets even more than other cards will in the gamers' market as far as value-for-money goes. The last time GeForce cards were comparable with their pro-counterparts was back in the Fermi days when cards like the GTX 580 had Adobe certification and it is also why there were so many pissed off people who upgraded to Kepler who realised how much worse the GTX 680 was at the same tasks due to Nvidia crippling Geforce cards at hardware-level compared to the vBIOS/software and driver limits which Nvidia used to put in place and which clever people used to bypass on old cards with a custom vBIOS and modded drivers. This card is a huge waste of time, money and resources for Nvidia and I hope it loses them a ton of cash (though I very much doubt it).


You have listed nearly every problem that makes this card, in its current configuration, a complete bust.

Nvidia decided with Kepler to separate the compute cards and the gaming cards completely, so people on a budget were stranded: either stick with the old cards (Fermi GTX 580s, for instance) or spend the extra on professional-level cards. Then Nvidia released the Titan, which basically brought that capability back to the gaming series, with its basic level of CUDA dev support and a large amount of RAM. Of course, that came at nearly double the price of Nvidia's previous top GPU at the time of release (though, to be fair, it was the fastest single GPU for gaming, so it was slightly more justified then than the recently released Titan Black is). The Titan was designed with a nice blower to keep it cool, which (as with past reference designs) works well in any environment, including a crammed rackmount; that is why people built CGI render farms with it, given its high memory capacity.

The Titan Z was supposed to be a new dual-GPU card with 12 GB of RAM that gave Titan devs a better buy, or the ability to use it in similar environments. That's not the case with this card's design. Unlike previous dual-GPU cards, this one is a 3-slot form factor, keeps the central axial fan, and costs three times as much as a single Titan Black. The $3K price is crazy; it puts the card well into Quadro and Tesla territory. People still argue, "well, it's still cheaper than them," but what they miss is the huge up-charge for how little the card brings, and how much narrower the scenarios where the Titan once worked become with this card. If Nvidia had given the Titan Z some more professional-level features, including ECC RAM, we could be having a different debate right now, but facts are facts, and no amount of fanboyism will justify this card's cost.

Whether or not you're a professional who needs something like this, there is already a significantly better option out there: the FirePro W9100. If you really need CUDA dev, buy three Titan Blacks and be happy knowing you will blow the Titan Z to pieces, and they will work in ANY environment. For those who need extreme power, the W9100 is $70 more, has more RAM than the Titan Z (and it's ECC), comes with professional-level driver support, is rated for 24/7 use, has a blower that works in most environments, and packs all of that onto a SINGLE GPU, so it isn't limited by dual-GPU support issues in certain compute areas.

The Titan Z misses the ball completely. If the price comes down to $2,200-2,500, maybe it will have slightly more purpose (though I doubt it will, but Nvidia might realise its foolishness); until then, this card is beyond redemption.



Xzibit said:


> Your advise isn't that great.  Even if your just advocating for Nvidia & CUDA.  Titan Z makes no sense other then to feed some ones lack of e-Peen.  I guess its easier for some when its not their money.
> 
> Nvidia Titan Z 12 GB (6GB per GPU) / 2.6 Tflops / 3 slot / 375w / $2,999
> 750w = 1 / 2.6 Tflops / $2,999
> ...



This pretty much sums up the reason nicely: a huge price difference and a huge performance difference all at once (though the Titan Black is a bit cheaper at $1,099 for the EVGA Superclocked variant, so it's an even better deal versus the Titan Z, lawlz).


----------



## radrok (May 6, 2014)

You obviously missed how well the Titan sold to prosumers.

There are many applications where DP on CUDA is welcome and certified drivers make no difference at all.

You are all trying to justify your own positions without having your hands on anything that could remotely use this kind of graphics power.

I'm not justifying the Titan Z, but there is a market for the Titan branding; you just can't grasp it.

I'd say most of you are gamers: buy a non-DP graphics card like the 780 Ti and call it a day.

If you don't have a use for a thing, that doesn't necessarily mean it is useless to everyone.


----------



## GhostRyder (May 6, 2014)

radrok said:


> You obviously missed how well the Titan sold to prosumers.
> 
> There are many applications where DP on CUDA is welcome and certified drivers make no difference at all.
> 
> ...


I think you missed the point of what I was saying: the prosumers used to have a different set of choices from Nvidia when Fermi was predominant.  Even by today's standards, in CUDA apps like Iray and Blender (which are two staples of the prosumers buying Titans), the 580 beats out the newer 680 and 780 (the 680 in many cases by nearly double), whereas even the Titan in some cases is not that far ahead.  The problem is that Nvidia specifically separated the professional world and the gaming world on the GK architecture to slow this down and push people toward the more expensive Tesla and Quadro cards.  Then, when people were upset and still buying up 580s (I remember 580 prices stayed high because of this), they released the Titan class of cards, which were basically what the 580s used to be, at an extraordinarily high price.  GM (Maxwell) actually sounds like it's going to fix this a bit by focusing more on the compute aspects (as seen on the 750 and 750 Ti) along with the gaming parts.  The fact is that the Titan branding is very confusing and an oddity: it lacks the features that make professional cards, well, professional cards (drivers, ECC memory, 24/7 rating), but is advertised as such while keeping the gaming branding (GeForce GTX).

I know exactly why people buy the Titan (if you read what I was saying, I stated that many a time, and some of the posts even pointed out how much better a buy the Titan Black is than the Z), and the reasons are strong, especially with CGI being predominant and RAM being a high requirement in those fields.  However, that does not change what I said about them separating the cards for reasons that mostly involve making more money (for instance, I would feel safe saying the profit margin on the Titan Black is significantly higher than on the 780 Ti, which is the same chip except one comes with 6 GB of RAM).

Titans work OK for what prosumers buy them for, but they come at a price that, while cheaper than the pro cards, carries the same styling and power as the gamer series, with just enough extra to make them the best of a bad situation.  I've met people who snatched up 3 GB 580s on the cheap to use in similar environments, and they perform excellently there, in a position the GTX 680 and 780/Ti cards can't even imagine.  That's my problem with the Titan branding, and especially the upcoming Titan-Z.


----------



## Deleted member 24505 (May 6, 2014)

If I had the money for one of these, I would buy one in a second, as would the people these are aimed at, who do not worry about the cost.

I'd have a nerdgasm every time I looked in my case.


----------



## radrok (May 6, 2014)

Well, we can say they basically became greedier; Nvidia just saw another segment between their gaming and professional lines of graphics cards.

That segment was happy and dandy using untapped Fermi until Nvidia decided to milk them (us).


----------



## GhostRyder (May 6, 2014)

radrok said:


> Well, we can say they basically became greedier; Nvidia just saw another segment between their gaming and professional lines of graphics cards.
> 
> That segment was happy and dandy using untapped Fermi until Nvidia decided to milk them (us).


Exactly, that's my only point in the argument about the Titans (at least in terms of uses).  The Fermi architecture was really excellent: great at gaming along with the compute and CUDA areas.

580s at one time were in the range of $400-700 for 3 GB models (give or take, depending on the version and how close to release they were), while the Titan costs a minimum of $1k for a feature set like the 580 had, after those features were scrapped on the 680 and 780/Ti.  It's a game of "see how much they can charge until people finally turn their noses up," which is pretty apparent with the absurd pricing of the Titan-Z.

I'm waiting to see what the 880 brings. If the 750 Ti is any indicator, chances are it will be a much better render card than the Titan and will put things back the way they should have been from the beginning (well, this is mere speculation).


----------



## radrok (May 7, 2014)

Fact is, there is no competition among CUDA devices with double precision enabled; now that Nvidia has created demand for their standard, they will try to extract as much as possible.


----------



## Xzibit (May 8, 2014)

Guessing they missed this launch date also.


----------



## GhostRyder (May 8, 2014)

Xzibit said:


> Guessing they missed this launch date also.


Yeah, I'm almost wondering if they are completely re-designing it, or if they are canceling it (doubtful on the second).  Some leaked performance figures said the boost clock was bumped up hugely but is still low.


----------



## Xzibit (May 9, 2014)

GhostRyder said:


> Yeah, I'm almost wondering if they are completely re-designing it, or if they are canceling it (doubtful on the second).  Some leaked performance figures said the boost clock was bumped up hugely but is still low.



Has to be a re-design.  Boosting the clocks won't cut it.


Upping the sustained boost clock on the 780 Ti by 13 MHz increased temps 3°C. A dual-GPU card will see its temps rise faster than a single. A 50 MHz increase on the Titan Z will be worse than the 95°C on the 290X in this graph.
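Scaling that single 780 Ti data point linearly gives a rough floor on the temperature cost of a clock bump; thermals are not actually linear and a dual-GPU card fares worse, so this is a back-of-envelope sketch, not a model:

```python
# Naive linear extrapolation from the 780 Ti data point above:
# +13 MHz sustained boost -> +3 C. Dual-GPU thermals will be worse,
# so treat the result as an optimistic lower bound.
deg_per_mhz = 3 / 13          # ~0.23 C per MHz
bump_mhz = 50                 # rumoured Titan Z boost increase
print(f"+{bump_mhz} MHz implies at least ~{bump_mhz * deg_per_mhz:.1f} C extra")
```

Even this optimistic estimate lands north of 11°C, which is why a clock bump alone looks untenable.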


----------



## radrok (May 9, 2014)

I'm pretty sure they didn't expect the 295 to be so fast.
It rained on their parade so hard, even with that power consumption.
Twice the asking MSRP doesn't justify 100 W less, and they can't even play the quietness game, since the 295's AIO does a flawless job.

They should just make a dual-GK110 (no DP) card at the same MSRP as the 295X2, maybe a bit less considering it would perform worse; Nvidia will never put out a 500 W card, imo.


----------



## sweet (May 9, 2014)

Seems like we have a plot twist:
http://www.kitguru.net/components/g...tpones-launch-of-geforce-gtx-titan-z-reports/

Maybe the Titan Z will come back a few months later with better VRMs and cooling. Given that it is nVidia we are talking about, maybe we will see a $4,000 price tag :lol:


----------



## radrok (May 9, 2014)

inb4 quad slot GPU.


----------



## Xzibit (May 9, 2014)

radrok said:


> I'm pretty sure they didn't expect the 295 to be so fast.
> It rained on their parade so hard, even with that power consumption.
> Twice the asking MSRP doesn't justify 100 W less, and they can't even play the quietness game, since the 295's AIO does a flawless job.
> 
> They should just make a dual-GK110 (no DP) card at the same MSRP as the 295X2, maybe a bit less considering it would perform worse; Nvidia will never put out a 500 W card, imo.



I agree.

If they do, they also have to go the water cooling route, and once you open up GK110 it draws more juice than Hawaii. Going water also kills the "CUDAsumer" market.  They have to leave the Titan Z on air; a "790" could go water.

Maxwell's timetable probably got pushed up?


----------



## radrok (May 9, 2014)

I could shut down my AX1200 with two Titans overclocked; GK110 becomes a power hog once you pass a threshold.

I can imagine it won't take much to surpass that 375 W mark if they want to reach the 295X2 in performance. I think ATI pulled an ace this time, which I would gladly support, but it would be a downgrade for me.

I think they just went too bold this time, without thinking about what the 6990 was and what the 295X2 could be.

Dual GPUs have always been an ATI game; Nvidia can't beat them there. ATI doesn't really care about power consumption, as it has shown us now as well as in the past.

It has bitten Nvidia in the ass.


----------



## Xzibit (May 9, 2014)

According to *Hermitage Akihabara*:



Hermitage Akihabara said:

> Sales of "GeForce GTX TITAN Z" graphics cards were due to be unembargoed at 22:00 on Thursday, May 8, but because of a driver problem, the launch has been postponed again.
> According to multiple sources, product is already in stock with companies in the country, waiting for NVIDIA to lift the sales embargo.



So they already went out and just need "proper drivers", or at least ones that will make it competitive with the 295X2.


----------



## 64K (May 9, 2014)

It's still a long wait for more Maxwells. My guess is they are working on a dual GPU fully unlocked GK110 (GTX 790) to answer AMD's R9 295X2. If they go full balls and don't reduce the clocks it could beat the 295X2 but we'll see. God only knows what they will set the retail price at though. I think it will be a good bit more than double the price of a GTX 780Ti.

I'm still assuming the Titan Z will make an appearance soon.


----------



## sweet (May 9, 2014)

Xzibit said:


> According to *Hermitage Akihabara*
> 
> 
> 
> So they already went out and just need "proper drivers" or atleast ones that will make it competitive with 295x2.


This "proper drivers" business, imho, is a new BIOS that allows it to hit 95°C and 290X noise levels.
But I doubt the VRMs can keep up with two power-hungry GK110s at that level. The story of the blown-up 590s will play out again.


----------



## Xzibit (May 10, 2014)

sweet said:


> This "proper drivers" business, imho, is a new BIOS that allows it to hit 95°C and 290X noise levels.
> But I doubt the VRMs can keep up with two power-hungry GK110s at that level. The story of the blown-up 590s will play out again.



For that to happen, AIBs would have to recall stock, do the BIOS update themselves, run tests on a percentage of the stock, then send them off to stores again. Since it's a Titan, it might have to go back to Nvidia, given they don't like AIBs fiddling with them as much.

Leaving it up to P.O.S. to do the BIOS update is extremely unwise, unless you're willing to take the PR and return-rate hit that will go along with it.


----------



## GhostRyder (May 10, 2014)

Xzibit said:


> I agree.
> 
> If they do, they also have to go the water cooling route, and once you open up GK110 it draws more juice than Hawaii. Going water also kills the "CUDAsumer" market.  They have to leave the Titan Z on air; a "790" could go water.
> 
> Maxwell's timetable probably got pushed up?



What should be done: they need this in a 2-slot config with a cooler that will work in all environments.  A 3-slot card advertised as a "compute" card that won't work well in a rack, and that costs as much as real professional cards without most of the feature set, is a hard sell.

If they made this work in a 2-slot config while improving the cooler and clocks, along with a heavy price drop, then it could be sold as a better alternative that adds more RAM for CUDA renders and devs.

I think a 790 would not be a good idea unless more RAM is added.  Shooting for 4K, it's already apparent that the 780 Ti's 3 GB limit is a real problem.

The Titan-Z is just going to have to go through a lot of changes before it's even worth a look.


----------



## Xzibit (May 10, 2014)

Looks like the first Titan Z listing by SabrePC. *Zotac Titan Z for $3,879* Ouch!!!


----------



## Steevo (May 10, 2014)

LOL!!!!! $3,879. Nvidia: "Quick, take their money before they realize it has wood screws!!!"


----------



## GhostRyder (May 10, 2014)

Xzibit said:


> Looks like the first Titan Z listing by SabrePC. *Zotac Titan Z for $3,879* Ouch!!!


Pfffffffffffffffffffffffffffffffft.

Are you kidding me with that price!!!

That's a whole gaming/compute rig's price, one that could actually include a pair of Titan Blacks and a real fancy motherboard and processor!  I could fit an Ivy Bridge-E chip, a pair of Titan Blacks, a nice motherboard, 16 GB of RAM, and some heavy SSD storage for the price of one of those cards.
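As a sanity check on that claim, here's the build priced out with illustrative mid-2014 street prices; everything except the $1,099 Titan Blacks mentioned earlier in the thread is an assumed ballpark figure, not an actual quote:

```python
# Hypothetical whole-rig bill of materials vs. the $3,879 Titan Z listing.
# Component prices are assumed mid-2014 ballparks, not actual quotes.
titan_z_listing = 3879

build = {
    "2x Titan Black": 2 * 1099,
    "Ivy Bridge-E CPU (e.g. i7-4930K)": 580,
    "X79 motherboard": 300,
    "16 GB DDR3": 150,
    "SSD storage": 400,
}

total = sum(build.values())
print(f"Build total: ${total}  (Titan Z listing: ${titan_z_listing})")
assert total <= titan_z_listing  # the whole rig undercuts the single card
```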


----------



## debs3759 (May 10, 2014)

Xzibit said:


> Looks like the first Titan Z listing by SabrePC. *Zotac Titan Z for $3,879* Ouch!!!



LOL, they claim to have them in stock, despite the fact that NVIDIA still can't give a launch date.


----------



## Xzibit (May 10, 2014)

debs3759 said:


> LOL, they claim to have them in stock, despite the fact that NVidia still cannot give a launch date



Stores already received their stock a while ago (Gibbo over at OCUK, for one). Asian sites have the listings as well.

The release date has been delayed three times now:
1.) When it was announced, it was early April
2.) April 29
3.) May 8
4.) Who knows.


----------



## GhostRyder (May 10, 2014)

Xzibit said:


> Stores already received their stock a while ago (Gibbo over at OCUK, for one). Asian sites have the listings as well.
> 
> The release date has been delayed three times now:
> 1.) When it was announced, it was early April
> ...


Yeah, it was pretty apparent they were all set to go but held off and are trying to rethink their strategy.  The 295X2 at half the price and temperature (not literally on temp, but cooler than normal) really put a damper on them.


----------



## radrok (May 11, 2014)

Maybe they will lower the price to match two Blacks, or even less. There is no way they can launch it at its original MSRP. One thing is sure: gotta love competition.


----------



## Xzibit (May 11, 2014)

radrok said:


> Maybe they will lower the price to match two Blacks, or even less. There is no way they can launch it at its original MSRP. One thing is sure: gotta love competition.



That would be the sensible thing to do, although I don't think it will happen, because someone would have to explain why the price was one of the only things revealed at GTC.


Not a smart move to go on stage and showcase an upcoming product with the price as one of the few things announced, only to have to walk it back due to your competition having a better gaming product at half the price.

They still plan on releasing it in Q2, which is May-June-July for them.

*KitGuru - Nvidia vows to make GeForce GTX Titan Z available in coming months*



Nvidia Conference Call for Q1 2015 said:

> “At the very high end, we announced our newest flagship GPU, the GeForce GTX Titan Z. This is the highest performance graphics card we have ever designed. The Titan Z will please both PC enthusiasts and CUDA developers and will be available in Q2.”


----------



## radrok (May 11, 2014)

As I already said, that "boldness" has bitten them in the ass.

I'm kinda glad they can't get away with this SKU; it would have been ridiculous to witness.


----------



## pr0n Inspector (May 12, 2014)

GhostRyder said:


> Yea it was pretty apparent they were all set to go but held off and are trying to rethink their strategy.  The 295x2 at half the price and temp (not literally on temp but cooler then normal) really put a damper on them.



I don't get the fascination with the AIO system on the 295X2. It's dissipating nearly 500 W of heat through a single radiator; the temps and noise are piss-poor by "real water" standards. And after going through all the trouble of a liquid system, the VRM is still left out of the loop and gets very hot.


----------



## GhostRyder (May 12, 2014)

pr0n Inspector said:


> I don't get the fascination with the AIO system on the 295X2. It's dissipating nearly 500 W of heat through a single radiator; the temps and noise are piss-poor by "real water" standards. And after going through all the trouble of a liquid system, the VRM is still left out of the loop and gets very hot.


What do you mean? All I have heard is that the temps stay in the low 60s; in fact, this example says it never exceeded 62°C, so I'm not sure why you're saying the temps are bad.  As far as noise goes, if you're considering this loud, I do not understand the problem.  Most cards seem to run much louder than that in general.

It's not the most graceful thing on the planet, for sure, but it gets the job done, and to me it's the best dual-GPU card that has come out to date (reference, mind you).

Here is a thermal image that showed the VRMs hitting about 70°C under load:
http://www.guru3d.com/articles_pages/amd_radeon_r9_295x2_review,14.html


----------



## pr0n Inspector (May 12, 2014)

GhostRyder said:


> What do you mean? All I have heard is that the temps stay in the low 60s; in fact, this example says it never exceeded 62°C, so I'm not sure why you're saying the temps are bad.  As far as noise goes, if you're considering this loud, I do not understand the problem.  Most cards seem to run much louder than that in general.
> 
> It's not the most graceful thing on the planet, for sure, but it gets the job done, and to me it's the best dual-GPU card that has come out to date (reference, mind you).
> 
> ...




It's only great by air-cooling standards, because video card coolers have always been limited greatly by space and form factor. It would do so much better with a full-cover block, or at least with the VRM in the loop and a fat dual rad. This half-assed el cheapo solution only wows the uninitiated. I know that sounds elitist, but that's how it is.


----------



## Xzibit (May 12, 2014)

pr0n Inspector said:


> It's only great by air-cooling standards, because video card coolers have always been limited greatly by space and form factor. It would do so much better with a full-cover block, or at least with the VRM in the loop and a fat dual rad. This half-assed el cheapo solution only wows the uninitiated. I know that sounds elitist, but that's how it is.



Reference designs have to accommodate most users.

EK Water Blocks Unveils its Radeon R9 295X2 Full-Coverage Blocks

I'm sure there will be a block for the Titan Z, if it's ever released.


----------



## GhostRyder (May 13, 2014)

pr0n Inspector said:


> It's only great by air-cooling standards, because video card coolers have always been limited greatly by space and form factor. It would do so much better with a full-cover block, or at least with the VRM in the loop and a fat dual rad. This half-assed el cheapo solution only wows the uninitiated. I know that sounds elitist, but that's how it is.


I'm really confused, because you're calling this system "cheap" (well, and some other choice words), but I do not understand what you are referring to.  I would say every previous dual-GPU card has had a much cheaper cooler, so I can't call this one worse than all the earlier models.  On top of that, this is just a hybrid cooler, which is gaining popularity and works very well for what it does.  Plus, it keeps the temps low enough without resorting to extreme fan speeds and noise, while also allowing plenty of overclocking headroom for a dual-GPU card, so I fail to see the problem.  A full-cover block is very expensive on its own, not including the thick radiator and pump system to go with it.


----------



## pr0n Inspector (May 13, 2014)

GhostRyder said:


> I'm really confused, because you're calling this system "cheap" (well, and some other choice words), but I do not understand what you are referring to.  I would say every previous dual-GPU card has had a much cheaper cooler, so I can't call this one worse than all the earlier models.  On top of that, this is just a hybrid cooler, which is gaining popularity and works very well for what it does.  Plus, it keeps the temps low enough without resorting to extreme fan speeds and noise, while also allowing plenty of overclocking headroom for a dual-GPU card, so I fail to see the problem.  A full-cover block is very expensive on its own, not including the thick radiator and pump system to go with it.



Because I'm looking at it from a watercooling point of view. Asetek pumps are extremely weak, their radiators are made of aluminum, and the coolant is full of glycol: none of these are acceptable in the watercooling scene. AMD just slapped two pump/block units on it, without even a simple VRM block, because Asetek doesn't make them. Dual rads are literally 10 bucks more than a single, and with a full-cover block (presumably a 3rd-party one bought at wholesale pricing) you don't need all the extra heatsinks, you only need one pump, and final assembly is simpler; the price increase would be something like 60-70 dollars, virtually nothing for someone buying a $1,500 video card. It's the water equivalent of a stock CPU cooler: hot, loud, and cheaply made. Think about it: you are using a single 120 mm radiator to dissipate 500 watts of heat when even Asetek themselves sell duals for CPUs, so the fan needs to work overtime to keep it under throttle temperature. Compare that to custom watercooling, where 500 W would warrant a fat *quad* rad, yet the fans would be blowing at something like 1500 RPM *and* have far better temperatures.


----------

