Tuesday, March 25th 2014

NVIDIA Announces the GeForce GTX TITAN-Z

Here it is, folks: the fabled monster dual-GPU graphics card from NVIDIA, based on its GK110 silicon, the GeForce GTX TITAN-Z (sounds like "Titans"). The first reference-design graphics card to span three expansion slots, the GTX TITAN-Z features a cooler that is a scaled-up version of the GTX 690's, with a pair of meaty heat-pipe-fed heatsinks ventilated by a centrally located fan. The card features a pair of GK110 chips with all 2,880 CUDA cores enabled on each. That works out to a total core count of 5,760!

That's not all: the two chips have 480 TMUs and 96 ROPs between them, and each is wired to 6 GB of memory, totaling a stunning 12 GB on the card. At this point it's not clear whether the GPUs feature full double-precision (DPFP) performance, but their combined single-precision (SPFP) throughput totals 8 TFLOP/s. Display outputs on the card include two dual-link DVI, a DisplayPort, and an HDMI connector. According to its maker, the GTX TITAN-Z is the first graphics card truly ready for gaming at 5K resolution (5120 x 2700 pixels) on a single display head. At US $2,999, the card costs thrice as much as a GTX TITAN Black, for twice its performance.
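For reference, the headline numbers can be sanity-checked from the per-GPU figures. The ~705 MHz base clock used below is an assumption for illustration (final clocks were not in the announcement):

```python
# Sanity-check the TITAN-Z headline specs from per-GPU figures.
# The 0.705 GHz base clock is an assumed value, not a confirmed spec.
cores_per_gpu = 2880
gpus = 2
base_clock_ghz = 0.705

total_cores = cores_per_gpu * gpus                   # 5,760 CUDA cores
# Each CUDA core can issue one fused multiply-add (2 FLOPs) per clock.
sp_tflops = total_cores * 2 * base_clock_ghz / 1000  # single-precision TFLOP/s

print(total_cores)          # 5760
print(round(sp_tflops, 1))  # 8.1, in line with the quoted 8 TFLOP/s
```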

122 Comments on NVIDIA Announces the GeForce GTX TITAN-Z

#101
Xzibit
Making it seem like everyone's doing it just because someone has a blog about it doesn't make it the norm.

You can always just read the guys blog. Echelon Blog
I can however not attest to how stable they are in a 24/7 setting. Our systems run 24/7 but computation on GPUs is only done in small bursts throughout the day. There are however some computing centers that run multi-GPU systems (not 8 but 2 or 3 per node) with consumer grade cards quite successfully. There are probably more problems than with professional cards but they are also a lot more expensive.
Posted on Reply
#102
HumanSmoke
The Von MatricesI still don't see how a Titan Z could work in a server case. Those cases are designed for a torrent of front to back air flow. But the Titan Z (and other modern consumer dual GPU cards) exhaust from both the front and rear of the card
You'd be best to direct the question at someone who runs one (or more). There are a few blogs and sites for multi-GPU CG workload machines.
FWIW, the same could be said of the GTX 690 - a card that I've seen more often as an Octane renderer than as a gaming card. I'm not saying the central radial fan and front exhaust are ideal, since they work against the natural airflow, but it doesn't seem to deter everyone.
Tyan barebones rackmount product page.
XzibitMaking it seem like everyone's doing it just because someone has a blog about it doesn't make it the norm.
Didn't say everyone was doing it, or that it's the norm. If it were, then Nvidia would be selling boatloads of them, wouldn't they?

I can keep putting up the render rigs if you'd like, just to prove that Echelon's isn't the only render rig in existence.
Posted on Reply
#103
Disparia
The Von MatricesI still don't see how a Titan Z could work in a server case. Those cases are designed for a torrent of front to back air flow. But the Titan Z (and other modern consumer dual GPU cards) exhaust from both the front and rear of the card. In such a server case the Titan Z's front GPU will be starved for cooling since it exhausts against the case's flow of air.

As far as I know you have to go to the Tesla range with NVidia (like the K10) and Firepro with AMD (like the S10000) to get dual GPU cards that will work with front to back airflow.
You just remove the fan unit and shroud. Even if you kept them on, it's not like those fans will actually disrupt airflow in the cases by Tyan and Supermicro. Like a squirrel standing up to a semi.
Posted on Reply
#104
TheHunter
HumanSmoke, and your point is?

Butthurt much that you need to prove something?
Posted on Reply
#105
xorbe
Obviously Titan Z was aimed at AMD's W9100 today, which is probably more than $2999.
Posted on Reply
#106
FX-GMC
TheHunterHumanSmoke, and your point is?

Butthurt much that you need to prove something?
Seems like the norm for his posts...

Might buy one of these if I win the powerball.
Posted on Reply
#107
radrok
Yawn, Nvidia. I've been enjoying the same kind of performance, if not more, for over a year now, and for a thousand dollars less. Old tech and overpriced; give us a new architecture please, and stop milking the dead horse that is Kepler.
Posted on Reply
#108
xorbe
HumanSmokeIf the 4U (in this case) can accommodate 10 GPUs (5 x Titan Z), why would they stick with 8 Titan/Titan Black? You are potentially losing 20% of the possible performance per unit.
You would save $7000. Buy two more Titan Blacks, and use the $5000 left to buy whatever additional server room is needed. I don't see how it comes out ahead.
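The arithmetic, assuming launch prices of $2,999 per TITAN-Z (2 GPUs) and $999 per TITAN Black (1 GPU), works out as follows:

```python
# Compare the cost of filling 10 GPU slots with TITAN-Z vs TITAN Black,
# using the launch MSRPs ($2,999 and $999) as assumed prices.
titan_z_price = 2999      # dual-GPU card
titan_black_price = 999   # single-GPU card

five_titan_z = 5 * titan_z_price        # 10 GPUs: $14,995
eight_blacks = 8 * titan_black_price    # 8 GPUs:  $7,992
saved = five_titan_z - eight_blacks     # ~$7,000 saved

two_more = 2 * titan_black_price        # brings the Black build to 10 GPUs
left_over = saved - two_more            # ~$5,000 remaining for extra server room

print(saved, left_over)  # 7003 5005
```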
Posted on Reply
#109
15th Warlock
xorbeObviously Titan Z was aimed at AMD's W9100 today, which is probably more than $2999.
Bingo!

What boggles my mind is why call it a GeForce and not a Tesla... Seems like Nvidia is trying to appeal to a much wider audience by marketing this card to compute users, as a cheap alternative to Tesla, and to gamers alike.
Posted on Reply
#110
Xzibit
15th WarlockBingo!

What boggles my mind is why call it a GeForce and not a Tesla... Seems like Nvidia is trying to appeal to a much wider audience by marketing this card to compute users, as a cheap alternative to Tesla, and to gamers alike.
Doesn't Nvidia still lock cards out of compatibility with professional software? BIOS + PCB hardlocks and softlocks mean you just can't use or trick it into being a Tesla/Quadro. At $3,000 it loses its appeal as a cheap alternative to anything serious.

It might also be feeling pressure from Intel MICs and AMD FirePro, which have started providing competitive product stacks at much lower prices in that segment.

I believe you're right, though, that the aim is more the student/gamer/novice crowd.
Posted on Reply
#111
HumanSmoke
XzibitDoesn't Nvidia still lock cards out of compatibility with professional software? BIOS + PCB hardlocks and softlocks mean you just can't use or trick it into being a Tesla/Quadro. At $3,000 it loses its appeal as a cheap alternative to anything serious.
Not really, since you can use Quadro and Forceware drivers concurrently. If you need Viewport or pro driver support, it's a simple matter of connecting the display output to a cheap Quadro NVS - which typically starts at ~$100 (or any other Quadro, as the mixed-GPU machine picture shows in post #103).
TheHunterHumanSmoke and your point is? butthurt much that you need to prove something?
Providing a possible usage scenario for the hardware being discussed.
Providing examples to substantiate said usage scenarios.

Maybe I should just take a leaf out of your book and go with the resource-light, no-thinking-required ad hominem attack that has zero content regarding the actual subject of the thread.
Posted on Reply
#112
FX-GMC
HumanSmokeNot really, since you can use Quadro and Forceware drivers concurrently. If you need Viewport or pro driver support, it's a simple matter of connecting the display output to a cheap Quadro NVS - which typically starts at ~$100 (or any other Quadro, as the mixed-GPU machine picture shows in post #103).

Providing a possible usage scenario for the hardware being discussed.
Providing examples to substantiate said usage scenarios.

Maybe I should just take a leaf out of your book and go with the resource-light, no-thinking-required ad hominem attack that has zero content regarding the actual subject of the thread.
butthurt is an adjective (which means it isn't a noun). Therefore it cannot be considered name calling. Now if he called you a hurt butt........
Posted on Reply
#113
HumanSmoke
FX-GMCbutthurt is an adjective (which means it isn't a noun). Therefore it cannot be considered name calling. Now if he called you a hurt butt........
The image I pulled off a Google search as an illustration. What I actually said was:
HumanSmokeMaybe I should just take a leaf out of your book and go with the resource-light, no-thinking-required ad hominem attack that has zero content regarding the actual subject of the thread.
Posted on Reply
#114
FX-GMC
HumanSmokeThe image I pulled off a Google search as an illustrative. What I actually said was
Sorry, I ignored the text in favor of the colorful image with a large arrow pointing to NAME CALLING.
Posted on Reply
#115
Disparia
xorbeYou would save $7000. Buy two more Titan Blacks, and use the $5000 left to buy whatever additional server room is needed. I don't see how it comes out ahead.
It's like when you need to buy an 8-way Xeon instead of a pair of quads. The quads have a huge cost advantage, but they don't provide the necessary performance or aren't applicable to your situation.

Similarly, it's going to be far more common to see Titans in use, but there will be those occasions where it's advantageous to use the Z.
Posted on Reply
#116
xorbe
XzibitDoesn't Nvidia still lock cards out of compatibility with professional software? BIOS + PCB hardlocks and softlocks mean you just can't use or trick it into being a Tesla/Quadro.
Isn't the only difference ECC support? Though I'm guessing some OpenGL operations are "driver nerfed" for the non-Quadro Titan.
Posted on Reply
#117
Xzibit
xorbeIsn't the only difference ECC support? Though I'm guessing some OpenGL operations are "driver nerfed" for the non-Quadro Titan.
Yes. It's compatibility enforced through software: the workload reverts back to the CPU or to software mode.

There are a few sites that try software tricks and PCB switching. You can get the software to make the card show up as a Quadro, and enable one or two features in certain commercial products, but it's not the same as running an actual Quadro versus a GeForce. Nvidia made sure of that.

Short list
* A lot of memory + ECC support.
* 64x antialiasing with 4×4 supersampling, 128x with Quadro SLI. The Geforce is limited to 32x, but supersampling is used only in certain 8x and 16x modes.
* Display synchronization across GPUs and across computers with the optional Quadro Sync card
* Support for SDI video interface, for broadcasting applications
* GPU affinity so that multiple GPUs can be accessed individually in OpenGL. This feature is available on AMD Radeon but not on Geforce.
* No artificial limits on rendering performance with very large meshes or computation with double precision
* Support for quad-buffered stereo in OpenGL
* Accelerated read-back with OpenGL. There are also dual copy engines so that 2 memory copy operations can run at the same time as rendering/computation. However, this is tricky to use.
* Accelerated memory copies between GPUs
* Very robust Mosaic mode where all the monitors connected to the computer are abstracted as a single large desktop.
Posted on Reply
#118
HumanSmoke
xorbeIsn't the only difference ECC support? Though I'm guessing some OpenGL operations are "driver nerfed" for the non-Quadro Titan.
Yup. GeForce boards are firmware (and driver) crippled for OpenGL/OpenCL performance, so if the workload is primarily OpenGL-based then no software mods will transform a GeForce's performance. CUDA performance isn't affected in the same way (it's hardware based).
The other primary differences with Quadro are runtime validation (fewer errors in calculations) and better Viewport performance. The same applies to the difference between FirePro and Radeon. The video below shows a FirePro W5000 (basically a cut-down HD 7850: 25% fewer cores and texture address units, 50% slower memory) running rings around a Radeon.

Even if/when you can flash a gaming card into a pro card, you still can't get around the better runtime binning of the Tesla/Quadro/FirePro, so whilst you may save cash, it could come at the expense of visual artifacts in 3D renders or similar issues in other workloads.
Posted on Reply
#119
Fluffmeister
15th WarlockBingo!

What boggles my mind is why call it a GeForce and not a Tesla... Seems like Nvidia is trying to appeal to a much wider audience by marketing this card to compute users, as a cheap alternative to Tesla, and to gamers alike.
Because the Titan brand falls under GeForce, and has since the original launched; there is really nothing mind-boggling about it.

But you may well have nailed it with your assumption, since $3K for higher-end Tesla/Quadro cards is really not a lot of money.

You see, when you create a high-level brand and expect companies in turn to pay top dollar for it, it actually makes sense to protect that brand.

A concept peeps here clearly struggle with.
Posted on Reply
#120
sweet
Meanwhile, AMD released their Hawaii-based FirePro and TPU just ignored it. LOL, no media bias.
Posted on Reply
#121
HumanSmoke
sweetMeanwhile, AMD released their Hawaii-based FirePro and TPU just ignored it. LOL, no media bias.
Technically, AMD just announced the W9100; it hasn't been released. Launch is slated for April.
Posted on Reply
#122
lanceknightnight
CasecutterAfter listening to all this, I have an atypical interpretation (or as "Serpent of Darkness" covers in point #5). Nvidia enjoys (nay, almost demands) such PR to keep selling GK110s as gaming offerings. Hear me out.

Nvidia knows the tipping point at which they can recoup engineering, tooling, and manufacturing costs to deliver such a card, and they have a good idea of the number they can expect to sell. Even if those units go mainly to enterprise purchasers, the PR frenzy it whips up among "gamers" just adds to the business case for releasing it. It's a perfectly good plan, and it returns revenue, actually better profit than selling "X" chips individually (at lower margins) as offerings that counteract AMD's Hawaii products. Nvidia gets to elevate the brand even higher, use up chips on extreme product offerings, and that actually adds "cred" to them as not directly vying with AMD.

We know 20nm Maxwell is some time off; Nvidia needs to maintain GK110 production but can't hold margins selling a bunch in some price war, especially the good full-compute parts. I think Nvidia realized the dual-chip board lends itself more to low-end enterprise, as the package provides substantial punch in non-traditional chassis arrangements. Two of them put 4x the compute on a more traditional (cost-effective) motherboard without the need for risers, though honestly I don't know how/what is entailed in that today. So they release this as it devours two full chips, pays the overhead, and returns profit, all while shoring up the legitimacy of the pricing on the GTX 780/780 Ti. Best of all, it keeps the gamers in the sidebar discussing it. It's a very shrewd move, to the extent they use 2x the product (GK110) in an exceptionally high-margin offering.

It will find buyers mostly from enterprise/compute users who hadn't justified Tesla/Quadro pricing. I see it as a product that 85% of even "bleeding edge gamers" won't step up to purchase... that's fine. It's a card that folks can use for experimentation, pushing huge resolutions and multiple-monitor configurations, and if some deep-pocketed gamer finds it worthwhile... all the merrier.
This is what I was saying too.
Posted on Reply