
NVIDIA Readying Dual-GF110 Graphics Accelerator, Eying Total Performance Leadership

Wonder if NVIDIA asked ASUS for advice after seeing their dual-480 board. It looks very similar, even down to the same off-center mounting holes. Actually, are we sure this isn't a new MARS card? I mean, has NVIDIA said anything about it?

147a.jpg

mars_ii_1.jpg
 
Just in time for Crysis 2 :D

Board layout looks pretty much like the single-PCB GTX 295. It would probably run fine on a 750 W+ PSU with at least 48 A on the +12 V rail. I'm seeing a $600 USD tag there..
 
That's one hell of a mess on that board! ASUS did tidy up the components better. Seems like it will be one hell of a performer. Can't wait! :D
And that SLI finger, THAT SLI finger...
I'm seeing a $600 USD tag there..
That's waaaaay too low... around $750 USD would be more adequate...
 
A 580 is currently power limited because it spikes above the total available power of 75 W (slot) + 150 W (8-pin) + 75 W (6-pin) = 300 W max. In TPU's article, the 580 power-limits itself after spiking up, then settles to a maximum of about 200 W (question: why 200 W and not something closer to 250 or 290? Probably due to the limits of the electronics used on the board...). One question I didn't see answered: did performance increase proportionately with the increase in power used once the power limit was removed?

In any case, the point is that a 580 really needs 8-pin + 8-pin power to keep from being power limited by the cables (let alone the electronics on the board, or possible load-balancing problems). So how is a dual-GPU card going to fare better if a single GPU is already power limited? I guess I'll just have to wait until someone who actually knows what they're talking about takes a go at it, or until the card comes out and reviews are posted.
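The connector arithmetic above can be sketched as a quick calculator. This is a rough sketch (not from the post), using the PCIe spec figures of 75 W from the slot, 75 W per 6-pin, and 150 W per 8-pin; as the TPU article shows, board electronics may cap power lower than the spec allows.

```python
# PCIe power-budget arithmetic from the spec figures.
SLOT_W = 75    # x16 slot limit per the PCIe CEM spec
PIN6_W = 75    # 6-pin PEG auxiliary connector
PIN8_W = 150   # 8-pin PEG auxiliary connector

def board_budget(six_pins=0, eight_pins=0):
    """Total spec power available to a card with the given connectors."""
    return SLOT_W + six_pins * PIN6_W + eight_pins * PIN8_W

print(board_budget(six_pins=1, eight_pins=1))  # GTX 580 layout: 300 W
print(board_budget(eight_pins=2))              # 8-pin + 8-pin:  375 W
```

So an 8+8 layout buys the card 75 W more headroom than the 580's 6+8, which is why a dual-GPU board is expected to ship with two 8-pin connectors.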

Ah! My guess: the GTX 580 is so powerful that it has to be power limited when used to its fullest, but most applications hit the weakest link in the GPU before drawing the card's full power (example: cut the ROPs in half and suddenly the card is limited by ROP count). The dual-GPU card will still need to be power limited, but it won't matter, since most normal applications will be bottlenecked by the weakest link in the GPU before reaching the power limit.
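The "weakest link" argument above can be put as a toy bottleneck model (my own illustration, not from the post): whichever subsystem saturates first caps the frame rate, so the card rarely reaches its power ceiling.

```python
# Toy bottleneck model: each subsystem's standalone frame-rate limit
# (hypothetical numbers); the slowest one caps the whole card.
def fps_limit(shader_fps, rop_fps, bandwidth_fps):
    return min(shader_fps, rop_fps, bandwidth_fps)

print(fps_limit(120, 90, 100))  # ROPs cap it at 90 fps
```

With the ROPs capping the frame rate, the shaders sit partly idle, and power draw stays below the card's worst-case limit.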

You also have to remember that since PCI-E 2.0 there is the capability to draw 150 W from the slot; in 2.1 and 3.0 this is even "smart" and can give you what you need rather than the whole 150 W. So, at least theoretically, 450 W could be available here.
 
You also have to remember that since PCI-E 2.0 there is the capability to draw 150 W from the slot; in 2.1 and 3.0 this is even "smart" and can give you what you need rather than the whole 150 W. So, at least theoretically, 450 W could be available here.

75 W from the slot, and IIRC the power management came about in 2.0?

At any rate, the 'spec' is anything but one: as we can see, GF100/GF110 can clearly draw more than 300 W under load at stock, and AMD/NVIDIA implement their TDPs differently.

One has to question this product though, power consumption Nazi or not. It's outside the PCI-E spec, and (if an NVIDIA-sanctioned card) can almost certainly be seen as a concession that the same configuration using GF114 will not beat the 6990... which is perhaps a given.

For it even to be feasible, it needs to be faster than the same configuration using GF114 GPUs instead, even at the latter's greater clock speed. Since we know GF104 GPUs clock into the 800+ MHz range, and assuming the GTX 560 will be clocked in the 750-800 MHz range, this would need to be clocked at least around ~600 MHz. I wouldn't think there is a lot of wiggle room between that and what they could get away with under a <375 W spec, even using tricks like lower-clocked/lower-voltage (1.35 V, 3.6/4.0 Gbps) 2 Gb (denser chips, not GB) GDDR5.
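The ~600 MHz break-even figure above checks out with back-of-envelope math. This is a rough sketch (my own, not from the post), treating shader count × clock as a crude proxy for throughput, with the full-die counts of 512 shaders for GF110 and 384 for GF114; actual SKUs may be cut down, and Fermi shaders run at twice the core clock, but that factor cancels on both sides.

```python
# Relative shader throughput: shader count x core clock (crude proxy).
GF110_SHADERS = 512   # full GF110 die
GF114_SHADERS = 384   # full GF114 die

def rel_throughput(shaders, mhz):
    return shaders * mhz

dual_gf110 = rel_throughput(GF110_SHADERS, 600)  # dual GF110 @ ~600 MHz
dual_gf114 = rel_throughput(GF114_SHADERS, 800)  # dual GF114 @ ~800 MHz

print(dual_gf110, dual_gf114)  # 307200 307200 -- dead even
```

The two come out exactly equal, which is why ~600 MHz is roughly the floor: any slower, and a pair of cheaper, cooler GF114s at 800 MHz would match it.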
 
ATI 6990? LOL. ATI, time to show your white flag.
 
75 W from the slot, and IIRC the power management came about in 2.0?

At any rate, the 'spec' is anything but one: as we can see, GF100/GF110 can clearly draw more than 300 W under load at stock, and AMD/NVIDIA implement their TDPs differently.

One has to question this product though, power consumption Nazi or not. It's outside the PCI-E spec, and (if an NVIDIA-sanctioned card) can almost certainly be seen as a concession that the same configuration using GF114 will not beat the 6990... which is perhaps a given.

For it even to be feasible, it needs to be faster than the same configuration using GF114 GPUs instead, even at the latter's greater clock speed. Since we know GF104 GPUs clock into the 800+ MHz range, and assuming the GTX 560 will be clocked in the 750-800 MHz range, this would need to be clocked at least around ~600 MHz. I wouldn't think there is a lot of wiggle room between that and what they could get away with under a <375 W spec, even using tricks like lower-clocked/lower-voltage (1.35 V, 3.6/4.0 Gbps) 2 Gb (denser chips, not GB) GDDR5.

Not quite sure what you're saying in the first part of your post. To make it clear: the PCI-E 2.0 and 2.1 specification (and 3.0) allows 150 W from the slot. What I'm not sure of is whether all motherboard manufacturers actually conform to the specification, i.e. whether their boards really can supply more than 75 W. Potentially, with a PCI-E 2.0-onwards board, 450 W could be drawn with two 8-pin connections; whether that's the case in reality, I really don't know. My point is simply: don't assume a dual-GPU card would be starved of powerrrrzzzz. I doubt very much that NVIDIA would bring out a dual-GPU card if they didn't think it would be competitive.
 
hells to the muthafuckin yeah! finally nvidia! hello $800 gpu!
 
It would be so awesome if NVIDIA gets a dual-chip card into the 5xx range; it was a little disappointing that there was no top-end dual-chip card in the 4xx range to go up against the 5970.

*thinks about the crazy frame rates this card in SLI could get* I wonder how well a pair of these would do in 3DMark 11.
 
It's not going to be a dual 580.
Maybe a dual 570, or a dual 560 (GTX 460 @ 750 MHz), sounds more realistic.
 
It's not going to be a dual 580.
Maybe a dual 570, or a dual 560 (GTX 460 @ 750 MHz), sounds more realistic.

I would expect them to try something like what AMD/ATI did with the 5970: maybe use the 580 core at 570 clocks, if they could get the power usage low enough that way.

http://www.seoboy.com/wp-content/uploads/2009/12/implied-facepalm.jpg


The 6990 hasn't even come out yet, so it's too premature to speculate.

I agree; even the 6970 isn't out, so the 6990's speed can't even be guessed from 6970 CrossFire numbers yet.
 
It's not going to be a dual 580.
Maybe a dual 570, or a dual 560 (GTX 460 @ 750 MHz), sounds more realistic.

That's a good catch.

It actually does have the 1038A1 marking like the sample W1zzard used in his review, but the other marks are different, and the GF110 identifier at the bottom isn't there. So this could actually not even be a 512 SP (disabled shaders) part, but still based on the GF110.
 
Good news: I've almost finished converting my computer to nuclear power :shadedshu

Har har har, you're so witty.

This will draw little more than a GTX 295, while whoppin' it twice over.

Good on them, I say; get it out fast, like the GTX 580.
 
Har har har, you're so witty.

This will draw little more than a GTX 295, while whoppin' it twice over.

Good on them, I say; get it out fast, like the GTX 580.

I wouldn't be surprised if this goes neck and neck with the 6990.
 
Wait, so I've finally got a room-heater replacement?!? I thought Fermi was a bit too cool; 350 W, LOL, I was laughing at that. I guess 600 W or 700 W would do nicely in the winters.
 