Wednesday, June 2nd 2010
Galaxy Readies Dual-Fermi Graphics Card
Galaxy is finally breaking ground on graphics cards with two GF100 "Fermi" GPUs from NVIDIA, with the company displaying one such design sample at the ongoing Computex event. The dual-Fermi board uses essentially the same design NVIDIA has been using for generations of its dual-GPU cards: an internal SLI between the two GPUs, which connect to the system bus via an nForce 200 bridge chip. The card is Quad SLI capable.
The power conditioning and distribution on this design consists of two sets of 4+1 phase VRMs; the card draws power from two 8-pin PCI-Express power connectors. The GPUs carry the marking "GF100-030-A3", which indicates the configuration of the GeForce GTX 465. Since we count 8 memory chips per GPU, with no traces on the reverse side of the PCB indicating another two chips per GPU on their own memory channels, it is likely that each GPU has a 256-bit wide memory interface. Galaxy, however, calls the card GTX 470 Dual. Output connectivity includes three DVI-D connectors and a small air-vent. It's likely that the cooler Galaxy designs will dissipate hot air around the graphics card, rather than out through the rear panel.
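The bus-width inference above follows from GDDR5 memory chips of this era each having a 32-bit interface, so total width per GPU is simply chip count times 32. A minimal sketch (the chip counts are taken from the article's observation, not from a spec sheet):

```python
# Each GDDR5 chip contributes a 32-bit channel to the memory interface.
BITS_PER_GDDR5_CHIP = 32

def bus_width(chip_count: int) -> int:
    """Memory interface width in bits for `chip_count` 32-bit GDDR5 chips."""
    return chip_count * BITS_PER_GDDR5_CHIP

print(bus_width(8))   # 8 chips per GPU -> 256-bit, as inferred in the article
print(bus_width(10))  # 10 chips (the GTX 470's configuration) -> 320-bit
```

This is why the missing two chips per GPU matter: with them the card would have the GTX 470's 320-bit interface, matching Galaxy's "GTX 470 Dual" name; without them it is a 256-bit, GTX 465-class configuration.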
Source:
HotHardware
105 Comments on Galaxy Readies Dual-Fermi Graphics Card
**Generic EDIT to add that it will probably have a massive cooler, and thus be able to cool properly regardless of how many slots it takes, in order to be retailed**
In the worst case, maybe €600, but that would be pushing it (even knowing it is nVidia).
All your graphs have shown is that 2x GTX 480 in SLI is faster than a single 5970, which is a no-brainer, since the GTX 480 is 11% faster than a 5870 and a 5970 is actually two downclocked 5870s in CrossFire.
It's no secret that GPUs don't scale well beyond two. It doesn't matter whether it's CrossFire or SLI. To imply that CrossFire is inferior to SLI by comparing benchmarks of how two ATI GPUs scale to four against how one NVIDIA GPU scales to two is nonsense.
If you think two 465s (each ~37% slower than a 480 and ~32% slower than a 5870 at 2560x1600) are going to match a 5970, you're going to be surprised.
For your consideration: www.guru3d.com/article/geforce-gtx-465-sli-review/12
Although, I hope they have a nice comeback, because we don't need another Intel/MS here... prices should go DOWN! :)
If you can't bring in the big guns on a dual-GPU card, it = FAIL in my book.
They should have used crippled 480s :shadedshu
The more cycles a processor performs, the hotter it becomes, because of the extra power required to do those extra cycles. This is why liquid nitrogen (LN2) cooling is used for OC records. The processor does not consume more power because it is hot - it is hot because it consumes more power.
It is a bloody rule of electrical power - that which requires more power becomes hotter. Heat does not generate power.
"However, the fact is, temperature and temperature alone is what is causing Fermi to consume so much power"
This is not correct. What is really happening is that so much heat is being lost by an inefficient design that more power has to be pumped in to perform the given task.* If the system is more efficient, as W1zz's review sample must be, it loses less heat and therefore does not require as much power. So, to be more accurate:
"Temperature (heat) loss is what causes Fermi to consume so much power"
Well, that and the fact it requires a lot more power in the first place.
*i.e. if I need 100 units (of whatever) to perform an operation and I have a 100% efficient system, I only need to draw 100 units. However, if my system is only 75% efficient, then a quarter of whatever I draw is lost as heat (or light or sound), so 100 drawn units only deliver 75 useful ones. To get my 100 useful units I actually have to draw 100/0.75 ≈ 133 units. My total draw for a 100-unit task is now about 133 units because of my ~33-unit heat loss. Well done if you followed that!
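The footnote's arithmetic can be sketched in a few lines, assuming the simple model input = useful output / efficiency, with the remainder lost as heat:

```python
def required_draw(useful_units: float, efficiency: float) -> float:
    """Total units that must be drawn to deliver `useful_units`
    at a given efficiency (0 < efficiency <= 1)."""
    if not 0 < efficiency <= 1:
        raise ValueError("efficiency must be in (0, 1]")
    return useful_units / efficiency

def heat_loss(useful_units: float, efficiency: float) -> float:
    """Units dissipated as heat while delivering `useful_units`."""
    return required_draw(useful_units, efficiency) - useful_units

draw = required_draw(100, 0.75)  # ~133.3 units drawn in total
loss = heat_loss(100, 0.75)      # ~33.3 units lost as heat
print(f"draw: {draw:.1f}, heat loss: {loss:.1f}")
```

Note that "lose 25 units, so draw 25 more" undershoots: the extra 25 drawn units are themselves only 75% efficient, so the correction has to be applied to the draw, not the output, which is what dividing by the efficiency does.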
It's a Dual 470 and it's as simple as that. If Galaxy says it is, then it is.
As far as Galaxy releasing this Dual GTX 465 card; we will just have to wait and see if they do or don't. And then we can judge it accordingly...
*in fact, even 6-pin can do it since most 8-pin connectors are 6+2.
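The connector remark rests on the PCI Express specification's power limits: up to 75 W from the slot, 75 W per 6-pin connector, and 150 W per 8-pin connector. A quick sketch of the resulting board-power ceilings:

```python
# PCI Express spec limits: slot, 6-pin aux, and 8-pin aux, in watts.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_budget(six_pins: int = 0, eight_pins: int = 0) -> int:
    """Maximum in-spec board power in watts for a given connector mix."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_budget(eight_pins=2))  # two 8-pin (this card): 375 W ceiling
print(board_budget(six_pins=2))    # two 6-pin: 225 W ceiling
```

So the card's two 8-pin connectors allow up to 375 W in spec, while an all-6-pin design would cap it at 225 W.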
Perhaps you can enlighten me as to what you know they have done to the stock cards' BIOSes to lower temps and power without added cooling or faster fans, which is what I was trying to say. Applying the graph from the Zotac review to a water-cooled card at 48°C under load, it should use a lot less power than it does in the review.
Anyway, back to this post. It will be interesting to see what a bigger company comes up with. With the 4 GB 5970s all seeming to have 3-slot coolers, there is a lot of space for a big, efficient cooler. We need something to knock the 5970 off its perch. :D