Wednesday, January 20th 2010
Galaxy Designs Dual-Core GeForce GTS 250 Graphics Accelerator
While NVIDIA inches towards releasing its DirectX 11 compliant GF100 GPU, from which it has high expectations, some of its partners haven't given up on the two-generation-old G92. Galaxy has almost finished designing a graphics card that uses two GeForce GTS 250 GPUs on one PCB, in an SLI-on-a-card setup. Galaxy refers to the card as the "DualCore GTS250"; it consists of a long blue PCB that holds two G92-426 GPUs (the G92-426 is used on low-power variants of the GTS 250).
Each GPU is wired to 1 GB of GDDR3 memory across its 256-bit wide memory interface (total 2 GB on board). The two GPUs are connected to the system over an nForce 200 BR-03 bridge chip. The multi-GPU system is contained within the board, with no SLI fingers for expansion, even though G92 chips are Quad SLI capable. The card draws power from two 6-pin PCI-E power connectors, regulated by a 9-phase digital PWM circuit. From a technical standpoint, this isn't the first implementation of its kind: the two-year-old NVIDIA GeForce 9800 GX2 uses two 65 nm G92 chips in a similar implementation, and is Quad SLI capable. There is no word on when Galaxy will release this card.
Source:
Expreview
60 Comments on Galaxy Designs Dual-Core GeForce GTS 250 Graphics Accelerator
As far as the 8800GTX goes, it's only slightly faster than the 8800GTS 512, which could overclock to compensate; that's why the 9800GTX was on par with the older G80. It may have a few limitations, but otherwise they trade blows.
And more on topic, I'd still like to see what two of these GTS 250 X2s could do in SLI (Quad SLI). I'd also like to see how they do with F@H.
I think there is no Quad SLI support :)
This thing owns hardcore! (In Geforce 9 times :nutkick:).
Oh wait, the 4850 X2 that Sapphire doesn't bother selling anymore. :p
Strap this on Big Bang with a 4850 X2, I wonder what it would do. :cool:
The 8800 Ultra (and the 9800GTX at stock) can now be outperformed by a 9800GT OC'd to around 760/1800/2000, which is not hard to get to.
The 8800GTS 512 offered within 5% of the 8800GTX's performance at the time, but what I was trying to say is that the 8800GTS was renamed, rebadged, and slightly altered until it became the GTS 250. The point I was making was that even with the rebadge, the 8800GTX in many cases was faster than the 9800GTX, though not so much the GTS 250 (due to clock speeds). I was also saying that this is what the 9800GX2 should have been, with 2 GB meaning 1 GB of usable memory, compared to the 512 MB usable on the 9800GX2.
And yep, I don't know who I'd be agreeing with, but the G80 was a better chip than the G92. Imagine how much better the G80 would've been if it came clocked at 738 MHz stock like the GTS 250. And just a question/assumption: the only reason I can think of that G92 was made is that the G80 for some reason didn't go as planned if/when NVIDIA attempted to shrink it, so they tweaked it around to make G92, which did work and was cheaper. Yes? Or am I wrong? lol, because G80 would've been a beast at GTS 250 speeds plus GTS 250 overclocking.
It's also kind of useless, since the 9800GX2 is basically 2 x 9800GTX, while the DualCore GTS250 will only be like 2 x 9800GTX+. The only real advantages of this card would be that it uses less energy and is on a single PCB.
Also, I'm very interested to see what the cooling solution will look like... if they use a cut-down version of the GTX 295 cooler it would also save money on R&D.
Also, at the time of the 8800GT and even 8800GTS release, many retailers were still selling the G80 8800GTX and (still) charging double the price over the 8800GTS.
The 8800GTS 512 MB was worse than the 8800GTX, but was very close to the 8800GTS 640. However, it was cheaper to the consumer and to manufacture, ran cooler, and consumed less power than the 8800GTS 640.
The 9800GTX was not a rename; it was a very different beast from the 8800GTS 512, and was planned from the beginning of G92's life. Besides the obvious Tri-SLI option on the 9800GTX, it also performed about 10% better than the 8800GTS at stock speeds. It did match an 8800GTX (about a 1% difference between the two), but again was cheaper to manufacture and cheaper to buy, while running cooler and consuming less power. Not to mention most 8800GTS 512s couldn't even match 9800GTX clocks due to the poor power setup on the 8800GTS, and forget about matching a 9800GTX in overclocking. You can overclock the 8800GTS to be close to the 9800GTX without volt mods, though. Of course, the die shrinks and the PCB and GPU revisions did eventually give us the GTS 250. In the very few games where the framebuffer actually makes a difference (I can only think of GTA IV), the 8800GTX was better than the 9800GTX, but that was not enough to negate the overall performance benefit of the 9800GTX in other areas.
www.techpowerup.com/reviews/Zotac/GeForce_8800_GTS_512_MB/20.html
The 8800GTX steamrolled across the board. Wonder why TPU didn't include an Ultra :confused:
The 8800GTX was faster than the 8800GTS 512, and the 9800GTX was virtually identical overall:
And the Ultra was just an overclocked 8800GTX, down to the PCB being identical; it just had a beefed-up cooler to handle the extra heat. I don't think W1z ever actually had an 8800 Ultra to review. Though the 8800GTX was easily overclocked to match Ultra specs, it probably wasn't worth the hassle to include in the review.
And actually, the 9800GTX+ wasn't really that much different; NVIDIA designed it this way. G92b was pin-compatible with G92, so no changes to the PCB were made between the 9800GTX and 9800GTX+, not even a GPU ID change, which is why it is impossible for software like GPU-Z to tell the difference between the 65 nm and 55 nm versions of the G92. This is why we had so many threads back when the 9800GTX+ was popular asking why GPU-Z said it was 65 nm. The real changes didn't come until the GTS 250, where the G92b was essentially put on the 8800GTS 512 PCB (slightly modified for Tri-SLI).
pic again for reference...