Thursday, October 22nd 2009
EVGA and NVIDIA Design Unique Multi-GPU Graphics Accelerator
EVGA and NVIDIA are readying a unique multi-GPU graphics accelerator this Halloween, slated for October 30. To celebrate its launch, the two have organized a launch party for 300 lucky participants who will go to the NVIDIA Plaza in Santa Clara, CA and witness the launch of the new GeForce product. The accelerator packs two GPUs: a G200b, and a G92b. That's right, a GeForce GTX 200 series GPU, with a GeForce GTS 250 GPU. This is perhaps the first graphics accelerator to pack two entirely different GPUs. How it works, however, is interesting: the G200b GPU handles graphics, while the G92b is dedicated to PhysX processing. The accelerator could have 896 MB of graphics memory, with 512 MB of dedicated memory for the PhysX GPU. You can sign up for the event here.
Source:
Bright Side of News
80 Comments on EVGA and NVIDIA Design Unique Multi-GPU Graphics Accelerator
1. Only one game was tested. Poor sampling, so it's hard to prove it'll be the same in all games.
2. Approximately a 10 FPS gain was seen using the second card for PhysX. I wouldn't pay AU$150 and another 50-100 W of power draw for 10 FPS.
3. December 2008 - I'm sure performance on single-card (and SLI) setups has improved since then.
This is almost not funny; without competition, ATI is free to dilly-dally on new GPUs and cards like the 2 GB Eyefinity card I need.
But you can pretty much guarantee this will be nothing but a high-priced waste of money.
At least they're being innovative, though; it's a good idea to me, just too late.
Though maybe this is just a stepping stone to a version with a GT300 and a GT200 core?
I don't know, there are always these oddball cards coming out at the end of a generation; sometimes they lead to innovation, sometimes not. It's always a good thing to see something new, though.
Happy birthdays to all :).. NV thinking of doing that? You're kidding, right? They're so full of themselves with CUDA.
I'd expect it of ATI, though, and I'm surprised it hasn't been done already; but then again, they've already got idle power usage down really low.
Not enough games support this yet for it to be as good as it could be. People don't want this; they want the 300 series already, lol.
Signed up for the event ^^. I live 9,000 km away, but hey, maybe they'll choose me, lol ^^.
cheers
DS
Or need I remind anyone of the "HD 2600XT Dual."
"$400 dollar card, $150 performance."
I'm obviously not a graphics card engineer, but I would think designing this card would be easier than designing something like a GTX 295 with two of the same cores.
Think about it: with two of the same cores, you have to design the card so the two cores communicate to share the work. You have to design SLI communication into the PCB, along with communication with the PCI-E bus for each core.
With this card, all you need to do is slap the two cores on the same PCB, connect them both to an NF200 bridge chip, and that's it. No need to design communication paths between the two cores. To the system, they're just two cards sharing the same PCI-E bus.
And on a different note, this card will probably benefit people with motherboards that only have one PCI-E x16 slot, so they can have a dedicated PhysX card. The only problem is that the price will likely be so outrageous, they might as well buy a new motherboard...
Isn't it possible to get the onboard IGP of an NVIDIA motherboard to do PhysX while your real GPU does the graphics anyway?
I don't see the point in this. I'm guessing they need to make some more money on a gimmick they hope people will buy, to offset all the money they're losing.
NVIDIA ramped up the requirements for PhysX recently, so most onboard GPUs can't handle the more modern PhysX titles.
And at the launch event they can test that card vs. an HD 5870 + GTS 250 with the hacked drivers that allow PhysX... I would really wanna see that. :D
Fermi is close to completion, and there is no delay from NVIDIA.
www.techpowerup.com/tags.php?tag=GT300
First it was Q4 2008, then Q1 2009, then Q4 2009 (that's right now) with demos in September -- this is what you call "no delay?"
Q4 2009 has been the official release schedule since NVIDIA announced it back in December 2008.
So, no, it was never Q4 2008, then Q1 2009. It has always been Q4 2009.
Their 40 nm parts were roughly three months delayed.
NVIDIA and ATI are the best thing that ever happened to this world; the bad part is that NVIDIA is turning out to be arrogant now. But they've had very good performance growth with every generation change, and pushed themselves hard.
Surely GPGPU is good, but without a common API across Intel, ATI, and NVIDIA, it isn't a compelling feature.
I wouldn't want to make an app just for NVIDIA customers, and I definitely don't want to make it for four different APIs: one for Intel, one for ATI, one for NVIDIA, and one for x86!
And I wouldn't make a GPGPU card for consumers yet; I'd increase GPGPU performance steadily, not dedicate something to it, going from a gaming card one generation to a GPGPU card the next.
Stuff doesn't happen overnight; just look at 64-bit - 32-bit worked and people didn't switch.
On topic: this Frankenstein card has scared me...
I will be purchasing ATI next time if this card is released, as my religion does not approve of the Doctor's work.