Tuesday, August 11th 2009

MSI "Big Bang" P55 Motherboard Implements Lucid Hydra
MSI already has its work cut out for when Intel's first socket LGA-1156 processors hit stores. With the entry-level P55-CD53, mid-range P55-GD65, enthusiast-range P55-GD80, and micro-ATX P55M-GD45 offerings in place, the lineup seems just about complete, except for two mysterious motherboards that aren't part of the list. The first is the G9P55-DC, which packs an NVIDIA BR-03 bridge chip that enables 3-way SLI with better interface bandwidth to the three graphics cards; the second is under the looking glass today. Codenamed "Big Bang", this prototype motherboard by MSI packs LucidLogix Hydra technology, which, on paper at least, is the next big thing as far as multi-GPU systems go.
The MSI P55 "Big Bang" looks similar to the P55-GD80, except that under the top chipset heatsink (which, by the way, is purely cosmetic on the GD80) sits a Lucid Hydra chip. The chip connects to all three (or four) PCI-Express x16 slots (the lane configuration isn't known yet), and enables Lucid's multi-GPU technology, which lets you use practically any combination of graphics cards for performance scaling. The member cards needn't be matched in performance, as the Hydra chip handles all the load-balancing by itself. Products based on Hydra are slowly but surely showing up in small numbers, including enterprise-grade rack-mount graphics rendering boxes like this one, conceived a long time ago. A lot of details are yet to emerge, especially whether more motherboard manufacturers are eyeing Hydra, when a Hydra-based product will actually make it to shelves, and, more importantly, when MSI plans to sell this board and the G9P55-DC.
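Lucid hasn't disclosed how its load-balancer actually works, but the basic idea behind pairing mismatched cards can be sketched in a few lines of Python: give each GPU a share of the frame's work proportional to its standalone throughput, so the slower card contributes instead of dragging the pair down to its own pace. The card names and frame rates below are purely illustrative, not measurements.

```python
# Toy illustration only: not Lucid's algorithm, just the proportional-share
# idea behind combining cards of unequal speed.

def work_shares(throughputs):
    """Map each card's standalone throughput (fps) to its fraction of the frame's work."""
    total = sum(throughputs.values())
    return {card: fps / total for card, fps in throughputs.items()}

# Hypothetical standalone numbers for an HD 4870 paired with an HD 4650.
standalone = {"HD 4870": 60.0, "HD 4650": 22.0}

shares = work_shares(standalone)
print(shares)                    # e.g. {'HD 4870': 0.73, 'HD 4650': 0.27}

# Upper bound if balancing were perfect: both cards finish their share at the
# same time, so combined throughput approaches the sum of the two.
print(sum(standalone.values()))  # ~82 fps, versus 60 fps for the HD 4870 alone
```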
Source: IOPanel
87 Comments on MSI "Big Bang" P55 Motherboard Implements Lucid Hydra
The Lucid Hydra chip, as mentioned before, should enable the use of two different video cards together on the same motherboard. We have been told in the past that there will be different versions of the chip: one will enable the use of video cards like a Radeon HD 4870 along with an HD 4650, for example; another version will enable the use of NVIDIA-based video cards together, like a GTX 260 with a 9600 GT; and of course one version will enable a mix of video cards, an ATI card together with an NVIDIA card on the same motherboard. The Hydra chip is responsible for the balancing and the link between the different video cards.
So if it's the version that mixes vendors, that will suck; NVIDIA just killed that (doh), unless Lucid is going to write drivers of its own...
But the MAIN thing about this is not mixing cards or anything along those lines; it's the new algorithm they use to distribute work when running SLI/CrossFire, which should be much improved (if implemented like they said), and we should see some good 85-90%+ scaling in dual-GPU configurations.
installed on an old Socket 939 premium gigabyte mobo.
Anyway, I don't see why we need to try so hard to mix video cards from ATI and NVIDIA; it's not like we've fixed all the other technical problems in the world and this is the one thing keeping us from achieving ascension.
I don't think you guys have seen the actual demo... There is no balancing of the kind SLI and CrossFire do (splitting a frame into regions or rendering alternate frames). The chip actually splits the scene into its objects: in the demo (some kind of FPS game), the walls and the gun were rendered by one of the cards, while the ceiling, floor and other objects were rendered by the other card. The Lucid chip then reassembled the scene from the individually rendered objects.
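Purely as a conceptual sketch of that object-level splitting (a guess at the idea, not Lucid's code): hand each object of the frame to whichever card would finish its queue soonest, render the partial images, and composite them afterwards. All names, speeds, and costs below are invented for illustration.

```python
# Conceptual sketch of object-level splitting; the cards, the per-object
# costs, and the greedy assignment are all made up.

from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    relative_speed: float                 # rough speed factor vs. the fastest card
    queued_cost: float = 0.0              # estimated work already assigned this frame
    objects: list = field(default_factory=list)

def assign_objects(objects, gpus):
    """Send each object to the GPU that would finish its queue soonest."""
    for name, cost in objects:            # cost = guessed render time for the object
        target = min(gpus, key=lambda g: (g.queued_cost + cost) / g.relative_speed)
        target.objects.append(name)
        target.queued_cost += cost
    return gpus

# Objects from the sort of FPS scene described above (costs are invented).
frame = [("walls", 4.0), ("gun", 1.5), ("ceiling", 2.0), ("floor", 2.0), ("props", 3.0)]
cards = [Gpu("GeForce GTX 260", 1.0), Gpu("GeForce 9600 GT", 0.5)]

for gpu in assign_objects(frame, cards):
    print(f"{gpu.name} renders {gpu.objects}")

# The real chip would then composite the partial renders, keeping the nearest
# fragment per pixel (a depth test across the two colour/depth buffers).
```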
I find this method insanely complicated, and even if it works I think there will be complications until the end of time. For example, it was not explained whether AA modes actually work. Can you think of a good way of mixing AA modes from ATI with AA modes from NVIDIA? Can you think of a good way of mixing models rendered by an ATI card with models rendered by an NVIDIA card? They use very different ways of optimizing a scene. I know ATI cards can actually skip rendering the parts of objects that would not be visible in a scene, overriding what the engine tells them to render (the reason why so many new games had rendering errors with older drivers). Can you think of a way to ensure the quality level set in the ATI driver matches the one in the NVIDIA driver?
You can't mix two drivers with a Lucid chip. This is the first time I've heard of a miracle Lucid chip version that would do that, and I've been listening very closely and following this from day one. You can mix them now, on non-Lucid boards, but I don't think you actually get any benefit while gaming.
www.anandtech.com/showdoc.aspx?i=3385
I bet you're right... NV will buy these guys out; it makes a hell of a lot more sense than PhysX, which is a floppity flop.
All the games ported from the PS3 will use PhysX, and some titles ported from the PS3 can already be seen working flawlessly on NVIDIA cards and very badly on ATI cards.
I am hoping for OpenCL to take off and become the standard physics acceleration technique, and for everyone to forget about PhysX and Havok. But for now, NVIDIA has the upper hand, which is why I use NVIDIA cards in my work and gaming systems.
Anyway, I don't see Lucid being bought out by ATI or NVIDIA, but if the stuff works, Intel already has its claws into it, since they are an important investor, as I stated a few posts back.
Windows 7 supports heterogeneous graphics adapters using WDDM 1.1 model drivers. Vista does NOT support heterogeneous adapters, but Windows 7 can run, for example, ATI + NVIDIA WDDM 1.1 drivers simultaneously (just search the web and you will find such setups running). So on Windows 7, Lucid's Hydra 200 can run ATI + NVIDIA cards.
Check this link: www.anandtech.com/video/showdoc.aspx?i=3646&p=1
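For what it's worth, it's easy to check from Python which display adapters Windows currently has drivers loaded for, e.g. to confirm an ATI and an NVIDIA card are both active under Windows 7. This assumes the third-party wmi package (pip install wmi) and, of course, a Windows machine.

```python
# Lists every display adapter Windows currently has a driver instance for.
# Requires the third-party "wmi" package; Windows only.

import wmi

for adapter in wmi.WMI().Win32_VideoController():
    print(adapter.Name, "- driver", adapter.DriverVersion)
```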
It's specced to deal with more than 4 GPUs' worth of work, and the interesting part is that, unlike traditional CrossFire or SLI scaling, it can actually produce greater than 100% per-card gains. Hard to believe? Well, instead of split-screen rendering, each GPU is given tasks that would normally be hindered by having to render other parts or effects of the scene. Imagine having a single GPU dedicated to particles and the impact that would have in your favorite FPS.
Article 1 (Hydra Explained): techgage.com/article/lucid_hydra_engine_multi-gpu_technology/1
Article 2 (Hydra Explained with demo): www.pcper.com/article.php?aid=607
Article 3 (Absolutely sick): www.legitreviews.com/article/1093/1/
::EDIT:: Seems the Hydra chip only draws 7 W, whether at load or idle; interesting to know for heat.
But on a serious note, Hydra seems to be damn awesome. My interest is piqued.
EDIT:
www.anandtech.com/video/showdoc.aspx?i=3646