Tuesday, September 20th 2011

Gigabyte Responds to MSI's Bluff Call

A little earlier this month, MSI's PR team dished out a presentation claiming that Gigabyte was misleading buyers into thinking that as many as 40 of its recently-launched motherboards are "Ready for Native PCIe Gen.3". MSI tried to make its argument plausible by explaining what exactly goes into making a Gen 3-ready motherboard, and the presentation caused quite a bit of drama in the comments. Gigabyte responded with a presentation of its own, in which it counter-claimed that those making the accusations ignored some key details, such as: what if the Ivy Bridge CPU is wired directly to the first PCIe slot, so that lane switches don't matter?
In its short presentation of no more than 5 slides, Gigabyte tried to explain its claim that most of its new motherboards are Gen 3-ready. The presentation begins with a diplomatic-sounding message about its agenda, followed by a disclaimer that three of its recently-launched boards, the Z68X-UD7-B3, P67A-UD7, and P67A-UD7-B3, lack Gen 3 readiness. This is likely because those boards make use of a Gen 2 NVIDIA nForce 200 bridge chip, and even the first PCI-E x16 slot is wired to that chip.

The next slide forms the key component of Gigabyte's rebuttal: in motherboards with just one PCI-E x16 slot, there is no switching circuitry between the CPU's PCI-E root complex and the slot, and so PCI-E Gen 3 will work.
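To put that in plainer terms, here is a toy sketch (our illustration, not anything from either vendor's materials) of how a PCIe link trains: it settles at the highest generation every component in the path supports, so with nothing between a Gen 3 root complex and a Gen 3 card, the link runs at Gen 3.

```python
# Toy model of PCIe link training (illustrative only): a link settles at
# the highest generation that every component in the path supports.
def negotiated_gen(*path: int) -> int:
    """Highest common PCIe generation along a path of components."""
    return min(path)

IVY_BRIDGE_CPU = 3  # Gen 3-capable root complex
GEN3_CARD = 3

# Single x16 slot wired straight to the CPU: nothing in between to cap it.
print(negotiated_gen(IVY_BRIDGE_CPU, GEN3_CARD))  # -> 3, Gen 3 works

# Put a Gen 2 device in the path (e.g. an nForce 200 bridge) and it drops.
print(negotiated_gen(IVY_BRIDGE_CPU, 2, GEN3_CARD))  # -> 2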

The following slides explain that in motherboards with more than one PCI-Express x16 slot wired to the CPU, a Gen 3 switch redirects the unused x8 PCIe lanes from the second slot to the first slot for full x16 PCIe graphics bandwidth. But then, MSI already established that barring the G1.Sniper 2, none of Gigabyte's boards with more than one PCI-E x16 slot has Gen 3 switches.

Likewise, the following slides explain how Gen 3 switches handle cases in which more than one graphics card is wired to the CPU. Again, we'd like to mention that barring the G1.Sniper 2, none of Gigabyte's 40 "Ready for Native PCIe Gen.3" boards have Gen 3 switches.

The last slide, however, successfully rebuts MSI's argument: even in motherboards with Gen 2 switches, a Gen 3 graphics card can run in Gen 3 mode on the first slot, albeit at an electrical x8 data rate. Sure, it's not going to give you PCI-Express 3.0 x16, and sure, it's only going to work for one graphics card, but it adequately validates Gigabyte's "Ready for Native PCIe Gen.3" claim from a purely logical point of view.
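For the curious, here is a rough sketch of the lane topology as we read the slides (illustrative pseudologic, not Gigabyte's firmware): eight of the CPU's lanes are hardwired to the top slot, the other eight reach the first or second slot through the lane switch, and the switch's own generation caps whatever routes through it.

```python
from typing import Optional, Tuple

def slot1_link(switch_gen: Optional[int], two_cards: bool) -> Tuple[int, int]:
    """Return (PCIe generation, electrical lane width) for the first x16 slot.

    Assumed topology (our reading of the slides): 8 CPU lanes are hardwired
    to slot 1, the other 8 reach slot 1 or slot 2 through a lane switch.
    """
    if switch_gen is None:      # single-slot board: all 16 lanes are direct
        return 3, 16
    if two_cards:               # switched lanes feed the second slot instead
        return 3, 8
    if switch_gen >= 3:         # Gen 3 switch folds lanes back for x16 Gen 3
        return 3, 16
    # Gen 2 switch: the 8 hardwired lanes still train at Gen 3, but the 8
    # switched lanes can't join a Gen 3 link, so the card runs x8.
    return 3, 8

print(slot1_link(None, False))  # (3, 16) single-slot board
print(slot1_link(3, False))     # (3, 16) Gen 3 switch, one card
print(slot1_link(3, True))      # (3, 8)  Gen 3 switch, x8/x8 dual cards
print(slot1_link(2, False))     # (3, 8)  Gen 2 switch: Gigabyte's last slide
```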

Now that Gigabyte has entered the debate, the onus will be on it to also clarify that, the switching argument aside, all its "Ready for Native PCIe Gen.3" boards use Gen 3-compliant electrical components, which MSI claimed Gigabyte's boards lack in this slide.

64 Comments on Gigabyte Responds to MSI's Bluff Call

#1
Mussels
Freshwater Moderator
inb4 flame war between the company reps again >.>
Posted on Reply
#3
Derek12
The only way to check any claims 100% is to try a PCIe 3 video card on PCIe 3-ready MSI & Gigabyte boards and see the differences.
Posted on Reply
#4
Frizz
So.... people who own Gigabyte mid-high end motherboards with more than 1 PCIE slot will get 8x/8x 3.0 in both single and dual GPU mode? :confused:
Posted on Reply
#6
kereta
i knew gigabyte has it covered
Posted on Reply
#7
Mussels
Freshwater Moderator
random: So.... people who own Gigabyte mid-high end motherboards with more than 1 PCIE slot will get 8x/8x 3.0 in both single and dual GPU mode? :confused:
8x 3.0 on the first slot, 8x 2.0 on the second. (or disabled, according to other comments... which would suck)
Posted on Reply
#8
newtekie1
Semi-Retired Folder
I love how it is all marketing and we won't see a graphics card that even begins to make use of the extra bandwidth in PCI-E 3.0 until long after these boards are obsolete.

I'm more interested in PCI-E 3.0 x1 than anything else, so we can have SATA 6.0Gb/s cards that actually get the bandwidth they need from an x1 slot and don't need an x4 slot.
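The arithmetic behind that (standard interface figures, not numbers from this thread): SATA 6.0Gb/s needs roughly 600 MB/s of payload bandwidth, a PCIe 2.0 x1 link tops out around 500 MB/s, and a PCIe 3.0 x1 link delivers about 985 MB/s.

```python
# Standard spec figures (not from the article): payload bandwidth after
# line-code overhead, in MB/s.
sata3_need = 6.0e9 * (8 / 10) / 8 / 1e6     # SATA 6.0Gb/s, 8b/10b   -> ~600
pcie2_x1   = 5.0e9 * (8 / 10) / 8 / 1e6     # PCIe 2.0 x1, 8b/10b    -> ~500
pcie3_x1   = 8.0e9 * (128 / 130) / 8 / 1e6  # PCIe 3.0 x1, 128b/130b -> ~985

print(f"SATA 6Gb/s needs ~{sata3_need:.0f} MB/s")
print(f"PCIe 2.0 x1 gives ~{pcie2_x1:.0f} MB/s  (bottleneck)")
print(f"PCIe 3.0 x1 gives ~{pcie3_x1:.0f} MB/s  (headroom)")
```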
Posted on Reply
#9
TheLostSwede
News Editor
random: So.... people who own Gigabyte mid-high end motherboards with more than 1 PCIE slot will get 8x/8x 3.0 in both single and dual GPU mode? :confused:
No, you'll only have the top slot working in x8 mode, at least according to these slides, and the second slot will be disabled.

The original idea, as I understood it, was for the first slot to run in Gen 3 x8 mode and the second slot in Gen 2 x8 mode, but maybe that didn't work so well...
Posted on Reply
#10
TheLostSwede
News Editor
newtekie1: I'm more interested in PCI-E 3.0 x1 than anything else, so we can have SATA 6.0Gb/s cards that actually get the bandwidth they need from an x1 slot and don't need an x4 slot.
Not going to happen for quite some time, as Sandy Bridge-E doesn't support it, nor does Ivy Bridge, so yeah...
Posted on Reply
#11
repman244
Well, this is a never-ending flame war which doesn't benefit anyone, I think.

It goes like this:
Posted on Reply
#12
Fourstaff
newtekie1: I love how it is all marketing and we won't see a graphics card that even begins to make use of the extra bandwidth in PCI-E 3.0 until long after these boards are obsolete.
You probably want the extra features from PCI-E 3.0 rather than the bandwidth.
Posted on Reply
#13
newtekie1
Semi-Retired Folder
TheLostSwede: Not going to happen for quite some time, as Sandy Bridge-E doesn't support it, nor does Ivy Bridge, so yeah...
Sure they do. Sandy Bridge-E has 40 PCI-E 3.0 lanes to work with and Ivy Bridge is going to have 16; they can be used for anything, they don't have to be used for a graphics card.
Fourstaff: You probably want the extra features from PCI-E 3.0 rather than the bandwidth.
No, not really, the extra features are pretty useless to everyone really...
Posted on Reply
#14
TheMailMan78
Big Member
newtekie1: Sure they do. Sandy Bridge-E has 40 PCI-E 3.0 lanes to work with and Ivy Bridge is going to have 16; they can be used for anything, they don't have to be used for a graphics card.

newtekie1: No, not really, the extra features are pretty useless to everyone really...
Until games stop being ports most GPU upgrades are useless overall. Unless you fold or do 3D rendering, it's all fluff.
Posted on Reply
#15
DigitalUK
thanks mailman, was laughing my head off when i saw that
Posted on Reply
#16
TheLostSwede
News Editor
newtekie1: Sure they do. Sandy Bridge-E has 40 PCI-E 3.0 lanes to work with and Ivy Bridge is going to have 16; they can be used for anything, they don't have to be used for a graphics card.
Sure, but I think the OP meant x1 slots from the chipset, which isn't going to happen until maybe Ivy Bridge-E or even later.
Posted on Reply
#17
bbmarley
wasn't it last year ASUS called out Gigabyte?
i wonder why all the Gigabyte hate from brands
Posted on Reply
#18
DannibusX
bbmarley: wasn't it last year ASUS called out Gigabyte?
i wonder why all the Gigabyte hate from brands
Because they are competitors?
Posted on Reply
#19
TheMailMan78
Big Member
Poll: Do such public face-offs help the consumer?
No. I don't want to own either of their products because of this childish BS. You can call a competitor out without pointing fingers. Have a little class, people.
Posted on Reply
#20
cadaveca
My name is Dave
TheMailMan78: No. I don't want to own either of their products because of this childish BS. You can call a competitor out without pointing fingers. Have a little class, people.
+1.

I gotta say though, AsRock pulled the same thing, and all that Gigabyte has done in this situation is respond, as you would to a spoilt child throwing a tantrum.

Kinda sucks they got dragged into this crap, but at the same time, I do dare say that they were one of the first to claim PCIe 3.0 support, so maybe they drew the attention to themselves.

All I know is that I have been highlighting PCIe 3.0 switches in my reviews on every product that has them, and now, because of this, ANY PCIe switch on a board I review gets highlighted. I kind of appreciate the fact that this has provided me with more work, really, I DO. :banghead:

Poo-poo on AsRock and MSI for starting this crap!
Posted on Reply
#21
btarunr
Editor & Senior Moderator
Added a poll.
Posted on Reply
#22
btarunr
Editor & Senior Moderator
cadaveca: I gotta say though, AsRock pulled the same thing, and all that Gigabyte has done in this situation is respond, as you would to a spoilt child throwing a tantrum.
ASRock didn't get itself involved. It just sensed the situation and spammed us with the same PCIe Gen3 PR material it sent us months ago when it launched its Gen 3 motherboards. The same "Gen3 gets you laid" stuff.
Posted on Reply
#23
HalfAHertz
So um what are the differences between pci-e 2.0 x16 and pci-e 3.0 x8, don't they provide the exact same bandwidth?
Posted on Reply
#24
btarunr
Editor & Senior Moderator
HalfAHertz: So um what are the differences between pci-e 2.0 x16 and pci-e 3.0 x8, don't they provide the exact same bandwidth?
Yes (see the quick math below), provided:
  • You use Ivy Bridge CPU
  • The graphics card is PCI-E 3.0 compliant
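Quick back-of-the-envelope using the standard per-lane rates (spec figures, not numbers from either vendor's slides): Gen 2 signals at 5 GT/s with 8b/10b encoding, Gen 3 at 8 GT/s with 128b/130b, so one Gen 3 lane moves almost exactly twice the payload of a Gen 2 lane.

```python
def lane_mb_s(gt_per_s: float, coding: float) -> float:
    """Payload MB/s per lane: transfer rate times line-code efficiency."""
    return gt_per_s * 1e9 * coding / 8 / 1e6

gen2 = lane_mb_s(5.0, 8 / 10)     # ~500 MB/s per lane, 8b/10b encoding
gen3 = lane_mb_s(8.0, 128 / 130)  # ~985 MB/s per lane, 128b/130b encoding

print(f"PCIe 2.0 x16: ~{16 * gen2:,.0f} MB/s")  # ~8,000 MB/s
print(f"PCIe 3.0 x8:  ~{8 * gen3:,.0f} MB/s")   # ~7,877 MB/s
```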
Posted on Reply
#25
TheMailMan78
Big Member
btarunr: ASRock didn't get itself involved. It just sensed the situation and spammed us with the same PCIe Gen3 PR material it sent us months ago when it launched its Gen 3 motherboards. The same "Gen3 gets you laid" stuff.
Whoa, whoa, wait a second. Gen3 gets you laid?! I'll tattoo an ASRock logo on my face if that's true.
Posted on Reply