Wednesday, September 7th 2011

MSI Calls Bluff on Gigabyte's PCIe Gen 3 Ready Claim

In August, Gigabyte made a claim that baffled at least MSI: that scores of its motherboards are "Ready for Native PCIe Gen. 3." Along with the likes of ASRock, MSI was one of the first with motherboards featuring PCI-Express 3.0 slots, and the company took pains to educate buyers on what PCI-E 3.0 is and how to spot a motherboard that features it. MSI thinks that Gigabyte made a factual blunder bordering on misinformation by claiming that as many as 40 of its motherboards are "Ready for Native PCIe Gen. 3," and has put its engineering and PR teams to work on a technically sound presentation rebutting Gigabyte's claims.

More slides, details follow.

MSI begins by explaining that PCIe support isn't as easy as laying a wire between the CPU and the slot. It needs specification-compliant lane switches and electrical components, and, in MSI's view, you can't count on certain Gigabyte boards for future-proofing.

MSI did some PCI-Express electrical testing using a 22 nm Ivy Bridge processor sample.
MSI claims that apart from the G1.Sniper 2, none of Gigabyte's so-called "Ready for Native PCIe Gen. 3" motherboards are what the badge claims to be, and that the badge is extremely misleading to buyers. Time to refill the popcorn bowl.
Source: MSI

286 Comments on MSI Calls Bluff on Gigabyte's PCIe Gen 3 Ready Claim

#76
n-ster
[H]@RD5TUFF: Sorry, if you trust Gigabyte you're a sucker; they have proven at every turn to be a shady group with shady, if not outright false, marketing!
I'm not a sucker... Gigabyte has been making great boards, and TBH, IMO, they have had their 2nd strike now (if this is true). I didn't even know about the hypermemory false advertising until now; does that mean I am a sucker? I have to admit I will be more cautious around GB products, but questionable advertising doesn't mean lower quality products. Also note that GB hasn't had time to respond to this; give them a fucking chance!

I understand you not liking GB for this, and that is not only your right, but it is totally understandable. However, calling anyone who trusts GB a sucker is going a bit far. At least have some respect and say something like "gullible".
Derek12: I don't see the issue here, they said it is HYPERMEMORY. My POWERCOLOR also says 1GB HYPERMEMORY and has 512MB dedicated; what's the issue?
It said HM up to 1GB GDDR5, but it clearly can only have 512MB of GDDR5 and then it's DDR2/3. It also doesn't say the actual memory size on the box. Definitely shady and misleading. They screwed up.


As per the topic, I think this could potentially be a big blow to GB. The hypermemory is shady advertising, but it is still somewhat gullible of the buyer to buy it, as the specs say 512MB; this doesn't excuse GB, but it makes it a bearable mistake. However, it seems they have no excuse for this. We will just have to wait and see.
Posted on Reply
#77
Mussels
Freshwater Moderator
Not to mention the hypermemory thing was, IIRC, a China-only deal. For all we know, the blame there lay with whoever they hired for the box art design, and not Gigabyte themselves.


In this case, we know that however it turns out, Giga IS responsible. This time around it's clearly official marketing, and not just random misleading box art/stickers on a few products.
Posted on Reply
#78
neliz
Mussels: Not to mention the hypermemory thing was, IIRC, a China-only deal. For all we know, the blame there lay with whoever they hired for the box art design, and not Gigabyte themselves.


In this case, we know that however it turns out, Giga IS responsible. This time around it's clearly official marketing, and not just random misleading box art/stickers on a few products.
It also involved stickers on the card to indicate 1G. And China is a huge market, BTW.
Posted on Reply
#79
n-ster
Is it me or is the Source for the MSI slides missing?

I don't fully get the slides; can someone summarize them for me so they are a bit more understandable?
Posted on Reply
#80
Mussels
Freshwater Moderator
n-ster: Is it me or is the Source for the MSI slides missing?

I don't fully get the slides; can someone summarize them for me so they are a bit more understandable?
Gigabyte: we have PCI-E 3!

MSI: we have PCI-E 3... and Gigabyte doesn't. Liars.
Posted on Reply
#81
Suhidu
n-ster: Is it me or is the Source for the MSI slides missing?

I don't fully get the slides; can someone summarize them for me so they are a bit more understandable?
Ivy Bridge (successor to Sandy Bridge) CPUs will support PCI-E 3.0, which is faster than the current PCI-E 2.0. Ivy Bridge CPUs are slated to work on current Socket 1155 motherboards. However, if you want PCI-E 3.0 speeds, then the interconnects on the motherboard must also support it. MSI is claiming that Gigabyte is using PCI-E 2.0 switches (among other parts) on their motherboards, thus limiting speeds to PCI-E 2.0 (even though they'll work with and be "Ready for" PCI-E 3.0 cards).
Posted on Reply
#82
n-ster
Suhidu: Ivy Bridge (successor to Sandy Bridge) CPUs will support PCI-E 3.0, which is faster than the current PCI-E 2.0. Ivy Bridge CPUs are slated to work on current Socket 1155 motherboards. However, if you want PCI-E 3.0 speeds, then the interconnects on the motherboard must also support it. MSI is claiming that Gigabyte is using PCI-E 2.0 switches (among other parts) on their motherboards, thus limiting speeds to PCI-E 2.0 (even though they'll work with and be "Ready for" PCI-E 3.0 cards).
Is MSI claiming GB's mobos won't be running in PCI-E 3.0 mode then? Because if the argument is speed, wouldn't PCI-E 2.0 x16 "speeds" be equal to PCI-E 3.0 x8 speeds?

@ Mussels :roll: I meant a more detailed summary :rolleyes:
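
For reference, here is the raw arithmetic behind that question, as a back-of-the-envelope Python sketch using the published line rates and encoding overheads (theoretical peak only; the helper name is mine and real-world throughput will differ):

    # Theoretical PCIe bandwidth: line rate x encoding efficiency x lanes.
    def pcie_bandwidth_gb_s(gen, lanes):
        line_rate_gt_s = {2: 5.0, 3: 8.0}[gen]         # gigatransfers/s per lane
        efficiency = {2: 8 / 10, 3: 128 / 130}[gen]    # 8b/10b vs 128b/130b encoding
        return line_rate_gt_s * efficiency * lanes / 8  # bits -> bytes

    print(f"PCI-E 2.0 x16: {pcie_bandwidth_gb_s(2, 16):.2f} GB/s")  # ~8.00 GB/s
    print(f"PCI-E 3.0 x8:  {pcie_bandwidth_gb_s(3, 8):.2f} GB/s")   # ~7.88 GB/s
    print(f"PCI-E 3.0 x16: {pcie_bandwidth_gb_s(3, 16):.2f} GB/s")  # ~15.75 GB/s

On paper the two configurations land in the same ballpark; PCI-E 3.0 only pulls clearly ahead when it gets the full 16 lanes.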
Posted on Reply
#83
sneekypeet
not-so supermod
You all need to read: Gigabyte only claims the bandwidth of PCI-e 3.0, nothing more!

As far as "Native" goes, sure, once you add an Ivy Bridge it's native to that NB chipset!
Posted on Reply
#84
neliz
n-ster: Is it me or is the Source for the MSI slides missing?

I don't fully get the slides; can someone summarize them for me so they are a bit more understandable?
The PowerPoint is here; some other sites are referring to it: media.msi.com/main.php?g2_itemId=68762
Posted on Reply
#85
Lordbollo
Steven B: Just because something technical is put on a slide and delivered from a company for once doesn't mean what the text says is true or applicable to the matter at hand.

I think it is very nice that MSI took the time to make those slides, and I think it's a great way for them to show the community how the PCI-E lane allotment and standards work; the sad part is it doesn't disprove that the first x8 lanes of the first x16 slot on the UD4 are PCI-E 3.0 capable.

Besides, if what VRZone says is true, then this doesn't matter.
And just because someone posts a counterclaim on a forum doesn't mean it is to be believed either.

All I am saying is that they have reported (however truthfully) that the UD4 isn't compliant, and you came on and started talking about the UD7, which, after reading the story that TPU put up, was not even mentioned.

I will agree that, yes, it doesn't really disprove it as you said, but why did you introduce the UD7 into this in the first place? Nowhere in the article is it mentioned.
Posted on Reply
#86
n-ster


Is this the UD7 he might be talking about?
Posted on Reply
#87
cool_recep
sneekypeet: Ha, it would be awesome if Gigabyte was doing a revision and MSI just looks like more of an ass for being a douche all the way around, instead of just stealing ideas and calling it your own!

IDK, I guess I'm just so tired of people who are obviously wrong making others look bad to get themselves onto the next rung of the ladder.
And what about Gigabyte's stealing HiC Cap from MSI?
Posted on Reply
#88
neliz
n-ster: Is MSI claiming GB's mobos won't be running in PCI-E 3.0 mode then? Because if the argument is speed, wouldn't PCI-E 2.0 x16 "speeds" be equal to PCI-E 3.0 x8 speeds?
PCI-E 2.0 x16 is slower than PCI-E 3.0 x8.
I've talked with some knowledgeable MB folks and they all say that the CPUs will stay in 2.0 x16 mode when you install a PCI-E 3.0 card in it without the proper switches.
Posted on Reply
#89
sneekypeet
not-so supermod
cool_recep: And what about Gigabyte's stealing HiC Cap from MSI?
I wasn't saying GB is more right than MSI. Just that MSI isn't any better ;)
Posted on Reply
#92
neliz
Lordbollo: Um OK, I have egg on my face, sorry. I was more looking at the CPU-Z pics.
No, don't be ashamed; you were right, and n-ster got stuck in Gigabyte's quagmire of lies, since the press release said that their ENTIRE (yeah, ENTIRE!) 6 series line-up was compatible:

www.gigabyte.us/press-center/news-page.aspx?nid=1048
GIGABYTE Announces Entire 6 Series Ready to Support Native PCIe Gen. 3
Future Proof Your Platform for Next Generation Intel 22nm CPUs
2011/08/08


Taipei, Taiwan, August 8, 2011 - GIGABYTE TECHNOLOGY Co., Ltd, a leading manufacturer of motherboards, graphics cards and computing hardware solutions today announced their entire range of 6 series motherboards are ready to support the next generation Intel 22nm CPUs (LGA1155 Socket) as well as offer native support for PCI Express Gen. 3 technology, delivering maximum data bandwidth for future discrete graphics cards.
So yeah, there you have it: even with the original press release they already knew they were lying, since the UD7 has the NF200 and you will NOT have maximum data bandwidth for future discrete graphics cards.
Ultim8: Hmmm, gigabyte = Dual BIOS + No UEFI :)
Yeah, there are going to be loads of ****ed off Gigabyte customers once they find out their 2011 system can't run Windows 8 :p
Posted on Reply
#93
Ultim8
the UD7 isn't on the list???
Posted on Reply
#94
neliz
Ultim8: the UD7 isn't on the list???
Yes, Gigabyte already lied in the original press release since "entire lineup" would include the UD7's :)
Posted on Reply
#95
Ultim8
Windows 8??? Windows 8 doesn't need UEFI.

Even OA3, which is for large SIs, doesn't need UEFI, as the BIOS strings can still be built into older BIOSes.

Also, if I remember rightly, Gigabyte's UD7 board doesn't activate the NF200 chip until 3 or more PCIe x16 slots are used, which is different to the way boards such as the ASUS ROG boards work, where all PCIe traffic is sent via the NF200; so in theory it should work for the first PCIe slot too. I think there are two ways to look at the news: Gigabyte have said they are native PCIe 3 ready, which suggests you can use PCIe 3.

Now, I know you can use PCIe 3 without the switches, but only in the first slot. MSI's pictures in their presentation are misleading because they show the switches in the path below the CPU. In reality these chips are there to bridge PCIe lanes.

This means that the first PCIe x16 slot has a direct connection to the CPU, so it will become a PCIe 3 slot. However, the speed through the switches will be reduced to the limits of the switch.

So something like this:

Ivy Bridge CPU-------PCIe3---Switch Gen2------PCIe2

vs

Ivy Bridge CPU-------PCIe3---Switch Gen3------PCIe3
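
In code form, a minimal sketch of that idea (a hypothetical helper of my own; it simply assumes the link negotiates down to the slowest hop in the path):

    # Hypothetical sketch: the generation a slot runs at is capped by the
    # slowest hop between the CPU and the card.
    def negotiated_gen(cpu_gen, card_gen, switch_gen=None):
        hops = [cpu_gen, card_gen]
        if switch_gen is not None:  # slots routed through a bridge/switch
            hops.append(switch_gen)
        return min(hops)

    print(negotiated_gen(cpu_gen=3, card_gen=3))                # first slot, direct to CPU: 3
    print(negotiated_gen(cpu_gen=3, card_gen=3, switch_gen=2))  # behind a Gen 2 switch: 2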

Now, depending on your view, you might say Gigabyte are the good guys, as they are giving existing customers PCIe 3, albeit only in one slot, whereas other manufacturers are charging you to upgrade for Gen 3 support.

They could have made it clearer, I agree, but I think MSI are mud-throwing here and they will come out worse for it, especially if Ivy Bridge needs them to wipe all their UEFI BIOSes, which can't be done at a service or reseller level!
Posted on Reply
#96
jfk1024
Why is MSI wrong?

WHY IS MSI WRONG? ...Because the PCI-e 3.0 physical layer is the same as PCI-e 2.0's. The only thing that is different is the 128b/130b encoding. The data is encoded/decoded by the PCI-e controller on the processor (Sandy Bridge) and on the graphics card. So PCI Express 3.0 has the same physical characteristics as PCI Express 2.0, which means: if the PCI-e controller knows how to encode and decode PCI-E 3.0 data, then we can transfer PCI-E 3.0 data through a PCI-E 2.0 physical link.
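
To put rough numbers on the encoding difference mentioned above (a quick arithmetic check of the two schemes, nothing vendor-specific):

    # Encoding overhead: 8b/10b (PCI-e 1.x/2.0) vs 128b/130b (PCI-e 3.0).
    overhead_8b10b = 1 - 8 / 10        # 20% of the raw bit rate lost to encoding
    overhead_128b130b = 1 - 128 / 130  # ~1.5% lost
    print(f"8b/10b overhead:    {overhead_8b10b:.1%}")     # 20.0%
    print(f"128b/130b overhead: {overhead_128b130b:.1%}")  # 1.5%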
Posted on Reply
#97
Ultim8
Correct, the controller is on the CPU, but the traces, sockets etc. are identical, so in theory every board manufacturer could give you PCIe 3 support for the first slot, but only Gigabyte did this... why don't the others???

You decide, but I know why.
Posted on Reply
#98
cadaveca
My name is Dave
MSi has updated their entire lineup to be fully PCIe 3.0 ready. All boards have been revised with new components, and feature the (G3) moniker. I reviewed one of these boards, the GD65, last week or the week before.

The only question I have is why did MSi single out Gigabyte? What makes them different from, say, ASUS?

ASUS hasn't mentioned PCIe 3.0 at all, that I can tell.
Posted on Reply
#99
jfk1024
PS: Probably MSI needs a PCI-E 3.0 GPU to do some real "testing".
Posted on Reply