• Welcome to TechPowerUp Forums, Guest! Please check out our forum guidelines for info related to our community.

MSI Calls Bluff on Gigabyte's PCIe Gen 3 Ready Claim

False advertising is pretty serious in the States. Can't the FTC look into this? It might suck to be Gigabyte pretty soon. I hope not though, they make good stuff.

Yes...

Isn't Gen 2 to Gen 3 like Gen 1 to Gen 2 for performance gains? Maybe that's why Gigabyte (if they did that, that is) skimped out, as the extra bandwidth isn't used most of the time anyway.
 
Sorry, if you trust Gigabyte you're a sucker; they have proven at every turn to be a shady group with shady, if not outright false, marketing!

I'm not a sucker... Gigabyte has been making great boards, and TBH, IMO, this is their second strike (if it's true). I didn't even know about the HyperMemory false advertising until now; does that make me a sucker? I admit I'll be more cautious around GB products, but questionable advertising doesn't mean lower-quality products. Also note that GB hasn't had time to respond to this yet, give them a fucking chance!

I understand you not liking GB for this, and that is not only your right but totally understandable. However, calling anyone who trusts GB a sucker is going a bit far. At least have some respect and say something like "gullible".

I don't see the issue here; they said it is HyperMemory. My PowerColor card also says 1GB HyperMemory and has 512MB dedicated. What's the issue?

It said HyperMemory up to 1GB GDDR5, but it can clearly only have 512MB of GDDR5, with the rest being DDR2/3. It also doesn't state the actual memory size on the box. Definitely shady and misleading. They screwed up.


As per the topic, I think this could potentially be a big blow to GB. The HyperMemory thing is shady advertising, but it's still somewhat gullible of the buyer to fall for it when the specs say 512MB. That doesn't excuse GB, but it makes it a bearable mistake. However, it seems they have no excuse for this one; we'll just have to wait and see.
 
Not to mention the HyperMemory thing, IIRC, was a China-only deal. For all we know, the blame there lay with whoever they hired for the box art design, and not Gigabyte themselves.


In this case, we know that whatever it turns out to be, Giga IS responsible. This time around it's clearly official marketing, and not just random misleading box art/stickers on a few products.
 

It also involved stickers on the card itself indicating 1GB. And China is a huge market, BTW.
 
is it me or is the Source for the MSI slides missing?

I don't fully get the slides, can someone summarize them for me so they are a bit more understandable?
 

Gigabyte: we have PCI-E 3!

MSI: we have PCI-E 3... and Gigabyte doesn't. Liars.
 

Ivy Bridge (the successor to Sandy Bridge) CPUs will support PCI-E 3.0, which is faster than the current PCI-E 2.0. Ivy Bridge CPUs are slated to work on current Socket 1155 motherboards. However, if you want PCI-E 3.0 speeds, then the interconnects on the motherboard must also support it. MSI is claiming that Gigabyte is using PCI-E 2.0 switches (among other parts) on their motherboards, thus limiting speeds to PCI-E 2.0 (even though they'll work with, and be "ready for", PCI-E 3.0 cards).
 

Is MSI claiming GB's mobos won't be running in PCI-E 3.0 mode, then? Because if the argument is speed, wouldn't PCI-E 2.0 x16 speeds be roughly equal to PCI-E 3.0 x8 speeds?

@ Mussels :roll: I meant a more detailed summary :rolleyes:
 
You all need to read: Gigabyte only claims the bandwidth of PCI-e 3.0, nothing more!

As far as "native" goes, sure, once you add an Ivy Bridge it's native to that NB chipset!
 
Just because something technical is put on a slide and delivered by a company for once doesn't mean what the text says is true or applicable to the matter at hand.

I think it's very nice that MSI took the time to make those slides, and I think it's a great way for them to show the community how PCI-E lane allotment and the standards work. The sad part is that it doesn't disprove that the first x8 lanes of the first x16 slot on the UD4 are PCI-E 3.0 capable.

Besides if what VRZone says is true, then this doesn't matter.

And just because someone posts a counter-claim on a forum doesn't mean it is to be believed either.

All I am saying is that they have reported (however truthfully) that the UD4 isn't compliant, and you came in and started talking about the UD7, which, after reading the story TPU put up, was not even mentioned.

I will agree that, yes, it doesn't really disprove anything, as you said, but why did you introduce the UD7 into this in the first place? Nowhere in the article is it mentioned.
 
[Image: 41b.jpg]


Is this the UD7 he might be talking about?
 
Ha, it would be awesome if Gigabyte was already doing a revision; then MSI would just look like more of an ass for being a douche all the way around, instead of just stealing ideas and calling them their own!

IDK, I guess I'm just so tired of people who are obviously wrong making others look bad to get themselves onto the next rung of the ladder.

And what about Gigabyte's stealing Hi-c CAPs from MSI?
 
Is MSI claiming GB's mobos won't be running in PCI-E 3.0 mode, then? Because if the argument is speed, wouldn't PCI-E 2.0 x16 speeds be roughly equal to PCI-E 3.0 x8 speeds?

PCI-E 2.0 x16 is slower than PCI-E 3.0 x8.
I've talked with some knowledgeable MB folks, and they all say the CPU will stay in 2.0 x16 mode when you install a PCI-E 3.0 card without the proper switches.
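For what it's worth, the raw link rates behind that comparison can be worked out from the signaling rates and encodings in the specs. A back-of-the-envelope sketch in Python (raw bit rates only; packet and protocol overhead are ignored, which is where Gen 3 gains its real-world edge):

```python
# Per-lane data rate after line encoding, in Gbit/s.
# PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b.
def lane_gbps(gigatransfers, enc_data, enc_total):
    return gigatransfers * enc_data / enc_total

gen2_lane = lane_gbps(5.0, 8, 10)     # 4.0 Gbit/s (500 MB/s)
gen3_lane = lane_gbps(8.0, 128, 130)  # ~7.88 Gbit/s (~985 MB/s)

x16_gen2 = 16 * gen2_lane  # 64.0 Gbit/s
x8_gen3 = 8 * gen3_lane    # ~63.0 Gbit/s

print(f"PCIe 2.0 x16: {x16_gen2:.1f} Gbit/s")
print(f"PCIe 3.0 x8:  {x8_gen3:.1f} Gbit/s")
```

So on raw link rate alone the two land within about 2% of each other.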
 
Um, OK, I have egg on my face, sorry. I was looking more at the CPU-Z pics.

No, don't be ashamed, you were right, and n-ster got stuck in Gigabyte's quagmire of lies, since the press release said that their ENTIRE (yeah, ENTIRE!) 6 series line-up was compatible:

http://www.gigabyte.us/press-center/news-page.aspx?nid=1048

GIGABYTE Announces Entire 6 Series Ready to Support Native PCIe Gen. 3
Future Proof Your Platform for Next Generation Intel 22nm CPUs
2011/08/08


Taipei, Taiwan, August 8, 2011 - GIGABYTE TECHNOLOGY Co., Ltd, a leading manufacturer of motherboards, graphics cards and computing hardware solutions today announced their entire range of 6 series motherboards are ready to support the next generation Intel 22nm CPUs (LGA1155 Socket) as well as offer native support for PCI Express Gen. 3 technology, delivering maximum data bandwidth for future discrete graphics cards.

So yeah, there you have it: even with the original press release they already knew they were lying, since the UD7 has the NF200 and you will NOT get maximum data bandwidth for future discrete graphics cards.



Hmmm, Gigabyte = Dual BIOS + no UEFI :)
Yeah, there are going to be loads of ****ed off Gigabyte customers once they find out their 2011 system can't run Windows 8 :p
 
Windows 8??? Windows 8 doesn't need UEFI.

Even OA3 doesn't need UEFI, and that's for large SIs, as the BIOS strings can still be built into older BIOSes.

Also, on the UD7, if I remember rightly, Gigabyte's board doesn't activate the NF200 chip until three or more PCIe x16 slots are populated, which is different from the way boards such as the ASUS ROG boards work, where they send all PCIe traffic via the NF200. So in theory it should work for the first PCIe slot too. I think there are two ways to look at the news: Gigabyte have said they are PCIe 3 native ready, which suggests you can use PCIe 3.

Now, I know you can use PCIe 3 without the switches, but only in the first slot. MSI's pictures are misleading in their presentation because they show the switches below the path of the CPU. In reality these chips are there to bridge PCIe lanes.

Now, this means the first PCIe x16 slot has a direct connection to the CPU, so it will become a PCIe 3 slot. However, the speed through the switches will be reduced to the limits of the switch.

So something like this:

Ivy Bridge CPU-------PCIe3---Switch Gen2------PCIe2

vs

Ivy Bridge CPU-------PCIe3---Switch Gen3------PCIe3
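The two paths above boil down to a simple rule: a link trains at the highest generation that every hop on the path supports, i.e. the minimum across hops. A toy sketch of that rule (my own illustration, not anything from MSI's slides):

```python
# Toy model of PCIe link training: the negotiated generation on a path
# is capped by the slowest component (CPU root port, any switch, device).
def negotiated_gen(*hops):
    """Return the PCIe generation the link runs at: the minimum hop."""
    return min(hops)

# Ivy Bridge root port (Gen 3) through a Gen 2 switch to a Gen 3 card:
print(negotiated_gen(3, 2, 3))  # 2 -> stuck at PCIe 2.0 speeds
# Same path with a Gen 3 switch:
print(negotiated_gen(3, 3, 3))  # 3 -> full PCIe 3.0
# First slot wired straight to the CPU, no switch in between:
print(negotiated_gen(3, 3))     # 3
```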

Now, depending on your view, you might say Gigabyte are the good guys, as they are giving existing customers PCIe 3, albeit only in one slot, whereas other manufacturers are charging you to upgrade for Gen 3 support.

They could have made it clearer, I agree, but I think MSI are mud-throwing here and they will come out worse for it, especially if Ivy Bridge requires them to wipe all their UEFI BIOSes, which can't be done at a service or reseller level!
 
Why is MSI wrong?

WHY IS MSI WRONG? ...because the PCI-e 3.0 physical layer is the same as PCI-e 2.0's. The only thing that is different is the 128b/130b encoding. The data is encoded/decoded by the PCI-e controller on the processor and on the graphics card. So PCI Express 3.0 has the same physical characteristics as PCI Express 2.0, which means: if the PCI-e controller knows how to encode and decode PCI-E 3.0 data, then we can transfer PCI-E 3.0 data through a PCI-E 2.0 physical link.
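On the encoding point: the overhead difference between the two schemes is easy to quantify. A quick sketch using the standard figures for 8b/10b and 128b/130b (general PCIe facts, not taken from this thread):

```python
# Line-encoding overhead of the two PCIe generations discussed above:
# Gen 2 uses 8b/10b (2 sync bits per 8 data bits), Gen 3 uses 128b/130b.
def encoding_overhead(data_bits, total_bits):
    """Fraction of the raw bit rate lost to line encoding."""
    return 1 - data_bits / total_bits

print(f"8b/10b overhead:    {encoding_overhead(8, 10):.1%}")    # 20.0%
print(f"128b/130b overhead: {encoding_overhead(128, 130):.2%}") # ~1.54%
```

That near-elimination of encoding overhead is why Gen 3 only needed to raise the signaling rate from 5 GT/s to 8 GT/s to double effective bandwidth.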
 
Correct, the controller is on the CPU, but the traces, sockets, etc. are identical, so in theory every board manufacturer could give you PCIe 3 support for the first slot. But only Gigabyte did this... why don't the others?

You decide, but I know why.
 
MSI has updated their entire lineup to be fully PCIe 3.0 ready. All boards have been revised with new components and feature the (G3) moniker. I reviewed one of these boards, the GD65, last week or the week before.

The only question I have is: why did MSI single out Gigabyte? What makes them different from, say, ASUS?

ASUS hasn't mentioned PCIe 3.0 at all, as far as I can tell.
 
PS: MSI probably needs a PCI-E 3.0 GPU to do some real testing.
 