# MSI Calls Bluff on Gigabyte's PCIe Gen 3 Ready Claim



## btarunr (Sep 7, 2011)

In August, Gigabyte made a claim that baffled at least MSI: that scores of its motherboards are "Ready for Native PCIe Gen. 3." Along with the likes of ASRock, MSI was one of the first to ship motherboards featuring PCI-Express 3.0 slots, and the company took pains to educate buyers on what PCI-E 3.0 is and how to spot a motherboard that features it. MSI thinks Gigabyte made a factual blunder bordering on misinformation by claiming that as many as 40 of its motherboards are "Ready for Native PCIe Gen. 3," and set its engineering and PR teams to work building a technically sound presentation rebutting Gigabyte's claims.



 

 



More slides, details follow.



MSI begins by explaining that PCIe 3.0 support isn't as easy as laying a wire between the CPU and the slot: it needs specification-compliant lane switches and electrical components, and you can't count on certain Gigabyte boards for future-proofing.



 

 

 



MSI did some PCI-Express electrical testing using a 22 nm Ivy Bridge processor sample.



 

 

 

MSI claims that apart from the G1.Sniper 2, none of Gigabyte's so-called "Ready for Native PCIe Gen. 3" motherboards are what the badge claims to be, and that the badge is extremely misleading to buyers. Time to refill the popcorn bowl.

*View at TechPowerUp Main Site*


----------



## GSG-9 (Sep 7, 2011)

Oh snap.


----------



## Fx (Sep 7, 2011)

lol nice! things just got real...


----------



## neliz (Sep 7, 2011)

Nice to see when Marketing BS is burned to the ground by technical realities.


----------



## kereta (Sep 7, 2011)

Is the source reliable?


----------



## Easo (Sep 7, 2011)

Epic popcorn shall be consumed indeed...


----------



## Radical_Edward (Sep 7, 2011)

Daaaaaaaaamn. Gigabyte got burned bigtime.


----------



## Chaitanya (Sep 7, 2011)

I see a big storm brewing on the horizon after this press release.


----------



## DannibusX (Sep 7, 2011)

False advertising is pretty serious in the States.  Can't the FTC look into this?  It might suck to be Gigabyte pretty soon.  I hope not though, they make good stuff.


----------



## dir_d (Sep 7, 2011)

Glad to see MSI step up their game.


----------



## cadaveca (Sep 7, 2011)

I'm staying well outta this one. WTF is going on here? Shall I start making popcorn?


----------



## TheLaughingMan (Sep 7, 2011)

Nope. Using the word "Ready" gives them at least 2 loopholes to get out of this that I see. It will be interesting to see how they respond.


----------



## heky (Sep 7, 2011)

Busted! That's what they get for trying to fuck people over.


----------



## TheLostSwede (Sep 7, 2011)

Ok, how easy would it be to fake the lane readout? For one, are there any Gen 3 PCI Express x16 cards MSI could've fitted to the slot?
And what happens if you put an x8 Gen 1 card in a slot on any board? Wouldn't that revert the slot to the same speed as the card?
Also, what utility is MSI using to get that readout?

On top of that, Gigabyte never said it'd work for multiple cards, only that the primary slot would operate at x16 Gen 3 speed.

I dunno about the components on the PCB beyond the switches, which MSI is making a big deal out of as the only choice, but there are at least two more companies that already have Gen 3 switches; none of those are used on the Gigabyte board in question though.


----------



## arroyo (Sep 7, 2011)

War… War never changes…


----------



## buggalugs (Sep 7, 2011)

OH Noes!! Gigabyte have done it again. Gigabyte are really screwing up lately....


----------



## Steevo (Sep 7, 2011)

Has MSI actually gotten honest and better?


----------



## kid41212003 (Sep 7, 2011)

Those pictures are probably just for promotional purposes and don't really represent the finished products.


----------



## arnoo1 (Sep 7, 2011)

Someone is feeling pressure from Gigabyte's awesome mobos.
MSI, that's just lame.


----------



## RejZoR (Sep 7, 2011)

The whole thing is a bit lame, but it's nice to see MSI uncovering the truth. It's only good for us, consumers.


----------



## entropy13 (Sep 7, 2011)

kid41212003 said:


> Those pictures are probably just for promotional purposes and don't really represent the finished products.



Huh? Gigabyte is saying their boards are "Ready for Native PCIe Gen. 3", and obviously they're already the "finished products" since you can buy them right now, and all you need is a BIOS update.


----------



## robal (Sep 7, 2011)

Eagerly awaiting Gigabyte's response


----------



## Vincy Boy (Sep 7, 2011)

arnoo1 said:


> Someone is feeling pressure from Gigabyte's awesome mobos.
> MSI, that's just lame.



Lame to reveal a company's attempt at misinforming us consumers? What are you, some kind of blind Gigabyte fanboy? I've had my eye on a Gigabyte Z68, and this only helped me make a more informed purchase decision.
Looking forward to hearing Gigabyte's response on this.


----------



## kid41212003 (Sep 7, 2011)

entropy13 said:


> Huh? Gigabyte is saying their boards are "Ready for Native PCIe Gen. 3", and obviously they're already the "finished products" since you can buy them right now, and all you need is a BIOS update.



I guess I missed the "In August" part. 

And since they used pictures that look like promotional shots, I thought these boards were yet to be released.


----------



## WarraWarra (Sep 7, 2011)

Cool, another win for the end user.
I wonder what could be said about the GPUs being produced/labeled as Gen 3 PCIe?
Makes you think about CPUs as well; at least Intel admitted their bugs and is working on the 2011 fix for that.

MSI = Robin Hood: Men in Tights, LOL.
Who would have thought that the ancient mythical concept called democracy and a legal system, or that which they embody, still lives somewhere on planet Earth today? Thought it died out 200 years ago, and here MSI shows us it's still alive.

Is MSI Chinese/Russian or something? Maybe I should move to that country to find democracy & freedom.

Keep up the good work MSI.


----------



## RejZoR (Sep 7, 2011)

MSI, or Micro-Star International (they used to call themselves that in the past, but lately they're just MSI), is a Taiwanese company.


----------



## jpierce55 (Sep 7, 2011)

RejZoR said:


> MSI, or Micro-Star International (they used to call themselves that in the past, but lately they're just MSI), is a Taiwanese company.



LOL Microstar international..... I remember that..... good to know information.


----------



## [H]@RD5TUFF (Sep 7, 2011)

Gigabyte has a history of misleading advertising, like the video cards that said 1 GB of "HyperMemory" but came with only 512 MB, etc., etc. Just another reason to look at other manufacturers.


----------



## Jstn7477 (Sep 7, 2011)

Anyone remember the Gigabyte G41 fiasco where the PCIe x16 slot was only wired with 4 lanes when the chipset was capable of a real x16 slot? If this story is really true, it's hilarious.


----------



## qubit (Sep 7, 2011)

I think adding a poll to this news post asking if we think Gigabyte is talking BS or not would go down really well. 

I'm voting for BS from Gigabyte, especially as they claim a whopping _40_ mobos.


----------



## Millennium (Sep 7, 2011)

Yeah good for MSI here. Last thing we need is misinformation from a 'trusted' brand like Gigabyte


----------



## LAN_deRf_HA (Sep 7, 2011)

Reminds me of that time Gigabyte said the Asus epu chips don't do anything. Asus just turned around and sued them and they folded pretty easy. Wonder what they'll do here.


----------



## [H]@RD5TUFF (Sep 7, 2011)

LAN_deRf_HA said:


> Reminds me of that time Gigabyte said the Asus epu chips don't do anything. Asus just turned around and sued them and they folded pretty easy. Wonder what they'll do here.



Likely nothing.


----------



## Andrea deluxe (Sep 7, 2011)

Anyone remember my post????

http://www.techpowerup.com/150333/G...ries-Ready-to-Support-Native-PCIe-Gen.-3.html


----------



## buggalugs (Sep 7, 2011)

LAN_deRf_HA said:


> Wonder what they'll do here.



It's going to be interesting.




Andrea deluxe said:


> Anyone remember my post????
> 
> http://www.techpowerup.com/150333/G...ries-Ready-to-Support-Native-PCIe-Gen.-3.html



Wow, you were spot on dude. I remember reading your post now.....


----------



## ensabrenoir (Sep 7, 2011)

*not trying 2 start something*

Ivy Bridge sample that's confirmed to exist... Wonder if MSI has a Bulldozer sample...


----------



## Frick (Sep 7, 2011)

[H]@RD5TUFF said:


> Gigabyte has a history of misleading advertising, like the video cards that said 1 GB of "HyperMemory" but came with only 512 MB, etc., etc. Just another reason to look at other manufacturers.



That was not Gigabyte, it was from ATI.


----------



## BrooksyX (Sep 7, 2011)

Maybe those motherboards are Windows Vista ready too...


----------



## [H]@RD5TUFF (Sep 7, 2011)

Will be interesting to see if Gigabyte is made to retract its 3.0 claims.


----------



## TheLostSwede (Sep 7, 2011)

[H]@RD5TUFF said:


> Will be interesting to see if Gigabyte is made to retract its 3.0 claims.



Interesting for whom? You?

No one can prove, 100% without a doubt, that their motherboard today can work with PCI Express 3.0 cards, as there are no cards. That's how simple it is. As for anything else, let's wait for an official statement from Gigabyte, shall we, before we draw conclusions either way.


----------



## [H]@RD5TUFF (Sep 7, 2011)

TheLostSwede said:


> Interesting for whom? You?
> 
> No one can prove, 100% without a doubt, that their motherboard today can work with PCI Express 3.0 cards, as there are no cards. That's how simple it is. As for anything else, let's wait for an official statement from Gigabyte, shall we, before we draw conclusions either way.



Do you honestly believe Gigabyte would say "oh, you're right, our bad"...?


----------



## ranom (Sep 7, 2011)

MSI take a swipe at Gigabyte 

BTW, here in Japan, Gigabyte has released a series of "interesting" boards. Locally called the GA-Z68X-UD3H-B3/G3, GA-Z68MA-D2H-B3/G3, and GA-Z68MX-UD2H-B3/G3; *global names are GA-Z68X-UD3H-B3 rev1.3, GA-Z68MA-D2H-B3 rev1.3, and GA-Z68MX-UD2H-B3 rev1.3*; these boards claim to be Ivy/PCIe Gen3 compatible. hmm....


----------



## DannibusX (Sep 7, 2011)

The issue really isn't whether PCIe 3.0 cards will work with the slots in existing Gigabyte boards, read the following news posting from Bta:



> GIGABYTE TECHNOLOGY Co., Ltd, a leading manufacturer of motherboards, graphics cards and computing hardware solutions today announced their entire range of 6 series motherboards are ready to support the next generation Intel 22nm CPUs (LGA1155 Socket) *as well as offer native support for PCI Express Gen. 3 technology, delivering maximum data bandwidth for future discrete graphics cards.*
> 
> Wanting to provide maximum upgradeability to customers, GIGABYTE has enabled native support for PCI Express Gen. 3 across the entire range of GIGABYTE 6 series motherboards, including the recently launched G1.Sniper 2 motherboard, when paired with Intel's next generation 22nm CPUs. *By installing the latest BIOS for their 6 series motherboards today, users can be assured they are ready to take advantage of all the performance enhancements tomorrow's technologies have to offer*.



Gigabyte said that the current lineup will support PCIe 3.0 natively through a BIOS update, which is really hard to believe, since there's a difference in hardware when it comes to an upgraded socket. There will be performance degradation when using a 3.0 card in a 2.0 slot, mystical BIOS update be damned.


----------



## Steven B (Sep 7, 2011)

Well, I think what GIGABYTE meant is that there is native support for PCI-E Gen 3. Now, MSI took the one board, the UD7, that GIGABYTE did NOT list on their list of PCI-E 3.0 capable boards and attacked it, but GB never said the UD7 had PCI-E Gen 3 capability.

Now, on a board like the GD65 and UD4 there are 16 PCI-E lanes. We all know that lanes that go through the PCI-E switches will be limited by the bandwidth of that switch. The lanes that don't go through a switch, and 8 lanes don't on both the UD4 and GD65, are wired directly to the first x8 of the first x16 slot, and those can be PCI-E 3.0 capable.

Now, the UD7 was not included on the GB list because all its lanes go to the NF200.

In theory, because of GIGABYTE's announcement, all LGA1155 boards with the correct BIOS can support PCI-E 3.0 on those lanes wired directly from the CPU, which is usually 8.

MSI always copies GIGABYTE's stuff; they totally copied them on USB extra power, down to the advertising. I doubt GB would just go out and blatantly lie.


----------



## TheLostSwede (Sep 7, 2011)

[H]@RD5TUFF said:


> Do you honestly believe Gigabyte would say "oh, you're right, our bad"...?



They would, to me, but hey, you don't know me, so...
I'll go down to their offices tomorrow and have a chat with them, just for you, OK?


----------



## buggalugs (Sep 7, 2011)

TheLostSwede said:


> Interesting for whom? You?
> 
> No one can prove, 100% without a doubt, that their motherboard today can work with PCI Express 3.0 cards, as there are no cards. That's how simple it is. As for anything else, let's wait for an official statement from Gigabyte, shall we, before we draw conclusions either way.



It's obviously interesting for everybody involved with computers. It's being reported on tech news sites.

There are motherboard hardware specs and requirements for PCI-E 3, so if Gigabyte doesn't meet the specs, we have a right to ask questions...



Derek12 said:


> Do you have proof of previous incidents like this, or are you simply talking crap I can't verify on my own? I've trusted them since 2003, though maybe I won't anymore because of this (the incident, not your message).



Apart from the video memory issue, there is the UEFI BIOS issue from Gigabyte.

Gigabyte calls it a UEFI BIOS when it's not a real UEFI BIOS.

Gigabyte realized they made a big mistake on P67/Z68 boards by not including a UEFI BIOS when other manufacturers did include a real one. So Gigabyte built a fake UEFI BIOS, nothing more than glorified "EasyTune" Windows software, and called it "Hybrid UEFI".

It's exactly the same with this PCI-E 3.0 issue. Gigabyte gets caught with their pants down when the competition has something better, then they just lie and try to make a workaround.



Steven B said:


> I doubt GB would just go out and blatently lie.



Haha funny....


----------



## Steven B (Sep 7, 2011)

I am waiting to see what GIGABYTE will say; no, I'd expect MSI to lie before GB would. GB doesn't need to lie: if they state something THAT blatant, aimed at a group of users with higher-level thinking abilities like enthusiasts, people are going to attack it, so it had better stand up.

You believe MSI but not GIGABYTE? Why don't you take a look over at overclock.net. The thread in which GB made this announcement is pretty long; go to the last two pages and see what MSI rep MSIALEX says. He says exactly what I just said, and he said MSI products can support what GB said too, but it seems some overpretentious MSI people got offended. BTW, same thing as the ASRock slide earlier saying that ASRock has PCI-E 3.0 but GB doesn't, when GB has the capability too, just like everyone else. ASRock said this before GB even made their announcement.


----------



## Suhidu (Sep 7, 2011)

MSI is totally lame at marketing. Geeze, stealing all of Gigabyte's Durathunder3? Come up with your own campaign, you hardcore 'investigators'! Pheh. All they do is just sit back and craft these _amazing_ new features, and then poke holes in everyone else's marketing.

Good luck upgrading to those PCI-E 3.0 cards without _Gigabyte's High ESD-Resistance ICs_! Did I mention that Gigabyte is the FIRST (and only?) to have _full traditional-BIOS support_ for PCI-E 3.0 cards?


----------



## MxPhenom 216 (Sep 7, 2011)

what about Asrock Extreme Gen3 boards? are those true PCI-e 3.0?


----------



## buggalugs (Sep 7, 2011)

Haha, I just posted about it on Gigabyte's own forum. I still have an account from my old Gigabyte socket 775 board from 4-5 years ago.

I asked Gigabyte to respond. If the thread doesn't get deleted, I will post what they say... lol


----------



## TheLostSwede (Sep 7, 2011)

nvidiaintelftw said:


> what about Asrock Extreme Gen3 boards? are those true PCI-e 3.0?



Some of them are, but according to MSI's reasoning, their high-end model isn't, as it has an nForce 200 chip on it, yet ASRock is claiming PCI Express 3.0 support...


----------



## erocker (Sep 7, 2011)

nvidiaintelftw said:


> what about Asrock Extreme Gen3 boards? are those true PCI-e 3.0?



My Asrock Z68 Extreme4 Gen3 has NXP L04083B PCI-E 3.0 switches on it.


----------



## Andrea deluxe (Sep 7, 2011)

TheLostSwede said:


> Some of them are, but according to MSI's reasoning, their high-end model isn't, as it has an nForce 200 chip on it, yet ASRock is claiming PCI Express 3.0 support...



They support 3.0 only in a single slot of the mobo...

It's explained in the manual...


----------



## Steven B (Sep 7, 2011)

Well, the UD7 was never on the GB list, right?

PCI-E 3.0 is not an interesting topic though; it's like the least important thing on a motherboard you buy today.


----------



## TheLostSwede (Sep 7, 2011)

Andrea deluxe said:


> they support 3.0 only in a single slot of the mobo...
> 
> is explained on the manual.....



And if that is the case, then so should Gigabyte's boards...


----------



## STCNE (Sep 7, 2011)

I haven't trusted Gigabyte since they sent me back the same DOA board 3 times. 5 months of RMAing and $40 for shipping and I'm still stuck with their dead board.


----------



## ranom (Sep 7, 2011)

TheLostSwede said:


> And if that is the case, then so should Gigabyte's boards...



The reason for that on the ASRock Extreme7 is that the other PCIe slots are attached to the NF200, hence no Gen 3. But one of the slots does get the full 16 lanes, since they are using NXP L04083B PCIe 3.0 switches (though it becomes disabled when using the other x16 slots in CrossFire/SLI configurations).

Now, on boards that don't use PCIe lane switches, in theory they might be able to support a full 16 lanes of PCIe 3.0 (I have no idea if they will be electrically stable). But on the Gigabyte boards that do have lane switches, other than the new G1.Sniper 2 and possibly the new rev. 1.3 boards, the chances of having native Gen 3 16-lane support might be a bit slim.


----------



## Scheich (Sep 7, 2011)

All these lies, goddamit :shadedshu


----------



## dazz (Sep 7, 2011)

What happens in x8/x8 configurations with PCIe 2.0 switches involved? Do you get 8 lanes at 3.0 speeds and 8 lanes at 2.0, or do they all downgrade to 2.0?


----------



## neliz (Sep 7, 2011)

TheLostSwede said:


> Some of them are, but according to MSI's reasoning, their high-end model isn't, as it has an nForce 200 chip on it, yet ASRock is claiming PCI Express 3.0 support...



Actually, the Extreme7 is a different beast altogether.

It switches between the SINGLE Gen3x16 slot and the NF200 controller.

If you install a card in slot 2, it will disable ALL OTHER PCI EXPRESS SLOTS behind the NF200.
If you install an x16 card in any slot other than slot 2, it will disable Gen 3.

Now tell me, who feels like paying $300 for a single slot ATX board?


Oh, I checked the Gigabyte website:
G1 Sniper v2 lists Gen3 support:
http://www.gigabyte.com/products/product-page.aspx?pid=3962#ov

All other Z68 boards etc? NO Gen3 support listed anymore (clue clue!)
http://www.gigabyte.com/products/product-page.aspx?pid=3863#ov


----------



## Vimes (Sep 7, 2011)

neliz said:


> Actually, the Extreme7 is a different beast altogether.
> 
> It switches between the SINGLE Gen3x16 slot and the NF200 controller.
> 
> ...




They are, at this moment in time, all listed as being compatible with Gen 3...

http://uk.gigabyte.com/press-center/news-page.aspx?nid=1048


----------



## cdawall (Sep 7, 2011)

Derek12 said:


> Very bad, Gigabyte, *if this is true and not marketing BS from MSI*. You've disappointed me if that's the case.
> 
> 
> 
> ...



I make fun of PCChips all the time. How can you get a lower-end ECS mobo? I mean, that's a hell of a feat.



ensabrenoir said:


> Ivy Bridge sample that's confirmed to exist... Wonder if MSI has a Bulldozer sample...




BD samples have been out for about a year now.


----------



## sneekypeet (Sep 7, 2011)

I take it those in glass houses should throw stones?

http://www.techpowerup.com/forums/showthread.php?t=151660

So it's OK for MSI to say they're first with something they're not, but not OK for Gigabyte to try something similar?

MSI needs to climb out of everyone's ass and worry about their own house of cards for a while, IMHO!

On topic... meh. The educated won't fall for Gigabyte's tricks anyway!


----------



## cdawall (Sep 7, 2011)

sneekypeet said:


> I take it those in glass houses should throw stones?
> 
> http://www.techpowerup.com/forums/showthread.php?t=151660
> 
> ...



It's possible that the MSI card fart works. I shut mine off every night; who knows, maybe it will slow down dust build-up. It's still nowhere near full support for tech that won't be fully compatible.


----------



## sneekypeet (Sep 7, 2011)

Not that it works; they claim they are the first with the tech, which they are not, as someone in that thread brought up, and IMHO that's something bta should have caught, calling MSI's bluff in the OP instead of just following the marketing.

Again, the point is, people in glass houses shouldn't throw stones, but it seems MSI used bulletproof glass on their house!

So MSI sold the tech before they used it themselves? http://www.techpowerup.com/115132/S...-Cards-With-Dual-Layer-Fan-Blade-Cooling.html


----------



## ensabrenoir (Sep 7, 2011)

cdawall said:


> BD samples have been out for about a year now.



Yeah, every one benched was said to be false, inaccurate, etc., etc. Haven't heard of an accountable one yet.


----------



## cdawall (Sep 7, 2011)

ensabrenoir said:


> Yeah, every one benched was said to be false, inaccurate, etc., etc. Haven't heard of an accountable one yet.



Never said they had been benched and posted, just said the samples have been out a while.


----------



## ensabrenoir (Sep 7, 2011)

cdawall said:


> Never said they had been benched and posted, just said the samples have been out a while.



Got me on that one


----------



## werez (Sep 8, 2011)

The only problem they have right now is this: claiming NATIVE 3.0 support. "Native" is the wrong word to use. They could have just claimed that any upcoming 3.0 card will be compatible with their boards without any major performance loss due to bandwidth and other crap that will probably yield 2 fps more in a random game anyway. Just my two cents...
The real marketing BS is the 3.0 implementation. I can already see it coming: everybody upgrading their systems, buying a new motherboard for 3.0 support, and the upcoming cards will perform the same on 2.0 (2.1) / 3.0. I've seen it in the past, and I'm pretty sure I'll see this crap again.

The fact that the cards maybe won't run at full potential is another story, and we shall see the impact on performance. But like I said, I'm pretty sure they know what they are doing. Just remember that motherboard manufacturers get early engineering samples to test on their motherboards, so they can actually build that BIOS, UEFI or whatever, and I'm pretty sure they have a 3.0 card around there somewhere. You wouldn't want your HD 7*** or whatever not working (or not being fully compatible) in existing motherboards just because the card itself is 3.0. It should be backward compatible, and basically that's what a UEFI or BIOS update is for. Why not update the BIOS prior to the actual GPU launch?
And I am also pretty sure that they will launch motherboards with TRUE NATIVE (hardware-wise) PCIe 3.0 support after the actual video cards hit the market.
I'm still going to buy Gigabyte motherboards anyway...


----------



## Mussels (Sep 8, 2011)

i'll go get the butter.


----------



## cdawall (Sep 8, 2011)

Mussels said:


> i'll go get the butter.



can i bring the toast?


----------



## Mussels (Sep 8, 2011)

LAN_deRf_HA said:


> Reminds me of that time Gigabyte said the Asus epu chips don't do anything. Asus just turned around and sued them and they folded pretty easy. Wonder what they'll do here.



Except they were right; those chips really didn't do jack shit. I tested a few systems personally with a power meter at the wall, and they didn't do jack. It was just a software mechanism to control EIST, which didn't do anything if it was already enabled in the BIOS.


----------



## sneekypeet (Sep 8, 2011)

Ha, it would be awesome if Gigabyte were doing a revision and MSI just ended up looking like more of an ass for being a douche all the way around, instead of just stealing ideas and calling them their own!

IDK, I guess I'm just so tired of people who are obviously wrong making others look bad to get themselves onto the next rung of the ladder.


----------



## Lordbollo (Sep 8, 2011)

Steven B said:


> Well, I think what GIGABYTE meant is that there is native support for PCI-E Gen 3. Now, MSI took the one board, the UD7, that GIGABYTE did NOT list on their list of PCI-E 3.0 capable boards and attacked it, but GB never said the UD7 had PCI-E Gen 3 capability.



Um, no they didn't. The pics posted with this story clearly show that CPU-Z reports they used the P67A-UD4-B3 board, which GB claims is capable, dude. Not the P67A-UD7-B3.


----------



## Steven B (Sep 8, 2011)

Just because something technical is put on a slide and delivered by a company for once doesn't mean what the text says is true or applicable to the matter at hand.

I think it is very nice that MSI took the time to make those slides, and I think it's a great way for them to show the community how PCI-E lane allotment and the standards work. The sad part is that it doesn't disprove that the first x8 lanes of the first x16 slot on the UD4 are PCI-E 3.0 capable.

Besides, if what VR-Zone says is true, then this doesn't matter.


----------



## AsRock (Sep 8, 2011)

DannibusX said:


> False advertising is pretty serious in the States.  Can't the FTC look into this?  It might suck to be Gigabyte pretty soon.  I hope not though, they make good stuff.



Yes...

Isn't Gen 2 to Gen 3 like Gen 1 to Gen 2 in terms of performance gains? Maybe that's why Gigabyte (if they did, that is) skimped out, as the bandwidth is not used most of the time anyway.


----------



## n-ster (Sep 8, 2011)

[H]@RD5TUFF said:


> Sorry, if you trust Gigabyte you're a sucker; they have proven at every turn to be a shady group with shady, if not outright false, marketing!



I'm not a sucker... Gigabyte has been making great boards, and TBH, IMO, they have had their 2nd strike now (if this is true). I didn't even know about the hypermemory false advertising until now, does that mean I am a sucker? I have to admit I will be more cautious around GB products, but questionable advertising doesn't mean lower quality products. Also note that GB hasn't had time to respond to this, give them a fucking chance!

I understand you not liking GB for this, and that is not only your right, but it is totally understandable. However calling anyone who trusts GB a sucker is going a bit far. At least have some respect and say something like "gullible". 



Derek12 said:


> I don't see the issue here; they said it is HyperMemory. My PowerColor also says 1 GB HyperMemory and has 512 MB dedicated. What's the issue?



It said HM to 1 GB GDDR*5*, but it clearly can only have 512 MB of GDDR5; beyond that it's DDR2/3. It also doesn't say the actual memory size on the box. Definitely shady and misleading. They screwed up.


As for the topic, I think this could potentially be a big blow to GB. The HyperMemory thing is shady advertising, but it is still somewhat gullible of the buyer to buy it when the specs say 512 MB. This doesn't excuse GB, but it makes it a bearable mistake. However, it seems they have no excuse this time, so we will just have to wait and see.


----------



## Mussels (Sep 8, 2011)

Not to mention the HyperMemory thing was, IIRC, a China-only deal. For all we know, the blame there lay with whoever they hired for box art design, and not Gigabyte themselves.


In this case, we know that however it turns out, Giga IS responsible. This time around it's clearly official marketing, and not just random misleading box art/stickers on a few products.


----------



## neliz (Sep 8, 2011)

Mussels said:


> Not to mention the HyperMemory thing was, IIRC, a China-only deal. For all we know, the blame there lay with whoever they hired for box art design, and not Gigabyte themselves.
> 
> 
> In this case, we know that however it turns out, Giga IS responsible. This time around it's clearly official marketing, and not just random misleading box art/stickers on a few products.



It also involved stickers on the card to indicate 1 GB. And China is a huge market, BTW.


----------



## n-ster (Sep 8, 2011)

is it me or is the Source for the MSI slides missing?

I don't fully get the slides, can someone summarize them for me so they are a bit more understandable?


----------



## Mussels (Sep 8, 2011)

n-ster said:


> is it me or is the Source for the MSI slides missing?
> 
> I don't fully get the slides, can someone summarize them for me so they are a bit more understandable?



gigabyte: we have PCI-E 3!

MSI: we have PCI-E3... and gigabyte doesnt. liars.


----------



## Suhidu (Sep 8, 2011)

n-ster said:


> is it me or is the Source for the MSI slides missing?
> 
> I don't fully get the slides, can someone summarize them for me so they are a bit more understandable?



Ivy Bridge (successor to Sandy Bridge) CPUs will support PCI-E 3.0, which is faster than the current PCI-E 2.0. Ivy Bridge CPUs are slated to work on current socket 1155 motherboards. However, if you want PCI-E 3.0 speeds, then the interconnects on the motherboard must also support it. MSI is claiming that Gigabyte is using PCI-E 2.0 switches (among other parts) on their motherboards, thus limiting speeds to PCI-E 2.0 (even though they'll work with and be "Ready for" PCI-E 3.0 cards).
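For concreteness, the Gen 2 vs Gen 3 lane speeds behind this can be checked with back-of-the-envelope math. A quick sketch (the helper function is just for illustration; the transfer rates and line encodings are the published PCIe spec figures):

```python
# Per-lane PCIe bandwidth from transfer rate and line encoding.
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (80% efficiency).
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding (~98.5% efficiency).

def lane_gbytes_per_s(transfer_gt_per_s, payload_bits, coded_bits):
    """Effective one-direction bandwidth of a single lane, in GB/s."""
    return transfer_gt_per_s * (payload_bits / coded_bits) / 8  # bits -> bytes

gen2 = lane_gbytes_per_s(5.0, 8, 10)     # 0.5 GB/s per lane
gen3 = lane_gbytes_per_s(8.0, 128, 130)  # ~0.985 GB/s per lane

print(f"PCIe 2.0 x16: {16 * gen2:.2f} GB/s")  # 8.00 GB/s
print(f"PCIe 3.0 x8 : {8 * gen3:.2f} GB/s")   # 7.88 GB/s
print(f"PCIe 3.0 x16: {16 * gen3:.2f} GB/s")  # 15.75 GB/s
```

So a Gen 3 lane carries roughly double what a Gen 2 lane does, which is why a link capped at Gen 2 by its switches gives up about half the potential slot bandwidth with a Gen 3 card.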


----------



## n-ster (Sep 8, 2011)

Suhidu said:


> Ivy Bridge (successor to Sandy Bridge) CPUs will support PCI-E 3.0, which is faster than the current PCI-E 2.0. Ivy Bridge CPUs are slated to work on current socket 1155 motherboards. However, if you want PCI-E 3.0 speeds, then the interconnects on the motherboard must also support it. MSI is claiming that Gigabyte is using PCI-E 2.0 switches (among other parts) on their motherboards, thus limiting speeds to PCI-E 2.0 (even though they'll work with and be "Ready for" PCI-E 3.0 cards).



Is MSI claiming GB's mobos won't be running in PCI-E 3.0 mode then? Because if the argument is speed, wouldn't PCI-E 2.0 x16 "speeds" be equal to PCI-E 3.0 x8 speeds?

@Mussels I meant a more detailed summary


----------



## sneekypeet (Sep 8, 2011)

You all need to read: Gigabyte only claims the bandwidth of PCI-E 3.0, nothing more!

As for "Native", sure, once you add an Ivy Bridge it's native to that NB chipset!


----------



## neliz (Sep 8, 2011)

n-ster said:


> is it me or is the Source for the MSI slides missing?
> 
> I don't fully get the slides, can someone summarize them for me so they are a bit more understandable?



Powerpoint is here, some other sites are referring to it: http://media.msi.com/main.php?g2_itemId=68762


----------



## Lordbollo (Sep 8, 2011)

Steven B said:


> Just because something technical is put on a slide and delivered by a company for once doesn't mean what the text says is true or applicable to the matter at hand.
> 
> I think it is very nice that MSI took the time to make those slides, and I think it's a great way for them to show the community how PCI-E lane allotment and the standards work. The sad part is that it doesn't disprove that the first x8 lanes of the first x16 slot on the UD4 are PCI-E 3.0 capable.
> 
> Besides, if what VR-Zone says is true, then this doesn't matter.



And just because someone posts a counter claim on a forum doesn't mean that it is to be believed as well.

All I am saying is that they have reported (however truthfully) that the UD4 isn't compliant, and you came in and started talking about the UD7, which, after reading the story TPU put up, wasn't even mentioned. 

I will agree that, as you said, it doesn't really disprove anything, but why did you introduce the UD7 into this in the first place? Nowhere in the article is it mentioned.


----------



## n-ster (Sep 8, 2011)

Is this the UD7 he might be talking about?


----------



## cool_recep (Sep 8, 2011)

sneekypeet said:


> Ha it would be awesome if gigabyte was doing a revision and MSI just looks like more of an ass for being a douche all the way around, instead of just stealing ideas and calling it your own!
> 
> IDK guess I'm just so tired of people who are obviously wrong making others look bad to get themselves on the next rung of the ladder.



And what about Gigabyte's stealing the HiC Cap from MSI?


----------



## neliz (Sep 8, 2011)

n-ster said:


> Is MSI claiming GB's mobos won't be running in PCI-E 3.0 mode then? because if the argument is speed, wouldn't PCI-E 2.0 x16 "speeds" be = to PCI-E 3.0 x8 speeds?



PCI-E 2.0 x16 is slower than PCI-E 3.0 x8.
I've talked with some knowledgeable MB folks and they all say that the CPUs will stay in 2.0 x16 mode when you install a PCI-E 3.0 card in it without the proper switches.


----------



## sneekypeet (Sep 8, 2011)

cool_recep said:


> And what about Gigabyte's stealing the HiC Cap from MSI?



I wasn't saying GB is more right than MSI. Just that MSI isn't any better


----------



## Lordbollo (Sep 8, 2011)

n-ster said:


> http://www.techpowerup.com/img/11-09-07/41b.jpg
> 
> Is this the UD7 he might be talking about?



Um OK I have egg on my face sorry. I was more looking at the cpu-z pics.


----------



## Ultim8 (Sep 8, 2011)

Looks like after MSI's mud throwing they may end up with egg on their face!!!

http://vr-zone.com/articles/the-upg...ight-be-blocked-by-changes-to-uefi/13513.html

Hmmm gigabyte = Dual Bios + No UEFI


----------



## neliz (Sep 8, 2011)

Lordbollo said:


> Um OK I have egg on my face sorry. I was more looking at the cpu-z pics.



No, Don't be ashamed, you were right and n-ster got stuck in Gigabyte's quagmire of lies, since the press release said that their *ENTIRE* (yeah ENTIRE!) 6 series line-up was compatible:

http://www.gigabyte.us/press-center/news-page.aspx?nid=1048



> *GIGABYTE Announces Entire 6 Series Ready to Support Native PCIe Gen. 3*
> Future Proof Your Platform for Next Generation Intel 22nm CPUs
> 2011/08/08
> 
> ...



So yeah, there you have it, even with the original press release they already knew they were lying since the UD7 has the NF200 and you will NOT have maximum data bandwidth for future discrete graphics cards.





Ultim8 said:


> Hmmm gigabyte = Dual Bios + No UEFI


Yeah, there are going to be loads of ****ed off Gigabyte customers once they find out their 2011 system can't run Windows 8.


----------



## Ultim8 (Sep 8, 2011)

the UD7 isn't on the list???


----------



## neliz (Sep 8, 2011)

Ultim8 said:


> the UD7 isn't on the list???



Yes, Gigabyte already lied in the original press release, since "entire lineup" would include the UD7s.


----------



## Ultim8 (Sep 8, 2011)

Windows 8??? Windows 8 doesn't need UEFI.

Even OA3, which is for large SIs, doesn't need UEFI, as the BIOS strings can still be built into older BIOSes.

Also, on Gigabyte's UD7 board, if I remember rightly, the NF200 chip isn't activated until 3 or more PCIe x16 slots are used, which is different from the way boards such as ASUS's ROG boards work, where they send all PCIe traffic via the NF200. So in theory it should work for the first PCIe slot too. I think there are two ways to look at the news: Gigabyte have said they are Native PCIe Gen. 3 ready, which suggests you can use PCIe 3.0.

Now, I know you can use PCIe 3.0 without the switches, but only in the first slot. MSI's pictures in the presentation are misleading, because they show the switches below the path from the CPU. In reality these chips are there to bridge PCIe lanes.

This means that the first PCIe x16 slot has a direct connection to the CPU, so it will become a PCIe 3.0 slot. However, the speed through the switches will be reduced to the limits of the switch.

So something like this:

Ivy Bridge CPU-------PCIe3---Switch Gen2------PCIe2

vs

Ivy Bridge CPU-------PCIe3---Switch Gen3------PCIe3

Now, depending on your view, you might say Gigabyte are the good guys, as they are giving existing customers PCIe 3.0, albeit only in one slot, whereas other manufacturers are charging you to upgrade for Gen3 support.

They could have made it clearer, I agree, but I think MSI are mud throwing here and they will come out worse for it. Especially if Ivy Bridge needs them to wipe all their UEFI BIOSes, which can't be done at a service or reseller level!


----------



## jfk1024 (Sep 8, 2011)

*Why is MSI wrong?*

WHY IS MSI WRONG? ...because the PCI-E 3.0 physical layer is the same as PCI-E 2.0's. The only thing that is different is the 128b/130b encoding. The data is encoded/decoded by the PCI-E controller in the processor (Sandy Bridge) and in the graphics card. So PCI Express 3.0 has the same physical characteristics as PCI Express 2.0, which means: if the PCI-E controller knows how to encode and decode PCI-E 3.0 data, then we can transfer PCI-E 3.0 data through a PCI-E 2.0 physical link.


----------



## Ultim8 (Sep 8, 2011)

Correct, the controller is on the CPU, but the traces, sockets etc. are identical, so in theory every board manufacturer could give you PCIe 3.0 support for the first slot, but only Gigabyte did this... why don't the others???

You decide, but I know why.


----------



## cadaveca (Sep 8, 2011)

MSi has updated their entire lineup to be fully PCIe 3.0 ready. All boards have been revised with new components, and feature the (G3) moniker. I reviewed one of these boards, the GD65, last week or the week before.

The only question I have is why did MSi single out Gigabyte? What makes them different from, say, ASUS?

ASUS hasn't mentioned PCIe 3.0 at all, that I can tell.


----------



## jfk1024 (Sep 8, 2011)

PS: Probably MSI needs a PCI-E 3.0 GPU to make some real "testing"


----------



## Ultim8 (Sep 8, 2011)

Cadaveca, it could come back and bite everyone in the ass, though, apart from Gigabyte.

Read this http://vr-zone.com/articles/the-upg...ight-be-blocked-by-changes-to-uefi/13513.html

lol


----------



## neliz (Sep 8, 2011)

jfk1024 said:


> WHY IS MSI WRONG? ...because the PCI-e 3.0 physical layer is the same as PCI-e 2.0.



If you read the presentation, you also see the other components that need to be used besides the switches.
You can't just expect 5 GT/s circuitry to handle 8 GT/s data without issues.




Ultim8 said:


> Cadaveca, it could come back and bite everyone in the ass, though, apart from Gigabyte.
> lol



I fail to see how you could be unable to do a complete ROM reflash on any mainboard, or why you'd require a dual BIOS for it, since it's pretty much standard business in the server world.

Unless, of course, a certain company that likes to promote dual BIOSes on their boards would tell the press that "it might" and "it could" to make people afraid.




jfk1024 said:


> PS: Probably MSI needs a PCI-E 3.0 GPU to make some real "testing"


Yes, or a simple Gen3 card with test chip.


----------



## Ultim8 (Sep 8, 2011)

Neliz, it does. The controller is on the CPU.

For multi-GPU, yes, you need PCIe 3.0 switches for the bridge, but Gigabyte have given all their 6 series boards PCIe Gen3 for the first slot at least.


----------



## RoutedScripter (Sep 8, 2011)

HAHAHA, lol, big win for MSI PR... what a good opportunity taken for some technical explanation, which might not only win people over to buying MSI boards but also build a better company image.

Shame on you, Gigabyte... I would have expected it to be ASUS.


----------



## neliz (Sep 8, 2011)

Ultim8 said:


> Neliz, it does. The controller is on the CPU.
> 
> For multi-GPU, yes, you need PCIe 3.0 switches for the bridge, but Gigabyte have given all their 6 series boards PCIe Gen3 for the first slot at least.



Let me make this VERY clear for you: in a LOT of these boards, data travels through the PCI Express switches because it NEEDS TO.
Otherwise it's impossible to switch the first slot between x16 and x8.


----------



## Ultim8 (Sep 8, 2011)

neliz said:


> Let me make this VERY clear for you. in a LOT of these boards, data still travels to the PCI express switches as it NEEDS TO.
> Otherwise it's impossible to switch the first slot between x16 and x8.
> 
> http://www.techpowerup.com/img/11-09-07/41g.jpg



Only if you use the 2nd slot. If you don't use the 2nd slot you don't need the switch... then there is no bottleneck. Get it?


----------



## neliz (Sep 8, 2011)

Ultim8 said:


> Only if you use the 2nd slot. If you don't use the 2nd slot you don't need the switch... then there is no bottleneck. Get it?



Sorry, but if you don't know the first thing about how a Sandy Bridge CPU switches between 16/0 and 8/8, why the switch chips are there, and how the traces work, I'll need to use MSPaint to draw pictures. HOLD ON!


----------



## jfk1024 (Sep 8, 2011)

neliz said:


> If you read the presentation, you also see the other components that need to be used besides the switches.
> You can't just expect 5 GT/s circuitry to handle 8 GT/s data without issues.



The physical links are the same. So... I know you need a new physical link for every new encoding, but you can also use the same physical link to transfer differently encoded data.


----------



## neliz (Sep 8, 2011)

jfk1024 said:


> The physical links are the same. So... I know you need a new physical link for every new encoding, but you can also use the same physical link to transfer differently encoded data.



You're talking about PCI Express 2.1 then? Which, with 128b/130b encoding, has exactly the same BW as PCI Express 2.0?


----------



## Ultim8 (Sep 8, 2011)

lol.

Right, the PCIe controller is on the CPU, do you agree?

So if the controller is on the CPU and the traces are identical (they are), then why won't a single graphics card work at x16 Gen3?

The switch is only used once a 2nd GPU is inserted.


----------



## neliz (Sep 8, 2011)

PCI Express switching 101:

On top is the CPU.
8 PCIe lanes go to the first slot.
8 PCIe lanes go to the 4 PCI Express switches.
If no card is detected in the second slot, all traffic will go to slot 1.
The clock gen for the second PCIe card is actually housed in the PCH/southbridge.

The only way to do 8/8 is by having switches and a setup like in the picture below.
No switches, i.e. all 16 lanes going straight from the CPU to the first slot, results in 16/4 setups for CrossFire, for instance.
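The lane layout above can be sketched as a toy routing function (the helper and its names are made up for illustration, not taken from any datasheet):

```python
# Sketch of the x16 / x8+x8 mux described above: lanes 0-7 are hardwired
# from the CPU to slot 1, while lanes 8-15 reach slot 1 OR slot 2 through
# the lane switches. Names are illustrative only.
def route_lanes(slot2_populated):
    """Map each 8-lane group to (destination slot, passes_through_switch)."""
    return {
        "lanes 0-7": ("slot1", False),  # direct CPU traces
        "lanes 8-15": ("slot2" if slot2_populated else "slot1", True),
    }

print(route_lanes(False))  # x16 to slot 1, half the lanes via the switches
print(route_lanes(True))   # x8/x8 split across both slots
```

The point of contention is visible here: even in single-card x16 mode, lanes 8-15 still traverse the switches, so a Gen2-only switch would cap those lanes regardless of what the CPU and card can do.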


----------



## n-ster (Sep 8, 2011)

neliz said:


> No, Don't be ashamed, you were right and n-ster got stuck in Gigabyte's quagmire of lies, since the press release said that their *ENTIRE* (yeah ENTIRE!) 6 series line-up was compatible:
> 
> http://www.gigabyte.us/press-center/news-page.aspx?nid=1048



I never said GB did or did not lie, I was just showing the UD7 thing that the other was talking about.



neliz said:


> PCI-E 2.0 x16 is slower than PCI-E 3.0 x8.
> I've talked with some knowledgeable MB folks and they all say that the CPUs will stay in 2.0 x16 mode when you install a PCI-E 3.0 card in it without the proper switches.



I might sound stupid, but...






PCI-E 2.x is 5 GT/s, but with the encoding overhead it is actually closer to 4 GT/s, so 16 lanes of 2.x should be equivalent to 8 lanes of PCI-E 3.0, no?


----------



## jfk1024 (Sep 8, 2011)

neliz said:


> you're talking about PCI Express 2.1 then? Which, with 128/130 encoding has exactly the same BW as PCI Express 2.0?



Let's just wait until Ivy Bridge and PCI-E 3.0 GPUs are released. Until then, it's just speculation.


----------



## Ultim8 (Sep 8, 2011)

neliz said:


> PCI Express switching 101:
> 
> On top the CPU
> 8 PCIe lanes go the the first slot
> ...



Exactly, so the first slot will be Gen3 x16 as long as the other PCIe lanes are not occupied.
This is what I said from the beginning.


----------



## cadaveca (Sep 8, 2011)

Ultim8 said:


> This is what i said from the beginning



But that is not possible. What will happen is that the primary slot will only get an x8 link that is PCIe 3.0, and the other 8 lanes will not be capable of 3.0 due to the board's hardware in the link. This may create a situation where the slot defaults to PCIe 1.0, or perhaps 2.0, because of the lane confusion.


TBH, I'm not sure, exactly, what will happen with these boards and the primary slot. It's not as simple as it seems.


----------



## neliz (Sep 8, 2011)

Ultim8 said:


> Exactly, so the first slot will be Gen3 x16 as long as the other PCIe lanes are not occupied.
> This is what I said from the beginning.



The first slot is x16 because the lanes go through the PCI Express switches, so they ARE NOT Gen3.

I've highlighted it on a gigabyte board so you can see where the lanes are coming from.


----------



## Ultim8 (Sep 8, 2011)

n-ster said:


> I never said GB did or did not lie, I was just showing the UD7 thing that the other was talking about.
> 
> 
> 
> ...



PCIe 3.0 x8 will be faster, as 128b/130b encoding is only a ~1.5% loss.


----------



## Ultim8 (Sep 8, 2011)

But Neliz, even in that diagram the PCIe switches are only touched or needed when a second PCIe device is installed???


----------



## [H]@RD5TUFF (Sep 8, 2011)

sneekypeet said:


> I wasn't saying GB is more right than MSI. Just that MSI isn't any better



Exactly, you really can't trust either, but I trust Gigabyte the least.


----------



## neliz (Sep 8, 2011)

n-ster said:


> I might sound stupid, but...



I think it will 



> PCI-E 2.X is 5GT/s, but with the overhead, it is actually closer to 4GT/s, so 16 lanes of 2.X should be equivalent to 8 lanes of PCI-E 3.0 no?




No! 

How to calculate PCI Express bandwidth:
Transfer rate * encoding efficiency

PCI Express Gen2:
Transfer rate: 5 GT/s = 5000 Mb/s = 625 MB/s per lane
625 * 8/10 (encoding) = 500 MB/s
PCI Express is full duplex, so total bandwidth = 500 * 2 = 1 GB/s per lane

16 lanes * 1 GB/s = 16 GB/s

PCI Express Gen3:
Transfer rate: 8 GT/s = 8000 Mb/s = 1000 MB/s per lane
1000 * 128/130 = 984.6 MB/s
PCI Express is full duplex, so total bandwidth = 984.6 * 2 = 1969.2 MB/s per lane
8 lanes * 1969.2 MB/s = 15753.8 MB/s ≈ 15.75 GB/s
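The arithmetic above, as a few lines of Python (a sketch; the function name is invented, and it reproduces the full-duplex totals used in this post):

```python
# PCIe usable bandwidth: per-lane signalling rate x encoding efficiency,
# summed over lanes, doubled for full duplex (as in the post above).
def pcie_bandwidth_gbps(transfer_rate_gt, enc_payload, enc_symbol, lanes,
                        full_duplex=True):
    """Return usable bandwidth in GB/s (both directions if full_duplex)."""
    per_lane = transfer_rate_gt * (enc_payload / enc_symbol) / 8  # GB/s, one way
    total = per_lane * lanes
    return total * 2 if full_duplex else total

print(f"Gen2 x16: {pcie_bandwidth_gbps(5.0, 8, 10, 16):.2f} GB/s")    # 16.00
print(f"Gen3 x8:  {pcie_bandwidth_gbps(8.0, 128, 130, 8):.2f} GB/s")  # 15.75
```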



Ultim8 said:


> But Neliz, even in that diagram the PCIe switches are only touched or needed when a second PCIe device is installed???



No, PLEASE, check the picture:

You can see that only 8 lanes are connected to the CPU (on top); the other 8 lanes come from the PCI Express switches.
If you have a good high-res picture of a board you can actually SEE the traces from the switch to the slot.


----------



## jfk1024 (Sep 8, 2011)

neliz said:


> I think it will
> 
> 
> 
> ...



Do you believe yourself saying that?


----------



## neliz (Sep 8, 2011)

jfk1024 said:


> Do you believe yourself saying that?



Yes, do you have any reason to doubt?
Please prove me wrong because I hate it when I make big mistakes in public


----------



## jfk1024 (Sep 8, 2011)

neliz said:


> Yes, do you have any reason to doubt?
> Please prove me wrong because I hate it when I make big mistakes in public



When the second PCI-E slot is NC (not connected), the switch is off, so all the PCI-E lanes are directly connected to the CPU.


----------



## n-ster (Sep 8, 2011)

So PCI-E 2.x x16 is faster than PCI-E 3.0 x8... speed-wise, PCI-E 2.x x16 is already pushing PCI-E 3.0 bandwidth, albeit PCI-E 3.0 in an 8-lane configuration.


----------



## neliz (Sep 8, 2011)

jfk1024 said:


> When the second PCI-E slot is NC (not connected), the switch is off, so all the PCI-E lanes are directly connected to the CPU.



No, because I just showed in the picture above that only 8 lanes are directly connected to the CPU; the other 8 lanes COME FROM THE SWITCH.

If you can show me a Pericom switch that has a super-secret awesome mode where it magically transforms from a 5 GT/s switch into an 8 GT/s passive transceiver, you've got me convinced.
Otherwise, I have NO idea where you get the information that the Gen2 switches can turn themselves off and do some mystical rerouting.



n-ster said:


> so PCI-E 2.X x16 is faster than PCI-E 3.0 x8... So speed-wise, PCI-E 2.X x16 is already pushing PCI-E 3.0 bandwidth, although in an 8 lane configuration


Gigabyte's wording was "Gen3 maximum bandwidth", which would be 32 GB/s, not 16 GB/s.


----------



## Steven B (Sep 8, 2011)

No, when there's no device in the second slot the lanes are directed toward slot 1 for a full x16. The whole deal with having lanes directly connected is that it saves money on switches that aren't needed.


----------



## neliz (Sep 8, 2011)

Steven B said:


> No, when there's no device in the second slot the lanes are directed toward slot 1 for a full x16. The whole deal with having lanes directly connected is that it saves money on switches that aren't needed.



Again, please show in that picture where the 16 lanes are coming from. I see 8 from the CPU and 8 from the switch. If you want, I can look for higher-res shots and highlight the traces for you.

If traffic passes through the switches, it's Gen2, simple as that.


----------



## n-ster (Sep 8, 2011)

neliz said:


> Gigabyte's wording was "Gen3 maximum bandwidth" which would be 32GB/s not 16GB/s



AFAIK, the wording was 





> as well as offer native support for PCI Express Gen. 3 technology, delivering maximum data bandwidth for future discrete graphics cards



which does not mean maximum bandwidth of PCI-E 3.0 IMO


----------



## neliz (Sep 8, 2011)

n-ster said:


> AFAIK, the wording was
> 
> which does not mean maximum bandwidth of PCI-E 3.0 IMO



The maximum data bandwidth of the upcoming graphics cards is ~32 GB/s as far as I know.
"Maximum" in that sentence refers to the graphics card, not the maximum of the motherboard.

The fact that you and I can't agree on what the wording means makes it clear to me that Gigabyte intended to mislead consumers into thinking the boards support 32 GB/s, or as they put it, "maximum data bandwidth for future discrete graphics cards".


----------



## n-ster (Sep 8, 2011)

it says DISCRETE graphic cards


----------



## neliz (Sep 8, 2011)

n-ster said:


> it says DISCRETE graphic cards



No, it doesn't

go to the GBT site and read.



> future discrete graphic*s* cards



See, you missed an S there 

Now, again, I'll type slowly, maybe that will help.

They say FUTURE DISCRETE GRAPHICS CARDS and hint at, for instance, AMD's Radeon 7000 series or NVIDIA's GeForce 600 series.
These cards are rumored (or confirmed already?) to have PCI Express 3.0 x16.
That means ~32GB/s _maximum data bandwidth_ (gigacheat's own words)

NOW
THE LINK

Putting out a PR statement that your motherboards are _delivering maximum data bandwidth_, in correlation with Gen3 compatibility and Gen3 graphics cards, at least to me sounds like they are talking about the same thing: Gen3 x16.


----------



## TheLostSwede (Sep 8, 2011)

I think the plural in this case refers to the fact that there is more than one company that makes cards, not so much that you'll be able to run more than one card in these boards, but whatever...
Interpreting written language is an art; just ask all those people that have a different opinion about what it says in all the "holy" books out there... wars have been started over it, so hey...


----------



## dazz (Sep 8, 2011)

jfk1024 said:


> When the second PCI-E slot is NC (not connected), the switch is off, so all the PCI-E lanes are directly connected to the CPU.





Steven B said:


> No, when there's no device in the second slot the lanes are directed toward slot 1 for a full x16. The whole deal with having lanes directly connected is that it saves money on switches that aren't needed.



Neliz is right for all I know. The switches work by multiplexing 8 lanes, so depending on the slot configuration you have those 8 lanes connected (switched) to slot 1 (only the 1st slot populated, @ x16) or slot 2 (both slots populated, @ x8/x8), but always through the switch, because that's how the lanes are wired. 
In motherboards with no SLI/CrossFire support, where there's only one PCIe x16 slot, there's no need for switches and all 16 lanes are hardwired to the x16 slot. Those may work at full 3.0 speeds if the capacitors and resistors don't need to be upgraded too, but then again, who needs PCIe 3.0 in single-GPU setups? The x4 slot gets its bandwidth from the PCH/DMI connection, if I'm not wrong.



cadaveca said:


> But that is not possible. *What will happen is that the primary slot will only get an x8 link that is PCIe 3.0, and the other 8 lanes will not be capable of 3.0* due to the board's hardware in the link. This may create a situation where the slot defaults to PCIe 1.0, or perhaps 2.0, because of the lane confusion.
> 
> 
> TBH, I'm not sure, exactly, what will happen with these boards and the primary slot. It's not as simple as it seems.



That's what I would like to know too, but if it's possible to have one slot at x8 PCIe 3.0 and the second at x8 PCIe 2.0 in SLI/CrossFire, you already have 50% extra bandwidth, theoretically, in multi-GPU setups, where it may help someday. 
I mean, if you can have tri-SLI setups at x16/x8/x8, it should be possible to have that too, since it's different slots at different speeds, not different lanes in the same slot at different speeds.


----------



## n-ster (Sep 9, 2011)

neliz said:


> No, it doesn't
> 
> go to the GBT site and read.
> 
> ...



Future discrete GPUs will not utilize more than PCI-E 3.0 x8, for sure. Just because USB 3.0 is capable of 4 Gbps doesn't mean a USB 3.0 flash drive will use the full 4 Gbps. There are SATA 6 Gbps HDDs, but those HDDs don't even max out SATA 3 Gbps!!! A future DISCRETE graphics card is not going to use more than 16 GB/s, that is for sure, so you are getting maximum data bandwidth for future DISCRETE GPUs!


----------



## jfk1024 (Sep 9, 2011)

neliz said:


> Again, please show in that picture where the 16 lanes are coming from. I see 8 from the CPU and 8 from the switch. If you want, I can look for higher-res shots and highlight the traces for you.
> 
> If traffic passes through the switches, it's Gen2, simple as that.



So... you are saying that when the second PCI-E slot is NC, data is sent from the CPU over the first 8 lanes, and data is also sent through the PCI-E switch over the last 8 lanes?


----------



## cadaveca (Sep 9, 2011)

jfk1024 said:


> So... you are saying that when the second PCI-E slot is NC, data is sent from the CPU over the first 8 lanes, and data is also sent through the PCI-E switch over the last 8 lanes?



Yes, that is it, exactly. So, how is there any slot with real PCIe 3.0 on a board that does not have these PCIe 3.0 switches, and offers both x16 and x8/x8?

Will these boards do PCIe 3.0 x8 to the first slot only? Or will they report PCIe 3.0, but not actually be doing PCIe 3.0? And how does that work with the second slot?


I do not know how this will work. I need PCIe 3.0 CPUs and VGAs before I can comment on what's really going to happen here, and neither is expected, that I know of, in the next 6 months.


----------



## dazz (Sep 9, 2011)

jfk1024 said:


> So... you are saying that when the second PCI-E slot is NC, data is sent from the CPU over the first 8 lanes, and data is also sent through the PCI-E switch over the last 8 lanes?



that's how I understood it


----------



## dazz (Sep 9, 2011)

cadaveca said:


> Yes, that is it, exactly. So, how is there any slot with real PCIe 3.0 on a board that does not have these PCIe 3.0 switches, and offers both x16 and x8/x8?
> 
> Will these boards do PCIe 3.0 x8 to the first slot only? Or will they report PCIe 3.0, but not actually be doing PCIe 3.0? And how does that work with the second slot?
> 
> ...



I really have no idea, but again, I'm guessing that at least in dual-GPU setups, having the first slot at x8 3.0 and the second (the switched one) at x8 2.0 makes sense (unless new resistors and capacitors are needed, as MSI says). 
After all, it's not uncommon to have different slots at different speeds in tri-SLI scenarios, like X58 at x16/x8/x8.


----------



## neliz (Sep 9, 2011)

dazz said:


> that's how I understood it
> 
> http://cdn1.techbang.com.tw/system/...c12cec39928b4abd3539cd23d57eec.png?1313086800



Thanks, Dazz! That picture is clearer than anything I can paint.exe.


----------



## jfk1024 (Sep 9, 2011)

dazz said:


> that's how I understood it
> 
> http://cdn1.techbang.com.tw/system/...c12cec39928b4abd3539cd23d57eec.png?1313086800



So a motherboard with a single PCI-E slot connected directly to the CPU is a PCI-E 3.0-ready motherboard, right?


----------



## jfk1024 (Sep 9, 2011)

http://www.gigabyte.com/press-center/news-page.aspx?nid=1048

" * The specifications are subject to change without notice. "


----------



## neliz (Sep 9, 2011)

jfk1024 said:


> So a motherboard with a single PCI-E slot connected directly to the CPU is a PCI-E 3.0-ready motherboard, right?



No, not necessarily, as it still lacks the capacitors and resistors required by Intel for Gen3 validation. 
Short version: no.


----------



## dazz (Sep 9, 2011)

jfk1024 said:


> So a motherboard with a single PCI-E slot connected directly to the CPU is a PCI-E 3.0-ready motherboard, right?



I'm not an expert on the matter, so take what I say with a grain of salt, but it looks like, unless there's something else in the circuitry apart from the switches (like those resistors and capacitors) that needs upgrading to achieve 3.0 speeds, then yes, those boards with a single x16 slot should be PCIe 3.0 ready. But with just one GPU, PCIe bandwidth shouldn't be an issue at all for a long time; even in dual SLI/CrossFire setups it's very unlikely that PCIe 2.0 bandwidth will be a limiting factor with Kepler or AMD's 7000 series.


----------



## neliz (Sep 9, 2011)

dazz said:


> even in dual SLI/Xfire setups it's very unlikely that PCIe BW will be a limiting factor with Kepler or AMD's 7000 series.



And what if AMD/NV release their professional parts or compute-oriented models first? Bandwidth requirements there are much bigger than in games.


----------



## dazz (Sep 9, 2011)

neliz said:


> And what if AMD/NV releases their professional parts or Compute oriented models first? BW requirements there are much bigger than in games.



Obviously that would change everything. I guess there must be some applications right now that would take advantage of the increased bandwidth, but not in gaming, for now. 

And there's something else we may be overlooking: PCIe 3.0 power draw specs are up to 375 W per slot (I think). Is current circuitry capable of that? Maybe that's why the resistors and capacitors need upgrading? Will the controller be able to detect that and downgrade all the slots to 2.0 speeds, with or without switches?


----------



## neliz (Sep 9, 2011)

dazz said:


> And there's something else we may be overlooking: PCIe 3.0 power draw specs are up to 375W per slot (I think). Is current circuitry capable of that? maybe that's why the resistors and capacitors need upgrading? Will the controller be able to detect that and downgrade all the slots to 2.0 speeds with or without switches?



No, PCI-SIG didn't change anything related to the power, so you'll still have your same limits.

The resistors and caps are there, I think, because of the increased frequency (signal integrity).


----------



## dazz (Sep 9, 2011)

neliz said:


> No, PCI-SIG didn't change anything related to the power, so you'll still have your same limits.
> 
> The resistors and caps are there I think because of the increased frequency (signal integrity)



Ok, thanks for the clarification. 
Truth is, I'm not too worried about this. I know I don't need PCIe 3.0... I'm much more concerned with the UEFI thing now! Hope it's not true and I can upgrade to IB with my current P67 board if I want to.


----------



## neliz (Sep 9, 2011)

dazz said:


> Ok, thanks for the clarification.
> Truth is I'm not too worried about this. I know I don't need PCIe 3.0... I'm much more concerned with the UEFI thing now! hope it's not true and I can upgrade to IB with my current P67 board if I want to.



I for one am not worried about this; in the past 15 years I've had zero problems doing complete firmware rewrites, microcode updates and BIOS reflashes on business PCs and Itanium servers.
Why would a slight change from one version of UEFI to the next have more impact?

I just think Gigabyte is trying to use some Google Translate errors there to feed the media a scary story.


----------



## Millennium (Sep 9, 2011)

I would hate to think I upgraded to Sandy Bridge for no reason. My main motivation was to be ready for Ivy Bridge. Dammit!


----------



## jfk1024 (Sep 9, 2011)

neliz said:


> No, not necessarily as it still lacks the required capacitors and resistors required by Intel for Gen3 validation.
> Short version: No



Required by Intel? What is this, the PCI-E specification or an Ivy Bridge requirement?


----------



## jfk1024 (Sep 9, 2011)

This is just a marketing strategy. Ivy Bridge and PCI-E 3.0 will work just fine on Gigabyte motherboards.


----------



## neliz (Sep 9, 2011)

jfk1024 said:


> This is just a marketing strategy. Ivy Bridge and PCI-E 3.0 will work just fine on Gigabyte motherboards.



So you're saying that I get 32 GB/s of bandwidth on a board with PCI Express switches? Wow. 

Because that completely defies logic.

If you mean Gigabyte boards will never actually use Gen3, because the cards and CPU will have to switch down to Gen2, and call that "support", then sure, I believe you.


----------



## jfk1024 (Sep 9, 2011)

neliz said:


> So you're saying that I get 32GB/s bandwidth on a board with PCI express switches, wow
> 
> Because that completely defies logic.
> 
> If you mean, Gigabyte boards will never use Gen3 because the cards and CPU will have to switch down to Gen2 and by that way mean "support" then, sure, I believe you



Even PCI-E 2.0 is just a marketing strategy, I mean in terms of real-world performance. Reality defies everything.


----------



## Arctic Pidgeon (Sep 9, 2011)

lol, Neliz, you're obviously an MSI employee; after finding this thread on Google I had to join in.

How can you say Gigabyte's boards won't work when there are no PCIe 3.0 cards yet?
But then you're dismissing the news about the UEFI BIOS not working with Ivy Bridge immediately...

Hmmm, looks like I'm right, Mr. Dennis Achterberg, Product Marketing Officer at MSI - Micro-Star International Co., Ltd.

http://www.linkedin.com/in/dennisachterberg


----------



## neliz (Sep 9, 2011)

Arctic Pidgeon said:


> lol Neliz you obviously an MSI employee, after finding this thread on google i had to join in.
> 
> How can you say Gigabytes boards wont work when there is no pcie3 cards yet.
> But then your dismissing the news about uefi bios not working with ivy bridge immediately....
> ...



1. Wow, amazing Google skills.
2. There are design rules for 22mm clue and Gen3 that were published long after those boards were designed and manufactured.
3. There are things called test boards and related measurement equipment that, for instance, Intel, NVIDIA and we use to verify boards, slots, CPUs etc.
4. We haven't seen any signs during testing that a UEFI update can't continue because there's only one chip. Do you really, really believe that Intel would want to do a recall on those millions of Sandy Bridge mainboards out there?

Sent from my HTC


----------



## Arctic Pidgeon (Sep 9, 2011)

4)  Recall no that would never happen would it.......*Cough* B3 *Cough*


----------



## tallyhoe (Sep 9, 2011)

neliz said:


> I hate it when I make big mistakes in public


----------



## neliz (Sep 9, 2011)

Arctic Pidgeon said:


> 4)  Recall no that would never happen would it.......*Cough* B3 *Cough*



That was relatively early in the lifecycle of Sandy Bridge, with a non-fatal issue (though very bothersome).

Business-wise it also doesn't make sense for Microsoft to demand something of mainboards that would limit it only to the latest generation of compatible mainboards.

I.o.w., don't worry.


----------



## jfk1024 (Sep 9, 2011)

neliz said:


> That was relatively early in the lifecycle of Sandy Bridge, with a non-fatal issue (though very bothersome).
> 
> Business-wise it also doesn't make sense for Microsoft to demand something of mainboards that would limit it only to the latest generation of compatible mainboards.
> 
> I.o.w., don't worry.



A recall would be good, just for Gigabyte :). 22mm is quite big for a fabrication process for CPUs, right?


----------



## jfk1024 (Sep 9, 2011)

neliz said:


> 1. Wow, amazing Google skills.
> 2. There are design rules for 22mm CPUs and Gen3 that were published long after those boards were designed and manufactured.
> 3. There are things called test boards and related measurement equipment that, for instance, Intel, NVIDIA and we use to verify boards, slots, CPUs etc.
> 4. We haven't seen any signs during testing that a UEFI update can't continue because there's only one chip. Do you really believe that Intel would want to do a recall on those millions of Sandy Bridge mainboards out there?
> ...



AND... I haven't seen any signs during testing that my PCI-E 3.0 GPU won't work with my Ivy Bridge processor on my Gigabyte motherboard just as well as on your MSI motherboard.


----------



## neliz (Sep 9, 2011)

jfk1024 said:


> AND... I haven't seen any signs during testing that my PCI-E 3.0 GPU won't work with my Ivy Bridge processor on my Gigabyte motherboard just as well as on your MSI motherboard.



You haven't done testing and I've shown you screenshots.


----------



## jfk1024 (Sep 9, 2011)

neliz said:


> You haven't done testing and I've shown you screenshots.



What screenshots?

The screenshot with the Gigabyte motherboard?


Let's make a bet: I bet that the first PCI-E 3.0 GPU on the market will have the same performance in real-world gaming on fake PCI-E 3.0 Gigabyte motherboards as on your real PCI-E 3.0 MSI motherboard.


----------



## neliz (Sep 9, 2011)

jfk1024 said:


> what screenshots?



Are you actually paying attention to the OP or did you just register to defend Gigabyte? 







22nm CPUs on GBT boards with the advertised BIOSes switch DOWN when a Gen3 card is inserted.


----------



## jfk1024 (Sep 9, 2011)

neliz said:


> Are you actually paying attention to the OP or did you just register to defend Gigabyte?
> 
> http://www.techpowerup.com/img/11-09-07/41i.jpg
> 
> 22nm CPUs on GBT boards with the advertised BIOSes switch DOWN when a Gen3 card is inserted.



How sweet, you've tested Gigabyte motherboards. I wonder... what happens when the people from Gigabyte test your cards?


----------



## neliz (Sep 9, 2011)

jfk1024 said:


> How sweet, you've tested Gigabyte motherboards. I wonder... what happens when the people from Gigabyte test your cards?



The cards are not ours; they're from a third party providing boards for Gen3 testing.

So all we did was get a Gigabyte board, put a BIOS on it that Gigabyte advertises, and then see if it would actually work as advertised (and required by Intel).
And you've seen the end result.


----------



## dazz (Sep 9, 2011)

One thing is knowing it won't make a difference performance-wise in games (but for how long?); another is the flood of angry GB customers reporting that GPU-Z shows their "PCIe 3.0 ready" slots running at half the speed they expected.
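
For anyone who wants to check this without GPU-Z, here is a minimal, Linux-only sketch (an assumed environment, not part of anyone's tooling in this thread) that reads each PCI device's negotiated link speed and its maximum supported speed from sysfs. The `current_link_speed`/`max_link_speed` attribute names are standard Linux sysfs, but their availability depends on the kernel build:

```python
# Linux-only sketch: compare each PCI device's negotiated link speed
# against what it claims to support, via sysfs, and flag devices
# running below capability.
from pathlib import Path

def link_report(devices_root: str = "/sys/bus/pci/devices"):
    """Yield (device, current speed, max speed) for devices exposing both."""
    for dev in sorted(Path(devices_root).iterdir()):
        cur = dev / "current_link_speed"
        cap = dev / "max_link_speed"
        if cur.exists() and cap.exists():
            yield dev.name, cur.read_text().strip(), cap.read_text().strip()

if __name__ == "__main__" and Path("/sys/bus/pci/devices").is_dir():
    for name, current, maximum in link_report():
        note = "  <-- below capability" if current != maximum else ""
        print(f"{name}: {current} (max {maximum}){note}")
```

Note that a GPU dropping to a lower speed at idle is normal power management; what matters is the speed negotiated under load.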


----------



## Arctic Pidgeon (Sep 9, 2011)

neliz said:


> That was relatively early in the lifecycle of Sandy Bridge, with a non-fatal issue (though very bothersome).
> 
> Business-wise it also doesn't make sense for Microsoft to demand something of mainboards that would limit it only to the latest generation of compatible mainboards.
> 
> I.o.w., don't worry.



Neliz, the UEFI issue is not about Microsoft, it's about Ivy Bridge support.
So far the news sounds as if Intel has made another mistake and UEFI needs a complete re-wipe, which according to some news can only be done above service or end-user level (so a recall, or just "stuff it, buy the next platform").

The UEFI issue you are talking about sounds like the "Windows 8 NEEDS UEFI" one; if you're saying this is not the case, please speak to your counterparts, as they are saying you need UEFI for x86 hardware.

This is also incorrect: only ARM platforms require UEFI for Win 8.


----------



## neliz (Sep 9, 2011)

Arctic Pidgeon said:


> The UEFI issue you are talking about sounds like the "Windows 8 NEEDS UEFI" one; if you're saying this is not the case, please speak to your counterparts, as they are saying you need UEFI for x86 hardware.
> 
> This is also incorrect: only ARM platforms require UEFI for Win 8.



I'll tell the people from Microsoft who are sending us these roadmaps!


----------



## jfk1024 (Sep 9, 2011)

neliz said:


> the cards are not ours  they're from a third party providing boards for Gen3 testing
> 
> So all we did was get a gigabyte board, put a bios on it that gigabyte advertises and then see if it would actually work as advertised (and required by Intel.)
> And you've seen the end result



I will rephrase: I wonder... what happens when the people from Gigabyte test your MSI motherboards?


----------



## neliz (Sep 9, 2011)

jfk1024 said:


> I will rephrase: I wonder... what happens when the people from Gigabyte test your MSI motherboards?



Our G3 boards? They'll get full PCI Express 3.0 x16.

How do I know? Because (for instance) Intel already tested and certified our boards.

Capiche?


----------



## jfk1024 (Sep 9, 2011)

neliz said:


> Our G3 boards? They'll get full PCI Express 3.0 x16.
> 
> How do I know? Because Intel already tested and certified our boards.
> 
> Capiche?



So, explain something to me. Gigabyte made that claim before the final PCI-E 3.0 adjustments and before having an Ivy Bridge processor or a PCI-E 3.0 GPU, right? Then why attack them?


----------



## neliz (Sep 9, 2011)

jfk1024 said:


> So, explain something to me. Gigabyte made that claim before the final PCI-E 3.0 adjustments and before having an Ivy Bridge processor or a PCI-E 3.0 GPU, right? Then why attack them?



Gigabyte made that claim AFTER the final requirements were known.
They retroactively applied it to all older boards, hoping no one would notice. (Fooling customers; they're really good at it.)

Gigabyte probably DID have a 22nm CPU sample AND PCI-E 3.0 test cards at the time of the announcement, otherwise they wouldn't have the G1.Sniper 2.

Is it clear for you this way?

(And no, I don't know dazz!)


----------



## dazz (Sep 9, 2011)

jfk1024 said:


> So, explain something to me. Gigabyte made that claim before the final PCI-E 3.0 adjustments and before having an Ivy Bridge processor or a PCI-E 3.0 GPU, right? Then why attack them?



Because their advertisement of Gen.3 ready motherboards with just a BIOS update is misleading at best, and flat out fraudulent at worst.


----------



## Steven B (Sep 9, 2011)

dazz said:


> that's how I understood it
> 
> http://cdn1.techbang.com.tw/system/...c12cec39928b4abd3539cd23d57eec.png?1313086800



this is correct

GB is a full Intel partner; they didn't make any promises if you read their stuff correctly. With the VR-Zone announcement you might not even see PCI-E 3.0 on Ivy, or even be able to run Ivy on current boards.


----------



## jfk1024 (Sep 9, 2011)

dazz said:


> Because their advertisement of Gen.3 ready motherboards with just a BIOS update is misleading at best, and flat out fraudulent at worst.



Let's wait and see what happens. It's better to wait until Bulldozer and Sandy Bridge-E are released and forget about future-proof motherboards.


----------



## jfk1024 (Sep 9, 2011)

neliz said:


> Gigabyte made that claim AFTER the final requirements were known.
> They retroactively applied it to all older boards, hoping no one would notice. (Fooling customers; they're really good at it.)
> 
> Gigabyte probably DID have a 22nm CPU sample AND PCI-E 3.0 test cards at the time of the announcement, otherwise they wouldn't have the G1.Sniper 2.
> ...



No, because Gigabyte made that claim before the G1.Sniper 2 was released.


----------



## neliz (Sep 9, 2011)

jfk1024 said:


> No, because Gigabyte made that claim before the G1.Sniper 2 was released.



Lol, no.




> including the recently launched G1.Sniper 2


http://www.gigabyte.us/press-center/news-page.aspx?nid=1048

So, the PR came after the Sniper 2.


----------



## dazz (Sep 9, 2011)

jfk1024 said:


> Let's wait and see what happens . It's better to wait until bulldozer and sandy bridge-e are released and forget about future proof motherboards



Of course. Don't get me wrong: I have an Asus P8P67 Pro that will run at PCIe 2.0 speeds even with an Ivy Bridge CPU + Kepler GPU, and I couldn't care less. By the time I need more BW, I'll get a new motherboard, and I'll have a lot better options to choose from by then: new chipsets, etc.

The thing is that many users in the market now for a new system will be making a decision based on PCIe 3.0, among other factors, to future-proof, in the hope that it will help performance or whatever. GB's announcement may not have promised anything, but it will surely lead many to mistakenly believe that they are getting a fully PCIe 3.0 ready board when it's not true (unless they pick the G1.Sniper 2, that is).


----------



## sneekypeet (Sep 9, 2011)

neliz said:


> Is it clear for you this way?



Is this clear to you? Why sit here and bash GB when you have yet to answer how you "developed" a tech http://www.techpowerup.com/forums/showthread.php?t=151660, yet SPARKLE had it in February http://www.techpowerup.com/115132/S...-Cards-With-Dual-Layer-Fan-Blade-Cooling.html

Even if what you say rings true, MSI are hypocrites and your opinion is invalid!


----------



## jfk1024 (Sep 9, 2011)

dazz said:


> Of course. Don't get me wrong, I have an Asus P8P67 Pro that will run at PCIe 2.0 speeds even with an Ivy Bridge CPU + Kepler GPU, and I couldn't care less. By the time I need more BW I'll get a new motherboard, and I'll have a lot better options to choose from by then, new chipsets, etc....
> 
> The thing is that many users in the market now for a new system will be making a decision based on PCIe 3.0 among other factors. To futureproof, in the hopes that it will help performance or whatever. GB's announcement may not have promised anything, but will surely lead many to mistakenly believe that they will be getting a fully PCIe 3.0 ready board when it's not true (unless they pick the G1 Sniper V2, that is)



Yes, you're right, but it doesn't matter; you said it yourself: "By the time I need more BW I'll get a new motherboard, and I'll have a lot better options to choose from by then, new chipsets"


----------



## neliz (Sep 9, 2011)

sneekypeet said:


> is this clear to you? Why sit here and bash GB when you all have yet to answer how you "developed" a tech http://www.techpowerup.com/forums/showthread.php?t=151660, yet SPARKLE had it in February http://www.techpowerup.com/115132/S...-Cards-With-Dual-Layer-Fan-Blade-Cooling.html
> 
> Even if what you say rings any truth, MSI are hypocrites and your opinion is invalid!



Let me check the press release, because nowhere do we say that Dust Removal tech is an MSI-only feature: http://event.msi.com/vga/msifantechnology/page3.html

It seems that your biggest gripe is the TPU headline, because on the page about DRT we don't mention this "exclusiveness".


----------



## sneekypeet (Sep 9, 2011)

No, you stole the idea! Why would the fan on the Sparkle go backwards at boot, just for giggles? So because they don't say dust removal, that makes it right for MSI to lie and say they "developed" the tech, versus the truth that you just stole it?

Why is it that marketing 101 is fine when you use it, but not when GB tries? Why not worry about the old rep of burning GPUs and motherboards from crap parts, and not have to attack another company to make MSI feel better about itself?


----------



## neliz (Sep 9, 2011)

sneekypeet said:


> No, you stole the idea! Why would the fan on the Sparkle go backwards at boot, just for giggles? So because they don't say dust removal, that makes it right for MSI to lie and say they "developed" the tech, versus the truth that you just stole it?



Calm down man, "developed" is ONLY in the TPU headline.
In the press release we don't say it's MSI's unique invention or our own development.

Chillax and have a beer or something; you don't have to defend your purchase with unfounded accusations.


----------



## sneekypeet (Sep 9, 2011)

So stealing ideas and making it seem innovative is proper marketing to MSI?

Why not say hey we saw this cool trick and we are on their nuts so we took it from them!


----------



## neliz (Sep 9, 2011)

sneekypeet said:


> So stealing ideas and making it seem innovative is proper marketing to MSI?



Stealing or ...*purchasing?*

If you have no idea how this works, it might be wisest not to get all up in arms over this.


----------



## dazz (Sep 9, 2011)

jfk1024 said:


> Yes, you're right, but it doesn't matter; you said it yourself: "By the time I need more BW I'll get a new motherboard, and I'll have a lot better options to choose from by then, new chipsets"



It doesn't matter to me, no. But the point is that it matters to some, and they won't like it when they realise what they bought is not what they thought they were buying


----------



## sneekypeet (Sep 9, 2011)

neliz said:


> Stealing or ...*purchasing?*
> 
> If you have no idea how this works it might be the wisest thing to not get all up in a bunch over this.



Yup, you are so right about me not knowing anything about marketing or the inner workings of PC marketing. Maybe you should learn who you are talking to before you make assumptions. Also, nice way to gloss over the fires with your components while picking on GB to make yourselves look better. Why not just develop and design your own stuff and keep your nose out of other companies' business, or is this marketing 101 too?

Who died and left MSI as Big Brother?


----------



## TheLaughingMan (Sep 9, 2011)

sneekypeet said:


> who died and left msi as big brother?



DFI did.


----------



## neliz (Sep 9, 2011)

sneekypeet said:


> Yup, you are so right about me not knowing anything about marketing or the inner workings of PC marketing. Maybe you should learn who you are talking to before you make assumptions.



You seemed ill-informed, basing your whole tirade on a single word from one website that was not used by the original party.



> Also, nice way to gloss over the fires with your components while picking on GB to make yourselves look better. Why not just develop and design your own stuff and keep your nose out of other companies' business, or is this marketing 101 too?



I think it's generally a decent thing to warn people when there's fraudulent behavior going on. But I'll let you be the judge of your own ethics.


----------



## sneekypeet (Sep 9, 2011)

TheLaughingMan said:


> DFI did.



At least when I broke my DFI boards, I didn't get two RMAs that caught fire just like the original I RMA'd, unlike with MSI.

@neliz, if you are all so honest, why doesn't the press release for the fan give credit to the true innovator? You are playing the marketing game to hit the people who don't know that you aren't offering something new. So who has the ethics issue? It sure isn't me!


----------



## neliz (Sep 9, 2011)

sneekypeet said:


> @neliz, if you are all so honest, why doesn't the press release for the fan give credit to the true innovator? You are playing the marketing game to hit the people who don't know that you aren't offering something new. So who has the ethics issue? It sure isn't me!



The true innovator is probably a third party (certainly not Sparkle) that has plenty of opportunity to bring out their own press release, really


----------



## TheLaughingMan (Sep 9, 2011)

neliz said:


> The true innovator is probably a third party (certainly not Sparkle) that has plenty of opportunity to bring out their own press release, really



We can just do that? I invented the SandForce 3000 controller. I'll work on the press release, since it's first come, first served.


----------



## sneekypeet (Sep 9, 2011)

And yet you don't even mention buying the tech, or who deserves credit for the money you are trying to make. Your marketing is just as bad, and as I said way back in the beginning in both threads: those in glass houses shouldn't throw stones.

Yet you are still glossing over the fact that you are attacking GB to make MSI look better by proxy.
No matter what you say, you're ignoring the more important fact that in most buyers' minds here, we saw DonInKansas burn 3 cards in a row, and MSI using low layer counts in motherboard PCBs allowed the 24-pin to melt (ColdStorm). We aren't buying your trash! At least I won't!


----------



## dazz (Sep 9, 2011)

LOL, I love this board's moderation


----------



## sneekypeet (Sep 9, 2011)

dazz said:


> LOL, I love this board's moderation



Sorry for not being a sheep; maybe that's why I am a mod to begin with? Hell, I don't really care about my mod status. Point is, I need to bring a shovel for all the BS in this thread.

My UD4 box doesn't say shit about PCI-E 3.0 anywhere, and I would have to be a retard to think shit magically appears with a BIOS update. Fact is, since no one can actually test the bandwidth, this is all just BS on a technicality.


----------



## neliz (Sep 9, 2011)

sneekypeet said:


> Sorry for not being a sheep, maybe that's why I am a mod to begin with? hell I don't really care about my mod status. Point is here I need to bring a shovel for all this BS in this thread.



Which you drove in with your Sparkle story?

(hug)


----------



## sneekypeet (Sep 9, 2011)

neliz said:


> Which you drove in with your Sparkle story?
> 
> (hug)



Again, look in your database; you aren't convincing me that a company that lets cards go out that burn in a customer's home is good business, yet you are here acting as if MSI is the champion that never did anything dirty!


----------



## neliz (Sep 9, 2011)

sneekypeet said:


> Again, look in your database; you aren't convincing me that a company that lets cards go out that burn in a customer's home is good business, yet you are here acting as if MSI is the champion that never did anything dirty!



That issue has been resolved with AMD.


----------



## TheLaughingMan (Sep 9, 2011)




----------



## neliz (Sep 9, 2011)

TheLaughingMan said:


> http://t3.gstatic.com/images?q=tbn:ANd9GcRd8Vi7ed9fpjuVq9BG-uFFuaZUce2nzwzjJltO5T5uhLmR0NTJX731WGcs



Exactly, we need more manhugs


----------



## cadaveca (Sep 9, 2011)

Um...let's just put this out there...personally, I think this whole discussion is unimportant, and should have waited for just before the launch of actual PCIe 3.0 devices and CPUs. Nobody can confirm either side of this tirade, regarding PCIe 3.0, unless they directly work for an OEM, and clearly someone other than an OEM needs to address this.






Enjoy your weekends, boys.


----------



## dazz (Sep 9, 2011)

May I go back on topic and ask if having a slot running at 3.0 and another one at 2.0 is effectively the same as having an extra 50% BW?
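
As a rough, purely theoretical answer to the 50% question, a back-of-envelope sketch (a hypothetical dual-card x8/x8 split on an LGA1155 board, assuming the spec line rates; actual game performance is another matter):

```python
# Hypothetical dual-card x8/x8 split where one slot negotiates Gen3 and the
# other stays at Gen2. Per-lane rates are the theoretical per-direction
# spec numbers (line rate * encoding efficiency / 8 bits per byte).
GEN2_PER_LANE = 5.0 * (8 / 10) / 8      # 0.50 GB/s per lane
GEN3_PER_LANE = 8.0 * (128 / 130) / 8   # ~0.985 GB/s per lane

both_gen2 = 2 * 8 * GEN2_PER_LANE              # both slots Gen2 x8: 8.0 GB/s
mixed = 8 * GEN3_PER_LANE + 8 * GEN2_PER_LANE  # Gen3 x8 + Gen2 x8: ~11.9 GB/s

print(f"aggregate gain: {(mixed / both_gen2 - 1) * 100:.0f}%")
```

On paper the aggregate gain is indeed close to 50%, but only if the card in the Gen3 slot can actually use the extra bandwidth.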


----------



## cadaveca (Sep 9, 2011)

dazz said:


> May I go back on topic and ask if having a slot running at 3.0 and another one at 2.0 is effectively the same as having an extra 50% BW?






It's called deceptive marketing, playing on words, and frankly, it shouldn't occur. There's no need for such tactics to garner sales.

And Neliz, I do think you should approach W1zz for an MSI tag under your UID, since no one else seems to want to say it.


----------



## Steven B (Sep 9, 2011)

sneekypeet said:


> So stealing ideas and making it seem innovative is proper marketing to MSI?
> 
> Why not say hey we saw this cool trick and we are on their nuts so we took it from them!



duh LOL hahaha


----------



## dazz (Sep 9, 2011)

cadaveca said:


> It's called deceptive marketing, playing on words, and frankly, shouldn't occur. There's no need for such tactics to garner sales.



Agreed, it's intentionally misleading, but I'd still like to know if there's some sort of "partial" gain from running one slot at 3.0.


----------



## Kreij (Sep 9, 2011)

Neliz said:
			
		

> It seems that your biggest gripe is the TPU headline because on the page about DRT we don't mention this "exclusiveness"



From Cadaveca's post above, I'd say you've been pretty much pwned.

I'm just a bystander though, so carry on.


----------



## neliz (Sep 9, 2011)

Kreij said:


> From Cadaveca's post above, I'd say you've been pretty much pwned.



How many current Sparkle cards still carry this tech?

http://sparkletw.com/calibre/products/ 



Also, no other graphics card vendor in the world currently bundles those three technologies (Propeller Blade, temp sensor and dust removal) in one fan.


----------



## cadaveca (Sep 9, 2011)

dazz said:


> Agreed, it's intentionally misleading, but I'd still like to know if there's some sort of "partial" gain from running one slot at 3.0.



Well, considering we have no idea when PCIe 3.0 devices will even come out, never mind how they will utilize the connection, there's no point in even discussing it, as I said a moment ago.


Believe me, now that this has happened, I'll be paying close attention to things like this, as it falls under the umbrella of my reviews and directly relates to my job here @ TPU. You can expect me to take a close look once PCIe 3.0 devices are launched, and expect at least a thread, if not an article, about it.



neliz said:


> How many current Sparkle cards still carry this tech?
> 
> http://sparkletw.com/calibre/products/
> 
> ...



You can still say it's exclusive tech, even without all those caveats. It's an exclusive technology that includes just those three items, maybe not a tech exclusive to MSI. It's all up for interpretation, and seemingly done intentionally so. This is the problem with that press release.


----------



## neliz (Sep 9, 2011)

cadaveca said:


> This is the problem about that press release.



As far as I'm aware, Sparkle doesn't carry that product anymore, nor any thermal design with the DRT, so "exclusive" would still apply solely to DRT at this point in time as well.


----------



## cadaveca (Sep 9, 2011)

That doesn't even matter, considering the actual English definition of the word. Exclusive works just fine. It's in limited release, only with 580s, correct, so that's enough. It's exclusive to that card, even within your own product lines, currently.

It's not important, or even on topic. I just wanted to be clear about where Sneeky was coming from. I really don't want any part of that discussion.


The PCIe 3.0 thing, as I've said before, I will deal with later, months from now. Please feel free to ensure that I receive products to be able to do so.


----------



## sneekypeet (Sep 9, 2011)

neliz said:


> That issue has been resolved with AMD.



Really? Because I was talking about NVIDIA cards and a P55 mobo; glad AMD took care of that for ya.


----------



## neliz (Sep 9, 2011)

sneekypeet said:


> Really because I was talking about Nvidia cards and a p55 mobo, glad AMD took care of that for ya



Yeah, I had no reference, so I just went with what everyone seems to be complaining about.


----------



## Solaris17 (Sep 9, 2011)

"you just got served"


----------



## Kreij (Sep 9, 2011)

neliz said:


> Also no other graphics card vendor in the world currently bundles those three technologies (Propeller blade, Temp Sensor and dust removal) in one fan.



I wasn't knocking you on the fact that there may be new technology incorporated into the design which no one else uses, I was knocking you on the fact that you said there was no "exclusiveness" included in the DRT promo, which appears on the same page as the "Exclusive" claim.

Whether that is deceptive is up to the person reading the promotional literature and how much research they have done on the product. Here at TPU we tend to read the fine print and take the stance of "Caveat Emptor".

No offense or denigration to any product intended.


----------



## cadaveca (Sep 9, 2011)

Kreij said:


> No offense or denigration to any product intended.



Exactly. I pride myself on being 100% impartial to anything but performance. I don't care which vendor brings it, as long as it's real and can be had by many (i.e., if you cherry-pick my samples, I WILL KNOW, and will say so).

MSI, ASRock, Gigabyte et al. can duke this out; just be sure that I will show the truth of the matter when time, and product releases, allow.



P.S.

We do have a policy here @ TPU, on the forums.



> Behavior that is inappropriate/should be reported:
> 
> Posting "FUD" (Fear, Uncertainty, and Doubt), especially if the poster is trying to pass it off as legitimate news.



I'm not saying that any of this IS FUD, but I *am* saying that we will seek the truth of any situation.


----------



## Steven B (Sep 9, 2011)

Interesting, I didn't know what FUD stood for, LOL.

BTW, we don't even know Ivy Bridge will have PCI-E 3.0......

I look forward to your article cadaveca


----------



## Easy Rhino (Sep 9, 2011)

both MSI and gigabyte are poop so who really cares?


----------



## Frick (Sep 10, 2011)

Easy Rhino said:


> both MSI and gigabyte are poop so who really cares?



Naaaah, I'd go with either one really. I've had awesome boards from both of them and I rarely pay attention to marketing anyway.


----------



## micropage7 (Sep 10, 2011)

And here it is again: marketing words.
How many consumers get blinded by that?


----------



## Mussels (Sep 10, 2011)

I just got some more popcorn; it's not often we get to see a company rep and a mod duke it out.


----------



## jfk1024 (Sep 10, 2011)

http://www.msi.com/product/mb/Z68A-GD65--G3-.html

MSI's fake claim about PCI-E 3.0. ASRock was first.


----------



## neliz (Sep 10, 2011)

jfk1024 said:


> MSI fake affirmation about PCI-E 3.0  . ASrock was first



MSI announced its first Z68 Gen3 boards on (or about) June 1st at Computex.
ASRock announced that they were "first" on June 28th, at which time MSI was already shipping and selling.

It would be really nice if you actually backed up your accusations with something.

http://www.techpowerup.com/live/Computex_2011/MSI.php
http://pcper.com/news/Graphics-Cards/MSI-shows-Gen3-PCIe-X79-Motherboard-and-GTX-580-Extreme


----------



## jfk1024 (Sep 10, 2011)

neliz said:


> MSI announced its first Z68 Gen3 boards on (or about) June 1st at Computex.
> ASRock announced that they were "first" on June 28th, at which time MSI was already shipping and selling.
> 
> It would be really nice if you actually backed up your accusations with something.
> ...



ASRock reached the shelves first. So, they did it first.


----------



## neliz (Sep 10, 2011)

jfk1024 said:


> ASRock reached the shelves first. So, they did it first.



Hmm, not really
http://www.tcmagazine.com/tcm/news/...ses-pci-express-30-supporting-z68-motherboard



> Unfortunately, despite ASRock’s bragging about the Fatal1ty Z68 Professional Gen3 going ‘on sale now’, we haven’t found it in stores.



At which time we were already selling. At best, they could claim listings in price watches (pre-ordering) at that time.

You registered specifically for this subject (burn MSI?), but you haven't brought anything to the table yet. I don't mind discussing things, but you keep digging and not striking gold.


----------



## jfk1024 (Sep 10, 2011)

neliz said:


> Hmm, not really
> http://www.tcmagazine.com/tcm/news/...ses-pci-express-30-supporting-z68-motherboard
> 
> 
> ...



I was talking about the ASRock Z68 Extreme3 Gen3, or are you going to say that it is not a PCI-E 3.0 motherboard?


----------



## neliz (Sep 10, 2011)

jfk1024 said:


> I was talking about the ASRock Z68 Extreme3 Gen3, or are you going to say that it is not a PCI-E 3.0 motherboard?



Which was "launched" (not on sale!) at what, July 13/14/15? this is not only a month and a half after MSI's announcements but also a good two weeks after ASRock's first Gen3 board the Z68 Fatal1ty Pro Gen3.

http://www.tcmagazine.com/tcm/news/...ease-four-more-lga-1155-boards-pci-express-30


> The prices and availability of the new Gen3 models have not been mentioned yet.



Again, I don't mind discussing this at all, but you don't bring any facts to the table. Since you registered here and desperately want to make a point (anti-MSI), I hope we can have a fruitful discussion based on facts.



Please?


----------



## jfk1024 (Sep 10, 2011)

neliz said:


> Which was "launched" at what, July 13/14/15? this is not only a month and a half after MSI's announcements but also a good two weeks after ASRock's first Gen3 board the Z68 Fatal1ty Pro Gen3.
> 
> http://www.tcmagazine.com/tcm/news/...ease-four-more-lga-1155-boards-pci-express-30
> 
> ...



I said that ASRock's PCI-E 3.0 motherboards hit the shelves before MSI's.


----------



## neliz (Sep 10, 2011)

jfk1024 said:


> I said that ASRock's PCI-E 3.0 motherboards hit the shelves before MSI's.



Maybe in some local stores in Romania (allocation?), but this is not a worldwide trend, especially not for a "later" product like the Extreme3.

Again, facts are nice, since they support a point in a discussion.


----------



## jfk1024 (Sep 10, 2011)

I don't care about launch dates, marketing and other stuff. What's important is that we could buy it first from ASRock.


----------



## neliz (Sep 10, 2011)

jfk1024 said:


> I don't care about launch dates, marketing and other stuff. What's important is that we could buy it first from ASRock.



And that's the point: you're looking at your own circumstances and automatically assuming it's "we."

I'm pretty sure that if you had tried ordering it somewhere in Europe, you'd have been able to. Or does the fact that I had my hands on Gen3 boards before that also count as "being able to buy it"?

Oh, and you didn't buy it from ASRock; you bought it from a store, at which point the store decides which products you can buy.


----------



## jfk1024 (Sep 10, 2011)

You remember this? This was the first thing TPU posted about a PCI-E 3.0 switch and a PCI-E 3.0 motherboard.


----------



## cadaveca (Sep 10, 2011)

OK guys, arguments between reps from different companies? Are you guys serious?

Like, no offense, but WTF are you doing here?

Of course, I am assuming jfk1024 is an ASRock rep, and he's not here to give me motherboards for review. It's hard to believe this discussion could happen any other way.

Interesting.


OH. Wait a minute. I understand why ASRock won't send me boards now, too. We already have some members with one.


----------



## jfk1024 (Sep 10, 2011)

http://lab501.ro/stiri/asrock-pci-e-3-0-si-gama-fatal1ty

that was on 2nd august 2011. And now you come on 7th september and say that a PCI-E 3.0 SW is needed. WOW, a month later MSI discovered this? 

The right title for this article is: MSI Calls Bluff on Gigabyte's PCIe Gen 3 Ready Claim, One Month After ASRock.


----------



## jfk1024 (Sep 10, 2011)

cadaveca said:


> OK guys, arguments between reps from different companies? Are you guys serious?
> 
> Like, no offense, but WTF are you doing here?
> 
> ...



I am not an ASRock rep. I think ASUS motherboards are the best, so I'm going to wait until ASUS releases an X79 motherboard.


----------



## jfk1024 (Sep 10, 2011)

I think that if we put an Ivy Bridge processor and a PCI-E 3.0 GPU on an Intel 1155 motherboard without a PCI-E switch, the link between the CPU and the GPU will have a max bandwidth of 32 GB/s.
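The 32 GB/s figure can be sanity-checked with back-of-the-envelope math: PCIe Gen1/Gen2 use 8b/10b line encoding while Gen3 uses 128b/130b, and the headline number counts both directions of the full-duplex link. A minimal sketch (my own illustration, not from anyone in the thread):

```python
# Back-of-the-envelope PCIe x16 bandwidth per generation.
# Gen1/Gen2 use 8b/10b encoding (20% overhead); Gen3 uses 128b/130b (~1.5%).
def pcie_x16_bandwidth(gen):
    """Return (GB/s in one direction, GB/s aggregate) for an x16 link."""
    rates = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}
    gt_per_s, efficiency = rates[gen]
    per_lane = gt_per_s * efficiency / 8   # GB/s per lane, one direction
    one_way = per_lane * 16                # 16 lanes
    return one_way, one_way * 2            # full duplex: count both directions

for gen in (1, 2, 3):
    one_way, aggregate = pcie_x16_bandwidth(gen)
    print(f"Gen{gen} x16: {one_way:.2f} GB/s per direction, "
          f"{aggregate:.1f} GB/s aggregate")
```

This works out to 8 GB/s per direction for Gen2 x16 and roughly 15.75 GB/s per direction (about 31.5 GB/s aggregate) for Gen3 x16, which is where the "nearly 32 GB/s" figure comes from.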


----------



## neliz (Sep 10, 2011)

jfk1024 said:


> I think that if we put an Ivy Bridge processor and a PCI-E 3.0 GPU on an Intel 1155 motherboard without a PCI-E switch, the link between the CPU and the GPU will have a max bandwidth of 32 GB/s.



And I *know* that it's not as simple as that since your board will require a redesign for new components:




There's more than this, but since there are also things like NDAs, there are plenty more reasons for it not to work 

Two more points: there is an edit button, so you don't have to triple-post all the time; you can place everything conveniently in one post.

And as for your discussion on dates, I gladly point you to media.msi.com, since you seem to be good at googling ASRock/Gigabyte but not MSI 
Slide number 8 (two weeks before Gigabyte's announcement): http://media.msi.com/main.php?g2_view=core.DownloadItem&g2_itemId=67153

MSI at that time didn't attack other vendors because those vendors were not pretending to do Gen3. That changed after Gigabyte's press release, okay? It's not fun to have to point out cheating, and it takes a lot of research manpower to verify claims one way or the other. 

And this is the end of my response to your unfounded (and frankly very tiresome) accusations, since you haven't got one thing right yet and you don't seem to want to discuss anything.


----------



## jfk1024 (Sep 10, 2011)

neliz said:


> And I *know* that it's not as simple as that since your board will require a redesign for new components:
> http://www.techpowerup.com/img/11-09-07/41c.jpg
> There's more than this, but since there are also things like NDAs, there are plenty more reasons for it not to work
> 
> ...



I got it, but... anyway we will see


----------



## CDdude55 (Sep 10, 2011)

Meh, Gigabyte and MSI are both still very solid manufacturers.


----------



## sneekypeet (Sep 10, 2011)

neliz said:


> And this is the end of my response to your unfounded (and frankly very tiresome) accusations, since you haven't got one thing right yet and you don't seem to want to discuss anything.



And at what point are you going to explain where being an industry narc became marketing? No one likes a rat, neliz!

To be point blank, I don't remember seeing a personal invite for you to come here and try to explain why going after someone else was a good idea. If you don't have thick skin, you don't belong at TPU anyway.

You know, for a company rep you are very condescending to my members, when you are the one who dropped in here on your own to try to be the hero! Playing both sides of the question and answer is fine, but the immature way you are picking and choosing what to answer isn't, and if this is tiresome, don't let the door hit you in the ass on the way out!


----------



## neliz (Sep 10, 2011)

sneekypeet said:


> And at what point are you going to explain where being an industry narc became marketing?



just go OT in PM's okay


----------



## cadaveca (Sep 10, 2011)

sneekypeet said:


> And at what point are you going to explain where being an industry narc became marketing? No one likes a rat, neliz!




Ok, Mr Tweaktown case and cooler reviewer. You're just as much a narc as he is, not identifying yourself fully. 

Pot and kettle, you know.



sneekypeet said:


> Maybe you should learn who you are talking to before you make assumptions.



I cannot believe that two days later, you guys are still arguing over this.



sneekypeet said:


> my members



How about OUR members? You are not the website alone.


----------



## sneekypeet (Sep 10, 2011)

I am not a narc, and my position as a reviewer is irrelevant to this. Unless MSI is making cases and coolers now.


----------



## neliz (Sep 10, 2011)

sneekypeet said:


> I am not a narc, and my position as a reviewer is irrelevant to this. Unless MSI is making cases and coolers now.



Oh wait... MSI DOES make coolers *zing*!


----------



## sneekypeet (Sep 10, 2011)

Not any worth my time!

Also, I just looked to verify... where exactly do I look for your coolers and cases? Or do you mean the Wind plastic boxes are your case lineup? And by coolers, do you mean the ones that come on your cards and are not aftermarket?


----------



## TheLaughingMan (Sep 10, 2011)

neliz said:


> Oh wait... MSI DOES make coolers *zing*!



What kind of coolers? CPU, GPU, beer, food, portable, liquid based, air based, nitrogen, industrial, commercial, consumer, and/or good ones?


----------



## neliz (Sep 10, 2011)

TheLaughingMan said:


> What kind of coolers? CPU, GPU, beer, food, portable, liquid based, air based, nitrogen, industrial, commercial, consumer, and/or good ones?



We actually started selling the Twin Frozr II separately, though we actually have decent beer and water coolers too at the office.


----------



## TheLaughingMan (Sep 10, 2011)

neliz said:


> We actually started selling the Twin Frozr II separately, though we actually have decent beer and water coolers too at the office.



I knew the first part and the latter two are not made by MSI.


----------



## sneekypeet (Sep 10, 2011)

In Japan... Thanks, that does us a lot of good. And at that, $73 and it's the cooler only! What????

You mean you are saying you sell coolers for the aftermarket? To me, if you sell an aftermarket cooler you need to cover the phases and everything, not just sell off overstocked "coolers". Again, if you knew the terminology, maybe we wouldn't be here discussing irrelevant points now


----------



## Kreij (Sep 10, 2011)

neliz said:


> We actually started selling the Twin Frozr II separately, though *we actually have decent beer* and water coolers too at the office.



Need any network/system admins or IT managers? I've been doing it for almost 30 years.


----------



## neliz (Sep 10, 2011)

TheLaughingMan said:


> I knew the first part and the latter two are not made by MSI.



No, but technical guys can do fun things with airco radiators and a gazillion 120 mm fans 



sneekypeet said:


> To me..


I bought plenty of aftermarket VGA coolers that didn't include heatsinks for the VRM or VRAM, and I think world+dog still considered them exactly the same thing.

Though it's nice to see everyone, including mods, go way out of line to personally harass someone based on interpretation and semantics



Kreij said:


> Need any network/system admins or IT managers? I've been doing it for almost 30 years.


Open job applications are always welcome


----------



## buggalugs (Sep 11, 2011)

It seems as though Gigabyte is staying silent and backpedaling on this issue. Gigabyte hasn't made any comment on their website...


----------



## Maban (Sep 11, 2011)

Y'all should step outside and settle this like men. That's right, water balloon fight.


----------



## tallyhoe (Sep 11, 2011)

Neliz, when did you start working at MSI? It seems your Neliz accounts at various forum sites were made long ago. As a rep, though, you really shouldn't be bashing other forum users the way you are. You don't come across as a professional. You're giving MSI a bad name with that attitude. Learn to make a point without having to attack a person's character.

I love MSI video cards, but if I knew the majority of MSI employees acted the way you do, I would think twice about buying from them again.




TheLostSwede said:


> No one can prove 100% without a doubt that their motherboard(s) as of today can work with PCI Express 3.0 cards, as there are no cards.


Manufacturers have access to early engineering samples before the public does. MSI/ASUS/Gigabyte are already working on the upcoming X79 boards, and each manufactures its own PCBs for graphics cards, so they likely have PCIe 3.0 sample cards as well. Intel also has to share engineering samples with the manufacturers in order for boards to be available at launch.


----------



## neliz (Sep 11, 2011)

tallyhoe said:


> Neliz, when did you start working at MSI? It seems your Neliz accounts at various forum sites were made long ago. As a rep, though, you really shouldn't be bashing other forum users the way you are. You don't come across as a professional. You're giving MSI a bad name with that attitude. Learn to make a point without having to attack a person's character.
> 
> I love MSI video cards, but if I knew the majority of MSI employees acted the way you do, I would think twice about buying from them again.



Ask that in a PM, so as not to leave more mud here.


----------



## sneekypeet (Sep 11, 2011)

neliz said:


> Ask that in a PM, so as not to leave more mud here.



You mean the whole reason you are here in this thread to begin with? To sling mud at Gigabyte. Damn hypocrites.


----------



## n-ster (Sep 11, 2011)

neliz said:


> Ask that in a PM, so as not to leave more mud here.



I dont mind the mud...

What I do mind is that in the 11 pages of comments I have read, NONE of your posts helped me at all. I guess, as many have said, we just gotta wait and see


----------



## pr0n Inspector (Sep 12, 2011)

I ran out of popcorn.


----------



## heky (Sep 12, 2011)

sneekypeet said:


> You mean the whole reason you are here in this thread to begin with? To sling mud at Gigabyte. Damn hypocrites.



What I read in these 11 pages only makes neliz and MSI look 100% right, and proves Gigabyte is making false statements.

And what you have done is not even worthy of being called a moderator: going off-topic in every single post.

Also, comparing MSI's fan-design statement with Gigabyte's fake statements about Gen3 PCI-E is just apples to oranges, since MSI doesn't advertise something that doesn't work or simply isn't even possible, like Gigabyte does.

Neliz isn't slinging mud at Gigabyte; Gigabyte is slinging mud at all its potential customers, scamming them!


----------



## entropy13 (Sep 12, 2011)

I agree that the MSI fan design is irrelevant in this case. There aren't any "fan design standards" that have to be followed, while there is a PCI-E 3.0 standard, and only certain hardware can "follow" that standard.

A similar situation would be marketing USB 2.0 ports as "USB 3.0 ready" with just a BIOS update or something.


----------



## Mussels (Sep 12, 2011)

USB 3.0 devices work in USB 2.0 ports, so the ports are "USB 3.0 ready!" At least to marketing, they are.


----------



## neliz (Sep 12, 2011)

Mussels said:


> USB 3.0 devices work in USB 2.0 ports, so the ports are "USB 3.0 ready!" At least to marketing, they are.



But they don't claim to work at maximum data bandwidth :0

But the value of these words (compatible, supported, ready, etc.) is sketchy when there is no technical reference.



> By installing the latest BIOS for their 6 series motherboards today, *users can be assured they are ready to take advantage of all the performance enhancements tomorrow's technologies have to offer.*



With Gen3 cards on those 40-odd Gigabyte boards, there are no "performance enhancements" for users to enjoy; that's my point.


----------



## n-ster (Sep 12, 2011)

heky said:


> What I read in these 11 pages only makes neliz and MSI look 100% right, and proves Gigabyte is making false statements.
> 
> And what you have done is not even worthy of being called a moderator: going off-topic in every single post.
> 
> ...



You don't know that. MSI MIGHT know that, but YOU don't. You are just taking MSI's word over GB's word.



neliz said:


> But they don't claim to work at maximum data bandwidth :0
> 
> But the value of these words (compatible, supported, ready, etc.) is sketchy when there is no technical reference.
> 
> ...



I still don't see "native support for full-speed PCI-E 3.0"... PCI-E 3.0 isn't merely more bandwidth; there are also "performance enhancements," as stated on Wikipedia:


> New features for the PCIe 3.0 specification include a number of optimizations for enhanced signaling and data integrity, including transmitter and receiver equalization, PLL improvements, clock data recovery, and channel enhancements for currently supported topologies



One thing that doesn't look good for GB is their silence. They are losing sales because of this, and for them not to say anything may indicate MSI is right to some extent


----------



## neliz (Sep 12, 2011)

n-ster said:


> I still don't see "native support for full-speed PCI-E 3.0"... PCI-E 3.0 isn't merely more bandwidth; there are also "performance enhancements," as stated on Wikipedia:



First off, don't trust Wikipedia.

And second, as far as the tests go, the boards without the necessary components will *NOT* have the CPU switch to Gen3.


----------



## CDdude55 (Sep 12, 2011)

neliz said:


> First off, don't trust Wikipedia.



I can never understand why everybody says that. Sure, anyone can edit it, but if something is incorrect, it gets corrected by editors all the time.


----------



## heky (Sep 12, 2011)

n-ster said:


> You don't know that. MSI MIGHT know that, but YOU don't. You are just taking MSI's word over GB's word.



No, I am not taking MSI's word over Gigabyte's; it's a fact! PCI-E Gen3 has to meet certain standards (not made up by MSI), and motherboards have to have certain components to be Gen3 certified, and the Gigabyte boards (apart from a couple of models) don't have them. Simple as that!


----------



## neliz (Sep 12, 2011)

CDdude55 said:


> I can never understand why everybody says that. Sure, anyone can edit it, but if something is incorrect, it gets corrected by editors all the time.



An example regarding PCI Express 3.0:

http://en.wikipedia.org/wiki/PCI_Express
According to Wikipedia, you need a 32-lane PCI Express connector to reach 16 GB/s on PCI Express 2.0... right, okay...

While PCI-SIG clearly states that PCI Express Gen3 x16 can do nearly 32 GB/s:
http://www.pcisig.com/news_room/November_18_2010_Press_Release/

So Gen3 suddenly *QUADRUPLED* bandwidth? No, it's an end-user interpretation of bandwidth and not what is being "marketed" by PCI-SIG, for instance.

Then Gigabyte tried to use Wiki, ahem, "facts" on our FB page



			
				Michael Linden said:
			
		

> *I have checked this info in wikipedia. PCI Gen 3.0 have only 16GBps for Transfer, not 32 Gbps.* ... PCIe Gen 3.0 says 1GBPs per lane! You see is not a bluff!



now PCI-SIG:


> it is possible for products designed to the PCIe 3.0 architecture to achieve bandwidth near 1 gigabyte per second (GB/s) in one direction on a single-lane (x1) configuration and scale to *an aggregate approaching 32 GB/s on a sixteen-lane (x16) configuration.*



And since every tech site out there has it right (really nice article from Anandtech), why doesn't Wikipedia?

Unless everyone wants all their PCI Express cards running simplex, I'm all for following PCI-SIG and ignoring Wikipedia.
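The Wikipedia-vs-PCI-SIG discrepancy described in this post is a direction-counting difference rather than a contradiction. A quick sketch of the arithmetic (my own illustration, not from the thread):

```python
# Both figures are "right" once you note which direction(s) they count.
# Gen2: 5 GT/s per lane with 8b/10b encoding -> 0.5 GB/s per lane, one way.
# Gen3: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s per lane, one way.
GEN2_PER_LANE = 5.0 * (8 / 10) / 8
GEN3_PER_LANE = 8.0 * (128 / 130) / 8

# Wikipedia's framing: a 32-lane Gen2 link reaches 16 GB/s (one direction).
wiki_gen2_x32 = GEN2_PER_LANE * 32

# PCI-SIG's framing: a Gen3 x16 link approaches 32 GB/s (both directions).
pcisig_gen3_x16 = GEN3_PER_LANE * 16 * 2

print(f"Gen2 x32, one direction:   {wiki_gen2_x32:.1f} GB/s")
print(f"Gen3 x16, both directions: {pcisig_gen3_x16:.1f} GB/s")
```

Counting a single direction, Gen3 x16 is about 15.75 GB/s, roughly double Gen2 x16's 8 GB/s, so the per-generation doubling holds either way; only the quoting convention differs.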


----------



## n-ster (Sep 12, 2011)

heky said:


> No, I am not taking MSI's word over Gigabyte's; it's a fact! PCI-E Gen3 has to meet certain standards (not made up by MSI), and motherboards have to have certain components to be Gen3 certified, and the Gigabyte boards (apart from a couple of models) don't have them. Simple as that!



Unless you know exactly what is needed for PCI-E 3.0 to work, you are taking MSI's word that the chips on the GB mobos will not work for PCI-E 3.0. You cannot have any CERTAINTY whatsoever about this unless you know what you are talking about, and neither you, nor I, nor apparently most of the others on the forum have any experience with this kind of stuff.

I'm not saying what MSI is saying is FALSE; I'm saying it isn't necessarily TRUE. Until we have more info, we really can't judge.

I just wanted to throw this in here for all the GB haters (I'll talk LGA 1366, as those are the only boards I know): which board is better, Gigabyte's X58A-UD3R or a similarly priced X58 MSI board? I think the GB is the clear winner. So no, not all GB boards are garbage, just as not all MSI boards are garbage. Have some respect for both brands; don't just suddenly say: oh, MSI godly, GB garbage.

@ neliz

Yes, I know wiki can be wrong, but in this case wiki is right, so it doesn't affect the matter at hand


----------



## heky (Sep 12, 2011)

n-ster said:


> I just wanted to throw this in here for all the GB haters (I'll talk LGA 1366, as those are the only boards I know): which board is better, Gigabyte's X58A-UD3R or a similarly priced X58 MSI board? I think the GB is the clear winner. So no, not all GB boards are garbage, just as not all MSI boards are garbage. Have some respect for both brands; don't just suddenly say: oh, MSI godly, GB garbage.



I am not a GB hater; even though I now own an MSI board, I used to have a GB X48-DS5 mobo for my 775 rig. It has nothing to do with the brand; it has to do with the fact they are cheating people into buying something that doesn't even have the feature they advertise!


----------



## neliz (Sep 12, 2011)

n-ster said:


> Yes, I know wiki can be wrong, but in this case wiki is right, so it doesn't affect the matter at hand



Sure it is, if you intentionally leave out half of the equation 

Also, maybe I'm wrong here, but that wiki page is littered with "PCI Express bus." There is no such thing as a PCI Express bus. So no, kids, don't just trust Wikipedia.


----------



## Maban (Sep 12, 2011)

neliz said:


> Sure it is, if you intentionally leave out half of the equation
> 
> Also, maybe I'm wrong here, but that wiki page is littered with "PCI Express bus." There is no such thing as a PCI Express bus. So no, kids, don't just trust Wikipedia.



That's interesting, because I just spent 30 seconds searching PCI-SIG's site and I found them mention PCI Express bus.

Nothing against you, but damn, I wish I could get paid to argue on a forum.


----------



## neliz (Sep 12, 2011)

Maban said:


> That's interesting, because I just spent 30 seconds searching PCI-SIG's site and I found them mention PCI Express bus.



PCI Express is a point-to-point link, so it's not a bus, but that misconception is easily made, I agree 

(And yes, you can find "bus" and "PCI Express" related to each other on the MSI website as well; I'll try to get that fixed ASAP.)



> Nothing against you, but damn, I wish I could get paid to argue on a forum.


This is in my free time, like I've been doing for the past 10 years.

Getting paid to argue on forums (be it in money or hardware) is not worth it imho.
You would NEED to support something that's not your personal opinion.


----------



## neliz (Sep 15, 2011)

neliz said:


> (And yes, you can find "bus" and "PCI Express" related to each other on the MSI website as well; I'll try to get that fixed ASAP.)


Fixed!

It's now "Interface" instead of "Bus Standard"

http://www.msi.com/product/vga/N580GTX-Lightning.html#?div=Specification


----------



## Suhidu (Sep 15, 2011)

Woah! What a necro-bump, I still remember when we were all so confused on this PCI-E 3.0 stuff .

Anyway, nice fix, it should make it more clear to people.


----------



## RejZoR (Sep 15, 2011)

"bus" was sort of term used for pretty much all IO slots like PCI, AGP, ISA etc etc. The term sticked like so many others from the past...


----------



## neliz (Sep 15, 2011)

RejZoR said:


> "bus" was sort of term used for pretty much all IO slots like PCI, AGP, ISA etc etc. The term sticked like so many others from the past...



Because ISA and PCI actually are buses.
For AGP it is accepted because it runs on top of the PCI bus, so it was common to refer to AGP as a bus as well.

But in the end it's just a small detail.


----------



## Maban (Sep 15, 2011)

neliz said:


> Fixed!
> 
> It's now "Interface" instead of "Bus Standard"
> 
> http://www.msi.com/product/vga/N580GTX-Lightning.html#?div=Specification



That's interesting, because I just spent 30 seconds browsing that product's info and I found it mentions the PCI Express 2.0 bus.


----------



## n-ster (Sep 15, 2011)

Under "Features"


			
				MSI said:
			
		

> Designed to run perfectly with new PCI Express 2.0 bus architecture, offering a future proofing bridge to tomorrow's most bandwidth-hungry games and 3D applications by maximizing the 5GT/s PCI Express 2.0 bandwidth (twice that of first generation PCI Express)


----------



## neliz (Sep 15, 2011)

n-ster said:


> Under "Features"



Thanks, I'll try to get that fixed ASAP as well.


----------



## TheMailMan78 (Sep 16, 2011)

I cannot believe I just saw this thread now. My g-d......so much trolling I missed!


----------



## btarunr (Sep 16, 2011)

TheMailMan78 said:


> I cannot believe I just saw this thread now. My g-d......so much trolling I missed!



You just missed an ice-cream truck the size of a semi.


----------



## TheMailMan78 (Sep 16, 2011)

btarunr said:


> You just missed an ice-cream truck the size of a semi.



I know......I know. I has a sad.


----------



## neliz (Sep 16, 2011)

neliz said:


> Thanks, I'll try to get that fixed ASAP as well.



http://www.msi.com/product/vga/N580GTX-Lightning.html#?div=Feature

It's fixed now, can you find any more things I can bug our webmasters with?


----------



## n-ster (Sep 16, 2011)

This thread really isn't about what's wrong on the MSI website, as there is plenty....

ie: http://www.msi.com/product/mb/760GM-P33.html (PCI Express bus)

Just Google it.


----------



## EarthDog (Sep 16, 2011)

neliz said:


> http://www.msi.com/product/vga/N580GTX-Lightning.html#?div=Feature
> 
> It's fixed now, can you find any more things I can bug our webmasters with?


Yes. The GD65 G3 says B3 in the title when you click on the page.


----------



## neliz (Sep 16, 2011)

EarthDog said:


> Yes. The GD65 G3 says B3 in the title when you click on the page.



No, it says "Intel® Z68 (B3) Chipset Based" 

The title etc. say (G3) nicely: http://www.msi.com/product/mb/Z68A-GD65--G3-.html


----------



## EarthDog (Sep 16, 2011)

Heh, saying B3 for the mobo rev... yikes. Excuse me.


----------



## TheMailMan78 (Sep 16, 2011)




----------

