Saturday, July 19th 2008

PCI Express 3.0 by 2010, Supports Heavier, Gluttonous Cards

System component expansion interface PCI-Express could get its next major face-lift in 2009, with compatible products following by 2010. The PCI Special Interest Group (PCI-SIG) is devising a new version of the interface that gives devices twice the effective bandwidth of the current PCI-Express 2.0, with a signalling rate of 8 gigatransfers per second (GT/s). It is said to be backwards compatible with older versions of the interface.
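
For a rough sense of how those numbers translate into usable throughput, here is a back-of-the-envelope sketch (the encoding assumptions are ours: 8b/10b for PCI-E 1.x/2.0, and the more efficient 128b/130b-style scheme that has been discussed for the 8 GT/s rate):

```python
# Back-of-the-envelope effective bandwidth per PCI-Express generation.
# Assumption: 8b/10b encoding for 1.x/2.0, 128b/130b-style encoding for 3.0.

GENERATIONS = {
    # name: (signalling rate in GT/s, encoding efficiency)
    "PCIe 1.x": (2.5, 8 / 10),
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),
}

for name, (rate_gt_s, efficiency) in GENERATIONS.items():
    # 1 transfer moves 1 bit per lane; divide by 8 to convert bits to bytes
    per_lane_gb_s = rate_gt_s * efficiency / 8
    print(f"{name}: {per_lane_gb_s:.2f} GB/s per lane, "
          f"~{per_lane_gb_s * 16:.1f} GB/s each way on an x16 link")
```

Under those assumptions an x16 link goes from roughly 4 GB/s (1.x) to 8 GB/s (2.0) to about 16 GB/s (3.0) in each direction.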

Changes are also being made to the specification to let the interface support triple-slot graphics cards weighing up to 1.5 kg (roughly 3.3 lbs) and drawing up to 300 W from the interface itself. Perhaps this is the ideal interface for the 'heavier' products coming from NVIDIA, AMD, and soon Intel.
Source: GPU Café

34 Comments on PCI Express 3.0 by 2010, Supports Heavier, Gluttonous Cards

#1
Cuzza
1.5 kg? Frickin' hell, sure, if your graphics card is solid steel...
Posted on Reply
#2
Mussels
Freshwater Moderator
Great. They're going to support the cards I really DON'T want.

I want small, lightweight cards with low power usage.
Posted on Reply
#3
btarunr
Editor & Senior Moderator
Soon you'll have a graphics card "ohm nom nom" avatar.
Posted on Reply
#4
Mussels
Freshwater Moderator
btarunr: Soon you'll have a graphics card "ohm nom nom" avatar.
it will eat my physical space, my wallet, my power bill, and have its own gravitational field :'(
Posted on Reply
#5
$ReaPeR$
Mussels: it will eat my physical space, my wallet, my power bill, and have its own gravitational field :'(
I totally agree with you, Mussels. I mean, what the hell are they trying to do? Throw us back to the PC stone age, when PC parts were as big as the mobo!!! If so, I would like to create a new company called AMD (Advanced MEGA Devices). Who wants to join me?? :nutkick::nutkick::nutkick:

P.S. I am not implying anything about AMD. :respect:
Posted on Reply
#6
tkpenalty
I'd expect them to reinforce the substrate and the slot with carbon nanotubes at least :laugh: Geez... PCI-E ports probably will need backplates now + bolt thru screws :laugh:
Posted on Reply
#7
INSTG8R
Vanguard Beta Tester
Silly crap! I mean, nothing even ended up using the max bandwidth of AGP, and now they just keep making the pipe bigger and bigger?? Is ANYTHING actually running out of room on PCI-E 1.0 x16 yet?? I mean, c'mon, this is getting a bit silly...
Posted on Reply
#8
candle_86
so this new slot will hold current cards without screwing them down, woot
Posted on Reply
#9
Deleted member 3
INSTG8R: Silly crap! I mean, nothing even ended up using the max bandwidth of AGP, and now they just keep making the pipe bigger and bigger?? Is ANYTHING actually running out of room on PCI-E 1.0 x16 yet?? I mean, c'mon, this is getting a bit silly...
That's quite wrong. The first 8800s already slowed down on x8 slots and got severely crippled by x4 slots. So a 9800GX2 or 4870X2 will be bottlenecked by a single x16 slot (1.0).
Besides, the standard needs to be ready for future cards, not for the last generation. For this reason, AGP kept increasing bandwidth as well. The industry keeps moving on, whether you like it or not.
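
To put rough numbers on that, here is a minimal sketch using the commonly quoted per-lane rates (250 MB/s for PCI-E 1.x, 500 MB/s for 2.0); a 1.x x4 link carries only a quarter of what an x16 link does:

```python
# Rough one-way bandwidth (GB/s) by link width, using the commonly quoted
# per-lane figures: 250 MB/s for PCIe 1.x and 500 MB/s for PCIe 2.0.

PER_LANE_GB_S = {"PCIe 1.x": 0.25, "PCIe 2.0": 0.50}

for generation, per_lane in PER_LANE_GB_S.items():
    for lanes in (4, 8, 16):
        print(f"{generation} x{lanes}: {per_lane * lanes:.1f} GB/s")
```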
Posted on Reply
#10
tkpenalty
DanTheBanjoman: That's quite wrong. The first 8800s already slowed down on x8 slots and got severely crippled by x4 slots. So a 9800GX2 or 4870X2 will be bottlenecked by a single x16 slot (1.0).
Besides, the standard needs to be ready for future cards, not for the last generation. For this reason, AGP kept increasing bandwidth as well. The industry keeps moving on, whether you like it or not.
Add GTX280/260 to that list.

What Dan said is right. We might as well get ready.
Posted on Reply
#11
INSTG8R
Vanguard Beta Tester
DanTheBanjoman: That's quite wrong. The first 8800s already slowed down on x8 slots and got severely crippled by x4 slots. So a 9800GX2 or 4870X2 will be bottlenecked by a single x16 slot (1.0).
Besides, the standard needs to be ready for future cards, not for the last generation. For this reason, AGP kept increasing bandwidth as well. The industry keeps moving on, whether you like it or not.
Well, the 8800s used bridges, did they not? That alone could contribute to that, couldn't it?

Yeah, I get that things have to move on, but 2.0 has barely become standard, with only this last gen of cards supporting it, and already it's going up another spec.
Posted on Reply
#12
btarunr
Editor & Senior Moderator
INSTG8R: Well, the 8800s used bridges, did they not? That alone could contribute to that, couldn't it?

Yeah, I get that things have to move on, but 2.0 has barely become standard, with only this last gen of cards supporting it, and already it's going up another spec.
No, the 8800 (G80) did not use bridges of any sort; it's GPUs such as the NV40 (GeForce 6800 PCI-E series) that had bridges (bus translation logic).
Posted on Reply
#13
INSTG8R
Vanguard Beta Tester
btarunr: No, the 8800 (G80) did not use bridges of any sort; it's GPUs such as the NV40 (GeForce 6800 PCI-E series) that had bridges (bus translation logic).
rgr, thanks for the clarification.
Posted on Reply
#14
Kreij
Senior Monkey Moderator
I believe that the max power draw from the slot under the 2.0 spec is around 75 watts. At 300 watts, this could mean that graphics cards that max out around 150 watts or so could get all their power from the slot and would not need a separate PSU cable.

This would be nice for cable management in systems that don't use the behemoth cards.
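
As a minimal sketch of that reasoning, assuming the commonly cited 75 W limit of today's slots, the 300 W figure from the article, and a hypothetical 150 W card:

```python
# Would a card need auxiliary PCIe power cables, given what the slot can supply?

def needs_aux_power(card_watts: float, slot_watts: float) -> bool:
    """True if the card draws more than the slot alone can deliver."""
    return card_watts > slot_watts

CARD_DRAW_W = 150  # hypothetical mid-range card

print(needs_aux_power(CARD_DRAW_W, slot_watts=75))   # True:  a 75 W slot still needs a PSU cable
print(needs_aux_power(CARD_DRAW_W, slot_watts=300))  # False: a 300 W slot could power it alone
```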
Posted on Reply
#15
btarunr
Editor & Senior Moderator
This would mean more power inputs for the motherboard. The power has to come from somewhere.
Posted on Reply
#16
Kreij
Senior Monkey Moderator
True, but the power cable for the mobo is easier to hide, as it is usually on the side near the PSU.
The graphics card cables have to either cut across the mobo or be routed around the side.

It was just a thought. :) Now if someone can figure out a way to eliminate all of the cables to the storage devices we will all have really clean cases. :toast:
Posted on Reply
#17
PCpraiser100
Holy shit, 8 Gigs per second, that's overkill and beyond...
Posted on Reply
#18
Kreij
Senior Monkey Moderator
PCpraiser100: Holy shit, 8 Gigs per second, that's overkill and beyond...
Remember what forum you are on. I don't think "overkill" is in our dictionary :D
Posted on Reply
#19
Mussels
Freshwater Moderator
Kreij: Remember what forum you are on. I don't think "overkill" is in our dictionary :D
It's in my thesaurus. It's in there next to 'barely enough' and 'acceptable'.
Posted on Reply
#20
Kreij
Senior Monkey Moderator
Mussels: It's in my thesaurus. It's in there next to 'barely enough' and 'acceptable'.
:laugh: When I type "overkill" into my thesaurus, it returns "Good enough for now".
Posted on Reply
#21
oli_ramsay
I'm a little concerned when people say PCI-E 1.1 is bottlenecking powerful cards. Do you think it's a possibility that my P35 is bottlenecking my 4870 because it's not PCI-E 2.0?
Posted on Reply
#22
Mussels
Freshwater Moderator
oli_ramsay: I'm a little concerned when people say PCI-E 1.1 is bottlenecking powerful cards. Do you think it's a possibility that my P35 is bottlenecking my 4870 because it's not PCI-E 2.0?
Unlikely. It's more the dual-GPU cards that would be bottlenecked.

1.1 offers the same per-lane bandwidth as 1.0; it's dual-GPU 2.0 cards in a 1.0 x16 slot that are the concern.
Posted on Reply
#23
Deleted member 3
oli_ramsay: I'm a little concerned when people say PCI-E 1.1 is bottlenecking powerful cards. Do you think it's a possibility that my P35 is bottlenecking my 4870 because it's not PCI-E 2.0?
Not bottlenecking per se, but it probably would perform slightly better on a 2.0 bus. I doubt the difference would be huge, though.
Posted on Reply
#24
zithe
PCpraiser100: Holy shit, 8 Gigs per second, that's overkill and beyond...
A system may be considered "Overkill" for about a month. Next month it's still fun. Next month it's an average system. Then next month you're window shopping all over again.

Quoting myself : "I'll buy a system and put a lot of money in it and turn it on every morning thinking 'wow, I can't believe I built this!' A year later, I turn it on thinking 'Ugh... I can't believe I built this...' "

Maybe not that drastic. If you get a 4870X2 now, you can play all games maxed with high AA (except the poorly coded game known as Crysis). In two years, you could probably still play games on max with a little AA, and a year later it's just maxed. Same case with my X1800XT. It plays UT3 admirably, and even has a decent shot at Crysis. I like my card, but it won't be enough for me for very long. XD Gonna give it to Mom eventually.
Posted on Reply
#25
DarkMatter
Tom's Hardware already tested this. Here's how performance scales with PCIe bandwidth:

www.tomshardware.com/reviews/pci-express-2-0,1915-12.html

I wouldn't say we can talk about bottlenecking; even x4 (equivalent to x8 on PCIe 1.1) offers very good results overall. Meaning you probably won't NEED PCIe 3.0, just as you don't need PCIe 2.0 right now. That doesn't mean it's not necessary, because it does help a bit, the more the better, and as long as it's totally backwards compatible, which it seems it will be, what's the problem?

On another note, this interface's 300 W is probably for Intel Larrabee, so that POS can run without requiring a nuclear plant next to your PC. :D
Posted on Reply