
Intel "Coffee Lake" Platform Detailed - 24 PCIe Lanes from the Chipset

Joined
Dec 31, 2009
Messages
19,372 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
See... and you hang on points like this (though I didn't call you a fanboy, I just said you lean one way, note) while not addressing the counterpoints (the usefulness of HBM now for the next couple of years/ PCIe lanes making a huge difference)... you are running out of steam and the straw man arguments are getting old.
 
Last edited:
Joined
Feb 16, 2017
Messages
494 (0.17/day)
See... and you hang on points like this (though I didn't call you a fanboy, I just said you lean one way, note) while not addressing the counterpoints (the usefulness of HBM now for the next couple of years / PCIe lanes making a huge difference)... you are running out of steam and the straw man arguments are getting old.
Throw a rock at an argument and you're likely to hit one where this happens.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Bad, evil, hated, does it matter? You all just declared I'm a fanboy against all logic to define that. It goes entirely against the narrative and you just keep on grabbing it and running with it. And every time I see it I'm like, BUT HOOOOOOOW!?!!?!
In this thread, this all started because you complained about the number of PCIe lanes on announced mainstream chips compared to the number of lanes on HEDT nine years ago. Were you expecting thanks?
 
Joined
Dec 15, 2016
Messages
630 (0.22/day)
He should just be banned. Very annoying user. He must have nightmares about Nvidia and Intel.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
He should just be banned. Very annoying user. He must have nightmares about Nvidia and Intel.
I wouldn't ban him just for being annoying. I'd just like him to open his eyes and post a little more on topic.
 
Joined
Dec 15, 2016
Messages
630 (0.22/day)
On other forums I visit, you are not allowed to constantly spread hate against a brand. It's one thing to share your opinion on an objective basis; it's another to spam and show everyone how much you hate some company. And that's what this guy has been doing since Ryzen released, to the point that no one takes him seriously anymore.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
On other forums I visit, you are not allowed to constantly spread hate against a brand. It's one thing to share your opinion on an objective basis; it's another to spam and show everyone how much you hate some company. And that's what this guy has been doing since Ryzen released, to the point that no one takes him seriously anymore.
I don't get the feeling he's spreading hate so much as praising AMD even when there's little reason to do so. But then again, I hadn't been paying much attention until recently, when he crossed the line into annoying territory.
 
Joined
Jan 27, 2015
Messages
454 (0.13/day)
System Name Marmo / Kanon
Processor Intel Core i7 9700K / AMD Ryzen 7 5800X
Motherboard Gigabyte Z390 Aorus Pro WiFi / X570S Aorus Pro AX
Cooling Noctua NH-U12S x 2
Memory Corsair Vengeance 32GB 2666-C16 / 32GB 3200-C16
Video Card(s) KFA2 RTX3070 Ti / Asus TUF RX 6800XT OC
Storage Samsung 970 EVO+ 1TB, 860 EVO 1TB / Samsung 970 Pro 1TB, 970 EVO+ 1TB
Display(s) Dell AW2521HFA / U2715H
Case Fractal Design Focus G / Pop Air RGB
Audio Device(s) Onboard / Creative SB ZxR
Power Supply SeaSonic Focus GX 650W / PX 750W
Mouse Logitech MX310 / G1
Keyboard Logitech G413 / G513
Software Win 11 Ent
Due to the integrated graphics, Intel only has so much die space before the chips get too costly to make.

Although they're not perfect comparisons, you can see that the I/O takes up a lot more space on the latter, and part of this is the PCIe root complex. So there are trade-offs to be made when it comes to die space used up by whatever part you want to stick inside a chip.

Likewise, AMD compromised on Ryzen: although we get 20 usable lanes from the CPU, the chipset is crippled by only offering PCIe 2.0 lanes. Surprisingly, the NVMe performance difference between Ryzen and Intel (at least in my case, using a Plextor M8PeG drive) is actually in favour of Intel in most tests, and this was using a Z170 board.

Regardless, it would be nice to see Intel adding another 4-8 PCIe lanes to the CPU that could be used for storage and say 10Gbps Ethernet.

Well, the 6950X had a 40-lane PCIe root complex. Even if that's halved, I'm pretty sure Intel could squeeze 20 lanes onto the 7700K and still keep the die size reasonably cost-effective.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,772 (2.42/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
Well, the 6950X had a 40-lane PCIe root complex. Even if that's halved, I'm pretty sure Intel could squeeze 20 lanes onto the 7700K and still keep the die size reasonably cost-effective.

That's what they do: 16 for GPUs, 4 for DMI...
 
Joined
Aug 22, 2010
Messages
764 (0.15/day)
Location
Germany
System Name Acer Nitro 5 (AN515-45-R715)
Processor AMD Ryzen 9 5900HX
Motherboard AMD Promontory / Bixby FCH
Cooling Acer Nitro Sense
Memory 32 GB
Video Card(s) AMD Radeon Graphics (Cezanne) / NVIDIA RTX 3080 Laptop GPU
Storage WDC PC SN530 SDBPNPZ
Display(s) BOE CQ NE156QHM-NY3
Software Windows 11 beta channel
Joined
Jan 27, 2015
Messages
454 (0.13/day)
System Name Marmo / Kanon
Processor Intel Core i7 9700K / AMD Ryzen 7 5800X
Motherboard Gigabyte Z390 Aorus Pro WiFi / X570S Aorus Pro AX
Cooling Noctua NH-U12S x 2
Memory Corsair Vengeance 32GB 2666-C16 / 32GB 3200-C16
Video Card(s) KFA2 RTX3070 Ti / Asus TUF RX 6800XT OC
Storage Samsung 970 EVO+ 1TB, 860 EVO 1TB / Samsung 970 Pro 1TB, 970 EVO+ 1TB
Display(s) Dell AW2521HFA / U2715H
Case Fractal Design Focus G / Pop Air RGB
Audio Device(s) Onboard / Creative SB ZxR
Power Supply SeaSonic Focus GX 650W / PX 750W
Mouse Logitech MX310 / G1
Keyboard Logitech G413 / G513
Software Win 11 Ent
That's what they do: 16 for GPUs, 4 for DMI...

Ahh, my bad. Forgot about the 4 lanes for DMI 3.0. Anyway, adding another 4 for storage would only bring the total to 24, still significantly less than 40.
 
Joined
May 28, 2005
Messages
4,994 (0.70/day)
Location
South of England
System Name Box of Distraction
Processor Ryzen 7 1800X
Motherboard Crosshair VI Hero
Cooling Custom watercooling
Memory G.Skill TridentZ 2x8GB @ 3466MHz CL14 1T
Video Card(s) EVGA 1080Ti FE. WC'd & TDP limit increased to 360W.
Storage Samsung 960 Evo 500GB & WD Black 2TB storage drive.
Display(s) Asus ROG Swift PG278QR 27" 1440P 165hz Gsync
Case Phanteks Enthoo Pro M
Audio Device(s) Phillips Fidelio X2 headphones / basic Bose speakers
Power Supply EVGA Supernova 750W G3
Mouse Logitech G602
Keyboard Cherry MX Board 6.0 (mx red switches)
Software Win 10 & Linux Mint
Benchmark Scores https://hwbot.org/user/infrared
I think things have gotten back on track, but on the off chance anyone wants to derail the topic again, I'll post this warning: no more petty squabbling, please. I don't mod this section, so I can't clean up, but I will be issuing thread bans to anyone who can't keep it impersonal and on topic.

I'm looking forward to seeing how Coffee Lake does. This whole debate about PCIe lanes and bandwidth is daft; it's got plenty for mainstream use. As others have said, having multiple M.2 drives, multiple GPUs and 10Gbit Ethernet isn't common and absolutely qualifies as enthusiast. That's not the market segment this is aimed at.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,747 (3.29/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Even if you had all that...

2 GPUs - 32 lanes
2 m.2 PCI-E drives - 8 lanes
10GbE - 4 lanes

Which brings us to a grand total of 44 lanes. Now... 16 lanes from the CPU, 24 from the chipset, so that's 40 lanes. You can drop the lane requirement by running the GPUs at 8x/8x, which is plenty even for the most powerful GPUs; now you need only 28 lanes for all that. Or you could run 16x/8x and you would need 36 lanes, still 4 to go before you hit the 40 offered by this platform. Is running one GPU in 8x mode really gonna hurt that bad? I think not.
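The tally above can be sketched as a quick Python calculation. This is just a toy lane budget: the device list and lane counts mirror the post, and the 40-lane total assumes 16 CPU lanes plus 24 chipset lanes as described in the article.

```python
# Hypothetical lane-budget tally for the configurations discussed above.
PLATFORM_LANES = 16 + 24  # 16 from the CPU + 24 from the chipset

def lanes_needed(devices):
    """Sum the PCIe lanes a list of (name, lanes) devices occupies."""
    return sum(lanes for _, lanes in devices)

full_speed = [("GPU 1", 16), ("GPU 2", 16), ("NVMe 1", 4), ("NVMe 2", 4), ("10GbE", 4)]
x8_x8      = [("GPU 1", 8),  ("GPU 2", 8),  ("NVMe 1", 4), ("NVMe 2", 4), ("10GbE", 4)]
x16_x8     = [("GPU 1", 16), ("GPU 2", 8),  ("NVMe 1", 4), ("NVMe 2", 4), ("10GbE", 4)]

print(lanes_needed(full_speed))  # 44 -- over the platform's 40
print(lanes_needed(x8_x8))       # 28 -- fits with room to spare
print(lanes_needed(x16_x8))      # 36 -- still 4 lanes short of the cap
```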
 

Сhris

New Member
Joined
Sep 14, 2017
Messages
1 (0.00/day)
No NVMe Raid? Not viable as the data passes through DMI 3.0?

Well, AMD took care of that.

You lost me, Intel.
 
Joined
Dec 31, 2009
Messages
19,372 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Like many people need or want that on a mainstream platform?? Well, you, I see. :)

Depends... some boards route at least one M.2 through the CPU, avoiding DMI anyway. Just do some research. ;)
 
Joined
Sep 17, 2014
Messages
22,673 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
I'm still wondering why we are all talking about PCIe SSDs when most people are never going to see their increased performance over a SATA SSD, and they're still more expensive storage. The vast majority doesn't even know they exist. It's a mainstream platform, and it has always had its limitations, its concessions even; this has always extended down to how the PCIe lanes are routed.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.79/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
You're sharing the equivalent of 4 PCIe 3.0 lanes by using DMI 3.0 through the PCH, though, so a single NVMe device could saturate the bandwidth the PCH provides. 24 PCIe lanes is nice, but not when it's driven by only 4 lanes' worth of bandwidth; NVMe RAID would literally run worse because it would strangle DMI.
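A back-of-the-envelope check of that bottleneck: DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link, so its ceiling can be estimated from the standard per-lane rate (8 GT/s with 128b/130b encoding). The figures are approximate, per direction, and ignore protocol overhead beyond encoding.

```python
# Rough DMI 3.0 bandwidth estimate, treating it as a PCIe 3.0 x4 link.
def pcie3_bandwidth_gbs(lanes):
    """Usable PCIe 3.0 bandwidth in GB/s: 8 GT/s per lane, 128b/130b encoding."""
    return lanes * 8 * (128 / 130) / 8

dmi_uplink = pcie3_bandwidth_gbs(4)  # everything behind the PCH shares this
x4_nvme    = pcie3_bandwidth_gbs(4)  # a single x4 NVMe drive has the same ceiling

# One fast drive can already fill the uplink, so striping a second
# chipset-attached drive adds no sequential throughput.
print(round(dmi_uplink, 2))  # ~3.94
```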
 
Joined
Dec 31, 2009
Messages
19,372 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Correct, when using chipset-attached lanes (though it doesn't run worse; there are gains, up to the point where they saturate the DMI bandwidth).

Not with CPU-connected lanes, though. A PCIe riser card comes to mind. Or a mixed RAID with a CPU-connected drive and a chipset-attached drive (which would cap lower than using all CPU-attached lanes).
 

nofear2017

New Member
Joined
Oct 13, 2017
Messages
1 (0.00/day)
Honest question: if I just wanted a full 16-lane GPU and one modern high-speed NVMe drive, I should be fine with this chipset, right?


Long story short, yes.

For daily usage, 24 chipset + 16 CPU PCIe lanes are more than enough for a two-GPU setup with one M.2 drive (960 Pro) + SATA SSD (850 EVO), but for anything beyond that configuration I would suggest the HEDT platform.
 

boe

Joined
Nov 16, 2005
Messages
42 (0.01/day)
Is it some weird cabal involving Intel and motherboard manufacturers that purposely makes the whole PCIe lanes thing confusing? Why make motherboards with four 16x PCIe slots if the CPUs can't handle them? I'm about to build my next combo gaming/storage server. I'll have a 16x 1180 video card, an 8x PCIe RAID controller with 4GB of cache, and an 8x PCIe 4x10Gb network card. All I know is that I need 32 PCIe lanes for my cards. No, I don't want to run my video card at 8x any more than I want to drive in rush hour traffic when the 405 is cut down to 2 lanes. I'm also very curious why there has been virtually no innovation in PCIe lanes on the standard processor. I'm not sure why Intel plays games with their PCIe lanes, but I'm starting to take some schadenfreude in their latest issues: viruses that attack their processors, no luck with the new lower-nm fabrication, and falling market share to AMD. It's hard to feel much sympathy or loyalty toward a company with no transparency to its own customer base.
 
Joined
Jan 17, 2006
Messages
932 (0.13/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taich Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
@boe, it sounds like Threadripper is the platform you need.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Is it some weird cabal involving Intel and motherboard manufacturers that purposely makes the whole PCIe lanes thing confusing? Why make motherboards with four 16x PCIe slots if the CPUs can't handle them? I'm about to build my next combo gaming/storage server. I'll have a 16x 1180 video card, an 8x PCIe RAID controller with 4GB of cache, and an 8x PCIe 4x10Gb network card. All I know is that I need 32 PCIe lanes for my cards. No, I don't want to run my video card at 8x any more than I want to drive in rush hour traffic when the 405 is cut down to 2 lanes. I'm also very curious why there has been virtually no innovation in PCIe lanes on the standard processor. I'm not sure why Intel plays games with their PCIe lanes, but I'm starting to take some schadenfreude in their latest issues: viruses that attack their processors, no luck with the new lower-nm fabrication, and falling market share to AMD. It's hard to feel much sympathy or loyalty toward a company with no transparency to its own customer base.
There's no foul play. Instead of Intel dictating a fixed split between lanes, the manufacturers can configure them (more or less) however they want. You'd probably be more unhappy if Intel dictated a fixed configuration instead.

Plus, you're really misinformed. Yes, the number of lanes hasn't gone up much, but the speed of each lane has. And since you can split lanes, you can actually connect a lot more PCIe 2.0 devices at once than you could a few years ago. But all in all, PCIe lanes have become a scarce resource with the advent of NVMe. Luckily we don't need NVMe at the moment, but I expect this will change in a few years, so we'd better get more PCIe lanes by then.
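The per-lane speedup is easy to quantify. The sketch below uses the standard per-lane rates for each PCIe generation (encoding overhead included); the numbers are approximate and per direction.

```python
# Approximate usable bandwidth per PCIe lane, per direction, in GB/s.
PER_LANE_GBS = {
    "1.0": 0.25,   # 2.5 GT/s, 8b/10b encoding
    "2.0": 0.5,    # 5 GT/s, 8b/10b encoding
    "3.0": 0.985,  # 8 GT/s, 128b/130b encoding
}

# One PCIe 3.0 lane moves almost as much data as a 2.0 x2 link, which is
# why the lane *count* staying flat understates the actual progress.
ratio = PER_LANE_GBS["3.0"] / PER_LANE_GBS["2.0"]
print(round(ratio, 2))  # 1.97
```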
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.08/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Is it some weird cabal involving Intel and motherboard manufacturers that purposely makes the whole PCIe lanes thing confusing? Why make motherboards with four 16x PCIe slots if the CPUs can't handle them? I'm about to build my next combo gaming/storage server. I'll have a 16x 1180 video card, an 8x PCIe RAID controller with 4GB of cache, and an 8x PCIe 4x10Gb network card. All I know is that I need 32 PCIe lanes for my cards. No, I don't want to run my video card at 8x any more than I want to drive in rush hour traffic when the 405 is cut down to 2 lanes. I'm also very curious why there has been virtually no innovation in PCIe lanes on the standard processor. I'm not sure why Intel plays games with their PCIe lanes, but I'm starting to take some schadenfreude in their latest issues: viruses that attack their processors, no luck with the new lower-nm fabrication, and falling market share to AMD. It's hard to feel much sympathy or loyalty toward a company with no transparency to its own customer base.

With the exception of the video card, the other devices don't need to be directly connected to the CPU. The minor latency introduced by going through the chipset first isn't noticed with storage controllers and NIC cards.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
With the exception of the video card, the other devices don't need to be directly connected to the CPU. The minor latency introduced by going through the chipset first isn't noticed with storage controllers and NIC cards.
One thing I never figured out is when I have two NVMe drives connected to the "southbridge", can they talk directly to each other or do they still have to go through the CPU?
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.08/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
One thing I never figured out is when I have two NVMe drives connected to the "southbridge", can they talk directly to each other or do they still have to go through the CPU?

That is the beauty of DMA: it allows devices to talk directly to each other with very minimal involvement from the CPU or system RAM.

Yes, the link back to the CPU can become a bottleneck, but it's a 4GB/s bottleneck. If you have a few NVMe SSDs in RAID, the link to the CPU could be the limiting factor. But will you notice it during actual use? Not likely. You won't get those sweet, sweet benchmark scores for maximum sequential read/write, but normal use isn't sequential read/write, so it doesn't really matter. And even then, 4GB/s of read/write speed is still damn fast.

But DMA means that data doesn't have to flow up to the CPU all the time. If you have a 10Gb/s NIC, and a NVMe SSD, data will flow directly from the SSD to the NIC.
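The bottleneck argument above reduces to a simple min(): chipset-attached drives share one uplink of roughly 4GB/s, so only workloads whose aggregate throughput exceeds that cap ever notice. The drive speeds below are made-up examples, not measurements.

```python
# Sketch: aggregate throughput of chipset-attached drives is capped by DMI.
DMI_LIMIT_GBS = 4.0  # approximate DMI 3.0 ceiling in GB/s

def effective_throughput(drive_speeds_gbs):
    """Combined sequential read speed, limited by the shared DMI uplink."""
    return min(sum(drive_speeds_gbs), DMI_LIMIT_GBS)

print(effective_throughput([3.5]))       # 3.5 -- one drive fits under the cap
print(effective_throughput([3.5, 3.5]))  # 4.0 -- a RAID 0 pair hits the DMI wall
```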
 
Last edited: