
Intel "Coffee Lake" Platform Detailed - 24 PCIe Lanes from the Chipset

boe

Joined
Nov 16, 2005
Messages
42 (0.01/day)
There's no foul play. Instead of dictating a fixed split between lanes, the manufacturers can configure them however they want (more or less). You'd probably be more unhappy if Intel dictated a fixed configuration instead.

Plus, you're really misinformed. Yes, the number of lanes hasn't gone up much, but the speed of each lane has. And since you can split lanes, you can actually connect a lot more PCIe 2.0 devices at once than you could a few years ago. But all in all, PCIe lanes have already become a scarce resource with the advent of NVMe. Luckily we don't need NVMe atm, but I expect this will change in a few years, so we'd better get more PCIe lanes by then.

I get that 3.0 is faster than 2.0, but since all my equipment is 3.0 I need more lanes, and even if they made it 4.0 my 3.0 equipment wouldn't perform any faster with insufficient lanes. It seems I still don't understand the situation, though. Let's say I get an 8700K CPU. Intel's website says it has 16 PCIe lanes, some websites say there are 28, and some say 40. Some people are talking about CPUs having more lanes that are for the motherboard. I don't know how many more there are on the motherboard that go to the slots (if any), and how do I know they aren't used up by resources like USB ports, onboard SATA and RAID controllers, onboard sound cards, onboard Wi-Fi, or onboard NIC ports? My guess is those alone might be using at least a dozen PCIe lanes. So again, unless manufacturers tell us how many lanes are available for the slots, fixed or otherwise, it seems like a crapshoot at best.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
I get that 3.0 is faster than 2.0, but since all my equipment is 3.0 I need more lanes, and even if they made it 4.0 my 3.0 equipment wouldn't perform any faster with insufficient lanes.
Yes, it would. You'd split one 4.0 lane's bandwidth into two 3.0 lanes (through a PCIe switch) and you'd be good.
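To put rough numbers on that (a back-of-the-envelope sketch, not anything from the spec or this thread), here is the approximate usable bandwidth per lane for each PCIe generation, and why a switch can turn one 4.0 lane's worth of bandwidth into roughly two 3.0 lanes:

# Approximate usable per-lane PCIe throughput after encoding overhead.
# Illustrative sketch only; real links lose a little more to protocol overhead.

GT_PER_S = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0, "4.0": 16.0}
ENCODING = {"1.0": 8 / 10, "2.0": 8 / 10, "3.0": 128 / 130, "4.0": 128 / 130}

def lane_gbytes_per_s(gen: str) -> float:
    """Usable GB/s for a single lane of the given PCIe generation."""
    return GT_PER_S[gen] * ENCODING[gen] / 8  # GT/s -> payload Gb/s -> GB/s

if __name__ == "__main__":
    for gen in GT_PER_S:
        print(f"PCIe {gen}: ~{lane_gbytes_per_s(gen):.2f} GB/s per lane")
    # One 4.0 lane (~1.97 GB/s) carries about as much as two 3.0 lanes,
    # which is the headroom a downstream switch can hand out.
    print(lane_gbytes_per_s("4.0") / lane_gbytes_per_s("3.0"))  # ~2.0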

That is the beauty of DMA: it allows devices to talk directly to each other with minimal interaction with the CPU or system RAM.
I know, but I never figured out how these PCIe complexes are built. If, like you say, they use DMA to talk to each other "directly", that would be great. And I suspect that's what's going on, but I never confirmed it.
 
Joined
Jan 17, 2006
Messages
932 (0.13/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taich Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
There is some very interesting "lane management" going on with the AMD X470 platform, check out the videos from L1 tech using the two on-board M.2 slots - one "steals" lanes from the slots, but it is pretty zippy.

More lanes are nice to have, though, and Threadripper is really the only game in town at that level/price.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.08/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
I get that 3.0 is faster than 2.0, but since all my equipment is 3.0 I need more lanes, and even if they made it 4.0 my 3.0 equipment wouldn't perform any faster with insufficient lanes. It seems I still don't understand the situation, though. Let's say I get an 8700K CPU. Intel's website says it has 16 PCIe lanes, some websites say there are 28, and some say 40. Some people are talking about CPUs having more lanes that are for the motherboard. I don't know how many more there are on the motherboard that go to the slots (if any), and how do I know they aren't used up by resources like USB ports, onboard SATA and RAID controllers, onboard sound cards, onboard Wi-Fi, or onboard NIC ports? My guess is those alone might be using at least a dozen PCIe lanes. So again, unless manufacturers tell us how many lanes are available for the slots, fixed or otherwise, it seems like a crapshoot at best.

That is why it is important that the chipset provide 24 lanes. The total Intel platform provides 40 lanes. 16 are attached directly to the CPU (or, more specifically, the northbridge inside the CPU die). The other 24 come from the PCH (southbridge) chip on the motherboard, which is in turn linked to the CPU with a PCI-E x4 link. But that x4 link only becomes an issue when transferring from a storage device to system memory (opening a program/game).

Most manufacturers make it pretty clear where the PCI-E lanes are coming from, or it is pretty easy to figure out. The 16 lanes from the CPU are supposed to be used only for graphics. The first PCI-E x16 slot is almost always connected to the CPU. If there is a second PCI-E x16 slot, then almost always the first slot will drop to x8 and the second will be an x8 as well, because they are sharing the 16 lanes from the CPU. The specs of the motherboard will tell you this. You'll see something in the specs like "single at x16 ; dual at x8 / x8". Some even say "single at x16 ; dual at x8 / x8 ; triple at x8 / x4 / x4". In that case, all three PCI-E x16 slots are actually connected to the CPU, but when multiple are used, they run at x8 or x4 speed.

Any other slot that doesn't share bandwidth like this is pretty much guaranteed to be using the chipset lanes and not the ones directly connected to the CPU.
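A small sketch of that budget, following the numbers in the post above (16 CPU lanes bifurcated across the x16 slots, 24 more behind the chipset); the onboard-device lane counts in the example are assumptions, since the real split varies per board and is listed in the manual:

# Sketch of how a mainstream Coffee Lake board hands out its lanes.
# The bifurcation table mirrors the "single at x16 / dual at x8+x8 /
# triple at x8+x4+x4" wording from motherboard spec sheets.

CPU_BIFURCATION = {1: [16], 2: [8, 8], 3: [8, 4, 4]}

def cpu_slot_widths(populated_slots):
    """Link width each CPU-attached x16 slot trains at."""
    return CPU_BIFURCATION[populated_slots]

def chipset_lanes_left(onboard_devices, pch_lanes=24):
    """Chipset lanes left for extra slots after onboard devices take theirs.
    The per-device counts passed in are hypothetical, not from any manual."""
    return pch_lanes - sum(onboard_devices.values())

if __name__ == "__main__":
    print(cpu_slot_widths(2))          # [8, 8] -- two cards sharing the CPU's 16
    onboard = {"M.2 NVMe": 4, "extra SATA": 2, "USB 3.1 controller": 2,
               "LAN": 1, "Wi-Fi": 1}   # assumed split, check your board's manual
    print(chipset_lanes_left(onboard)) # 14 lanes left behind the DMI x4 uplink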

I know, but I never figured out how these PCIe complexes are built. If, like you say, they use DMA to talk to each other "directly", that would be great. And I suspect that's what's going on, but I never confirmed it.

I remember back in the days when the CPU had to handle data transfers; it was so slow. Does anyone else remember when burning a CD would max out your CPU, and if you tried to open anything else on the computer, the burn would fail? That was because DMA wasn't a thing (and buffer underrun protection wasn't a thing yet either).

More lanes are nice to have, though, and Threadripper is really the only game in town at that level/price.

Threadripper isn't a perfect solution either. In fact, it introduces a new set of issues. The fact that Threadripper is really an MCM, with the PCI-E lanes from the CPU split up as if they were on two different CPUs, leads to issues. If a device connected to one die wants to talk to a device on the other, the traffic has to cross the Infinity Fabric, which introduces latency, and that really isn't much better than going over Intel's DMI link from the PCH to the CPU. It also had issues with RAID, due to the drives essentially being connected to two different storage controllers, but I think AMD has finally worked that one out.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
I remember back in the days when the CPU had to handle data transfers; it was so slow. Does anyone else remember when burning a CD would max out your CPU, and if you tried to open anything else on the computer, the burn would fail? That was because DMA wasn't a thing (and buffer underrun protection wasn't a thing yet either).

Yeah, UDMA was a big deal back in the day. However, PCIe is different. It's a hierarchical structure, and I don't know whether leaves can talk to each other without going all the way to the root. The engineer in me says you don't come up with a tree-like structure unless you want the branches to be able to work on their own. But the same engineer needs a spec/paper that says that is indeed the case.
 
Joined
Jan 17, 2006
Messages
932 (0.13/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taich Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
@newtekie1, I remember CPUs being maxed out by many tasks that are considered "light" today. Including installing whopping 32 MB hard drives into PCs back in 1988. ;)
 

boe

Joined
Nov 16, 2005
Messages
42 (0.01/day)
That is why it is important that the chipset provide 24 lanes. The total Intel platform provides 40 lanes. 16 are attached directly to the CPU (or, more specifically, the northbridge inside the CPU die). The other 24 come from the PCH (southbridge) chip on the motherboard, which is in turn linked to the CPU with a PCI-E x4 link. But that x4 link only becomes an issue when transferring from a storage device to system memory (opening a program/game).

Most manufacturers make it pretty clear where the PCI-E lanes are coming from, or it is pretty easy to figure out. The 16 lanes from the CPU are supposed to be used only for graphics. The first PCI-E x16 slot is almost always connected to the CPU. If there is a second PCI-E x16 slot, then almost always the first slot will drop to x8 and the second will be an x8 as well, because they are sharing the 16 lanes from the CPU. The specs of the motherboard will tell you this. You'll see something in the specs like "single at x16 ; dual at x8 / x8". Some even say "single at x16 ; dual at x8 / x8 ; triple at x8 / x4 / x4". In that case, all three PCI-E x16 slots are actually connected to the CPU, but when multiple are used, they run at x8 or x4 speed.

Any other slot that doesn't share bandwidth like this is pretty much guaranteed to be using the chipset lanes and not the ones directly connected to the CPU.

So you say it is clear, but it isn't clear to me; maybe I'm just obtuse or maybe my pants are too big. If I have an x16 video card, an x8 RAID card and an x8 NIC, all of which are PCIe 3.0, it sounds like the maximum for anything other than my x16 video card is x4, and I don't know if I even have x4 to spare for the other two cards, as I don't know how many PCIe lanes are used by the onboard sound, RAID, SATA, Wi-Fi, USB ports, etc.

I appreciate your help in clarifying this, as I'd really like to know. I can pick a processor and a specific motherboard if it helps you give me an answer that points me in the right direction. I use my PC for gaming, work and as an HTPC. I have 150 TB of storage and often transfer to my other PC, which has 100 TB of storage, so I hammer my NIC and RAID controller about once per week, usually about 4 TB of transfers with updates and changes.

Getting a clean PCIe answer has been as challenging for me as finding out when I can actually use my frequent flyer miles.
 
Joined
Jan 17, 2006
Messages
932 (0.13/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taich Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
You could look at a motherboard with built-in 5 or 10Gbe interface and put a similar NIC in the other PC as an add-in board? Then just connect them with a crossover cable. Use the original port for internet access.

Some of the Asrock boards have an Aquantia 5 or 10Gbe as well as a 1Gbe (or two) on them.

Once the NIC saturates it doesn't matter how fast the drives on either end are or what lanes they are connected to.
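To make that concrete with boe's earlier numbers (the roughly 4 TB weekly sync), here is a quick sketch of how long that transfer takes at various link speeds; it only counts line rate, so real-world times will be a bit longer:

# Rough transfer times for a 4 TB sync at different line rates.
# Shows the network link saturating long before a PCIe 3.0 x4/x8 slot would.

TERABYTE = 1e12  # bytes

def hours_to_move(num_bytes, link_gbit_per_s):
    """Hours to push num_bytes over a link with the given line rate (Gb/s)."""
    return num_bytes / (link_gbit_per_s / 8 * 1e9) / 3600

if __name__ == "__main__":
    payload = 4 * TERABYTE
    links = [("1 GbE", 1), ("10 GbE", 10), ("2x 10 GbE trunk", 20),
             ("PCIe 3.0 x4 (chipset uplink)", 4 * 8 * 128 / 130),
             ("PCIe 3.0 x8 (NIC/RAID slot)", 8 * 8 * 128 / 130)]
    for name, gbit in links:
        print(f"{name:30s} ~{hours_to_move(payload, gbit):5.2f} h")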
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.11/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
So you say it is clear, but it isn't clear to me; maybe I'm just obtuse or maybe my pants are too big. If I have an x16 video card, an x8 RAID card and an x8 NIC, all of which are PCIe 3.0, it sounds like the maximum for anything other than my x16 video card is x4, and I don't know if I even have x4 to spare for the other two cards, as I don't know how many PCIe lanes are used by the onboard sound, RAID, SATA, Wi-Fi, USB ports, etc.

I appreciate your help in clarifying this, as I'd really like to know. I can pick a processor and a specific motherboard if it helps you give me an answer that points me in the right direction. I use my PC for gaming, work and as an HTPC. I have 150 TB of storage and often transfer to my other PC, which has 100 TB of storage, so I hammer my NIC and RAID controller about once per week, usually about 4 TB of transfers with updates and changes.

Getting a clean PCIe answer has been as challenging for me as finding out when I can actually use my frequent flyer miles.

For server-grade tasks, maybe you should look at server-grade products. Both AMD and Intel offer products that easily fulfill those needs, with plenty of PCIe lanes.
 

boe

Joined
Nov 16, 2005
Messages
42 (0.01/day)
You could look at a motherboard with built-in 5 or 10Gbe interface and put a similar NIC in the other PC as an add-in board? Then just connect them with a crossover cable. Use the original port for internet access.

Some of the Asrock boards have an Aquantia 5 or 10Gbe as well as a 1Gbe (or two) on them.

Once the NIC saturates it doesn't matter how fast the drives on either end are or what lanes they are connected to.

I use one 1 Gb NIC for the internet, one 1 Gb NIC for configuring switches and firewalls, and the remaining 4x 10 Gb ports as a poor man's 10 Gb switch to go to my other PCs.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.08/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Yeah, UDMA was a big deal back in the day. However, PCIe is different. It's a hierarchical structure, and I don't know whether leaves can talk to each other without going all the way to the root. The engineer in me says you don't come up with a tree-like structure unless you want the branches to be able to work on their own. But the same engineer needs a spec/paper that says that is indeed the case.

It has to go back to the root, but in this case the root is the PCH (southbridge) or the CPU (northbridge on die). There are two roots, and those two roots can talk to each other over the DMI link between the CPU and the PCH.
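A toy model of that topology (my own sketch; it only answers which root a transaction between two devices has to climb to, not whether the silicon actually permits peer-to-peer there, which is the part that still needs a spec or datasheet):

# Two root complexes (CPU and PCH) joined by DMI, with endpoints under each.
# Structural model only -- device names below are just examples.

from dataclasses import dataclass, field

@dataclass
class Root:
    name: str
    endpoints: set = field(default_factory=set)

def turnaround_point(a, b, roots):
    """Where traffic between endpoints a and b has to turn around."""
    for root in roots:
        if a in root.endpoints and b in root.endpoints:
            return root.name          # both sit under the same root complex
    return "across the DMI link"      # different roots: CPU <-> PCH hop

if __name__ == "__main__":
    cpu = Root("CPU root complex", {"GPU (x16 slot)"})
    pch = Root("PCH root complex", {"NVMe SSD", "10 GbE NIC", "USB/SATA"})
    print(turnaround_point("NVMe SSD", "10 GbE NIC", [cpu, pch]))      # PCH root complex
    print(turnaround_point("GPU (x16 slot)", "NVMe SSD", [cpu, pch]))  # across the DMI link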

So you say it is clear, but it isn't clear to me; maybe I'm just obtuse or maybe my pants are too big. If I have an x16 video card, an x8 RAID card and an x8 NIC, all of which are PCIe 3.0, it sounds like the maximum for anything other than my x16 video card is x4, and I don't know if I even have x4 to spare for the other two cards, as I don't know how many PCIe lanes are used by the onboard sound, RAID, SATA, Wi-Fi, USB ports, etc.

I appreciate your help in clarifying this, as I'd really like to know. I can pick a processor and a specific motherboard if it helps you give me an answer that points me in the right direction. I use my PC for gaming, work and as an HTPC. I have 150 TB of storage and often transfer to my other PC, which has 100 TB of storage, so I hammer my NIC and RAID controller about once per week, usually about 4 TB of transfers with updates and changes.

Getting a clean PCIe answer has been as challenging for me as finding out when I can actually use my frequent flyer miles.

It is going to vary between different boards. Some motherboards wire all the x16 slots to the CPU, which means they have to split the 16 lanes between those slots. Others will only have the first two slots connected to the CPU, and others still connect only the first to the CPU.

What CPU and board are you looking at?
 
Joined
Jan 17, 2006
Messages
932 (0.13/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taich Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
I use one 1 Gb NIC for the internet, one 1 Gb NIC for configuring switches and firewalls, and the remaining 4x 10 Gb ports as a poor man's 10 Gb switch to go to my other PCs.

So you are using a 4-port 10 GbE card in each machine? What model? That sounds expensive. If not, I might want a couple. ;)
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
It has to go back to the root, but in this case the root is the PCH (southbridge) or the CPU (northbridge on die). There are two roots, and those two roots can talk to each other over the DMI link between the CPU and the PCH.

Yes, that's the part I'm missing: confirmation that the PCH root can handle things on its own (when both parties are connected to the PCH root, of course) and it doesn't have to go to the CPU.
 

boe

Joined
Nov 16, 2005
Messages
42 (0.01/day)
I was considering something like the 8700K combined with the ASRock Z370 Taichi motherboard; I've never used an ASRock before. I'm not married to any particular brand of motherboard or chipset (although I've had a number of MSI boards die, so I don't think I'd want another one of those; I replaced them with Gigabyte boards that have lasted three times as long and are still running). I typically use Gigabyte and might consider the Z370 AORUS Gaming 7.

I'm definitely going with the
NVidia 1180 Video card (probably asus strix)
X710-T4 network card
9460-16i raid controller
I'm also using a butt load of usb devices and the onboard nvme if it matters.
I will occasionally use wifi just for testing - not frequently.

So you are using a 4-port 10 GbE card in each machine? What model? That sounds expensive. If not, I might want a couple. ;)

In my new machine I'll have a quad-port card that I'll dual-port trunk to two machines. In my old machines I have the old Intel dual-port 10 Gb NICs (X540). I only use them for backups, so I'll have 20 Gbit, which is more than enough, as my RAID will max out well before that. It exceeds 10 Gbit, but I don't know by how much, as currently I'm only connected at 10 Gb.
 
Joined
Jan 17, 2006
Messages
932 (0.13/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taich Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
So why not look at e.g. the Asrock X470 (+ Ryzen 2xxx) with a 10Gbe on board and 2 x M.2s that you can RAID for >6GB/s? Or as mentioned, threadripper? Is it a budget issue?

What NICs are you using?
 

boe

Joined
Nov 16, 2005
Messages
42 (0.01/day)
So why not look at e.g. the Asrock X470 (+ Ryzen 2xxx) with a 10Gbe on board and 2 x M.2s that you can RAID for >6GB/s? Or as mentioned, threadripper? Is it a budget issue?

What NICs are you using?

The Ryzens, even Ryzen 2, get lower FPS in gaming than the Intel 8700K (Ryzen 2 is about 10% slower in gaming with a 1080 Ti; not sure how much it will be with the 1180). I don't do any video editing or Photoshop, so I don't care whether it is 4 cores or 400 cores. I use the machine for gaming as well, so I want the best gaming performance. As for the NIC: the X710-T4.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.08/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
I was considering something like the 8700K combined with the ASRock Z370 Taichi motherboard; I've never used an ASRock before. I'm not married to any particular brand of motherboard or chipset (although I've had a number of MSI boards die, so I don't think I'd want another one of those; I replaced them with Gigabyte boards that have lasted three times as long and are still running). I typically use Gigabyte and might consider the Z370 AORUS Gaming 7.

I'm definitely going with the
NVidia 1180 Video card (probably asus strix)
X710-T4 network card
9460-16i raid controller
I'm also using a butt load of usb devices and the onboard nvme if it matters.
I will occasionally use wifi just for testing - not frequently.


So on the Z370 Taichi, the three PCI-E x16 slots all run off the CPU. With your setup, your graphics card will get an x8 link in the top slot, and the NIC and RAID cards will get x4 links in the other two slots. Everything else on the motherboard runs off the 24 lanes from the chipset.

On the Z370 AORUS Gaming 7, the top two PCI-E x16 slots run off the CPU and the bottom one runs off lanes from the chipset. So your GPU will get an x8 link, whatever card you plug into the second slot will get an x8 link as well, and the card in the bottom slot will get an x4 link.
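A quick sanity check of that Z370 Taichi layout against the workload described earlier; the link widths come from the post above, while the "needed" throughput figures are my assumptions (two trunked 10 GbE ports on the X710-T4, a RAID array that tops out a bit above 10 Gbit):

# PCIe 3.0 link bandwidth per slot vs. a rough estimate of what each card needs.

PCIE3_LANE_GBPS = 8 * 128 / 130          # ~7.88 Gb/s usable per 3.0 lane

def link_gbytes(width):
    """Approximate GB/s of a PCIe 3.0 link of the given width."""
    return width * PCIE3_LANE_GBPS / 8

if __name__ == "__main__":
    layout = {                            # (link width, assumed need in GB/s)
        "GPU in top slot (x8)": (8, 0.0),  # x8 3.0 is rarely a GPU bottleneck
        "X710-T4 NIC (x4)":     (4, 2.5),  # two trunked 10 GbE ports
        "9460-16i RAID (x4)":   (4, 1.5),  # array "a bit above 10 Gbit"
    }
    for card, (width, need) in layout.items():
        have = link_gbytes(width)
        verdict = "OK" if have >= need else "tight"
        print(f"{card:24s} link ~{have:4.1f} GB/s, needs ~{need:3.1f} GB/s -> {verdict}")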
 

boe

Joined
Nov 16, 2005
Messages
42 (0.01/day)
Thanks, good to know. I did not realize that, so that is very helpful to me. I hate to get a 7900X, partially because of the cost, but also because I leave my computer on 24/7 and electricity is expensive here.
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.11/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
Thanks, good to know. I did not realize that, so that is very helpful to me. I hate to get a 7900X, partially because of the cost, but also because I leave my computer on 24/7 and electricity is expensive here.

The CPU idle power difference isn't much. The platform itself can draw more, but considering you have a whole heck of a lot running, the difference won't be as much as you think.
 

boe

Joined
Nov 16, 2005
Messages
42 (0.01/day)
You are correct. That still leaves me with about a $500 difference just to get more PCIe lanes, but the gaming won't be any faster. I'm not saying you are wrong; it just somehow feels like Intel is asking me to bend over without even giving me breakfast in the morning. I was hoping Intel would have a new lower-nm-fabrication i9 out this year, but that looks extremely unlikely at the moment. No idea whether the new Z390 chipset would benefit me for PCIe lanes, or what the X399 does to modernize the expensive Intel-processor motherboards, which usually seem about a generation behind most gaming PCs in everything other than PCIe lanes.
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.11/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
Intel is asking you to buy HEDT or their server platform if you want a server-grade number of PCIe lanes.
 
Joined
Jan 17, 2006
Messages
932 (0.13/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taich Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
The Ryzens, even Ryzen 2, get lower FPS in gaming than the Intel 8700K (Ryzen 2 is about 10% slower in gaming with a 1080 Ti; not sure how much it will be with the 1180). I don't do any video editing or Photoshop, so I don't care whether it is 4 cores or 400 cores. I use the machine for gaming as well, so I want the best gaming performance. As for the NIC: the X710-T4.

But if more cores get used better in the future, the Ryzen's 8 will work out better in the long run, and an extra 10 GbE is a nice thing to have IMO. I suppose it depends on how long you plan to keep a system.
 

boe

Joined
Nov 16, 2005
Messages
42 (0.01/day)
You may be right, but I sincerely doubt it will happen any time in the next 5 years. I only have 4 cores on my 6700K and it still beats the multi-core 2600X. I think an 8700K or 9700K with 6 or 8 cores will probably bury the 2600X. However, Ryzen 3 might have the design improved enough by then that it will outperform an 8700K, not because of a googolplex of cores but because the processor is faster.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
You may be right, but I sincerely doubt it will happen any time in the next 5 years. I only have 4 cores on my 6700K and it still beats the multi-core 2600X. I think an 8700K or 9700K with 6 or 8 cores will probably bury the 2600X. However, Ryzen 3 might have the design improved enough by then that it will outperform an 8700K, not because of a googolplex of cores but because the processor is faster.
It all comes down to the software you use most. Some apps just don't multithread easily (if at all), and those will always run better on fewer, faster cores. Those that do will benefit from as many cores as you can throw at them.
At the end of the day, beyond 3D rendering and video editing, few applications need tons of threads. I believe game engines have only recently (the past two years or so) broken the 4-core barrier. While I can't speak for others, I know my quad core will easily get the job done for a few more years.
 
Joined
Jan 17, 2006
Messages
932 (0.13/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taich Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
Indeed, in gaming I'm seeing some games favor one side and some the other between the competitors: https://www.pcper.com/reviews/Proce...-2600X-Review-Zen-Matures/1440p-and-4K-Gaming

It also takes a difference of around 10% to really notice, and smoothness (frame-time consistency) is also important; the "headline" benchmark charts often do not reflect "perceived gameplay". If you do things like file transfers in the background while gaming, extra cores should help. IMO, if you get to a 10% difference but are seeing 100+ fps with consistent frame times, you'd be hard pressed to feel a difference outside competitive FPS gaming.

More cores is the future IMO.

If I were buying a machine now to last up to 5 years, with maybe only a graphics card swap or three, I'd go for the better multi-threaded performance. Look what has happened with the 7700K: a year ago it was the gaming king; today I think it's "not so much" versus the higher-core-count CPUs. Then there are the platform considerations ...
 