
Intel Core i9 8-core LGA1151 Processor Could Get Soldered IHS, Launch Date Revealed

Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Same socket, in the same product stack = same series. It doesn't matter that they use a different die; the 2-core, 4-core, and 6-core Intels use different dies too, and they are still the same series of CPUs. You are not going to be able to successfully argue that the Ryzen 5 2400G and Ryzen 5 2600 are two different series of processors. They might have different cores in them, but AMD has made them the same series. What was said would have been true back when the APUs were separate from the mainstream desktop processors, on a completely different platform with a completely different naming scheme, but that is no longer the case. AMD has made them the same series as their traditional CPU line.
1) Intel doesn't have 2-core desktop dice, only 4- and 6-core ones. The rest are harvested/disabled.
2) The difference between Raven Ridge and Summit/Pinnacle Ridge is far bigger than between any mainstream Intel chips, regardless of differences in core count. The Intel 4+2 and 6+2 dice are largely identical except for the 2 extra cores. All Summit/Pinnacle Ridge chips (and Threadripper) are based off the same 2-CCX iGPU-less die (well, updated/tuned for Pinnacle Ridge and the updated process node, obviously). Raven Ridge is based off an entirely separate die design with a single CCX, an iGPU, and a whole host of other uncore components belonging to that. The difference is comparable to if not bigger than the difference between the ring-bus MSDT and the mesh-interconnect HEDT Intel chips.
3) If "Same socket, in the same product stack" is the rule, do you count Kaby Lake-X as the same series as Skylake-X?
4) "Same product stack" is also grossly misleading. From the way you present this, Intel has one CPU product stack - outside of the weirdly named low-end core-based Pentium and Celerons, that is, which seem to "lag" a generation or two in their numbering. They all use the same numbering scheme, from mobile i3s to HEDT 18-core i9s. But you would agree that the U, H and other suffixes for mobile chips place them in a different product stack, no? Or would you say that Intel has no mobile product stack? 'Cause if you think they do, then you have to agree that the G suffix of the desktop RR APUs also makes that a separate product stack. Not to mention naming: Summit and Pinnacle Ridge are "Ryzen". Then there's "Ryzen Threadripper". Then there's "Ryzen with Vega Graphics". Subsets? Sure. Both are. But still separate stacks.


That isn't how it works with DMA, the data does not have to flow back to the CPU to be moved around. Every bit of data transferred over the PCI-E bus isn't going through the CPU. The data flows through the chipset, so the 4-lane connection back to the CPU is almost never a bottleneck. The only time it is really a bottleneck is for the GPUs, which is why they are wired directly to the CPU and everything else happily flows through the chipset. Have you ever looked at how the HEDT boards are wired? Those extra CPU PCI-E lanes aren't used for storage... The only other time the x4 link between the chipset and CPU is stressed is loading data from a RAID0 M.2 NVMe setup into memory (program loading, game level loading, etc.). But you still get almost 4GB/s of transfer speed from the drives into memory. Are you really going to notice a faster transfer speed than that? Besides that, in situations where you are loading data from the drives into memory, those are almost always random read/write cases. And even the best drives on the market right now don't even break 1GB/s random read, so even if you had two in RAID0, you're not coming close to a bottleneck on the DMI link between the chipset and the CPU.
You're right that DMA alleviates this somewhat, but that depends on the workload. Is all you do with your SSDs copying stuff between them? If not, the data is going to go to RAM or the CPU. If you have a fast NIC, have you made sure that the drive you're downloading to/uploading from is connected off the PCH and not the CPU? 'Cause if not, you're - again - using that DMI link. And so on, and so on. The more varied your load, the more that link is being saturated. And again, removing the bottleneck almost entirely would not be difficult at all - Intel would just have to double the lanes for the uplink. This would require a tiny increase in die space on the CPUs and PCHes, and somewhat more complex wiring in the motherboard, but I'm willing to bet the increase in system cost would be negligible.
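To put rough numbers on the bandwidth side of this, here is a quick Python sketch comparing the DMI 3.0 uplink (treated as a PCIe 3.0 x4 equivalent) against chipset-attached NVMe traffic. The per-drive throughput figures are assumptions for illustration, not measurements of any particular drive.

# Back-of-the-envelope: DMI 3.0 uplink vs. chipset-attached NVMe traffic.
# Drive throughput numbers below are assumed, illustrative figures.

PCIE3_GBPS_PER_LANE = 8 * (128 / 130) / 8   # 8 GT/s, 128b/130b encoding -> ~0.985 GB/s
DMI3_GBPS = 4 * PCIE3_GBPS_PER_LANE          # DMI 3.0 is effectively PCIe 3.0 x4, ~3.94 GB/s

NVME_SEQ_GBPS = 3.5    # assumed sequential read per drive
NVME_RAND_GBPS = 1.0   # assumed 4K random read per drive

for drives in (1, 2):
    for label, per_drive in (("sequential", NVME_SEQ_GBPS), ("random", NVME_RAND_GBPS)):
        total = drives * per_drive
        verdict = "exceeds" if total > DMI3_GBPS else "fits within"
        print(f"{drives} drive(s), {label}: {total:.1f} GB/s {verdict} the {DMI3_GBPS:.2f} GB/s DMI uplink")

Under these assumptions, only a two-drive sequential load actually spills over the uplink; random reads stay comfortably inside it, which is roughly the point both sides are circling above.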


Bull. SATA, USB, and LAN are all provided by the chipset without using any of the 24 PCI-E lanes. All the extra peripherals likely would never need 12 PCI-E 3.0 lanes, even on a high-end board. You've got a sound card taking up 1 lane, maybe another LAN port taking up another, perhaps a wifi card taking up 1 more, and then maybe they add a USB3.1 controller taking 1 or maybe 2 more. Perhaps they even want to use an extra SATA controller taking 1 more. So the extras take maybe 5 lanes, call it 6 to be safe? Certainly not half of the 24 provided.
Apparently you're not familiar with Intel HSIO/Flex-IO or the feature sets of their chipsets. You're partially right that USB is provided - 2.0 and 3.0, but not 3.1, except on the 300 series (other than Z370, which is really just a rebranded Z270). Ethernet is done through separate controllers over PCIe, and SATA shares lanes with PCIe. Check out the HSIO lane allocation chart from AnandTech's Z170 walkthrough from the Skylake launch - the only major difference between this and Z270/370 is the addition of a sixth PCIe 3.0 x4 controller, for 4 more HSIO lanes. How they can be arranged/split (and crucially, how they can't) works exactly the same. Note that Intel's PCH spec sheets (first picture here) always say "up to" X number of USB ports/PCIe lanes and so on - because they are interchangeable. Want more than 6 USB 3.0 ports? That takes away an equivalent number of PCIe lanes. Want SATA ports? All of those occupy RST PCIe lanes, though at least some can be grouped on the same controller. Want dual Ethernet? Those will eat PCIe lanes too. And so on. The moral of the story: an implemented Intel chipset does not have as many available PCIe lanes as Intel advertises.
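As a toy illustration of that "up to" caveat, here is a minimal sketch of Flex-IO-style sharing. It assumes a Z270-like pool of 30 high-speed lanes where every enabled USB 3.0 port, SATA port or LAN controller claims a lane that could otherwise have been general-purpose PCIe; real chipsets fix which options each lane position can take, so this actually understates the constraints.

# Toy model of Intel HSIO/Flex-IO sharing (numbers assumed, roughly Z270-like):
# a fixed pool of high-speed lanes, where each enabled USB 3.0 port, SATA port
# or LAN controller takes a lane away from general-purpose PCIe.

HSIO_POOL = 30  # assumed Z270-class pool size

def leftover_pcie(usb3_ports: int, sata_ports: int, lan_controllers: int) -> int:
    """Crude estimate: one HSIO lane per enabled port/controller."""
    return max(HSIO_POOL - usb3_ports - sata_ports - lan_controllers, 0)

# A fairly typical high-end board: 8x USB 3.0, 6x SATA, 1x GbE.
print(leftover_pcie(usb3_ports=8, sata_ports=6, lan_controllers=1))  # 15, not the advertised "up to 24"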
That jibes pretty well with der8auer's recent look into the question of "can you solder an IHS yourself?", but with one major caveat: the difference in complexity and cost between doing a one-off like the process shown there and doing the same on an industrial scale should really not be underestimated. Intel already knows how to do this. They already own the tools and machinery, as they've done this for years. Intel can buy materials at bargain-basement bulk costs. Intel has the engineering expertise to minimize the occurrence of cracks and faults. And it's entirely obvious that an industrial-scale process like this would be fine-tuned to keep the soldering process from cracking dice or causing other failures.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,747 (3.29/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Well, let's face it. Overclocking is pretty dead with the clock speeds these chips are pushing from Turbo Boost out of the box. Because of this I don't really give a shit what's between the IHS and die.

You aren't a regular user. You use an alternative cooling method that far surpasses the stock cooler. This ensures you get that advertised turbo boost. The majority of users don't even understand how this works, let alone know that they can install a better cooler to avoid the throttling-level temps they aren't even aware of.

As for overclocking, we have yet to see what these particular chips can do... but aside from that, the bigger point is that Intel killed overclocking too, on all but the most expensive chips. I am confident that if we could overclock anything we wanted to, like we used to do up until Sandy Bridge, there would be a lot fewer users on this forum with the 8600K or 8700K; they would be using the much more affordable 8400 or an even lower model. I'd be perfectly happy with one of those $100ish quad-core i3 chips running at 5GHz, but Intel killed that possibility.
 
Joined
Aug 13, 2009
Messages
3,254 (0.58/day)
Location
Czech republic
Processor Ryzen 5800X
Motherboard Asus TUF-Gaming B550-Plus
Cooling Noctua NH-U14S
Memory 32GB G.Skill Trident Z Neo F4-3600C16D-32GTZNC
Video Card(s) AMD Radeon RX 6600
Storage HP EX950 512GB + Samsung 970 PRO 1TB
Display(s) HP Z Display Z24i G2
Case Fractal Design Define R6 Black
Audio Device(s) Creative Sound Blaster AE-5
Power Supply Seasonic PRIME Ultra 650W Gold
Mouse Roccat Kone AIMO Remastered
Software Windows 10 x64
That jibes pretty well with der8auer's recent look into the question of "can you solder an IHS yourself?", but with one major caveat: the difference in complexity and cost between doing a one-off like the process shown there and doing the same on an industrial scale should really not be underestimated. Intel already knows how to do this. They already own the tools and machinery, as they've done this for years. Intel can buy materials at bargain-basement bulk costs. Intel has the engineering expertise to minimize the occurrence of cracks and faults. And it's entirely obvious that an industrial-scale process like this would be fine-tuned to keep the soldering process from cracking dice or causing other failures.
Um, no. It's not about that. Look at the conclusion.
 
Joined
Nov 3, 2013
Messages
2,141 (0.53/day)
Location
Serbia
Processor Ryzen 5600
Motherboard X570 I Aorus Pro
Cooling Deepcool AG400
Memory HyperX Fury 2 x 8GB 3200 CL16
Video Card(s) RX 6700 10GB SWFT 309
Storage SX8200 Pro 512 / NV2 512
Display(s) 24G2U
Case NR200P
Power Supply Ion SFX 650
Mouse G703 (TTC Gold 60M)
Keyboard Keychron V1 (Akko Matcha Green) / Apex m500 (Gateron milky yellow)
Software W10
Um, no. It's not about that. Look at the conclusion.
"Intel has some of the best engineers in the world when it comes to metallurgy. They know exactly what they are doing and the reason for conventional thermal paste in recent desktop CPUs is not as simple as it seems."
Doesn't change the fact that they used probably the worst possible TIM from Skylake onwards. We probably wouldn't be having this entire TIM vs solder debate if Intel had used a higher-quality paste.
Intel has some of the best engineers; they also have some of the best people for catering to the investors. The latter are higher in the hierarchy.

"Micro cracks in solder preforms can damage the CPU permanently after a certain amount of thermal cycles and time."
Nehalem is soldered. It was released 10 years ago. People are still pushing those CPUs hard to this day. They still work fine.

"Thinking about the ecology it makes sense to use conventional thermal paste."
Coming from the guy known for extreme overclocking, where a setup can consume close to 1kW.
 
Joined
Apr 12, 2013
Messages
7,564 (1.77/day)
Um, no. It's not about that. Look at the conclusion.
Hmm nope, look at the last two pages. Look at AMD, look at Xeon. There is no scientific evidence that soldering harms the CPU, just some minor observations by der8auer.
 

phill

Moderator
Staff member
Joined
Jun 8, 2011
Messages
17,009 (3.43/day)
Location
Somerset, UK
System Name Not so complete or overkill - There are others!! Just no room to put! :D
Processor Ryzen Threadripper 3970X
Motherboard Asus Zenith 2 Extreme Alpha
Cooling Lots!! Dual GTX 560 rads with D5 pumps for each rad. One rad for each component
Memory Viper Steel 4 x 16GB DDR4 3600MHz not sure on the timings... Probably still at 2667!! :(
Video Card(s) Asus Strix 3090 with front and rear active full cover water blocks
Storage I'm bound to forget something here - 250GB OS, 2 x 1TB NVME, 2 x 1TB SSD, 4TB SSD, 2 x 8TB HD etc...
Display(s) 3 x Dell 27" S2721DGFA @ 7680 x 1440P @ 144Hz or 165Hz - working on it!!
Case The big Thermaltake that looks like a Case Mods
Audio Device(s) Onboard
Power Supply EVGA 1600W T2
Mouse Corsair thingy
Keyboard Razer something or other....
VR HMD No headset yet
Software Windows 11 OS... Not a fan!!
Benchmark Scores I've actually never benched it!! Too busy with WCG and FAH and not gaming! :( :( Not OC'd it!! :(
Yes, but even with SLI and multiple M.2 drives, the 40 lanes that an 8700K on a Z370 motherboard provides is more than enough. Even if you've got two GPUs and two high-speed M.2 drives, that's only 24 lanes total, leaving another 16 for other devices. There isn't that much more you can put on a board that needs 16 lanes of PCI-E bandwidth.

Please do excuse my ignorance, but don't GPUs use 16 lanes each (depending on the motherboard if you have dual GPUs in?)

I've only ever used single GPUs on the Zxx/Z1xx boards, but on X99 and such I've used dual up to quad GPUs.. But things were different back in the days of 920 D0's and the Classified motherboards I had then, I think..
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Um, no. It's not about that. Look at the conclusion.
What point in that conclusion don't I address? I suppose the environmental one (which is important! I really, really want electronics production to pollute less!), but how much gold and indium solder is required per CPU? An almost immeasurably small amount of gold (1-3 atoms thick layers don't weigh much), and perhaps a gram or two of indium. Plus teeny-tiny amounts of the other stuff too. Even producing hundreds of thousands of CPUs, the amounts of material required would be very very small for an industrial scale.
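Putting rough numbers on that, here is a small sketch; the per-CPU indium figure is just the "gram or two" guess above, not a datasheet value.

# Rough total-material estimate for indium solder, using the guessed
# "a gram or two per CPU" figure from above - purely illustrative.

indium_per_cpu_g = 1.5        # assumed, somewhere between "a gram or two"
cpus_produced = 500_000       # "hundreds of thousands of CPUs"

total_kg = indium_per_cpu_g * cpus_produced / 1000
print(f"~{total_kg:.0f} kg of indium for {cpus_produced:,} soldered CPUs")  # ~750 kg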
Please do excuse my ignorance, but don't GPUs use 16 lanes each (depending on the motherboard if you have dual GPUs in?)

I've only ever used single GPUs on the Zxx/Z1xx boards, but on X99 and such I've used dual up to quad GPUs.. But things were different back in the days of 920 D0's and the Classified motherboards I had then, I think..
Mainstream motherboards with SLI/CF support split the single PCIe x16 "PCIe Graphics" lane allocation they have available to x8+x8 when two GPUs are installed in the correct motherboard slots, simply because there are no more PCIe lanes to allocate. If there were more, this wouldn't happen, but that is only the case on HEDT platforms. As such, CF/SLI on mainstream platforms is (and has always been) x8+x8. CF can even go down to x8+x4+x4.

GPUs can, in essence, run on however many PCIe lanes you allocate to them (up to their maximum, which is 16). As such, you can run a GPU off an x1 slot if you wanted to - and it would work! - but it would be bottlenecked beyond belief (except for cryptomining, which is why they use cheapo x1 PCIe risers to connect as many GPUs as possible to their rigs). However, the difference between x8 and x16 is barely measurable in the vast majority of games, let alone noticeable. It's usually in the 1-2% range.
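To put numbers on that, here is the theoretical per-direction bandwidth of a PCIe 3.0 link at different widths (a quick sketch; encoding overhead only, protocol overhead ignored):

# Theoretical PCIe 3.0 bandwidth per link width (per direction).
# 8 GT/s per lane with 128b/130b encoding; protocol overhead not included.

PCIE3_GBPS_PER_LANE = 8 * (128 / 130) / 8  # ~0.985 GB/s

for width in (16, 8, 4, 1):
    print(f"x{width}: {width * PCIE3_GBPS_PER_LANE:5.2f} GB/s")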
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.08/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64

Point still stands: different dies, same series.

2) The difference between Raven Ridge and Summit/Pinnacle Ridge is far bigger than between any mainstream Intel chips, regardless of differences in core count. The Intel 4+2 and 6+2 dice are largely identical except for the 2 extra cores. All Summit/Pinnacle Ridge chips (and Threadripper) are based off the same 2-CCX iGPU-less die (well, updated/tuned for Pinnacle Ridge and the updated process node, obviously). Raven Ridge is based off an entirely separate die design with a single CCX, an iGPU, and a whole host of other uncore components belonging to that. The difference is comparable to if not bigger than the difference between the ring-bus MSDT and the mesh-interconnect HEDT Intel chips.

The CPU core design is still identical. The die has one CCX removed and a GPU added, but the CPU cores are identical to Ryzen cores.

3) If "Same socket, in the same product stack" is the rule, do you count Kaby Lake-X as the same series as Skylake-X?

Yep.

4) "Same product stack" is also grossly misleading. From the way you present this, Intel has one CPU product stack - outside of the weirdly named low-end core-based Pentium and Celerons, that is, which seem to "lag" a generation or two in their numbering. They all use the same numbering scheme, from mobile i3s to HEDT 18-core i9s. But you would agree that the U, H and other suffixes for mobile chips place them in a different product stack, no? Or would you say that Intel has no mobile product stack? 'Cause if you think they do, then you have to agree that the G suffix of the desktop RR APUs also makes that a separate product stack. Not to mention naming: Summit and Pinnacle Ridge are "Ryzen". Then there's "Ryzen Threadripper". Then there's "Ryzen with Vega Graphics". Subsets? Sure. Both are. But still separate stacks.

The product stack is the current generation of processors on the same socket.

Intel's current product stack on the 1151(300) socket ranges from the Celeron G4900 all the way up to the 8700K. The mobile processors are a completely different series and product stack.

You're right that DMA alleviates this somewhat, but that depends on the workload. Is all you do with your SSDs copying stuff between them? If not, the data is going to go to RAM or the CPU. If you have a fast NIC, have you made sure that the drive you're downloading to/uploading from is connected off the PCH and not the CPU? 'Cause if not, you're - again - using that DMI link. And so on, and so on. The more varied your load, the more that link is being saturated. And again, removing the bottleneck almost entirely would not be difficult at all - Intel would just have to double the lanes for the uplink. This would require a tiny increase in die space on the CPUs and PCHes, and somewhat more complex wiring in the motherboard, but I'm willing to bet the increase in system cost would be negligible.

Yes, removing the limit would be easy, but it hasn't become necessary. Even on their HEDT platform, the link hasn't become an issue. The fact is, with the exception of some very extreme fringe cases, the DMI link between the chipset and the CPU isn't a bottleneck. The increased cost would be negligible, but so would the increase in performance.

The drive is never connected to the CPU on the mainstream platform. Data will flow from the NIC directly to the drive through the PCH. The DMI link to the CPU never comes into play. And even if it did, a 10Gb NIC isn't coming close to maxing out the DMI link. It would use roughly a third of the DMI link's bandwidth.
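For reference, a quick worked estimate of that fraction (DMI 3.0 treated as a PCIe 3.0 x4 equivalent; the NIC figure is the raw 10 Gb/s line rate, so real-world Ethernet overhead would lower it a bit):

# How much of the DMI 3.0 uplink a 10 Gb NIC could occupy, at raw line rate.

DMI3_GBPS = 4 * 8 * (128 / 130) / 8   # PCIe 3.0 x4 equivalent, ~3.94 GB/s
NIC_GBPS = 10 / 8                     # 10 Gb/s -> 1.25 GB/s, ignoring Ethernet overhead

print(f"{NIC_GBPS / DMI3_GBPS:.0%} of the DMI uplink")  # ~32%, roughly a third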

Apparently you're not familiar with Intel HSIO/Flex-IO or the feature sets of their chipsets. You're partially right that USB is provided - 2.0 and 3.0, but not 3.1, except on the 300 series (other than Z370, which is really just a rebranded Z270). Ethernet is done through separate controllers over PCIe, and SATA shares lanes with PCIe. Check out the HSIO lane allocation chart from AnandTech's Z170 walkthrough from the Skylake launch - the only major difference between this and Z270/370 is the addition of a sixth PCIe 3.0 x4 controller, for 4 more HSIO lanes. How they can be arranged/split (and crucially, how they can't) works exactly the same. Note that Intel's PCH spec sheets (first picture here) always say "up to" X number of USB ports/PCIe lanes and so on - because they are interchangeable. Want more than 6 USB 3.0 ports? That takes away an equivalent number of PCIe lanes. Want SATA ports? All of those occupy RST PCIe lanes, though at least some can be grouped on the same controller. Want dual Ethernet? Those will eat PCIe lanes too. And so on. The moral of the story: an implemented Intel chipset does not have as many available PCIe lanes as Intel advertises.

Yes, you are correct, I was wrong about those bits not using PCI-E lanes from the PCH. But it still leaves 16 or 17 dedicated PCI-E lanes coming off the PCH even with SATA, USB 3.0, and a Gb NIC. So combined with the lanes from the CPU you still get 32 PCI-E lanes, more than enough for two graphics cards, two M.2 drives, and extra crap.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,747 (3.29/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Um, no. It's not about that. Look at the conclusion.

The only valid point there is the rarity of Indium, and environmental impacts and whatever other crap likely goes into sourcing it. The point about the longevity of solder, as well as the difficulty in doing it, makes zero sense. Intel has been soldering CPUs (which still work) for decades. They can do it right and there aren't adverse effects from it.

Please do excuse my ignorance, but don't GPUs use 16 lanes each (depending on the motherboard if you have dual GPUs in?)

More like "up to" 16 lanes. 16 lanes is optimal, but x8 or even x4 (if PCI-E 3.0) works just as well. You can take away lanes from the GPU to allocate them elsewhere... like that mountain of NICs and SSDs everybody seems to need.

What point in that conclusion don't I address? I suppose the environmental one (which is important! I really, really want electronics production to pollute less!), but how much gold and indium solder is required per CPU? An almost immeasurably small amount of gold (1-3 atoms thick layers don't weigh much), and perhaps a gram or two of indium. Plus teeny-tiny amounts of the other stuff too. Even producing hundreds of thousands of CPUs, the amounts of material required would be very very small for an industrial scale.

As I said before, a valid concern, likely the only one. That said, if they were gonna use paste, they could have done better than the garbage that's on them now. Surely they could have struck a deal with the CoolLab people or something. While inconvenient, I can always do that myself. I don't think I should have to and I would still prefer solder, but at least I can remedy that. I can also install a better cooler, like the h70 I'm using now. But there's still a very large group full of regular people who aren't really getting what they should be thanks to a nasty cocktail consisting of crappy stock cooling solutions, crappy paste rather than solder (or at least a superior paste), and sneaky marketing schemes including the words "up to".

There's no good reason to deliberately make them run hot. Sure it might "work", it might be "in spec", but they can do better than that. I've seen many times in the past that the lifetime of electronics is positively affected when they run nice and cool. I've also seen the lifetime of electronics affected negatively due to poorly designed, hot running and/or insufficiently cooled garbage.
 
Joined
Sep 25, 2012
Messages
2,074 (0.46/day)
Location
Jacksonhole Florida
System Name DEVIL'S ABYSS
Processor i7-4790K@4.6 GHz
Motherboard Asus Z97-Deluxe
Cooling Corsair H110 (2 x 140mm)(3 x 140mm case fans)
Memory 16GB Adata XPG V2 2400MHz
Video Card(s) EVGA 780 Ti Classified
Storage Intel 750 Series 400GB (AIC), Plextor M6e 256GB (M.2), 13 TB storage
Display(s) Crossover 27QW (27"@ 2560x1440)
Case Corsair Obsidian 750D Airflow
Audio Device(s) Realtek ALC1150
Power Supply Cooler Master V1000
Mouse Ttsports Talon Blu
Keyboard Logitech G510
Software Windows 10 Pro x64 version 1803
Benchmark Scores Passmark CPU score = 13080
Why not just buy an AMD CPU.......
Because the soldered i9 will be much faster than anything AMD has, for at least the next year. It should be the best upgrade since Sandy. Possible OC to 5.5 stable on water? I think the i9-9900K will be the best-selling CPU in 2018 (definitely if solder is true, and maybe even with paste).
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.08/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Because the soldered i9 will be much faster than anything AMD has, for at least the next year. It should be the best upgrade since Sandy. Possible OC to 5.5 stable on water? I think the i9-9900K will be the best-selling CPU in 2018 (definitely if solder is true, and maybe even with paste).

Ha! It's funny that people think high end and overclocking products make up a large portion of the products sold...

Solder vs TIM doesn't matter to 99% of the buyers.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,747 (3.29/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Ha! It's funny that people think high end and overclocking products make up a large portion of the products sold...

Solder vs TIM doesn't matter to 99% of the buyers.

Well, it kinda does, just not in such an obvious way. 99% of the buyers probably aren't even aware this "solder vs TIM" debacle even exists... but they are affected by it when their overheating, Colgate covered processor complete with dinky coaster cooler thermal throttles to "base" clocks or worse. "Mister Upto, I see you're up to no good again!"

That said, I reiterate that though I've been slamming Intel pretty hard over the thermal paste, they should also include better coolers. I think, even if soldered, that coaster cooler is okay for the really low power chips like Celeron or Pentium, but once you get up to i3 territory, they should at least use the full height cooler.

It's a shame... Intel has some great silicon, but they ruin it with their poor design choices when it comes to cooling. It's like they made a Ferrari engine, and put it in this:

 
Joined
Sep 7, 2017
Messages
3,244 (1.22/day)
System Name Grunt
Processor Ryzen 5800x
Motherboard Gigabyte x570 Gaming X
Cooling Noctua NH-U12A
Memory Corsair LPX 3600 4x8GB
Video Card(s) Gigabyte 6800 XT (reference)
Storage Samsung 980 Pro 2TB
Display(s) Samsung CFG70, Samsung NU8000 TV
Case Corsair C70
Power Supply Corsair HX750
Software Win 10 Pro
Well, it kinda does, just not in such an obvious way. 99% of the buyers probably aren't even aware this "solder vs TIM" debacle even exists... but they are affected by it when their overheating, Colgate covered processor complete with dinky coaster cooler thermal throttles to "base" clocks or worse. "Mister Upto, I see you're up to no good again!"

That said, I reiterate that though I've been slamming Intel pretty hard over the thermal paste, they should also include better coolers. I think, even if soldered, that coaster cooler is okay for the really low power chips like Celeron or Pentium, but once you get up to i3 territory, they should at least use the full height cooler.

It's a shame... Intel has some great silicon, but they ruin it with their poor design choices when it comes to cooling. It's like they made a Ferrari engine, and put it in this:


I was actually surprised during some recent shopping that Intel actually has a 2017 heatsink in their catalog, put on the suggestions pages for the Core-X. No way in hell was I going to buy that thing... even though I'm not doing any overclocking.
 

phill

Moderator
Staff member
Joined
Jun 8, 2011
Messages
17,009 (3.43/day)
Location
Somerset, UK
System Name Not so complete or overkill - There are others!! Just no room to put! :D
Processor Ryzen Threadripper 3970X
Motherboard Asus Zenith 2 Extreme Alpha
Cooling Lots!! Dual GTX 560 rads with D5 pumps for each rad. One rad for each component
Memory Viper Steel 4 x 16GB DDR4 3600MHz not sure on the timings... Probably still at 2667!! :(
Video Card(s) Asus Strix 3090 with front and rear active full cover water blocks
Storage I'm bound to forget something here - 250GB OS, 2 x 1TB NVME, 2 x 1TB SSD, 4TB SSD, 2 x 8TB HD etc...
Display(s) 3 x Dell 27" S2721DGFA @ 7680 x 1440P @ 144Hz or 165Hz - working on it!!
Case The big Thermaltake that looks like a Case Mods
Audio Device(s) Onboard
Power Supply EVGA 1600W T2
Mouse Corsair thingy
Keyboard Razer something or other....
VR HMD No headset yet
Software Windows 11 OS... Not a fan!!
Benchmark Scores I've actually never benched it!! Too busy with WCG and FAH and not gaming! :( :( Not OC'd it!! :(
Well, it kinda does, just not in such an obvious way. 99% of the buyers probably aren't even aware this "solder vs TIM" debacle even exists... but they are affected by it when their overheating, Colgate covered processor complete with dinky coaster cooler thermal throttles to "base" clocks or worse. "Mister Upto, I see you're up to no good again!"

That said, I reiterate that though I've been slamming Intel pretty hard over the thermal paste, they should also include better coolers. I think, even if soldered, that coaster cooler is okay for the really low power chips like Celeron or Pentium, but once you get up to i3 territory, they should at least use the full height cooler.

It's a shame... Intel has some great silicon, but they ruin it with their poor design choices when it comes to cooling. It's like they made a Ferrari engine, and put it in this:


That would be one proper sleeper car as well!! :) Intel ain't going to be getting my money....
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,747 (3.29/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
I was actually surprised during some recent shopping that Intel actually has a 2017 heatsink in their catalog, put on the suggestions pages for the Core-X. No way in hell was I going to buy that thing... even though I'm not doing any overclocking.

Interesting. No cooler included with those HEDT chips, but they're offering to sell one separately? I'm sure you could do better by far with a Hyper 212 or something. Seems to be the economical choice for improved cooling these days, much like the Arctic Freezer 64 Pro (AMD)/7(intel) was when I got into the game.

That would be one proper sleeper car as well!! :) Intel ain't going to be getting my money....

I know... I was rolling around with the idea of upgrading to an i5 8400, or more likely, an 8600k. I don't like their business practices, but as I mentioned earlier, it's still damn good silicon... and I'd eat the K series shit sandwich to stay off the locked platform (not just for more MHz, but faster RAM as well, which also never used to be an issue)... but AMD is looking damn good anymore. The Ryzen refresh currently available seems to do better with memory, where the OG Ryzen lineup suffered from memory compatibility issues... and with "Zen 2" around the corner, which should close, eliminate, or maybe even surpass the current gap in raw performance... yeah, AMD's looking good. Even if Intel reverses everything I've been slamming them for, AND increases performance even more AMD just might get my money anyway, just out of principle.
 
Joined
Sep 7, 2017
Messages
3,244 (1.22/day)
System Name Grunt
Processor Ryzen 5800x
Motherboard Gigabyte x570 Gaming X
Cooling Noctua NH-U12A
Memory Corsair LPX 3600 4x8GB
Video Card(s) Gigabyte 6800 XT (reference)
Storage Samsung 980 Pro 2TB
Display(s) Samsung CFG70, Samsung NU8000 TV
Case Corsair C70
Power Supply Corsair HX750
Software Win 10 Pro
Interesting. No cooler included with those HEDT chips, but they're offering to sell one separately? I'm sure you could do better by far with a Hyper 212 or something. Seems to be the economical choice for improved cooling these days, much like the Arctic Freezer 64 Pro (AMD)/7(intel) was when I got into the game.



I know... I was rolling around with the idea of upgrading to an i5 8400, or more likely, an 8600k. I don't like their business practices, but as I mentioned earlier, it's still damn good silicon... and I'd eat the K series shit sandwich to stay off the locked platform (not just for more MHz, but faster RAM as well, which also never used to be an issue)... but AMD is looking damn good anymore. The Ryzen refresh currently available seems to do better with memory, where the OG Ryzen lineup suffered from memory compatibility issues... and with "Zen 2" around the corner, which should close, eliminate, or maybe even surpass the current gap in raw performance... yeah, AMD's looking good. Even if Intel reverses everything I've been slamming them for, AND increases performance even more AMD just might get my money anyway, just out of principle.

Doh. Wait, I was wrong. It's a Thermal Solution Specification. I read it wrong.

Still though, they listed a dinky thermal product for Kaby Lake, if I recall. Which is still crazy.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Yes, you are correct, I was wrong about those bits not using PCI-E lanes from the PCH. But it still leaves 16 or 17 dedicated PCI-E lanes coming off the PCH even with SATA, USB 3.0, and a Gb NIC. So combined with the lanes from the CPU you still get 32 PCI-E lanes, more than enough for two graphics cards, two M.2 drives, and extra crap.
It's too late for me to answer your post in full (I'll do that tomorrow), but for now I'll say this: look into how these lanes can (and can't) be split. There are not 16 or 17 free lanes, as you can't treat them as individually addressable when other lanes on the controller are occupied. At best, you have 3 x4 groups free. That's the best case scenario. Which, of course, is enough. But that's not usually the case. Often, some of those lanes are shared between motherboard slots, m.2 slots or SATA slots for example, and you'll have to choose which one you want. Want NVMe? That disables the two SATA ports shared with it. Want an AIC in the x1 slot on your board? Too bad, you just disabled an m.2 slot. And so on.
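A tiny sketch of the kind of sharing map this produces on a finished board; the specific pairings here are made up for illustration, and the real ones live in each motherboard's manual.

# Hypothetical lane-sharing map, illustrating the "populate one thing,
# lose another" trade-offs described above. Pairings are invented examples.

SHARED_WITH = {
    "M2_1": ("SATA_4", "SATA_5"),   # filling M2_1 disables these SATA ports
    "M2_2": ("PCIE_X1_2",),         # filling M2_2 disables an x1 slot
    "PCIE_X1_2": ("M2_2",),
}

def disabled_by(populated):
    """Connectors lost because they share HSIO lanes with populated slots."""
    lost = set()
    for slot in populated:
        lost.update(SHARED_WITH.get(slot, ()))
    return lost - set(populated)

print(sorted(disabled_by({"M2_1", "M2_2"})))  # ['PCIE_X1_2', 'SATA_4', 'SATA_5']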
 
Joined
Sep 15, 2016
Messages
484 (0.16/day)
You aren't a regular user. You use an alternative cooling method that far surpasses the stock cooler. This ensures you get that advertised turbo boost. The majority of users don't even understand how this works, let alone know that they can install a better cooler to avoid the throttling-level temps they aren't even aware of.

Those coolers are available for $20.

The majority of users don't even know what CPU they have.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,747 (3.29/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Sure, a $20 cooler could likely do the job, but as you say, the majority of users don't even know what CPU they have... let alone what temp it's running at, or the fact that the cooler they have is likely insufficient and their CPU is throttling, or at least not boosting to where it should be.
 
Joined
Jun 28, 2014
Messages
2,388 (0.62/day)
Location
Shenandoah Valley, Virginia USA
System Name Home Brewed
Processor i9-7900X and i7-8700K
Motherboard ASUS ROG Rampage VI Extreme & ASUS Prime Z-370 A
Cooling Corsair 280mm AIO & Thermaltake Water 3.0
Memory 64GB DDR4-3000 GSKill RipJaws-V & 32GB DDR4-3466 GEIL Potenza
Video Card(s) 2X-GTX-1080 SLI & 2 GTX-1070Ti 8GB G1 Gaming in SLI
Storage Both have 2TB HDDs for storage, 480GB SSDs for OS, and 240GB SSDs for Steam Games
Display(s) ACER 28" B286HK 4K & Samsung 32" 1080P
Case NZXT Source 540 & Rosewill Rise Chassis
Audio Device(s) onboard
Power Supply Corsair RM1000 & Corsair RM850
Mouse Generic
Keyboard Razer Blackwidow Tournament & Corsair K90
Software Win-10 Professional
Benchmark Scores yes
In any case, I'm starting to enjoy decapitating Intel's latest CPU's : )
Ha! Me too!

But I was sweating bullets when I opened my 7900X. The delid turned out great.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,747 (3.29/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
The only benefit there is to being able to do it is you can run direct die if you're brave enough... I could delid a CPU without worrying about it too much, but going direct die I'd have a hard time with.
 
Joined
Feb 18, 2005
Messages
5,847 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
It's too late for me to answer your post in full (I'll do that tomorrow), but for now I'll say this: look into how these lanes can (and can't) be split. There are not 16 or 17 free lanes, as you can't treat them as individually addressable when other lanes on the controller are occupied. At best, you have 3 x4 groups free. That's the best case scenario. Which, of course, is enough. But that's not usually the case. Often, some of those lanes are shared between motherboard slots, m.2 slots or SATA slots for example, and you'll have to choose which one you want. Want NVMe? That disables the two SATA ports shared with it. Want an AIC in the x1 slot on your board? Too bad, you just disabled an m.2 slot. And so on.

It's so refreshing to find someone else who understands how PCIe lanes and FlexIO actually work on Intel chipsets. I guess AnandTech can produce as many easy-to-comprehend articles as they want, but people will still choose not to read those articles, then argue from a point of ignorance.
 
Joined
Feb 3, 2017
Messages
3,831 (1.33/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Joined
Jan 31, 2005
Messages
2,098 (0.29/day)
Location
gehenna
System Name Commercial towing vehicle "Nostromo"
Processor 5800X3D
Motherboard X570 Unify
Cooling EK-AIO 360
Memory 32 GB Fury 3666 MHz
Video Card(s) 4070 Ti Eagle
Storage SN850 NVMe 1TB + Renegade NVMe 2TB + 870 EVO 4TB
Display(s) 25" Legion Y25g-30 360Hz
Case Lian Li LanCool 216 v2
Audio Device(s) Razer Blackshark v2 Hyperspeed / Bowers & Wilkins Px7 S2e
Power Supply HX1500i
Mouse Harpe Ace Aim Lab Edition
Keyboard Scope II 96 Wireless
Software Windows 11 23H2 / Fedora w. KDE
Because the soldered i9 will be much faster than anything AMD has, for at least the next year. It should be the best upgrade since Sandy. Possible OC to 5.5 stable on water? I think the i9-9900K will be the best-selling CPU in 2018 (definitely if solder is true, and maybe even with paste).

May well be - we will have to see - it will be far more interesting to see the price/performance ratio.

It is a bit funny (IMO) - this whole debate, in particular the part that Intel is so and so much faster. Yes, maybe, in synthetic tests and benchmarks - but when it comes to real life, who the f... can see if the game is running with 2 fps more on an Intel vs. AMD?? It's like all people are doing nothing but running benchmarks all day long.....
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
The CPU core design is still identical. The die has one CCX removed and a GPU added, but the CPU cores are identical to Ryzen cores.

The product stack is the current generation of processors on the same socket.

Intel's current product stack on the 1151(300) socket ranges from the Celeron G4900 all the way up to the 8700K. The mobile processors are a completely different series and product stack.
Well, it seems like you've chosen where to draw your arbitrary delineator (socket+a very specific understanding of architecture), and can't be convinced otherwise. Oh well. I don't see a problem with two different product stacks existing on the same socket/platform - you do. That's your right, no matter how weird I find it. But at least be consistent: You say KBL-X is a part of a singular HEDT product stack, yet it's based on a completely different die, with fundamental design differences (including core layouts and low-level caches, plus the switch from a ring bus to a mesh interconnect between cores for SKL-X). SKL-X is not the same architecture as KBL-X. KBL-X is the same architecture as regular Skylake (just with some optimizations), but SKL and SKL-X are quite different. It's starting to sound more and more like your definition is "same socket, same numbered generation", with the architecture and feature set being irrelevant. I strongly disagree with that.



Yes, removing the limit would be easy, but it hasn't become necessary. Even on their HEDT platform, the link hasn't become an issue. The fact is, with the exception of some very extreme fringe cases, the DMI link between the chipset and the CPU isn't a bottleneck. The increased cost would be negligible, but so would the increase in performance.

The drive is never connected to the CPU on the mainstream platform. Data will flow from the NIC directly to the drive through the PCH. The DMI link to the CPU never comes into play. And even if it did, a 10Gb NIC isn't coming close to maxing out the DMI link. It would use roughly a third of the DMI link's bandwidth.
You're right, I got AMD and Intel mixed up for a bit there, as only AMD has CPU PCIe lanes for storage. Still, you have a rather weird stance here, essentially arguing that "It's not Intel's (or any other manufacturer's) job to push the envelope on performance." Should we really wait until this becomes a proper bottleneck, then complain even more loudly, before Intel responds? That's an approach that only makes sense if you're a corporation which has profit maximization as its only interest. Do you have the same approach to CPU or GPU performance too? "Nah, they don't need to improve anything before all my games hit 30fps max."

As for a 10GbE link not saturating the DMI link, you're right, but you need to take into account overhead and imperfect bandwidth conversions between standards (10GbE controllers have a max transfer speed of ~1000MB/s, which only slightly exceeds the 985MB/s theoretical max of a PCIe 3.0 x1 connection, but the controllers still require an x4 connection - there's a difference between internal bandwidth requirements and external transfer speeds) and bandwidth issues when transferring from multiple sources across the DMI link. You can't simply bunch all the data going from the PCH to the CPU or RAM together regardless of its source - the PCH is essentially a PCIe switch, not some magical data-packing controller (as that would add massive latency and all sorts of decoding issues). If the DMI link is transferring data from one x4 device, it'll have to wait to transfer data from any other device. Of course, switching happens millions if not billions of times a second, so "waiting" is a weird term to use, but it's the same principle as non-MU-MIMO WiFi: you might have a 1.7Gb/s max theoretical speed, but when the connection is constantly rotating between a handful of devices, performance drops significantly, both because each device only has access to a fraction of each second and because some performance is lost in the switching. PCIe is quite latency-sensitive, so PCIe switches prioritize fast switching over fancy data-packing methods.
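A crude way to picture the sharing aspect is a simple fair-share estimate (this sketch assumes perfect time-slicing and ignores switching overhead and bursty traffic, so it's only a rough illustration):

# Crude fair-share picture of several chipset devices contending for the
# single DMI 3.0 uplink at the same time.

DMI3_GBPS = 4 * 8 * (128 / 130) / 8   # ~3.94 GB/s

for active in (1, 2, 3, 4):
    print(f"{active} active device(s): ~{DMI3_GBPS / active:.2f} GB/s each")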

And, again: unless your main workload is copying data back and forth between drives, it's going to have to cross the DMI link to reach RAM and the CPU. If you're doing video or photo editing, that's quite a heavy load. Same goes for any type of database work, compiling code, and so on. For video work, a lot of people use multiple SSDs for increased performance - not necessarily in RAID, but as scratch disks and storage disks, and so on. We might not be quite at the point where we're seeing the limitations of the DMI link, but we are very, very close.

My bet is that Intel is hoping to ride this out until PCIe 4.0 or 5.0 reach the consumer space (the latter seems to be the likely one, given that it's arriving quite soon after 4.0, which has barely reached server parts), so that they won't have to spend any extra money on doubling the lane count. And they still might make it, but they're riding a very fine line here.

Yes, you are correct, I was wrong about those bits not using PCI-E lanes from the PCH. But it still leaves 16 or 17 dedicated PCI-E lanes coming off the PCH even with SATA, USB 3.0, and a Gb NIC. So combined with the lanes from the CPU you still get 32 PCI-E lanes, more than enough for two graphics cards, two M.2 drives, and extra crap.
As I said in my previous post: you can't just lump all the lanes together like that. The PCH has six x4 controllers, four of which (IIRC) support RST - and as such, either SATA or NVMe. The rest can support NVMe drives, but are never routed as such (as an m.2 slot without RST support, yet still coming off the chipset, would be incredibly confusing). If your motherboard only has four SATA ports, that occupies one of these RST ports, with three left. If there are more (another 1-4), two are gone - you can't use any remaining PCIe lanes for an NVMe drive when there are SATA ports running off the controller. A 10GbE NIC would occupy one full x4 controller - but with some layout optimization, you could hopefully keep that off the RST-enabled controllers. But most boards with 10GbE also have a normal (lower power) NIC, which also needs PCIe - which eats into your allocation. Then there's USB 3.1 Gen 2 - which needs a controller on Z370, is integrated on every other 300-series chipset, but requires two PCIe lanes per port (most controllers are 2-port, PCIe x4) no matter what (the difference is whether the controller is internal or external). Then there's WiFi, which needs a lane, too.

In short: laying out all the connections, making sure nothing overlaps too badly, that everything has bandwidth, and that the trade-offs are understandable (insert an m.2 in slot 2, you disable SATA4-7, insert an m.2 in slot 3, you disable PCIe_4, and so on) is no small matter. The current PCH has what I would call the bare minimum for a full-featured high-end mainstream motherboard today. They definitely don't have PCIe to spare. And all of these devices need to communicate with and transfer data to and from the CPU and RAM.
 