| System Name | Hotbox |
|---|---|
| Processor | AMD Ryzen 7 5800X, 110/95/110, PBO +150 MHz, CO -7,-7,-20(x6) |
| Motherboard | ASRock Phantom Gaming B550 ITX/ax |
| Cooling | LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14 |
| Memory | 32GB G.Skill FlareX 3200c14 @3800c15 |
| Video Card(s) | PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W |
| Storage | 2TB Adata SX8200 Pro |
| Display(s) | Dell U2711 main, AOC 24P2C secondary |
| Case | SSUPD Meshlicious |
| Audio Device(s) | Optoma Nuforce μDAC 3 |
| Power Supply | Corsair SF750 Platinum |
| Mouse | Logitech G603 |
| Keyboard | Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps |
| Software | Windows 10 Pro |
1) Intel doesn't have 2-core desktop dice, only 4- and 6-core. The rest are harvested/disabled.

Same socket, in the same product stack = same series. It doesn't matter that they use a different die - the 2-core, 4-core and 6-core Intels use different dies too, and they are still the same series of CPUs. You are not going to successfully argue that the Ryzen 5 2400G and Ryzen 5 2600 are two different series of processors. They might have different cores in them, but AMD has made them the same series. That would have been true back when the APUs were separate from the mainstream desktop processors, on a completely different platform with a completely different naming scheme, but that is no longer the case. AMD has made them the same series as their traditional CPU line.
2) The difference between Raven Ridge and Summit/Pinnacle Ridge is far bigger than that between any mainstream Intel chips, regardless of differences in core count. The Intel 4+2 and 6+2 dice are largely identical except for the 2 extra cores. All Summit/Pinnacle Ridge chips (and Threadripper) are based off the same 2-CCX, iGPU-less die (updated/tuned for Pinnacle Ridge and the newer process node, obviously). Raven Ridge is based off an entirely separate die design with a single CCX, an iGPU, and a whole host of other uncore components to go with it. The difference is comparable to, if not bigger than, the difference between the ring-bus MSDT and the mesh-interconnect HEDT Intel chips.
3) If "Same socket, in the same product stack" is the rule, do you count Kaby Lake-X as the same series as Skylake-X?
4) "Same product stack" is also grossly misleading. From the way you present this, Intel has one CPU product stack - outside of the weirdly named low-end core-based Pentium and Celerons, that is, which seem to "lag" a generation or two in their numbering. They all use the same numbering scheme, from mobile i3s to HEDT 18-core i9s. But you would agree that the U, H and other suffixes for mobile chips place them in a different product stack, no? Or would you say that Intel has no mobile product stack? 'Cause if you think they do, then you have to agree that the G suffix of the desktop RR APUs also makes that a separate product stack. Not to mention naming: Summit and Pinnacle Ridge are "Ryzen". Then there's "Ryzen Threadripper". Then there's "Ryzen with Vega Graphics". Subsets? Sure. Both are. But still separate stacks.
You're right that DMA alleviates this somewhat, but that depends on the workload. Is all you do with your SSDs copying stuff between them? If not, the data is going to go to RAM or the CPU. If you have a fast NIC, have you made sure that the drive you're downloading to/uploading from is connected to the CPU and not the PCH? 'Cause if not, you're - again - using that DMI link. And so on, and so on. The more varied your load, the more that link is being saturated. And again, removing the bottleneck almost entirely would not be difficult at all - Intel would just have to double the lanes for the uplink. This would require a tiny increase in die space on the CPUs and PCHes, and somewhat more complex wiring in the motherboard, but I'm willing to bet the increase in system cost would be negligible.

That isn't how it works with DMA; the data does not have to flow back to the CPU to be moved around. Not every bit of data transferred over the PCI-E bus goes through the CPU. The data flows through the chipset, so the 4-lane connection back to the CPU is almost never a bottleneck. The only thing it would really bottleneck is the GPUs, which is why they are wired directly to the CPU while everything else happily flows through the chipset. Have you ever looked at how the HEDT boards are wired? Those extra CPU PCI-E lanes aren't used for storage... The only other time the x4 link between the chipset and CPU is stressed is when loading data from a RAID0 M.2 NVMe setup into memory (program loading, game level loading, etc.). But you still get almost 4GB/s of transfer speed from the drives into memory. Are you really going to notice a faster transfer speed than that? Besides, in situations where you are loading data from the drives into memory, those are almost always random read/write cases. And even the best drives on the market right now don't break 1GB/s in random reads, so even if you had two in RAID0, you're not coming close to a bottleneck on the DMI link between the chipset and the CPU.
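For what it's worth, the bandwidth figures both sides are leaning on are easy to sanity-check. Below is a minimal back-of-the-envelope sketch: the DMI 3.0 ceiling follows from its PCIe 3.0 x4 electrical layer, while the per-drive throughput numbers are placeholder assumptions, not benchmarks of any particular SSD.

```python
# Back-of-the-envelope check on the DMI 3.0 numbers discussed above.
# DMI 3.0 is electrically PCIe 3.0 x4: 8 GT/s per lane with 128b/130b encoding.
# The drive figures further down are illustrative assumptions, not measurements.

LANES = 4
GT_PER_SEC = 8e9          # 8 GT/s per PCIe 3.0 lane
ENCODING = 128 / 130      # 128b/130b line-coding overhead

dmi_bytes_per_sec = LANES * GT_PER_SEC * ENCODING / 8
print(f"DMI 3.0 ceiling: {dmi_bytes_per_sec / 1e9:.2f} GB/s")  # ~3.94 GB/s

# Hypothetical two-drive RAID0 array hanging off the PCH:
seq_read_per_drive = 3.0e9    # GB/s-class sequential reads (assumed)
rand_read_per_drive = 0.8e9   # random reads typically stay under 1 GB/s (assumed)

print(f"RAID0 sequential: {2 * seq_read_per_drive / 1e9:.1f} GB/s -> exceeds the link")
print(f"RAID0 random:     {2 * rand_read_per_drive / 1e9:.1f} GB/s -> fits within the link")
```

Which of those numbers matters more is exactly the disagreement here: sequential copies from a fast array can outrun the uplink, while random-heavy loads generally won't.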
Apparently you're not familiar with Intel HSIO/Flex-IO or the feature sets of their chipsets. You're partially right that USB is provided - 2.0 and 3.0, but not 3.1; that's only on the 300 series, excepting the Z370 (which is really just a rebranded Z270). Ethernet is done through separate controllers over PCIe, and SATA shares lanes with PCIe. Check out the HSIO lane allocation chart from AnandTech's Z170 walkthrough from the Skylake launch - the only major difference between it and Z270/370 is the addition of a sixth PCIe 3.0 x4 controller, for 4 more HSIO lanes. How they can be arranged/split (and, crucially, how they cannot) works exactly the same. Note that Intel's PCH spec sheets (first picture here) always say "up to" X number of USB ports/PCIe lanes and so on, because they are interchangeable. Want more than 6 USB 3.0 ports? That takes away an equivalent number of PCIe lanes. Want SATA ports? All of those occupy RST PCIe lanes, though at least some can be grouped on the same controller. Want dual Ethernet? That will eat PCIe lanes too. And so on. The moral of the story: an implemented Intel chipset does not have as many free PCIe lanes as advertised.

Bull. SATA, USB, and LAN are all provided by the chipset without using any of the 24 PCI-E lanes. All the extra peripherals likely would never need 12 PCI-E 3.0 lanes, even on a high-end board. You've got a sound card taking up 1 lane, maybe another LAN port taking up another, perhaps a Wi-Fi card taking 1 more, and then maybe they add a USB 3.1 controller taking 1 or maybe 2 more. Perhaps they even want to use an extra SATA controller taking 1 more. So the extras take maybe 5 lanes, call it 6 to be safe? Certainly not half of the 24 provided.
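To make the "up to" point concrete, here is a rough tally of HSIO lane consumption for a hypothetical fully-kitted board. The 30-lane total matches the Z270-era HSIO count (Z170 had 26), but the board configuration and the one-lane-per-port simplification are assumptions for illustration, not a real wiring diagram.

```python
# Toy HSIO budget for a hypothetical Z270-class board.
# Simplification: each USB 3.0 port, SATA port, GbE PHY connection or PCIe lane
# consumes one HSIO lane; the actual per-lane muxing rules are ignored here.

TOTAL_HSIO_LANES = 30  # Z270-era figure; Z170 had 26

hypothetical_board = {
    "USB 3.0 ports":                       8,
    "SATA ports":                          6,
    "Intel GbE PHY link":                  1,
    "M.2 slots (PCIe x4 each)":            2 * 4,
    "Wi-Fi card (PCIe x1)":                1,
    "Add-in USB 3.1 controller (PCIe x2)": 2,
    "PCIe x1 slots / audio card":          3,
}

used = sum(hypothetical_board.values())
print(f"HSIO lanes used: {used} of {TOTAL_HSIO_LANES}")
print(f"Lanes left for general-purpose PCIe: {max(TOTAL_HSIO_LANES - used, 0)}")
```

Whether the leftover budget counts as plenty or as nowhere near the advertised 24 lanes is exactly the disagreement above; the tally just shows why the datasheet's "up to" wording matters.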
That jibes pretty well with der8auer's recent look into the question of "can you solder an IHS yourself?", but with one major caveat: the difference in complexity and cost between doing a one-off like the process shown there and doing the same on an industrial scale should really not be underestimated. Intel already knows how to do this. They already own the tools and machinery, as they've done this for years. Intel can buy materials at bargain-basement bulk prices. Intel has the engineering expertise to minimize the occurrence of cracks and faults. And it's entirely obvious that an industrial-scale process like this would be fine-tuned to keep the soldering process from causing cracked dice and other failures.