It depends on what you consider a "series": uArch or model number...
To me, saying Raven Ridge and Summit Ridge are the same series of CPUs is like saying the same for the 7700K and 7900X. They both start with a 7, but they couldn't be more different...
No, they're not. Sure, they share a model number series. But so does Threadripper. As does Intel HEDT with Intel MSDT, for that matter. Are those also the same series? No. RR is a separate series from standard Ryzen, as shown by the entirely different die used. Intel doesn't have a comparable series (due to having iGPUs across all MSDT chips), but that doesn't change the fact that RR is not the same as other Ryzen.
Same socket, in the same product stack = same series. It doesn't matter that they use a different die; the 2-core, 4-core, and 6-core Intels use different dies too, and they are still the same series of CPUs. You are not going to successfully argue that the Ryzen 5 2400G and Ryzen 5 2600 are two different series of processors. They might have different silicon in them, but AMD has made them the same series. What was said would have been true back when the APUs were separate from the mainstream desktop processors, on a completely different platform with a completely different naming scheme, but that is no longer the case. AMD has made them part of the same series as their traditional CPU line.
For demanding uses, you really can't say that Z/H370-based systems have "40 lanes". That's only true as long as no more than the equivalent of 4 of the 24 lanes from the PCH are in use at one time. Sure, this is an edge case not relevant to the vast majority of users, but doubling the DMI speed would make this bottleneck go away even for heavy users with 10GbE NICs and multiple SSDs. Also, the lack of support for lane bifurcation makes those lanes (including the CPU ones) far less flexible than they ought to be.
Another one drinking the "40 lanes" Kool-Aid. On Z270 there are 16 lanes directly from the CPU and 24 lanes from the chipset, and the chipset talks to the CPU over a 4-lane DMI 3.0 link. That means that if you have two 4-lane M.2 SSDs hanging off the chipset and you try to access them both at the same time (example: RAID), they are going to be bottlenecked.
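To put rough numbers on that claim (assuming ~985 MB/s of usable bandwidth per PCIe 3.0 lane after encoding overhead, and ~3,500 MB/s sequential reads per fast NVMe drive; illustrative figures, not measurements):

```python
# Back-of-envelope check of the chipset-link bottleneck described above.
# Assumed figures: PCIe 3.0 carries ~985 MB/s of usable bandwidth per lane
# after 128b/130b encoding; a fast NVMe SSD reads ~3,500 MB/s sequentially.

PCIE3_LANE_MBPS = 985          # usable MB/s per PCIe 3.0 lane (approx.)
DMI_LANES = 4                  # the chipset-to-CPU link is 4 lanes wide

dmi_bandwidth = PCIE3_LANE_MBPS * DMI_LANES   # ~3,940 MB/s total uplink
raid0_sequential = 2 * 3500                   # two fast SSDs striped

print(f"DMI link: ~{dmi_bandwidth} MB/s")
print(f"RAID0 sequential demand: ~{raid0_sequential} MB/s")
print("Bottlenecked!" if raid0_sequential > dmi_bandwidth else "Fits.")
```

So in the worst case (both drives doing full-speed sequential reads at once), the two drives can ask for almost double what the uplink can carry.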
That isn't how DMA works; the data does not have to flow back through the CPU to be moved around. Not every bit of data transferred over the PCI-E bus goes through the CPU. The data flows through the chipset, so the 4-lane link back to the CPU is almost never a bottleneck. The only place it really matters is for GPUs, which is why they are wired directly to the CPU while everything else happily flows through the chipset. Have you ever looked at how the HEDT boards are wired? Those extra CPU PCI-E lanes aren't used for storage...

The only other time the x4 link between the chipset and CPU is stressed is loading data from a RAID0 M.2 NVMe setup into memory (program loading, game level loading, etc.), but even then you still get almost 4 GB/s of transfer speed from the drives into memory. Are you really going to notice anything faster than that? Besides, situations where you are loading data from the drives into memory are almost always random read/write cases, and even the best drives on the market right now don't break 1 GB/s random read. So even if you had two in RAID0, you're not coming close to a bottleneck on the DMI link between the chipset and the CPU.
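The random-read counterpoint above in numbers (the ~1,000 MB/s per-drive random-read ceiling and ~3,940 MB/s DMI figure are the post's own rough estimates, not benchmarks):

```python
# Random reads, not sequential, dominate program/game loading, and even top
# drives stay under ~1,000 MB/s random read. Figures are illustrative.

DMI_BANDWIDTH_MBPS = 4 * 985       # ~3,940 MB/s over the 4-lane DMI link
RANDOM_READ_PER_DRIVE = 1000       # generous random-read figure per SSD

raid0_random = 2 * RANDOM_READ_PER_DRIVE    # two drives striped: ~2,000 MB/s
headroom = DMI_BANDWIDTH_MBPS - raid0_random

print(f"RAID0 random read: ~{raid0_random} MB/s")
print(f"DMI headroom left: ~{headroom} MB/s")
```

Under those assumptions the striped pair uses barely half the uplink, which is the post's point: the bottleneck only appears under sustained dual sequential loads.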
Furthermore, of the 24 chipset lanes at least half will already be taken by other peripherals (SATA, USB, LAN) so you will get maybe 12 lanes max off there... maybe.
Bull. SATA, USB, and LAN are all provided by the chipset without using any of the 24 PCI-E lanes. All the extra peripherals would likely never need 12 PCI-E 3.0 lanes, even on a high-end board. You've got a sound card taking up 1 lane, maybe another LAN port taking up another, perhaps a wifi card taking 1 more, and then maybe a USB 3.1 controller taking 1 or 2 more. Perhaps they even want an extra SATA controller taking 1 more. So the extras take maybe 5 lanes, call it 6 to be safe? Certainly not half of the 24 provided.
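Tallying that list (the per-device lane counts are the post's own estimates for a hypothetical high-end board, not a real board's layout):

```python
# Lane budget for "extra" peripherals beyond chipset-native SATA/USB/LAN,
# using the estimates from the post above (taking the high end of each range).

CHIPSET_LANES = 24
extras = {
    "sound card": 1,
    "second LAN port": 1,
    "wifi card": 1,
    "USB 3.1 controller": 2,   # "1 or 2 more"; assume 2 to be safe
    "extra SATA controller": 1,
}

used = sum(extras.values())
print(f"Extras use ~{used} of {CHIPSET_LANES} lanes; {CHIPSET_LANES - used} left over")
```

Even with every estimate rounded up, the extras stay well short of half the chipset's 24 lanes.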
I said Haswell: GT2 (i7) is 177 mm^2 while GT3 (Iris) is 264 mm^2, so not much smaller than SR. What about the discrepancy with Intel HEDT, since those are much bigger dies?
Cheaper, IMO. Why would it be easier, given they still make soldered CPUs and are seemingly going back to solder for the upcoming i9?
My point is that Ivy Bridge was the issue. My read on what happened is that when they were getting Ivy Bridge ready they ran into problems with the solder, and decided to just switch to TIM instead of finding an engineering solution that would let them keep using solder. Then they never bothered to switch back, either because they were too lazy or because they noticed the difference on the bottom line and liked the slightly heavier wallets. The fact that they switched the HEDT parts over to TIM too kind of points to them liking their heavy wallets. But my original point was that it started because of the much smaller Ivy Bridge die and the challenges it presented.