Sunday, June 28th 2020

Intel "Alder Lake-S" Confirmed to Introduce LGA1700 Socket, Technical Docs Out for Partners

Intel's Core "Alder Lake-S" desktop processor, which succeeds the 11th generation "Rocket Lake-S," is confirmed to introduce a new CPU socket, LGA1700. The socket has been churning in the rumor mill since 2019. LGA1700 is Intel's biggest mainstream desktop processor package change since LGA1156: the package is physically larger, and coolers designed for LGA115x sockets (Intel H# sockets) may not be compatible. The larger package is seen as an attempt by Intel to give itself real estate for future multi-chip modules, while the increased pin count points to more I/O being centralized on the processor package.

The "Alder Lake-S" silicon is rumored to be Intel's first 10 nm-class mainstream desktop processor, combining a hybrid core setup of "Golden Cove" high-performance CPU cores and "Gracemont" low-power cores. The processor's I/O feature-set is expected to include dual-channel DDR5 memory, PCI-Express gen 4.0, and possibly preparation for gen 5.0 on the motherboard side. In related news, Intel put out technical documentation for the "Alder Lake-S" microarchitecture and the LGA1700 socket. Access, however, is restricted to Intel's industrial partners. The company also put out documentation for "Rocket Lake-S."

34 Comments on Intel "Alder Lake-S" Confirmed to Introduce LGA1700 Socket, Technical Docs Out for Partners

#26
InVasMani
Yeah, IDK what makes the most sense, given the scheduler isn't perfect in the first place at flexibly adapting to the best-case user scenarios. They need to come up with some kind of practical perk to utilizing a big.LITTLE design, if that's what they're aiming at leveraging. If they could improve hyper-threading via a second package, maybe that's an option, but whether it's practical or possible I'm not certain, and I'm certainly not a technical design engineer on the matter. I mean, if they took some instruction sets off one package, placed them on the other, and used that die space to leverage the things it already does well, I can see that being a possibility. In a scenario like that, say you have 4 CPU dies with somewhat different instruction sets between them: some instructions might be universal and shared by all, while others are package-specific. Perhaps SSE 4.1/4.2/AVX2/FMA3 only go on one package while the other lacks those but makes up for it in other ways. Perhaps Intel puts a new FML instruction set on one package that covers security flaws in the chip designs; who knows, it's Intel, the rabbit hole's the limit.
#27
efikkan
InVasMani: Yeah, IDK what makes the most sense, given the scheduler isn't perfect in the first place at flexibly adapting to the best-case user scenarios. They need to come up with some kind of practical perk to utilizing a big.LITTLE design, if that's what they're aiming at leveraging. If they could improve hyper-threading via a second package, maybe that's an option, but whether it's practical or possible I'm not certain, and I'm certainly not a technical design engineer on the matter.
My issue with big-little designs is the state of OS schedulers (the ancient Windows scheduler in particular), and how far we should expect OS schedulers to be optimized for specific microarchitectures.

Just balancing HT is bad enough; hopefully, if Intel chooses a big-little design on some or all CPUs, they will drop HT, since the combination of the two would be a scheduling nightmare. If anything, big-little might be easier to balance than HT, if done properly. HT also has complex security considerations, as we've come to learn over the past couple of years, and it sometimes causes latency issues and cache pollution, which negatively impacts some tasks.
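As an aside on what "balancing" means in practice: absent a microarchitecture-aware scheduler, an application can steer its own placement via CPU affinity. A minimal, Linux-only sketch using Python's `os.sched_setaffinity` (the choice of logical CPU 0 is purely illustrative, and core numbering on a real hybrid chip is an assumption):

```python
import os

def pin_to_cpus(cpu_ids):
    # Restrict the calling process (pid 0 = self) to the given
    # logical CPUs. On a hybrid CPU this could, for example, keep a
    # latency-sensitive workload on the high-performance cores.
    os.sched_setaffinity(0, set(cpu_ids))
    # Return the affinity mask the kernel actually applied.
    return os.sched_getaffinity(0)

# Demonstration: pin to logical CPU 0 only.
allowed = pin_to_cpus([0])
```

This is a workaround, not a substitute for a scheduler that understands the topology, since a pinned process cannot migrate when the "big" cores are busy.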
InVasMani: I mean, if they took some instruction sets off one package, placed them on the other, and used that die space to leverage the things it already does well, I can see that being a possibility. In a scenario like that, say you have 4 CPU dies with somewhat different instruction sets between them: some instructions might be universal and shared by all, while others are package-specific. Perhaps SSE 4.1/4.2/AVX2/FMA3 only go on one package while the other lacks those but makes up for it in other ways. Perhaps Intel puts a new FML instruction set on one package that covers security flaws in the chip designs; who knows, it's Intel, the rabbit hole's the limit.
I'm very skeptical about having different instruction sets on different cores. I don't know if executables have all required ISA features flagged in their headers, but this would be a requirement.
An alternative would be to implement slower FPUs for the little cores, which use fewer transistors and more clock cycles, but retain ISA compatibility.
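For context on the header-flag question: today software typically discovers ISA features at run time (via CPUID) and assumes every core answers the same, which is exactly the assumption per-core ISA differences would break. A minimal Linux-only sketch reading the kernel's view of those flags from `/proc/cpuinfo` (the "flags" line is x86-specific):

```python
def cpu_flags():
    # Parse the feature-flag line for the first CPU listed in
    # /proc/cpuinfo. Returns an empty set on non-x86 systems,
    # where the line is named differently or absent.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
# A program might branch on a feature like AVX2 once, globally,
# then use it from any thread on any core.
has_avx2 = "avx2" in flags
```

If the little cores lacked AVX2, a check like this made on a big core would lead to illegal-instruction faults when the thread migrated.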
#28
InVasMani
efikkan: My issue with big-little designs is the state of OS schedulers (the ancient Windows scheduler in particular), and how far we should expect OS schedulers to be optimized for specific microarchitectures.

Just balancing HT is bad enough; hopefully, if Intel chooses a big-little design on some or all CPUs, they will drop HT, since the combination of the two would be a scheduling nightmare. If anything, big-little might be easier to balance than HT, if done properly. HT also has complex security considerations, as we've come to learn over the past couple of years, and it sometimes causes latency issues and cache pollution, which negatively impacts some tasks.

I'm very skeptical about having different instruction sets on different cores. I don't know if executables have all required ISA features flagged in their headers, but this would be a requirement.
An alternative would be to implement slower FPUs for the little cores, which use fewer transistors and more clock cycles, but retain ISA compatibility.
To that I'll argue that we should certainly expect OS schedulers to improve, the ancient Windows one in particular. I think HT is likely on its way to being phased out in favor of more physical cores, which do directly what HT was a stop-gap solution for in the first place, without the convoluted scheduling mess, especially on an OS like Windows that's poorly optimized in that area. I see HT as adding a layer of complexity that doesn't even achieve what it sets out to do. When it works it's fine, but when it doesn't it's a mess. HT also takes up some die space that might be better spent on more legitimate resources. The bigger issue with the Windows scheduler is scaling, which clearly looks to be at a bit of an impasse at the very high end with some of these extremely multi-core AMD chips. Basically, AMD has pushed the core count much higher than Microsoft seemingly anticipated, and Microsoft has been caught with its pants down. It's to the point where HT on those AMD chips is a real bottleneck and you're better off outright disabling it to avoid all the thread contention; at least that was my takeaway from some of Linus's benchmarks on one of those AMD Uber FX chips.

With all that thread contention in mind, getting rid of HT entirely could make more sense going forward, especially as we're able to utilize more legitimate physical cores today anyway. It's my belief that it would lead to more consistent and reliable performance as a whole. There are, of course, middle-ground solutions, like a single HT thread shared between two adjacent CPU cores and utilized in a round-robin fashion on an as-needed basis. By doing it that way, AMD/Intel could diminish the overall scheduler contention issue in extreme core-count scenarios until, or if, Microsoft is able to better resolve those concerns.
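Whether SMT is even enabled can be checked from userspace on Linux before deciding to work around it; a best-effort sketch, assuming the kernel's SMT control node is present (it may be absent on older kernels or in some VMs):

```python
def smt_active():
    # Read the kernel's SMT control node. Returns True/False,
    # or None if the node is unavailable on this system.
    try:
        with open("/sys/devices/system/cpu/smt/active") as f:
            return f.read().strip() == "1"
    except OSError:
        return None

state = smt_active()
```

The same interface also accepts writes of "on"/"off" (as root) to `/sys/devices/system/cpu/smt/control`, which is how SMT can be disabled without a BIOS trip.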

The big thing is that different options need to be on the table, presented and considered; the CPU has to evolve if it wishes to improve. Big.LITTLE certainly presents itself as an option to insert somewhere in the grand scheme of things going forward, but where it fits is hard to say, and the first design of something radically different always has the biggest learning curve.
#29
efikkan
InVasMani: I think HT is likely on its way to being phased out in favor of more physical cores, which do directly what HT was a stop-gap solution for in the first place, without the convoluted scheduling mess, especially on an OS like Windows that's poorly optimized in that area. I see HT as adding a layer of complexity that doesn't even achieve what it sets out to do. When it works it's fine, but when it doesn't it's a mess. HT also takes up some die space that might be better spent on more legitimate resources.
At the time, adding HT only cost a few percent extra transistors, and allowed some of the stalled clock cycles to be utilized by another thread. As CPUs have grown more efficient, this waste has been reduced, so there are fewer and fewer free cycles to use. Additionally, CPUs are only growing more reliant on cache and prefetching, so having two threads share these can certainly hurt performance. Thirdly, the ever-advancing CPU front-ends result in more and more complexity in handling HT/SMT safely (which they have failed to do). I believe we're at the point where it should be cut, as it makes less and less sense for non-server workloads.

One interesting thing is the rumors of AMD moving to 4-way SMT. I do sincerely hope this is either untrue or limited to server CPUs. This is the wrong move.
#30
Raevenlord
News Editor
I think Big-Little makes a lot of sense, especially considering the work Apple did with the M1, which smokes Intel's previous offerings on the platform and runs circles around most, if not all, solutions currently on the market when running native apps. A non-symmetrical core design seems the way to go to improve both power efficiency and performance. And if Apple could do it and implement it in macOS, I don't see why Microsoft couldn't.
#31
yotano211
Raevenlord: I think Big-Little makes a lot of sense, especially considering the work Apple did with the M1, which smokes Intel's previous offerings on the platform and runs circles around most, if not all, solutions currently on the market when running native apps. A non-symmetrical core design seems the way to go to improve both power efficiency and performance. And if Apple could do it and implement it in macOS, I don't see why Microsoft couldn't.
I guess you don't know anything about Apple, because they control everything from the hardware to the software.
I guess you don't know anything about Microsoft either; they only control the software. It's kind of hard for Microsoft and/or Intel to do something like Apple did with the M1; all the companies would have to sit down and agree to a joint multi-company agreement, and good luck with that.
Before your next post, I would advise teaching yourself more about tech companies.
#32
thesmokingman
yotano211: I guess you don't know anything about Apple, because they control everything from the hardware to the software.
I guess you don't know anything about Microsoft either; they only control the software. It's kind of hard for Microsoft and/or Intel to do something like Apple did with the M1; all the companies would have to sit down and agree to a joint multi-company agreement, and good luck with that.
Before your next post, I would advise teaching yourself more about tech companies.
Yeap. AMD wrote about this as the main problem with big/little: it's useless on Windows because the scheduler doesn't know how to manage or make use of it.
#33
yotano211
thesmokingman: Yeap. AMD wrote about this as the main problem with big/little: it's useless on Windows because the scheduler doesn't know how to manage or make use of it.
It's funny how a TPU "news editor" would write that and think it would be easy.
#34
thesmokingman
yotano211: It's funny how a TPU "news editor" would write that and think it would be easy.
Well, it does make sense, but it's not practical given this is MS we are talking about, and their scheduler. And for AMD's part, they were talking about developing a way of doing it in hardware since... well, MSFT. lol
