Sunday, June 28th 2020

Intel "Alder Lake-S" Confirmed to Introduce LGA1700 Socket, Technical Docs Out for Partners

Intel's Core "Alder Lake-S" desktop processor, which succeeds the 11th generation "Rocket Lake-S," is confirmed to introduce a new CPU socket, LGA1700. The new socket has been churning in the rumor mill since 2019. LGA1700 is Intel's biggest mainstream desktop processor package change since LGA1156, in that the package is now physically larger and may not retain cooler compatibility with LGA115x (Intel's socket H family). The larger package is seen as an attempt by Intel to give itself real estate for future multi-chip modules, while the increased pin count points to more I/O being centralized on the processor package.

The "Alder Lake-S" silicon is rumored to be Intel's first 10 nm-class mainstream desktop processor, combining a hybrid core setup of a number of "Golden Cove" high-performance CPU cores, and a number of "Gracemont" low-power cores. The processor's I/O feature-set is expected to include dual-channel DDR5 memory, PCI-Express gen 4.0, and possibly preparation for gen 5.0 on the motherboard-side. In related news, Intel put out technical documentation for the "Alder Lake-S" microarchitecture and LGA1700 socket. Access however, is restricted to Intel's industrial partners. The company also put out documentation for "Rocket Lake-S."

34 Comments on Intel "Alder Lake-S" Confirmed to Introduce LGA1700 Socket, Technical Docs Out for Partners

#1
watzupken
While it is not unexpected that Intel will eventually need to switch to a bigger socket, I think the introduction of a stop-gap LGA1200 that may last less than two years is quite annoying for the enthusiast segment. Also, I am not convinced that the big/little core strategy makes sense on a desktop. Moreover, switching between the low- and high-performance cores adds another layer of complexity, and I feel that switching also depends on the OS.
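
To make the OS-dependence point concrete: until schedulers are fully hybrid-aware, an application can pin a latency-sensitive thread to a core of its own choosing. Below is a minimal Linux sketch; the choice of logical CPU 0 as one of the big cores is purely an assumption for illustration and would have to be confirmed first (for example with the CPUID check shown after the article above).

/*
 * Illustration only: manually pin the calling thread to a chosen core on
 * Linux. Assumes (hypothetically) that logical CPU 0 is a big core.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);                       /* assumed big core: logical CPU 0 */

    /* pid 0 means "the calling thread" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("Pinned to logical CPU 0; latency-sensitive work goes here\n");
    return 0;
}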
Posted on Reply
#2
micropage7
Another processor with another socket. Sometimes I feel Intel just takes the shortcut: why would we need to stay on the same socket? Just release a new socket, less hassle for us.
Posted on Reply
#3
thesmokingman
It's like they are in the business of producing sockets... smh.
Posted on Reply
#4
yotano211
I have no such problems over on the laptop side. Who needs a new socket? Just get an entirely new laptop.
hahahaaa
Posted on Reply
#5
ToxicTaZ
Alder Lake unifies mobile and desktop together with big.LITTLE.

Yes, Intel is moving forward with the H6 LGA 1700 socket....
vs.
AMD, which is also changing its socket (AM5)....

Intel's first mainstream PCIe 5.0 part is Meteor Lake-S, Intel's first 7nm+ chip, on 700-series boards with the H6 LGA 1700 socket.

Alder Lake is Intel's second-generation PCIe 4.0 platform (600 series), also on the H6 LGA 1700 socket.

Intel only gets about two years out of PCIe 4.0, with Rocket Lake-S and Alder Lake-S.

The last time Intel released two desktop CPUs in the same year was the 7700K (Q1 2017) and the 8700K (Q4 2017).....

Now Rocket Lake-S launches in Q1 2021, with Alder Lake-S in Q4 2021,

with Meteor Lake-S (Q4 2022) replacing Alder Lake-S one year later....

The next three years are going to be very interesting from both AMD and Intel, with new sockets and PCIe 5.0, DDR5, USB4, WiFi 6E and 5G on AMD AM5 and Intel H6 LGA 1700 motherboards...

Very exciting news!
Posted on Reply
#6
InVasMani
watzupkenWhile it is not unexpected that Intel will eventually need to switch to a bigger socket, I think the introduction of a stop-gap LGA1200 that may last less than two years is quite annoying for the enthusiast segment. Also, I am not convinced that the big/little core strategy makes sense on a desktop. Moreover, switching between the low- and high-performance cores adds another layer of complexity, and I feel that switching also depends on the OS.
Yeah, I can't imagine that the people who made such a huge deal out of Ryzen CCX latency would be fond of the big/little approach at all, honestly. To be fair, I don't consider big/little an outright terrible thing myself.
Posted on Reply
#7
1d10t
Are these even shocking anymore?
Posted on Reply
#8
Tomorrow
watzupkenI think the introduction of a stop-gap LGA1200 that may last less than two years is quite annoying
Two? More like less than one. It launched in May 2020 and will likely get its last upgrade in Q3-Q4 2020 with Rocket Lake. After that it's a dead-end platform.
Posted on Reply
#9
cucker tarlson
How about one socket for the big cores and another one for the small cores?
Posted on Reply
#10
ppn
LGA1200 is a good socket for its purposes. If you buy an 8-core 10700F now, the usability is at least 10 years; that is not bad at all. Then again, it's probably better not to do that, because Rocket Lake might bring much better IPC, just like Sandy Bridge did. Sandy got replaced by 22nm pretty quickly, but who cared? 14nm is mostly fine if you don't get into the overclocking and power spiral thing.

So LGA1700 just takes over from there, adds more PCIe lanes, and maybe the chipset moves onto the CPU. There is no need for a chipset to sit on the motherboard.
Posted on Reply
#11
cucker tarlson
ppnLGA1200 is a good socket for its purposes. If you buy an 8-core 10700F now, the usability is at least 10 years; that is not bad at all. Then again, it's probably better not to do that, because Rocket Lake might bring much better IPC, just like Sandy Bridge did. Sandy got replaced by 22nm pretty quickly, but who cared? 14nm is mostly fine if you don't get into the overclocking and power spiral thing.

So LGA1700 just takes over from there, adds more PCIe lanes, and maybe the chipset moves onto the CPU. There is no need for a chipset to sit on the motherboard.
The 10700F sure is a swell buy for gaming.
They sell at 3700X prices here.

I was considering one, but decided to stick with a 10th-gen i5 because the savings are big and gaming performance is still previous-gen i7 / current-gen Ryzen 7 level. When I need more, I'd like it to come with PCIe 4.0 too.
Posted on Reply
#12
bonehead123
thesmokingmanIt's like they are in the business of producing sockets... smh.
AND they are also in the business of making LOTS of MONEY, which means they are experts at milking every single part of the PC ecosystem every single time they get even the smallest chance to squeeze moar & moar of people's hard-earned moolah out of them :)

The 2-year lifespan is exactly why I am skipping the whole LGA1200/Z490 "no PCIe 4.0 or other real improvements" nonsense, etc. etc...
Posted on Reply
#13
Logoffon
I wonder if someone from one of their partners will be brave enough to tell us what's on that page :D
Posted on Reply
#14
Tomorrow
ppnLGA1200 is a good socket for its purposes. If you buy an 8-core 10700F now, the usability is at least 10 years; that is not bad at all. Then again, it's probably better not to do that, because Rocket Lake might bring much better IPC, just like Sandy Bridge did. Sandy got replaced by 22nm pretty quickly, but who cared? 14nm is mostly fine if you don't get into the overclocking and power spiral thing.

So LGA1700 just takes over from there, adds more PCIe lanes, and maybe the chipset moves onto the CPU. There is no need for a chipset to sit on the motherboard.
10 years? That's rich. Hey, I'm not debating whether it will last that long; given good cooling and a decent PSU, it will. But workloads now and workloads 10 years from now may be very different. Not to mention that no one should be buying Intel's 14nm anymore. They have extracted every bit of performance from it, and it's riddled with constant security issues. Even if those issues are hard to exploit in the real world, the mandatory security patches are real and they decrease performance.

I would not want to be stuck on a platform where I lose performance every year, not to mention the advances the next two or three generations will bring.
A few years from now these 14nm parts will be seen as ancient and slow. Especially if Intel finally gets its IPC increases rolling again. By the end of this decade we will have double or even triple the IPC of these parts. Quadruple the cores in mainstream, 3D stacked memory and new instructions. Optimal use would be 2-5 years; once you go past 5 you really start to notice the slowdowns on an older platform. I used a 2500K from 2012 to 2019, and the last two years were painful on a 4c/4t CPU, made worse by the Meltdown/Spectre patches that decreased performance further.
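
On the mitigation point: Linux reports which Meltdown/Spectre-class mitigations are in effect under /sys/devices/system/cpu/vulnerabilities/. Below is a minimal sketch that just prints a few of those entries; the exact set of files depends on the kernel version, and it is an illustration only.

/*
 * Illustration only: print a few of the mitigation status entries that
 * recent Linux kernels expose under /sys/devices/system/cpu/vulnerabilities/.
 */
#include <stdio.h>

int main(void)
{
    const char *names[] = { "meltdown", "spectre_v1", "spectre_v2", "mds" };
    char path[128], line[256];

    for (unsigned i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/vulnerabilities/%s", names[i]);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;                        /* entry not present on this kernel */
        if (fgets(line, sizeof(line), f))
            printf("%-10s %s", names[i], line);  /* line already ends in '\n' */
        fclose(f);
    }
    return 0;
}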
Posted on Reply
#15
cucker tarlson
Tomorrowno one should be buying Intel's 14nm anymore.
Why?
If that 14nm part serves them better for what they do.
TomorrowA few years from now these 14nm parts will be seen as ancient and slow.
But 7nm R3000 will magically stay fresh and modern, right?
TomorrowBy the end of this decade we will have double or even triple the IPC of these parts. Quadruple the cores in mainstream, 3D stacked memory and new instructions.
Yeah, right.
In 2010 an enthusiast mainstream platform could have a 970 with 6/12 and DDR3; now it's a 10900 with 10/20 and DDR4.
So yeah, quadruple, quintuple, shmuple.
Posted on Reply
#16
Tomorrow
cucker tarlsonWhy?
That's like buying 16nm Pascal brand new today despite it being 4 years old and on the cusp of a 7nm release from the same company. And yet in the CPU world that's somehow OK?
You think people would buy this 14nm++++ crap if Intel had a faster 10nm or even 7nm product? Of course not. But because they don't, people try to justify it and find reasons to recommend this 14nm crap.
cucker tarlsonIf that 14nm part serves them better for what they do.
And what would that be? An imperceptible 5% better average with a 2080 Ti @ 1080p low-medium settings?
cucker tarlsonBut 7nm R3000 will magically stay fresh and modern, right?
Better than this "9 month" platform at the very least.
cucker tarlsonYeah, right.
In 2010 an enthusiast mainstream platform could have a 970 with 6/12 and DDR3; now it's a 10900 with 10/20 and DDR4.
So yeah, quadruple, quintuple, shmuple.
You're comparing an Extreme CPU then with mainstream now. Sure, if I take a 64c/128t 3990X and 5 GHz DDR4 today, I too could say in 10 years: look, there has not been much progress.
But that would be blatantly false, as the performance of the 970 from 2010 can be had today (and even better, due to much better IPC) for a fraction of what it cost back then.

Double IPC - Intel's own estimates vs. Skylake for the next ~4 years.
Quadruple the cores - We had 4c/8t high-end mainstream CPUs only 3 years ago. Now we have 16c/32t mainstream high-end CPUs. Only in the span of 3 years! If you think this is it and we will sit at 16c/32t for the next decade, you need to learn your history. 64c/128t will be the high-end mainstream CPU in a decade, with at least double the IPC of today's parts and likely twice-as-fast DDR5 (vs DDR4) and PCIe 5.0 or 6.0. And that's likely not the highest core/thread-count option either.

Except where now it costs $4k for the CPU alone, in a decade it will cost $400. That's how progress works.
Posted on Reply
#17
cucker tarlson
TomorrowAnd what would that be? An imperceptible 5% better average with a 2080 Ti @ 1080p low-medium settings?
Get some serious education on that matter.
Posted on Reply
#18
Tomorrow
cucker tarlsonGet some serious education on that matter.
So what exactly are the workloads that 14nm excels at and that 7nm can't touch?
I'm very curious.
Posted on Reply
#19
cucker tarlson
TomorrowSo what exactly are the workloads that 14nm excels at and that 7nm can't touch?
I'm very curious.
Please post seriously, because you sound like a troll.
7nm is a more efficient node for sure, but 5% at 1080p low is just not correct.
Try 15% at 1080p ultra, stock:
www.computerbase.de/2020-05/intel-core-i9-10900k-i5-10600k-test/3/#abschnitt_benchmarks_in_spielen_full_hd_und_uhd


and what is a "7nm workload" for that matter ? or a "14nm workload" ?
Posted on Reply
#20
Tomorrow
Hey, I'm not saying there aren't people who want every last FPS, even with a totally unreasonable combination of a 2080 Ti and 1080p. But going from a 3600, which is arguably the best bang-for-buck gaming CPU (god, I hate that term), to a 10900K means nearly triple the price for a ~20% gain using a $1,200 GPU. And that's assuming you can even tell 100 fps gameplay from 120 fps with the Intel CPU, if we assume 20%.

That's the very definition of a niche, and of overpaying for those few extra frames.
Intel does have some other small niches like AVX-512 or QuickSync, but those are even less useful for most people.

For 2080 Ti buyers? Yeah, sure, it makes sense to pair it with a 10900K or even a 10600K, but for everyone else it's just a bad return on investment.
Posted on Reply
#21
cucker tarlson
TomorrowHey, I'm not saying there aren't people who want every last FPS, even with a totally unreasonable combination of a 2080 Ti and 1080p. But going from a 3600, which is arguably the best bang-for-buck gaming CPU (god, I hate that term), to a 10900K means nearly triple the price for a ~20% gain using a $1,200 GPU. And that's assuming you can even tell 100 fps gameplay from 120 fps with the Intel CPU, if we assume 20%.

That's the very definition of a niche, and of overpaying for those few extra frames.
Intel does have some other small niches like AVX-512 or QuickSync, but those are even less useful for most people.

For 2080 Ti buyers? Yeah, sure, it makes sense to pair it with a 10900K or even a 10600K, but for everyone else it's just a bad return on investment.
And a 10400F is a better buy than either the 3600 or the 3700X,

while the 3300X is better value than both the 3600 and the 3700X.
If you're gonna make a point, at least make it consistent.

That's all I'm gonna say.
Posted on Reply
#22
efikkan
cucker tarlsonHow about one socket for the big cores and another one for the small cores?
So kind of like in the old days with a co-processor? :D

I do wonder what those extra 500 pins will bring, more IO?
What I do dislike with the current lineups of both companies is the fuzzy transition between upper mainstream and HEDT. While many of you might disagree with me, I would have preferred if the i9 and Ryzen 9 CPUs were on their respective HEDT platforms instead of increasing the VRM requirements and costs of the mainstream platforms. I think they pushed it too far (note that I don't think they should have retained just 4 cores).
Posted on Reply
#23
cucker tarlson
efikkanSo kind of like in the old days with a co-processor? :D

I do wonder what those extra 500 pins will bring, more IO?
What I do dislike with the current lineups of both companies is the fuzzy transition between upper mainstream and HEDT. While many of you might disagree with me, I would have preferred if the i9 and Ryzen 9 CPUs were on their respective HEDT platforms instead of increasing the VRM requirements and costs of the mainstream platforms. I think they pushed it too far (note that I don't think they should have retained just 4 cores).
AMD is handling it better with B550 and X570.
Posted on Reply
#24
InVasMani
What I think Intel should do is connect the two chips with traces through the substrate itself and call it "hyper tunneling." Basically, convert hyper-threading into actual physical cores on another package, with a chip that matches the base-clock performance and activates when turbo-boost performance heat-throttles. Going further, because voltages naturally rise and fall in peaks and dips, they could make each physical core have three threads, then sync them so that a physical core sits on each dip (representing base-clock performance) with the peak representing turbo-boost performance. That way, when the turbo-boost performance throttles, the two physical cores on the rising and falling signals take over, allowing the turbo performance to cool down and kick back in sooner. Squeeze more turbo cores onto a single package and supplement that performance with more base-clock cores from another package, in the form of hyper-threading, with the turbo performance sandwiched in between.

The cool thing is that the two CPU packages could ping-pong the power throttling between inactivity and activity, so that when one package is engaged the other can disengage to reduce heat and energy. If they can do that and sync it well, it could be quite effective, much like the fan profiles on GPUs, which (at least when set up and working right) are quite nice: from the 0 dB fan profiles, to when they trigger higher fan RPMs, to how long they run to cool things down before winding the RPMs back down after they've lowered the GPU temps.
Posted on Reply
#25
MxPhenom 216
ASIC Engineer
InVasManiWhat I think Intel should do is connect the two chips with traces through the substrate itself and call it "hyper tunneling." Basically, convert hyper-threading into actual physical cores on another package, with a chip that matches the base-clock performance and activates when turbo-boost performance heat-throttles. Going further, because voltages naturally rise and fall in peaks and dips, they could make each physical core have three threads, then sync them so that a physical core sits on each dip (representing base-clock performance) with the peak representing turbo-boost performance. That way, when the turbo-boost performance throttles, the two physical cores on the rising and falling signals take over, allowing the turbo performance to cool down and kick back in sooner. Squeeze more turbo cores onto a single package and supplement that performance with more base-clock cores from another package, in the form of hyper-threading, with the turbo performance sandwiched in between.

The cool thing is that the two CPU packages could ping-pong the power throttling between inactivity and activity, so that when one package is engaged the other can disengage to reduce heat and energy. If they can do that and sync it well, it could be quite effective, much like the fan profiles on GPUs, which (at least when set up and working right) are quite nice: from the 0 dB fan profiles, to when they trigger higher fan RPMs, to how long they run to cool things down before winding the RPMs back down after they've lowered the GPU temps.
I'm not so sure that'll work as well as you think it might. Plus, it'll get expensive from a price-per-package standpoint.

I think clock skew between the two would be hell and a half to compensate for and manage.
Posted on Reply