Monday, July 1st 2024

Intel Core Ultra "Arrow Lake" Desktop Platform Map Leaked: Two CPU-attached M.2 Slots

Intel's upcoming Core Ultra "Arrow Lake-S" desktop processor introduces a new socket, LGA1851, alongside the new Intel 800-series desktop chipset. We now have some idea of what the 151 additional pins on the new socket are used for, thanks to a platform map leaked on the ChipHell forums and discovered by HXL. Intel is expanding the PCIe lane count from the processor, which now puts out a total of 32 lanes.

Of the 32 PCIe lanes put out by the "Arrow Lake-S" processor's system agent, 16 are meant for the PCI-Express 5.0 x16 PEG slot used for discrete graphics. Eight serve as the chipset bus, technically DMI 4.0 x8 (eight lanes operating at Gen 4 speed, for 128 Gbps of bandwidth per direction). There are now not one but two possible CPU-attached M.2 NVMe slots, just like on the AMD "Raphael" and "Granite Ridge" processors. What's interesting, though, is that not both of them are Gen 5: one is Gen 5 x4, while the other is Gen 4 x4.
The system agent has two kinds of PCIe root complexes, just as it did on Socket LGA1700 processors. The new "Arrow Lake-S" has 20 lanes of Gen 5 and 12 lanes of Gen 4, which is how it's able to put out a Gen 5 x16 PEG, a Gen 5 x4 M.2, a DMI 4.0 x8 chipset bus, and an additional CPU-attached Gen 4 x4 M.2. In comparison, "Alder Lake-S" and "Raptor Lake-S" feature 16 lanes of Gen 5 and 12 lanes of Gen 4, leaving the platform with no CPU-attached Gen 5 M.2 slots unless lanes are subtracted from the Gen 5 x16 PEG slot, which is what motherboard designers have done. There will be no such problem with "Arrow Lake-S."
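The lane budget described above can be sanity-checked with a short script. This is our own sketch, not anything from the leak itself; the per-lane rates are the standard PCIe 4.0/5.0 raw figures (16 and 32 GT/s), and with 128b/130b encoding one GT/s works out to roughly one Gbps of usable bandwidth per lane, which is how the "128 Gbps per direction" DMI figure falls out.

```python
# Rough lane-budget and per-direction bandwidth check for the leaked
# "Arrow Lake-S" platform map. Per-lane raw rates in GT/s.
GEN4_GTPS = 16  # PCIe 4.0
GEN5_GTPS = 32  # PCIe 5.0

# (per-lane rate, lane count) for each CPU-attached link in the leak
links = {
    "PEG x16 (Gen 5)":    (GEN5_GTPS, 16),
    "M.2 x4 (Gen 5)":     (GEN5_GTPS, 4),
    "DMI 4.0 x8 (Gen 4)": (GEN4_GTPS, 8),
    "M.2 x4 (Gen 4)":     (GEN4_GTPS, 4),
}

# The total must match the article's 32-lane figure
total_lanes = sum(count for _, count in links.values())
assert total_lanes == 32

for name, (rate, count) in links.items():
    # ~Gbps per direction, ignoring the ~1.5% 128b/130b encoding overhead
    print(f"{name}: ~{rate * count} Gbps per direction")
```

Running it confirms the DMI link at 16 × 8 = 128 Gbps per direction, with the Gen 5 x16 PEG slot topping out at a raw ~512 Gbps per direction.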

The two CPU-attached x4 links can be wired out as M.2 slots, but it's also possible for motherboard designers to use the Gen 4 x4 link for certain high-bandwidth devices, such as discrete Thunderbolt 4 or USB4 controllers.

Besides these, the processor has four DDI links for the platform's two Thunderbolt 4 ports (if implemented by the motherboard designer). Intel has updated the display I/O of the platform with HDMI 2.1 and DisplayPort 2.1, although it will probably be up to motherboard designers whether even their cheapest models get the latest connectors. There's also an eDP 1.4b connection, which should benefit all-in-one (AIO) desktops.

The memory I/O now completely does away with DDR4 support. The platform supports only DDR5, over two channels (four sub-channels), with support for up to two DIMMs per channel. Intel is expected to increase both native and overclocked memory speeds for this platform, and we might even see DDR5 memory kits with XMP 3.0 profiles for 10000 MT/s or more. Motherboard vendors can implement standard UDIMMs, compact SO-DIMMs, or even the new CAMM2.
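For a sense of what a 10000 MT/s kit implies, peak theoretical bandwidth follows from the standard transfer rate × bus width calculation. The helper below is our own illustration (the function name and defaults are not from the leak); it assumes the conventional 128-bit total width of a two-channel, four-sub-channel DDR5 setup.

```python
# Peak theoretical DDR5 bandwidth: MT/s x bus width (bits) / 8 -> MB/s, then GB/s.
# A DDR5 DIMM presents two 32-bit sub-channels, so a dual-channel (two-DIMM)
# configuration is 4 x 32 = 128 bits wide, same total width as dual-channel DDR4.
def ddr5_peak_gbs(mts: int, bus_bits: int = 128) -> float:
    return mts * bus_bits / 8 / 1000  # GB/s

print(ddr5_peak_gbs(10000))  # 10000 MT/s -> 160.0 GB/s peak
```

By the same formula, a baseline DDR5-5600 kit would peak at 89.6 GB/s, so a 10000 MT/s XMP profile is a sizeable headroom jump.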

The 800-series chipset in this spy pic, which we assume is the top Intel Z890, puts out additional Gen 4 PCIe lanes; there are no more Gen 3-only lanes from the PCH. The chipset also puts out an assortment of USB 3.2 (20 Gbps), USB 3.2 (10 Gbps), and USB 3.2 (5 Gbps) ports, although there's no mention here of 40 Gbps USB4 ports from the platform. Then there's the usual storage and networking I/O, including a few SATA 6 Gbps ports, integrated MACs for 2.5 GbE or 1 GbE wired Ethernet, and CNVi slots for Wi-Fi 6E or Wi-Fi 7.

When they debut in Q4 2024, Intel's Core Ultra "Arrow Lake-S" desktop processors will be accompanied only by the Z890 chipset, since the processor models launching first are expected to be K or KF (unlocked) SKUs. The series will be expanded in early 2025 with locked (non-K) processor models and other chipsets, such as the B860, H870, and H810.
Sources: HXL (Twitter), ChipHell Forums

36 Comments on Intel Core Ultra "Arrow Lake" Desktop Platform Map Leaked: Two CPU-attached M.2 Slots

#1
john_
With modern motherboards looking more like microATX boards in an ATX form, do we really need PCIe lanes for anything else than M.2 SSDs?
I mean, this is my old AM3+ board

7 expansion slots, the two main PCIe slots having the option to play x8 + x8 for SLi/CrossFire for a total of 42 PCIe 2.0 lanes and here is a modern example of an ATX board

More or less, empty PCB with extra room for M.2 SSDs.
Posted on Reply
#2
TheDeeGee
john_With modern motherboards looking more like microATX boards in an ATX form, do we really need PCIe lanes for anything else than M.2 SSDs?
I mean, this is my old AM3+ board

7 expansion slots, the two main PCIe slots having the option to play x8 + x8 for SLi/CrossFire for a total of 42 PCIe 2.0 lanes and here is a modern example of an ATX board

More or less, empty PCB with extra room for M.2 SSDs.
Nope, all we need is the bottom example.

One top PCI-E 16x slot for the GPU and one full sized PCI-E 1x slot at the very bottom for an expansion card.

Three m.2 slots should be a minimum these days.
Posted on Reply
#3
john_
TheDeeGeeNope, all we need is the bottom example.

One top PCI-E 16x slot for the GPU and one full sized PCI-E 1x slot at the very bottom for an expansion card.
Coming from an era where more expansion slots were the main reason to go with a full ATX board, I do agree with that "nope". Because in this system with the R5 4600G, I only connected NVMe SSDs and nothing else (excluding SATA storage).
Posted on Reply
#4
kondamin
Would have preferred an extra memory channel probably for the next generation
Posted on Reply
#5
Chaitanya
john_With modern motherboards looking more like microATX boards in an ATX form, do we really need PCIe lanes for anything else than M.2 SSDs?
I mean, this is my old AM3+ board

7 expansion slots, the two main PCIe slots having the option to play x8 + x8 for SLi/CrossFire for a total of 42 PCIe 2.0 lanes and here is a modern example of an ATX board

More or less, empty PCB with extra room for M.2 SSDs.
That trend unfortunately has infected high-end/WS boards as well, where more PCIe connectivity is often a requirement. In general I would much rather see more PCIe slots than M.2 slots (it's easy to add M.2 SSDs via add-in cards), but adding faster NICs without PCIe slots is impossible (10 Gbps NICs on an M.2 slot suffer from some instability thanks to tiny heatsinks).
Posted on Reply
#6
john_
kondaminWould have preferred an extra memory channel probably for the next generation
In my case I would love to see a return of Sideport Memory for higher performance from APUs, by integrating for example one or two memory chips on the motherboard and letting the integrated GPU use that memory in parallel with the system's memory, meaning higher bandwidth.
Posted on Reply
#7
pressing on
A change from Intel's previous Alder Lake/Raptor Lake boards is the addition of a DeSPI connection between the chipset and the CPU. There are also eSPI and SPI connections to the chipset. I know that eSPI stands for Enhanced Serial Peripheral Interface and is some form of bus developed by Intel. But what exactly it does seems to be unclear at this stage.
Posted on Reply
#8
kondamin
john_In my case I would love to see a return of Sideport Memory for higher performance from APUs, by integrating for example one or two memory chips on the motherboard and letting the integrated GPU use that memory in parallel with the system's memory, meaning higher bandwidth.
I think I'd prefer the 50% extra memory bandwidth that can be used by the entire system.
I don't know if it's economically all that interesting to put some memory on a board that might or might not use it, just making the pcb upon which the soc sits a bit bigger and adding it there would make more sense
Posted on Reply
#9
ncrs
pressing onA change from Intel's previous Alder Lake/Raptor Lake boards is the addition of a DeSPI connection between the chipset and the CPU. There are also eSPI and SPI connections to the chipset. I know that eSPI stands for Enhanced Serial Peripheral Interface and is some form of bus developed by Intel. But what exactly it does seems to be unclear at this stage.
SPI and its variants are usually used for communication with the BIOS firmware chip and external peripherals like a dedicated TPM or a server BMC. Edit: an example of such deployment for LGA1700 (via STH)
There were rumors that from Meteor Lake the Intel Management Engine would be moved from chipset to the CPU die, and since it requires firmware to function a SPI connection of some sort would be required to access it. However I haven't seen any official confirmation of this, so it's just my theory.
Posted on Reply
#10
Assimilator
john_With modern motherboards looking more like microATX boards in an ATX form, do we really need PCIe lanes for anything else than M.2 SSDs?
There is no good reason for PCIe slots and M.2 slots to be an "either/or". Manufacturers should offer motherboards with multiple PCIe slots and M.2 drive access should be provided by an add-in PCIe card bundled with said motherboard. That way the consumer can choose to use the PCIe lanes that they paid for in the way that they choose - for example if they want to install two GPUs instead of a bunch of M.2s.

The current trend of putting more and more M.2 slots on motherboards, consuming more and more PCIe lanes forcing those lanes to either be used by M.2s or essentially not exist, is anti-consumer and anti-choice and has only come about because manufacturers are looking for every possible way to cut down on every expense (it's not as if a PCIe add-in card for M.2 drives costs a billion bucks anyway, but MUH PROFIT MARGINS). Yet consumers have bought into this because the manufacturers claim it's "more convenient"... yeah, it's so more convenient to have less choice. Even the so-called workstation motherboards suffer from this same idiotic plague.

There is, of course, nothing stopping manufacturers including both ordinary PCIe slots and M.2 slots on their boards, and switching bandwidth between them as one or the other is used. Except of course the manufacturers don't do that because, again, "it's expensive" - while having consumers roll over and take it in the ass is free.
Posted on Reply
#11
TheLostSwede
News Editor
AssimilatorThere is no good reason for PCIe slots and M.2 slots to be an "either/or". Manufacturers should offer motherboards with multiple PCIe slots and M.2 drive access should be provided by an add-in PCIe card bundled with said motherboard. That way the consumer can choose to use the PCIe lanes that they paid for in the way that they choose - for example if they want to install two GPUs instead of a bunch of M.2s.
There isn't enough PCB space for both though. On top of that, most people are already complaining when an M.2 slot is underneath their GPU. Obviously some boards already share PCIe lanes, but it's not that common. You could always contact the motherboard makers with your suggestions, but I think the response will be cricket noises.
AssimilatorThe current trend of putting more and more M.2 slots on motherboards, consuming more and more PCIe lanes forcing those lanes to either be used by M.2s or essentially not exist, is anti-consumer and anti-choice and has only come about because manufacturers are looking for every possible way to cut down on every expense (it's not as if a PCIe add-in card for M.2 drives costs a billion bucks anyway, but MUH PROFIT MARGINS). Yet consumers have bought into this because the manufacturers claim it's "more convenient"... yeah, it's so more convenient to have less choice. Even the so-called workstation motherboards suffer from this same idiotic plague.
C'mon, stop using political rhetoric just because you're unhappy about current motherboard layouts, it's over the top.
AssimilatorThere is, of course, nothing stopping manufacturers including both ordinary PCIe slots and M.2 slots on their boards, and switching bandwidth between them as one or the other is used. Except of course the manufacturers don't do that because, again, "it's expensive" - while having consumers roll over and take it in the ass is free.
It adds cost, but I guess you don't mind paying another $20 for your motherboard for this, but I'm sure a lot of other people won't be as willing. I'd say 90% of consumers only ever add a graphics card to their motherboard. We're the minority here at TPU.
Posted on Reply
#12
DutchTraveller
I also prefer having PCIe slots instead of M.2 connectors. The former take up less space, and an M.2 on an add-in card can have much better cooling.
Just one or two M.2 slots is enough for most users (with SSDs you can have many threads accessing the same disk without problems, unlike HDDs).

MBs these days only cater to gamers and overclockers, which also adds costs. There are lots of users that don't need these 'features'.
Compare this with an X13SAE MB and you will see that the power-supply section can be much smaller.
By the way: this is the only MB I found that comes close to what I would prefer.

As written, this virus now also affects WS boards, and these are the boards where you need extra slots the most.
They are often used for a long time and you want to be able to upgrade in the future. E.g. to 10Gbit or 25Gbit.
Having the extra slots means you should be able to use the MB for a much longer time which lowers your TCO.

I would say there is a market for MBs with more slots, but manufacturers do not produce them because it's not to their advantage to bring out MBs that can be used for a long time...
Planned obsolescence.

The reason the additional PCIe lanes from the processor are only Gen 4 might be that it's difficult to have two M.2 slots close to the processor.
The graphics card also needs to be close, or you need redrivers/retimers, which is expensive.
Using Gen 4 you can place that second M.2 slot a bit further away without incurring extra costs.
Posted on Reply
#13
dgianstefani
TPU Proofreader
TheLostSwedeIt adds cost, but I guess you don't mind paying another $20 for your motherboard for this, but I'm sure a lot of other people won't be as willing. I'd say 90% of consumers only ever add a graphics card to their motherboard. We're the minority here at TPU.
Yep, no need to add that cost because there's no real need to use AIC these days. About the most useful thing you can do with extra PCIE is perhaps add in a RAID card with 4/8 M.2 slots, but if you need that you probably need a workstation platform with more memory channels anyway.

Mobo sound is good, wireless headphones like the Audeze Maxwell are even better, and you can buy motherboards with 5/10 Gbit ethernet for reasonable prices.

Get rid of SATA and replace it with U.2 or more M.2 slots.

Better yet, dump ATX for consumers and replace it entirely with DTX/ITX/mATX and the new backside connector layout. Workstations, enterprise, server, sure, maybe, but they also have their own layouts. What exactly does a consumer need the size of ATX for? SLI? 50 different fan/RGB connectors? Absurd numbers of powerstages from the next Godlike EATX - hilarious that they actually perform worse than a good ITX board 1/3 the price, since 2DPC.

Lots of traditional form factors and connectors that have long been superseded with superior formats but stick around just because it's the way it's always been done.
Posted on Reply
#15
_JP_
john_In my case I would love to see a return of Sideport Memory for higher performance from APUs, by integrating for example one or two memory chips on the motherboard and letting the integrated GPU use that memory in parallel with the system's memory, meaning higher bandwidth.
I was actually going to speculate that one of the M.2 ports was going to be reserved for an Optane Rebirth/3.0 ordeal, to work with Battlemage for DX12's GPU direct-storage access.
With a new Arc product on the horizon, I really feel like Intel never got over the fact that Optane was never really a thing in the consumer market and attaching it to another product that did gain some traction is a way to "make it stick by necessity".
stimpy88AMD - Take note.
They did, as per the news article:
There are now not one, but two CPU-attached M.2 NVMe slots possible, just like on the AMD "Raphael" and "Granite Ridge" processors.
Posted on Reply
#16
DutchTraveller
dgianstefaniYep, no need to add that cost because there's no real need to use AIC these days. About the most useful thing you can do with extra PCIE is perhaps add in a RAID card with 4/8 M.2 slots, but if you need that you probably need a workstation platform with more memory channels anyway.

Mobo sound is good, wireless headphones like the Audeze Maxwell are even better, and you can buy motherboards with 5/10 Gbit ethernet for reasonable prices.

Get rid of SATA and replace it with U.2 or more M.2 slots.
Not everybody has the same use-case...
I prefer to have lots of options for future expandability and to have a future-proof MB that can be used for a long time.
Most non-gamers don't need to upgrade so often because they don't need more processing power.
Having enough memory (I use 128 GB in most of my systems) and slots makes it possible to use a MB/processor much longer, which means lower costs (TCO).

HDDs for storage in workstations and servers still use SATA, and some people also have an existing collection of SATA SSDs (I do).
Posted on Reply
#17
dgianstefani
TPU Proofreader
DutchTravellerNot everybody has the same use-case...
I prefer to have lots of options for future expandability and to have a future-proof MB that can be used for a long time.
Most non-gamers don't need to upgrade so often because they don't need more processing power.
Having enough memory (I use 128 GB in most of my systems) and slots makes it possible to use a MB/processor much longer, which means lower costs (TCO).

HDDs for storage in workstations and servers still use SATA, and some people also have an existing collection of SATA SSDs (I do).
This is true, but the vast majority of people with PCs do not even run a discrete GPU, let alone use more than one PCIE slot.

For the people who want more, they can pay for specialized boards. The rest would prefer to pay less and go without features that go unused in 95%+ of builds, or to have that budget go into higher quality other components instead.
Posted on Reply
#18
john_
kondaminI think I'd prefer the 50% extra memory bandwidth that can be used by the entire system.
I don't know if it's economically all that interesting to put some memory on a board that might or might not use it, just making the pcb upon which the soc sits a bit bigger and adding it there would make more sense
The memory on board wasn't mandatory with Sideport memory. Some boards with integrated graphics had a chip, others didn't. So, motherboard makers can decide if they want to add that memory chip and make their boards desirable options for those looking to use them with an APU.
Going from dual channel to three channel memory, could end up bad for us consumers, because we could end up seeing that third channel only on high end and highly expensive boards.
Posted on Reply
#19
DutchTraveller
dgianstefaniFor the people who want more, they can pay for specialized boards. The rest would prefer to pay less and go without features that go unused in 95%+ of builds, or to have that budget go into higher quality other components instead.
I would if I could. Those MBs don't exist!! The X13SAE comes close, but I don't see a viable option for AMD.
The only MBs with enough slots are for server processors, which are total overkill for my use-case (my server runs on an i3-8350K and is more than fast enough).
Posted on Reply
#20
dgianstefani
TPU Proofreader
DutchTravellerI would if I could. Those MBs don't exist!! The X13SAE comes close, but I don't see a viable option for AMD.
The only MBs with enough slots are for server processors, which are total overkill for my use-case (my server runs on an i3-8350K and is more than fast enough).
Well it would be better than this middle ground compromise we have now, that we can agree on.
Posted on Reply
#21
evernessince
TheLostSwedeThere isn't enough PCB space for both though.
I assume that's why he said m.2 via add-in card.

I wouldn't be against putting m.2 on the back of the motherboard (this would require one of the zero-cable standards to take off though, as it would require more space behind the board for cooling) or potentially a daughter board either.

There's definitely room for innovation in regards to m.2 drive placement and PCIe lane flexibility. PCIe slots are more flexible but bifurcation can be a pain on some motherboards and board vendors often don't include an add-in card. Signal integrity can also be an issue so redrivers need to be up to snuff (which is a requirement for PCIe 4.0 and 5.0 boards anyways) to ensure the signal won't degrade over the small extra distance m.2 cards add.

It's also a huge PITA cooling any M.2 drive beyond the first, as they are often located under or around the GPU. Larger GPUs like the 4090 cover 3-4 slots. Not having a cable standard to supersede SATA has created a lot of issues; a cable connector takes up a lot less space than an M.2. U.2 can't yet replace SATA either, as there are no consumer drives that use it, and anything above PCIe 3.0 runs into data errors without a special and expensive redriver.
Posted on Reply
#22
persondb
DutchTravellerNot everybody has the same use-case...
I prefer to have lots of options for future expandability and to have a future-proof MB that can be used for a long time.
Most non-gamers don't need to upgrade so often because they don't need more processing power.
Having enough memory (I use 128 GB in most of my systems) and slots makes it possible to use a MB/processor much longer, which means lower costs (TCO).

HDDs for storage in workstations and servers still use SATA, and some people also have an existing collection of SATA SSDs (I do).
Pretty much, which is why options for your use case are pretty rare.
Most non-gamers won't have your needs; people aren't racing to buy a 10 GbE or 25 GbE NIC, or to add in a SATA controller or whatever other PCIe card, because they'd have little to no use for it. I would think that just plugging in an extra SSD for extra storage covers the vast majority of cases.
Gamers can add in a capture card, so for those use cases at least two slots are needed.
Also, most people don't think about total cost of ownership; that is usually a business view of it.

I think the issue really is that your needs are niche and well, you aren't going to find much in support due to that.
Posted on Reply
#24
Random_User
32 lanes still isn't much, and even that is a stretch: it's really x20 PCIe 5.0 plus x12 PCIe 4.0.

Sadly though, that doesn't translate to more older-gen lanes from the PCIe 5.0 ones. I mean bifurcation from e.g. x8 PCIe 5.0 to x16 PCIe 4.0.
There are no more Gen 3-only lanes from the PCH
Does it mean the PCH will still be backward compatible with older PCIE generations?
john_With modern motherboards looking more like microATX boards in an ATX form, do we really need PCIe lanes for anything else than M.2 SSDs?
I mean, this is my old AM3+ board

7 expansion slots, the two main PCIe slots having the option to play x8 + x8 for SLi/CrossFire for a total of 42 PCIe 2.0 lanes and here is a modern example of an ATX board

More or less, empty PCB with extra room for M.2 SSDs.
I would take the first variant, without doubt. Streamlined design, no BS, no RGB, plenty of PCIe slots (I would prefer all of them being x16, even if they wouldn't be fully accessible due to lane limits). I would prefer a PCIe card for a couple of bucks to mount the M.2 vertically, rather than put it down on the motherboard with no airflow whatsoever. I don't know who in their right mind would put a PCIe 4.0, let alone PCIe 5.0, furnace of an SSD under the same furnace-hot VGA. I don't even mention accessibility: it takes only one screw to undo in the case of an add-on card, while the onboard slots require disassembling half of the motherboard just to gain access to a single SSD.
But actually having both is more reasonable, for different case scenarios. The middle ground: a single M.2 slot or a couple of them, and the rest of the slots being usual PCIe. For the price these MBs are asking, they could easily include a couple of add-on cards for M.2, which cost a couple of bucks.
AssimilatorThere is no good reason for PCIe slots and M.2 slots to be an "either/or". Manufacturers should offer motherboards with multiple PCIe slots and M.2 drive access should be provided by an add-in PCIe card bundled with said motherboard. That way the consumer can choose to use the PCIe lanes that they paid for in the way that they choose - for example if they want to install two GPUs instead of a bunch of M.2s.

The current trend of putting more and more M.2 slots on motherboards, consuming more and more PCIe lanes forcing those lanes to either be used by M.2s or essentially not exist, is anti-consumer and anti-choice and has only come about because manufacturers are looking for every possible way to cut down on every expense (it's not as if a PCIe add-in card for M.2 drives costs a billion bucks anyway, but MUH PROFIT MARGINS). Yet consumers have bought into this because the manufacturers claim it's "more convenient"... yeah, it's so more convenient to have less choice. Even the so-called workstation motherboards suffer from this same idiotic plague.

There is, of course, nothing stopping manufacturers including both ordinary PCIe slots and M.2 slots on their boards, and switching bandwidth between them as one or the other is used. Except of course the manufacturers don't do that because, again, "it's expensive" - while having consumers roll over and take it in the ass is free.
Indeed. The usual PCIe slot, even at x8, would suit the storage much better. No need for the aluminium slabs covering the entire MB surface. Just put the hot SSD on an AIC, and the intake fans would do their job. But then it would require the MB makers to actually invest in better design... However, the empty space under the second and third PCIe slots makes sense, since that is a dead zone that can't really be used for anything, including a delicate SSD.
DutchTravellerI also prefer having PCIe slots instead of M.2 connectors. The former take up less space, and an M.2 on an add-in card can have much better cooling.
Just one or two M.2 slots is enough for most users (with SSDs you can have many threads accessing the same disk without problems, unlike HDDs).

MBs these days only cater to gamers and overclockers, which also adds costs. There are lots of users that don't need these 'features'.
Compare this with an X13SAE MB and you will see that the power-supply section can be much smaller.
By the way: this is the only MB I found that comes close to what I would prefer.

As written, this virus now also affects WS boards, and these are the boards where you need extra slots the most.
They are often used for a long time and you want to be able to upgrade in the future. E.g. to 10Gbit or 25Gbit.
Having the extra slots means you should be able to use the MB for a much longer time which lowers your TCO.

I would say there is a market for MBs with more slots, but manufacturers do not produce them because it's not to their advantage to bring out MBs that can be used for a long time...
Planned obsolescence.

The reason the additional PCIe lanes from the processor are only Gen 4 might be that it's difficult to have two M.2 slots close to the processor.
The graphics card also needs to be close, or you need redrivers/retimers, which is expensive.
Using Gen 4 you can place that second M.2 slot a bit further away without incurring extra costs.
Exactly! The more well-thought-out motherboards can be in service for much longer periods; I dare say a decade, easily (especially when a decent ATX board costs half a grand). And having a lot of PCIe slots extends that vastly. But this seems to work against the MB vendors' plans.
Posted on Reply
#25
_JP_
stimpy88"Intel is expected to increase both the native- and overclocked memory speeds for this platform, and we might even see DDR5 memory kits with XMP 3.0 profiles for 10000 MT/s or more"

While AMD languishes away on "possibly" supporting a whopping 6400... :D
I thought you were referring to the M.2 dedicated lanes, hence what I replied. Oops. :laugh:
Posted on Reply