Tuesday, October 8th 2019

Intel Readies "The Element" - a Next-Generation of Modular PCs

Yesterday Intel hosted an event in London, where it presented and demonstrated a new concept product. Simply called "The Element", this product aims to bring modular computing, where parts can be swapped out and replaced with ease, to PC users who have wanted something like this for a long time.

If anyone remembers Razer's Project Christine, which unfortunately never took off, this product should come as no surprise. The Element is a complete PC consisting of a CPU, RAM, and storage, attached to a PCIe slot. Featuring plenty of I/O options like Thunderbolt, HDMI, Ethernet, USB, and Wi-Fi, The Element is a complete computing solution. For the demo, Intel used a soldered BGA Xeon CPU with two SODIMM slots for memory and two M.2 ports for storage expansion, all cooled by a blower fan blowing directly over the CPU heatsink. Power is supplied from the PCIe slot (75 W) and an 8-pin connector fed by a regular PSU. There is also an option to power the card from a 19 V source if an external power brick is provided.
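For a rough sense of the power budget those connectors imply, here is a quick back-of-the-envelope tally (the 150 W figure for an 8-pin PCIe connector is the standard spec value, not a confirmed figure for The Element):

```python
# Rough power budget for the compute card.
# Values are standard PCIe spec limits, not confirmed figures for The Element.
slot_power = 75        # W, PCIe x16 slot limit (stated in the article)
eight_pin_power = 150  # W, standard PCIe 8-pin connector limit (spec assumption)

print(f"Maximum board power from slot + 8-pin: {slot_power + eight_pin_power} W")  # 225 W
```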
As you can see from the images above, this concept essentially puts all of the computing power on a card that attaches to a single baseboard through a PCIe slot. Such an approach would allow for some interesting system designs, from small form factor PCs all the way to giant builds featuring a dozen or so of these units to form a server-like machine. The possibilities for configuring a system based on "The Element" are almost endless.

While this comes as a PCIe card, AIB partners would not be releasing their own PCB designs. Instead, they would only be allowed to make small aesthetic adjustments, such as a different cooling type (watercooling, anyone?) and different backplate designs.
While still in development, this concept is planned for release sometime in Q1 2020, with availability most likely at the end of Q1 or the beginning of Q2. Pricing is still unknown, and configurations will be OEM dependent, so those details remain a mystery until further notice.
Source: AnandTech

41 Comments on Intel Readies "The Element" - a Next-Generation of Modular PCs

#26
notb
evernessinceAMD and Nvidia couldn't eliminate micro-stutter over the PCIe bus, I can only imagine what putting the entire system over it would do. To me this seems far more interesting for server / cloud than it does for consumers. Otherwise there are far too many potential drawbacks over the traditional PC platform to make consumers want to switch.
PC components communicate over PCIe today. This doesn't change much technologically. It's mostly about form factor, standardization and cooling.
I don't understand why there's so much resistance in the comments. :-D
You have to wonder what the max TDP of the CPU will be as well. You can't exactly hang a CPU cooler off a PCIe slot card; if a blower is all you have for that entire unit, you are definitely looking at lower-wattage parts.
The prototype compute unit had an 8-pin connector and - in case you haven't noticed - a decent blower cooler is perfectly capable of taking care of a 250 W GPU.

It's just a question of noise.
IMO the blower cooler in the Titan Xp would be fine for people used to the Intel stock CPU fan, but that's about it.
Posted on Reply
#27
FordGT90Concept
"I go fast!1!11!1!"
notbPC components communicate over PCIe today. This doesn't change much technologically. It's mostly about form factor, standardization and cooling.
I don't understand why there's so much resistance in the comments. :-D

The prototype compute unit had an 8-pin connector and - in case you haven't noticed - a decent blower cooler is perfectly capable of taking care of a 250 W GPU.

It's just a question of noise.
IMO the blower cooler in the Titan Xp would be fine for people used to the Intel stock CPU fan, but that's about it.
Most CPUs have far more than 16 lanes exposed to the motherboard. Because this design hangs off a PCIe slot, it's limited to x16 lanes for everything else. A graphics card will consume at least 8 of them by itself. An NVMe SSD will consume another 4. A SATA controller will use another 1. That leaves a total of 3 lanes to take care of everything else.
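To make that arithmetic concrete, here is a rough lane tally (the x8 / x4 / x1 splits are this post's assumptions, not confirmed specs for The Element):

```python
# Rough PCIe lane budget for a single x16 connection.
# The per-device splits below are the post's assumptions, not confirmed Element specs.
total_lanes = 16
allocations = {
    "GPU (x8)": 8,
    "NVMe SSD (x4)": 4,
    "SATA controller (x1)": 1,
}

remaining = total_lanes - sum(allocations.values())
for device, lanes in allocations.items():
    print(f"{device}: {lanes} lanes")
print(f"Remaining for everything else: {remaining} lanes")  # 3
```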

That blower looks...pathetic.

Let's also not forget that this kind of setup drastically increases real estate cost in terms of PCB. You not only still need a motherboard to connect the PCI Express to power, but also to connect all of the components to each other. On a traditional board, things like M.2 and SATA sit right on the motherboard, so most use cases don't require an add-in board at all.

Oh, and installing two of these modules into one system is a big "hell no", because of the duplication of features and because horrendously slow PCI Express used for CPU-to-CPU communication would force the use of many NUMA domains to keep them from choking the PCI Express lanes. And in that case, forget installing any peripherals, because the two CPUs need all x16 lanes to communicate.

I just don't see how this ends well. It's sacrifice after sacrifice after sacrifice and where's the big benefit offsetting all those sacrifices?
Posted on Reply
#28
notb
FordGT90ConceptSounds like a mess on the desk and NUCs can already do all that.
Mess or not - that's how people with laptops work today.
But most things are wireless anyway.

Is it more mess than how many gaming desks look? Keyboard and mouse usually on cords. Transparent cases standing next to monitors...
Posted on Reply
#29
FordGT90Concept
"I go fast!1!11!1!"
Most of the components are in the case. For example, external GPUs are really, really rare. So are external drive controllers. Besides HIDs, the only things commonly outside of a computer anymore are DACs...because computers are electrically too noisy.
Posted on Reply
#30
silentbogo
notbThe goal being: to make it fast and very easy. It should be as simple as attaching an external USB drive, so that anyone could do it.
It's already a complete system on the card, once again, including RAM & SSD expansion and its own I/O. "Upgrading" the compute module is the equivalent of throwing away the entire PC except the PSU and chassis. Just because the press misunderstood the use of this thing doesn't mean that we have to be that stupid as well. It's a decent enterprise product with tons of real-world uses and practical benefits, but it's definitely not a consumer product for "making PC upgrades easier".
Posted on Reply
#31
FordGT90Concept
"I go fast!1!11!1!"
I'm not sure how it helps enterprise either. :(
Posted on Reply
#32
Aquinus
Resident Wat-man
FordGT90ConceptI'm not sure how it helps enterprise either. :(
Can I run VMs that only run on that one card? That eliminates the NUMA issue by isolating virtual machines to each node, and it's a real enterprise use case. If PCIe is only really being used for communication, like a network interface would be between distinct servers or VMs, then I don't see an issue. If you have a workload that can use some ridiculous number of cores, I wouldn't expect this to be a good solution, but for virtualization like in data centers, the real question is whether it costs less. I seriously doubt it.

The point really is that it sounds like each of these cards is a fully contained system with its own CPU, memory, and storage. I would expect it to behave as such, with the "PCIe" slot really just being for management, not for CPU-to-CPU communication.
Posted on Reply
#33
FordGT90Concept
"I go fast!1!11!1!"
But you're paying for a whole lot of stuff you don't need/want like:
4 x USB3 ports
1 x HDMI (obvious silicon wasted on IGP)
2 x Thunderbolt ports (USB-C connectors)
1 x L/R out/optical (has audio chip)
2 x RJ45 LAN connections (maybe some logic to this, but if it's just hosting VMs and only has like 250 W worth of performance, can it really saturate more than one NIC before the CPU gets overburdened?)

On top of that, a huge advantage of VMs is that you can increase and decrease hardware resources according to the demands of the clients (e.g. shift RAM/cores). Can't do that with this.
AquinusThe point really is that it sounds like each of theses cards are fully contained system with its own CPU, memory, and storage. I would expect it to behave as such with the "PCIe" slot just really being for management, not for CPU-to-CPU communication.
Indeed, they're like NUCs in PCIe form factor...but why? I still don't see the point. Who is the target market?
Posted on Reply
#34
Aquinus
Resident Wat-man
FordGT90ConceptOn top of that, a huge advantage of VMs is that you can increase and decrease hardware resources according to the demands of the clients (e.g. shift RAM/cores). Can't do that with this.
That's an advantage if you're working with a single VM, but that's not the general use case in data centers; it's more typical during development and for one-off VMs that can't scale horizontally. Most businesses that live in the cloud require horizontal scaling to match load, not vertical scaling. The reason is that changing the specs of VMs requires shutting them down and restarting them, whereas scaling the number of VMs can happen without service interruption, since it only requires spinning up and tearing down VMs.

Let me put it this way: if you're scaled up to 10 or 20 VMs, changing the characteristics of all those VMs is far more complicated than just spinning up another 10. Cloud resources cost money, so scaling quickly and effectively is very important; otherwise you're just wasting money.
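A minimal sketch of that scale-out logic, assuming a simple CPU-load trigger (the thresholds and names here are hypothetical, purely to illustrate scaling VM count instead of resizing VMs):

```python
# Minimal horizontal-autoscaling sketch.
# Thresholds and limits are hypothetical; the point is that we only change
# how many identical VMs run, never the specs of a running VM.
def desired_vm_count(current_vms: int, avg_cpu_load: float,
                     scale_up_at: float = 0.75, scale_down_at: float = 0.30,
                     min_vms: int = 2, max_vms: int = 20) -> int:
    """Return how many identical VMs should be running for the given load."""
    if avg_cpu_load > scale_up_at:
        target = current_vms + 1   # spin up one more VM, no interruption
    elif avg_cpu_load < scale_down_at:
        target = current_vms - 1   # tear one down, no resize or reboot needed
    else:
        target = current_vms
    return max(min_vms, min(max_vms, target))

print(desired_vm_count(current_vms=10, avg_cpu_load=0.85))  # -> 11
```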
Posted on Reply
#35
FordGT90Concept
"I go fast!1!11!1!"
Even if all you're doing is starting and stopping VMs, the same logic still applies: you're going to buy many-core Xeons or EPYCs, not these, because you're talking about VMs on isolated hardware, so you have duplication of everything, not just scaled VMs.

I finally came up with a reason for these things (sort of): redundancy. That's still a problem with I/O though because there isn't a means to seamlessly roll over between units.
Posted on Reply
#36
DeathtoGnomes
eidairaman1Close components produce heat
and reduce air flow.
Posted on Reply
#37
notb
silentbogoIt's already a complete system on the card, once again, including RAM & SSD expansion and its own I/O. "Upgrading" the compute module is the equivalent of throwing away the entire PC except the PSU and chassis. Just because the press misunderstood the use of this thing doesn't mean that we have to be that stupid as well. It's a decent enterprise product with tons of real-world uses and practical benefits, but it's definitely not a consumer product for "making PC upgrades easier".
The compute module is a standalone, fully functional system. Sure. I still don't understand what you mean.
You don't upgrade it. You buy a new one. 100% correct. Even RAM may turn out to be soldered.

And yes, it's a perfect consumer product. Just not for you.
DeathtoGnomesand reduce air flow.
Both correct and irrelevant. :)

PC users have the wrong idea about case airflow. Most think that more airflow means better cooling.
It doesn't matter how much air gets through your case. It only matters how much is pushed through and near the radiators.
As such, most of the airflow in a typical ATX case is totally wasted. Air exiting the case is often just a few K over room temperature.
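To put rough numbers on that, a back-of-the-envelope heat balance (the 30 CFM flow and the temperature deltas are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope: heat carried away by case airflow, Q = m_dot * c_p * dT.
# The 30 CFM flow and the temperature deltas are illustrative assumptions.
rho = 1.2        # air density, kg/m^3
c_p = 1005       # specific heat of air, J/(kg*K)
cfm = 30         # volumetric flow through the case
m_dot = rho * cfm * 0.000472   # mass flow in kg/s (1 CFM ~ 0.000472 m^3/s)

for dT in (3, 15):  # exhaust 3 K vs. 15 K above room temperature
    watts = m_dot * c_p * dT
    print(f"dT = {dT:2d} K -> about {watts:.0f} W of heat removed")
# ~51 W at 3 K vs. ~256 W at 15 K: warm exhaust means the airflow is actually doing work.
```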

It's very different in servers and laptops, where cooling potential is used as much as possible.
Air exiting most laptops is very hot - it can even get slightly unpleasant if vents are placed badly. But that also means it took A LOT of heat with it.

A cramped space with properly modeled airflow and properly placed radiators will be very efficient compared to what most PCs can achieve.
The only real issue here is noise. They'll have to provide something better than what Nvidia puts in Titan and pro cards.
Posted on Reply
#38
silentbogo
FordGT90ConceptBut you're paying for a whole lot of stuff you don't need/want like:
Same with any/every motherboard or PC: you have tons of unused I/O that costs some production money, like triple-display outputs, extra PCIe, serial headers, RGB, stickers, etc. etc. etc.
Extra I/O never hurts, plus there isn't that much of it: just dual LAN, some USB ports and HDMI (which is a big plus if you need to hook it up to a KVM console or switch).
notbSure. I still don't understand what you mean.
You don't upgrade it. You buy a new one. 100% correct. Even RAM may turn out to be soldered.
2x SODIMM slots, 2x NVMe slots. The only non-upgradeable thing is the CPU.
notbAnd yes, it's a perfect consumer product. Just not for you.
If you are thinking that the PCIe slot is some magic interconnect to a magic backbone, then it might be a very disappointing consumer product. I'm 66.6% sure they use the same approach as QNAP (e.g. Ethernet PHY backwards).
The only other way is wiring an actual PCIe x16, but then the backbone/motherboard basically becomes a glorified PCIe riser.
Posted on Reply
#40
DeathtoGnomes
notbPC users have the wrong idea about case airflow. Most think that more airflow means better cooling.
It doesn't matter how much air gets through your case. It only matters how much is pushed through and near the radiators.
As such, most of the airflow in a typical ATX case is totally wasted. Air exiting the case is often just a few K over room temperature.
please spare me this argument again.
Posted on Reply
#41
notb
silentbogo2x SODIMM slots, 2x NVMe slots. The only non-upgradeable thing is the CPU.
In this prototype.
If this idea gets any traction, it'll likely follow the laptop route. Some modules will offer this kind of upgrading and some won't.
If you are thinking that the PCIe slot is some magic interconnect to a magic backbone, then it might be a very disappointing consumer product. I'm 66.6% sure they use the same approach as QNAP (e.g. Ethernet PHY backwards).
The QNAP thing is a system on a card (coprocessor or not - as noted earlier).
This isn't what we're talking about.

"The Element" is just a NUC with a PCIe output.
The idea is not to buy 4 of these and build a cluster. It's about putting this next to other components connected via the PCIe base. It should be compatible with the stuff we have now (GPUs, PCIe drives etc).

And since you're getting rid of the normal ATX motherboard (which sits parallel to the PCIe slots and forces a lot of free space inside a case), the whole system becomes smaller.

A basic gaming configuration could consist of this module, a short GPU and a power supply.
This means that suddenly a "standard" PC is the size of a DAN A4.
The only other way is wiring an actual PCIe x16, but then the backbone/motherboard basically becomes a glorified PCIe riser.
Exactly. That's what it's supposed to be. I've already used the word "riser".
Posted on Reply