
Intel Readies "The Element" - a Next Generation of Modular PCs

Joined
Jul 13, 2016
Messages
3,341 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage P5800X 1.6TB 4x 15.36TB Micron 9300 Pro 4x WD Black 8TB M.2
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) JDS Element IV, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse PMM P-305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
AMD and Nvidia couldn't eliminate micro-stutter over the PCIe bus; I can only imagine what putting the entire system over it would do. To me this seems far more interesting for server / cloud than it does for consumers. Otherwise there are far too many potential drawbacks compared to the traditional PC platform to make consumers want to switch. You have to wonder what the max TDP of the CPU will be as well. You can't exactly hang a CPU cooler off a PCIe slot card, and if a blower is all you have for that entire unit, you are definitely looking at lower-wattage parts.
 
Joined
Jun 28, 2016
Messages
3,595 (1.16/day)
AMD and Nvidia couldn't eliminate micro-stutter over the PCIe bus; I can only imagine what putting the entire system over it would do. To me this seems far more interesting for server / cloud than it does for consumers. Otherwise there are far too many potential drawbacks compared to the traditional PC platform to make consumers want to switch.
PC components communicate over PCIe today. This doesn't change much technologically. It's mostly about form factor, standardization and cooling.
I don't understand why there's so much resistance in the comments. :-D
You have to wonder what the max TDP of the CPU will be as well. You can't exactly hang a CPU cooler off a PCIe slot card, and if a blower is all you have for that entire unit, you are definitely looking at lower-wattage parts.
The prototype compute unit had an 8-pin connector and - in case you haven't noticed - a decent blower cooler is perfectly capable of taking care of a 250 W GPU.

It's just a question of noise.
IMO the blower cooler on the Titan Xp would be fine for people used to the Intel stock CPU fan, but that's about it.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.44/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
PC components communicate over PCIe today. This doesn't change much technologically. It's mostly about form factor, standardization and cooling.
I don't understand why there's so much resistance in the comments. :-D

The prototype compute unit had an 8-pin connector and - in case you haven't noticed - a decent blower cooler is perfectly capable of taking care of a 250 W GPU.

It's just a question of noise.
IMO the blower cooler on the Titan Xp would be fine for people used to the Intel stock CPU fan, but that's about it.
Most CPUs have far more than 16 lanes exposed to the motherboard. Because this uses a single PCIe slot, it's limited to x16 lanes for everything else. A graphics card will consume at least 8 of them by itself. An NVMe SSD will consume another 4. A SATA controller will use another 1. That gives you a total of 3 remaining to take care of everything else.
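Just to put that lane budget in one place (a rough back-of-the-envelope sketch; the per-device lane counts are the assumptions from this post, not anything Intel has confirmed for the Element):

```python
# Back-of-the-envelope PCIe lane budget for a compute module exposing a single x16 link.
# The per-device lane counts are the assumptions from the post above, not official specs.
TOTAL_LANES = 16

devices = {
    "graphics card (x8 minimum)": 8,
    "NVMe SSD (x4)": 4,
    "SATA controller (x1)": 1,
}

used = sum(devices.values())
print(f"lanes used: {used}")                                    # 13
print(f"lanes left for everything else: {TOTAL_LANES - used}")  # 3
```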

That blower looks...pathetic.

Let's also not forget that this kind of setup drastically increases real-estate cost in terms of PCB. You still need a motherboard not only to connect the PCI Express slots to power but also to connect all of the components to each other. Things like M.2 and SATA would be right on the motherboard, so most use cases don't require an AIB.

Oh, and installing two of these modules into one system is a big "hell no," because the duplication of features and the horrendously slow PCI Express link used for CPU-to-CPU communication would force the use of multiple NUMA domains just to keep them from choking the PCIe lanes. Oh, and in that case, forget installing any peripherals, because the two CPUs need all x16 lanes to communicate.

I just don't see how this ends well. It's sacrifice after sacrifice after sacrifice and where's the big benefit offsetting all those sacrifices?
 
Last edited:
Joined
Jun 28, 2016
Messages
3,595 (1.16/day)
Sounds like a mess on the desk and NUCs can already do all that.
Mess or not - that's how people with laptops work today.
But most things are wireless anyway.

Is it more of a mess than how many gaming desks look? Keyboards and mice usually on cords, transparent cases standing next to monitors...
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
Most of the components are in the case. For example, external GPUs are really, really rare. So are external drive controllers. Besides HIDs, the only things commonly outside of a computer anymore are DACs...because computers are electrically too noisy.
 

silentbogo

Moderator
Staff member
Joined
Nov 20, 2013
Messages
5,568 (1.37/day)
Location
Kyiv, Ukraine
System Name WS#1337
Processor Ryzen 7 5700X3D
Motherboard ASUS X570-PLUS TUF Gaming
Cooling Xigmatek Scylla 240mm AIO
Memory 64GB DDR4-3600(4x16)
Video Card(s) MSI RTX 3070 Gaming X Trio
Storage ADATA Legend 2TB
Display(s) Samsung Viewfinity Ultra S6 (34" UW)
Case ghetto CM Cosmos RC-1000
Audio Device(s) ALC1220
Power Supply SeaSonic SSR-550FX (80+ GOLD)
Mouse Logitech G603
Keyboard Modecom Volcano Blade (Kailh choc LP)
VR HMD Google dreamview headset(aka fancy cardboard)
Software Windows 11, Ubuntu 24.04 LTS
The goal being: to make it fast and very easy. It should be as simple as attaching an external USB drive, so that anyone could do it.
It's already a complete system on the card, once again, including RAM & SSD expansion and its own I/O. "Upgrading" the compute module is the equivalent of throwing away the entire PC except the PSU and chassis. Just because the press misunderstood the use of this thing doesn't mean that we have to be that stupid as well. It's a decent enterprise product with tons of real-world uses and practical benefits, but it's definitely not a consumer product for "making PC upgrades easier".
 

FordGT90Concept

"I go fast!1!11!1!"
I'm not sure how it helps enterprise either. :(
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.79/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
I'm not sure how it helps enterprise either. :(
Can I run VMs that only run on that one card? That eliminates the NUMA issue by isolating virtual machines to each node, and it's a real enterprise use case. If PCIe is only really being used for communication, like a network interface would be between distinct servers or VMs, then I don't see an issue. If you have a workload that can use some ridiculous number of cores, I wouldn't expect this to be a good solution for that kind of use case, but for virtualization like in data centers, the real question would be whether this costs less. I seriously doubt it.

The point really is that it sounds like each of these cards is a fully contained system with its own CPU, memory, and storage. I would expect it to behave as such, with the "PCIe" slot really just being for management, not for CPU-to-CPU communication.
 

FordGT90Concept

"I go fast!1!11!1!"
But you're paying for a whole lot of stuff you don't need/want like:
4 x USB3 ports
1 x HDMI (obvious silicon wasted on IGP)
2 x Thunderbolt ports (USB-C connectors)
1 x L/R out/optical (has audio chip)
2 x RJ45 LAN connections (maybe some logic to this but if it's just hosting VMs and only has like 250w worth of performance, can it really saturate more than one NIC before the CPU gets overburdened?)

On top of that, a huge advantage of VMs is that you can increase and decrease hardware resources according to the demands of the clients (e.g. shift RAM/cores). Can't do that with this.

The point really is that it sounds like each of these cards is a fully contained system with its own CPU, memory, and storage. I would expect it to behave as such, with the "PCIe" slot really just being for management, not for CPU-to-CPU communication.
Indeed, they're like NUCs in PCIe form factor...but why? I still don't see the point. Who is the target market?
 

Aquinus

Resident Wat-man
On top of that, a huge advantage of VMs is that you can increase and decrease hardware resources according to the demands of the clients (e.g. shift RAM/cores). Can't do that with this.
That's an advantage if you're working with a single VM, but that's not the general use case in data centers; it's more for development and for one-off VMs where you can't scale horizontally. Most businesses that live in the cloud require horizontal scaling to match load, not vertical scaling. The reason is that changing the specs of a VM requires shutting it down and restarting it, whereas scaling the number of VMs can happen without service interruption, since it only requires spinning VMs up and tearing them down.

Let me put it this way: if you're scaled up to 10 or 20 VMs, changing the characteristics of all of those VMs is far more complicated than just spinning up another 10. Cloud resources cost money, so scaling quickly and effectively is very important; otherwise you're just wasting money.
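As a toy illustration of why scaling out is the cheap operation (the function and thresholds below are hypothetical, not any particular cloud provider's API):

```python
# Toy horizontal-scaling rule: adjust the *number* of identical VMs to the load,
# rather than resizing a running VM (which would mean shutting it down and restarting it).
# The thresholds and the spin-up/tear-down idea are illustrative placeholders only.
def scale(current_vms: int, avg_cpu_pct: float,
          min_vms: int = 2, max_vms: int = 20) -> int:
    if avg_cpu_pct > 75 and current_vms < max_vms:
        return current_vms + 1   # spin up one more identical VM, no downtime
    if avg_cpu_pct < 25 and current_vms > min_vms:
        return current_vms - 1   # tear one down once its connections drain
    return current_vms           # steady state: leave the fleet alone

print(scale(current_vms=10, avg_cpu_pct=80))  # -> 11
print(scale(current_vms=10, avg_cpu_pct=15))  # -> 9
```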
 

FordGT90Concept

"I go fast!1!11!1!"
Even if all you're doing is starting and stopping VMs, the same logic still applies: you're going to buy many-core Xeons or Epycs, not these, because you're talking about VMs inside isolated hardware, so you get duplication of everything rather than just scaling VMs.

I finally came up with a reason for these things (sort of): redundancy. That's still a problem with I/O though because there isn't a means to seamlessly roll over between units.
 
Joined
Jul 16, 2014
Messages
8,220 (2.15/day)
Location
SE Michigan
System Name Dumbass
Processor AMD Ryzen 7800X3D
Motherboard ASUS TUF gaming B650
Cooling Arctic Liquid Freezer 2 - 420mm
Memory G.Skill Sniper 32gb DDR5 6000
Video Card(s) GreenTeam 4070 ti super 16gb
Storage Samsung EVO 500gb & 1Tb, 2tb HDD, 500gb WD Black
Display(s) 1x Nixeus NX_EDG27, 2x Dell S2440L (16:9)
Case Phanteks Enthoo Primo w/8 140mm SP Fans
Audio Device(s) onboard (realtek?) - SPKRS:Logitech Z623 200w 2.1
Power Supply Corsair HX1000i
Mouse SteelSeries Esports Wireless
Keyboard Corsair K100
Software windows 10 H
Benchmark Scores https://i.imgur.com/aoz3vWY.jpg?2
Joined
Jun 28, 2016
Messages
3,595 (1.16/day)
It's already a complete system on the card, once again, including RAM & SSD expansion and its own I/O. "Upgrading" the compute module is the equivalent of throwing away the entire PC except the PSU and chassis. Just because the press misunderstood the use of this thing doesn't mean that we have to be that stupid as well. It's a decent enterprise product with tons of real-world uses and practical benefits, but it's definitely not a consumer product for "making PC upgrades easier".
The compute module is a standalone, fully functional system. Sure. I still don't understand what you mean.
You don't upgrade it. You buy a new one. 100% correct. Even RAM may turn out to be soldered.

And yes, it's a perfect consumer product. Just not for you.

and reduce air flow.
Both correct and irrelevant. :)

PC users have the wrong idea about case airflow. Most think that more airflow means better cooling.
It doesn't matter how much air gets through your case. It only matters how much is pushed through and near the radiators.
As such, most of the airflow in a typical ATX case is totally wasted. Air exiting the case is often just a few K over room temperature.
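To put rough numbers on that: the heat an airstream carries out is mass flow times specific heat times temperature rise, so a lot of barely-warmed case air can remove less heat than a small amount of hot laptop exhaust. The flow rates and temperature rises below are illustrative guesses, not measurements of any real case or laptop:

```python
# Heat carried away by an airstream: Q = m_dot * c_p * dT
# The airflow figures and temperature rises are illustrative guesses, not measurements.
RHO_AIR = 1.2        # kg/m^3 near room temperature
CP_AIR = 1005.0      # J/(kg*K)
CFM_TO_M3S = 0.000471947

def heat_removed_w(airflow_cfm: float, delta_t_k: float) -> float:
    m_dot = airflow_cfm * CFM_TO_M3S * RHO_AIR   # kg/s of air moved
    return m_dot * CP_AIR * delta_t_k            # watts carried out of the box

print(heat_removed_w(100, 3))   # big ATX case, exhaust barely warm  -> ~171 W
print(heat_removed_w(10, 25))   # small laptop blower, hot exhaust   -> ~142 W
```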

It's very different in servers and laptops, where cooling potential is used as much as possible.
Air exiting most laptops is very hot - it can even get slightly unpleasant if vents are placed badly. But that also means it took A LOT of heat with it.

A cramped space with properly modeled airflow and properly placed radiators will be very efficient compared to what most PCs can achieve.
The only real issue here is noise. They'll have to provide something better than what Nvidia puts in Titan and pro cards.
 

silentbogo

Moderator
Staff member
But you're paying for a whole lot of stuff you don't need/want like:
Same with any/every motherboard or PC: you have tons of unused I/O that costs some production money, like triple-display outputs, extra PCIe, serial headers, RGB, stickers, etc. etc. etc.
Extra I/O never hurts, plus there isn't that much of it: just dual LAN, some USBs and HDMI (which is a big plus if you need to hook it up to a KVM console or switch).

Sure. I still don't understand what you mean.
You don't upgrade it. You buy a new one. 100% correct. Even RAM may turn out to be soldered.
2x SODIMM slots, 2x NVMe slots. The only non-upgradeable thing is the CPU.

And yes, it's a perfect consumer product. Just not for you.
If you are thinking that the PCIe slot is some magic interconnect to a magic backbone, then it might be a very disappointing consumer product. I'm 66.6% sure that they use the same approach as QNAP (e.g. Ethernet PHY backwards).
The only other way is wiring an actual PCIe x16, but then the backbone/motherboard basically becomes a glorified PCIe riser.
 
Joined
May 13, 2010
Messages
6,084 (1.14/day)
System Name RemixedBeast-NX
Processor Intel Xeon E5-2690 @ 2.9Ghz (8C/16T)
Motherboard Dell Inc. 08HPGT (CPU 1)
Cooling Dell Standard
Memory 24GB ECC
Video Card(s) Gigabyte Nvidia RTX2060 6GB
Storage 2TB Samsung 860 EVO SSD//2TB WD Black HDD
Display(s) Samsung SyncMaster P2350 23in @ 1920x1080 + Dell E2013H 20 in @1600x900
Case Dell Precision T3600 Chassis
Audio Device(s) Beyerdynamic DT770 Pro 80 // Fiio E7 Amp/DAC
Power Supply 630w Dell T3600 PSU
Mouse Logitech G700s/G502
Keyboard Logitech K740
Software Linux Mint 20
Benchmark Scores Network: APs: Cisco Meraki MR32, Ubiquiti Unifi AP-AC-LR and Lite Router/Sw:Meraki MX64 MS220-8P
Ugh, we've got blade servers already.
 
Joined
Jul 16, 2014
Messages
8,220 (2.15/day)
Location
SE Michigan
System Name Dumbass
Processor AMD Ryzen 7800X3D
Motherboard ASUS TUF gaming B650
Cooling Arctic Liquid Freezer 2 - 420mm
Memory G.Skill Sniper 32gb DDR5 6000
Video Card(s) GreenTeam 4070 ti super 16gb
Storage Samsung EVO 500gb & 1Tb, 2tb HDD, 500gb WD Black
Display(s) 1x Nixeus NX_EDG27, 2x Dell S2440L (16:9)
Case Phanteks Enthoo Primo w/8 140mm SP Fans
Audio Device(s) onboard (realtek?) - SPKRS:Logitech Z623 200w 2.1
Power Supply Corsair HX1000i
Mouse SteelSeries Esports Wireless
Keyboard Corsair K100
Software windows 10 H
Benchmark Scores https://i.imgur.com/aoz3vWY.jpg?2
PC users have the wrong idea about case airflow. Most think that more airflow means better cooling.
It doesn't matter how much air gets through your case. It only matters how much is pushed through and near the radiators.
As such, most of the airflow in a typical ATX case is totally wasted. Air exiting the case is often just a few K over room temperature.
Please spare me this argument again.
 
Joined
Jun 28, 2016
Messages
3,595 (1.16/day)
2x SODIMM slots, 2x NVMe slots. The only non-upgradeable thing is the CPU.
In this prototype.
If this idea gets any traction, it'll likely follow the laptop route. Some modules will offer this kind of upgrading and some won't.
If you are thinking that the PCIe slot is some magic interconnect to a magic backbone, then it might be a very disappointing consumer product. I'm 66.6% sure that they use the same approach as QNAP (e.g. Ethernet PHY backwards).
The QNAP thing is a system on a card (coprocessor or not - as noted earlier).
This isn't what we're talking about.

"The Element" is just a NUC with a PCIe output.
The idea is not to buy 4 of these and build a cluster. It's about putting this next to other components connected via the PCIe base. It should be compatible with the stuff we have now (GPUs, PCIe drives etc).

And since you're getting rid of the normal ATX motherboard (which sits parallel to the PCIe slots, forcing a lot of free space inside a case), the whole system becomes smaller.

A basic gaming configuration could consist of this module, a short GPU and a power supply.
This means that suddenly a "standard" PC is the size of a DAN A4.
The only other way is wiring an actual PCIe x16, but then the backbone/motherboard basically becomes a glorified PCIe riser.
Exactly. That's what it's supposed to be. I've already used the word "riser".
 