
AAEON Unveils World's First 8L Dual-GPU AI Inference Server, the MAXER-5100
Leading provider of advanced AI solutions AAEON has released a new addition to its AI Inference Server product line, the MAXER-5100, the world's first 8L AI inference server equipped with two integrated GPUs. The MAXER-5100's default model pairs the 24-core, 32-thread 14th Generation Intel Core i9 processor 14900K with two onboard NVIDIA RTX 2000 Ada GPUs. A barebone SKU is also available, accommodating CPUs of up to 65 W from across the 12th, 13th, and 14th Generation Intel Core processor lines, along with PCIe slots for other compatible GPUs, per project need.
Given the processing power and AI performance the system offers, the MAXER-5100 is primarily positioned as a central server for the management of multiple edge devices, particularly with its Certificate Authority (CA) support providing additional security for smart infrastructure, healthcare, and advanced manufacturing applications. Moreover, the MAXER-5100's use of a zero-trust secure tunnel and onboard TPM 2.0 allows for encrypted data transmission between the server and multiple edge devices, as well as over-the-air updates and remote diagnostics.