News Posts matching #Server


Alibaba Adds New "C930" Server-grade Chip to XuanTie RISC-V Processor Series

Damo Academy—a research and development wing of Alibaba—launched its debut server-grade processor design in Beijing late last week. According to a South China Morning Post (SCMP) news article, the C930 model is a brand-new addition to the e-commerce giant's XuanTie RISC-V CPU series. Company representatives stated that the latest product is designed as a server-level and high-performance computing (HPC) solution. Back in March 2024, TechPowerUp and other Western hardware news outlets picked up on Alibaba's teasing of the XuanTie C930 SoC and a related XuanTie 907 matrix processing unit. Fast-forward to the present day: Damo Academy has disclosed that initial shipments of finalized C930 units will be sent out to customers this month.

The newly released open-source RISC-V architecture-based HPC chip remains an unknown quantity in terms of technical specifications; Damo Academy representatives did not provide any detailed information during last Friday's conference (February 28). SCMP's report noted the R&D division's emphasis on "its role in advancing RISC-V adoption" within various high-end fields. The XuanTie engineering team has reportedly "supported the implementation of more than thirty percent of RISC-V high-performance processors." Upcoming additions will arrive in the form of the C908X for AI acceleration, the R908A for automotive processing solutions, and the XL200 for high-speed interconnects. These XuanTie projects are said to be still deep in development.

SOPHGO Unveils New Products at the 2025 China RISC-V Ecosystem Conference

On February 27-28, the 2025 China RISC-V Ecosystem Conference was held at the Zhongguancun International Innovation Center in Beijing. As a core promoter in the RISC-V field, SOPHGO was invited to deliver a speech and launch a series of new products based on the SG2044 chip, sharing the company's cutting-edge practices in the heterogeneous fusion of AI and RISC-V and contributing to the development of the global open-source instruction set ecosystem. During the conference, SOPHGO set up a distinctive exhibition area that drew many industry attendees.

Focusing on AI Integration, Leading Breakthroughs in RISC-V Technology
At the conference's main forum, SOPHGO's Vice President of RISC-V delivered a speech titled "RISC-V Breakthroughs Driven by AI: Integration + Heterogeneous Innovation," elaborating on SOPHGO's achievements in the deep integration of the RISC-V architecture with artificial intelligence technology. He pointed out that current AI innovations are driving market changes, and that the emergence of DeepSeek has ignited a computing power market measured in the trillions. The innovation of technical paradigms and the penetration of large models into various sectors will lead to explosive growth in inference demand, changing the structure of computing power demand. This will also reshape the landscape of the computing power market, bringing significant business opportunities to domestic computing power enterprises, while RISC-V high-performance computing enters a fast track of development driven by AI.

SoftBank, ZutaCore and Foxconn Collaborate on Development of Rack-Integrated Solution

SoftBank Corp., ZutaCore and Hon Hai Technology Group ("Foxconn") today announced they have implemented ZutaCore's two-phase DLC (Direct Liquid Cooling) technology in an AI server using NVIDIA accelerated computing, making it the world's first implementation of the technology with NVIDIA H200 GPUs. In addition, SoftBank designed and developed a rack-integrated solution that integrates each component of the server, including cooling equipment with two-phase DLC technology, at rack scale, and conducted an operational demonstration and performance evaluation at its data center in February 2025. The demonstration results indicated the solution passed NVIDIA's temperature test (NVQual), confirming the compatibility, stability and reliability of the rack-integrated solution. The solution also achieved a cooling-efficiency pPUE (partial Power Usage Effectiveness) of 1.03 (actual measured value) per rack.

With the spread and increased adoption of AI, demand for AI servers and other computing resources is expected to expand significantly, further increasing power consumption at data centers. At the same time, reducing power consumption from the perspective of reducing carbon dioxide (CO2) emissions has become a pressing issue worldwide, requiring data centers to become more efficient, consume less power, and introduce innovative heat removal solutions. Since May 2024, SoftBank has been collaborating with ZutaCore, a global leader in the development and business deployment of two-phase DLC technology, to develop solutions optimized for low power consumption at AI data centers.
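The pPUE figure quoted above has a simple definition: IT load plus the power drawn by the cooling subsystem being measured, divided by IT load. A minimal sketch of that arithmetic (the wattage values are illustrative, not from the announcement):

```python
def partial_pue(it_power_kw: float, cooling_power_kw: float) -> float:
    """pPUE = (IT load + cooling subsystem power) / IT load."""
    return (it_power_kw + cooling_power_kw) / it_power_kw

# Illustrative rack: 100 kW of IT load with 3 kW of cooling overhead
print(round(partial_pue(100.0, 3.0), 2))  # 1.03
```

A pPUE of 1.03 therefore means the cooling loop adds only about 3% on top of the IT equipment's own power draw.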

Dell Technologies Delivers Fourth Quarter and Full-Year Fiscal 2025 Financial Results

Dell Technologies announces financial results for its fiscal 2025 fourth quarter and full year. The company also provides guidance for its fiscal 2026 first quarter and full year.

Full-Year Summary
  • Full-year revenue of $95.6 billion, up 8% year over year
  • Full-year operating income of $6.2 billion, up 15% year over year, and non-GAAP operating income of $8.5 billion, up 8%
  • Record full-year diluted earnings per share of $6.38, up 39% year over year, and record non-GAAP diluted EPS of $8.14, up 10%
  • Cash flow from operations was $4.5 billion
  • Announcing a cash dividend increase of 18% and a $10 billion increase in share repurchase authorization
  • FY26 guidance: Full-year revenue growth of 8%, diluted EPS growth of 23% and non-GAAP diluted EPS growth of 14%

Server DRAM and HBM Continue to Drive Growth, 4Q24 DRAM Industry Revenue Increases by 9.9% QoQ

TrendForce's latest research reveals that global DRAM industry revenue surpassed US$28 billion in 4Q24, marking a 9.9% QoQ increase. This growth was primarily driven by rising contract prices for server DDR5 and concentrated shipments of HBM, leading to continued revenue expansion for the top three DRAM suppliers.

Most contract prices across applications reversed downward. However, increased procurement of high-capacity server DDR5 by major American CSPs helped sustain price momentum for server DRAM.

Micron Announces Shipment of 1γ (1-gamma) DRAM: Company's First EUV Memory Node

Micron Technology, Inc., today announced it is the first in the industry to ship samples of its 1γ (1-gamma), sixth-generation (10 nm-class) DRAM node-based DDR5 memory designed for next-generation CPUs to ecosystem partners and select customers. This 1γ DRAM milestone builds on Micron's previous 1α (1-alpha) and 1β (1-beta) DRAM node leadership to deliver innovations that will power future computing platforms from the cloud to industrial and consumer applications to Edge AI devices like AI PCs, smartphones and automobiles. The Micron 1γ DRAM node will first be leveraged in its 16 Gb DDR5 DRAM and over time will be integrated across Micron's memory portfolio to meet the industry's accelerating demand for high-performance, energy-efficient memory solutions for AI. Designed to offer speed capabilities of up to 9200 MT/s, the 16 Gb DDR5 product provides up to a 15% speed increase and over 20% power reduction compared to its predecessor.
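For context, the 9200 MT/s figure translates directly into peak theoretical bandwidth: a DDR5 module presents a 64-bit data bus, so each transfer moves 8 bytes. A quick sketch of that standard arithmetic (not a figure from Micron's announcement):

```python
def ddr5_peak_bandwidth_gbs(transfer_rate_mts: int, bus_width_bits: int = 64) -> float:
    """Peak theoretical module bandwidth in GB/s: transfers/s x bytes per transfer."""
    bytes_per_transfer = bus_width_bits // 8
    return transfer_rate_mts * bytes_per_transfer / 1000

print(ddr5_peak_bandwidth_gbs(9200))  # 73.6 GB/s per 64-bit module
```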

ASUS Unveils All-New Intel Xeon 6 Server Lineup

ASUS today announced an all-new series of servers powered by the latest Intel Xeon 6 processors, including the Xeon 6900-series, 6500P/6700P-series and 6300-series processors. These powerhouse processors deliver exceptional performance, efficiency and scalability, featuring up to 128 Performance-cores (P-cores) or 288 Efficient-cores (E-cores) per socket, along with native support for PCI Express 5.0 (PCIe 5.0) and DDR5 memory speeds of up to 6400 MT/s. The latest ASUS server solutions also incorporate an updated BMC module based on the ASPEED AST2600 chipset, providing improved manageability, security and compatibility with a wide range of remote management software - and they coincide with the unveiling of the latest Intel Xeon 6 processors.

Redefining efficiency and scalability
Intel Xeon 6 processors are engineered to meet the needs of modern data centers, AI-driven workloads and enterprise computing. Offering a choice between P-core and E-core architectures, these processors provide flexibility for businesses to optimize performance and energy efficiency based on specific workloads.

AEWIN Unveils High Availability Storage Server Powered by Intel Xeon 6 Processors

AEWIN is excited to launch the MIS-5131-2U2, a cutting-edge 2U2N High Availability Storage Server powered by Intel's latest Xeon 6 processors. The horizontal CPU placement enables optimized thermal dissipation, allowing the CPU to run at a high TDP of 350 W. Each node is equipped with a single Intel Xeon 6700/6500-series processor with P-cores (R1S), offering up to 80 performance cores, 136 PCIe 5.0 lanes, and 8x high-speed DDR5 RDIMMs at speeds of up to 6400 MT/s. Featuring rich I/O and 24x hot-swap dual-port NVMe SSD bays, the MIS-5131-2U2 is a high-performance and reliable storage solution for mission-critical applications.

The dual-node architecture within a single chassis allows seamless failover through NTB (Non-Transparent Bridge) interconnectivity, BMC-to-BMC communication, and dual-port NVMe drives. The two nodes of the MIS-5131 are linked via NTB running at PCIe Gen 5 speeds (32 GT/s) to enable high-speed, redundant storage failover. With NTB, dual-port NVMe drives, and the ample PCIe lanes of the Intel Xeon 6 R1S CPU, the system eliminates the need for an additional switch, delivering an optimized HA server solution with the best TCO for continuous operation.

GIGABYTE Launches New Servers Using Intel Xeon 6700 & 6500-series Processors and Provides Updates for Servers Using Xeon 6300-series

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced new GIGABYTE rack servers optimized for Intel Xeon 6700/6500-series processors with up to 136 PCIe 5.0 lanes. Additionally, existing GIGABYTE enterprise products now support the new Intel Xeon 6700/6500-series processors with P-cores, as well as the 6300-series.

New Servers Supporting Platform with Up to 136 PCIe Lanes
The new Intel Xeon 6700 and 6500-series include processor SKUs that support either up to 136 PCIe 5.0 lanes (referred to as R1S) or 88 PCIe lanes. Optimized for R1S processors, the new GIGABYTE rack servers (R264-SG2, R264-SG3, R264-SG5, R164-SG5, R164-SG6) use the additional PCIe lanes for diverse storage options, dual-slot GPUs, and extra expansion slots, including support for OCP NIC 3.0. These servers will be deployed in applications such as storage, telecom, edge, and more. All of the new servers support Intel Xeon 6 processors using the LGA 4710 socket, but they are further optimized to take advantage of the additional PCIe lanes that R1S SKUs provide over other Intel Xeon 6 processors. A server that exemplifies this advantage, the R264-SG5, supports up to twenty-eight E3.S Gen 5 NVMe drives while also supporting a dual-slot (Gen 5 x16) GPU.

Advantech Unveils New AI, Industrial and Network Edge Servers Powered by Intel Xeon 6 Processors

Advantech, a global leader in industrial and embedded computing, today announced the launch of seven new server platforms built on Intel Xeon 6 processors, optimized for industrial, transportation and communications applications. Designed to meet the increasing demands of complex AI, storage and networking workloads, these edge servers and network appliances provide superior performance, reliability, and scalability for system integrator, solution, and service provider customers.

Intel Xeon 6 - Exceptional Performance for the Widest Range of Workloads
Intel Xeon 6 processors feature advanced performance and efficiency cores, delivering up to 86 cores per CPU, DDR5 memory support with speeds up to 6400 MT/s, and PCIe Gen 5 lanes for high-speed connectivity. Designed to optimize both compute-intensive and scale-out workloads, these processors ensure seamless integration across a wide array of applications.

MSI Announces New Server Platforms Supporting Intel Xeon 6 Family of Processors

MSI introduces new server platforms powered by the latest Intel Xeon 6 family of processors with the Performance Cores (P-Cores). Engineered for high-density performance, seamless scalability, and energy-efficient operations, these servers deliver exceptional throughput, dynamic workload flexibility, and optimized power efficiency. Optimized for AI-driven applications, modern data centers, and cloud-native workloads, MSI's new platforms help lower total cost of ownership (TCO) while maximizing infrastructure efficiency and resource optimization.

"As data-driven transformation accelerates across industries, businesses require solutions that not only deliver performance but also enable sustainable growth and operational agility," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "Our Intel Xeon 6 processor-based servers are designed to support this shift by offering high-core scalability, energy-efficient performance, and dynamic workload optimization. These capabilities empower organizations to maximize compute density, streamline their digital ecosystems, and respond to evolving market demands with greater speed and efficiency."

MITAC Computing Announces Intel Xeon 6 CPU-powered Next-gen AI & HPC Server Series

MiTAC Computing Technology Corporation, a leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation, today announced the launch of its latest server systems and motherboards powered by the latest Intel Xeon 6 processors with P-cores. These industry-leading processors are designed for compute-intensive workloads, providing up to twice the performance across the widest range of workloads, including AI and HPC.

Driving Innovation in AI and High-Performance Computing
"For over a decade, MiTAC Computing has collaborated with Intel to push the boundaries of server technology, delivering cutting-edge solutions optimized for AI and high-performance computing (HPC)," said Rick Hwang, President of MiTAC Computing Technology Corporation. "With the integration of the latest Intel Xeon 6 P-core processors, our servers now unlock groundbreaking AI acceleration, boost computational efficiency, and scale cloud operations to new heights. These innovations empower our customers with a competitive edge through superior performance and an optimized total cost of ownership."

Senao Networks Unveils AI Driven Computing at MWC Barcelona 2025

Senao Networks Inc. (SNI), a global leader in AI computing and networking solutions, will be exhibiting at the 2025 Mobile World Congress (MWC) in Barcelona. At the event, SNI will showcase its latest AI-driven innovations, including AI Servers, AI Cameras, AI PCs, Cloud Solutions, and Titanium Power Supplies, reinforcing its vision of "AI Everywhere."

Senao Networks continues to advance AI computing with new products designed to enhance security, efficiency, and connectivity.

Arm to Develop In-House Server CPUs, Signs Meta as First Customer

Reports from the Financial Times suggest Arm plans to create its own CPU, set to hit the market in 2025, with Meta Platforms said to be one of the first customers. The chip is said to be a CPU for data center servers, with TSMC handling the manufacturing. However, when the Financial Times asked about this, SoftBank (the majority owner of Arm) and Meta stayed quiet, while Arm didn't give a statement. A Nikkei report from May 2024 suggested that a prototype AI processor chip would be completed by spring 2025 and be available for sale by fall 2025, so the latest information from the Financial Times feels like a confirmation of previous rumors.

Right now, Arm makes money by letting others use its instruction set and core designs to make their own chips. This new move could mean Arm will compete with its current customers. Industry sources say Arm is trying to win business from Qualcomm, with rumors that Arm has been bringing in executives from companies it works with to help develop this chip. While Qualcomm had talked in the past about supplying Meta with a data center CPU based on Arm's designs, it looks like Arm has won at least some of that deal. However, no technical or specification details are currently available for Arm's first in-house server CPU.

OnLogic Reveals the Axial AX300 Edge Server

OnLogic, a leading provider of edge computing solutions, has launched the Axial AX300, a highly customizable and powerful edge server. The AX300 is engineered to help businesses of any size better leverage their on-site data and unlock the potential of AI by placing powerful computing capabilities on-site.

The Axial AX300 empowers organizations to seamlessly move computing resources closer to the data source, providing significant advantages in performance, latency, operational efficiency, and total cost of ownership over cloud-based data management. With its robust design, flexible configuration options, and advanced security features, the Axial AX300 is the ideal platform for a wide range of highly-impactful edge computing applications, including:
  • AI/ML inference and training: Leveraging the power of AI/ML at the edge for real-time insights, predictive maintenance, and improved decision-making.
  • Data analytics: Processing and analyzing data generated by IoT devices and sensors in real-time to improve operational efficiency.
  • Virtualization: Consolidating multiple workloads onto a single server, optimizing resource utilization and simplifying deployment and management.

HPE Introduces Next-Generation ProLiant Servers

Hewlett Packard Enterprise today announced eight new HPE ProLiant Compute Gen12 servers, the latest additions to a new generation of enterprise servers that introduce industry-first security capabilities, optimize performance for complex workloads and boost productivity with management features enhanced by artificial intelligence (AI). The new servers will feature upcoming Intel Xeon 6 processors for data center and edge environments.

"Our customers are tackling workloads that are overwhelmingly data-intensive and growing ever-more demanding," said Krista Satterthwaite, senior vice president and general manager, Compute at HPE. "The new HPE ProLiant Compute Gen12 servers give organizations - spanning public sector, enterprise and vertical industries like finance, healthcare and more - the horsepower and management insights they need to thrive while balancing their sustainability goals and managing costs. This is a modern enterprise platform engineered for the hybrid world, designed with innovative security and control capabilities to help companies prevail over the evolving threat landscape and performance challenges that their legacy hardware cannot address."

Intel Xeon Server Processor Shipments Fall to a 13-Year Low

Intel's data center business has experienced significant decline in recent years. Once the go-to choice for data center buildouts, Xeon processor shipments have now reached a 13-year low. According to SemiAnalysis analyst Sravan Kundojjala on X, volumes have fallen to less than 50% of the peak observed in 2021. In a chart indexed to 2011 CPU volume, analysis drawn from server volumes and 10-K filings shows the decline Intel has experienced in recent years. Following the 2021 peak, shipped CPU volume has remained in free fall. The main cause of this contraction is Intel's competitors gaining massive traction: AMD, with its EPYC CPUs, has been Intel's primary competitor, pushing the boundaries of CPU core count per socket and performance per watt, all at an attractive price point.

During a recent earnings call, Intel's interim co-CEO leadership admitted that Intel is still behind the competition on performance, even with Granite Rapids and Clearwater Forest, which promised to be its advantage in the data center. "So I think it would not be unfathomable that I would put a data center product outside if that meant that I hit the right product, the right market window as well as the right performance for my customers," said Intel co-CEO Michelle Johnston Holthaus, adding that "Intel Foundry will need to earn my business every day, just as I need to earn the business of my customers." This confirms that the company is now dedicated to restoring its product leadership, even if its internal foundry is struggling. It will take some time before Intel's CPU shipment volumes recover, and with AMD executing well in the data center, the battle is becoming highly intense.

Microsoft Announces its FY25 Q2 Earnings Release

Microsoft Corp. today announced the following results for the quarter ended December 31, 2024, as compared to the corresponding period of last fiscal year:
  • Revenue was $69.6 billion and increased 12%
  • Operating income was $31.7 billion and increased 17% (up 16% in constant currency)
  • Net income was $24.1 billion and increased 10%
  • Diluted earnings per share was $3.23 and increased 10%
"We are innovating across our tech stack and helping customers unlock the full ROI of AI to capture the massive opportunity ahead," said Satya Nadella, chairman and chief executive officer of Microsoft. "Already, our AI business has surpassed an annual revenue run rate of $13 billion, up 175% year-over-year."

NVIDIA Outlines Cost Benefits of Inference Platform

Businesses across every industry are rolling out AI services this year. For Microsoft, Oracle, Perplexity, Snap and hundreds of other leading companies, using the NVIDIA AI inference platform—a full stack comprising world-class silicon, systems and software—is the key to delivering high-throughput and low-latency inference and enabling great user experiences while lowering cost. NVIDIA's advancements in inference software optimization and the NVIDIA Hopper platform are helping industries serve the latest generative AI models, delivering excellent user experiences while optimizing total cost of ownership. The Hopper platform also helps deliver up to 15x more energy efficiency for inference workloads compared to previous generations.

AI inference is notoriously difficult, as it requires many steps to strike the right balance between throughput and user experience. But the underlying goal is simple: generate more tokens at a lower cost. Tokens represent words in a large language model (LLM) system—and with AI inference services typically charging for every million tokens generated, this goal offers the most visible return on AI investments and energy used per task. Full-stack software optimization offers the key to improving AI inference performance and achieving this goal.
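The "more tokens at a lower cost" goal can be made concrete with back-of-the-envelope math: serving cost per million tokens is just the hourly cost of the hardware divided by the tokens it generates per hour. A small sketch, where the dollar and throughput figures are hypothetical placeholders rather than NVIDIA numbers:

```python
def cost_per_million_tokens(gpu_cost_per_hour_usd: float,
                            tokens_per_second: float) -> float:
    """Serving cost in USD per million generated tokens for one GPU."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_cost_per_hour_usd / tokens_per_hour * 1_000_000

# Hypothetical: a $2.50/hour GPU sustaining 1,000 tokens/s
print(round(cost_per_million_tokens(2.50, 1000), 2))  # 0.69
```

The same formula shows why throughput optimization matters: doubling tokens per second at fixed hardware cost halves the cost per million tokens.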

Ultra Accelerator Link Consortium (UALink) Welcomes Alibaba, Apple and Synopsys to Board of Directors

Ultra Accelerator Link Consortium (UALink) has announced the expansion of its Board of Directors with the election of Alibaba Cloud Computing Ltd., Apple Inc., and Synopsys Inc. The new Board members will leverage their industry knowledge to advance development and industry adoption of UALink - a high-speed, scale-up interconnect for next-generation AI cluster performance.

"Alibaba Cloud believes that driving AI computing accelerator scale-up interconnection technology by defining core needs and solutions from the perspective of cloud computing and applications has significant value in building the competitiveness of intelligent computing supernodes," said Qiang Liu, VP of Alibaba Cloud, GM of Alibaba Cloud Server Infrastructure. "The UALink consortium, as a leader in the interconnect field of AI accelerators, has brought together key members from the AI infrastructure industry to work together to define interconnect protocol which is natively designed for AI accelerators, driving innovation in AI infrastructure. This will strongly promote the innovation of AI infrastructure and improve the execution efficiency of AI workloads, contributing to the establishment of an open and innovative industry ecosystem."

NVIDIA's GB200 "Blackwell" Racks Face Overheating Issues

NVIDIA's new GB200 "Blackwell" racks are running into trouble (again). Big cloud companies like Microsoft, Amazon, Google, and Meta Platforms are cutting back their orders because of heat problems, Reuters reports, quoting The Information. The first shipments of racks with Blackwell chips are getting too hot and have connection issues between chips, the report says. These tech hiccups have made some customers who ordered $10 billion or more worth of racks think twice about buying.

Some are putting off their orders until NVIDIA has better versions of the racks. Others are looking at buying older NVIDIA AI chips instead. For example, Microsoft planned to set up GB200 racks with no fewer than 50,000 Blackwell chips at one of its Phoenix sites. However, The Information reports that OpenAI has asked Microsoft to provide NVIDIA's older "Hopper" chips instead, pointing to delays linked to the Blackwell racks. NVIDIA's problems with its Blackwell GPUs housed in high-density racks are nothing new; in November 2024, Reuters, also referencing The Information, uncovered overheating issues in servers that housed 72 processors. NVIDIA has made several changes to its server rack designs to tackle these problems; however, it seems the problem was not entirely solved.

Supermicro Begins Volume Shipments of Max-Performance Servers Optimized for AI, HPC, Virtualization, and Edge Workloads

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is commencing shipments of max-performance servers featuring Intel Xeon 6900 series processors with P-cores. The new systems feature a range of new and upgraded technologies with architectures optimized for the most demanding high-performance workloads, including large-scale AI, cluster-scale HPC, and environments where a maximum number of GPUs is needed, such as collaborative design and media distribution.

"The systems now shipping in volume promise to unlock new capabilities and levels of performance for our customers around the world, featuring low latency, maximum I/O expansion providing high throughput with 256 performance cores per system, 12 memory channels per CPU with MRDIMM support, and high performance EDSFF storage options," said Charles Liang, president and CEO of Supermicro. "We are able to ship our complete range of servers with these new application-optimized technologies thanks to our Server Building Block Solutions design methodology. With our global capacity to ship solutions at any scale, and in-house developed liquid cooling solutions providing unrivaled cooling efficiency, Supermicro is leading the industry into a new era of maximum performance computing."

InWin Introduces New Server & IPC Equipment at CES 2025

InWin has showcased several new server chassis models at CES—these new introductions form part of the company's efforts to expand regional IPC, server, and systems assembly operations going into 2025. New manufacturing facilities in the USA and Malaysia were brought online last year, and new products have sprung forth. TechPowerUp staffers were impressed by InWin's RG650B model—this cavernous rackmount GPU server has been designed with AI and HPC applications in mind. Its 6.5U dual-chamber design is divided into two sections with optimized and independent heat dissipation systems—GPU accelerators are destined for the 4.5U space, while the motherboard and CPUs go into the 2U chamber.

The RG650B's front section is dominated by nine pre-installed hot-swappable 80 x 30 mm (12,000 RPM max. rated) PWM fans. This array should provide plenty of cooling for any contained hardware; these components are powered by an 80 Plus Titanium CRPS 3200 W PSU (with four 12V-2x6 pin connectors). InWin's spec sheet states that the RG650B supports 18 FHFL PCI-Express slots with four PCI-Express riser cables—granting plenty of potential for the installation of add-in boards.

HighPoint M.2/E1.S NVMe RAID AICs Deliver Unparalleled Speed and Scalability for GPU Server Applications

HighPoint Technologies, a leader in advanced PCIe Switch and RAID AIC, Adapter and Storage Enclosure solutions, has announced an extensive line of M.2 and E1.S RAID AICs engineered to accommodate high-performance GPU-centric workloads. Designed for enterprise and datacenter class computing environments, HighPoint NVMe RAID AICs deliver class-leading performance and unmatched scalability, enabling modern x86/AMD and Arm platforms to support 4 to 16 NVMe SSDs via a single PCIe Gen 4 or Gen 5 x16 slot. State-of-the-art PCIe Switching Architecture and flexible RAID technology enable administrators to custom tailor M.2 and E1.S storage configurations for a broad range of data-intensive applications, and seamlessly scale or expand existing storage configurations to meet the needs of evolving workflows.

Unprecedented Storage Density
HighPoint NVMe AICs have set a new milestone for M.2 NVMe storage. HighPoint's Dual-Width AIC architecture enables a single PCIe Gen 4 or Gen 5 x16 slot to directly host up to 16 M.2 NVMe SSDs and 128 TB of storage capacity at speeds up to 28 GB/s; a truly unprecedented advancement in compact, single-device storage expansion. State-of-the-art PCIe switching technology and advanced cooling systems maximize transfer throughput and ensure M.2 configurations operate at peak efficiency by halting the performance-sapping threat of thermal throttling in its tracks.
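The headline figures imply 8 TB per M.2 drive and an aggregate throughput close to what a single x16 uplink can carry. A quick sanity check of that arithmetic (the per-drive capacity and the roughly 31.5 GB/s usable figure for PCIe Gen 4 x16 are standard values inferred here, not from the announcement):

```python
# Sanity-check the headline figures (illustrative arithmetic only)
drives = 16
total_capacity_tb = 128
per_drive_tb = total_capacity_tb / drives          # implies 8 TB per M.2 drive
print(per_drive_tb)

aggregate_gbs = 28.0                               # quoted peak throughput
per_drive_gbs = aggregate_gbs / drives             # 1.75 GB/s per drive sustained
print(per_drive_gbs)

# PCIe Gen 4 x16 carries roughly 31.5 GB/s of usable bandwidth, so at
# 28 GB/s the shared uplink, not the individual SSDs, is the ceiling.
gen4_x16_usable_gbs = 31.5
print(aggregate_gbs / gen4_x16_usable_gbs > 0.85)  # True: near saturation
```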

Advantech Introduces Its GPU Server SKY-602E3 With NVIDIA H200 NVL

Advantech, a leading global provider of industrial edge AI solutions, is excited to introduce its GPU server SKY-602E3 equipped with the NVIDIA H200 NVL platform. This powerful combination is set to accelerate the offline LLM for manufacturing, providing unprecedented levels of performance and efficiency. The NVIDIA H200 NVL, requiring 600 W passive cooling, is fully supported by the compact and efficient SKY-602E3 GPU server, making it an ideal solution for demanding edge AI applications.

Core of Factory LLM Deployment: AI Vision
The SKY-602E3 GPU server excels in supporting large language models (LLMs) for AI inference and training. It features four PCIe 5.0 x16 slots, delivering high bandwidth for intensive tasks, and four PCIe 5.0 x8 slots, providing enhanced flexibility for GPU and frame grabber card expansion. The half-width design of the SKY-602E3 makes it an excellent choice for workstation environments. Additionally, the server can be equipped with the NVIDIA H200 NVL platform, which offers 1.7x more performance than the NVIDIA H100 NVL, freeing up additional PCIe slots for other expansion needs.