News Posts matching #CXL


Imec Develops New CXL Buffer Memory That Could Surpass DRAM Bit Density

This week, at the 2024 IEEE International Electron Devices Meeting (IEDM), imec, a world-leading research and innovation hub in nanoelectronics and digital technologies, presented a novel 3D-integrated charge-coupled device (CCD) that can operate as block-addressable buffer memory in support of data-intensive compute applications. Memory operation was demonstrated on a planar proof-of-concept CCD structure that stores 142 bits. Implementing an oxide-semiconductor channel material (such as IGZO) ensures sufficiently long retention times and enables 3D integration in a cost-efficient, 3D NAND-like architecture. Imec expects 3D CCD memory density to scale far beyond the DRAM limit.

The recent introduction of the Compute Express Link (CXL) memory interface creates opportunities for new memories to complement DRAM in data-intensive compute applications such as AI and ML. One example is the CXL type-3 buffer memory, envisioned as an off-chip pool of memory that 'feeds' the various processor cores with large data blocks via a high-bandwidth CXL switch. This class of memories meets different specifications than byte-addressable DRAM, which increasingly struggles to stay on its cost-per-bit scaling trend.
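
The distinction between block-addressable buffer memory and byte-addressable DRAM can be illustrated with a minimal Python sketch. All names and the block size here are hypothetical, chosen for illustration only; they do not come from the CXL specification or imec's work.

```python
# Hypothetical sketch: a block-addressable buffer pool, as a CXL type-3
# device might expose one, contrasted with byte-granular DRAM access.
# BLOCK_SIZE and all class/method names are illustrative assumptions.

BLOCK_SIZE = 4096  # bytes per block (illustrative)

class BlockBufferPool:
    """Stores and retrieves fixed-size blocks only, never single bytes."""
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK_SIZE)] * num_blocks

    def write_block(self, index, data):
        # A block-addressable device transfers whole blocks at a time.
        if len(data) != BLOCK_SIZE:
            raise ValueError("block-addressable: whole blocks only")
        self.blocks[index] = bytes(data)

    def read_block(self, index):
        return self.blocks[index]

# Byte-addressable DRAM, by contrast, behaves like a flat bytearray
# where any single offset may be read or written directly.
dram = bytearray(BLOCK_SIZE * 4)
dram[123] = 0xFF  # a one-byte update is legal in DRAM...

pool = BlockBufferPool(num_blocks=4)
pool.write_block(0, b"\xff" * BLOCK_SIZE)  # ...but the pool moves whole blocks
```

The looser access granularity is one reason a buffer memory can trade DRAM's byte-level interface for higher bit density.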

Fujitsu Previews Monaka: 144-Core Arm CPU Made with Chiplets

Fujitsu has previewed its next-generation Monaka processor, a 144-core powerhouse for data centers. Satoshi Matsuoka of the RIKEN Center for Computational Science showcased the mechanical sample on social media platform X. The Monaka processor is being developed in collaboration with Broadcom and employs an innovative 3.5D eXtreme Dimension System-in-Package architecture featuring four 36-core chiplets manufactured on TSMC's N2 process. These chiplets are stacked face-to-face with SRAM tiles through hybrid copper bonding, with the cache layer built on TSMC's N5 process. A distinguishing feature of the Monaka design is its approach to memory architecture. Rather than incorporating HBM, Fujitsu has opted for pure cache dies below the compute logic, combined with DDR5 DRAM compatibility, potentially leveraging advanced modules such as MR-DIMM and MCR-DIMM.

The processor's I/O die supports cutting-edge interfaces, including DDR5 memory, PCIe 6.0, and CXL 3.0, for seamless integration with modern data center infrastructure. Security is handled through Armv9-A's Confidential Computing Architecture for enhanced workload isolation. Fujitsu has set ambitious goals for the Monaka processor: the company aims to achieve twice the energy efficiency of current x86 processors by 2027 while maintaining air cooling. The processor targets AI and HPC workloads with Arm SVE2 support, which enables vector lengths of up to 2,048 bits. Scheduled for release during Fujitsu's fiscal year 2027 (April 2026 to March 2027), the Monaka processor is shaping up as a competitor to AMD's EPYC and Intel's Xeon processors.

CXL Consortium Announces Compute Express Link 3.2 Specification Release

The CXL Consortium, an industry standard body advancing coherent connectivity, announces the release of its Compute Express Link (CXL) 3.2 Specification. The 3.2 Specification optimizes CXL Memory Device monitoring and management, enhances functionality of CXL Memory Devices for OS and Applications, and extends security with the Trusted Security Protocol (TSP).

"We are excited to announce the release of the CXL 3.2 Specification to advance the CXL ecosystem by providing enhancements to security, compliance, and functionality of CXL Memory Devices," said Larrie Carr, CXL Consortium President. "The Consortium continues to develop an open, coherent interconnect and enable an interoperable ecosystem for heterogeneous memory and computing solutions."

Lenovo Shows 16 TB Memory Cluster with CXL in 128x 128 GB Configuration

Expanding a system's computing capability with an additional accelerator like a GPU is common; expanding its memory capacity with room for more DIMMs is something new. Thanks to ServeTheHome, we see that at the OCP Summit 2024, Lenovo showcased its ThinkSystem SR860 V3 server, leveraging CXL technology and Astera Labs Leo memory controllers to accommodate a staggering 16 TB of DDR5 memory across 128 DIMM slots. Traditional four-socket servers face limitations due to the memory channels supported by Intel Xeon processors. With each CPU supporting up to 16 DDR5 DIMMs, a four-socket configuration maxes out at 64 DIMMs, equating to 8 TB when using 128 GB RDIMMs. Lenovo's new approach raises this ceiling significantly by incorporating an additional 64 DIMM slots through CXL memory expansion.

The ThinkSystem SR860 V3 integrates Astera Labs Leo controllers to enable the CXL-connected DIMMs. These controllers manage up to four DDR5 DIMMs each, resulting in a layered memory design. The chassis base houses four Xeon processors, each linked to 16 directly connected DIMMs, while the upper section—called the "memory forest"—houses the additional CXL-enabled DIMMs. Beyond memory capabilities, the server supports up to four double-width GPUs, making it also a solution for high-performance computing and AI workloads. This design caters to scale-up applications requiring vast memory resources, such as large-scale database management, and allows the resources to stay in memory instead of waiting on storage. CXL-based memory architectures are expected to become more common next year. Future developments may see even larger systems with shared memory pools, enabling dynamic allocation across multiple servers. For more pictures and video walkthrough, check out ServeTheHome's post.
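
The capacity arithmetic described above can be checked in a few lines. This is a back-of-envelope sketch using only figures quoted in the article; it involves no Lenovo or Astera Labs tooling.

```python
# Back-of-envelope check of the SR860 V3 memory configuration
# (all figures taken from the article text).

GB_PER_DIMM = 128
sockets = 4
dimms_per_cpu = 16

direct_dimms = sockets * dimms_per_cpu          # 64 directly attached DIMMs
direct_tb = direct_dimms * GB_PER_DIMM / 1024   # 8.0 TB ceiling without CXL

cxl_dimms = 64                                  # the "memory forest" tier
leo_dimms_per_controller = 4                    # each Leo manages up to 4 DIMMs
leo_controllers = cxl_dimms // leo_dimms_per_controller  # 16 controllers

total_dimms = direct_dimms + cxl_dimms          # 128 DIMM slots in total
total_tb = total_dimms * GB_PER_DIMM / 1024     # 16.0 TB
```

The CXL tier exactly doubles the conventional four-socket ceiling, which is the headline claim of the system.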

Phison Unveils Pascari D-Series PCIe Gen 5 128TB Data Center SSDs

Phison Electronics, a leading innovator in NAND flash technologies, today announced the newest and highest-capacity addition to the Pascari D-Series data center-optimized SSDs, to be showcased at SC24. The Pascari D205V is the first PCIe Gen 5 128 TB-class data center SSD available for preorder, addressing shifting storage demands across use cases including AI, media and entertainment (M&E), research, and beyond. In a single drive, the Pascari D205V offers 122.88 TB of storage, creating a four-to-one capacity advantage over traditional cold-storage hard drives while shrinking both physical footprint and OPEX.

As the exponential data deluge continues to strain data center infrastructure, organizations face a tipping point: maximizing investment while remaining conscious of footprint, cost efficiency, and power consumption. The Pascari D205V read-intensive SSD combines Phison's industry-leading X2 controller with the latest 2 Tb 3D QLC NAND, engineered to deliver 14,600 MB/s sequential read and 3,000K IOPS random read performance. By doubling read speeds relative to Gen 4 and capacity relative to the 61.44 TB enterprise SSDs currently on the market, the Pascari D205V enables larger datasets per server, top-tier capacity-per-watt, and unparalleled read performance.
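
The headline ratios can be sanity-checked with simple arithmetic. Capacities and speeds come from the article; the 30.72 TB hard-drive comparison point is our own assumption about what "traditional cold storage" means here.

```python
# Sanity-checking the Pascari D205V headline figures.

ssd_tb = 122.88
hdd_tb = 30.72                        # assumed large nearline HDD capacity
capacity_ratio = ssd_tb / hdd_tb      # the quoted four-to-one advantage

seq_read_mbps = 14_600
gen4_read_mbps = seq_read_mbps / 2    # "doubling read speeds against Gen 4"

prev_gen_tb = 61.44
capacity_gain = ssd_tb / prev_gen_tb  # doubling capacity vs 61.44 TB drives
```
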

Kioxia Adopted for NEDO Project to Develop Manufacturing Technology for Innovative Memory Under Post-5G System Infrastructure Project

Kioxia Corporation, a world leader in memory solutions, today announced that it has been adopted by Japan's national research and development agency, New Energy and Industrial Technology Development Organization (NEDO), for its groundbreaking proposal on the Development of Manufacturing Technology for Innovative Memory to enhance the post-5G information and communication system infrastructure.

In the post-5G information and communication era, AI is estimated to generate an unprecedented volume of data. This surge will likely escalate the data processing demands of data centers and increase power consumption. To address this, it is crucial that the next-generation memories facilitate rapid data transfer with high-performance processors while increasing capacity and reducing power consumption.

SK hynix Showcases Memory Solutions at the 2024 OCP Global Summit

SK hynix is showcasing its leading AI and data center memory products at the 2024 Open Compute Project (OCP) Global Summit held October 15-17 in San Jose, California. The annual summit brings together industry leaders to discuss advancements in open source hardware and data center technologies. This year, the event's theme is "From Ideas to Impact," which aims to foster the realization of theoretical concepts into real-world technologies.

In addition to presenting its advanced memory products at the summit, SK hynix is also strengthening key industry partnerships and sharing its AI memory expertise through insightful presentations. This year, the company is holding eight sessions—up from five in 2023—on topics including HBM and CMS.

MSI Showcases Innovation at 2024 OCP Global Summit, Highlighting DC-MHS, CXL Memory Expansion, and MGX-enabled AI Servers

MSI, a leading global provider of high-performance server solutions, is excited to showcase its comprehensive lineup of motherboards and servers based on the OCP Modular Hardware System (DC-MHS) architecture at the OCP Global Summit from October 15-17 at booth A6. These cutting-edge solutions represent a breakthrough in server designs, enabling flexible deployments for cloud and high-density data centers. Featured innovations include CXL memory expansion servers and AI-optimized servers, demonstrating MSI's leadership in pushing the boundaries of AI performance and computing power.

DC-MHS Series Motherboards and Servers: Enabling Flexible Deployment in Data Centers
"The rapidly evolving IT landscape requires cloud service providers, large-scale data center operators, and enterprises to handle expanding workloads and future growth with more flexible and powerful infrastructure. MSI's new range of DC-MHS-based solutions provides the needed flexibility and efficiency for modern data center environments," said Danny Hsu, General Manager of Enterprise Platform Solutions.

Credo Announces PCI Express 6/7, Compute Express Link CXL 3.x Retimers, and AEC PCI Express Product Line at OCP Summit 2024

Credo Technology Group Holding Ltd (Credo), an innovator in secure, high-speed connectivity solutions that improve energy efficiency as data rates and bandwidth requirements rise throughout the data infrastructure market, announced the company's first Toucan PCI Express (PCIe) 6 / Compute Express Link (CXL) 3.x retimers, Magpie PCIe 7 / CXL 4.x retimers, and OSFP-XD 16x 64 GT/s (1 Tb) PCIe 6/CXL HiWire AECs. Credo will demonstrate the Toucan PCIe 6 retimers and HiWire AECs at the upcoming Open Compute Project (OCP) Summit, October 15-17, in Booth 31 and the OCP Innovation Center.

Building on Credo's renowned Serializer/Deserializer (SerDes) technology, the new PCIe 6 and PCIe 7 retimers deliver industry-leading performance and power efficiency while being built on lower-cost, more mature process nodes than competing devices. Credo will also include enhanced diagnostic tools, including an embedded logic analyzer and advanced SerDes tools driven by a new GUI designed to enable rapid bring-up and debugging of customer systems.

MSI Launches AMD EPYC 9005 Series CPU-Based Server Solutions

MSI, a leading global provider of high-performance server solutions, today introduced its latest AMD EPYC 9005 Series CPU-based server boards and platforms, engineered to tackle the most demanding data center workloads with leadership performance and efficiency.

Featuring AMD EPYC 9005 Series processors with up to 192 cores and 384 threads, MSI's new server platforms deliver breakthrough compute power, unparalleled density, and exceptional energy efficiency, making them ideal for handling AI-enabled, cloud-native, and business-critical workloads in modern data centers.

ScaleFlux Announces Two New SSD Controllers and One CXL Controller

In the past 13 years, global data production has surged, increasing an estimated 74 times. (1) Looking forward, McKinsey projects AI to spur 35% annual growth in enterprise SSD capacity demand, from 181 Exabytes (EB) in 2024 to 1,078EB in 2030. (2) To address this growing demand, ScaleFlux, a leader in data storage and memory technology, is announcing a significant expansion of its product portfolio. The company is introducing cutting-edge controllers for both NVMe SSDs and Compute Express Link (CXL) modules, reinforcing its leadership in innovative technology for the data pipeline. "With the release of three new ASIC controllers and key updates to its existing lineup, ScaleFlux continues to push the boundaries of SSD and memory performance, power efficiency, and data integrity," points out Hao Zhong, CEO and Co-Founder of the company.

Three New SoC Controllers to Transform Data Center Storage
ScaleFlux is proud to unveil three new SoC controllers designed to enhance data center, AI and enterprise infrastructure:

JEDEC Adds Two New Standards Supporting Compute Express Link (CXL) Technology

JEDEC Solid State Technology Association, the global leader in standards development for the microelectronics industry, today announced the publication of two new standards supporting Compute Express Link (CXL) technology. These additions complete a comprehensive family of four standards that provide the industry with unparalleled flexibility to develop a wide range of CXL memory products. All four standards are available for free download from the JEDEC website.

JESD319: JEDEC Memory Controller Standard for Compute Express Link (CXL) defines the overall specifications, interface parameters, signaling protocols, and features for a CXL memory controller ASIC. Key aspects include pinout reference information and a functional description covering the CXL interface, memory controller, memory RAS, metadata, clocking, reset, performance, and controller configuration requirements. JESD319 focuses on CXL 3.1-based direct-attached memory expansion, providing a baseline of standardized functionality while allowing for additional innovations and customizations.

SK hynix Applies CXL Optimization Solution to Linux

SK hynix Inc. announced today that the key features of its Heterogeneous Memory Software Development Kit (HMSDK) are now available on Linux, the world's largest open-source operating system. HMSDK is SK hynix's proprietary software for optimizing the operation of Compute Express Link (CXL), which is gaining attention as a next-generation AI memory technology alongside High Bandwidth Memory (HBM). Having received global recognition for HMSDK's performance, SK hynix is now integrating it with Linux. This marks a significant milestone for the company, as it highlights its competitiveness in software, adding to the recognition of its high-performance memory hardware such as HBM.

In the future, developers around the world working on Linux will be able to use SK hynix's technology as the industry standard for CXL memory, putting the company in an advantageous position for global collaboration on next-generation memory. HMSDK boosts memory bandwidth by more than 30% without modifying existing applications. It achieves this by selectively allocating memory according to the relative bandwidth of existing memory and expanded CXL memory. Additionally, the software improves performance by more than 12% over conventional systems through access-frequency-based optimization, which relocates frequently accessed data to faster memory.
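
The two ideas described above, bandwidth-proportional placement and hot-page promotion, can be sketched in a few lines. This is our own illustrative model, not HMSDK code; every name and number here is hypothetical.

```python
# Illustrative model of bandwidth-ratio interleaving and hot-page
# promotion across a fast DRAM tier and a slower CXL tier.
# Function names and bandwidth figures are hypothetical, not HMSDK's API.

def interleave(num_pages, dram_gbps, cxl_gbps):
    """Assign each page to 'dram' or 'cxl' in proportion to tier bandwidth."""
    placement = []
    dram_share = dram_gbps / (dram_gbps + cxl_gbps)
    credit = 0.0
    for _ in range(num_pages):
        credit += dram_share
        if credit >= 1.0:          # enough accumulated share for a DRAM page
            placement.append("dram")
            credit -= 1.0
        else:
            placement.append("cxl")
    return placement

def promote_hot_pages(access_counts, placement, threshold):
    """Relocate pages accessed more often than 'threshold' to DRAM."""
    return ["dram" if count > threshold else tier
            for count, tier in zip(access_counts, placement)]

# A 300 GB/s DRAM tier and a 100 GB/s CXL tier split pages roughly 3:1.
plan = interleave(10, dram_gbps=300, cxl_gbps=100)
```

Spreading allocations this way lets both tiers contribute bandwidth at once, which is how an aggregate uplift over DRAM-only operation becomes possible.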

Innodisk Unveils Advanced CXL Memory Module to Power AI Servers

Innodisk, a leading global AI solution provider, continues to push the boundaries of innovation with the launch of its cutting-edge Compute Express Link (CXL) Memory Module, which is designed to meet the rapid growth demands of AI servers and cloud data centers. As one of the few module manufacturers offering this technology, Innodisk is at the forefront of AI and high-performance computing.

The demand for AI servers is rising quickly, with these systems expected to account for approximately 65% of the server market by 2024, according to Trendforce (2024). This growth has created an urgent need for greater memory bandwidth and capacity, as AI servers now require at least 1.2 TB of memory to operate effectively. Traditional DDR memory solutions are increasingly struggling to meet these demands, especially as the number of CPU cores continues to multiply, leading to challenges such as underutilized CPU resources and increasing latency between different protocols.

Intel Dives Deep into Lunar Lake, Xeon 6, and Gaudi 3 at Hot Chips 2024

Demonstrating the depth and breadth of its technologies at Hot Chips 2024, Intel showcased advancements across AI use cases, from the data center, cloud, and network to the edge and PC, including the industry's first fully integrated optical compute interconnect (OCI) chiplet for high-speed AI data processing. The company also unveiled new details about the Intel Xeon 6 SoC (code-named Granite Rapids-D), scheduled to launch during the first half of 2025.

"Across consumer and enterprise AI usages, Intel continuously delivers the platforms, systems and technologies necessary to redefine what's possible. As AI workloads intensify, Intel's broad industry experience enables us to understand what our customers need to drive innovation, creativity and ideal business outcomes. While more performant silicon and increased platform bandwidth are essential, Intel also knows that every workload has unique challenges: A system designed for the data center can no longer simply be repurposed for the edge. With proven expertise in systems architecture across the compute continuum, Intel is well-positioned to power the next generation of AI innovation." -Pere Monclus, chief technology officer, Network and Edge Group at Intel.

SiFive Announces Performance P870-D RISC-V Datacenter Processor

Today SiFive, Inc., the gold standard for RISC-V computing, announced its new SiFive Performance P870-D datacenter processor to meet customer requirements for highly parallelizable infrastructure workloads including video streaming, storage, and web appliances. When used in combination with products from the SiFive Intelligence product family, datacenter architects can also build an extremely high-performance, energy efficient compute subsystem for AI-powered applications.

Building on the success of the P870, the P870-D supports the open AMBA CHI protocol so customers have more flexibility to scale the number of clusters. This scalability allows customers to boost performance while minimizing power consumption. By harnessing a standard CHI bus, the P870-D enables SiFive's customers to scale up to 256 cores while harnessing industry-standard protocols, including Compute Express Link (CXL) and CHI chip to chip (C2C), to enable coherent high core count heterogeneous SoCs and chiplet configurations.

SK hynix Presents Extensive AI Memory Lineup at Expanded FMS 2024

SK hynix has returned to Santa Clara, California to present its full array of groundbreaking AI memory technologies at FMS: the Future of Memory and Storage 2024, held August 6-8. Previously known as Flash Memory Summit, the conference changed its name to reflect its broader focus on all types of memory and storage products amid growing interest in AI. Bringing together industry leaders, customers, and IT professionals, FMS 2024 covers the latest trends and innovations shaping the memory industry.

Participating in the event under the slogan "Memory, The Power of AI," SK hynix is showcasing its outstanding memory capabilities through a keynote presentation, multiple technology sessions, and product exhibits.

MSI Showcases CXL Memory Expansion Server at FMS 2024 Event

MSI, a leading global server provider, is showcasing its new CXL (Compute Express Link)-based server platform powered by 4th Gen AMD EPYC processors at The Future of Memory and Storage 2024, at the Samsung booth (#407) and MemVerge booth (#1251) in the Santa Clara Convention Center from August 6-8. The CXL memory expansion server is designed to enhance In-Memory Database, Electronic Design Automation (EDA), and High Performance Computing (HPC) application performance.

"By adopting innovative CXL technology to expand memory capacity and bandwidth, MSI's CXL memory expansion server integrates cutting-edge technology from AMD EPYC processors, CXL memory devices, and advanced management software," said Danny Hsu, General Manager of Enterprise Platform Solutions. "In collaboration with key players in the CXL ecosystem, including AMD, Samsung, and MemVerge, MSI and its partners are driving CXL technology to meet the demands of high-performance data center computing."

Marvell Introduces Breakthrough Structera CXL Product Line to Address Server Memory Bandwidth and Capacity Challenges in Cloud Data Centers

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today launched the Marvell Structera product line of Compute Express Link (CXL) devices that enable cloud data center operators to overcome memory performance and scaling challenges in general-purpose servers.

To address memory-intensive applications, data center operators add extra servers to get higher memory bandwidth and higher memory capacity. The compute capabilities from the added processors are typically not utilized for these applications, making the servers inefficient from cost and power perspectives. The CXL industry standard addresses this challenge by enabling new architectures that can efficiently add memory to general-purpose servers.
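
The inefficiency described above can be made concrete with a small model: scaling memory by adding whole servers drags idle CPUs along, while CXL attaches memory directly. All numbers below are hypothetical and purely illustrative.

```python
# Illustrative cost of scaling memory by adding whole servers,
# versus attaching CXL memory to the server you already have.
# All capacities and socket counts are hypothetical.

def servers_needed(required_tb, tb_per_server):
    """Servers required to reach a memory target, plus the CPUs stranded
    along the way (assuming one server's worth of compute was enough)."""
    servers = -(-required_tb // tb_per_server)   # ceiling division
    stranded_cpus = (servers - 1) * 2            # extra dual-socket CPUs idle
    return servers, stranded_cpus

# Reaching 12 TB with 2 TB-per-server nodes:
servers, stranded = servers_needed(required_tb=12, tb_per_server=2)
```

In this toy scenario, five of the six servers exist only to hold DRAM, stranding ten CPUs; a CXL memory expander would supply the same capacity without them.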

Samsung Planning for CXL 2.0 DRAM Mass Production Later This Year

Samsung Electronics Co. is putting a lot of effort into securing its involvement in next-generation memory technology, CXL (Compute Express Link). In a media briefing on Thursday, Jangseok Choi, vice president of Samsung's new business planning team, announced plans to mass-produce 256 GB DRAM supporting CXL 2.0 by the end of this year. CXL technology promises to significantly enhance the efficiency of high-performance server systems by providing a unified interface for accelerators, DRAM, and storage devices used with CPUs and GPUs.

The company projects that CXL technology will increase memory capacity per server by eight to ten times, marking a significant leap in computing power. Samsung's long investment in CXL development is now in its final stages, with the company currently testing products with partners for performance verification. Samsung also recently established the industry's first CXL infrastructure certified by Red Hat. "We expect the CXL market to start blooming in the second half and explosively grow from 2028," Choi stated, highlighting the technology's potential to expand memory capacity and bandwidth far beyond current limitations.

Panmnesia Uses CXL Protocol to Expand GPU Memory with Add-in DRAM Card or Even SSD

South Korean startup Panmnesia has unveiled an interesting solution to the memory limitations of modern GPUs. The company has developed a low-latency Compute Express Link (CXL) IP that can expand GPU memory with an external add-in card. Current GPU-accelerated applications in AI and HPC are constrained by the fixed amount of memory built into GPUs. With data sizes growing threefold yearly, GPU networks must keep getting larger just to fit applications in local memory, since keeping data local benefits latency and token generation. Panmnesia's proposed fix leverages the CXL protocol to expand GPU memory capacity using PCIe-connected DRAM or even SSDs. The company has overcome significant technical hurdles, including the absence of CXL logic fabric in GPUs and the limitations of existing unified virtual memory (UVM) systems.

At the heart of Panmnesia's solution is a CXL 3.1-compliant root complex with multiple root ports and a host bridge featuring a host-managed device memory (HDM) decoder. This system effectively tricks the GPU's memory subsystem into treating PCIe-connected memory as native system memory. Extensive testing has demonstrated impressive results. Panmnesia's CXL solution, CXL-Opt, achieved two-digit-nanosecond round-trip latency, significantly outperforming both UVM and earlier CXL prototypes. In GPU kernel execution tests, CXL-Opt showed execution times up to 3.22 times faster than UVM. Older CXL memory extenders recorded around 250 nanoseconds of round-trip latency, while CXL-Opt can potentially achieve less than 80 nanoseconds. As is typical with CXL, pooled memory adds latency and degrades performance, and these extenders add cost as well. Still, CXL-Opt could find a use case, and we are waiting to see whether anyone adopts it in their infrastructure.
Below are some benchmarks by Panmnesia, as well as the architecture of the CXL-Opt.
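
Conceptually, an HDM decoder works as an address-range lookup: host physical addresses that fall inside a device's programmed window are routed to that CXL device instead of local DRAM. The sketch below is our own simplified illustration of that routing idea, not Panmnesia's implementation; all addresses, window sizes, and names are invented.

```python
# Simplified HDM-decoder model: route a host physical address either to
# local DRAM or to a CXL-attached device window. All bases, sizes, and
# target names are illustrative assumptions.

class HDMDecoder:
    def __init__(self):
        self.windows = []  # (base, size, target) tuples

    def add_window(self, base, size, target):
        self.windows.append((base, size, target))

    def route(self, addr):
        for base, size, target in self.windows:
            if base <= addr < base + size:
                return target, addr - base   # device-relative offset
        return "dram", addr                  # falls through to local memory

dec = HDMDecoder()
# A 1 GB window backed by a DRAM add-in card, then a 2 GB window backed
# by an SSD, both mapped above the 4 GB boundary.
dec.add_window(base=0x1_0000_0000, size=0x4000_0000, target="cxl_dram_card")
dec.add_window(base=0x1_4000_0000, size=0x8000_0000, target="cxl_ssd")
```

Because the lookup happens in the memory subsystem itself, loads and stores to the mapped range need no page-fault round trips, which is where the latency advantage over UVM-style migration comes from.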

JEDEC Publishes Compute Express Link (CXL) Support Standards

JEDEC Solid State Technology Association, the global leader in standards development for the microelectronics industry, today announced the publication of JESD405-1B JEDEC Memory Module Label for Compute Express Link (CXL) V1.1. JESD405-1B joins JESD317A JEDEC Memory Module Reference Base Standard for Compute Express Link (CXL) V1.0, first introduced in March 2023, in defining the function and configuration of memory modules that support CXL specifications, as well as the standardized content of labels for these modules. JESD405-1B and JESD317A were developed in coordination with the Compute Express Link standards organization. Both standards are available for free download from the JEDEC website.

JESD317A provides detailed guidelines for CXL memory modules including mechanical, electrical, pinout, power and thermal, and environmental guidelines for emerging CXL Memory Modules (CMMs). These modules conform to SNIA (Storage Networking Industry Association) EDSFF form factors E1.S and E3.S to provide end-user friendly hot pluggable assemblies for data centers and similar server applications.

SK hynix Showcases Its New AI Memory Solutions at HPE Discover 2024

SK hynix has returned to Las Vegas to showcase its leading AI memory solutions at HPE Discover 2024, Hewlett Packard Enterprise's (HPE) annual technology conference. Held from June 17-20, HPE Discover 2024 features a packed schedule with more than 150 live demonstrations, as well as technical sessions, exhibitions, and more. This year, attendees can also benefit from three new curated programs on edge computing and networking, hybrid cloud technology, and AI. Under the slogan "Memory, The Power of AI," SK hynix is displaying its latest memory solutions at the event including those supplied to HPE. The company is also taking advantage of the numerous networking opportunities to strengthen its relationship with the host company and its other partners.

The World's Leading Memory Solutions Driving AI
SK hynix's booth at HPE Discover 2024 consists of three product sections and a demonstration zone which showcase the unprecedented capabilities of its AI memory solutions. The first section features the company's groundbreaking memory solutions for AI, including HBM solutions. In particular, the industry-leading HBM3E has emerged as a core product to meet the growing demands of AI systems due to its exceptional processing speed, capacity, and heat dissipation. A key solution from the company's CXL lineup, CXL Memory Module-DDR5 (CMM-DDR5), is also on display in this section. In the AI era where high performance and capacity are vital, CMM-DDR5 has gained attention for its ability to expand system bandwidth by up to 50% and capacity by up to 100% compared to systems only equipped with DDR5 DRAM.
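
The "up to 50% more bandwidth, up to 100% more capacity" claim is easy to put in concrete terms. The baseline figures below are hypothetical, chosen only to show what the quoted percentages mean for a DDR5-only server.

```python
# What CMM-DDR5 expansion means for a hypothetical DDR5-only baseline.
# Baseline capacity and bandwidth are assumptions, not SK hynix data.

base_capacity_gb = 512        # assumed DDR5-only server capacity
base_bw_gbps = 300            # assumed aggregate DDR5 bandwidth

with_cmm_capacity = base_capacity_gb * 2.0   # up to 100% more capacity
with_cmm_bw = base_bw_gbps * 1.5             # up to 50% more bandwidth
```
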

SK hynix Showcases Its Next-Gen Solutions at Computex 2024

SK hynix presented its leading AI memory solutions at COMPUTEX Taipei 2024 from June 4-7. As one of Asia's premier IT shows, COMPUTEX Taipei 2024 welcomed around 1,500 global participants including tech companies, venture capitalists, and accelerators under the theme "Connecting AI". Making its debut at the event, SK hynix underlined its position as a first mover and leading AI memory provider through its lineup of next-generation products.

"Connecting AI" With the Industry's Finest AI Memory Solutions
Themed "Memory, The Power of AI," SK hynix's booth featured its advanced AI server solutions, groundbreaking technologies for on-device AI PCs, and outstanding consumer SSD products. HBM3E, the fifth generation of HBM, was among the AI server solutions on display. Offering industry-leading data processing speeds of 1.18 terabytes (TB) per second, vast capacity, and advanced heat dissipation capability, HBM3E is optimized to meet the requirements of AI servers and other applications. Another technology that has become crucial for AI servers is CXL, as it can increase system bandwidth and processing capacity. SK hynix highlighted the strength of its CXL portfolio by presenting its CXL Memory Module-DDR5 (CMM-DDR5), which significantly expands system bandwidth and capacity compared to systems equipped only with DDR5. Other AI server solutions on display included the server DRAM products DDR5 RDIMM and MCR DIMM. In particular, SK hynix showcased its tall 128-gigabyte (GB) MCR DIMM for the first time at an exhibition.

Next-Gen Computing: MiTAC and TYAN Launch Intel Xeon 6 Processor-Based Servers for AI, HPC, Cloud, and Enterprise Workloads at COMPUTEX 2024

The subsidiary of MiTAC Holdings Corp, MiTAC Computing Technology and its server brand TYAN, the leading manufacturer in server platform design worldwide, unveil their new server systems and motherboards optimized for today's AI, HPC, cloud, and enterprise workloads at COMPUTEX 2024, Booth # M1120 in Taipei, Taiwan from June 4 to June 7. Harnessing the power of the latest Intel Xeon 6 processor and 4th and 5th Gen Intel Xeon Scalable processors, these solutions deliver cutting-edge performance.

"For over a decade, MiTAC has worked with Intel at the forefront of server technology innovation, consistently delivering cutting-edge solutions tailored for AI and high-performance computing (HPC). The integration of Intel's latest Xeon 6 processors into our MiTAC and TYAN server platforms transforms computational capabilities, significantly enhancing AI performance, boosting efficiency, and scaling cloud operations. These advancements empower our customers with a competitive edge through superior performance and optimized total cost of ownership," said Rick Hwang, President of MiTAC Computing Technology Corporation.
Dec 21st, 2024 21:52 EST