News Posts matching #Xeon 6


Supermicro Delivers Direct-Liquid-Optimized NVIDIA Blackwell Solutions

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing the highest-performing SuperCluster, an end-to-end AI data center solution featuring the NVIDIA Blackwell platform for the era of trillion-parameter-scale generative AI. The new SuperCluster will significantly increase the number of NVIDIA HGX B200 8-GPU systems in a liquid-cooled rack, resulting in a large increase in GPU compute density compared to Supermicro's current industry-leading liquid-cooled NVIDIA HGX H100 and H200-based SuperClusters. In addition, Supermicro is enhancing the portfolio of its NVIDIA Hopper systems to address the rapid adoption of accelerated computing for HPC applications and mainstream enterprise AI.

"Supermicro has the expertise, delivery speed, and capacity to deploy the largest liquid-cooled AI data center projects in the world, containing 100,000 GPUs, which Supermicro and NVIDIA contributed to and recently deployed," said Charles Liang, president and CEO of Supermicro. "These Supermicro SuperClusters reduce power needs due to DLC efficiencies. We now have solutions that use the NVIDIA Blackwell platform. Using our Building Block approach allows us to quickly design servers with NVIDIA HGX B200 8-GPU, which can be either liquid-cooled or air-cooled. Our SuperClusters provide unprecedented density, performance, and efficiency, and pave the way toward even more dense AI computing solutions in the future. The Supermicro clusters use direct liquid cooling, resulting in higher performance, lower power consumption for the entire data center, and reduced operational expenses."

ASRock Rack Brings End-to-End AI and HPC Server Portfolio to SC24

ASRock Rack Inc., a leading innovative server company, today announces its presence at SC24, held at the Georgia World Congress Center in Atlanta from November 18-21. At booth #3609, ASRock Rack will showcase a comprehensive high-performance portfolio of server boards, systems, and rack solutions with NVIDIA accelerated computing platforms, helping address the needs of enterprises, organizations, and data centers.

Artificial intelligence (AI) and high-performance computing (HPC) continue to reshape technology. ASRock Rack is presenting a complete suite of solutions spanning edge, on-premise, and cloud environments, engineered to meet the demands of AI and HPC. The 2U short-depth MECAI, incorporating the NVIDIA GH200 Grace Hopper Superchip, is developed to supercharge accelerated computing and generative AI in space-constrained environments. The 4U10G-TURIN2 and 4UXGM-GNR2, supporting ten and eight NVIDIA H200 NVL PCIe GPUs respectively, aim to help enterprises and researchers tackle every AI and HPC challenge with enhanced performance and greater energy efficiency. NVIDIA H200 NVL is ideal for lower-power, air-cooled enterprise rack designs that require flexible configurations, delivering acceleration for AI and HPC workloads regardless of size.

GIGABYTE Showcases a Leading AI and Enterprise Portfolio at Supercomputing 2024

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, shows off at SC24 how the GIGABYTE enterprise portfolio provides solutions for all applications, from cloud computing to AI to enterprise IT, including energy-efficient liquid-cooling technologies. This portfolio is made more complete by long-term collaborations with leading technology companies and emerging industry leaders, which will be showcased at GIGABYTE booth #3123 at SC24 (Nov. 19-21) in Atlanta. The booth is sectioned to put the spotlight on strategic technology collaborations, as well as direct liquid cooling partners.

The GIGABYTE booth will showcase an array of NVIDIA platforms built to keep up with the diversity of workloads and degrees of demand in AI and HPC applications. For a rack-scale AI solution using the NVIDIA GB200 NVL72 design, GIGABYTE displays how seventy-two GPUs can fit in one rack with eighteen GIGABYTE servers, each housing two NVIDIA Grace CPUs and four NVIDIA Blackwell GPUs. Another platform at the GIGABYTE booth is the NVIDIA HGX H200 platform. GIGABYTE exhibits both its liquid-cooled G4L3-SD1 server and an air-cooled version, the G593-SD1.

MSI Powers AI & HPC at SC24 with NVIDIA MGX AI Servers and Intel Xeon 6 DC-MHS Server Solutions

MSI, a leading global provider of high-performance server solutions, unveiled its AI server based on the NVIDIA MGX architecture and DC-MHS server lineup powered by Intel Xeon 6 processors at Supercomputing 2024 (SC24) from November 19-21 at booth 3655. Purpose-built to maximize compute density, energy efficiency, and modular flexibility, MSI's latest offerings are designed to handle the intensive demands of AI, HPC, and data-heavy applications. With Intel Xeon 6 DC-MHS servers built on flexible DC-MHS architecture, MSI delivers scalable performance and resilience needed for data centers to stay ahead of evolving HPC demands.

According to Danny Hsu, General Manager of MSI's Enterprise Platform Solutions, "MSI's latest innovations mark a significant leap in computational power and efficiency, enabling organizations to maximize performance, adapt seamlessly to evolving needs, and drive efficiency, building a robust foundation for future growth in high-performance computing."

New Ultrafast Memory Boosts Intel Data Center Chips

While Intel's primary product focus is on the processors, or brains, that make computers work, system memory (that's DRAM) is a critical component for performance. This is especially true in servers, where the multiplication of processing cores has outpaced the rise in memory bandwidth (in other words, the memory bandwidth available per core has fallen). In heavy-duty computing jobs like weather modeling, computational fluid dynamics and certain types of AI, this mismatch could create a bottleneck—until now.

After several years of development with industry partners, Intel engineers have found a path to open that bottleneck, crafting a novel solution that has created the fastest system memory ever and is set to become a new open industry standard. The recently introduced Intel Xeon 6 data center processors are the first to benefit from this new memory, called MRDIMMs, for higher performance—in the most plug-and-play manner imaginable.

ASUS Presents All-New Storage-Server Solutions to Unleash AI Potential at SC24

ASUS today announced its groundbreaking next-generation infrastructure solutions at SC24, featuring a comprehensive lineup powered by AMD and Intel, as well as liquid-cooling solutions designed to accelerate the future of AI. By continuously pushing the limits of innovation, ASUS simplifies the complexities of AI and high-performance computing (HPC) through adaptive server solutions paired with expert cooling and software-development services, tailored for the exascale era and beyond. As a total-solution provider with a distinguished history in pioneering AI supercomputing, ASUS is committed to delivering exceptional value to its customers.

Comprehensive line-up for AI and HPC success
To fuel enterprise digital transformation through HPC and AI-driven architecture, ASUS provides a full lineup of server systems powered by AMD and Intel. Startups, research institutions, large enterprises, and government organizations can all find adaptive solutions to unlock value from big data and accelerate business agility.

NEC to Build Japan's Newest Supercomputer Based on Intel Xeon 6900P and AMD Instinct MI300A

NEC Corporation (NEC; TSE: 6701) has received an order for a next-generation supercomputer system from Japan's National Institutes for Quantum Science and Technology (QST), under the National Research and Development Agency, and the National Institute for Fusion Science (NIFS), part of the National Institutes of Natural Sciences under the Inter-University Research Institute Corporation. The new supercomputer system is scheduled to be operational from July 2025. It will feature a multi-architecture design with the latest CPUs and GPUs, along with large storage capacity and a high-speed network. The system is expected to be used for various research and development in the field of fusion science.

Specifically, the system will be used for precise prediction of experiments and creation of operation scenarios in the ITER project, which is being promoted as an international project, and the Satellite Tokamak (JT-60SA) project, which is being promoted as a Broader Approach activity, and for design of DEMO reactors. The DEMO project promotes large-scale numerical calculations for DEMO design and R&D to accelerate the realization of a DEMO reactor that contributes to carbon neutrality. In addition, NIFS will conduct numerical simulation research using the supercomputer for multi-scale and multi-physics systems, including fusion plasmas, to broadly accelerate research on the science and applications of fusion plasmas, and as an Inter-University Research Institute, will provide universities and research institutes nationwide with opportunities for collaborative research using the state-of-the-art supercomputer.

MSI Unveils AI Servers Powered by NVIDIA MGX at OCP 2024

MSI, a leading global provider of high-performance server solutions, proudly announced it is showcasing new AI servers powered by the NVIDIA MGX platform—designed to address the increasing demand for scalable, energy-efficient AI workloads in modern data centers—at the OCP Global Summit 2024, booth A6. This collaboration highlights MSI's continued commitment to advancing server solutions, focusing on cutting-edge AI acceleration and high-performance computing (HPC).

The NVIDIA MGX platform offers a flexible architecture that enables MSI to deliver purpose-built solutions optimized for AI, HPC, and LLMs. By leveraging this platform, MSI's AI server solutions provide exceptional scalability, efficiency, and enhanced GPU density—key factors in meeting the growing computational demands of AI workloads. Tapping into MSI's engineering expertise and NVIDIA's advanced AI technologies, these AI servers based on the MGX architecture deliver unparalleled compute power, positioning data centers to maximize performance and power efficiency while paving the way for the future of AI-driven infrastructure.

Lenovo Announces New Liquid Cooled Servers for Intel Xeon and NVIDIA Blackwell Platforms

At Lenovo Tech World 2024, we announced new Supercomputing servers for HPC and AI workloads. These new water-cooled servers use the latest processor and accelerator technology from Intel and NVIDIA.

ThinkSystem SC750 V4
Engineered for large-scale cloud infrastructures and High Performance Computing (HPC), the Lenovo ThinkSystem SC750 V4 Neptune excels in intensive simulations and complex modeling. It's designed to handle technical computing, grid deployments, and analytics workloads in various fields such as research, life sciences, energy, engineering, and financial simulation.

MSI Showcases Innovation at 2024 OCP Global Summit, Highlighting DC-MHS, CXL Memory Expansion, and MGX-enabled AI Servers

MSI, a leading global provider of high-performance server solutions, is excited to showcase its comprehensive lineup of motherboards and servers based on the OCP Modular Hardware System (DC-MHS) architecture at the OCP Global Summit from October 15-17 at booth A6. These cutting-edge solutions represent a breakthrough in server designs, enabling flexible deployments for cloud and high-density data centers. Featured innovations include CXL memory expansion servers and AI-optimized servers, demonstrating MSI's leadership in pushing the boundaries of AI performance and computing power.

DC-MHS Series Motherboards and Servers: Enabling Flexible Deployment in Data Centers
"The rapidly evolving IT landscape requires cloud service providers, large-scale data center operators, and enterprises to handle expanding workloads and future growth with more flexible and powerful infrastructure. MSI's new rage of DC-MHS-based solutions provides the needed flexibility and efficiency for modern data center environments," said Danny Hsu, General Manager of Enterprise Platform Solutions.

Flex Announces Liquid-Cooled Rack and Power Solutions for AI Data Centers at 2024 OCP Global Summit

Flex today announced new reference platforms for liquid-cooled servers, rack, and power products that will enable customers to sustainably accelerate data center growth. These innovations build on Flex's ability to address technical challenges associated with power, heat generation, and scale to support artificial intelligence (AI) and high-performance computing (HPC) workloads.

"Flex delivers integrated data center IT and power infrastructure solutions that address the growing power and compute demands in the AI era," said Michael Hartung, president and chief commercial officer, Flex. "We are expanding our unique portfolio of advanced manufacturing capabilities, innovative products, and lifecycle services, enabling customers to deploy IT and power infrastructure at scale and drive AI data center expansion."

Jabil Intros New Servers Powered by AMD 5th Gen EPYC and Intel Xeon 6 Processors

Jabil Inc. announced today that it is expanding its server portfolio with the J421E-S and J422-S servers, powered by AMD 5th Generation EPYC and Intel Xeon 6 processors. These servers are purpose-built for scalability in a variety of cloud data center applications, including AI, high-performance computing (HPC), fintech, networking, storage, databases, and security — representing the latest generation of server innovation from Jabil.

Built with customization and innovation in mind, the design-ready J422-S and J421E-S servers will allow engineering teams to meet customers' specific requirements. By fine-tuning Jabil's custom BIOS and BMC firmware, Jabil can create a competitive advantage for customers by developing the server configuration needed for higher performance, data management, and security. The server platforms are now available for sampling and will be in production by the first half of 2025.

AMD EPYC "Turin" with 192 Cores and 384 Threads Delivers Almost 40% Higher Performance Than Intel Xeon 6

AMD has unveiled its latest EPYC processors, codenamed "Turin," featuring Zen 5 and Zen 5C dense cores. Phoronix's thorough testing reveals remarkable advancements in performance, efficiency, and value. The new lineup includes the EPYC 9575F (64-core), EPYC 9755 (128-core), and EPYC 9965 (192-core) models, all showing impressive capabilities across various server and HPC workloads. In benchmarks, a dual-socket configuration of the 128-core EPYC 9755 Turin outperformed Intel's dual Xeon "Granite Rapids" 6980P setup with MRDIMM-8800 by 40% in the geometric mean of all tests. Surprisingly, even a single EPYC 9755 or EPYC 9965 matched the dual Xeon 6980P in expanded tests with regular DDR5-6400. Within AMD's lineup, the EPYC 9755 showed a 1.55x performance increase over its predecessor, the 96-core EPYC 9654 "Genoa". The EPYC 9965 surpassed the dual EPYC 9754 "Bergamo" by 45%.

These gains come with improved efficiency. While power consumption increased moderately, performance improvements resulted in better overall efficiency. For example, the EPYC 9965 used 32% more power than the EPYC 9654 but delivered 1.55x the performance. Power consumption remains competitive: the EPYC 9965 averaged 275 Watts (peak 461 Watts), the EPYC 9755 averaged 324 Watts (peak 500 Watts), while Intel's Xeon 6980P averaged 322 Watts (peak 547 Watts). AMD's pricing strategy adds to the appeal. The 192-core model is priced at $14,813, compared to Intel's 128-core CPU at $17,800. This competitive pricing, combined with superior performance per dollar and watt, has resonated with hyperscalers. Estimates suggest 50-60% of hyperscale deployments now use AMD processors.
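The article's efficiency claim can be sanity-checked with a back-of-the-envelope calculation from the figures quoted above. This is a rough sketch, not a benchmark: "perf" is the 40% geomean advantage of the dual EPYC 9755 normalized to the dual Xeon 6980P, paired with the average power draws reported by Phoronix.

```python
# Relative performance-per-watt from the article's numbers (illustrative
# arithmetic only; real efficiency varies per workload).
epyc_9755  = {"perf": 1.40, "avg_watts": 324}  # normalized vs. 6980P
xeon_6980p = {"perf": 1.00, "avg_watts": 322}

ratio = (epyc_9755["perf"] / epyc_9755["avg_watts"]) / \
        (xeon_6980p["perf"] / xeon_6980p["avg_watts"])
print(f"EPYC 9755 perf-per-watt advantage: {ratio:.2f}x")  # ~1.39x
```

Because the average power draws are nearly identical, almost the entire 40% performance lead carries through to performance per watt.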

Intel's Flagship 128-Core Xeon 6980P Processor Sets Record $17,800 Price

The title has no typo: what you are reading is correct. Intel's flagship 128-core, 256-thread Xeon 6980P carries a substantial $17,800 price tag. Intel's Xeon 6 "Granite Rapids" family appears to be its most expensive yet, with the flagship SKU carrying more than a 50% price increase over the previous "Emerald Rapids" generation. However, the economics of computing are more nuanced than simple comparisons. While the last-generation Emerald Rapids Xeon 8592+ (64 cores, 128 threads) cost about $181 per core, the new Granite Rapids Xeon 6980P comes in at approximately $139 per core, offering faster cores at a lower per-core cost.
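The per-core arithmetic checks out directly. The sketch below uses the list prices above; the ~$11,600 figure for the Xeon 8592+ is inferred from the stated $181-per-core number, not quoted in the article.

```python
# Per-core cost comparison (6980P price from the article; the 8592+
# price of ~$11,600 is inferred from the stated $181-per-core figure).
xeon_8592p = {"cores": 64,  "price": 11600}  # Emerald Rapids flagship
xeon_6980p = {"cores": 128, "price": 17800}  # Granite Rapids flagship

for name, cpu in [("Xeon 8592+", xeon_8592p), ("Xeon 6980P", xeon_6980p)]:
    print(f"{name}: ${cpu['price'] / cpu['cores']:.0f} per core")
# Xeon 8592+: $181 per core
# Xeon 6980P: $139 per core
```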

The economics of data centers aren't always tied to the cost of a single product. When building total cost of ownership models, factors such as power consumption, compute density, and performance impact the final assessment. Even with the higher price of this flagship Granite Rapids Xeon processor, the economics of data center deployment may work in its favor. Customers get more cores in a single package, increasing density and driving down cost-per-core per system. This also improves operational efficiency, which is crucial considering that operating expenses account for about 10% of data center costs.

Intel Clearwater Forest Pictured, First 18A Node High Volume Product

Yesterday, Intel launched its Xeon 6 family of server processors based on P-cores manufactured on the Intel 3 node. While the early reviews seem promising, Intel is preparing a more advanced generation of processors that will make or break its product and foundry leadership. Codenamed "Clearwater Forest," these CPUs are expected to be the first high-volume production chips based on the Intel 18A node. We have pictures of the five-tile Clearwater Forest processor thanks to Tom's Hardware, which managed to photograph the complex design during the Enterprise Tech Tour event in Portland, Oregon. The compute dies are built on 18A, while the base die uses Intel's 3-T process technology, marking that node's debut in this role. The compute dies are stacked on the base die, making the CPU's construction more complex but more flexible.

The Foveros Direct 3D and EMIB technologies enable large-scale integration on a package, achieving capabilities that previous monolithic single-chip designs could not deliver. Other technologies like RibbonFET and PowerVia will also be present in Clearwater Forest. If everything continues to advance according to plan, we expect to see this next-generation CPU sometime next year. If this CPU shows that high-volume production on Intel 18A is viable, many Intel Foundry customers would be reassured that Intel can compete with TSMC and Samsung in producing high-performance silicon on advanced nodes at scale.

ASRock Rack Expands Server Portfolio Powered by Intel Xeon 6900 Series Processors

ASRock Rack Inc., a leading innovative server company, today announced the launch of its new server platforms, powered by Intel Xeon 6900 series processors with Performance-Cores (P-Cores). These advanced platforms are designed to deliver exceptional performance across a wide range of demanding workloads, including High-Performance Computing (HPC), Artificial Intelligence (AI), storage, and networking.

The Intel Xeon 6900 Series Processors are optimized for high performance per core and are delivered in a new class of Intel server platform design. They offer up to 12 memory channels, providing greater memory bandwidth to support demanding environments such as cloud, AI, and HPC. Leveraging these processors, ASRock Rack's newly released platforms—the 1U all-flash storage server 1U8E1S-GNRAPDNO and the GNRAPD12DNO server motherboard—fully maximize throughput with unprecedented compute capability.

ASUS Introduces All-New Intel Xeon 6 Processor Servers

ASUS today announced its all-new line-up of Intel Xeon 6 processor-powered servers, ready to satisfy the escalating demand for high-performance computing (HPC) solutions. The new servers include the multi-node ASUS RS920Q-E12, which supports Intel Xeon 6900 series processors for HPC applications, and the ASUS RS720Q-E12, RS720-E12, and RS700-E12 server models, equipped with Intel Xeon 6700 series processors with E-cores, which will also support Intel Xeon 6700/6500 series processors with P-cores in Q1 2025, providing seamless integration and optimization for modern data centers and diverse IT environments.

These powerful new servers, built on the solid foundation of trusted and resilient ASUS server design, offer improved scalability, enabling clients to build customized data centers and scale up their infrastructure to achieve their highest computing potential - ready to deliver HPC success across diverse industries and use cases.

GIGABYTE Intros Performance Optimized Servers Using Intel Xeon 6900-series with P-core

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced its first wave of GIGABYTE servers for Intel Xeon 6 processors with P-cores. This new Intel Xeon platform is engineered to optimize per-core performance for compute-intensive and AI-intensive workloads, as well as general-purpose applications. GIGABYTE servers for these workloads are built to achieve the best possible performance by fine-tuning the server design to the chip design and to specific workloads.

All new GIGABYTE servers support Intel Xeon 6900-series processors with P-cores, which offer up to 128 cores and up to 96 PCIe Gen 5 lanes. Additionally, for greater performance in memory-intensive workloads, the 6900-series expands to 12-channel memory and makes available up to 64 CXL 2.0 lanes. Overall, this modular SoC architecture has great potential, with the ability to leverage a shared platform for running both performance- and efficiency-optimized architectures.

Intel Launches Gaudi 3 AI Accelerator and P-Core Xeon 6 CPU

As AI continues to revolutionize industries, enterprises are increasingly in need of infrastructure that is both cost-effective and available for rapid development and deployment. To meet this demand head-on, Intel today launched Xeon 6 with Performance-cores (P-cores) and Gaudi 3 AI accelerators, bolstering the company's commitment to deliver powerful AI systems with optimal performance per watt and lower total cost of ownership (TCO).

"Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software and developer tools," said Justin Hotard, Intel executive vice president and general manager of the Data Center and Artificial Intelligence Group. "With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency and security."

MSI Introduces New Server Platforms with Intel Xeon 6 Processors Featuring P-Cores

MSI, a leading global server provider, today introduced its latest server platforms, powered by Intel Xeon 6 processors with Performance-cores (P-cores). These new products deliver unprecedented performance for compute-intensive tasks, tailored to meet the diverse demands of data center workloads.

"The demand for data center performance has never been greater, driven by compute-intensive AI, HPC applications, and mission-critical database and analytics workloads," said Danny Hsu, General Manager of Enterprise Platform Solutions. "To meet these demands, IT teams need reliable performance across an increasingly diverse array of workloads." MSI's new server platforms, powered by Intel Xeon 6 processors, deliver high performance across a broad range of tasks, meeting diverse requirements for both performance and efficiency.

Pat Gelsinger Writes to Employees on Foundry Momentum, Progress on Plan

All eyes have been on Intel since we announced Q2 earnings. There has been no shortage of rumors and speculation about the company, including last week's Board of Directors meeting, so I'm writing today to provide some updates and outline what comes next. Let me start by saying we had a highly productive and supportive Board meeting. We have a strong Board comprised of independent directors whose job it is to challenge and push us to perform at our best. And we had deep discussions about our strategy, our portfolio and the immediate progress we are making against the plan we announced on August 1.

The Board and I agreed that we have a lot of work ahead to drive greater efficiency, improve our profitability and enhance our market competitiveness—and there are three key takeaways from last week's meeting that I want to focus on:
  • We must build on our momentum in Foundry as we near the launch of Intel 18A and drive greater capital efficiency across this part of our business.
  • We must continue acting with urgency to create a more competitive cost structure and deliver the $10B in savings target we announced last month.
  • We must refocus on our strong x86 franchise as we drive our AI strategy while streamlining our product portfolio in service to Intel customers and partners.
We have several pieces of news to share that support these priorities.

Intel to Produce Custom AI Chips and Xeon 6 Processors for AWS

Intel Corp. and Amazon Web Services, Inc. (AWS), an Amazon.com company, today announced a co-investment in custom chip designs under a multi-year, multi-billion-dollar framework covering product and wafers from Intel. This is a significant expansion of the two companies' longstanding strategic collaboration to help customers power virtually any workload and accelerate the performance of artificial intelligence (AI) applications.

As part of the expanded collaboration, Intel will produce an AI fabric chip for AWS on Intel 18A, the company's most advanced process node. Intel will also produce a custom Xeon 6 chip on Intel 3, building on the existing partnership under which Intel produces Xeon Scalable processors for AWS.

Intel Xeon 6 Delivers up to 17x AI Performance Gains over 4 Years of MLPerf Results

Today, MLCommons published results of its industry-standard AI performance benchmark suite, MLPerf Inference v4.1. Intel submitted results across six MLPerf benchmarks for 5th Gen Intel Xeon Scalable processors and, for the first time, Intel Xeon 6 processors with Performance-cores (P-cores). Intel Xeon 6 processors with P-cores achieved about 1.9x geomean performance improvement in AI performance compared with 5th Gen Xeon processors.
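MLPerf-style summaries like the "1.9x geomean" figure above aggregate per-benchmark speedups with a geometric mean rather than an arithmetic one, so no single benchmark dominates. A minimal sketch (the speedup values below are illustrative placeholders, not Intel's actual submissions):

```python
import math

# Geometric mean of per-benchmark speedups, as used in MLPerf-style
# summaries. Values are illustrative, not real MLPerf results.
speedups = [1.7, 2.1, 1.8, 2.0, 1.9, 1.9]  # six hypothetical benchmarks
geomean = math.exp(sum(math.log(s) for s in speedups) / len(speedups))
print(f"geomean speedup: {geomean:.2f}x")  # ~1.90x
```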

"The newest MLPerf results show how continued investment and resourcing is critical for improving AI performance. Over the past four years, we have raised the bar for AI performance on Intel Xeon processors by up to 17x based on MLPerf. As we near general availability later this year, we look forward to ramping Xeon 6 with our customers and partners," said Pallavi Mahajan, Intel corporate vice president and general manager of Data Center and AI Software.

Intel Dives Deep into Lunar Lake, Xeon 6, and Gaudi 3 at Hot Chips 2024

Demonstrating the depth and breadth of its technologies at Hot Chips 2024, Intel showcased advancements across AI use cases - from the data center, cloud and network to the edge and PC - including the industry's first fully integrated optical compute interconnect (OCI) chiplet for high-speed AI data processing. The company also unveiled new details about the Intel Xeon 6 SoC (code-named Granite Rapids-D), scheduled to launch during the first half of 2025.

"Across consumer and enterprise AI usages, Intel continuously delivers the platforms, systems and technologies necessary to redefine what's possible. As AI workloads intensify, Intel's broad industry experience enables us to understand what our customers need to drive innovation, creativity and ideal business outcomes. While more performant silicon and increased platform bandwidth are essential, Intel also knows that every workload has unique challenges: A system designed for the data center can no longer simply be repurposed for the edge. With proven expertise in systems architecture across the compute continuum, Intel is well-positioned to power the next generation of AI innovation." -Pere Monclus, chief technology officer, Network and Edge Group at Intel.