News Posts matching #Xeon

$30,000 Music Streaming Server is the Next Audiophile Dream Device

Taiko Audio, a Dutch high-end audio manufacturer, has unveiled what might be the most over-engineered music server ever created—the Extreme Server. With a starting price of €28,000 (US$29,600), this meticulously crafted device embodies either the pinnacle of audio engineering or the epitome of audiophile excess. The Extreme's most distinctive feature is its dual-processor architecture, using two Intel Xeon Scalable 10-core CPUs. This unusual configuration isn't just for show—Taiko claims it solves a specific audiophile dilemma: the impact of Roon's music management interface on sound quality. By dedicating the two processors to Roon and the Windows 10 Enterprise LTSC 2019 interface, Taiko says it has made Roon's processing "virtually inaudible", addressing a concern most music listeners probably never knew existed.

Perhaps the most striking technical achievement is the server's cooling system, or rather, its complete absence of conventional cooling. Taiko designed a custom 240 W passive cooling solution with absolutely no fans or moving parts. The company machined the CPU interface to a mind-boggling precision of 5 microns (0.005 mm) and opted for solid copper heat sinks instead of aluminium, claiming this will extend component life by 4 to 12 years. The attention to detail extends to the memory configuration, where Taiko takes an unconventional approach. The server uses twelve 4 GB custom-made industrial memory modules, each factory pre-selected with components matched to within 1% tolerance. According to Taiko, this reduces the refresh-rate burst current by almost 50% and allows for lower operating temperatures. Power comes from a custom 400 W linear power supply, developed in-house specifically for the Extreme's needs. It combines premium Mundorf and Duelund capacitors for sonic neutrality, Lundahl chokes selected by ear, and extensive vibration damping using Panzerholz (a compressed wood composite) for durability, low-temperature operation, longevity, and exceptional sound quality.

ASUS Presents All-New Storage-Server Solutions to Unleash AI Potential at SC24

ASUS today announced its groundbreaking next-generation infrastructure solutions at SC24, featuring a comprehensive lineup powered by AMD and Intel, as well as liquid-cooling solutions designed to accelerate the future of AI. By continuously pushing the limits of innovation, ASUS simplifies the complexities of AI and high-performance computing (HPC) through adaptive server solutions paired with expert cooling and software-development services, tailored for the exascale era and beyond. As a total-solution provider with a distinguished history in pioneering AI supercomputing, ASUS is committed to delivering exceptional value to its customers.

Comprehensive Lineup for AI and HPC Success
To fuel enterprise digital transformation through HPC and AI-driven architecture, ASUS provides a full lineup of server systems powered by AMD and Intel. Startups, research institutions, large enterprises, and government organizations can all find adaptive solutions to unlock value from big data and accelerate business agility.

TOP500: El Capitan Achieves Top Spot, Frontier and Aurora Follow Behind

The 64th edition of the TOP500 reveals that El Capitan has achieved the top spot and is officially the third system to reach exascale computing after Frontier and Aurora. Both systems have since moved down to No. 2 and No. 3 spots, respectively. Additionally, new systems have found their way onto the Top 10.

The new El Capitan system at the Lawrence Livermore National Laboratory in California, U.S.A., has debuted as the most powerful system on the list with an HPL score of 1.742 EFlop/s. It has 11,039,616 combined CPU and GPU cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. El Capitan relies on a Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 58.89 GigaFLOPS/watt. This power efficiency rating helped El Capitan achieve No. 18 on the GREEN500 list as well.
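
As a sanity check on those numbers, the system's approximate power draw during the HPL run can be recovered from the reported score and efficiency. The short sketch below uses only the values quoted above; the resulting figure of roughly 29.6 MW is a derived estimate, not an officially published power number.

```python
# Back-of-the-envelope check on El Capitan's TOP500 figures.
# Inputs come from the list entry quoted above; the derived
# power draw is an estimate, not an official measurement.

hpl_score_eflops = 1.742             # HPL result in EFlop/s
efficiency_gflops_per_watt = 58.89   # GREEN500 energy efficiency

hpl_score_gflops = hpl_score_eflops * 1e9    # 1 EFlop/s = 1e9 GFlop/s
power_watts = hpl_score_gflops / efficiency_gflops_per_watt

print(f"Estimated HPL power draw: {power_watts / 1e6:.1f} MW")
# -> roughly 29.6 MW for the benchmark run
```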

Lenovo Shows 16 TB Memory Cluster with CXL in 128x 128 GB Configuration

Expanding a system's computing capability with an additional accelerator like a GPU is common. However, expanding a system's memory capacity with room for more DIMMs is something new. Thanks to ServeTheHome, we see that at the OCP Summit 2024, Lenovo showcased its ThinkSystem SR860 V3 server, leveraging CXL technology and Astera Labs Leo memory controllers to accommodate a staggering 16 TB of DDR5 memory across 128 DIMM slots. Traditional four-socket servers face limitations due to the memory channels supported by Intel Xeon processors. With each CPU supporting up to 16 DDR5 DIMMs, a four-socket configuration maxes out at 64 DIMMs, equating to 8 TB when using 128 GB RDIMMs. Lenovo's new approach expands this ceiling significantly by incorporating an additional 64 DIMM slots through CXL memory expansion.

The ThinkSystem SR860 V3 integrates Astera Labs Leo controllers to enable the CXL-connected DIMMs. These controllers manage up to four DDR5 DIMMs each, resulting in a layered memory design. The chassis base houses four Xeon processors, each linked to 16 directly connected DIMMs, while the upper section—called the "memory forest"—houses the additional CXL-enabled DIMMs. Beyond memory capabilities, the server supports up to four double-width GPUs, also making it a solution for high-performance computing and AI workloads. This design caters to scale-up applications requiring vast memory resources, such as large-scale database management, and allows data to stay in memory instead of waiting on storage. CXL-based memory architectures are expected to become more common next year. Future developments may see even larger systems with shared memory pools, enabling dynamic allocation across multiple servers. For more pictures and a video walkthrough, check out ServeTheHome's post.
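
The capacity figures above follow directly from the DIMM counts. Below is a minimal sketch of the arithmetic, using only the numbers given in the article; the Leo controller count is inferred from the four-DIMMs-per-controller figure and has not been confirmed by Lenovo.

```python
# Memory-capacity arithmetic for the ThinkSystem SR860 V3 layout
# described above. Figures are taken from the article; the controller
# count is inferred, not confirmed.

sockets = 4
dimms_per_socket = 16          # direct-attached DDR5 DIMMs per Xeon
dimm_size_gb = 128             # 128 GB RDIMMs

native_dimms = sockets * dimms_per_socket                   # 64
native_capacity_tb = native_dimms * dimm_size_gb / 1024     # 8 TB

cxl_dimms = 64                 # the additional "memory forest" slots
dimms_per_leo = 4              # each Astera Labs Leo handles up to 4 DIMMs
leo_controllers = cxl_dimms // dimms_per_leo                # 16 (inferred)

total_dimms = native_dimms + cxl_dimms                      # 128
total_capacity_tb = total_dimms * dimm_size_gb / 1024       # 16 TB

print(native_capacity_tb, total_capacity_tb, leo_controllers)
```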

ADATA XPG Debuts First AICORE DDR5 Overclocked R-DIMM for High-end Workstations

XPG, the gaming brand of ADATA Technology, the world's leading brand for memory modules and flash memory, and a fast-growing provider of systems, components, and peripherals for gamers, esports pros, and tech enthusiasts, has expanded its product line and officially entered the workstation field. Today XPG announced the launch of the first AICORE Overclocked DDR5 R-DIMM, with a maximum speed of 8,000 MT/s and a capacity of 32 GB, making challenging large-capacity workstation memory expansions a breeze. AICORE is built to improve overall system performance, process complex data, handle multitasking more quickly, and maximize the efficiency of AI computing.

Born for High-Speed Computing and AI Development
R-DIMM memory adopts a Register Clock Driver (RCD) and is characterized by high speed, low latency, and enhanced operational stability. AICORE Overclocked R-DIMM memory is designed for high speed and stability, making it well suited to large-scale data processing, AI generation, 3D rendering and graphics, video post-production editing, and multitasking. AICORE targets securities markets that emphasize real-time data analysis, as well as data science professionals and expert image creators, helping them accelerate work efficiency and quickly complete large-scale projects. AICORE Overclocked DDR5 R-DIMM memory has a maximum speed of 8,000 MT/s, 1.6 times faster than standard R-DIMM memory, delivering more powerful performance to meet the rigorous speed requirements of high-end systems.
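
XPG's "1.6 times faster" claim implies a baseline of roughly 5,000 MT/s for a standard R-DIMM, and peak per-module bandwidth follows from the 64-bit DDR5 data path. The quick sketch below works through that arithmetic; the baseline speed and the bandwidth figure are derived values, not vendor specifications.

```python
# Quick arithmetic behind XPG's AICORE speed claims.
# The implied baseline and bandwidth are derived values, not vendor specs.

aicore_mts = 8000          # AICORE Overclocked R-DIMM, MT/s
speedup_claim = 1.6        # "1.6 times faster than standard R-DIMM"

implied_baseline_mts = aicore_mts / speedup_claim   # ~5000 MT/s
bytes_per_transfer = 8     # 64-bit DDR5 data path per module

peak_bandwidth_gbs = aicore_mts * bytes_per_transfer / 1000   # ~64 GB/s

print(f"Implied standard R-DIMM speed: {implied_baseline_mts:.0f} MT/s")
print(f"Peak bandwidth per AICORE module: {peak_bandwidth_gbs:.1f} GB/s")
```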

MSI Showcases Innovation at 2024 OCP Global Summit, Highlighting DC-MHS, CXL Memory Expansion, and MGX-enabled AI Servers

MSI, a leading global provider of high-performance server solutions, is excited to showcase its comprehensive lineup of motherboards and servers based on the OCP Modular Hardware System (DC-MHS) architecture at the OCP Global Summit from October 15-17 at booth A6. These cutting-edge solutions represent a breakthrough in server designs, enabling flexible deployments for cloud and high-density data centers. Featured innovations include CXL memory expansion servers and AI-optimized servers, demonstrating MSI's leadership in pushing the boundaries of AI performance and computing power.

DC-MHS Series Motherboards and Servers: Enabling Flexible Deployment in Data Centers
"The rapidly evolving IT landscape requires cloud service providers, large-scale data center operators, and enterprises to handle expanding workloads and future growth with more flexible and powerful infrastructure. MSI's new rage of DC-MHS-based solutions provides the needed flexibility and efficiency for modern data center environments," said Danny Hsu, General Manager of Enterprise Platform Solutions.

Flex Announces Liquid-Cooled Rack and Power Solutions for AI Data Centers at 2024 OCP Global Summit

Flex today announced new reference platforms for liquid-cooled servers, rack, and power products that will enable customers to sustainably accelerate data center growth. These innovations build on Flex's ability to address technical challenges associated with power, heat generation, and scale to support artificial intelligence (AI) and high-performance computing (HPC) workloads.

"Flex delivers integrated data center IT and power infrastructure solutions that address the growing power and compute demands in the AI era," said Michael Hartung, president and chief commercial officer, Flex. "We are expanding our unique portfolio of advanced manufacturing capabilities, innovative products, and lifecycle services, enabling customers to deploy IT and power infrastructure at scale and drive AI data center expansion."

Intel's Flagship 128-Core Xeon 6980P Processor Sets Record $17,800 Flagship Price

The title has no typo, and what you are reading is correct. Intel's flagship Xeon 6980P, a 128-core, 256-thread compute monster, carries a substantial $17,800 price tag. Intel's Xeon 6 "Granite Rapids" family of processors appears to be its most expensive yet, with the flagship SKU carrying more than a 50% price increase compared to the previous "Emerald Rapids" generation. However, the economics of computing are more nuanced than simple comparisons. While the last-generation Emerald Rapids Xeon 8592+ (64 cores, 128 threads) cost about $181 per core, the new Granite Rapids Xeon 6980P comes in at approximately $139 per core, offering faster cores at a lower per-core cost.
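
The per-core figures are straightforward to verify from the list prices. In the quick check below, the $11,600 figure for the Xeon 8592+ is an assumption consistent with the article's roughly $181-per-core number; Intel's official ARK pricing may differ.

```python
# Per-core price check for the figures quoted above.
# The 8592+ list price is an assumption consistent with the ~$181/core
# figure in the article; check Intel ARK for the official number.

granite_rapids_price = 17_800   # Xeon 6980P
granite_rapids_cores = 128

emerald_rapids_price = 11_600   # Xeon 8592+ (assumed list price)
emerald_rapids_cores = 64

print(f"Xeon 6980P: ${granite_rapids_price / granite_rapids_cores:.0f} per core")  # ~$139
print(f"Xeon 8592+: ${emerald_rapids_price / emerald_rapids_cores:.0f} per core")  # ~$181
```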

The economics of data centers aren't always tied to the cost of a single product. When building total cost of ownership models, factors such as power consumption, compute density, and performance impact the final assessment. Even with the higher price of this flagship Granite Rapids Xeon processor, the economics of data center deployment may work in its favor. Customers get more cores in a single package, increasing density and driving down cost-per-core per system. This also improves operational efficiency, which is crucial considering that operating expenses account for about 10% of data center costs.
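
One way to see the density argument is to count how many dual-socket nodes are needed to reach a given core count with each generation. The sketch below uses a purely hypothetical 4,096-core deployment for illustration; it is not a published TCO model.

```python
# Illustrative consolidation math, not a real TCO model.
# The target core count and dual-socket assumption are hypothetical.
import math

target_cores = 4096
sockets_per_node = 2

for name, cores_per_socket in [("Xeon 8592+ (64C)", 64), ("Xeon 6980P (128C)", 128)]:
    nodes = math.ceil(target_cores / (cores_per_socket * sockets_per_node))
    print(f"{name}: {nodes} dual-socket nodes for {target_cores} cores")
# 32 nodes with 64-core parts vs. 16 nodes with 128-core parts:
# half the chassis, NICs, and rack slots carrying fixed per-node costs.
```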

Intel Clearwater Forest Pictured, First 18A Node High Volume Product

Yesterday, Intel launched its Xeon 6 family of server processors based on P-cores manufactured on the Intel 3 node. While the early reviews seem promising, Intel is preparing a more advanced generation of processors that will make or break its product and foundry leadership. Codenamed "Clearwater Forest," these CPUs are expected to be the first high-volume production chips based on the Intel 18A node. We have pictures of the five-tile Clearwater Forest processor thanks to Tom's Hardware. During the Enterprise Tech Tour event in Portland, Oregon, Tom's Hardware managed to take a picture of the complex Clearwater Forest design. While the compute logic is built on 18A, the base die uses Intel's 3-T process technology, which makes its debut in this role. Compute dies are stacked on this base die, making the CPU more complex to build but also more flexible.

The Foveros Direct 3D and EMIB technologies enable large-scale integration on a package, achieving capabilities that previous monolithic single-chip designs could not deliver. Other technologies like RibbonFET and PowerVia will also be present in Clearwater Forest. If everything continues to advance according to plan, we expect to see this next-generation CPU sometime next year. More importantly, if this CPU shows that high-volume production on Intel 18A is viable, many Intel Foundry customers will be reassured that Intel can compete with TSMC and Samsung in producing high-performance silicon on advanced nodes at scale.

ASRock Rack Expands Server Portfolio Powered by Intel Xeon 6900 Series Processors

ASRock Rack Inc., a leading innovative server company, today announced the launch of its new server platforms, powered by Intel Xeon 6900 series processors with Performance-Cores (P-Cores). These advanced platforms are designed to deliver exceptional performance across a wide range of demanding workloads, including High-Performance Computing (HPC), Artificial Intelligence (AI), storage, and networking.

The Intel Xeon 6900 Series Processors are optimized for high performance per core and are delivered in a new class of Intel server platform design. They offer up to 12 memory channels, providing greater memory bandwidth to support demanding environments such as cloud, AI, and HPC. Leveraging these processors, ASRock Rack's newly released platforms—the 1U all-flash storage server 1U8E1S-GNRAPDNO and the GNRAPD12DNO server motherboard—fully maximize throughput with unprecedented compute capability.

Intel Launches Gaudi 3 AI Accelerator and P-Core Xeon 6 CPU

As AI continues to revolutionize industries, enterprises are increasingly in need of infrastructure that is both cost-effective and available for rapid development and deployment. To meet this demand head-on, Intel today launched Xeon 6 with Performance-cores (P-cores) and Gaudi 3 AI accelerators, bolstering the company's commitment to deliver powerful AI systems with optimal performance per watt and lower total cost of ownership (TCO).

"Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software and developer tools," said Justin Hotard, Intel executive vice president and general manager of the Data Center and Artificial Intelligence Group. "With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency and security."

ASRock Industrial Launches IMB-X1900 Motherboard Driven by Intel Xeon W Series Processors

In today's fast-paced business environment, efficiency and performance are paramount. ASRock Industrial introduces the IMB-X1900, a motherboard that combines cutting-edge innovation with unparalleled processing power and AI acceleration to drive the future of high-performance systems. Designed to elevate industry standards and redefine performance in demanding environments, the IMB-X1900 exemplifies ASRock Industrial's commitment to excellence. With powerful processing capabilities, expansive memory support, and advanced connectivity options, this motherboard is built around the robust Intel Xeon W-3500/3400 and W-2500/2400 Series Processors, ensuring it effortlessly manages even the most complex parallelized tasks while delivering exceptional efficiency and speed.

Key Features of the IMB-X1900 Motherboard:
  • Unrivaled Processing Power: Featuring Intel Xeon W-3500/3400 and W-2500/2400 Series Processors with the W790 chipset, the IMB-X1900 supports up to 60 cores, making it a powerhouse for computationally intensive workloads. This capability is crucial for industries such as 3D rendering, product simulation, and large-scale AI model training and inference.
  • Expansive Memory Capacity: With support for up to 2 TB of DDR5 RDIMM ECC memory across 8 slots, the IMB-X1900 ensures seamless management of extensive datasets and complex simulations; the implied per-module capacity is worked out in the sketch below. The inclusion of ECC memory further enhances system reliability and data integrity, essential for industries like financial analysis, scientific research, and mission-critical applications.
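
Reaching that 2 TB ceiling across eight slots implies 256 GB modules, as the quick sketch below shows; the per-module figure is derived from the stated maximum rather than taken from a qualified vendor list.

```python
# DIMM-size arithmetic for the IMB-X1900's stated 2 TB maximum.
# The per-module capacity is derived, not taken from a QVL.

max_memory_tb = 2
dimm_slots = 8

gb_per_dimm = max_memory_tb * 1024 / dimm_slots
print(f"Required module size: {gb_per_dimm:.0f} GB per RDIMM")  # 256 GB
```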

Pat Gelsinger Writes to Employees on Foundry Momentum, Progress on Plan

All eyes have been on Intel since we announced Q2 earnings. There has been no shortage of rumors and speculation about the company, including last week's Board of Directors meeting, so I'm writing today to provide some updates and outline what comes next. Let me start by saying we had a highly productive and supportive Board meeting. We have a strong Board comprised of independent directors whose job it is to challenge and push us to perform at our best. And we had deep discussions about our strategy, our portfolio and the immediate progress we are making against the plan we announced on August 1.

The Board and I agreed that we have a lot of work ahead to drive greater efficiency, improve our profitability and enhance our market competitiveness—and there are three key takeaways from last week's meeting that I want to focus on:
  • We must build on our momentum in Foundry as we near the launch of Intel 18A and drive greater capital efficiency across this part of our business.
  • We must continue acting with urgency to create a more competitive cost structure and deliver the $10B in savings target we announced last month.
  • We must refocus on our strong x86 franchise as we drive our AI strategy while streamlining our product portfolio in service to Intel customers and partners.
We have several pieces of news to share that support these priorities.

GIGABYTE Announces New Liquid Cooled Solutions for NVIDIA HGX H200

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced new flagship GIGABYTE G593 series servers supporting direct liquid cooling (DLC) technology to advance green data centers using the NVIDIA HGX H200 GPU. As DLC technology is becoming a necessity for many data centers, GIGABYTE continues to expand its product portfolio with new DLC solutions for GPU and CPU technologies, and for these new G593 servers the cold plates are made by CoolIT Systems.

G593 Series - Tailored Cooling
The GPU-centric G593 series is custom engineered to house an 8-GPU baseboard, and its design anticipated both air and liquid cooling. The compact 5U chassis leads the industry in its readily scalable nature, fitting up to 64 GPUs in a single rack and supporting 100 kW of IT hardware. This helps to consolidate the IT hardware and, in turn, decrease the data center footprint. The G593 series servers for DLC are a response to rising customer demand for greater energy efficiency. Liquids have a higher thermal conductivity than air, so they can rapidly and effectively remove heat from hot components to maintain lower operating temperatures. And by relying on water and heat exchangers, the overall energy consumption of the data center is reduced.
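
Those chassis-level numbers translate into the rack-level arithmetic sketched below. The even power split across chassis is an assumption for illustration, not a GIGABYTE specification.

```python
# Rack-density arithmetic for the G593 figures quoted above.
# The even power split across chassis is an assumption.

gpus_per_chassis = 8      # 8-GPU HGX baseboard
chassis_height_u = 5
gpus_per_rack = 64
rack_power_kw = 100

chassis_per_rack = gpus_per_rack // gpus_per_chassis     # 8
rack_units_used = chassis_per_rack * chassis_height_u    # 40U, within a standard 42U rack
power_per_chassis_kw = rack_power_kw / chassis_per_rack  # 12.5 kW

print(chassis_per_rack, rack_units_used, power_per_chassis_kw)
```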

ASRock Intros Xeon W-3500 and W-2500 series Support for its Intel W790 Motherboards

Leading global motherboard manufacturer, ASRock has announced that its W790 series workstation motherboards, including the W790 WS and W790 WS R2.0, now support the newly released Intel Xeon W-3500 series and Xeon W-2500 series processors. This enables consumers to experience the superior performance of the latest Intel Xeon series processors.

The ASRock W790 WS series workstation motherboards feature up to 2 TB of DDR5 ECC RDIMM support, PCI-Express 5.0 expansion slots, USB4/Thunderbolt 4, and dual 10 Gbps Ethernet. A flagship-class 14-layer PCB and a 20+2 phase CPU VRM ensure ultimate performance and superb reliability, even when subjected to the most demanding sustained workloads.

ASUS Announces ESC N8-E11 AI Server with NVIDIA HGX H200

ASUS today announced the latest marvel in the groundbreaking lineup of ASUS AI servers - ESC N8-E11, featuring the intensely powerful NVIDIA HGX H200 platform. With this AI titan, ASUS has secured its first industry deal, showcasing the exceptional performance, reliability and desirability of ESC N8-E11 with HGX H200, as well as the ability of ASUS to move first and fast in creating strong, beneficial partnerships with forward-thinking organizations seeking the world's most powerful AI solutions.

Shipments of the ESC N8-E11 with NVIDIA HGX H200 are scheduled to begin in early Q4 2024, marking a new milestone in the ongoing ASUS commitment to excellence. ASUS has been actively supporting clients by assisting in the development of cooling solutions to optimize overall PUE, guaranteeing that every ESC N8-E11 unit delivers top-tier efficiency and performance - ready to power the new era of AI.

Supermicro Previews New Max Performance Intel-based X14 Servers

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is previewing new, completely re-designed X14 server platforms which will leverage next-generation technologies to maximize performance for compute-intensive workloads and applications. Building on the success of Supermicro's efficiency-optimized X14 servers that launched in June 2024, the new systems feature significant upgrades across the board, supporting a never-before-seen 256 performance cores (P-cores) in a single node, memory support for MRDIMMs at up to 8,800 MT/s, and compatibility with next-generation SXM, OAM, and PCIe GPUs. This combination can drastically accelerate AI and compute as well as significantly reduce the time and cost of large-scale AI training, high-performance computing, and complex data analytics tasks. Approved customers can secure early access to complete, full-production systems via Supermicro's Early Ship Program, or test them remotely with Supermicro JumpStart.
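
The headline 256 P-cores per node and the MRDIMM speed translate into the rough per-node figures below. The dual-socket split and the 12-channels-per-socket memory layout are assumptions consistent with Xeon 6900P-series specifications, not details confirmed in Supermicro's announcement.

```python
# Rough per-node arithmetic for the X14 figures quoted above.
# Socket count, cores per socket, and channel count are assumptions.

cores_per_node = 256
sockets_per_node = 2                      # assumed dual-socket node
cores_per_socket = cores_per_node // sockets_per_node   # 128

mrdimm_mts = 8800
bytes_per_transfer = 8                    # 64-bit data path per channel
channels_per_socket = 12                  # assumed, per Xeon 6900P specs

bw_per_channel_gbs = mrdimm_mts * bytes_per_transfer / 1000   # ~70.4 GB/s
bw_per_socket_gbs = bw_per_channel_gbs * channels_per_socket  # ~845 GB/s

print(cores_per_socket, bw_per_channel_gbs, bw_per_socket_gbs)
```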

"We continue to add to our already comprehensive Data Center Building Block solutions with these new platforms, which will offer unprecedented performance, and new advanced features," said Charles Liang, president and CEO of Supermicro. "Supermicro is ready to deliver these high-performance solutions at rack-scale with the industry's most comprehensive direct-to-chip liquid cooled, total rack integration services, and a global manufacturing capacity of up to 5,000 racks per month including 1,350 liquid cooled racks. With our worldwide manufacturing capabilities, we can deliver fully optimized solutions which accelerate our time-to-delivery like never before, while also reducing TCO."

Intel Launches Xeon W-3500 and W-2500 Series Workstation Processors

Intel today launched its Xeon W-3500 series and Xeon W-2500 series workstation processors. These chips are based on the "Sapphire Rapids" microarchitecture featuring the enterprise version of "Golden Cove" P-cores. They are a refresh of the Xeon W-3400 series and W-2400 series, featuring higher CPU core counts, more L3 cache, and higher clock speeds at given price points. Intel has also slightly de-cluttered its lineup with this series. The key difference between the W-3500 series and the W-2500 series is that the former comes with an 8-channel DDR5 memory interface and 112 PCI-Express Gen 5 lanes, while the latter offers a 4-channel DDR5 memory interface along with 64 PCI-Express Gen 5 lanes. The W-2500 series also comes with lower CPU core counts than the W-3500, which is somewhat made up for with higher CPU clock speeds. Perhaps the highlight of this refresh is that Intel now sells core counts of up to 60 cores/120 threads in the workstation segment; the W-3400 series had topped out at 36 cores/72 threads.
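
The memory-interface difference between the two lines translates directly into peak theoretical bandwidth. The sketch below assumes DDR5-4800 operation, a typical speed for "Sapphire Rapids" workstation parts; actual supported speeds vary by SKU and DIMM population.

```python
# Peak theoretical memory bandwidth for the two workstation lines.
# DDR5-4800 is an assumed operating speed, not confirmed per SKU.

def peak_bandwidth_gbs(channels, mts=4800, bytes_per_transfer=8):
    """Peak DDR5 bandwidth in GB/s for a given channel count."""
    return channels * mts * bytes_per_transfer / 1000

print(f"Xeon W-3500 (8-channel): {peak_bandwidth_gbs(8):.1f} GB/s")  # 307.2
print(f"Xeon W-2500 (4-channel): {peak_bandwidth_gbs(4):.1f} GB/s")  # 153.6
```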

The series is led by the Xeon W9-3595X. This beast maxes out the "Sapphire Rapids" chip with a 60-core/120-thread configuration, each of the 60 cores featuring 2 MB of dedicated L2 cache and sharing 112.5 MB of L3 cache. The chip comes with a base frequency of 2.00 GHz and a maximum boost frequency of 4.80 GHz. The next-highest SKU sees a rather steep drop in core count, with the Xeon W9-3575X coming in with a 44-core/88-thread configuration and 97.5 MB of shared L3 cache, besides the 2 MB of dedicated L2 cache per core. This chip ticks at 2.20 GHz base and 4.80 GHz maximum boost. There's yet another steep drop in core count with the Xeon W7-3545, featuring a 24-core/48-thread configuration, 67.5 MB of shared L3 cache, a 2.70 GHz base frequency, and 4.80 GHz maximum boost.

Gigantic LGA 9324 Socket Test Interposer For Intel's Future "Diamond Rapids-AP" Xeons Spotted

Intel has begun sampling the test tools for its "Oak Stream" platform, which will house the "Diamond Rapids" generation of processors sometime in late 2025 or early 2026. "Diamond Rapids" was previously rumored to continue using the "Birch Stream" platform's LGA 7529 socket, which will soon ship with the 288-core flavor of the "Sierra Forest" efficiency-core Xeons as well as 120-core "Granite Rapids" performance-core Xeons, but it now appears to be moving up to a substantially larger LGA 9324 socket. This is Intel's next-next generation of Xeon after what is shipping today, following the next-gen Intel 18A-based "Clearwater Forest," which was only just reported to have powered on earlier this month. Other than the codename, there is almost nothing currently known about "Diamond Rapids," but the rumor mill is already fired up, mentioning things such as increased core counts, 16 DRAM channels (similar to what AMD is expected to introduce with EPYC "Venice"), and PCI-E 6.0 support.

The LGA 9324 test interposer, intended for use with Intel's Gen 5 VR Test Tool, appeared on the Design-in Tools storefront before the page went to a 404 error. It carried a $900 price tag, and the listing stipulated that this was a pre-order with an expected shipment date in Q4 2024.

Intel Granite Rapids SKUs Detailed With Up To 128 Cores and 500 W TDP

The newest leak from X (formerly Twitter) has detailed five Intel Granite Rapids SKUs: the 6980P, 6979P, 6972P, 6952P, and the 6960P. Featuring up to 128 CPU cores and up to 504 MB of cache, these show that Intel Granite Rapids will double the number of cores compared to the Emerald Rapids SKUs.

The newest leak comes from Jaykihn over at X, following a previous leak that detailed the most powerful SKU. The 6980P will pack 128 cores and 504 MB of cache, run at a 2.0 GHz base frequency, and carry a massive 500 W TDP rating. The rest of the SKUs have lower core counts, ending with the 6960P, which comes with 72 cores and 432 MB of cache, but also a higher 2.7 GHz base frequency.
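
The leaked numbers also imply a notable spread in cache per core across the stack, as the quick calculation below shows; everything here is derived from the leaked values and nothing is official.

```python
# Cache-per-core arithmetic from the leaked Granite Rapids figures.
# Inputs are the leaked values quoted above; nothing here is official.

skus = {"6980P": (128, 504), "6960P": (72, 432)}   # cores, cache in MB

for name, (cores, cache_mb) in skus.items():
    print(f"Xeon {name}: {cache_mb / cores:.2f} MB of cache per core")
# 6980P: ~3.94 MB/core, 6960P: 6.00 MB/core
```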

Marvell Introduces Breakthrough Structera CXL Product Line to Address Server Memory Bandwidth and Capacity Challenges in Cloud Data Centers

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today launched the Marvell Structera product line of Compute Express Link (CXL) devices that enable cloud data center operators to overcome memory performance and scaling challenges in general-purpose servers.

To address memory-intensive applications, data center operators add extra servers to get higher memory bandwidth and higher memory capacity. The compute capabilities from the added processors are typically not utilized for these applications, making the servers inefficient from cost and power perspectives. The CXL industry standard addresses this challenge by enabling new architectures that can efficiently add memory to general-purpose servers.

Ex-Xeon Chief Lisa Spelman Leaves Intel and Joins Cornelis Networks as CEO

Cornelis Networks, a leading independent provider of intelligent, high-performance networking solutions, today announced the appointment of Lisa Spelman as its new chief executive officer (CEO), effective August 15. Spelman joins Cornelis from Intel Corporation, where she held executive leadership roles for more than two decades, including leading the company's core data center business. Spelman will succeed Philip Murphy, who will assume the role of president and chief operating officer (COO).

"Cornelis is unique in having the products, roadmap, and talent to help customers address this issue. I look forward to joining the team to bring their innovations to even more organizations around the globe."

Intel Xeon Processors Accelerate GenAI Workloads with Aible

Intel and Aible, an end-to-end serverless generative AI (GenAI) and augmented analytics enterprise solution, now offer solutions to shared customers to run advanced GenAI and retrieval-augmented generation (RAG) use cases on multiple generations of Intel Xeon CPUs. The collaboration, which includes engineering optimizations and a benchmarking program, enhances Aible's ability to deliver GenAI results at a low cost for enterprise customers and helps developers embed AI intelligence into applications. Together, the companies offer scalable and efficient AI solutions that draw on high-performing hardware to help customers solve challenges with AI and Intel.

"Customers are looking for efficient, enterprise-grade solutions to harness the power of AI. Our collaboration with Aible shows how we're closely working with the industry to deliver innovation in AI and lowering the barrier to entry for many customers to run the latest GenAI workloads using Intel Xeon processors," said Mishali Naik, Intel senior principal engineer, Data Center and AI Group.

Nfina Technologies Releases 144T Mid-Sized Desktop Tower Server

Nfina announces the release of a single-socket 144T mid-sized desktop tower server engineered for small and medium-sized businesses, entry-level cloud services, and virtualized environments.

The Nfina 144T features the new Intel Xeon E-2400 processors. Compared to the previous generation Xeon E-2300 processors, they deliver an impressive 34% improvement in performance. Additionally, these processors offer enhanced expandability, supporting two channels of DDR5 memory with speeds of up to 4,800 MT/s, along with 16-lane PCIe 5.0 for increased bandwidth.