News Posts matching #Xeon


Intel Itanium Reaches End of the Road with Linux Kernel Stopping Updates

Today marks the end of support for Itanium's IA-64 architecture in the Linux kernel, with the 6.7 update removing it: a significant milestone in the winding down of Intel Itanium. Itanium, initially Intel's ambitious venture into 64-bit computing, struggled throughout its existence. Jointly developed by Intel and HP, it was hampered by delays and by its lack of compatibility with x86 software, a significant obstacle to adoption. When AMD introduced x86-64 (AMD64) with its Opteron CPUs, which could run x86 software natively, Intel was compelled to adopt x86-64 for Xeon, leaving Itanium to fade into the background.

Despite ongoing efforts to sustain Itanium, it no longer received annual CPU product updates; the last came in 2017. The removal of IA-64 support from the Linux kernel will have a substantial impact, since Linux is an essential operating system for Itanium CPUs. Without ongoing updates, the usability of Itanium servers will inevitably decline, pushing the few remaining Itanium users, most of whom are likely looking to modernize their product stacks, to migrate to alternative solutions.

Intel, Dell Technologies and University of Cambridge Announce Deployment of Dawn Supercomputer

Dell Technologies, Intel and the University of Cambridge announce the deployment of the co-designed Dawn Phase 1 supercomputer. Leading technical teams built the U.K.'s fastest AI supercomputer that harnesses the power of both artificial intelligence (AI) and high performance computing (HPC) to solve some of the world's most pressing challenges. This sets a clear way forward for future U.K. technology leadership and inward investment into the U.K. technology sector. Dawn kickstarts the recently launched U.K. AI Research Resource (AIRR), which will explore the viability of associated systems and architectures. Dawn brings the U.K. closer to reaching the compute threshold of a quintillion (10^18) floating point operations per second - one exaflop, better known as exascale. For perspective: Every person on earth would have to make calculations 24 hours a day for more than four years to equal a second's worth of processing power in an exascale system.
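The every-person-on-earth comparison is easy to sanity-check. A quick sketch, assuming a world population of roughly 8 billion and one calculation per person per second (neither figure is stated in the article):

```python
# Back-of-the-envelope check of the exascale comparison above.
# Assumptions: ~8 billion people, each making 1 calculation per second.
EXAFLOP_OPS = 1e18              # operations in one second of exascale compute
population = 8e9                # approximate world population (assumption)
seconds_per_year = 365.25 * 24 * 3600

years = EXAFLOP_OPS / (population * seconds_per_year)
print(f"{years:.1f} years")     # close to four years with these assumptions
```

With a slightly smaller population estimate the figure tips just past four years, matching the article's "more than four years" framing.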

"Dawn considerably strengthens the scientific and AI compute capability available in the U.K., and it's on the ground, operational today at the Cambridge Open Zettascale Lab. Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI. I'm very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel and the University of Cambridge, and further broaden that to the U.K. scientific and AI community," said Adam Roe, EMEA HPC technical director at Intel.

Velocity Micro Announces ProMagix G480a and G480i, Two GPU Server Solutions for AI and HPC

Velocity Micro, the premier builder of award-winning enthusiast desktops, laptops, high performance computing solutions, and professional workstations, announces the immediate availability of the ProMagix G480a and G480i - two GPU servers optimized for High Performance Computing and Artificial Intelligence. Powered by either dual 4th Gen AMD EPYC or dual 4th Gen Intel Xeon Scalable processors, these 4U form factor servers support up to eight dual-slot PCIe Gen 5 GPUs to create incredible compute power designed specifically for the highest demand workflows including simulation, rendering, analytics, deep learning, AI, and more. Shipments begin immediately.

"By putting emphasis on scalability, functionality, and performance, we've created a line of server solutions that tie in the legacy of our high-end brand while also offering businesses alternative options for more specialized solutions for the highest demand workflows," said Randy Copeland, President and CEO of Velocity Micro. "We're excited to introduce a whole new market to what we can do."

Supermicro Starts Shipments of NVIDIA GH200 Grace Hopper Superchip-Based Servers

Supermicro, Inc., a Total IT Solution manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing ultimate flexibility and expansion ability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures Plug-and-Play compatibility.

"Supermicro is a recognized leader in driving today's AI revolution, transforming data centers to deliver the promise of AI to many workloads," said Charles Liang, president and CEO of Supermicro. "It is crucial for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block strategy enables us to bring the latest systems to market quickly and are the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises to develop new AI-enabled applications, simplifying deployment and reducing environmental impact. The range of new servers incorporates the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, BlueField, and PCIe 5.0 EDSFF slots."

Intel Partners with Submer to Cool 1,000+ Watt Processors using Immersion Cooling

Intel and Submer, a company specializing in immersion cooling, are set to unveil a creative immersion cooling system at the OCP Global Summit. This system can efficiently dissipate 1,000 W of power in a single-phase liquid cooling setup designed for deployment in data centers. Unlike traditional water cooling, immersion cooling systems offer higher efficiency and reliability. The solution developed by Submer and Intel is based on a Forced Convection Heat Sink (FCHS) and leverages a heat exchanger for heat transfer with a second liquid. The primary advantage of immersion cooling is its lack of active components on the cooling element, making it possible for immersed systems to operate without them for extended periods.

In this new system, a copper cooler is paired with two fans at one end to enhance liquid flow through the heat sink using forced convection. However, this active cooling component contradicts the traditional passive concept of immersion cooling based on natural convection. In its initial phase, Submer and Intel utilized Xeon processors with an 800 W TDP, with plans to increase that figure to 1,000 W in the next step. The Forced Convection Heat Sink (FCHS) offers the advantages of easy manufacturing and cost-effective usage while effectively dissipating up to 1,000 W of waste heat, making it a compelling option for immersion cooling. According to Submer, the heat sink could even be 3D printed, and the plan is to achieve cooling of 1 kW+ chips. We expect to hear more about the system during the OCP Global Summit, running from October 17 to 19; for now, we only have a lower-resolution image from Submer's press release.

GIGABYTE Announces Immersion GPU Servers and Nodes Following Open Rack V3 Specifications at the OCP Global Summit

Giga Computing, a subsidiary of GIGABYTE Technology and an industry leader in high-performance servers, server motherboards, and workstations, today announced ahead of exhibiting at the OCP Global Summit new OCP ORv3-based solutions, including ones designed for GPU-centric workloads in immersion cooling. The new GIGABYTE products include a 2OU node tray, the TO25-BT0, and the compute nodes TO25-S10, TO25-S11, TO25-S12, TO25-Z10, TO25-Z11, and TO25-Z12. For immersion-cooling server deployments following ORv3 specifications, there are the new GPU servers TO15-S40, TO15-S41, and TO15-Z40.

"Our commitment to support Open Compute Project's designs dates back almost a decade, starting with our OCP v1.0 products, and now we are releasing some well-designed, in-demand ORv3 products. Through discussions with several customers, we decided to move forward with the development of these configurations. These designs check the boxes for specifications that are required," said Vincent Wang, Sales VP at Giga Computing. "We will continue supporting ORv3 specifications and we have more products on the horizon in Q1 2024."

Intel 5th Gen Xeon Platinum 8580 CPU Details Leaked

YuuKi_AnS has brought renewed attention to an already leaked Intel "Emerald Rapids" processor—the 5th Gen Xeon Platinum 8580 CPU was identified with 60 cores and 120 threads in a previous post, but a follow-up has appeared in the form of an engineering prototype (ES2-Q2SP-A0). Yuuki noted: "samples are for reference only, and the actual performance is subject to the official version." Team Blue has revealed a launch date—December 14, 2023—for its 5th Gen Xeon Scalable processor lineup, so it is not surprising to see pre-release examples appear online a couple of months beforehand. This particular ES2 SKU (on A0 silicon) fields an all P-Core configuration consisting of Raptor Cove units, with a dual-chiplet design (30 cores per die). There is a significant bump up in cache sizes when compared to the current "Sapphire Rapids" generation—Wccftech outlines these allocations: "Each core comes with 2 MB of L2 cache for up to 120 MB of L2 cache. The whole chip also features 300 MB of L3 cache which combines to offer a total cache pool of 420 MB."

They bring in some of the competition for comparison: "That's a 2.6x increase in cache versus the existing Sapphire Rapids CPU lineup and while it still doesn't match the 480 MB L3 cache of standard (AMD) Genoa or the 1.5 GB cache pool of Genoa-X, it is a good start for Intel to catch up." Team Blue appears ready to take on AMD on many levels—this week's Innovation Event produced some intriguing announcements including "Sierra Forest vs. Bergamo" and plans to embrace 3D Stacked Cache technology. Yuuki's small batch of screenshots show the Xeon Platinum 8580 CPU's captured clock speeds are far from the finished article—just a touch over 2.0 GHz, so very likely limited to safe margins. An unnamed mainboard utilizing Intel's Eagle Stream platform was logged sporting a dual-socket setup—the test system was running a grand total of 120 cores and 240 threads!
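The quoted cache and core figures add up cleanly. A quick sketch using only numbers stated above:

```python
# Sanity-checking the quoted Xeon Platinum 8580 figures.
cores = 60
l2_per_core_mb = 2        # MB of L2 per Raptor Cove core (quoted)
l3_total_mb = 300         # MB of shared L3 (quoted)
sockets = 2               # dual-socket Eagle Stream test system

l2_total_mb = cores * l2_per_core_mb      # 120 MB of L2
combined_mb = l2_total_mb + l3_total_mb   # 420 MB total cache pool
system_cores = sockets * cores            # 120 cores across both sockets
system_threads = system_cores * 2         # 240 threads with Hyper-Threading

print(l2_total_mb, combined_mb, system_cores, system_threads)  # 120 420 120 240
```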

Intel 288 E-core Xeon "Sierra Forest" Out to Eat AMD EPYC Bergamo's Lunch

Intel at the 2023 Innovation event unveiled a 288-core extreme core-count variant of the Xeon "Sierra Forest" processor for high-density servers in scale-out, cloud-native environments. It succeeds the current 144-core model. "Sierra Forest" is a server processor based entirely on efficiency cores, or E-cores, using the "Sierra Glen" microarchitecture, a server-grade derivative of "Crestmont," Intel's second-generation E-core that is making its client debut with "Meteor Lake."

Xeon "Sierra Forest" is a chiplet-based processor, much like "Meteor Lake" and the upcoming "Emerald Rapids" server processor. It features a total of five tiles—two Compute tiles, two I/O tiles, and a base tile (interposer). Each of the two Compute tiles is built on the Intel 3 foundry node, a more advanced node than Intel 4, featuring higher-density libraries, and an undisclosed performance/Watt increase. Each tile has 36 "Sierra Glen" E-core clusters, 108 MB of shared L3 cache, 6-channel (12 sub-channel) DDR5 memory controllers, and Foveros tile-to-tile interfaces.
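The per-tile figures above are consistent with a 288-core part. A minimal sketch, assuming each "Sierra Glen" cluster holds four E-cores (typical for Intel E-core clusters, but not stated in the article):

```python
# How the quoted tile figures scale up to the 288-core Sierra Forest SKU.
compute_tiles = 2
clusters_per_tile = 36
cores_per_cluster = 4      # assumption: 4 E-cores per cluster
l3_per_tile_mb = 108       # MB of shared L3 per compute tile (quoted)

total_cores = compute_tiles * clusters_per_tile * cores_per_cluster  # 288
total_l3_mb = compute_tiles * l3_per_tile_mb                         # 216 MB

print(total_cores, total_l3_mb)  # 288 216
```

Under the same assumption, the outgoing 144-core model corresponds to a single fully enabled compute tile.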

Pat Gelsinger Says 3D Stacked Cache Tech Coming to Intel

Intel CEO Pat Gelsinger, in the Q&A session of Innovation 2023 Day 1, confirmed that the company is developing 3D-stacked cache technology for its processors. The technology involves expanding the on-die last-level cache (L3 cache) of a processor with an additional SRAM die physically stacked on top, and bonded with the cache's high-bandwidth data fabric. The stacked cache operates at the same speed as the on-die cache, and so the combined cache size is visible to software as a single contiguous addressable block of cache memory.

AMD has used 3D-stacked cache to good effect on its processors. On client processors such as the Ryzen X3D series, the cache provides significant gaming performance uplifts as the larger L3 cache makes more of the game's rendering data immediately accessible to the CPU cores; while on server processors such as EPYC "Milan-X" and "Genoa-X," the added cache provides significant uplifts to memory intensive compute workloads. Intel's approach to 3D-stacked cache will be different at the hardware level compared to AMD's, Gelsinger stated in his response. AMD's tech has been collaboratively developed with TSMC, and hinges on a TSMC-made SoIC packaging tech that facilitates high-density die-to-die wiring between the CCD and cache chiplet. Intel uses its own fabs for processor dies, and will have to use its own IP.

Intel Innovation 2023: Bringing AI Everywhere

As the world experiences a generational shift to artificial intelligence, each of us is participating in a new era of global expansion enabled by silicon. It's the "Siliconomy," where systems powered by AI are imbued with autonomy and agency, assisting us across both knowledge-based and physical-based tasks as part of our everyday environments.

At Intel Innovation, the company unveiled technologies to bring AI everywhere and to make it more accessible across all workloads - from client and edge to network and cloud. These include easy access to AI solutions in the cloud, better price performance for Intel data center AI accelerators than the competition offers, tens of millions of new AI-enabled Intel PCs shipping in 2024 and tools for securely powering AI deployments at the edge.

Supermicro Announces Future Support and Upcoming Early Access for 5th Gen Intel Xeon Processors on the Complete Family of X13 Servers

Intel Innovation 2023 -- Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing future support for the upcoming 5th Gen Intel Xeon processors. In addition, Supermicro will soon offer early shipping and free remote early access testing of the new Systems via its JumpStart Program for qualified customers. To learn more, go to www.supermicro.com/x13 for details. The Supermicro 8x GPU optimized servers, the SuperBlade servers, and the Hyper Series will soon be ready for customers to test their workloads on the new CPU.

"Supermicro's range of Generative High-Performance AI systems, including recently launched GPUs, continues to lead the industry in AI offerings with its broad range of X13 family of servers designed for various workloads, from the edge to the cloud," said Charles Liang, president, and CEO, Supermicro. "Our support for the upcoming 5th Gen Intel Xeon processors, with more cores, an increased performance per watt, and the latest DDR5-5600 MHz memory, will allow our customers to realize even greater application performance and power efficiency for AI, Cloud, 5G Edge, and Enterprise workloads. These new features will help customers accelerate their business and maximize their competitive advantage."

MiTAC to Showcase Cloud and Datacenter Solutions, Empowering AI at Intel Innovation 2023

Intel Innovation 2023 - September 13, 2023 - MiTAC Computing Technology, a professional IT solution provider and a subsidiary of MiTAC Holdings Corporation, will showcase its DSG (Datacenter Solutions Group) product lineup powered by 4th Gen Intel Xeon Scalable processors for enterprise, cloud and AI workloads at Intel Innovation 2023, booth #H216 in the San Jose McEnery Convention Center, USA, from September 19-20.

"MiTAC has seamlessly and successfully managed the Intel DSG business since July. The datacenter solution product lineup enhances MiTAC's product portfolio and service offerings. Our customers can now enjoy a comprehensive one-stop service, ranging from motherboards and barebones servers to Intel Data Center blocks and complete rack integration for their datacenter infrastructure needs," said Eric Kuo, Vice President of the Server Infrastructure Business Unit at MiTAC Computing Technology.

Intel Shows Strong AI Inference Performance

Today, MLCommons published results of its MLPerf Inference v3.1 performance benchmark for GPT-J, the 6 billion parameter large language model, as well as computer vision and natural language processing models. Intel submitted results for Habana Gaudi 2 accelerators, 4th Gen Intel Xeon Scalable processors, and Intel Xeon CPU Max Series. The results show Intel's competitive performance for AI inference and reinforce the company's commitment to making artificial intelligence more accessible at scale across the continuum of AI workloads - from client and edge to the network and cloud.

"As demonstrated through the recent MLCommons results, we have a strong, competitive AI product portfolio, designed to meet our customers' needs for high-performance, high-efficiency deep learning inference and training, for the complete spectrum of AI models - from the smallest to the largest - with leading price/performance." -Sandra Rivera, Intel executive vice president and general manager of the Data Center and AI Group

Intel Demos 6th Gen Xeon Scalable CPUs, Core Counts Leaked

Intel's advanced packaging prowess demonstration took place this week—attendees were able to get an early-ish look at Team Blue's 6th Generation Xeon Scalable "Granite Rapids" processors. This multi-tile datacenter-oriented CPU family is projected to hit the market within the first half of 2024, but reports suggest that key enterprise clients have recently received evaluation samples. Coincidentally, renowned hardware leaker Yuuki_AnS has managed to source more information from industry insiders. This follows their complete blowout of more mainstream Raptor Lake Refresh desktop SKUs.

The leaked slide presents a bunch of evaluation sample "Granite Rapids-SP" XCC and "Sierra Forest" HCC SKUs. Intel has not officially published core counts for these upcoming "Avenue City" platform product lines. According to their official marketing blurb: "Intel Xeon processors with P-cores (Granite Rapids) are optimized to deliver the lowest total cost of ownership (TCO) for high-core performance-sensitive workloads and general-purpose compute workloads. Today, Xeon enables better AI performance than any other CPU, and Granite Rapids will further enhance AI performance. Built-in accelerators give an additional boost to targeted workloads for even greater performance and efficiency."

Supermicro Launches Industry Leading vSAN HCI Solution

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, today announced a new VMware vSAN solution optimized to run enterprise class hyperconverged virtualized workloads. As virtualized workloads become more advanced, processing power and storage performance requirements increase, requiring greater capacity to meet application SLAs and maximize virtual machine density. This solution also utilizes the latest Intel AMX accelerator for AI workloads.

When compared to the Supermicro X11 BigTwin, benchmark testing conducted by Supermicro showed up to 4.7X higher IO throughput and 8.2X lower latency on the HCIBench benchmark, up to 4.9X faster image classification inference throughput on the ResNet50 model and up to 4X faster natural language processing throughput on BERT-Large model. In addition, the superior power and efficiency of the Supermicro X13 BigTwin architecture can deliver up to 3X cost and performance improvement within the same node footprint compared to a similar deployment based on older generation Supermicro systems, creating a compelling case for organizations to upgrade their aging infrastructure.

IBASE Announces INA8505 Enterprise 1U Edge Server for 5G Open vRAN & MEC

IBASE Technology Inc. (TPEx: 8050), a global leader in network appliances and embedded solutions, proudly announces the release of the INA8505 enterprise 1U edge server. Powered by the Intel Xeon D-2700 processor and offering versatile connectivity options, this state-of-the-art appliance is specifically designed to excel in demanding 5G Open vRAN & MEC applications such as real-time data analytics, autonomous vehicles, and smart city deployments. It enables full control over resource allocation in the RAN and MEC, and has the potential to seamlessly integrate AI capabilities to dynamically optimize network performance in real time at the edge of the 5G network infrastructure.

The INA8505 delivers unmatched performance, scalability, and efficiency with flexible storage, offering two SATA/NVMe 2.5" HDD/SSD slots, 2x M.2 (M-key) SATA/PCI-E storage slots, and one 16 GB/32 GB/64 GB eMMC. With an FHFL PCI-E (x16) Gen 4 slot (supports 75 W) and an FHFL PCI-E (x8) Gen 4 slot (supports 75 W), configurable as a double-width FHFL PCI-E (x16) Gen 4 slot (supports 120 W), the INA8505 adapts effortlessly to different network environments and meets future demands for increased scalability. It boasts a rich array of I/O connectivity options, including a VGA port from the BMC (Aspeed 2600, IPMI 2.0 support), two USB 2.0 Type-A ports, an RJ45 console port, and four 25 GbE SFP28 ports, ensuring enhanced adaptability to various connectivity needs.

"Downfall" Intel CPU Vulnerability Can Impact Performance By 50%

Intel has recently revealed a security vulnerability named Downfall (CVE-2022-40982) that impacts multiple generations of Intel processors. The vulnerability is linked to Intel's memory optimization features, exploiting the Gather instruction, which accelerates fetching data from scattered memory locations. It inadvertently exposes internal hardware registers, allowing malicious software to access data held by other programs. The flaw affects Intel mainstream and server processors ranging from the Skylake to Rocket Lake microarchitectures; Intel has published the full list of affected CPUs. Intel has responded by releasing a microcode update to fix the flaw. However, there is concern over the performance impact of the fix, which can slow AVX2 and AVX-512 workloads involving the Gather instruction by up to 50%.

Phoronix tested the Downfall mitigations and reported varying performance decreases on different processors. For instance, two Xeon Platinum 8380 processors were around 6% slower in certain tests, while the Core i7-1165G7 faced performance degradation ranging from 11% to 39% in specific benchmarks. While these reductions were less than Intel's forecasted 50% overhead, they remain significant, especially in High-Performance Computing (HPC) workloads. The ramifications of Downfall are not restricted to specialized tasks like AI or HPC but may extend to more common applications such as video encoding. Though the microcode update is not mandatory and Intel provides an opt-out mechanism, users are left with a challenging decision between security and performance. Executing a Downfall attack might seem complex, but the final choice between implementing the mitigation or retaining performance will likely vary depending on individual needs and risk assessments.
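For readers weighing that security-versus-performance decision, recent Linux kernels report the Downfall (Gather Data Sampling) mitigation status through sysfs. A hedged sketch, assuming a Linux system with a kernel new enough to expose the entry; on other systems the file simply will not exist and the function reports "unknown":

```python
# Sketch: query the kernel's reported Downfall/GDS mitigation status.
# The sysfs path below is the one recent Linux kernels use for Gather
# Data Sampling; older kernels and non-Linux systems won't have it.
from pathlib import Path

GDS_SYSFS = "/sys/devices/system/cpu/vulnerabilities/gather_data_sampling"

def downfall_status(sysfs: str = GDS_SYSFS) -> str:
    p = Path(sysfs)
    if not p.exists():
        return "unknown"           # kernel too old, or not Linux
    return p.read_text().strip()   # e.g. "Mitigation: Microcode" or "Vulnerable"

print(downfall_status())
```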

Supermicro Announces High Volume Production of E3.S All-Flash Storage Portfolio with CXL Memory Expansion

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is delivering high-throughput, low-latency E3.S storage solutions supporting the industry's first PCIe Gen 5 drives and CXL modules to meet the demands of large AI Training and HPC clusters, where massive amounts of unstructured data must be delivered to the GPUs and CPUs to achieve faster results.

Supermicro's Petascale systems are a new class of storage servers supporting the latest industry standard E3.S (7.5 mm) Gen 5 NVMe drives from leading storage vendors for up to 256 TB of high-throughput, low-latency storage in 1U, or up to half a petabyte in 2U. Inside, Supermicro's innovative symmetrical architecture reduces latency by ensuring the shortest signal paths for data and maximizes airflow over critical components, allowing them to run at optimal speeds. With these new systems, a standard rack can now hold over 20 petabytes of capacity for high-throughput NVMe-oF (NVMe over Fabrics) configurations, ensuring that GPUs remain saturated with data. Systems are available with either 4th Gen Intel Xeon Scalable processors or 4th Gen AMD EPYC processors.

Intel 4th Gen Xeon Powers New Amazon EC2 M7i-flex and M7i Instances

Today, Amazon Web Services (AWS) announced the general availability of new Amazon Elastic Compute Cloud (Amazon EC2) instances powered by custom 4th Gen Intel Xeon Scalable processors. This launch is the latest on a growing list of 4th Gen Xeon-powered instances that deliver leading total cost of ownership (TCO) and the most built-in accelerators of any CPU to fuel key workloads like AI, database, networking and enterprise applications.

"Intel worked closely with AWS to bring our feature-rich 4th Gen Xeon processors to its cloud customers, many of which have benefited from its performance and value for months in private and public preview. Today, we're happy to bring that same real-world value to cloud customers around the globe," said Lisa Spelman, Intel corporate vice president and general manager of the Xeon Products and Solutions Group.

China Hosts 40% of all Arm-based Servers in the World

The escalating challenges in acquiring high-performance x86 servers have prompted Chinese data center companies to accelerate the shift to Arm-based system-on-chips (SoCs). Investment banking firm Bernstein reports that approximately 40% of all Arm-powered servers globally are currently being used in China. While most servers operate on x86 processors from AMD and Intel, there's a growing preference for Arm-based SoCs, especially in the Chinese market. Several global tech giants, including AWS, Ampere, Google, Fujitsu, Microsoft, and Nvidia, have already adopted or developed Arm-powered SoCs. However, Arm-based SoCs are increasingly favorable for Chinese firms, given the difficulty in consistently sourcing Intel's Xeon or AMD's EPYC. Chinese companies like Alibaba, Huawei, and Phytium are pioneering the development of these Arm-based SoCs for client and data center processors.

However, the US government's restrictions present some challenges. Both Huawei and Phytium, blacklisted by the US, cannot access TSMC's cutting-edge process technologies, limiting their ability to produce competitive processors. Although Alibaba's T-Head can leverage TSMC's latest innovations, it can't license Arm's high-performance computing Neoverse V-series CPU cores due to various export control rules. Despite these challenges, many chip designers are considering alternatives such as RISC-V, an unrestricted, rapidly evolving open-source instruction set architecture (ISA) suitable for designing highly customized general-purpose cores for specific workloads. Still, with the backing of influential firms like AWS, Google, Nvidia, Microsoft, Qualcomm, and Samsung, the Armv8 and Armv9 instruction set architectures continue to hold an edge over RISC-V. These companies' support ensures that the software ecosystem remains compatible with their CPUs, which will likely continue to drive the adoption of Arm in the data center space.

Intel Reports Second-Quarter 2023 Financial Results, Foundry Services Business up

Intel Corporation today reported second-quarter 2023 financial results. "Our Q2 results exceeded the high end of our guidance as we continue to execute on our strategic priorities, including building momentum with our foundry business and delivering on our product and process roadmaps," said Pat Gelsinger, Intel CEO. "We are also well-positioned to capitalize on the significant growth across the AI continuum by championing an open ecosystem and silicon solutions that optimize performance, cost and security to democratize AI from cloud to enterprise, edge and client."

David Zinsner, Intel CFO, said, "Strong execution, including progress towards our $3 billion in cost savings in 2023, contributed to the upside in the quarter. We remain focused on operational efficiencies and our Smart Capital strategy to support sustainable growth and financial discipline as we improve our margins and cash generation and drive shareholder value." In the second quarter, the company generated $2.8 billion in cash from operations and paid dividends of $0.5 billion.

IBM Launches AI-informed Cloud Carbon Calculator

IBM has launched a new tool to help enterprises track greenhouse gas (GHG) emissions across cloud services and advance their sustainability performance throughout their hybrid, multicloud journeys. Now generally available, the IBM Cloud Carbon Calculator - an AI-informed dashboard - can help clients access emissions data across a variety of IBM Cloud workloads such as AI, high performance computing (HPC) and financial services.

Across industries, enterprises are embracing modernization by leveraging hybrid cloud and AI to digitally transform with resiliency, performance, security, and compliance at the forefront, all while remaining focused on delivering value and driving more sustainable business practices. According to a recent study by IBM, 42% of CEOs surveyed pinpoint environmental sustainability as their top challenge over the next three years. At the same time, the study reports that CEOs are facing pressure to adopt generative AI while also weighing the data management needs to make AI successful. The increase in data processing required for AI workloads can present new challenges for organizations that are looking to reduce their GHG emissions. With more than 43% of CEOs surveyed already using generative AI to inform strategic decisions, organizations should prepare to balance executing high performance workloads with sustainability.

Intel, Ericsson Expand Collaboration to Advance Next-Gen Optimized 5G Infrastructure

Today, Intel announced a strategic collaboration agreement with Ericsson to utilize Intel's 18A process and manufacturing technology for Ericsson's future next-generation optimized 5G infrastructure. As part of the agreement, Intel will manufacture custom 5G SoCs (system-on-chip) for Ericsson to create highly differentiated leadership products for future 5G infrastructure. Additionally, the companies will expand their collaboration to optimize 4th Gen Intel Xeon Scalable processors with Intel vRAN Boost for Ericsson's Cloud RAN (radio access network) solutions to help communications service providers increase network capacity and energy efficiency while gaining greater flexibility and scalability.

"As our work together evolves, this is a significant milestone with Ericsson to partner broadly on their next-generation optimized 5G infrastructure. This agreement exemplifies our shared vision to innovate and transform network connectivity, and it reinforces the growing customer confidence in our process and manufacturing technology," said Sachin Katti, senior vice president and general manager of the Network and Edge group at Intel. "We look forward to working together with Ericsson, an industry leader, to build networks that are open, reliable and ready for the future."

AIC Launches HA401-TU, a New High-availability Server Model

AIC has launched the new high-availability storage server HA401-TU, which is optimized for mission-critical, enterprise-level storage applications. This cluster-in-a-box solution features an active-active failover design that eliminates single points of failure. HA401-TU is a 4U high-availability (HA) server with two controller nodes and supports 24 3.5" SAS 12 Gb/s drives. Each controller node is equipped with an AIC Tucana server board powered by dual 3rd Gen Intel Xeon Scalable processors and the Intel C621A chipset, which supports UPI speeds up to 11.2 GT/s. HA401-TU provides enterprise users with a number of crucial benefits, with redundant hardware components ensuring that there is no single point of failure.

With the hot-swappable functionality, the controller canisters protect enterprises from the loss of revenue that can occur when access to mission-critical data or applications is disrupted. Both controller nodes process data input/output (I/O) operations and users can experience simultaneous and balanced access to logical devices. In the event of failover, the secondary node will automatically take over the devices, client connections and all the processes and services running in the system. This high-availability design significantly enhances the overall performance of clusters, enabling seamless handling of demanding workloads.

Intel "Granite Rapids-D" Xeon Processors Come in Core-count and Memory-channel Based Physical Variants

The "Granite Rapids-D" line of upcoming processors is designed for data-center servers on the edge. These non-socketed processors come in BGA4368 packages. The company is reportedly readying at least two key variants of these chips based on core counts and memory channels. The "Granite Rapids-D" HCC (high core-count) is an MCM of a "Granite Rapids" LCC (low core-count) compute tile and a single I/O tile with a 4-channel DDR5 memory interface.

The "Granite Rapids-D" XCC (extreme core-count) has one "Granite Rapids" HCC (high core-count) compute tile, and two I/O tiles that make up the chip's 8-channel DDR5 memory interface. A probable reason for the confusion between LCC, HCC, and XCC terminologies for "Granite Rapids-D" is because the compute tiles are carried over from the main "Granite Rapids-SP" server processors, where they mean different things for the core-counts of mainline servers.
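The reported tile compositions can be summarized compactly. A minimal sketch, assuming each I/O tile carries a 4-channel DDR5 interface (consistent with the 4-channel HCC and 8-channel XCC figures above, but not stated explicitly):

```python
# The two reported "Granite Rapids-D" package configurations.
DDR5_CHANNELS_PER_IO_TILE = 4  # assumption derived from the quoted figures

variants = {
    "HCC": {"compute_tile": "LCC", "io_tiles": 1},  # low core-count tile
    "XCC": {"compute_tile": "HCC", "io_tiles": 2},  # high core-count tile
}

for name, v in variants.items():
    channels = v["io_tiles"] * DDR5_CHANNELS_PER_IO_TILE
    print(f"{name}: {v['compute_tile']} compute tile, {channels}-channel DDR5")
```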