News Posts matching #Scalable


Intel to Host 4th Gen Xeon Scalable and Max Series Launch on the 10th of January

On Jan. 10, Intel will officially welcome to market the 4th Gen Intel Xeon Scalable processors and the Intel Xeon CPU Max Series, as well as the Intel Data Center GPU Max Series for high performance computing (HPC) and AI. Hosted by Sandra Rivera, executive vice president and general manager of the Datacenter and AI Group at Intel, and Lisa Spelman, corporate vice president and general manager of Intel Xeon Products, the event will highlight the value of 4th Gen Intel Xeon Scalable processors and the Intel Max Series product family, while showcasing customer, partner and ecosystem support.

The event will demonstrate how Intel is addressing critical needs in the marketplace with a focus on a workload-first approach, performance leadership in key areas such as AI, networking and HPC, the benefits of security and sustainability, and how the company is delivering significant outcomes for its customers and the industry.

New Intel oneAPI 2023 Tools Maximize Value of Upcoming Intel Hardware

Today, Intel announced the 2023 release of the Intel oneAPI tools - available in the Intel Developer Cloud and rolling out through regular distribution channels. The new oneAPI 2023 tools support the upcoming 4th Gen Intel Xeon Scalable processors, Intel Xeon CPU Max Series and Intel Data Center GPUs, including Flex Series and the new Max Series. The tools deliver performance and productivity enhancements, and also add support for new Codeplay plug-ins that make it easier than ever for developers to write SYCL code for non-Intel GPU architectures. These standards-based tools deliver choice in hardware and ease in developing high-performance applications that run on multiarchitecture systems.

"We're seeing encouraging early application performance results on our development systems using Intel Max Series GPU accelerators - applications built with Intel's oneAPI compilers and libraries. For leadership-class computational science, we value the benefits of code portability from multivendor, multiarchitecture programming standards such as SYCL and Python AI frameworks such as PyTorch, accelerated by Intel libraries. We look forward to the first exascale scientific discoveries from these technologies on the Aurora system next year."
-Timothy Williams, deputy director, Argonne Computational Science Division

Nfina Technologies Releases 3rd Gen Intel Xeon Scalable Processor-based Systems

Nfina announces the addition of three new server systems to its lineup, customized for hybrid/multi-cloud, hyperconverged HA infrastructure, HPC, backup/disaster recovery, and business storage solutions. Featuring 3rd Gen Intel Xeon Scalable Processors, Nfina-Store, and Nfina-View software, these scalable server systems fill a void in the marketplace, bringing exceptional multi-socket processing performance, easy-to-use management tools, built-in backup, and rapid disaster recovery.

"We know we must build systems for the business IT needs of today while planning for unknown future demands. Flexible infrastructure is key, optimized for hybrid/multi-cloud, backup/disaster recovery, HPC, and growing storage needs," says Warren Nicholson, President and CEO of Nfina. He continues, "Flexible infrastructure also means offering managed services like IaaS, DRaaS, etc., that provide customers with choices that fit the size of their application and budget - not a one-size-fits-all approach like many of our competitors. Our goal is to serve many different business IT applications, any size, anywhere, at any time."

AWS Updates Custom CPU Offerings with Graviton3E for HPC Workloads

Amazon Web Services' (AWS) cloud division is extensively developing custom Arm-based CPUs to suit its enterprise clients and is releasing new iterations of the Graviton series. Today, during the company's re:Invent week, we are getting a new CPU custom-tailored to high-performance computing (HPC) workloads, called Graviton3E. Given that HPC workloads require higher bandwidth, wider datapaths, and data types spanning multiple dimensions, AWS redesigned the Graviton3 processor and enhanced it with new vector-processing capabilities under a new name: Graviton3E. The CPU promises up to 35% higher performance in workloads that depend on heavy vector processing.

With the rising popularity of HPC in the cloud, AWS sees a significant market opportunity and is trying to capture it. Offered through new EC2 instance types, the chip will come with up to 64 vCPU cores and 128 GiB of memory. The EC2 tiers carrying the enhanced chip are the C7gn and Hpc7g instances, which provide 200 Gbps of dedicated network bandwidth optimized for traffic between instances in the same VPC. In addition, Intel-based R7iz instances are available for HPC users in the cloud, now powered by 4th Gen Xeon Scalable processors codenamed Sapphire Rapids.
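To make the "heavy vector processing" point concrete, here is a minimal, hedged sketch of the kind of data-parallel arithmetic that wide vector units (such as the SVE pipelines Graviton3E emphasizes) accelerate. The NumPy expression is illustrative only: it dispatches the whole-array operation to vectorized native code rather than per-element interpreter steps; it does not model the actual Graviton3E hardware.

```python
import numpy as np

def saxpy_scalar(a, x, y):
    # One element at a time -- the work a scalar pipeline would do.
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_vector(a, x, y):
    # Whole-array operation -- many lanes processed per vector instruction.
    return a * x + y

x = np.arange(8, dtype=np.float64)
y = np.ones(8, dtype=np.float64)

# Both paths compute the same result; the vector path is what wide
# SIMD/SVE units speed up.
assert np.allclose(saxpy_scalar(2.0, x, y), saxpy_vector(2.0, x, y))
```

The function names here are illustrative (SAXPY is a classic vectorizable kernel), not anything from the AWS announcement.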

Hewlett Packard Enterprise Brings HPE Cray EX and HPE Cray XD Supercomputers to Enterprise Customers

Hewlett Packard Enterprise (NYSE: HPE) today announced it is making supercomputing accessible for more enterprises to harness insights, solve problems and innovate faster by delivering its world-leading, energy-efficient supercomputers in a smaller form factor and at a lower price point.

The expanded portfolio includes new HPE Cray EX and HPE Cray XD supercomputers, which are based on HPE's exascale innovation that delivers end-to-end, purpose-built technologies in compute, accelerated compute, interconnect, storage, software, and flexible power and cooling options. The supercomputers provide significant performance and AI-at-scale capabilities to tackle demanding, data-intensive workloads, speed up AI and machine learning initiatives, and accelerate innovation to deliver products and services to market faster.

Intel Outs First Xeon Scalable "Sapphire Rapids" Benchmarks, On-package Accelerators Help Catch Up with AMD EPYC

Intel, on the second day of its Innovation event, turned attention to its next-generation Xeon Scalable "Sapphire Rapids" server processors and demonstrated their on-package accelerators. These are fixed-function hardware components that accelerate specific kinds of popular server workloads (i.e., run them faster than a CPU core can). With these, Intel hopes to close the core-count gap with AMD EPYC: the upcoming "Zen 4" EPYC chips are expected to launch with up to 96 cores per socket in their conventional variant and up to 128 cores per socket in their cloud-optimized variant.

Intel's on-package accelerators include AMX (Advanced Matrix Extensions), which accelerates recommendation engines, natural language processing (NLP), image recognition, and similar workloads; DLB (Dynamic Load Balancing), which accelerates security gateways and load balancing; DSA (Data Streaming Accelerator), which speeds up the network stack, guest OSes, and migration; IAA (In-Memory Analytics Accelerator), which speeds up big-data (Apache Hadoop), IMDB, and warehousing applications; a feature-rich implementation of the AVX-512 instruction set for a plethora of content-creation and scientific applications; and lastly QAT (QuickAssist Technology), with speed-ups for data compression, OpenSSL, nginx, IPsec, and more. Unlike on "Ice Lake-SP," QAT is now implemented on the processor package instead of the PCH.
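For readers unfamiliar with what AMX actually computes: its tiles perform small-matrix multiplies on low-precision inputs (INT8 or BF16) while accumulating into wider INT32/FP32 results, the core operation of inference workloads. The sketch below shows that operation in plain NumPy for illustration; it is not the AMX intrinsics themselves, and the widening step mirrors (but does not reproduce) the hardware's tile-accumulate behavior.

```python
import numpy as np

def int8_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """INT8 x INT8 matrix multiply with INT32 accumulation (AMX-style)."""
    assert a.dtype == np.int8 and b.dtype == np.int8
    # Widen before multiplying so products don't overflow the 8-bit range,
    # then accumulate in 32 bits -- the same widening AMX tiles perform.
    return a.astype(np.int32) @ b.astype(np.int32)

a = np.array([[1, -2], [3, 4]], dtype=np.int8)
b = np.array([[5, 6], [7, 8]], dtype=np.int8)
c = int8_matmul(a, b)  # accumulates in int32
```

Quantized neural-network layers chain exactly this pattern, which is why a fixed-function unit for it pays off.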

Intel Moves Xeon Scalable "Sapphire Rapids" General Availability to February-March 2023

Intel is reportedly moving the general availability of its 4th Gen Xeon Scalable processor, codenamed "Sapphire Rapids," to the region of early February to early March 2023. The enterprise processors were expected to debut toward the end of 2022, and some of the oldest company roadmaps referencing the processor put its launch back in Q1 2021. Igor's Lab reports that there are as many as 12 steppings of the processor, the latest discovered being E5 (the others being A0, A1, B0, C0, C1, C2, D0, E0, E2, E3, and E4), although some of these could be validation samples handed out to various large Intel customers to try the chips with their applications. Built on the Intel 7 node, the processor features up to 60 "Golden Cove" CPU cores, a DDR5 memory interface, PCI-Express Gen 5, and various on-die accelerators. Certain variants even feature up to 32 GB of on-package HBM.

All in Liquid Cooling — Inspur Information Launches Full-Stack Liquid-Cooled Server Solutions

Inspur Information, a leading IT infrastructure solutions provider, is rolling out full-stack liquid-cooled products, with cold plate liquid-cooling technology being available in all of its products including general-purpose servers, high-density servers, rack servers, and AI servers. This is another major step in Inspur Information's march towards being carbon neutral following its unveiling of Asia's largest development and manufacturing facility for liquid-cooled data centers.

As green, low-carbon, and sustainable development has become the international consensus, nearly 130 countries and regions around the world have set the goal of being carbon neutral. In 2022, with "All in Liquid-Cooling" incorporated into its strategy, Inspur Information has incorporated cold plate liquid-cooling technology into all of its products (general-purpose servers, high-density servers, rack servers, and AI servers), which can be fully customized for a diverse array of scenarios.

AWS Graviton3 CPU with 64 Cores and DDR5 Memory Available with Three Sockets Per Motherboard

Amazon's AWS division has been making Graviton processors for a few years now, and the company recently announced that its Graviton3 design would soon be available in the cloud. Today, we are witnessing the full launch of Graviton3 CPUs with the first instances available in the AWS Cloud. In the C7g instances, AWS customers can now scale their workloads across variants ranging from 1 to 64 vCPUs. Graviton3 packs 64 cores running at 2.6 GHz, up to 300 GB/s of maximum memory bandwidth from its DDR5 memory controller, 256-bit SVE (Scalable Vector Extension) support, and a seven-die chiplet-based design, all across 55 billion transistors. Paired with up to 128 GiB of DDR5 memory, these processors are compute-intensive solutions. AWS noted that it used a monolithic design for the compute and memory-controller logic to reduce latency and improve performance.

One interesting thing to note is the motherboard that AWS hosts Graviton3 processors on. Usually, server motherboards are single-, dual-, or quad-socket solutions. However, AWS decided to implement a unique solution with three sockets. This tri-socket setup treats each CPU as an independent processor, managed by a Nitro Card, which can handle exactly three CPUs. The company notes that the CPU is now in general availability with C7g instances.

Supermicro Accelerates AI Workloads, Cloud Gaming, Media Delivery with New Systems Supporting Intel's Arctic Sound-M and Intel Habana Labs Gaudi 2

Super Micro Computer, Inc. (Nasdaq: SMCI), a global leader in enterprise computing, storage, networking, and green computing technology, supports two new Intel-based accelerators for demanding cloud gaming, media delivery, AI and ML workloads, enabling customers to deploy the latest acceleration technology from Intel and Intel Habana. "Supermicro continues to work closely with Intel and Habana Labs to deliver a range of server solutions supporting Arctic Sound-M and Gaudi 2 that address the demanding needs of organizations that require highly efficient media delivery and AI training," said Charles Liang, president and CEO. "We continue to collaborate with leading technology suppliers to deliver application-optimized total system solutions for complex workloads while also increasing system performance."

Supermicro can quickly bring new technologies to market by using a Building Block Solutions approach to designing new systems. This methodology allows new GPUs and acceleration technology to be easily placed into existing designs or, when necessary, allows an existing design to be quickly adapted for higher-performing components. "Supermicro helps deliver advanced AI and media processing with systems that leverage our latest Gaudi 2 and Arctic Sound-M accelerators," stated Sandra Rivera, executive vice president and general manager of the Datacenter and AI Group at Intel. "Supermicro's Gaudi AI Training Server will accelerate deep learning training in some of the fastest growing workloads in the datacenter."

Fujitsu Achieves Major Technical Milestone with World's Fastest 36 Qubit Quantum Simulator

Fujitsu has successfully developed the world's fastest quantum computer simulator capable of handling 36 qubit quantum circuits on a cluster system featuring Fujitsu's "FUJITSU Supercomputer PRIMEHPC FX 700" ("PRIMEHPC FX 700"), which is equipped with the same A64FX CPU that powers the world's fastest supercomputer, Fugaku.

The newly developed quantum simulator can execute the quantum simulator software "Qulacs" in parallel at high speed, achieving approximately double the performance of other major quantum simulators in 36 qubit quantum operations. Fujitsu's new quantum simulator will serve as an important bridge towards the development of quantum computing applications that are expected to be put to practical use in the years ahead.
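To see why 36 qubits is a meaningful milestone for state-vector simulation: a full state vector holds 2^n complex amplitudes, so memory doubles with every added qubit. Assuming double-precision complex amplitudes (16 bytes each, a common choice, though simulators can use single precision), the arithmetic works out as follows:

```python
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory for a full quantum state vector of n_qubits qubits."""
    # 2**n complex amplitudes, each stored as a complex double (16 bytes).
    return (2 ** n_qubits) * bytes_per_amplitude

mem_36 = statevector_bytes(36)
print(mem_36 / 2**40)  # 1.0 -- a 36-qubit state vector needs 1 TiB
```

This is why such simulations are distributed across a cluster: each added qubit doubles both the memory footprint and the work per gate.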

Tanzanite Silicon Solutions Demonstrates Industry's First CXL Based Memory Expansion and Memory Pooling Products

Tanzanite Silicon Solutions Inc., the leader in the development of Compute Express Link (CXL) based products, is unveiling its architectural vision and product roadmap with an SoC mapped to an FPGA proof-of-concept vehicle demonstrating Memory Expansion and Memory Pooling, with multi-host CXL based connectivity. Explosive demand for memory and compute to meet the needs of emerging applications such as Artificial Intelligence (AI), Machine Learning (ML), blockchain technology, and the metaverse is outpacing monolithic systems. A disaggregated data center design with composable components for CPU, memory, storage, GPU, and XPU is needed to provide flexible and dynamic pooling of resources to meet the varying demands of heterogeneous workloads in an optimal and efficient manner.

Tanzanite's visionary TanzanoidTZ architecture and purpose-built design of a "Smart Logic Interface Connector" (SLICTZ) SoC enable independent scaling and sharing of memory and compute in a pool with low latency within and across server racks. The Tanzanite solution provides a highly scalable architecture for exascale-level memory capacity and compute acceleration, supporting multiple industry-standard form factors, including E1.S, E3.S, memory expansion boards, and memory appliances.

Intel "Sapphire Rapids" Xeon 4-tile MCM Annotated

Intel Xeon Scalable "Sapphire Rapids" is an upcoming enterprise processor with a CPU core count of up to 60. This core count is achieved using four dies interconnected with EMIB. Locuza, known on social media for logic die annotations, posted one for "Sapphire Rapids," based on a high-resolution die shot Intel revealed in its ISSCC 2022 presentation.

Each of the four dies in "Sapphire Rapids" is a fully-fledged multi-core processor in its own right, complete with CPU cores, an integrated northbridge, memory and PCIe interfaces, and other platform I/O. What brings the four together is the use of five EMIB bridges per die. This allows the CPU cores of one die to transparently access the I/O and memory controlled by any of the other dies. Logically, "Sapphire Rapids" is not unlike AMD "Naples," which uses IFOP (Infinity Fabric over package) to interconnect four 8-core "Zeppelin" dies, but the effort here appears to be to replace a latency-prone organic-substrate interconnect with a high-bandwidth, low-latency one that uses silicon bridges with high-density microscopic wiring (akin to an interposer).

Intel Powers Latest Amazon EC2 General Purpose Instances with 3rd Gen Intel Xeon Scalable Processors

Intel today announced AWS customers can access the latest 3rd Gen Intel Xeon Scalable processors via the new Amazon Elastic Compute Cloud (Amazon EC2) M6i instances. Optimized for high-performance, general-purpose compute, the latest Intel-powered Amazon EC2 instances provide customers increased flexibility and more choices when running their Intel-powered infrastructure within the AWS cloud. Today's news is a continuation of Intel and AWS' close collaboration, which has given customers scalable compute instances in the cloud for almost 15 years.

"Our latest 3rd Gen Intel Xeon Scalable processors are our highest performance data center CPU and provide AWS customers an excellent platform to run their most critical business applications. We look forward to continuing our long-term collaboration with AWS to deploy industry-leading technologies within AWS' cloud infrastructure." -Sandra Rivera, Intel executive vice president and general manager, Datacenter and AI Group.

Xiaomi Announces CyberDog Powered by NVIDIA Jetson NX and Intel RealSense D450

Xiaomi today took another bold step in the exploration of future technology with its new bio-inspired quadruped robot - CyberDog. The launch of CyberDog is the culmination of Xiaomi's engineering prowess, condensed into an open source robot companion that developers can build upon.

CyberDog is Xiaomi's first foray into quadruped robotics for the open source community and developers worldwide. Robotics enthusiasts interested in CyberDog can compete or co-create with other like-minded Xiaomi Fans, together propelling the development and potential of quadruped robots.

Intel Xeon "Sapphire Rapids" Processor Die Shot Leaks

Thanks to information from Yuuki_Ans, who has been leaking details about Intel's upcoming 4th Gen Xeon Scalable processors codenamed Sapphire Rapids, we have the first die shots of the Sapphire Rapids processor and its delidded internals to look at. After delidding the processor and sanding down the metal layers of the dies, the leaker was able to take a few pictures of the dies present on the package. As Sapphire Rapids uses a multi-chip module (MCM) approach to building CPUs, the design should provide better yields for Intel and keep the 10 nm dies usable when defects occur.

In the die shots, we see four dies side by side, each featuring 15 cores. That would amount to 60 cores in total; however, not all 60 are enabled. The top SKU is supposed to feature 56 cores, meaning at least four cores are disabled across the configuration. This gives Intel flexibility to deliver plenty of processors, whatever the yields look like. The leaked CPU is an early engineering sample running at a low 1.3 GHz, which should improve in the final design. Notably, as Sapphire Rapids has SKUs that use in-package HBM2E memory, we don't know if that die configuration will look different from the one pictured.

Intel's Upcoming Sapphire Rapids Server Processors to Feature up to 56 Cores with HBM Memory

Intel has just launched its Ice Lake-SP lineup of Xeon Scalable processors, featuring the new Sunny Cove CPU core design. Built on the 10 nm node, these processors represent Intel's first 10 nm shipping product designed for the enterprise. However, another 10 nm enterprise product is on the way. Intel is already preparing the Sapphire Rapids generation of Xeon processors, and today we get to see more details about it. Thanks to an anonymous tip received by VideoCardz, we have a few more details, such as core counts, memory configurations, and connectivity options, and Sapphire Rapids is shaping up to be a very competitive platform. Do note that the slide is a bit older; however, it contains useful information.

The lineup will top out at 56 cores with 112 threads, with that processor carrying a TDP of 350 W, notably higher than its predecessors. Perhaps the most interesting notes from the slide concern memory. The new platform will debut the DDR5 standard and bring higher capacities at higher speeds. Alongside the new memory standard, the chiplet design of Sapphire Rapids will bring HBM2E memory to CPUs, with up to 64 GB of it per socket/processor. The PCIe 5.0 standard will also be present with 80 lanes, accompanying four Intel UPI 2.0 links. Intel is also expected to extend the x86-64 ISA here with AMX/TMUL extensions for better INT8 and BFloat16 processing.
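For context on the BFloat16 data type mentioned above: BF16 is the upper 16 bits of an IEEE float32, keeping the full 8-bit exponent but cutting the mantissa to 7 bits, which preserves dynamic range while halving storage. A minimal sketch of that truncation is below (using round-toward-zero for simplicity; real hardware typically rounds to nearest, and this is an illustration, not the AMX/TMUL instructions).

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    # Reinterpret the float32 bit pattern and keep only the top 16 bits.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def from_bfloat16_bits(b: int) -> float:
    # Expand back to float32 by zero-filling the dropped mantissa bits.
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# Round-tripping shows the precision loss: 3.14159 -> 3.140625
v = from_bfloat16_bits(to_bfloat16_bits(3.14159))
```

The appeal for training and inference is that BF16 converts to and from float32 with a simple bit shift, unlike IEEE half precision.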

Intel to Launch 3rd Gen Intel Xeon Scalable Portfolio on April 6

Intel today revealed that it will launch its 3rd Generation Xeon Scalable processor series at an online event titled "How Wonderful Gets Done 2021," on April 6, 2021. This will be one of the first major media events headed by Intel's new CEO, Pat Gelsinger. Besides the processor launch, Intel is expected to detail many of its advances in the enterprise space, particularly in the areas of 5G infrastructure rollout, edge computing, and AI/HPC. The 3rd Gen Xeon Scalable processors are based on the new 10 nm "Ice Lake-SP" silicon, heralding the company's first CPU core IPC gain in the server space since 2015. The processors also introduce new I/O capabilities, such as PCI-Express 4.0.

HPE Lists 40-Core Intel Ice Lake-SP Xeon Server Processor

Hewlett Packard Enterprise, the company focused on enterprise hardware and software, has today mistakenly listed some of Intel's upcoming 3rd Gen Xeon Scalable processors. Called Ice Lake-SP, the latest server processor generation is expected to launch in the coming days, with a possible launch date being the March 23rd "Intel Unleashed" webcast. The next generation of processors will finally bring technologies Intel needs in the server space: support for the PCIe 4.0 protocol for higher-speed I/O and an octa-channel DDR4 memory controller for much greater bandwidth. The CPU lineup will, for the first time, use Intel's advanced 10 nm node called 10 nm SuperFin.

Today, in the leaked HPE listing, we get to see some of the Xeon models Intel plans to launch. Ranging from 32-core models all the way to 40-core models, all SKUs above 28 cores are supposed to use a dual-die configuration to achieve their high core counts; the limit of a single die is 28 cores. HPE listed a few models, with the highest-end one being the Intel Xeon Platinum XCC 8380. It features 40 cores with 80 threads and runs at 2.3 GHz. As for TDP, it looks like the 10 nm SuperFin process is yielding good results, as the CPU is rated at only 270 W.

Intel Announces Its Next Generation Memory and Storage Products

Today, at Intel's Memory and Storage 2020 event, the company highlighted six new memory and storage products to help customers meet the challenges of digital transformation. Key to advancing innovation across memory and storage, Intel announced two new additions to its Intel Optane Solid State Drive (SSD) Series: the Intel Optane SSD P5800X, the world's fastest data center SSD, and the Intel Optane Memory H20 for client, which combines Optane performance with mainstream productivity for gaming and content creation. Optane helps meet the needs of modern computing by bringing memory closer to the CPU. The company also revealed its intent to deliver its 3rd generation of Intel Optane persistent memory (code-named "Crow Pass") for cloud and enterprise customers.

"Today is a key moment for our memory and storage journey. With the release of these new Optane products, we continue our innovation, strengthen our memory and storage portfolio, and enable our customers to better navigate the complexity of digital transformation. Optane products and technologies are becoming a mainstream element of business compute. And as a part of Intel, these leadership products are advancing our long-term growth priorities, including AI, 5G networking and the intelligent, autonomous edge." -Alper Ilkbahar, Intel vice president in the Data Platforms Group and general manager of the Intel Optane Group.

Intel Introduces new Security Technologies for 3rd Generation Intel Xeon Scalable Platform, Code-named "Ice Lake"

Intel today unveiled the suite of new security features for the upcoming 3rd generation Intel Xeon Scalable platform, code-named "Ice Lake." Intel is doubling down on its Security First Pledge, bringing its pioneering and proven Intel Software Guard Extension (Intel SGX) to the full spectrum of Ice Lake platforms, along with new features that include Intel Total Memory Encryption (Intel TME), Intel Platform Firmware Resilience (Intel PFR) and new cryptographic accelerators to strengthen the platform and improve the overall confidentiality and integrity of data.

Data is a critical asset both in terms of the business value it may yield and the personal information that must be protected, so cybersecurity is a top concern. The security features in Ice Lake enable Intel's customers to develop solutions that help improve their security posture and reduce risks related to privacy and compliance, such as regulated data in financial services and healthcare.

Intel Enters Strategic Collaboration with Lightbits Labs

Intel Corp. and Lightbits Labs today announced an agreement to propel the development of disaggregated storage solutions, addressing the challenges of today's data center operators, who are seeking improved total cost of ownership (TCO) amid stranded disk capacity and performance. This strategic partnership includes technical co-engineering, go-to-market collaboration, and an Intel Capital investment in Lightbits Labs. Lightbits' LightOS product delivers high-performance shared storage across servers while providing high availability and read-and-write management designed to maximize the value of flash-based storage. LightOS, fully optimized for Intel hardware, provides customers with vastly improved storage efficiency and reduced underutilization while maintaining compatibility with existing infrastructure, without compromising performance or simplicity.

Lightbits Labs will enhance its composable disaggregated software-defined storage solution, LightOS, for Intel technologies, creating an optimized software and hardware solution. The system will utilize Intel Optane persistent memory and Intel 3D NAND SSDs based on Intel QLC Technology, Intel Xeon Scalable processors with unique built-in artificial intelligence (AI) acceleration capabilities and Intel Ethernet 800 Series Network Adapters with Application Device Queues (ADQ) technology. Intel's leadership FPGAs for next-generation performance, flexibility and programmability will complement the solution.

Arm Announces Next-Generation Neoverse V1 and N2 Cores

Ten years ago, Arm set its sights on deploying its compute-efficient technology in the data center with a vision towards a changing landscape that would require a new approach to infrastructure compute.

That decade-long effort to lay the groundwork for a more efficient infrastructure was realized when we announced Arm Neoverse, a new compute platform that would deliver 30% year-over-year performance improvements through 2021. The unveiling of our first two platforms, Neoverse N1 and E1, was significant and important. Not only because Neoverse N1 shattered our performance target by nearly 2x to deliver 60% more performance when compared to Arm's Cortex-A72 CPU, but because we were beginning to see real demand for more choice and flexibility in this rapidly evolving space.

Intel Whitley Platform for Xeon "Ice Lake-SP" Processors Pictured

Here is the first schematic of Intel's upcoming "Whitley" enterprise platform for the upcoming Xeon Scalable "Ice Lake-SP" processors, courtesy of momomo_us. The platform sees the introduction of the new LGA4189 socket, necessitated by Intel increasing the memory channels per socket to eight, compared to six on the current-gen "Cascade Lake-SP." The new platform also introduces the PCI-Express Gen 4.0 bus, with each socket putting out up to 64 CPU-attached PCI-Express Gen 4.0 lanes. These are typically wired out as three x16 slots, two x8 slots, an x4 chipset bus, and a CPU-attached 10 GbE controller.

The processor supports up to 8 memory channels running at DDR4-3200 with ECC. The other key component of the platform is the Intel C621A PCH. The C621A talks to the "Ice Lake-SP" processor over a PCI-Express 3.0 x4 link, and appears to retain gen 3.0 fabric from the older generation C621. momomo_us also revealed that the 10 nm "Ice Lake-SP" processor could have TDP of up to 270 W.
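The eight-channel DDR4-3200 figure translates directly into theoretical peak bandwidth: channels × transfer rate × bytes per transfer, where each DDR4 channel is 64 bits (8 bytes) wide, ECC bits excluded. A quick sketch of the arithmetic:

```python
def peak_bandwidth_gbs(channels: int, mt_per_s: int,
                       bytes_per_transfer: int = 8) -> float:
    """Theoretical peak memory bandwidth in GB/s (MT/s x 8-byte channels)."""
    return channels * mt_per_s * bytes_per_transfer / 1e3  # MT/s -> GB/s

# 8 channels of DDR4-3200 on "Ice Lake-SP":
bw = peak_bandwidth_gbs(channels=8, mt_per_s=3200)
print(bw)  # 204.8 GB/s per socket
```

For comparison, six channels of DDR4-2933 on "Cascade Lake-SP" work out to about 140.8 GB/s, so the extra channels and higher clock are a sizable uplift.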

Intel Xeon Scalable "Ice Lake-SP" 28-core Die Detailed at Hot Chips - 18% IPC Increase

Intel in the opening presentation of the Hot Chips 32 virtual conference detailed its next-generation Xeon Scalable "Ice Lake-SP" enterprise processor. Built on the company's 10 nm silicon fabrication process, "Ice Lake-SP" sees the first non-client and non-mobile deployment of the company's new "Sunny Cove" CPU core that introduces higher IPC than the "Skylake" core that's been powering Intel microarchitectures since 2015. While the "Sunny Cove" core itself is largely unchanged from its implementation in 10th Gen Core "Ice Lake-U" mobile processors, it conforms to the cache hierarchy and tile silicon topology of Intel's enterprise chips.

The "Ice Lake-SP" die Intel talked about in its Hot Chips 32 presentation had 28 cores. The "Sunny Cove" CPU core is configured with the same 48 KB L1D cache as its client-segment implementation, but a much larger 1280 KB (1.25 MB) dedicated L2 cache. The core also receives a second fused multiply/add (FMA-512) unit, which the client-segment implementation lacks. It also receives a handful of new instruction sets exclusive to the enterprise segment, including AVX-512 VPMADD52, Vector AES, Vector Carry-less Multiply, GFNI, SHA-NI, Vector POPCNT, Bit Shuffle, and Vector BMI. In one of the slides, Intel also detailed the performance uplifts from the new instructions compared to "Cascade Lake-SP."
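As a small illustration of one of the instructions named above: Vector POPCNT counts the set bits in every element of a vector in a single instruction, an operation common in bitmap indexes, compression, and chess/bioinformatics kernels. The sketch below shows the per-element computation in plain Python; it is illustrative only, not the AVX-512 intrinsic.

```python
def popcount(x: int) -> int:
    # Count set bits in one element -- what scalar POPCNT computes.
    return bin(x).count("1")

def vector_popcnt(values):
    # Vector POPCNT does this for every lane of a vector at once.
    return [popcount(v) for v in values]

result = vector_popcnt([0b1011, 0xFF, 0, 2**32 - 1])
```

Doing this per-lane in hardware removes a loop that would otherwise cost one instruction per element.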