News Posts matching #HPC


AMD Advances Corporate Responsibility Across its Value Chain

AMD (NASDAQ: AMD) today released its 27th annual Corporate Responsibility Report, demonstrating how AMD - together with its employees, customers, partners, and communities - advances computing to help solve the world's most important social and environmental challenges. From advancing sustainable computing to cultivating a diverse workforce, AMD is committed to responsibly delivering on its mission to be the high-performance and adaptive computing leader. AMD now powers the fastest and most energy-efficient supercomputer in the world - the Frontier supercomputer - as well as 17 of the top 20 most efficient supercomputers. To drive continued innovation, diversity hiring remains a component of the company's strategic metrics and milestones that inform its annual bonus program. AMD also entered into a $3 billion sustainability-linked credit facility, demonstrating its commitment to advancing sustainability.

In 2021, AMD announced new corporate responsibility goals for 2025 and 2030 spanning digital impact, environmental sustainability, supply chain responsibility, and diversity, belonging, and inclusion. Today, the company reported it is on track to achieve these goals. "At AMD, it is not just what our semiconductor technology can do that matters, but also how we develop and deliver it," said Susan Moore, AMD corporate vice president, corporate responsibility, and international government affairs. "Together with our employees, partners, and customers, we create possibilities for how our high-performance and adaptive computing can advance an inclusive, sustainable future for our world."

TSMC has Seven Major Customers Lined Up for its 3 nm Node

Based on media reports out of Taiwan, TSMC seems to have plenty of customers lined up for its 3 nm node, with Apple being the first customer out of the gate when production starts sometime next month. However, TSMC is only expected to start production with a mere 1,000 wafer starts a month, which seems like a very low figure, especially as this is said to remain unchanged through all of Q4. On the plus side, yields are expected to be better than the initial 5 nm node yields. Full-on mass production for the 3 nm node isn't expected to happen until the second half of 2023, and TSMC will also kick off its N3E node sometime in 2023.

Apart from Apple, major customers for the 3 nm node include AMD, Broadcom, Intel, MediaTek, NVIDIA and Qualcomm. Contrary to earlier reports by TrendForce, it appears that TSMC will continue its rollout of the 3 nm node as previously planned. Apple is expected to produce the A17 smartphone and tablet SoC, advanced versions of the M2, and the M3 laptop and desktop processors on the 3 nm node. Intel is still said to be producing its graphics chiplets with TSMC, with the potential for GPU and FPGA products in the future. There's no word on what the other customers are planning to produce on the 3 nm node, but MediaTek and Qualcomm are obviously looking at using the node for future smartphone and tablet SoCs, with AMD and NVIDIA most likely aiming for upcoming GPUs and Broadcom for some kind of HPC-related hardware.

Tachyum Submits Bid for 20-Exaflop Supercomputer to U.S. Department of Energy Advanced Computing Ecosystems

Tachyum today announced that it has responded to a U.S. Department of Energy Request for Information soliciting Advanced Computing Ecosystems for DOE national laboratories engaged in scientific and national security research. Tachyum has submitted a proposal to create a 20-exaflop supercomputer based on Tachyum's Prodigy, the world's first universal processor.

The DOE's request calls for computing systems that are five to 10 times faster than those currently available and/or that can perform more complex applications in "data science, artificial intelligence, edge deployments at facilities, and science ecosystem problems, in addition to the traditional modeling and simulation applications."

Infortrend EonStor GS All Flash U.2 Storage with 100Gb Ethernet Connectivity Tackles Extreme Workloads

Infortrend Technology, Inc., the industry-leading enterprise storage provider, released its flagship EonStor GS all-flash unified storage systems. Featuring the latest Intel Xeon D CPU, PCIe Gen4, and 100 GbE, the solutions are ideal for applications requiring low latency and high performance, such as databases, virtualization, HPC, and multimedia and entertainment (M&E).

The EonStor GS series is designed for enterprises to flexibly deploy and utilize in a variety of applications. It has been chosen and deployed by several global enterprises and organizations, including world-renowned carmakers, the Czech Republic's Municipal Library, and the Turkish media conglomerate Ciner Media Group.

SMART Modular Announces SMART Zefr Memory with Ultra-High Reliability Performance for Demanding Compute Applications

SMART Modular Technologies, Inc. ("SMART"), a division of SGH and a global leader in memory solutions, solid-state drives, and hybrid storage products, announces its SMART Zefr Memory, a proprietary process that eliminates more than 90% of memory reliability failures and optimizes memory subsystems for maximum uptime. System start-up delays are frequently attributed to memory errors. These failures reduce system efficiency and may also lead to higher maintenance costs and lower system yield rates. SMART Zefr Memory has been tested under real-world conditions to identify and filter out marginal components that may undermine memory reliability.

SMART Zefr Memory uses a proprietary screening process developed by SMART that when performed on memory modules ensures the industry's highest levels of uptime and reliability. SMART Zefr Memory is ideally suited for data centers, hyperscalers, high performance computing (HPC) platforms, and other environments that run large memory applications and depend on uptime for customers.
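
As a rough illustration of what a greater-than-90% reduction in memory failures means at fleet scale, here is a minimal sketch; the fleet size and baseline annual failure rate below are assumptions for illustration, since SMART does not publish those figures.

```python
# Hypothetical illustration: fleet size and baseline failure rate are assumed,
# not vendor data; only the ">90% of failures eliminated" figure comes from SMART.

fleet_modules = 100_000        # assumed number of memory modules in service
baseline_afr = 0.005           # assumed 0.5% annual failure rate per module
screening_reduction = 0.90     # "more than 90%" of reliability failures eliminated

failures_before = fleet_modules * baseline_afr
failures_after = failures_before * (1 - screening_reduction)

print(f"Expected annual module failures without screening: {failures_before:.0f}")
print(f"Expected annual module failures with screening:    {failures_after:.0f}")
```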

Biren Technology Unveils BR100 7 nm HPC GPU with 77 Billion Transistors

Chinese company Biren Technology has recently unveiled the Biren BR100 HPC GPU during its Biren Explore Summit 2022 event. The Biren BR100 features an in-house chiplet architecture with 77 billion transistors and is manufactured on a 7 nm process using TSMC's 2.5D CoWoS packaging technology. The card is equipped with 300 MB of onboard cache alongside 64 GB of HBM2E memory running at 2.3 TB/s. This combination delivers performance above that of the NVIDIA Ampere A100, achieving 1024 TFLOPS in 16-bit floating-point operations.

The company also announced the BR104 which features a monolithic design and should offer approximately half the performance of the BR100 at a TDP of 300 W. The Biren BR104 will be available as a standard PCIe card while the BR100 will come in the form of an OAM compatible board with a custom tower cooler. The pricing and availability information for these cards is currently unknown.
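
For context on the performance claim above, the quoted 16-bit throughput can be put next to NVIDIA's published A100 figure in a quick, hedged sketch; the 312 TFLOPS dense FP16/BF16 tensor rate used below is an assumption drawn from NVIDIA's public A100 specifications, not from Biren's announcement.

```python
# Hedged throughput comparison; the A100 figure is an assumed public-spec value.

br100_fp16_tflops = 1024                   # Biren's claimed 16-bit throughput
br104_fp16_tflops = br100_fp16_tflops / 2  # BR104 said to offer roughly half
a100_fp16_tensor_tflops = 312              # assumed A100 dense FP16/BF16 tensor rate

print(f"BR100 vs A100 (16-bit): ~{br100_fp16_tflops / a100_fp16_tensor_tflops:.1f}x")
print(f"Estimated BR104 16-bit throughput: ~{br104_fp16_tflops:.0f} TFLOPS")
```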

Phison Debuts the X1 to Provide the Industry's Most Advanced Enterprise SSD Solution

Phison Electronics Corp., a global leader in NAND flash controller and storage solutions, today announced the launch of its X1 controller-based solid-state drive (SSD) platform, which delivers the industry's most advanced enterprise SSD solution. Engineered with Phison's technology to meet the evolving demands of faster and smarter global data-center infrastructures, the X1 SSD platform was designed in partnership with Seagate Technology Holdings plc, a world leader in mass-data storage infrastructure solutions. The customizable X1 SSD platform offers more computing with less energy consumption. With a cost-effective solution that eliminates bottlenecks and improves quality of service, the X1 offers more than a 30 percent increase in data reads over existing market competitors for the same power used.

"We combined Seagate's proprietary data management and customer integration capabilities with Phison's cutting-edge technology to create highly customized SSDs that meet the ever-evolving needs of the enterprise storage market," said Sai Varanasi, senior vice president of product and business marketing at Seagate Technology. "Seagate is excited to partner with Phison on developing advanced SSD technology to provide the industry with increased density, higher performance and power efficiency for all mass capacity storage providers."

Avicena Raises $25 Million in Series A to Fund Development of High Capacity microLED-based Optical Interconnects

AvicenaTech Corp., the leader in microLED-based chip-to-chip interconnects, today announced that the company has secured $25M in Series A funding from Samsung Catalyst Fund, Cerberus Capital Management, Clear Ventures, and Micron Ventures to drive the development of products based on Avicena's breakthrough photonic I/O solution. "We believe that Avicena technology can be transformational in unlocking compute-to-memory chip-to-chip high-speed interconnects. Such technology can be central to supporting future disaggregated architectures and distributed high-performance computing (HPC) systems," said Marco Chisari, EVP of Samsung Electronics and Head of the Samsung Semiconductor Innovation Center.

"We are excited to participate in this round at Avicena," said Amir Salek, Senior Managing Director at Cerberus Capital Management and former Head of silicon for Google Infrastructure and Cloud. "Avicena has a highly differentiated technology addressing one of the main challenges in modern computer architecture. The technology offered by Avicena meets the needs for scaling future HPC and cloud compute networks and covers applications in conventional datacenter and 5G cellular networking."

Supermicro Launches Multi-GPU Cloud Gaming Solutions Based on Intel Arctic Sound-M

Super Micro Computer, Inc., a global leader in enterprise computing, storage, networking, and green computing technology, is announcing future Total IT Solutions for Android Cloud Gaming and Media Processing & Delivery. These new solutions will incorporate the Intel Data Center GPU, codenamed Arctic Sound-M, and will be supported on several Supermicro servers. The Supermicro systems that will carry the Intel Data Center GPU include the 4U 10x GPU server for transcoding and media delivery; the Supermicro BigTwin system with up to eight of the GPUs in 2U for media processing applications; the Supermicro CloudDC server for edge AI inferencing; and the Supermicro 2U 2-Node server with three of the GPUs per node, optimized for cloud gaming. Additional systems will be made available later this year.

"Supermicro will extend our media processing solutions by incorporating the Intel Data Center GPU," said Charles Liang, President, and CEO, Supermicro. "The new solutions will increase video stream rates and enable lower latency Android cloud gaming. As a result, Android cloud gaming performance and interactivity will increase dramatically with the Supermicro BigTwin systems, while media delivery and transcoding will show dramatic improvements with the new Intel Data Center GPUs. The solutions will expand our market-leading accelerated computing offerings, including everything from Media Processing & Delivery to Collaboration, and HPC."

Semiconductor Fab Order Cancellations Expected to Result in Reduced Capacity Utilization Rate in 2H22

According to TrendForce investigations, foundries have seen a wave of order cancellations, with the first of these revisions originating from large-size Driver IC and TDDI, which rely on mainstream 0.1X μm and 55 nm processes, respectively. Although products such as MCUs and PMICs were previously in short supply, foundries kept capacity utilization close to full by adjusting their product mix. However, a recent wave of cancellations has emerged for PMIC, CIS, and certain MCU and SoC orders. Although still dominated by consumer applications, foundries are beginning to feel the strain of the copious order cancellations from customers, and capacity utilization rates have officially declined.

Looking at trends in 2H22, TrendForce indicates, in addition to no relief from the sustained downgrade of driver IC demand, inventory adjustment has begun for smartphones, PCs, and TV-related peripheral components such as SoCs, CIS, and PMICs, and companies are beginning to curtail their wafer input plans with foundries. This phenomenon of order cancellations is occurring simultaneously in 8-inch and 12-inch fabs at nodes including 0.1X μm, 90/55 nm, and 40/28 nm. Not even the advanced 7/6 nm processes are immune.

Cerebras Systems Sets Record for Largest AI Models Ever Trained on A Single Device

Cerebras Systems, the pioneer in high performance artificial intelligence (AI) computing, today announced, for the first time ever, the ability to train models with up to 20 billion parameters on a single CS-2 system - a feat not possible on any other single device. By enabling a single CS-2 to train these models, Cerebras reduces the system engineering time necessary to run large natural language processing (NLP) models from months to minutes. It also eliminates one of the most painful aspects of NLP - namely the partitioning of the model across hundreds or thousands of small graphics processing units (GPUs).
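
To see why a 20-billion-parameter model normally has to be partitioned across many GPUs, a rough memory estimate is enough; the sketch below assumes mixed-precision training with the Adam optimizer (FP16 weights and gradients plus FP32 master weights and moments), a common setup that Cerebras does not specify here, and it ignores activation memory entirely.

```python
# Rough, hedged estimate of training-state memory for a 20B-parameter model.
# Assumes mixed-precision Adam; activation memory is ignored.

params = 20e9                 # 20 billion parameters

bytes_per_param = (
    2 +   # FP16 weights
    2 +   # FP16 gradients
    4 +   # FP32 master weights
    4 +   # Adam first moment (FP32)
    4     # Adam second moment (FP32)
)

total_gb = params * bytes_per_param / 1e9
print(f"Approximate training state: {total_gb:.0f} GB, "
      f"far beyond the 40-80 GB of a single conventional GPU")
```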

"In NLP, bigger models are shown to be more accurate. But traditionally, only a very select few companies had the resources and expertise necessary to do the painstaking work of breaking up these large models and spreading them across hundreds or thousands of graphics processing units," said Andrew Feldman, CEO and Co-Founder of Cerebras Systems. "As a result, only very few companies could train large NLP models - it was too expensive, time-consuming and inaccessible for the rest of the industry. Today we are proud to democratize access to GPT-3 1.3B, GPT-J 6B, GPT-3 13B and GPT-NeoX 20B, enabling the entire AI ecosystem to set up large models in minutes and train them on a single CS-2."

AMD Instinct MI300 APU to Power El Capitan Exascale Supercomputer

The Exascale supercomputing race is now well underway, as the US-based Frontier supercomputer has been delivered, and we now wait to see the remaining systems join the race. Today, during the 79th HPC User Forum at Oak Ridge National Laboratory (ORNL), Terri Quinn of Lawrence Livermore National Laboratory (LLNL) delivered a few insights into what the El Capitan exascale machine will look like. It seems the new powerhouse will be based on AMD's Instinct MI300 APU. LLNL targets peak performance of over two exaFLOPs and sustained performance of more than one exaFLOP, under 40 megawatts of power. This will require a very dense and efficient computing solution, which is exactly what the MI300 APU promises to be.

As a reminder, the AMD Instinct MI300 is an APU that combines Zen 4 x86-64 CPU cores, CDNA3 compute-oriented graphics, large cache structures, and HBM memory used as DRAM on a single package. This is achieved using a multi-chip module design with 2.5D and 3D chiplet integration using Infinity architecture. The system will essentially utilize thousands of these APUs to become one large Linux cluster. It is slated for installation in 2023, with an operating lifespan from 2024 to 2030.
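
Those targets also imply a notable efficiency figure, which a quick back-of-the-envelope calculation from the publicly stated goals (over two exaFLOPS peak within 40 MW) makes explicit.

```python
# Efficiency implied by LLNL's stated El Capitan targets.

peak_flops = 2e18      # more than 2 exaFLOPS peak, per LLNL
power_watts = 40e6     # under 40 MW

gflops_per_watt = peak_flops / power_watts / 1e9
print(f"Implied efficiency: at least {gflops_per_watt:.0f} GFLOPS per watt at peak")
```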

PCI-SIG Announces PCI Express 7.0 Specification to Reach 128 GT/s

PCI-SIG today announced that the PCI Express (PCIe) 7.0 specification will double the data rate to 128 GT/s and is targeted for release to members in 2025. "For 30 years the guiding principle of PCI-SIG has been, 'If we build it, they will come,'" observed Nathan Brookwood, Research Fellow at Insight 64. "Early parallel versions of PCI technology accommodated speeds of hundreds of megabytes/second, well matched to the graphics, storage and networking demands of the 1990s.

In 2003, PCI-SIG evolved to a serial design that supported speeds of gigabytes/second to accommodate faster solid-state disks and 100MbE Ethernet. Almost like clockwork, PCI-SIG has doubled PCIe specification bandwidth every three years to meet the challenges of emerging applications and markets. Today's announcement of PCI-SIG's plan to double the channel's speed to 512 GB/s (bi-directionally) puts it on track to double PCIe specification performance for another 3-year cycle."
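
As a sanity check on those figures, the sketch below works out the approximate x16 link bandwidth per generation from the raw signaling rate; encoding and FLIT overheads are ignored, so these are rough upper bounds rather than exact specification values.

```python
# Approximate x16 bandwidth per PCIe generation from the per-lane signaling rate.
# Encoding/FLIT overhead is ignored, so treat these as rough upper bounds.

signaling_gts = {"4.0": 16, "5.0": 32, "6.0": 64, "7.0": 128}  # GT/s per lane
lanes = 16

for gen, gts in signaling_gts.items():
    one_way_gbs = gts * lanes / 8   # GT/s ~ Gb/s per lane; divide by 8 for bytes
    print(f"PCIe {gen} x{lanes}: ~{one_way_gbs:.0f} GB/s per direction, "
          f"~{2 * one_way_gbs:.0f} GB/s bidirectional")
```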

EuroHPC Joint Undertaking Announces Five Sites to Host new World-Class Supercomputers

JUPITER, the first European exascale supercomputer, will be hosted by the Jülich Supercomputing Centre in Germany. Exascale supercomputers are systems capable of performing more than a billion billion calculations per second and represent a significant milestone for Europe. By supporting the development of high-precision models of complex systems, they will have a major impact on European scientific excellence.

Researchers Use SiFive's RISC-V SoC to Build a Supercomputer

Researchers from Università di Bologna and CINECA, the largest supercomputing center in Italy, have been exploring the concept of building a RISC-V supercomputer. The team has laid the grounds for the first-ever implementation that demonstrates the capability of the relatively novel ISA to run high-performance computing workloads. A supercomputer is assembled from hardware pieces that act like Lego building blocks: nodes, each made from a motherboard, processor, memory, and storage, which combine into a cluster. The Italian researchers decided to try something different from the usual Intel/AMD solutions and use a processor based on the RISC-V ISA. Using SiFive's Freedom U740 SoC as the base, the researchers named their RISC-V cluster "Monte Cimone."

Monte Cimone features four dual-board servers, each in a 1U form factor. Each board has a SiFive Freedom U740 SoC with four U74 cores running at up to 1.4 GHz and one S7 management core, so the eight nodes combine for a total of 32 RISC-V cores. Each node pairs this with 16 GB of 64-bit DDR4 memory operating at 1866 MT/s, a PCIe Gen 3 x8 bus running at 7.8 GB/s, one Gigabit Ethernet port, and USB 3.2 Gen 1 interfaces, and each system is powered by two 250 W PSUs to support future expansion and the addition of accelerator cards.
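
A short sketch that aggregates the published per-node figures gives a sense of the cluster's overall footprint; the totals follow directly from the numbers above.

```python
# Aggregate Monte Cimone's per-node specs into cluster totals.

nodes = 8               # four dual-board 1U servers, two nodes each
cores_per_node = 4      # four U74 application cores per Freedom U740 SoC
mem_gb_per_node = 16    # 16 GB of DDR4-1866 per node

print(f"Total RISC-V application cores: {nodes * cores_per_node}")
print(f"Total DDR4 memory: {nodes * mem_gb_per_node} GB")
```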

Intel Announces "Rialto Bridge" Accelerated AI and HPC Processor

During the International Supercomputing Conference on May 31, 2022, in Hamburg, Germany, Jeff McVeigh, vice president and general manager of the Super Compute Group at Intel Corporation, announced Rialto Bridge, Intel's data center graphics processing unit (GPU). Using the same architecture as the Intel data center GPU Ponte Vecchio and combining enhanced tiles with Intel's next process node, Rialto Bridge will offer up to 160 Xe cores, more FLOPs, more I/O bandwidth and higher TDP limits for significantly increased density, performance and efficiency.

"As we embark on the exascale era and sprint towards zettascale, the technology industry's contribution to global carbon emissions is also growing. It has been estimated that by 2030, between 3% and 7% of global energy production will be consumed by data centers, with computing infrastructure being a top driver of new electricity use," said Jeff McVeigh, vice president and general manager of the Super Compute Group at Intel Corporation.

Ayar Labs Partners with NVIDIA to Deliver Light-Based Interconnect for AI Architectures

Ayar Labs, the leader in chip-to-chip optical connectivity, is working with NVIDIA to develop groundbreaking artificial intelligence (AI) infrastructure based on optical I/O technology to meet the future demands of AI and high performance computing (HPC) workloads. The collaboration will focus on integrating Ayar Labs' technology to develop scale-out architectures enabled by high-bandwidth, low-latency and ultra-low-power optical-based interconnects for future NVIDIA products. Together, the companies plan to accelerate the development and adoption of optical I/O technology to support the explosive growth of AI and machine learning (ML) applications and data volumes.

Optical I/O uniquely changes the performance and power trajectories of system designs by enabling compute, memory and networking ASICs to communicate with dramatically increased bandwidth, at lower latency, over longer distances and at a fraction of the power of existing electrical I/O solutions. The technology is also foundational to enabling emerging heterogeneous compute systems, disaggregated/pooled designs, and unified memory architectures that are critical to accelerating future data center innovation.

GIGABYTE Joins Computex to Promote Emerging Enterprise Technologies

GIGABYTE Technology, (TWSE: 2376), an industry leader in high-performance servers and workstations, today announced its participation in Computex, which runs from May 24-27. GIGABYTE's booth will showcase the latest in accelerated computing, liquid & immersion cooling, and edge solutions, as well as provide an opportunity for attendees to get a glimpse at next-gen enterprise hardware. GIGABYTE also announced plans to support the NVIDIA Grace CPU Superchip and NVIDIA Grace Hopper Superchip. With these new hardware innovations, GIGABYTE is charting a path to meet data center requirements for the foreseeable future; many of those important technological advances will be on show at Computex, as well as behind the NDA curtain.

Computex, held annually in Taiwan for over 20 years, is one of the world's largest computer expos and continues to generate great interest and anticipation from the international community. The Nangang Exhibition Center will once again have a strong presence of manufacturers and component buyers, spanning the consumer PC market to the enterprise.

HPE Builds Supercomputer Factory in Czech Republic

Hewlett Packard Enterprise (NYSE: HPE) today announced its ongoing commitment to Europe by building its first factory in the region for next-generation high performance computing (HPC) and artificial intelligence (AI) systems, to accelerate delivery to customers and strengthen the region's supplier ecosystem. The new site will manufacture HPE's industry-leading systems as custom-designed solutions to advance scientific research, mature AI/ML initiatives, and bolster innovation.

The dedicated HPC factory, which will become the fourth of HPE's global HPC sites, will be located in Kutná Hora, Czech Republic, next to HPE's existing European site for manufacturing its industry-standard servers and storage solutions. Operations will begin in summer 2022.

Tachyum Delivers the Highest AI and HPC Performance with the Launch of the World's First Universal Processor

Tachyum today launched the world's first universal processor, Prodigy, which unifies the functionality of a CPU, GPU and TPU in a single processor, creating a homogeneous architecture, while delivering massive performance improvements at a cost many times less than competing products.

After the company undertook its mission to conquer the processor performance plateau in nanometer-class chips and the systems they power, Tachyum has succeeded by launching its first commercial product. The Prodigy Cloud/AI/HPC supercomputer processor offers 4x the performance of the fastest Xeon, 3x the raw performance of NVIDIA's H100 on HPC workloads, 6x the raw performance on AI training and inference workloads, and up to 10x the performance at the same power. Prodigy is poised to overcome the challenges of increasing data center power consumption, low server utilization and stalled performance scaling.

Ayar Labs Raises $130 Million for Light-based Chip-to-Chip Communication

Ayar Labs, the leader in chip-to-chip optical connectivity, today announced that the company has secured $130 million in additional financing led by Boardman Bay Capital Management to drive the commercialization of its breakthrough optical I/O solution. Hewlett Packard Enterprise (HPE) and NVIDIA entered this investment round, joining existing strategic investors Applied Ventures LLC, GlobalFoundries, Intel Capital, and Lockheed Martin Ventures. Other new strategic and financial investors participating in the round include Agave SPV, Atreides Capital, Berkeley Frontier Fund, IAG Capital Partners, Infinitum Capital, Nautilus Venture Partners, and Tyche Partners. They join existing investors such as BlueSky Capital, Founders Fund, Playground Global, and TechU Venture Partners.

"As a successful technology-focused crossover fund operating for over a decade, Ayar Labs represents our largest private investment to date," said Will Graves, Chief Investment Officer at Boardman Bay Capital Management. "We believe that silicon photonics-based optical interconnects in the data center and telecommunications markets represent a massive new opportunity and that Ayar Labs is the leader in this emerging space with proven technology, a fantastic team, and the right ecosystem partners and strategy."

TSMC First Quarter 2022 Financials Show 45.1% Increase in Net Income

A new quarter and another forecast-shattering revenue report from TSMC, as the company beat analysts' forecasts by over US$658 million, with total revenue for the quarter of US$17.6 billion and net income of almost US$7.26 billion. That's a year-on-year increase of 45.1 percent in net income and 35.5 percent in sales. Although the monetary figures might be interesting to some, far more interesting details were also shared, such as production updates about future nodes. As a follow-up to yesterday's news post about 3-nanometer nodes, the N3 node is officially on track for mass production in the second half of this year. TSMC says that customer engagement is stronger than at the start of its N5 and N7 nodes, with HPC and smartphone chip makers lining up to get onboard. The N3E node is, as reported yesterday, expected to enter mass production in the second half of 2023, or a year after N3. Finally, the N2 node is expected in 2025 and won't adhere to TSMC's two-year process technology cadence.

Breaking down the revenue by node, N7 has taken back the lead over N5, as N7 accounted for 30 percent of TSMC's Q1 revenue, up from 27 percent last quarter but down from 35 percent in the previous year. N5 sits at 20 percent, which is down from 23 percent in the previous quarter, but up from 14 percent a year ago. The 16 and 28 nm nodes still hold on to 25 percent of TSMC's revenue, which is the same as a year ago and up slightly from the previous quarter. The remaining nodes are unchanged from last quarter.
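
Applying those reported shares to the quarter's roughly US$17.6 billion in revenue gives an approximate dollar breakdown; this is an estimate derived from the rounded percentages above, not a figure TSMC reports directly.

```python
# Rough dollar estimates from TSMC's reported Q1 2022 revenue mix.
# Derived from rounded percentage shares, so treat them as approximations.

total_revenue_busd = 17.6                                 # Q1 2022, billions USD
node_share = {"N5": 0.20, "N7": 0.30, "16/28 nm": 0.25}   # reported shares

for node, share in node_share.items():
    print(f"{node}: ~${total_revenue_busd * share:.1f}B")

other = 1 - sum(node_share.values())
print(f"Other nodes: ~${total_revenue_busd * other:.1f}B")
```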

Tachyum Successfully Runs FreeBSD in Prodigy Ecosystem; Expands Open-Source OS Support

Tachyum today announced it has completed validation of its Prodigy Universal Processor and software ecosystem with the operating system FreeBSD, and completed the Prodigy instruction set architecture (ISA) for FreeBSD porting. FreeBSD powers modern servers, desktops, and embedded platforms in environments that value performance, stability, and security. It is the platform of choice for many of the busiest websites and the most pervasive embedded networking and storage devices.

The validation of FreeBSD extends Tachyum's support for open-source operating systems and tools, including Linux, Yocto Project, PHP, MariaDB, PostgreSQL, Apache, QEMU, Git, RabbitMQ, and more.

NVIDIA Claims Grace CPU Superchip is 2X Faster Than Intel Ice Lake

When NVIDIA announced its Grace CPU Superchip, the company officially showed its efforts to create an HPC-oriented processor to compete with Intel and AMD. The Grace CPU Superchip combines two Grace CPU modules that use NVLink-C2C technology to deliver 144 Arm v9 cores and 1 TB/s of memory bandwidth. Each core is based on the Arm Neoverse N2 Perseus design, configured to achieve the highest throughput and bandwidth. As far as performance is concerned, the only detail NVIDIA provides on its website is an estimated SPECrate 2017_int_base score of over 740. Thanks to the colleagues over at Tom's Hardware, we have another performance figure to look at.

NVIDIA has published a slide comparing the chip with Intel's Ice Lake server processors. One Grace CPU Superchip was compared to two Xeon Platinum 8360Y Ice Lake CPUs configured in a dual-socket server node. The Grace CPU Superchip outperformed the Ice Lake configuration by two times and provided 2.3 times the efficiency in a WRF simulation. This HPC application is CPU-bound, allowing the new Grace CPU to show off, thanks to the Arm v9 Neoverse N2 cores pairing efficiency with outstanding performance. NVIDIA also made a graph showcasing the HPC applications running on Arm today, with many more to come, which you can see below. Remember that NVIDIA provides this information, so we have to wait for the 2023 launch to see it in action.
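
Taken together, the quoted ratios also imply how much power the Grace node draws relative to the dual-Xeon node, as the short derivation below shows; it uses only NVIDIA's own 2x performance and 2.3x efficiency figures.

```python
# Implied power ratio from NVIDIA's quoted WRF results:
# ~2x the performance at ~2.3x the efficiency (performance per watt).

perf_ratio = 2.0        # Grace CPU Superchip vs. dual Xeon 8360Y node
efficiency_ratio = 2.3  # perf/W advantage quoted by NVIDIA

implied_power_ratio = perf_ratio / efficiency_ratio
print(f"Implied Grace power draw vs. the dual-Xeon node: ~{implied_power_ratio:.2f}x")
```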

Intel Planning a Return to HEDT with "Alder Lake-X"?

Enthused with its IPC leadership, Intel is possibly planning a return to the high-end desktop (HEDT) market segment, with the "Alder Lake-X" line of processors, according to a Tom's Hardware report citing a curious-looking addition to an AIDA64 beta change-log. The exact nature of "Alder Lake-X" (ADL-X) still remains a mystery—one theory holds that ADL-X could be a consumer variant of the "Sapphire Rapids" microarchitecture, much like how the 10th Gen Core "Cascade Lake-X" was to "Cascade Lake," a server processor microarchitecture. Given that Intel is calling it "Alder Lake-X" and not "Sapphire Rapids-X," it could even be a whole new client-specific silicon. What's the difference between the two? It's all in the cores.

While both "Alder Lake" and "Sapphire Rapids" come with "Golden Cove" performance cores (P-cores), they use variants of it. Alder Lake has the client-specific variant with 1.25 MB L2 cache, a lighter client-relevant ISA, and other optimizations that enable it to run at higher clock speeds. Sapphire Rapids, on the other hand, will use a server-specific variant of "Golden Cove" that's optimized for the Mesh interconnect, has 2 MB of L2 cache, a server/HPC-relevant ISA, and a propensity to run at lower clock speeds, to support the silicon's overall TDP and high CPU core-count.