News Posts matching #HPC


SpiNNcloud Systems Announces First Commercially Available Neuromorphic Supercomputer

Today, in advance of ISC High Performance 2024, SpiNNcloud Systems announced the commercial availability of its SpiNNaker2 platform, a supercomputer-level hybrid AI high-performance computing system based on principles of the human brain. Pioneered by Steve Furber, designer of the original ARM and SpiNNaker1 architectures, the SpiNNaker2 supercomputing platform uses a large number of low-power processors to compute AI and other workloads efficiently.

The first-generation SpiNNaker1 architecture is currently used by dozens of research groups across 23 countries. Sandia National Laboratories, the Technical University of Munich, and the University of Göttingen are among the first customers placing orders for SpiNNaker2, which was developed around commercialized IP invented in the Human Brain Project, a billion-euro research project funded by the European Union to design intelligent, efficient artificial systems.

SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

SK hynix participated in the first-ever Compute Express Link Consortium Developers Conference (CXL DevCon), held in Santa Clara, California from April 30 to May 1. Organized by the CXL Consortium, a group of more than 240 global semiconductor companies, CXL DevCon 2024 welcomed a majority of the consortium's members to showcase their latest technologies and research results.

CXL is a technology that unifies the interfaces of different devices in a system such as semiconductor memory, storage, and logic chips. As it can increase system bandwidth and processing capacity, CXL is receiving attention as a key technology for the AI era in which high performance and capacity are essential. Under the slogan "Memory, The Power of AI," SK hynix showcased a range of CXL products at the conference that are set to strengthen the company's leadership in AI memory technology.
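In practice, a CXL Type 3 memory expander surfaces to the operating system much like ordinary system RAM; under Linux it typically appears as a CPU-less NUMA node. As a minimal, hypothetical sketch of how software could place data on such a device (assuming the expander is enumerated as NUMA node 1, which varies by platform):

```c
/* Minimal sketch: placing a buffer on a CXL-attached memory node via libnuma.
 * Assumes the CXL expander shows up as NUMA node 1; the actual node ID is
 * platform-specific. Build with: gcc cxl_alloc.c -o cxl_alloc -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support is not available on this system\n");
        return 1;
    }

    const int cxl_node = 1;      /* hypothetical node ID for the CXL expander */
    const size_t len = 1 << 20;  /* 1 MiB */

    void *buf = numa_alloc_onnode(len, cxl_node);  /* bind pages to the node */
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, len);  /* touch the pages so they are actually committed */
    printf("1 MiB placed on NUMA node %d\n", cxl_node);

    numa_free(buf, len);
    return 0;
}
```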

AMD Celebrates its 55th Birthday

AMD is now a 55-year-old company. The chipmaker was founded on May 1, 1969, and has traversed practically every era of digital computing to reach where it is today: a company that makes contemporary processors for PCs, servers, and consumer electronics; GPUs for gaming graphics and professional visualization; and the all-important AI HPC processors that are driving the latest era of computing. As of this writing, AMD has a market capitalization of over $237 billion, a presence in all market regions, and supplies hardware and services to nearly every Fortune 500 company, including every IT giant. Happy birthday, AMD!

Micron First to Ship Critical Memory for AI Data Centers

Micron Technology, Inc. (Nasdaq: MU) today announced it is leading the industry by validating and shipping its high-capacity, monolithic 32 Gb DRAM die-based 128 GB DDR5 RDIMM memory at speeds up to 5,600 MT/s on all leading server platforms. Powered by Micron's industry-leading 1β (1-beta) technology, the 128 GB DDR5 RDIMM memory delivers more than 45% improved bit density, up to 22% improved energy efficiency, and up to 16% lower latency than competitive 3DS through-silicon-via (TSV) products.

Micron's collaboration with industry leaders and customers has yielded broad adoption of these new high-performance, large-capacity modules across high-volume server CPUs. These high-speed memory modules were engineered to meet the performance needs of a wide range of mission-critical applications in data centers, including artificial intelligence (AI) and machine learning (ML), high-performance computing (HPC), in-memory databases (IMDBs), and efficient processing for multithreaded, high-core-count general compute workloads. Micron's 128 GB DDR5 RDIMM memory will be supported by a robust ecosystem including AMD, Hewlett Packard Enterprise (HPE), Intel, and Supermicro, along with many others.

Huawei Aims to Develop Homegrown HBM Memory Amidst US Sanctions

According to The Information, in a strategic maneuver to circumvent the constraints imposed by US sanctions, Huawei is accelerating efforts to establish domestic production capabilities for High Bandwidth Memory (HBM) within China. This move addresses the limitations that have hampered the company's advancements in AI and high-performance computing (HPC) sectors. HBM technology plays a pivotal role in enhancing the performance of AI and HPC processors by mitigating memory bandwidth bottlenecks. Recognizing its significance, Huawei has assembled a consortium comprising memory manufacturers backed by the Chinese government and prominent semiconductor companies like Fujian Jinhua Integrated Circuit. This consortium is focused on advancing HBM2 memory technology, which is crucial for Huawei's Ascend-series processors for AI applications.

Huawei's initiative comes at a time when the company faces challenges in accessing HBM from external sources, impacting the availability of its AI processors in the market. Despite facing obstacles such as international regulations restricting the sale of advanced chipmaking equipment to China, Huawei's efforts underscore China's broader push for self-sufficiency in critical technologies essential for AI and supercomputing. By investing in domestic HBM production, Huawei aims to secure a stable supply chain for these vital components, reducing reliance on external suppliers. This strategic shift not only demonstrates Huawei's resilience in navigating geopolitical challenges but also highlights China's determination to strengthen its technological independence in the face of external pressures. As the global tech landscape continues to evolve, Huawei's move to develop homegrown HBM memory could have far-reaching implications for China's AI and HPC capabilities, positioning the country as a significant player in the memory field.

TSMC Celebrates 30th North America Technology Symposium with Innovations Powering AI with Silicon Leadership

TSMC today unveiled its newest semiconductor process, advanced packaging, and 3D IC technologies for powering the next generation of AI innovations with silicon leadership at the company's 2024 North America Technology Symposium. TSMC debuted the TSMC A16 technology, featuring leading nanosheet transistors with an innovative backside power rail solution for production in 2026, bringing greatly improved logic density and performance. TSMC also introduced its System-on-Wafer (TSMC-SoW) technology, an innovative solution that brings revolutionary performance to the wafer level to address future AI requirements for hyperscaler data centers.

This year marks the 30th anniversary of TSMC's North America Technology Symposium; more than 2,000 people attended the event, up from fewer than 100 attendees 30 years ago. The North America Technology Symposium in Santa Clara, California kicks off TSMC's Technology Symposiums around the world in the coming months. The symposium also features an "Innovation Zone," designed to highlight the technology achievements of TSMC's emerging start-up customers.

U.S. Updates Advanced Semiconductor Ban, Actual Impact on the Industry Will Be Insignificant

On March 29th, the United States announced another round of updates to its export controls, targeting advanced computing, supercomputers, semiconductor end-uses, and semiconductor manufacturing products. These new regulations, which took effect on April 4th, are designed to prevent certain countries and businesses from circumventing U.S. restrictions to access sensitive chip technologies and equipment. Despite these tighter controls, TrendForce believes the practical impact on the industry will be minimal.

The latest updates aim to refine the language and parameters of previous regulations, tightening the criteria for exports to Macau and D:5 countries (China, North Korea, Russia, Iran, etc.). They require a detailed examination of all technology products' Total Processing Performance (TPP) and Performance Density (PD). If a product exceeds certain computing power thresholds, it must undergo a case-by-case review. Nevertheless, a new provision, Advanced Computing Authorized (ACA), allows for specific exports and re-exports among selected countries, including the transshipment of particular products between Macau and D:5 countries.
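For context, the rules define TPP as twice the chip's aggregate MAC rate (in TOPS) multiplied by the bit length of the operation, and PD as TPP divided by applicable die area. The sketch below illustrates that screening logic in C; the 4800 and 1600/5.92 thresholds are paraphrased from the October 2023 ECCN 3A090 text and should be treated as assumptions rather than legal guidance:

```c
/* Rough sketch of the TPP/PD screening logic behind the US export rules.
 * TPP = 2 x MacTOPS x bit length of the operation, aggregated per chip;
 * PD  = TPP / applicable die area in mm^2.
 * Threshold values are paraphrased from the October 2023 ECCN 3A090 text
 * and are assumptions for illustration, not legal guidance. */
#include <stdio.h>

static double tpp(double mac_tops, double bit_length)
{
    return 2.0 * mac_tops * bit_length;
}

int main(void)
{
    /* Hypothetical accelerator: 100 TOPS of 16-bit MACs on a 600 mm^2 die. */
    double t  = tpp(100.0, 16.0);  /* = 3200 */
    double pd = t / 600.0;         /* ~ 5.33 */

    int review = (t >= 4800.0) || (t >= 1600.0 && pd >= 5.92);

    printf("TPP = %.0f, PD = %.2f -> %s\n", t, pd,
           review ? "subject to case-by-case review"
                  : "below the 3A090.a thresholds");
    return 0;
}
```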

X-Silicon Startup Wants to Combine RISC-V CPU, GPU, and NPU in a Single Processor

While we are all used to systems with a CPU, a GPU, and, more recently, an NPU, X-Silicon Inc. (XSi), a startup founded by Silicon Valley veterans, has unveiled an interesting RISC-V processor that can simultaneously handle CPU, GPU, and NPU workloads in a single chip. This innovative chip architecture, which will be open source, aims to provide a flexible and efficient solution for a wide range of applications, including artificial intelligence, virtual reality, automotive systems, and IoT devices. The new microprocessor combines a RISC-V CPU core with vector capabilities and GPU acceleration into a single chip, creating a versatile all-in-one processor. By integrating the functionality of a CPU and GPU into a single core, X-Silicon's design offers several advantages over traditional architectures. The chip utilizes the open-source RISC-V instruction set architecture (ISA) for both CPU and GPU operations, running a single instruction stream. This approach promises a lower memory footprint and improved efficiency, as there is no need to copy data between separate CPU and GPU memory spaces.

Called the C-GPU architecture, X-Silicon's design uses a RISC-V Vector Core with 16 32-bit FPUs and a scalar ALU for processing integer as well as floating-point instructions. A unified instruction decoder feeds the cores, which are connected to a thread scheduler, texture unit, rasterizer, clipping engine, neural engine, and pixel processors. Everything feeds into a frame buffer, which in turn feeds the video engine for video output. This core setup allows users to program each core individually for HPC, AI, video, or graphics workloads. Since a chip is unusable without software, X-Silicon is working on OpenGL ES, Vulkan, Mesa, and OpenCL API support. Additionally, the company plans to release a hardware abstraction layer (HAL) for direct chip programming. According to Jon Peddie Research (JPR), the industry has been seeking an open-standard GPU that is flexible and scalable enough to support various markets. X-Silicon's CPU/GPU hybrid chip aims to address this need by providing manufacturers with a single, open chip design that can handle any desired workload. XSi gave no timeline, but it plans to distribute the IP to OEMs and hyperscalers, so first silicon is still some way off.
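X-Silicon has not published its programming model, so as a purely illustrative sketch, here is a SAXPY kernel written with the standard RISC-V Vector (RVV) intrinsics, the kind of single-instruction-stream loop such a vector core could run for graphics, AI, or HPC work alike (this is generic RVV code, not XSi's API):

```c
/* SAXPY (y = a*x + y) using standard RISC-V Vector (RVV 1.0) intrinsics.
 * Generic RVV code for illustration only; X-Silicon's actual API is
 * unpublished. Build with: gcc -march=rv64gcv -O2 -c saxpy.c */
#include <riscv_vector.h>
#include <stddef.h>

void saxpy(size_t n, float a, const float *x, float *y)
{
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m8(n);            /* elements this pass */
        vfloat32m8_t vx = __riscv_vle32_v_f32m8(x, vl); /* load x chunk */
        vfloat32m8_t vy = __riscv_vle32_v_f32m8(y, vl); /* load y chunk */
        vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);    /* vy += a * vx */
        __riscv_vse32_v_f32m8(y, vy, vl);               /* store result */
        x += vl;
        y += vl;
        n -= vl;
    }
}
```

The same source targets any vector length the hardware provides, since `vsetvl` negotiates the chunk size at run time; that portability is one reason an RVV-based unified core is attractive across markets.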

Ultra Ethernet Consortium Experiences Exponential Growth in Support of Ethernet for High-Performance AI

Ultra Ethernet Consortium (UEC) is delighted to announce the addition of 45 new members to its thriving community since November 2023. This remarkable influx of members underscores UEC's position as a unifying force, bringing together industry leaders to build a complete Ethernet-based communication stack architecture for high-performance networking. As a testament to UEC's commitment and the vibrant growth of its community, members shared their excitement about the recent developments. The community testimonials, accessible on our Testimonial page, reflect the positive impact UEC is having on its members. These testimonials highlight the collaborative spirit and the shared vision for the future of high-performance networking.

In the four months since November 2023, when UEC began accepting new members, the consortium has experienced an impressive growth of 450%. In October 2023, UEC boasted a distinguished membership comprising 10 steering members, marking the initial steps towards fostering collaboration in the high-performance networking sector. Now, the community is flourishing with the addition of 45 new member companies, reflecting an extraordinary expansion that demonstrates the industry's recognition of UEC's commitment. With a total of 715 industry experts actively engaged in the eight working groups, UEC is positioned at the forefront of industry collaboration, driving advancements in Ethernet-based communication technologies.
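The growth figure follows directly from the membership counts:

\[ \frac{45\ \text{new members}}{10\ \text{original steering members}} = 4.5 = 450\%\ \text{growth}, \qquad 10 + 45 = 55\ \text{members in total}. \]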

ASUS Presents MGX-Powered Data-Center Solutions

ASUS today announced its participation at the NVIDIA GTC global AI conference, where it will showcase its solutions at booth #730. On show will be the apex of ASUS GPU server innovation, ESC NM1-E1 and ESC NM2-E1, powered by the NVIDIA MGX modular reference architecture, accelerating AI supercomputing to new heights. To help meet the increasing demands for generative AI, ASUS uses the latest technologies from NVIDIA, including the B200 Tensor Core GPU, the GB200 Grace Blackwell Superchip, and H200 NVL, to help deliver optimized AI server solutions to boost AI adoption across a wide range of industries.

To better support enterprises in establishing their own generative AI environments, ASUS offers an extensive lineup of servers, from entry-level to high-end GPU server solutions, plus a comprehensive range of liquid-cooled rack solutions, to meet diverse workloads. Additionally, by leveraging its MLPerf expertise, the ASUS team is pursuing excellence by optimizing hardware and software for large-language-model (LLM) training and inferencing and seamlessly integrating total AI solutions to meet the demanding landscape of AI supercomputing.

NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure

NVIDIA today announced a new wave of networking switches, the X800 series, designed for massive-scale AI. The world's first networking platforms capable of end-to-end 800 Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X800 Ethernet push the boundaries of networking performance for computing and AI workloads. They feature software that further accelerates AI, cloud, data processing and HPC applications in every type of data center, including those that incorporate the newly released NVIDIA Blackwell architecture-based product lineup.

"NVIDIA Networking is central to the scalability of our AI supercomputing infrastructure," said Gilad Shainer, senior vice president of Networking at NVIDIA. "NVIDIA X800 switches are end-to-end networking platforms that enable us to achieve trillion-parameter-scale generative AI essential for new AI infrastructures."

NVIDIA B100 "Blackwell" AI GPU Technical Details Leak Out

Jensen Huang's opening GTC 2024 keynote is scheduled for tomorrow afternoon (13:00 Pacific time), and many industry experts believe that the NVIDIA boss will take the stage and formally introduce his company's B100 "Blackwell" GPU architecture. An enlightened few have been treated to preview (AI and HPC) units, including Dell's CEO, Jeff Clarke, but pre-introduction leaks have not flowed out. Team Green is likely enforcing strict conditions upon a fortunate selection of trusted evaluators within its pool of ecosystem partners and customers.

Today, a brave soul has broken that silence: tech tipster AGF/XpeaGPU, who fears repercussions from the leather-jacketed one, revealed a handful of technical details a day prior to Team Green's highly anticipated unveiling: "I don't want to spoil NVIDIA B100 launch tomorrow, but this thing is a monster. 2 dies on (TSMC) CoWoS-L, 8x8-Hi HBM3E stacks for 192 GB of memory." They also crystal-balled an inevitable follow-up card: "one year later, B200 goes with 12-Hi stacks and will offer a beefy 288 GB. And the performance! It's... oh no Jensen is there... me run away!" Reuters has also joined in on the fun, with some predictions and insider information: "NVIDIA is unlikely to give specific pricing, but the B100 is likely to cost more than its predecessor, which sells for upwards of $20,000." Enterprise products are expected to arrive first, possibly later this year, followed by gaming variants maybe months later.
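The leaked capacities are internally consistent if one assumes the common 24 Gb (3 GB) HBM3E die, an assumption on our part:

\[ 8\ \text{stacks} \times 8\ \text{dies} \times 3\ \text{GB} = 192\ \text{GB}, \qquad 8\ \text{stacks} \times 12\ \text{dies} \times 3\ \text{GB} = 288\ \text{GB}. \]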

Samsung Expected to Unveil Enterprise "PBSSD" Subscription Service at GTC

Samsung Electronics is all set to discuss the future of AI, alongside Jensen Huang, at NVIDIA's upcoming GTC 2024 conference. South Korean insiders have leaked the company's intentions, only days before the event's March 18 kickoff. Their recently unveiled 36 GB HBM3E 12H DRAM product is expected to be the main focus of official presentations; additionally, a new storage subscription service is marked down for a possible live introduction. An overall "Redefining AI Infrastructure" presentation could include, according to BusinessKorea, a planned launch of a "petabyte (PB)-level SSD solution, dubbed 'PBSSD,' along with a subscription service in the US market within the second quarter (of 2024) to address the era of ultra-high-capacity data."

A Samsung statement, likely sourced from leaked material, summarized this business model: "the subscription service will help reduce initial investment costs in storage infrastructure for our customers and cut down on maintenance expenses." Under agreed-upon conditions, customers are not required to purchase ultra-high-capacity SSD solutions outright: "enterprises using the service can flexibly utilize SSD storage without the need to build separate infrastructure, while simultaneously receiving various services from Samsung Electronics related to storage management, security, and upgrades." A special session, "The Value of Storage as a Service for AI/ML and Data Analysis," is alleged to be on the company's GTC schedule.

Intel Postpones Planned Investments in Italy & France

Two years ago, Intel Corporation and the Italian Government initiated negotiations over the "enabling" of a new state-of-the-art back-end manufacturing facility; a potential investment of up to 4.5 billion euros was mentioned at the time. Italy's chipmaking fund was put together in order to attract several big semiconductor firms, but Team Blue appeared to be the primary target. This week, Minister Adolfo Urso confirmed to media outlets that Intel had "given up or postponed its investments in France and Italy, compared with others that it plans in Germany." According to a Reuters report, Intel has not commented on this announcement; a spokesperson declined to make a statement.

Italy's Business Minister stated that he will welcome a continuation of negotiations if Intel leadership chooses to diversify its construction portfolio outside of Germany: "if it decides to complete those projects, we are still here." His nation is set to receive further investments, following a recent announcement from Silicon Box: the Singapore-headquartered advanced semiconductor packaging company has signed a deal worth up to €3.2 billion. Its new Italian facility will "enable next generation applications in artificial intelligence (AI), high performance computing (HPC)," and other segments. Urso reckons that "there will be others in coming months." He also added that a ministry task force had conducted talks with unnamed Taiwanese groups.

ZOTAC Expands Computing Hardware with GPU Server Product Line for the AI-Bound Future

ZOTAC Technology Limited, a global leader in innovative technology solutions, expands its product portfolio with the GPU Server Series. The first series of products in ZOTAC's Enterprise lineup offers organizations affordable and high-performance computing solutions for a wide range of demanding applications, from core-to-edge inferencing and data visualization to model training, HPC modeling, and simulation.

The ZOTAC series of GPU Servers comes in a diverse range of form factors and configurations, featuring both Tower Workstations and Rack Mount Servers, as well as both Intel and AMD processor configurations. With support for up to 10 GPUs, modular design for easier access to internal hardware, a high space-to-performance ratio, and industry-standard features like redundant power supplies and extensive cooling options, ZOTAC's enterprise solutions can ensure optimal performance and durability, even under sustained intense workloads.

The SEA Projects Prepare Europe for Exascale Supercomputing

The HPC research projects DEEP-SEA, IO-SEA and RED-SEA are wrapping up this month after a three-year project term. The three projects worked together to develop key technologies for European Exascale supercomputers, based on the Modular Supercomputing Architecture (MSA), a blueprint architecture for highly efficient and scalable heterogeneous Exascale HPC systems. To achieve this, the three projects collaborated on system software and programming environments, data management and storage, as well as interconnects adapted to this architecture. The results of their joint work will be presented at a co-design workshop and poster session at the EuroHPC Summit (Antwerp, 18-21 March, www.eurohpcsummit.eu).

Global Top 10 Foundries Q4 Revenue Up 7.9%, Annual Total Hits US$111.54 Billion in 2023

The latest TrendForce report reveals a notable 7.9% jump in 4Q23 revenue for the world's top ten semiconductor foundries, reaching $30.49 billion. This growth is primarily driven by sustained demand for smartphone components, such as mid and low-end smartphone APs and peripheral PMICs. The launch season for Apple's latest devices also significantly contributed, fueling shipments for the A17 chipset and associated peripheral ICs, including OLED DDIs, CIS, and PMICs. TSMC's premium 3 nm process notably enhanced its revenue contribution, pushing its global market share past the 60% threshold this quarter.

TrendForce remarks that 2023 was a challenging year for foundries, marked by high inventory levels across the supply chain, a weak global economy, and a slow recovery in the Chinese market. These factors led to a downward cycle in the industry, with the top ten foundries experiencing a 13.6% annual drop as revenue reached just $111.54 billion. Nevertheless, 2024 promises a brighter outlook, with AI-driven demand expected to boost annual revenue by 12% to $125.24 billion. TSMC, benefiting from steady advanced process orders, is poised to far exceed the industry average in growth.

MiTAC Unleashes Revolutionary Server Solutions, Powering Ahead with 5th Gen Intel Xeon Scalable Processors Accelerated by Intel Data Center GPUs

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp., proudly reveals its groundbreaking suite of server solutions that deliver unsurpassed capabilities with the 5th Gen Intel Xeon Scalable Processors. MiTAC introduces its cutting-edge signature platforms that seamlessly integrate Intel Data Center GPUs, both the Intel Max Series and the Intel Flex Series, unleashing an unparalleled leap in computing performance targeting HPC and AI applications.

MiTAC Announces its Full Array of Platforms Supporting the Latest 5th Gen Intel Xeon Scalable Processors
Last year, Intel transitioned the right to manufacture and sell products based on Intel Data Center Solution Group designs to MiTAC. MiTAC confidently announces a transformative upgrade to its product offerings, unveiling advanced platforms that epitomize the future of computing. Featuring up to 64 cores, expanded shared cache, increased UPI speeds, and DDR5 support, the latest 5th Gen Intel Xeon Scalable Processors deliver remarkable performance-per-watt gains across various workloads. MiTAC's Intel Server M50FCP Family and Intel Server D50DNP Family fully support the latest 5th Gen Intel Xeon Scalable Processors through a quick BIOS update and minor technical resource revisions, bringing unsurpassed performance to diverse computing environments.

NVIDIA AI GPU Customers Reportedly Selling Off Excess Hardware

The NVIDIA H100 Tensor Core GPU was last year's hot item for HPC and AI industry segments—the largest purchasers were reported to have acquired up to 150,000 units each. Demand grew so much that lead times of 36 to 52 weeks became the norm for H100-based server equipment. The latest rumblings indicate that things have stabilized—so much so that some organizations are "offloading chips" as the supply crunch cools off. Apparently it is more cost-effective to rent AI processing sessions through cloud service providers (CSPs)—the big three being Amazon Web Services, Google Cloud, and Microsoft Azure.
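The rent-versus-buy calculus here reduces to a simple breakeven question: ownership only wins if the hardware stays busy long enough to amortize its purchase price and upkeep. A toy sketch, with entirely hypothetical placeholder figures rather than quoted market prices:

```c
/* Toy rent-vs-buy breakeven for an AI GPU server. All figures below are
 * hypothetical placeholders, not quoted market prices. */
#include <stdio.h>

int main(void)
{
    double purchase_price = 250000.0;  /* hypothetical 8-GPU server, USD     */
    double yearly_upkeep  = 50000.0;   /* hypothetical power and maintenance */
    double rent_per_hour  = 60.0;      /* hypothetical CSP rate, whole node  */
    double utilization    = 0.5;       /* fraction of hours actually busy    */

    /* Hours of rented compute that the first year of ownership would buy. */
    double year_one_cost   = purchase_price + yearly_upkeep;
    double breakeven_hours = year_one_cost / rent_per_hour;
    double wall_clock      = breakeven_hours / utilization;

    printf("Ownership breaks even after %.0f rented node-hours\n",
           breakeven_hours);
    printf("At %.0f%% utilization, that is %.0f wall-clock hours (%.1f days)\n",
           utilization * 100.0, wall_clock, wall_clock / 24.0);
    return 0;
}
```

With these placeholder numbers, a half-utilized machine would take well over a year of wall-clock time to pay for itself, which is consistent with the reported shift toward renting.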

According to a mid-February Seeking Alpha report, wait times for the NVIDIA H100 80 GB GPU model have been reduced to around three to four months. The Information believes that some companies have already reduced their order counts, while others have hardware sitting around, completely unused. Maintenance complexity and costs are reportedly cited as the main factors in "offloading" unneeded equipment and turning to renting server time from CSPs. Despite improved supply conditions, AI GPU demand is still growing, driven mainly by organizations working with LLMs. A prime example is OpenAI: as pointed out by The Information, insider murmurings have Sam Altman & Co. seeking out alternative solutions and production avenues.

Quantum Machines Launches OPX1000, a High-density Processor-based Control Platform

In September 2023, Quantum Machines (QM) unveiled OPX1000, our most advanced quantum control system to date and the industry's leading controller in terms of performance and channel density. OPX1000 is the third generation of QM's processor-based quantum controllers. It enhances its predecessor, OPX+, by expanding analog performance and multiplying channel density to support the control of over 1,000 qubits. However, QM's vision for quantum controllers extends far beyond that.

OPX1000 is designed as a platform for orchestrating the control of large-scale QPUs (quantum processing units). It is equipped with eight front-end module (FEM) slots, representing the cutting edge of modular architecture for quantum control. The first low-frequency (LF) module was introduced in September 2023, and today we're happy to introduce the Microwave (MW) FEM, which delivers additional value to our rapidly expanding customer base.

NVIDIA GH200 72-core Grace CPU Benched Against AMD Threadripper 7000 Series

GPTshop.ai is building prototypes of its "ultimate high-end desktop supercomputer," running the NVIDIA GH200 "Grace" CPU for AI and HPC workloads. Michael Larabel, founder and principal author of Phoronix, was first allowed to remotely access a GPTshop.ai GH200 576 GB workstation-converted model in early February, for the purpose of benchmarking it against systems based on AMD EPYC Zen 4 and Intel Xeon Emerald Rapids processors. Larabel noted that "it was a very interesting battle" demonstrating the capabilities of the 72 Arm Neoverse V2 cores in Grace: "With this GPTshop.ai GH200 system actually being in workstation form, I also ran some additional benchmarks looking at the CPU capabilities of the GH200 compared to AMD Ryzen Threadripper 7000 series workstations."

Larabel had on-site access to two different Threadripper systems: a Hewlett-Packard (HP) Z6 G5 A workstation and a System76 Thelio Major semi-custom build. No comparable Intel Xeon W hardware was within reach, so the Team Green desktop supercomputer was only pitched against AMD HEDT processors. The HP review sample was configured with an AMD Ryzen Threadripper PRO 7995WX 96-core / 192-thread Zen 4 processor, 8 x 16 GB of DDR5-5200 memory, and an NVIDIA RTX A4000 GPU. Larabel said that it was an "all around nice high-end AMD workstation." The System76 Thelio Major was specced with an AMD Ryzen Threadripper 7980X processor, the top-end non-PRO SKU: a 64-core / 128-thread part working alongside 4 x 32 GB of DDR5-4800 memory and a Radeon PRO W7900 graphics card.

NVIDIA Expects Upcoming Blackwell GPU Generation to be Capacity-Constrained

NVIDIA is anticipating supply issues for its upcoming Blackwell GPUs, which are expected to significantly improve artificial intelligence compute performance. "We expect our next-generation products to be supply constrained as demand far exceeds supply," said Colette Kress, NVIDIA's chief financial officer, during a recent earnings call. This prediction of scarcity comes just days after an analyst noted much shorter lead times for NVIDIA's current flagship Hopper-based H100 GPUs tailored to AI and high-performance computing. The eagerly anticipated Blackwell architecture and B100 GPUs built on it promise major leaps in capability—likely spurring NVIDIA's existing customers to place pre-orders already. With skyrocketing demand in the red-hot AI compute market, NVIDIA appears poised to capitalize on the insatiable appetite for ever-greater processing power.

However, the scarcity of NVIDIA's products may present an excellent opportunity for significant rivals like AMD and Intel. If either company can offer a product that beats NVIDIA's current H100 and provide a suitable software stack, customers would be willing to jump to their offerings rather than wait out the anticipated long lead times. Intel is preparing the next-generation Gaudi 3 and working on the Falcon Shores accelerator for AI and HPC. AMD is shipping its Instinct MI300 accelerator, a highly competitive product, while already working on the MI400 generation. It remains to be seen whether AI companies will begin adopting non-NVIDIA hardware or remain loyal customers and accept the higher lead times of the new Blackwell generation. However, capacity constraints should only be a problem at launch, with availability improving from quarter to quarter. As TSMC improves CoWoS packaging capacity and 3 nm production, NVIDIA's allocation of 3 nm wafers will likely improve over time as the company moves its priority from H100 to B100.

Cadence Digital and Custom/Analog Flows Certified for Latest Intel 18A Process Technology

Cadence's digital and custom/analog flows are certified on the Intel 18A process technology. Cadence design IP supports this node from Intel Foundry, and the corresponding process design kits (PDKs) are delivered to accelerate the development of a wide variety of low-power consumer, high-performance computing (HPC), AI and mobile computing designs. Customers can now begin using the production-ready Cadence design flows and design IP to achieve design goals and speed up time to market.

"Intel Foundry is very excited to expand our partnership with Cadence to enable key markets for the leading-edge Intel 18A process technology," said Rahul Goyal, Vice President and General Manager, Product and Design Ecosystem, Intel Foundry. "We will leverage Cadence's world-class portfolio of IP, AI design technologies, and advanced packaging solutions to enable high-volume, high-performance, and power-efficient SoCs in Intel Foundry's most advanced process technology. Cadence is an indispensable partner supporting our IDM2.0 strategy and the Intel Foundry ecosystem."

Arm Launches Next-Generation Neoverse CSS V3 and N3 Designs for Cloud, HPC, and AI Acceleration

Last year, Arm introduced its Neoverse Compute Subsystem (CSS) for the N2 and V2 series of data center processors, providing a reference platform for the development of efficient Arm-based chips. Major cloud service providers, like AWS with Graviton 4 and Trainium 2, Microsoft with Cobalt 100 and Maia 100, and even NVIDIA with the Grace CPU and BlueField DPUs, are already utilizing custom Arm server CPU and accelerator designs based on the CSS foundation in their data centers. The CSS allows hyperscalers to optimize Arm processor designs specifically for their workloads, focusing on efficiency rather than outright performance. Today, Arm has unveiled the next-generation CSS N3 and V3 for even greater efficiency and AI inferencing capabilities. The N3 design provides up to 32 high-efficiency cores per die with improved branch prediction and larger caches to boost AI performance by 196%, while the V3 design scales up to 64 cores and is 50% faster overall than previous generations.

Both the N3 and V3 leverage advanced features like DDR5, PCIe 5.0, CXL 3.0, and chiplet architecture, continuing Arm's push to make chiplets the standard for data center and cloud architectures. The chiplet approach enables customers to connect their own accelerators and other chiplets to the Arm cores via UCIe interfaces, reducing costs and time-to-market. Looking ahead, Arm has a clear roadmap for its Neoverse platform. The upcoming CSS V4 "Adonis" and N4 "Dionysus" designs will build on the improvements in the N3 and V3, advancing Arm's goal of greater efficiency and performance using optimized chiplet architectures. As more major data center operators introduce custom Arm-based designs, the Neoverse CSS aims to provide a flexible, efficient foundation to power the next generation of cloud computing.

Interposer and Fan-out Wafer Level Packaging Market worth $63.5 billion by 2029: MarketsandMarkets Research

The global interposer and FOWLP market is expected to be valued at USD 35.6 billion in 2024 and is projected to reach USD 63.5 billion by 2029, growing at a CAGR of 12.3% during the forecast period, according to a new report by MarketsandMarkets. Increasing demand for advanced packaging in AI and high-performance computing (HPC) is the key driver fueling the expansion of the interposer and FOWLP market.
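Those headline figures are mutually consistent over the five-year forecast window:

\[ \left(\frac{63.5}{35.6}\right)^{1/5} - 1 \approx 0.123 = 12.3\%\ \text{CAGR}. \]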

Interposer-based packaging is experiencing robust growth in the semiconductor industry, leveraging its ability to enhance performance and reduce power consumption by facilitating efficient connections between diverse chip components. This technology is increasingly adopted for its role in enabling high-bandwidth and high-performance applications, driving advancements in data centers, 5G infrastructure, and emerging technologies.