News Posts matching #Data Center GPU


JPR: Q3'24 PC Graphics AIB Shipments Decreased 14.5% Compared to Last Quarter

According to a new research report from the analyst firm Jon Peddie Research, global PC-based graphics add-in board (AIB) shipments reached 8.1 million units in Q3'24, while desktop PC CPU shipments increased to 20.1 million units. Overall, AIBs will see a compound annual growth rate of -6.0% from 2024 to 2028 and reach an installed base of 119 million units at the end of the forecast period. Over the next five years, the penetration of AIBs in desktop PCs will be 83%.

As indicated in the following chart, AMD's overall AIB market share decreased 2.0% from last quarter, and NVIDIA's market share increased by 2.0%. These slight flips of market share in a down quarter don't mean much except to the winner. The overall market dynamics haven't changed.
  • The AIB overall attach rate in desktop PCs for the quarter decreased to 141%, down 26.9% from last quarter.
  • The desktop PC CPU market decreased 3.4% year over year and increased 42.2% quarter over quarter, which influenced the attach rate of AIBs (a quick arithmetic sketch of these ratios follows below).
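For readers who want to reproduce the ratios, the short Python sketch below shows how a simple same-quarter attach rate and an implied prior-quarter figure fall out of the shipment numbers quoted above. It is a back-of-envelope illustration of the definitions only, not JPR's methodology; notably, the straight Q3'24 ratio of AIBs to desktop CPUs comes out near 40%, so the 141% attach rate JPR reports is evidently calculated on a different basis.

```python
# Rough sketch (not JPR's methodology): how a simple attach rate and a
# quarter-over-quarter change would be derived from the shipment counts
# quoted in the report summary above.

aib_shipments_q3_24 = 8.1e6   # graphics add-in boards shipped in Q3'24
desktop_cpu_q3_24 = 20.1e6    # desktop PC CPUs shipped in Q3'24

# Simple same-quarter attach rate: AIBs shipped per desktop CPU shipped.
attach_rate = aib_shipments_q3_24 / desktop_cpu_q3_24
print(f"Same-quarter attach rate: {attach_rate:.1%}")  # ~40.3%

# Prior-quarter AIB shipments implied by the reported -14.5% decline.
aib_shipments_q2_24 = aib_shipments_q3_24 / (1 - 0.145)
print(f"Implied Q2'24 AIB shipments: {aib_shipments_q2_24 / 1e6:.1f} M units")
```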

TOP500: El Capitan Achieves Top Spot, Frontier and Aurora Follow Behind

The 64th edition of the TOP500 reveals that El Capitan has achieved the top spot and is officially the third system to reach exascale computing after Frontier and Aurora. Both systems have since moved down to No. 2 and No. 3 spots, respectively. Additionally, new systems have found their way onto the Top 10.

The new El Capitan system at the Lawrence Livermore National Laboratory in California, U.S.A., has debuted as the most powerful system on the list with an HPL score of 1.742 EFlop/s. It has 11,039,616 combined CPU and GPU cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. El Capitan relies on a Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 58.89 GigaFLOPS/watt. This power efficiency rating helped El Capitan achieve No. 18 on the GREEN500 list as well.
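As a back-of-envelope check, the reported HPL score and energy-efficiency figure quoted above imply El Capitan's approximate power draw during the benchmark run. The short Python sketch below is simple arithmetic on those two published numbers, not an official TOP500 or GREEN500 power measurement.

```python
# Rough estimate of El Capitan's HPL power draw, derived from the two
# figures quoted above (1.742 EFlop/s Rmax, 58.89 GigaFLOPS/watt).

rmax_flops = 1.742e18      # HPL Rmax in FLOP/s
gflops_per_watt = 58.89    # reported energy efficiency

power_watts = rmax_flops / (gflops_per_watt * 1e9)
print(f"Estimated HPL power draw: {power_watts / 1e6:.1f} MW")  # ~29.6 MW
```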

Supermicro Introduces New Versatile System Design for AI Delivering Optimization and Flexibility at the Edge

Super Micro Computer, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, announces the launch of a new, versatile, high-density infrastructure platform optimized for AI inferencing at the network edge. As companies seek to embrace complex large language models (LLM) in their daily operations, there is a need for new hardware capable of inferencing high volumes of data in edge locations with minimal latency. Supermicro's innovative system combines versatility, performance, and thermal efficiency to deliver up to 10 double-width GPUs in a single system capable of running in traditional air-cooled environments.

"Owing to the system's optimized thermal design, Supermicro can deliver all this performance in a high-density 3U 20 PCIe system with 256 cores that can be deployed in edge data centers," said Charles Liang, president and CEO of Supermicro. "As the AI market is growing exponentially, customers need a powerful, versatile solution to inference data to run LLM-based applications on-premises, close to where the data is generated. Our new 3U Edge AI system enables them to run innovative solutions with minimal latency."

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt - putting Frontier at the No. 13 spot on the GREEN500.

Intel-powered Aurora Supercomputer Ranks Fastest for AI

At ISC High Performance 2024, Intel announced in collaboration with Argonne National Laboratory and Hewlett Packard Enterprise (HPE) that the Aurora supercomputer has broken the exascale barrier at 1.012 exaflops and is the fastest AI system in the world dedicated to AI for open science, achieving 10.6 AI exaflops. Intel will also detail the crucial role of open ecosystems in driving AI-accelerated high-performance computing (HPC). "The Aurora supercomputer surpassing exascale will allow it to pave the road to tomorrow's discoveries. From understanding climate patterns to unraveling the mysteries of the universe, supercomputers serve as a compass guiding us toward solving truly difficult scientific challenges that may improve humanity," said Ogi Brkic, Intel vice president and general manager of Data Center AI Solutions.

Designed as an AI-centric system from its inception, Aurora will allow researchers to harness generative AI models to accelerate scientific discovery. Significant progress has been made in Argonne's early AI-driven research. Success stories include mapping the human brain's 80 billion neurons, high-energy particle physics enhanced by deep learning, and drug design and discovery accelerated by machine learning, among others. The Aurora supercomputer is an expansive system with 166 racks, 10,624 compute blades, 21,248 Intel Xeon CPU Max Series processors, and 63,744 Intel Data Center GPU Max Series units, making it one of the world's largest GPU clusters.
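The component counts quoted above are internally consistent, as the short Python check below shows; the per-blade and per-rack ratios (two Xeon Max CPUs and six GPU Max units per blade, 64 blades per rack) are derived here from the published totals rather than taken from a formal specification.

```python
# Consistency check on the published Aurora component counts.
racks = 166
compute_blades = 10_624
xeon_max_cpus = 21_248
gpu_max_units = 63_744

print(f"Blades per rack: {compute_blades / racks:.0f}")         # 64
print(f"CPUs per blade:  {xeon_max_cpus / compute_blades:.0f}")  # 2
print(f"GPUs per blade:  {gpu_max_units / compute_blades:.0f}")  # 6
```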

Intel Unleashes Enterprise AI with Gaudi 3, AI Open Systems Strategy and New Customer Wins

At the Intel Vision 2024 customer and partner conference, Intel introduced the Intel Gaudi 3 accelerator to bring performance, openness and choice to enterprise generative AI (GenAI), and unveiled a suite of new open scalable systems, next-gen products and strategic collaborations to accelerate GenAI adoption. With only 10% of enterprises successfully moving GenAI projects into production last year, Intel's latest offerings address the challenges businesses face in scaling AI initiatives.

"Innovation is advancing at an unprecedented pace, all enabled by silicon - and every company is quickly becoming an AI company," said Intel CEO Pat Gelsinger. "Intel is bringing AI everywhere across the enterprise, from the PC to the data center to the edge. Our latest Gaudi, Xeon and Core Ultra platforms are delivering a cohesive set of flexible solutions tailored to meet the changing needs of our customers and partners and capitalize on the immense opportunities ahead."

NVIDIA Data Center GPU Business Predicted to Generate $87 Billion in 2024

Omdia, an independent analyst and consultancy firm, has bestowed the title of "Kingmaker" on NVIDIA—thanks to impressive 2023 results in the data server market. The research firm predicts very buoyant numbers for the financial year of 2024—their February Cloud and Datacenter Market snapshot/report guesstimates that Team Green's data center GPU business group has the potential to rake in $87 billion of revenue. Omdia's forecast is based on last year's numbers—Jensen & Co. managed to pull in $34 billion, courtesy of an unmatched/dominant position in the AI GPU industry sector. Analysts have estimated a 150% rise in revenues for 2024—the majority of popular server manufacturers are reliant on NVIDIA's supply of chips. Super Micro Computer Inc. CEO—Charles Liang—disclosed that his business is experiencing strong demand for cutting-edge server equipment, but complications have slowed down production: "once we have more supply from the chip companies, from NVIDIA, we can ship more to customers."
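The growth figure is easy to sanity-check against the two revenue numbers Omdia cites. The short Python snippet below works out the implied year-over-year increase; it is a quick arithmetic check on the quoted figures, not part of Omdia's report.

```python
# Implied growth from Omdia's figures: ~$34 billion in 2023 data center GPU
# revenue versus a ~$87 billion forecast for 2024.
revenue_2023 = 34e9
forecast_2024 = 87e9

growth = (forecast_2024 - revenue_2023) / revenue_2023
print(f"Implied year-over-year growth: {growth:.0%}")  # ~156%, in line with the ~150% estimate
```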

Demand for AI inference in 2023 accounted for 40% of NVIDIA data center GPU revenue—according to Omdia's expert analysis—and they predict further growth this year. Team Green's comfortable AI-centric business model could expand to a greater extent—2023 market trends indicated that enterprise customers had spent less on acquiring/upgrading traditional server equipment. Instead, they prioritized the channeling of significant funds into "AI heavyweight hardware." Omdia's report discussed these shifted priorities: "This reaffirms our thesis that end users are prioritizing investment in highly configured server clusters for AI to the detriment of other projects, including delaying the refresh of older server fleets." Late February reports suggest that NVIDIA H100 GPU supply issues are largely resolved—with much improved production timeframes. Insiders at unnamed AI-oriented organizations have admitted that leadership has resorted to selling off excess stock. The Omdia forecast proposes—somewhat surprisingly—that H100 GPUs will continue to be "supply-constrained" throughout 2024.

MiTAC Unleashes Revolutionary Server Solutions, Powering Ahead with 5th Gen Intel Xeon Scalable Processors Accelerated by Intel Data Center GPUs

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp., proudly reveals its groundbreaking suite of server solutions that deliver unsurpassed capabilities with the 5th Gen Intel Xeon Scalable Processors. MiTAC introduces its cutting-edge signature platforms that seamlessly integrate the Intel Data Center GPUs, both the Intel Max Series and the Intel Flex Series, unleashing an unparalleled leap in computing performance targeting HPC and AI applications.

MiTAC Announces its Full Array of Platforms Supporting the Latest 5th Gen Intel Xeon Scalable Processors
Last year, Intel transitioned the right to manufacture and sell products based on Intel Data Center Solution Group designs to MiTAC. MiTAC confidently announces a transformative upgrade to its product offerings, unveiling advanced platforms that epitomize the future of computing. Featuring up to 64 cores, expanded shared cache, increased UPI, and DDR5 support, the latest 5th Gen Intel Xeon Scalable Processors deliver remarkable performance-per-watt gains across various workloads. MiTAC's Intel Server M50FCP Family and Intel Server D50DNP Family fully support the latest 5th Gen Intel Xeon Scalable Processors, made possible through a quick BIOS update and easy technical resource revisions, providing unsurpassed performance in diverse computing environments.

NVIDIA Prepared to Offer Custom Chip Designs to AI Clients

NVIDIA is reported to be setting up an AI-focused semi-custom chip design business unit, according to inside sources known to Reuters—it is believed that Team Green leadership is adapting to demands from key data-center customers. Many companies are seeking cheaper alternatives, or have devised their own designs (budget/war chest permitting)—NVIDIA's current range of AI GPUs consists of off-the-shelf solutions. OpenAI has generated the most industry noise—their alleged early 2024 fund-raising pursuits have attracted plenty of speculative and semi-serious interest from notable semiconductor personalities.

Team Green is seemingly reacting to emerging market trends—Jensen Huang (CEO, president and co-founder) has hinted that NVIDIA custom chip design services are on the horizon. Stephen Nellis—a Reuters reporter specializing in tech industry developments—has highlighted select quotes from the NVIDIA boss in an upcoming interview piece: "We're always open to do that. Usually, the customization, after some discussion, could fall into system reconfigurations or recompositions of systems." The Team Green chief teased that his engineering team is prepared to take on the challenge of meeting exact requests: "But if it's not possible to do that, we're more than happy to do a custom chip. And the benefit to the customer, as you can imagine, is really quite terrific. It allows them to extend our architecture with their know-how and their proprietary information." The rumored NVIDIA semi-custom chip design business unit could be introduced in an official capacity at next month's GTC 2024 Conference.

Indian Client Purchases Additional $500 Million Batch of NVIDIA AI GPUs

Indian data center operator Yotta is reportedly set to spend big with another order placed with NVIDIA—a recent Reuters article outlines a $500 million purchase of Team Green AI GPUs. Yotta is in the process of upgrading its AI Cloud infrastructure, and its total tally for this endeavor (involving Hopper and newer Grace Hopper models) is likely to hit $1 billion. An official company statement from December confirmed the existence of an extra procurement of GPUs, but it did not provide any details regarding budget or hardware choices at that point in time. Reuters contacted Sunil Gupta, Yotta's CEO, last week for a comment on the situation. The co-founder elaborated that the order "would comprise nearly 16,000 of NVIDIA's artificial intelligence chips H100 and GH200 and will be placed by March 2025."
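For a rough sense of scale only: if the roughly $1 billion cumulative spend is assumed to cover the nearly 16,000 H100 and GH200 accelerators Gupta mentions (an assumption made here for illustration, not a figure from Yotta or Reuters), the implied average outlay per accelerator works out as follows.

```python
# Back-of-envelope only: implied average spend per accelerator, assuming
# the ~$1 billion cumulative figure maps onto the ~16,000 H100/GH200 units.
total_spend_usd = 1.0e9
accelerators = 16_000

print(f"Implied average spend per accelerator: ${total_spend_usd / accelerators:,.0f}")  # ~$62,500
```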

Team Green is ramping up its embrace of the Indian data center market, as US sanctions have made it difficult to conduct business with enterprise customers in nearby Chinese territories. Reuters states that Gupta's firm (Yotta) is: "part of Indian billionaire Niranjan Hiranandani's real estate group, (in turn) a partner firm for NVIDIA in India and runs three data centre campuses, in Mumbai, Gujarat and near New Delhi." Microsoft, Google and Amazon are investing heavily in cloud and data centers situated in India. Shankar Trivedi, an NVIDIA executive, recently attended the Vibrant Gujarat Global Summit—the article's reporter conducted a brief interview with him. Trivedi stated that Yotta is targeting a March 2024 start for a new NVIDIA-powered AI data center located in the region's tech hub: Gujarat International Finance Tec-City.

Intel Innovation 2023: Bringing AI Everywhere

As the world experiences a generational shift to artificial intelligence, each of us is participating in a new era of global expansion enabled by silicon. It's the "Siliconomy," where systems powered by AI are imbued with autonomy and agency, assisting us across both knowledge-based and physical-based tasks as part of our everyday environments.

At Intel Innovation, the company unveiled technologies to bring AI everywhere and to make it more accessible across all workloads - from client and edge to network and cloud. These include easy access to AI solutions in the cloud, better price performance for Intel data center AI accelerators than the competition offers, tens of millions of new AI-enabled Intel PCs shipping in 2024 and tools for securely powering AI deployments at the edge.

Intel Discontinues Brand New Max 1350 Data Center GPU, Successor Targets Alternative Markets

Intel has decided to re-organize its Max series of Data Center GPUs (codenamed Ponte Vecchio), as revealed to Tom's Hardware this week, with one particular model - the Data Center GPU Max 1350 - set for removal from the lineup. Industry experts are puzzled by this decision, given that the 1350 has been officially "available" on the market since January 2023, following soon after the announcement of the entire Max range in November 2022. Intel has removed listings and entries for the Data Center GPU Max 1350 from its various web presences.

A (sort of) successor is in the works; Intel has lined up the Data Center GPU Max 1450 for release later in the year. This model will have trimmed I/O bandwidth - a modification likely targeting companies in China, where performance standards are capped at a certain level (via U.S. sanctions on GPU exports). An Intel spokesperson provided further details and reasons for rearranging the Max product range: "We launched the Intel Data Center Max GPU 1550 (600 W), which was initially targeted for liquid-cooled solutions only. We have since expanded our support by offering Intel Data Center Max GPU 1550 (600 W) to include air-cooled solutions."