News Posts matching #Data Center

Marvell Introduces Breakthrough Structera CXL Product Line to Address Server Memory Bandwidth and Capacity Challenges in Cloud Data Centers

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today launched the Marvell Structera product line of Compute Express Link (CXL) devices that enable cloud data center operators to overcome memory performance and scaling challenges in general-purpose servers.

To address memory-intensive applications, data center operators add extra servers to get higher memory bandwidth and higher memory capacity. The compute capabilities from the added processors are typically not utilized for these applications, making the servers inefficient from cost and power perspectives. The CXL industry standard addresses this challenge by enabling new architectures that can efficiently add memory to general-purpose servers.
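
As a rough illustration of the efficiency argument above, the sketch below compares adding whole servers purely for their DRAM with attaching CXL memory expanders to existing hosts. All cost, capacity, and power figures are hypothetical placeholders for illustration, not Marvell or Structera specifications.

```python
# Hypothetical comparison: adding whole servers for extra DRAM vs. attaching
# CXL memory expanders to existing hosts. Every figure below is an illustrative
# assumption, not a vendor specification.

EXTRA_MEMORY_TB = 4              # additional memory the workload needs (assumed)

# Option A: add general-purpose servers just for their memory
SERVER_MEM_TB = 1                # usable DRAM per added server (assumed)
SERVER_COST_USD = 15_000         # assumed cost per server
SERVER_POWER_W = 500             # assumed wall power per server

# Option B: add CXL memory expansion modules to servers already deployed
CXL_MEM_TB = 1                   # memory per CXL expander (assumed)
CXL_COST_USD = 6_000             # assumed cost per expander
CXL_POWER_W = 80                 # assumed power per expander

servers = EXTRA_MEMORY_TB / SERVER_MEM_TB
modules = EXTRA_MEMORY_TB / CXL_MEM_TB

print(f"Added servers: {servers:.0f} units, ${servers * SERVER_COST_USD:,.0f}, {servers * SERVER_POWER_W:,.0f} W")
print(f"CXL expanders: {modules:.0f} units, ${modules * CXL_COST_USD:,.0f}, {modules * CXL_POWER_W:,.0f} W")
```

The CPUs in option A sit largely idle for memory-bound workloads, which is exactly the stranded compute the paragraph describes.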

Micron Introduces 9550 NVMe Data Center SSD

Micron Technology, Inc., today announced availability of the Micron 9550 NVMe SSD - the world's fastest data center SSD and industry leader in AI workload performance and power efficiency. The Micron 9550 SSD showcases Micron's deep expertise and innovation by integrating its own controller, NAND, DRAM and firmware into one world-class product. This integrated solution enables class-leading performance, power efficiency and security features for data center operators.

The Micron 9550 SSD delivers best-in-class performance with 14.0 GB/s sequential reads and 10.0 GB/s sequential writes to provide up to 67% better performance over similar competitive SSDs and enables industry-leading performance for demanding workloads such as AI. In addition, its random reads of 3,300 KIOPS are up to 35% better and random writes of 400 KIOPS are up to 33% better than competitive offerings.
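
Working backwards from those percentages gives a feel for the competitive baselines Micron is comparing against. The sketch below treats each "up to X% better" claim as a simple ratio against its adjacent metric, which is our simplification of the announcement's wording.

```python
# Back-calculate the competitor baselines implied by Micron's "up to X% better"
# claims. The 9550 figures come from the announcement; pairing each percentage
# with its adjacent metric is an assumption made for illustration.
claims = {
    "sequential read (GB/s)":  (14.0, 0.67),
    "sequential write (GB/s)": (10.0, 0.67),
    "random read (KIOPS)":     (3300, 0.35),
    "random write (KIOPS)":    (400, 0.33),
}

for metric, (value, advantage) in claims.items():
    baseline = value / (1 + advantage)
    print(f"{metric}: 9550 = {value}, implied competitor baseline ≈ {baseline:.1f}")
```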

Samsung Electro-Mechanics Collaborates with AMD to Supply High-Performance Substrates for Hyperscale Data Center Computing

Samsung Electro-Mechanics (SEMCO) today announced a collaboration with AMD to supply high-performance substrates for hyperscale data center compute applications. These substrates are made in SEMCO's key technology hub in Busan and its newly built, state-of-the-art factory in Vietnam. Market research firm Prismark predicts that the semiconductor substrate market will grow at an average annual rate of about 7%, increasing from 15.2 trillion KRW in 2024 to 20 trillion KRW in 2028. SEMCO's substantial investment of 1.9 trillion KRW in the FCBGA factory underscores its commitment to advancing substrate technology and manufacturing capabilities to meet the highest industry standards and future technology needs.

SEMCO's collaboration with AMD focuses on meeting the unique challenges of integrating multiple semiconductor chips (Chiplets) on a single large substrate. These high-performance substrates, essential for CPU/GPU applications, offer significantly larger surface areas and higher layer counts, providing the dense interconnections required for today's advanced data centers. Compared to standard computer substrates, data center substrates are ten times larger and feature three times more layers, ensuring efficient power delivery and lossless signal integrity between chips. Addressing these challenges, SEMCO's innovative manufacturing processes mitigate issues like warpage to ensure high yields during chip mounting.

Ex-Xeon Chief Lisa Spelman Leaves Intel and Joins Cornelis Networks as CEO

Cornelis Networks, a leading independent provider of intelligent, high-performance networking solutions, today announced the appointment of Lisa Spelman as its new chief executive officer (CEO), effective August 15. Spelman joins Cornelis from Intel Corporation, where she held executive leadership roles for more than two decades, including leading the company's core data center business. Spelman will succeed Philip Murphy, who will assume the role of president and chief operating officer (COO).

"Cornelis is unique in having the products, roadmap, and talent to help customers address this issue. I look forward to joining the team to bring their innovations to even more organizations around the globe."

NVIDIA Beats Microsoft to Become World's Most Valuable Company, at $3.34 Trillion

With a market capitalization of $3.34 trillion, NVIDIA has overtaken Microsoft to become the world's most valuable company. The company's valuation doubled year-over-year, thanks to its meteoric rise as the preeminent manufacturer of AI accelerator chips, which puts it in a dominant position to support the productization and mainstreaming of generative AI, and the company only expects further growth of the AI acceleration industry. Speaking at an event in Copenhagen, Chris Penrose, global head of business development for telecom at NVIDIA, said: "The generative AI journey is really transforming businesses and telcos around the world. We're just at the beginning." BBC notes that eight years ago, NVIDIA was worth less than 1% of its current valuation.

In the most recent quarterly result, Q1 fiscal 2025, NVIDIA posted a revenue of $26 billion, with the Data Center business handling the company's AI GPUs making up the lion's share of it, at $22.6 billion. The Gaming and AI PC segment, which handles the GeForce GPU product line that used to be NVIDIA's main breadwinner until a few years ago, made just $2.6 billion, in stark contrast. This highlights that NVIDIA is now mainly a data center acceleration hardware company that happens to sell visual compute products on the side, along with a constellation of smaller product lines such as robotics and automobile self-driving hardware. With NVIDIA at the number-1 spot, the top-5 most valuable companies in the world are all American tech giants—NVIDIA, Microsoft, Apple, Alphabet (Google), and Amazon. The other companies in the top-10 list include Meta and Broadcom.
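
For context, the segment shares implied by those quarterly figures reduce to simple arithmetic; the calculation below uses only the revenue numbers quoted in the paragraph.

```python
# Segment shares implied by NVIDIA's Q1 FY2025 revenue figures quoted above.
total_revenue_b = 26.0
segments_b = {"Data Center": 22.6, "Gaming and AI PC": 2.6}

for name, revenue in segments_b.items():
    print(f"{name}: ${revenue}B ({revenue / total_revenue_b:.0%} of total)")

other_b = total_revenue_b - sum(segments_b.values())
print(f"Remaining segments (robotics, automotive, etc.): ≈ ${other_b:.1f}B")
```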

TYAN Presents AMD EPYC Server Platforms Optimized for Data Center Compute Performance and Large-Scale AI/HPC Infrastructure

TYAN, an industry leader in server platform design and a subsidiary of MiTAC Computing Technology Corporation, brings its latest AMD EPYC server platforms to COMPUTEX 2024, Booth #M1120 in Taipei, Taiwan, from June 4 to June 7.

"With the advanced capabilities of 4th Gen AMD EPYC Processors, TYAN's AMD EPYC server platforms deliver optimized performance for modern data centers and large-scale AI/HPC infrastructure, ensuring high energy efficiency and robust security," said Rick Hwang, President of MiTAC Computing Technology Corporation. "For the group of smaller businesses and dedicated hosters, TYAN offers AMD EPYC 4004 CPU-based servers that provide cost-effective, user-friendly solutions with enterprise-grade reliability, scalability, and security."

Next-Gen Computing: MiTAC and TYAN Launch Intel Xeon 6 Processor-Based Servers for AI, HPC, Cloud, and Enterprise Workloads at COMPUTEX 2024

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp., and its server brand TYAN, a leading manufacturer in server platform design worldwide, unveil their new server systems and motherboards optimized for today's AI, HPC, cloud, and enterprise workloads at COMPUTEX 2024, Booth #M1120 in Taipei, Taiwan, from June 4 to June 7. Harnessing the power of the latest Intel Xeon 6 processors and 4th and 5th Gen Intel Xeon Scalable processors, these solutions deliver cutting-edge performance.

"For over a decade, MiTAC has worked with Intel at the forefront of server technology innovation, consistently delivering cutting-edge solutions tailored for AI and high-performance computing (HPC). The integration of Intel's latest Xeon 6 processors into our MiTAC and TYAN server platforms transforms computational capabilities, significantly enhancing AI performance, boosting efficiency, and scaling cloud operations. These advancements empower our customers with a competitive edge through superior performance and optimized total cost of ownership," said Rick Hwang, President of MiTAC Computing Technology Corporation.

MSI Unveils New AI and Computing Platforms with 4th Gen AMD EPYC Processors at Computex 2024

MSI, a leading global server provider, will introduce its latest server platforms based on the 4th Gen AMD EPYC processors at Computex 2024, booth #M0806 in Taipei, Taiwan, from June 4-7. These new platforms, designed for growing cloud-native environments, deliver a combination of performance and efficiency for data centers.

"Leveraging the advantages of 4th Gen AMD EPYC processors, MSI's latest server platforms feature scalability and flexibility with new adoption of CXL technology and DC-MHS architecture, helping data centers achieve the most scalable cloud applications while delivering leading performance," said Danny Hsu, General Manager of Enterprise Platform Solutions.

GIGABYTE Joins COMPUTEX to Unveil Energy Efficiency and AI Acceleration Solutions

Giga Computing, a subsidiary of GIGABYTE and an industry leader in AI servers and green computing, today announced its participation in COMPUTEX and the unveiling of solutions that tackle complex AI workloads at scale, as well as advanced cooling infrastructure that will lead to greater energy efficiency. Additionally, to support innovations in accelerated computing and generative AI, GIGABYTE will have NVIDIA GB200 NVL72 systems available in Q1 2025. Discussions around GIGABYTE products will be held in booth #K0116 in Hall 1 at the Taipei Nangang Exhibition Center. As an NVIDIA-Certified System provider, GIGABYTE servers also support NVIDIA NIM inference microservices, part of the NVIDIA AI Enterprise software platform.

Redefining AI Servers and Future Data Centers
All new and upcoming CPU and accelerated computing technologies are being showcased at the GIGABYTE booth alongside GIGA POD, a rack-scale AI solution by GIGABYTE. The flexibility of GIGA POD is demonstrated with the latest solutions such as the NVIDIA HGX B100, NVIDIA HGX H200, NVIDIA GH200 Grace Hopper Superchip, and other OAM baseboard GPU systems. As a turnkey solution, GIGA POD is designed to support baseboard accelerators at scale with switches, networking, compute nodes, and more, including support for NVIDIA Spectrum-X to deliver powerful networking capabilities for generative AI infrastructures.

Intel AI Platforms Accelerate Microsoft Phi-3 GenAI Models

Intel has validated and optimized its AI product portfolio across client, edge and data center for several of Microsoft's Phi-3 family of open models. The Phi-3 family of small, open models can run on lower-compute hardware, be more easily fine-tuned to meet specific requirements and enable developers to build applications that run locally. Intel's supported products include Intel Gaudi AI accelerators and Intel Xeon processors for data center applications and Intel Core Ultra processors and Intel Arc graphics for client.
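
As an illustration of what running such a model locally can look like, the sketch below loads a small Phi-3 checkpoint with the Hugging Face transformers library on CPU. The checkpoint name and generation settings are assumptions for the example and are not taken from Intel's announcement; Intel's own software stacks (for Gaudi, Xeon, Core Ultra, or Arc) would layer on top of or replace this path.

```python
# Minimal local-inference sketch for a small Phi-3 model via Hugging Face
# transformers. Checkpoint name and settings are illustrative assumptions,
# not part of Intel's announcement.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed publicly available checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # may be required depending on transformers version
)

prompt = "Explain in one sentence why small language models suit on-device use."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```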

"We provide customers and developers with powerful AI solutions that utilize the industry's latest AI models and software. Our active collaboration with fellow leaders in the AI software ecosystem, like Microsoft, is key to bringing AI everywhere. We're proud to work closely with Microsoft to ensure Intel hardware - spanning data center, edge and client - actively supports several new Phi-3 models," said Pallavi Mahajan, Intel corporate vice president and general manager, Data Center and AI Software.

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt - putting Frontier at the No. 13 spot on the GREEN500.
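
Those two figures also pin down Frontier's approximate power draw during the HPL run; the division below uses only the numbers quoted in the paragraph.

```python
# Approximate HPL power draw implied by Frontier's quoted score and efficiency.
hpl_flops = 1.206e18                  # 1.206 EFlop/s
efficiency_flops_per_watt = 52.93e9   # 52.93 GFlops/Watt

power_mw = hpl_flops / efficiency_flops_per_watt / 1e6
print(f"Implied power draw during HPL: {power_mw:.1f} MW")  # ≈ 22.8 MW
```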

Apple Reportedly Developing Custom Data Center Processors with Focus on AI Inference

Apple is reportedly working on creating in-house chips designed explicitly for its data centers. This news comes from a recent report by the Wall Street Journal, which highlights the company's efforts to enhance its data processing capabilities and reduce its dependency on third parties for infrastructure. Under an internal project called Apple Chips in Data Center (ACDC), which started in 2018, Apple set out to design data center processors to handle its massive user base and expand its service offerings. Given the most recent advances in AI, Apple will probably use the silicon to serve large language models from its own data centers, and the chip will most likely focus on inference of AI models rather than training.

The AI chips are expected to play a crucial role in improving the efficiency and speed of Apple's data centers, which handle vast amounts of data generated by the company's various services and products. By developing these custom chips, Apple aims to optimize its data processing and storage capabilities, ultimately leading to better user experiences across its ecosystem. The move to develop AI-enhanced chips for data centers is seen as a strategic step in Apple's efforts to stay ahead in the competitive tech landscape. Almost all major tech companies, often grouped as the "big seven," ship products that use AI in both silicon and software, and Apple has seemingly been the exception. Now, the company is integrating AI across the entire vertical, from the upcoming iPhone integration to M4 chips for Mac devices and ACDC chips for data centers.

Micron First to Production of 200+ Layer QLC NAND in Client and Data Center

Micron Technology, Inc., today demonstrated its continued NAND technology leadership by announcing that its 232-layer QLC NAND is now in mass production: it is shipping in select Crucial SSDs, in volume production for enterprise storage customers, and sampling to OEM PC manufacturers in the Micron 2500 NVMe SSD.

Micron 232-layer QLC NAND delivers unparalleled performance for use cases across mobile, client, edge and data center storage by leveraging these important capabilities:
  • Industry-leading bit-density, up to 28% more compact than leading competitors' latest products
  • Industry-leading NAND I/O speeds of 2400 MT/s, a 50% improvement over the prior generation (back-calculated in the sketch after this list)
  • 24% better read performance over the prior generation
  • 31% better programming performance over the prior generation
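
One of those percentages implies a concrete baseline: a 50% improvement landing at 2400 MT/s puts the prior generation's NAND I/O speed at 1600 MT/s, as the one-liner below confirms. The other bullets quote relative gains without absolute figures, so no baselines can be recovered for them.

```python
# Prior-generation NAND I/O speed implied by "2400 MT/s, a 50% improvement".
current_mt_s = 2400
improvement = 0.50
print(f"Implied prior generation: {current_mt_s / (1 + improvement):.0f} MT/s")  # 1600 MT/s
```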

Google Launches Axion Arm-based CPU for Data Center and Cloud

Google has officially joined the club of custom Arm-based, in-house-developed CPUs. As of today, Google's in-house semiconductor development team has launched the "Axion" CPU based on Arm instruction set architecture. Using the Arm Neoverse V2 cores, Google claims that the Axion CPU outperforms general-purpose Arm chips by 30% and Intel's processors by a staggering 50% in terms of performance. This custom silicon will fuel various Google Cloud offerings, including Compute Engine, Kubernetes Engine, Dataproc, Dataflow, and Cloud Batch. The Axion CPU, designed from the ground up, will initially support Google's AI-driven services like YouTube ads and Google Earth Engine. According to Mark Lohmeyer, Google Cloud's VP and GM of compute and machine learning infrastructure, Axion will soon be available to cloud customers, enabling them to leverage its performance without overhauling their existing applications.

Google's foray into custom silicon aligns with the strategies of its cloud rivals, Microsoft and Amazon. Microsoft recently unveiled its own AI chip for training large language models and an Arm-based CPU called Cobalt 100 for cloud and AI workloads. Amazon, on the other hand, has been offering Arm-based servers through its custom Graviton CPUs for several years. While Google won't sell these chips directly to customers, it plans to make them available through its cloud services, enabling businesses to rent and leverage their capabilities. As Amin Vahdat, the executive overseeing Google's in-house chip operations, stated, "Becoming a great hardware company is very different from becoming a great cloud company or a great organizer of the world's information."

Intel Unleashes Enterprise AI with Gaudi 3, AI Open Systems Strategy and New Customer Wins

At the Intel Vision 2024 customer and partner conference, Intel introduced the Intel Gaudi 3 accelerator to bring performance, openness and choice to enterprise generative AI (GenAI), and unveiled a suite of new open scalable systems, next-gen products and strategic collaborations to accelerate GenAI adoption. With only 10% of enterprises successfully moving GenAI projects into production last year, Intel's latest offerings address the challenges businesses face in scaling AI initiatives.

"Innovation is advancing at an unprecedented pace, all enabled by silicon - and every company is quickly becoming an AI company," said Intel CEO Pat Gelsinger. "Intel is bringing AI everywhere across the enterprise, from the PC to the data center to the edge. Our latest Gaudi, Xeon and Core Ultra platforms are delivering a cohesive set of flexible solutions tailored to meet the changing needs of our customers and partners and capitalize on the immense opportunities ahead."

US Government Wants Nuclear Plants to Offload AI Data Center Expansion

The expansion of AI technology affects not only the production and demand for graphics cards but also the electricity grid that powers them. Data centers hosting thousands of GPUs are becoming more common, and the industry has been building new facilities for GPU-enhanced servers to serve the need for more AI. However, these powerful GPUs often consume over 500 Watts per card, and NVIDIA's latest Blackwell B200 GPU has a TGP of 1000 Watts, a full kilowatt. These kilowatt GPUs will be deployed in data centers with tens of thousands of cards, resulting in multi-megawatt facilities. To manage the load on the national electricity grid, US President Joe Biden's administration has been discussing with big tech a re-evaluation of their power sources, possibly using smaller nuclear plants. In an Axios interview, Energy Secretary Jennifer Granholm noted that "AI itself isn't a problem because AI could help to solve the problem." The problem is rather the load-bearing capacity of the national electricity grid, which can't sustain the rapid expansion of AI data centers.
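
To put "multi-megawatt" in perspective, the rough estimate below scales the quoted 1 kW per-GPU figure to a facility with tens of thousands of cards. The card count and the PUE overhead factor are illustrative assumptions, not figures from the article.

```python
# Rough facility power estimate built on the quoted 1 kW per-GPU figure.
# GPU count and PUE are illustrative assumptions.
gpu_power_kw = 1.0        # NVIDIA B200 TGP quoted above
gpu_count = 20_000        # assumed "tens of thousands of cards"
pue = 1.3                 # assumed facility overhead (cooling, power delivery)

it_load_mw = gpu_power_kw * gpu_count / 1000
facility_mw = it_load_mw * pue
print(f"IT load: {it_load_mw:.0f} MW; facility draw at PUE {pue}: {facility_mw:.0f} MW")
```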

The Department of Energy (DOE) has reportedly been talking with firms, most notably hyperscalers like Microsoft, Google, and Amazon, about considering nuclear fusion and fission power plants to satisfy the needs of AI expansion. We have already discussed Microsoft's plan to embed a nuclear reactor near one of its data center facilities to help manage the load of thousands of GPUs running AI training and inference. This time, however, it is not just Microsoft: other tech giants are reportedly considering nuclear as well. They all need to offload their AI expansion from the US national power grid, potentially with dedicated nuclear capacity. Nuclear power accounts for a mere 20% of US power generation, and the DOE is currently financing the restoration and resumption of service of Holtec's 800-MW Palisades nuclear generating station with $1.52 billion in funds. Microsoft is investing in a Small Modular Reactor (SMR) microreactor energy strategy, which could serve as an example for other big tech companies to follow.

NVIDIA Data Center GPU Business Predicted to Generate $87 Billion in 2024

Omdia, an independent analyst and consultancy firm, has bestowed the title of "Kingmaker" on NVIDIA—thanks to impressive 2023 results in the data server market. The research firm predicts very buoyant numbers for the financial year of 2024—its February Cloud and Datacenter Market snapshot/report guesstimates that Team Green's data center GPU business group has the potential to rake in $87 billion of revenue. Omdia's forecast is based on last year's numbers—Jensen & Co. managed to pull in $34 billion, courtesy of an unmatched/dominant position in the AI GPU industry sector. Analysts have estimated a 150% rise in revenues in 2024—the majority of popular server manufacturers are reliant on NVIDIA's supply of chips. Super Micro Computer Inc. CEO—Charles Liang—disclosed that his business is experiencing strong demand for cutting-edge server equipment, but complications have slowed down production: "once we have more supply from the chip companies, from NVIDIA, we can ship more to customers."

Demand for AI inference in 2023 accounted for 40% of NVIDIA data center GPU revenue—according to Omdia's expert analysis—and the firm predicts further growth this year. Team Green's comfortable AI-centric business model could expand to a greater extent—2023 market trends indicated that enterprise customers had spent less on acquiring/upgrading traditional server equipment. Instead, they prioritized the channeling of significant funds into "AI heavyweight hardware." Omdia's report discussed these shifted priorities: "This reaffirms our thesis that end users are prioritizing investment in highly configured server clusters for AI to the detriment of other projects, including delaying the refresh of older server fleets." Late February reports suggest that NVIDIA H100 GPU supply issues are largely resolved—with much improved production timeframes. Insiders at unnamed AI-oriented organizations have admitted that leadership has resorted to selling off excess stock. The Omdia forecast proposes—somewhat surprisingly—that H100 GPUs will continue to be "supply-constrained" throughout 2024.
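
Putting the quoted figures together, both the revenue forecast and the inference share reduce to simple arithmetic; interpreting "a 150% rise" and "40%" as plain ratios is our own reading of the report.

```python
# Cross-checks on the Omdia figures quoted above.
revenue_2023_b = 34.0
growth = 1.50               # "a 150% rise in revenues in 2024"
inference_share = 0.40      # share of 2023 data center GPU revenue from AI inference

projected_2024_b = revenue_2023_b * (1 + growth)
inference_2023_b = revenue_2023_b * inference_share

print(f"Projected 2024 revenue: ${projected_2024_b:.0f}B (vs. Omdia's $87B forecast)")  # ≈ $85B
print(f"Implied 2023 inference-driven revenue: ${inference_2023_b:.1f}B")               # ≈ $13.6B
```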

Tachyum Has Disclosed 1U and 2U Server, HPC, AI Reference Designs

Tachyum today announced that it is bringing 1U and 2U platform solutions to market behind a strategy that ensures customers and partners will be able to quickly and easily test, benchmark and deploy Prodigy solutions across a broad range of supported applications and workloads. Tachyum's platform strategy includes offering evaluation platforms for early testing with OEMs and ODMs able to incorporate Prodigy into their own designs. The 2U evaluation platform, optimized to address the high-performance needs of HPC and Big AI, will be the first to sample in Q1 of 2025. The 1U platform, targeting applications such as AI inference and a wide range of cloud applications, follows in the second quarter.

Tachyum has chosen Chenbro as the chassis partner for the Prodigy evaluation platforms. Chenbro's standard chassis products provide solutions for both 1U and 2U that address Prodigy's high-performance requirements, allowing Tachyum to focus on the device and motherboard development. Both platforms will use the same 2-socket motherboard supporting 32 DDR5 DIMMs, as well as supporting storage integration for 16 E1.S SSDs. The evaluation platforms will initially launch as air-cooled infrastructures, allowing for fast, easy evaluations, with a 4-socket liquid-cooled platform arriving later.

MiTAC Unleashes Revolutionary Server Solutions, Powering Ahead with 5th Gen Intel Xeon Scalable Processors Accelerated by Intel Data Center GPUs

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp., proudly reveals its groundbreaking suite of server solutions that deliver unsurpassed capabilities with the 5th Gen Intel Xeon Scalable Processors. MiTAC introduces cutting-edge signature platforms that seamlessly integrate Intel Data Center GPUs, both the Intel Max Series and the Intel Flex Series, unleashing an unparalleled leap in computing performance for HPC and AI applications.

MiTAC Announces its Full Array of Platforms Supporting the latest 5th Gen Intel Xeon Scalable Processors
Last year, Intel transitioned the right to manufacture and sell products based on Intel Data Center Solution Group designs to MiTAC. MiTAC confidently announces a transformative upgrade to its product offerings, unveiling advanced platforms that epitomize the future of computing. Featuring up to 64 cores, expanded shared cache, and increased UPI and DDR5 support, the latest 5th Gen Intel Xeon Scalable Processors deliver remarkable performance-per-watt gains across various workloads. MiTAC's Intel Server M50FCP Family and Intel Server D50DNP Family fully support the latest 5th Gen Intel Xeon Scalable Processors through a quick BIOS update and minor technical resource revisions, bringing unsurpassed performance to diverse computing environments.

Huawei Launches OptiXtrans DC908 Pro, a Next-gen DCI Platform for the AI Era

At MWC Barcelona 2024, Huawei launched the Huawei OptiXtrans DC908 Pro, a new platform for Data Center Interconnect (DCI) designed for the intelligent era. This innovative platform ensures the efficient, secure, and stable transmission of data between data centers (DCs), setting a new standard for DCI networks. As AI continues to proliferate across various service scenarios, the demand for foundation models has intensified, leading to an explosion in data volume. DCs are now operating at the petabyte level, and DCI networks have evolved from single-wavelength 100 Gbit/s to single-wavelength Tbit/s.

In response to the challenges posed by massive data transmission in the intelligent era, Huawei introduces the next-generation DCI platform, the Huawei OptiXtrans DC908 Pro. Compared to its predecessor, the DC908 Pro offers higher bandwidth, reliability, and intelligence.

Quantum Machines Launches OPX1000, a High-density Processor-based Control Platform

In Sept. 2023, Quantum Machines (QM) unveiled OPX1000, our most advanced quantum control system to date - and the industry's leading controller in terms of performance and channel density. OPX1000 is the third generation of QM's processor-based quantum controllers. It enhances its predecessor, OPX+, by expanding analog performance and multiplying channel density to support the control of over 1,000 qubits. However, QM's vision for quantum controllers extends far beyond that.

OPX1000 is designed as a platform for orchestrating the control of large-scale QPUs (quantum processing units). It is equipped with 8 frontend module (FEM) slots, representing the cutting-edge modular architecture for quantum control. The first low-frequency (LF) module was introduced in September 2023, and today, we're happy to introduce the Microwave (MW) FEM, which delivers additional value to our rapidly expanding customer base.

NVIDIA Prepared to Offer Custom Chip Designs to AI Clients

NVIDIA is reported to be setting up an AI-focused semi-custom chip design business unit, according to inside sources known to Reuters—it is believed that Team Green leadership is adapting to demands from key data-center customers. Many companies are seeking cheaper alternatives, or have devised their own designs (budget/war chest permitting)—NVIDIA's current range of AI GPUs are simply off-the-shelf solutions. OpenAI has generated the most industry noise—their alleged early 2024 fund-raising pursuits have attracted plenty of speculative/kind-of-serious interest from notable semiconductor personalities.

Team Green is seemingly reacting to emerging market trends—Jensen Huang (CEO, president and co-founder) has hinted that NVIDIA custom chip designing services are on the cusp. Stephen Nellis—a Reuters reporter specializing in tech industry developments—has highlighted select NVIDIA boss quotes from an upcoming interview piece: "We're always open to do that. Usually, the customization, after some discussion, could fall into system reconfigurations or recompositions of systems." The Team Green chief teased that his engineering team is prepared to take on the challenge of meeting exact requests: "But if it's not possible to do that, we're more than happy to do a custom chip. And the benefit to the customer, as you can imagine, is really quite terrific. It allows them to extend our architecture with their know-how and their proprietary information." The rumored NVIDIA semi-custom chip design business unit could be introduced in an official capacity at next month's GTC 2024 Conference.

Intel and Ohio Supercomputer Center Double AI Processing Power with New HPC Cluster

A collaboration including Intel, Dell Technologies, NVIDIA and the Ohio Supercomputer Center (OSC) today introduces Cardinal, a cutting-edge high-performance computing (HPC) cluster purpose-built to meet the increasing demand for HPC resources in Ohio across research, education and industry innovation, particularly in artificial intelligence (AI).

AI and machine learning are integral tools in scientific, engineering and biomedical fields for solving complex research inquiries. As these technologies continue to demonstrate efficacy, academic domains such as agricultural sciences, architecture and social studies are embracing their potential. Cardinal is equipped with the hardware capable of meeting the demands of expanding AI workloads. In both capabilities and capacity, the new cluster will be a substantial upgrade from the system it will replace, the Owens Cluster launched in 2016.

Edged Energy Launches Four Ultra-Efficient AI-Ready Data Centers in USA

Edged Energy, a subsidiary of Endeavour devoted to carbon neutral data center infrastructure, announced today the launch of its first four U.S. data centers, all designed for today's high-density AI workloads and equipped with advanced waterless cooling and ultra-efficient energy systems. The facilities will bring more than 300 MW of critical capacity with an industry-leading average Power Usage Effectiveness (PUE) of 1.15 portfolio-wide. Edged has nearly a dozen new data centers operating or under construction across Europe and North America and a gigawatt-scale project pipeline.

The first phase of this U.S. expansion includes a 168 MW campus in Atlanta, a 96 MW campus in the Chicago area, 36 MW in Phoenix and 24 MW in Kansas City. At a time of growing water scarcity where rivers, aquifers and watersheds are at dangerously low levels, it is more critical than ever that IT infrastructure conserve precious water resources. The new Edged facilities are expected to save more than 1.2 billion gallons of water each year compared to conventional data centers. "The rise of AI and machine learning is requiring more power, and often more water, to cool outdated servers. While traditional data centers struggle to adapt, Edged facilities are ready for the advanced computing of today and tomorrow without consuming any water for cooling," said Bryant Farland, Chief Executive Officer for Edged. "Sustainability is at the core of our platform. It is why our data centers are uniquely optimized for energy efficiency and water conservation. We are excited to be partnering with local communities to bring future-proof solutions to a growing digital economy."
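
The four campus figures sum to slightly more than the headline "300 MW of critical capacity," and the portfolio-wide PUE bounds the total facility draw. The check below uses only numbers from the announcement, treating PUE as the ratio of total facility power to IT power.

```python
# Sanity check on Edged's announced US capacities and the implied facility draw.
campuses_mw = {"Atlanta": 168, "Chicago area": 96, "Phoenix": 36, "Kansas City": 24}
total_it_mw = sum(campuses_mw.values())
pue = 1.15  # portfolio-wide average quoted in the announcement

print(f"Critical (IT) capacity across the four campuses: {total_it_mw} MW")   # 324 MW
print(f"Total facility power at PUE {pue}: {total_it_mw * pue:.0f} MW")       # ≈ 373 MW
```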