News Posts matching #datacenter


MSI Servers Power the Next-Gen Datacenters at the 2025 OCP EMEA Summit

MSI, a leading global provider of high-performance server solutions, unveiled its latest ORv3-compliant and high-density multi-node server platforms at the 2025 OCP EMEA Summit, held April 29-30 at booth A19. Built on OCP-recognized DC-MHS architecture and supporting the latest AMD EPYC 9005 Series processors, these next-generation platforms are engineered to deliver outstanding compute density, energy efficiency, and scalability—meeting the evolving demands of modern, data-intensive datacenters.

"We are excited to be part of open-source innovation and sustainability through our contributions to the Open Compute Project," said Danny Hsu, General Manager of Enterprise Platform Solutions. "We remain committed to advancing open standards, datacenter-focused design, and modular server architecture. Our ability to rapidly develop products tailored to specific customer requirements is central to enabling next-generation infrastructure, making MSI a trusted partner for scalable, high-performance solutions."

MSI Presenting AI's Next Leap at Japan IT Week Spring 2025

MSI, a leading global provider of high-performance server solutions, is bringing AI-driven innovation to Japan IT Week Spring 2025 at Booth #21-2 with high-performance server platforms built for next-generation AI and cloud computing workloads. MSI's NVIDIA MGX AI Servers deliver modular GPU-accelerated computing to optimize AI training and inference, while the Core Compute line of Multi-Node Servers maximizes compute density and efficiency for AI inference and cloud service provider workloads. MSI's Open Compute line of ORv3 Servers enhances scalability and thermal efficiency in hyperscale AI deployments. MSI's Enterprise Servers provide balanced compute, storage, and networking for seamless AI workloads across cloud and edge. With deep expertise in system integration and AI-driven infrastructure, MSI is advancing the next generation of intelligent computing solutions to power AI's next leap.

"AI's advancement hinges on performance efficiency, compute density, and workload scalability. MSI's server platforms are engineered to accelerate model training, optimize inference, and maximize resource utilization—ensuring enterprises have the processing power to turn AI potential into real-world impact," said Danny Hsu, General Manager of MSI Enterprise Platform Solutions.

Industry's First-to-Market Supermicro NVIDIA HGX B200 Systems Demonstrate AI Performance Leadership

Super Micro Computer, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, has announced first-to-market, industry-leading performance on several MLPerf Inference v5.0 benchmarks using the NVIDIA HGX B200 8-GPU platform. The 4U liquid-cooled and 10U air-cooled systems achieved the best performance in select benchmarks. Supermicro demonstrated more than 3 times the tokens-per-second (tokens/s) generation for the Llama2-70B and Llama3.1-405B benchmarks compared to H200 8-GPU systems. "Supermicro remains a leader in the AI industry, as evidenced by the first new benchmarks released by MLCommons in 2025," said Charles Liang, president and CEO of Supermicro. "Our building block architecture enables us to be first-to-market with a diverse range of systems optimized for various workloads. We continue to collaborate closely with NVIDIA to fine-tune our systems and secure a leadership position in AI workloads." Learn more about the new MLPerf v5.0 Inference benchmarks here.

Supermicro is the only system vendor publishing record MLPerf inference performance (on select benchmarks) for both the air-cooled and liquid-cooled NVIDIA HGX B200 8-GPU systems. Both systems were operational before the MLCommons benchmark start date, and Supermicro engineers optimized the systems and software, as allowed by the MLCommons rules, to showcase the impressive performance. Within the operating margin, the air-cooled B200 system exhibited the same level of performance as the liquid-cooled B200 system, and Supermicro was already delivering these systems to customers while the benchmarks were being conducted. MLCommons requires that all results be reproducible, that the products be available, and that the results can be audited by other MLCommons members.

Server Market Revenue Increased 91% in Q4 2024, NVIDIA Continues Dominating the GPU Server Space

According to the International Data Corporation (IDC) Worldwide Quarterly Server Tracker, the server market reached a record $77.3 billion in revenue during the last quarter of the year. This quarter showed the second-highest growth rate since 2019, with a year-over-year increase of 91% in vendor revenue. Revenue generated from x86 servers increased 59.9% in 2024Q4 to $54.8 billion, while non-x86 servers increased 262.1% year over year to $22.5 billion.

Revenue for servers with an embedded GPU grew 192.6% year-over-year in the fourth quarter of 2024, and for the full year 2024, more than half of the server market revenue came from servers with an embedded GPU. NVIDIA continues to dominate the server GPU space, accounting for over 90% of total shipments with an embedded GPU in 2024Q4. The fast pace at which hyperscalers and cloud service providers have been adopting servers with embedded GPUs has fueled server market growth; the market has more than doubled in size since 2020, reaching $235.7 billion in revenue for the full year 2024.
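As a quick sanity check on the IDC figures above, a minimal sketch (using only the revenue numbers quoted in this post; the derived Q4 2023 values are approximate) confirms that the segment split and growth rates are consistent:

```python
# Sanity-check the IDC figures quoted above (all values in billions of USD).
# Only the numbers cited in this post are inputs; derived values are approximate.
q4_2024_total = 77.3   # record quarterly server revenue
x86 = 54.8             # x86 servers, +59.9% YoY
non_x86 = 22.5         # non-x86 servers, +262.1% YoY

# The two segments should roughly add up to the reported total.
print(f"x86 + non-x86 = {x86 + non_x86:.1f} (reported: {q4_2024_total})")

# Back out the implied Q4 2023 revenue from the stated growth rates.
x86_2023 = x86 / 1.599
non_x86_2023 = non_x86 / 3.621
total_2023 = x86_2023 + non_x86_2023
print(f"implied Q4 2023 total ~{total_2023:.1f}, "
      f"implied YoY growth ~{(q4_2024_total / total_2023 - 1):.0%}")  # ~91%
```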

Equal1 Launches Bell-1: The First Quantum System Purpose-Built for the HPC Era

Equal1 today unveils Bell-1, the first quantum system purpose-built for the HPC era. Unlike first-generation quantum computers that demand dedicated rooms, infrastructure, and complex cooling systems, Bell-1 is designed for direct deployment in HPC-class environments. As a rack-mountable quantum node, it integrates directly alongside classical compute—as compact as a GPU server, yet exponentially more powerful for the world's hardest problems. Bell-1 is engineered to eliminate the traditional barriers of cost, infrastructure, and complexity, setting a new benchmark for scalable quantum computing integration.

Bell-1 rewrites the rule book. While today's quantum computers demand specialized infrastructure, Bell-1 is a silicon-powered quantum computer that integrates seamlessly into existing HPC environments. Simply rack it, plug it in, and unlock quantum capabilities wherever your classical computers already operate. No new cooling systems. No extraordinary power demands. Just quantum computing that works in the real world, as easy to deploy as a high-end GPU server. It plugs into a standard power socket, operates at just 1600 W, and delivers on-demand quantum computing for computationally intensive workloads.

AMD Discusses EPYC's "No Compromise" Driving of Performance and Efficiency

One of the main pillars that vendors of Arm-based processors often cite as a competitive advantage versus x86 processors is a keen focus on energy efficiency and predictability of performance. In the quest for higher efficiency and performance, Arm vendors have largely designed out the ability to operate on multiple threads concurrently—a capability most enterprise-class CPUs have offered for years as simultaneous multithreading (SMT), which itself was created in the name of performance and efficiency benefits.

Arm vendors often claim that SMT brings security risks, creates performance unpredictability through shared-resource contention, and adds the cost and energy needed to implement it. Interestingly, Arm does support multi-threading in its Neoverse E1-class processor family for embedded uses such as automotive. Given these incongruities, this blog intends to provide a bit more clarity to help customers assess which attributes of performance and efficiency really bring them value for their critical workloads.

Microsoft Presents Majorana 1: First Quantum Processor to Pave the Way to Million-Qubit Systems

Microsoft has launched Majorana 1, the world's first quantum processor powered by a Topological Core architecture, marking a significant step toward fault-tolerant, utility-scale quantum computing. The chip leverages tetron qubits—topological qubits built on Majorana zero modes (MZMs)—to achieve stability and scalability, with a roadmap to one million qubits, a threshold critical for solving industrial challenges like microplastic degradation and self-healing materials. At the heart of Majorana 1 lies a superconductor-semiconductor heterostructure combining indium arsenide and aluminium. This "topoconductor" material enables precise control of MZMs, exotic quantum particles that encode information non-locally, inherently resisting noise and errors. The design, detailed in the latest paper, arranges MZMs in H-shaped nanowires, forming two-sided tetrons that suppress errors exponentially via three factors: the topological gap-to-temperature ratio, the wire length-to-coherence-length ratio, and high-fidelity microwave readout. Microsoft claims that the topoconductor can "create an entirely new state of matter - not a solid, liquid or gas but a topological state."
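The exponential error suppression described above can be written schematically. The expression below is only an illustrative sketch built from the two physical ratios Microsoft names (topological gap to temperature, wire length to coherence length); it is not a formula taken from the paper:

```latex
% Illustrative scaling only: qubit error falls off exponentially in the
% gap-to-temperature ratio and the wire-length-to-coherence-length ratio.
\epsilon_{\text{qubit}} \;\sim\; e^{-\Delta_{\text{topo}}/k_B T} \;+\; e^{-L_{\text{wire}}/\xi}
```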

Unlike conventional qubits requiring analog tuning, Microsoft's architecture uses digital voltage pulses for error-resistant, measurement-based operations. This approach simplifies scaling, with the current chip housing eight tetrons and supporting protocols for quantum error detection, such as the Hastings-Haah Floquet codes and ladder codes outlined in Microsoft's technical roadmap. These codes rely on single- and two-qubit Pauli measurements, native to tetrons, to detect and correct errors without complex gate sequences. DARPA's US2QC program validated that Microsoft's topology-first strategy minimizes overhead, enabling a future million-qubit system compact enough to fit in Azure datacenters. The chip's quantum capacitance measurement system detects parity shifts in microseconds, achieving a signal-to-noise ratio critical for fault tolerance. Applications span designing catalysts to break down pollutants, optimizing enzymes for agriculture, and simulating novel materials. Microsoft aims to merge quantum, AI, and high-performance computing into Azure, accelerating discoveries once deemed decades away. Majorana 1 proves that topological qubits—once a high-risk bet—are now the cornerstone of scalable quantum systems.

Global Semiconductor Manufacturing Industry Reports Solid Q4 2024 Results

The global semiconductor manufacturing industry closed 2024 with strong fourth quarter results and solid year-on-year (YoY) growth across most of the key industry segments, SEMI announced today in its Q4 2024 publication of the Semiconductor Manufacturing Monitor (SMM) Report, prepared in partnership with TechInsights. The industry outlook is cautiously optimistic at the start of 2025 as seasonality and macroeconomic uncertainty may impede near-term growth despite momentum from strong investments related to AI applications.

After declining in the first half of 2024, electronics sales bounced back later in the year resulting in a 2% annual increase. Electronics sales grew 4% YoY in Q4 2024 and are expected to see a 1% YoY increase in Q1 2025 impacted by seasonality. Integrated circuit (IC) sales rose by 29% YoY in Q4 2024 and continued growth is expected in Q1 2025 with a 23% increase YoY as AI-fueled demand continues boosting shipments of high-performance computing (HPC) and datacenter memory chips.

Silicon Motion Working on MonTitan SM8466, a Next-gen PCIe 6.0 SSD Controller

Silicon Motion will expand its MonTitan lineup of SSD controllers—for datacenters and enterprise platforms—with the upcoming addition of a truly next-generation model. Wallace C. Kou (the company's founder and CEO) contributes a regular written column to ChinaFlashMarket.com—his latest feature (posted on January 17) includes a short sentence announcing his firm's new SM8466 design. This appears to be their first foray into PCIe 6.0-based interface territory—details are minimal at this point, but the CEO divulged the very basics. Silicon Motion's engineering team is currently in the "development stage" with the SM8466 project, described as a "4 nm PCIe Gen 6 SSD master chip."

It is not clear whether this next-gen PCIe 6.0 SSD controller will be heading to market anytime soon, but Kou's column mostly focused on current plans—likely signalling where priorities lie. Silicon Motion's "built-in PCIe Gen 5 SSD enterprise-level master chip" (SM8366) is in mass production—industry experts believe that the company's MonTitan PCIe 5.0 family has had a tough time keeping up with equivalent Phison products—in particular, the market leading PS5026-E26 (PCIe 5.0 x4) controller. The SM8366 could be potent enough to take the crown in higher-end enterprise segments, but the existence of a PCIe 6.0-based successor is bound to attract extra attention.

GlobalFoundries Announces New York Advanced Packaging and Photonics Center

GlobalFoundries (Nasdaq: GFS) (GF) today announced plans to create a new center for advanced packaging and testing of U.S.-made essential chips within its New York manufacturing facility. Supported by investments from the State of New York and the U.S. Department of Commerce, the first-of-its-kind center aims to enable semiconductors to be securely manufactured, processed, packaged and tested entirely onshore in the United States to meet the growing demand for GF's silicon photonics and other essential chips needed for critical end markets including AI, automotive, aerospace and defense, and communications.

Growth in AI is driving the adoption of silicon photonics and 3D and heterogeneously integrated (HI) chips to meet power, bandwidth and density requirements in datacenters and edge devices. Silicon photonics chips are also positioned to address power and performance needs in automotive, communications, radar, and other critical infrastructure applications.

HighPoint M.2/E1.S NVMe RAID AICs Deliver Unparalleled Speed and Scalability for GPU Server Applications

HighPoint Technologies, a leader in advanced PCIe Switch and RAID AIC, Adapter and Storage Enclosure solutions, has announced an extensive line of M.2 and E1.S RAID AICs engineered to accommodate high-performance GPU-centric workloads. Designed for enterprise and datacenter class computing environments, HighPoint NVMe RAID AICs deliver class-leading performance and unmatched scalability, enabling modern x86/AMD and Arm platforms to support 4 to 16 NVMe SSDs via a single PCIe Gen 4 or Gen 5 x16 slot. State-of-the-art PCIe Switching Architecture and flexible RAID technology enable administrators to custom tailor M.2 and E1.S storage configurations for a broad range of data-intensive applications, and seamlessly scale or expand existing storage configurations to meet the needs of evolving workflows.

Unprecedented Storage Density
HighPoint NVMe AICs have established a new milestone for M.2 NVMe storage. HighPoint's revolutionary Dual-Width AIC architecture enables a single PCIe Gen 4 or Gen 5 x16 slot to directly host up to 16 M.2 NVMe SSDs and 128 TB of storage capacity at speeds up to 28 GB/s; a truly unprecedented advancement in compact, single-device storage expansion solutions. State-of-the-art PCIe switching technology and advanced cooling systems maximize transfer throughput and ensure M.2 configurations operate at peak efficiency by stopping the performance-sapping threat of thermal throttling in its tracks.
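For context, the headline figures are easy to relate to the drive count. The sketch below assumes 16 slots populated with 8 TB M.2 SSDs (an assumption for illustration, not a HighPoint specification) and compares the quoted 28 GB/s against the raw bandwidth of the host slot:

```python
# Relating HighPoint's headline numbers to the drive count.
drives = 16
tb_per_drive = 8                      # assumption: 8 TB M.2 NVMe SSDs in every slot
print(drives * tb_per_drive, "TB")    # 128 TB, matching the quoted maximum capacity

# The quoted 28 GB/s sits well within a PCIe Gen 5 x16 slot's raw bandwidth
# (32 GT/s per lane, 16 lanes, ~64 GB/s before encoding overhead).
slot_bytes_per_s = 32e9 * 16 / 8
print(f"Gen 5 x16 raw: {slot_bytes_per_s / 1e9:.0f} GB/s vs. quoted 28 GB/s")
```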

IonQ Unveils Its First Quantum Computer in Europe, Online Now at a Record #AQ36

IonQ, a leader in the quantum computing and networking industry, today announced the delivery of IonQ Forte Enterprise to its first European Innovation Center at the uptownBasel campus in Arlesheim, Switzerland. Achieved in partnership with QuantumBasel, this major milestone marks the first datacenter-ready quantum computer IonQ has delivered that will operate outside the United States and the first quantum system for commercial use in Switzerland.

Forte Enterprise is now online and servicing compute jobs, performing at a record algorithmic qubit count of #AQ36, which is significantly more powerful than the promised #AQ35. With each additional #AQ, the useful computational space for running quantum algorithms doubles. A system with #AQ36 is capable of considering more than 68 billion different possibilities simultaneously. With this milestone, IonQ once again leads the industry in delivering production-ready systems to customers.
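The "more than 68 billion" figure follows directly from the doubling relationship: if each additional algorithmic qubit doubles the usable computational space, an #AQ36 system spans 2^36 states. A minimal check:

```python
# Each additional algorithmic qubit (#AQ) doubles the usable computational space,
# so an #AQ36 system spans 2**36 distinct states.
aq = 36
print(f"{2 ** aq:,}")   # 68,719,476,736 -> "more than 68 billion possibilities"
```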

Corsair by d-Matrix Enables GPU-Free AI Inference

d-Matrix today unveiled Corsair, an entirely new computing paradigm designed from the ground up for the next era of AI inference in modern datacenters. Corsair leverages d-Matrix's innovative Digital In-Memory Compute (DIMC) architecture, an industry first, to accelerate AI inference workloads with industry-leading real-time performance, energy efficiency, and cost savings compared to GPUs and other alternatives.

The emergence of reasoning agents and interactive video generation represents the next level of AI capabilities. These leverage more inference computing power to enable models to "think" more and produce higher quality outputs. Corsair is the ideal inference compute solution with which enterprises can unlock new levels of automation and intelligence without compromising on performance, cost or power.

Samsung Electronics Announces Results for Third Quarter of 2024, 7 Percent Revenue Increase

Samsung Electronics today reported financial results for the third quarter ended Sept. 30, 2024. The Company posted KRW 79.1 trillion in consolidated revenue, an increase of 7% from the previous quarter, on the back of the launch effects of new smartphone models and increased sales of high-end memory products. Operating profit declined to KRW 9.18 trillion, largely due to one-off costs, including the provision of incentives in the Device Solutions (DS) Division. The strength of the Korean won against the U.S. dollar resulted in a negative impact on company-wide operating profit of about KRW 0.5 trillion compared to the previous quarter.

In the fourth quarter, while memory demand for mobile and PC may encounter softness, growth in AI will keep demand at robust levels. Against this backdrop, the Company will concentrate on driving sales of High Bandwidth Memory (HBM) and high-density products. The Foundry Business aims to increase order volumes by enhancing advanced process technologies. Samsung Display Corporation (SDC) expects demand for flagship products from major customers to continue, while maintaining a conservative outlook on its performance. The Device eXperience (DX) Division will continue to focus on premium products, but sales are expected to decline slightly compared to the previous quarter.

SiFive Announces Performance P870-D RISC-V Datacenter Processor

Today SiFive, Inc., the gold standard for RISC-V computing, announced its new SiFive Performance P870-D datacenter processor to meet customer requirements for highly parallelizable infrastructure workloads including video streaming, storage, and web appliances. When used in combination with products from the SiFive Intelligence product family, datacenter architects can also build an extremely high-performance, energy efficient compute subsystem for AI-powered applications.

Building on the success of the P870, the P870-D supports the open AMBA CHI protocol so customers have more flexibility to scale the number of clusters. This scalability allows customers to boost performance while minimizing power consumption. By harnessing a standard CHI bus, the P870-D enables SiFive's customers to scale up to 256 cores while harnessing industry-standard protocols, including Compute Express Link (CXL) and CHI chip to chip (C2C), to enable coherent high core count heterogeneous SoCs and chiplet configurations.

Kioxia Announces Completion of New Flash Memory Manufacturing Building in Kitakami Plant

Kioxia Corporation, a world leader in memory solutions, today announced that construction of the Fab2 (K2) building at its industry-leading Kitakami Plant was completed in July. K2 is the second flash memory manufacturing facility at the Kitakami Plant in Iwate Prefecture, Japan. As demand recovers, the company will gradually make capital investments while closely monitoring flash memory market trends. Kioxia plans to start operation at K2 in the fall of calendar year 2025.

In addition, some administration and engineering departments will move into a new administration building located adjacent to K2 beginning in November 2024 to oversee the operation of K2. A portion of investment for K2 will be subsidized by the Japanese government according to the plan approved in February 2024.

Next-Gen Computing: MiTAC and TYAN Launch Intel Xeon 6 Processor-Based Servers for AI, HPC, Cloud, and Enterprise Workloads at COMPUTEX 2024

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp. and a leading manufacturer in server platform design worldwide, together with its server brand TYAN, unveiled new server systems and motherboards optimized for today's AI, HPC, cloud, and enterprise workloads at COMPUTEX 2024, Booth #M1120 in Taipei, Taiwan, from June 4 to June 7. Harnessing the power of the latest Intel Xeon 6 processors and 4th and 5th Gen Intel Xeon Scalable processors, these solutions deliver cutting-edge performance.

"For over a decade, MiTAC has worked with Intel at the forefront of server technology innovation, consistently delivering cutting-edge solutions tailored for AI and high-performance computing (HPC). The integration of Intel's latest Xeon 6 processors into our MiTAC and TYAN server platforms transforms computational capabilities, significantly enhancing AI performance, boosting efficiency, and scaling cloud operations. These advancements empower our customers with a competitive edge through superior performance and optimized total cost of ownership," said Rick Hwang, President of MiTAC Computing Technology Corporation.

TSMC Celebrates 30th North America Technology Symposium with Innovations Powering AI with Silicon Leadership

TSMC today unveiled its newest semiconductor process, advanced packaging, and 3D IC technologies for powering the next generation of AI innovations with silicon leadership at the Company's 2024 North America Technology Symposium. TSMC debuted the TSMC A16 technology, featuring leading nanosheet transistors with an innovative backside power rail solution for production in 2026, bringing greatly improved logic density and performance. TSMC also introduced its System-on-Wafer (TSMC-SoW) technology, an innovative solution that brings revolutionary performance to the wafer level to address future AI requirements for hyperscaler datacenters.

This year marks the 30th anniversary of TSMC's North America Technology Symposium, and more than 2,000 people attended the event, up from fewer than 100 attendees 30 years ago. The North America Technology Symposium in Santa Clara, California kicks off TSMC's Technology Symposiums around the world in the coming months. The symposium also features an "Innovation Zone," designed to highlight the technology achievements of TSMC's emerging start-up customers.

Lenovo Anticipates Great Demand for AMD Instinct MI300X Accelerator Products

Ryan McCurdy, President of Lenovo North America, revealed an ambitious, forward-thinking product roadmap during an interview with CRN magazine. A hybrid strategic approach will create an anticipated AI fast lane on future hardware—McCurdy, a former Intel veteran, stated: "there will be a steady stream of product development to add (AI PC) hardware capabilities in a chicken-and-egg scenario for the OS and for the (independent software vendor) community to develop their latest AI capabilities on top of that hardware...So we are really paving the AI autobahn from a hardware perspective so that we can get the AI software cars to go faster on them." Lenovo—as expected—is jumping on the AI-on-device train, but it will be diversifying its range of AI server systems with new AMD and Intel-powered options. The company has reacted to recent Team Green AI GPU supply issues—alternative units are now in the picture: "with NVIDIA, I think there's obviously lead times associated with it, and there's some end customer identification, to make sure that the products are going to certain identified end customers. As we showcased at Tech World with NVIDIA on stage, AMD on stage, Intel on stage and Microsoft on stage, those industry partnerships are critical to not only how we operate on a tactical supply chain question but also on a strategic what's our value proposition."

McCurdy did not go into detail about upcoming Intel-based server equipment, but seemed excited about AMD's Instinct MI300X accelerator—Lenovo was (previously) announced as one of the early OEM takers of Team Red's latest CDNA 3.0 tech. CRN asked about the firm's outlook for upcoming MI300X-based inventory—McCurdy responded with: "I won't comment on an unreleased product, but the partnership I think illustrates the larger point, which is the industry is looking for a broad array of options. Obviously, when you have any sort of lead times, especially six-month, nine-month and 12-month lead times, there is interest in this incredible technology to be more broadly available. I think you could say in a very generic sense, demand is as high as we've ever seen for the product. And then it comes down to getting the infrastructure launched, getting testing done, and getting workloads validated, and all that work is underway. So I think there is a very hungry end customer-partner user base when it comes to alternatives and a more broad, diverse set of solutions."

EdgeCortix to Showcase Flagship SAKURA-I Chip at Singapore Airshow 2024

EdgeCortix, the Japan-based fabless semiconductor company focused on energy-efficient AI processing, announced today that the Acquisitions, Technology and Logistics Agency (ATLA), Japan Ministry of Defense, will include the groundbreaking edge AI startup alongside an elite group of leading Japanese companies to represent Japan's air and defense innovation landscape at ATLA's booth at the Singapore Airshow to be held February 20 - 25. The Singapore Airshow is one of the largest and most influential shows of its kind in the world, and the largest in Asia, seeing as many as 50,000 attendees per biennial show. Over 1,000 companies from 50 countries are expected to participate in the 2024 show.

EdgeCortix's flagship product, the SAKURA-I chip, will be featured among a small handful of influential Japanese innovations at the booth. SAKURA-I is a dedicated co-processor that delivers high compute efficiency and low latency for artificial intelligence (AI) workloads carried out "at the edge," where data is collected and mission-critical decisions need to be made - far away from a datacenter. SAKURA-I delivers orders of magnitude better energy efficiency and processing speed than conventional semiconductors (e.g., GPUs and CPUs), while drastically reducing operating costs for end users.

AMD Instinct MI300X Released at Opportune Moment. NVIDIA AI GPUs in Short Supply

LaminiAI appeared to be one of the first customers to receive an initial shipment of AMD's Instinct MI300X accelerators, as disclosed by its CEO posting about functioning hardware on social media late last week. A recent Taiwan Economic Daily article states that the "MI300X is rumored to have begun supply"—we are not sure why the publication has adopted a semi-secretive tone in its news piece, but a couple of anonymous sources are cited. A person familiar with supply chains in Taiwan divulged that: "(they have) been receiving AMD MI300X chips one after another...due to the huge shortage of NVIDIA AI chips, the arrival of new AMD products is really a timely rainfall." Favorable industry analysis (from earlier this month) has placed Team Red in a position of strength, due to growing interest in its very performant flagship AI accelerator.

The secrecy seems to lie in Team Red's negotiation strategies in Taiwan—the news piece alleges that big manufacturers in the region have been courted. AMD has been aggressive in a push to: "cooperate and seize AI business opportunities, with GIGABYTE taking the lead and attracting the most attention. Not only was GIGABYTE the first to obtain a partnership with AMD's MI300A chip, which had previously been mass-produced, but GIGABYTE was also one of the few Taiwanese manufacturers included in AMD's first batch of MI300X partners." GIGABYTE is expected to release two new "G593" product lines of server hardware later this year, based on combinations of AMD's Instinct MI300X accelerator and EPYC 9004 series processors.

OpenAI Reportedly Talking to TSMC About Custom Chip Venture

OpenAI is reported to be initiating R&D on a proprietary AI processing solution—the research organization's CEO, Sam Altman, has commented on the inefficient operation of datacenters running NVIDIA H100 and A100 GPUs. He foresees a future scenario where his company becomes less reliant on Team Green's off-the-shelf AI-crunchers, with a deployment of bespoke AI processors. A short Reuters interview also underlined Altman's desire to find alternative sources of power: "It motivates us to go invest more in (nuclear) fusion." The growth of artificial intelligence industries has put an unprecedented strain on energy providers, so tech firms could be semi-forced into seeking out frugal enterprise hardware.

The Financial Times has followed up on last week's Bloomberg report of OpenAI courting investment partners in the Middle East. FT's news piece alleges that Altman is in talks with billionaire businessman Sheikh Tahnoon bin Zayed al-Nahyan, a very well connected member of the United Arab Emirates Royal Family. OpenAI's leadership is reportedly negotiating with TSMC—The Financial Times alleges that Taiwan's top chip foundry is an ideal manufacturing partner. This revelation contradicts Bloomberg's recent reports of a potential custom OpenAI AI chip venture involving purpose-built manufacturing facilities. The whole project is said to be at an early stage of development, so Altman and his colleagues are most likely exploring a variety of options.

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs Cited

Samsung, SK hynix, and Micron are considered the top manufacturers of High Bandwidth Memory (HBM)—the HBM3 and HBM3E standards are increasingly in demand, due to widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times reports that there is an ongoing shortage of HBM components—but this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big-name producers are expected to dive in head first with the development of next-generation models. The aforementioned financial news article cites research conducted by the Gartner group, which predicts that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is more than double the projected revenue (just over $2 billion) generated by the HBM market in 2023—the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK Hynix is the current HBM3E leader, with Micron and Samsung trailing behind—industry experts believe that the stragglers will need to "expand HBM production capacity" in order to stay competitive. SK Hynix has teamed up with NVIDIA—the GH200 Grace Hopper platform was unveiled last summer, outfitted with the South Korean firm's HBM3e parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages—as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK Hynix is believed to extend to the internal makeup of Blackwell GB100 datacenter GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.
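Taking the article's own figures at face value, the implied growth works out as sketched below; the 2023 baseline is the approximate "just over $2 billion" quoted above, not an exact Gartner number:

```python
# Implied HBM market growth from the figures quoted above (billions of USD).
hbm_2023 = 2.0      # approximate 2023 baseline from the article
hbm_2025 = 4.976    # Gartner forecast cited for 2025

multiple = hbm_2025 / hbm_2023
annual = multiple ** 0.5 - 1        # averaged over the two-year span 2023 -> 2025
print(f"{multiple:.2f}x over two years, ~{annual:.0%} per year")   # ~2.49x, ~58%/yr
```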

Alphawave Semi Partners with Keysight to Deliver a Complete PCIe 6.0 Subsystem Solution

Alphawave Semi (LSE: AWE), a global leader in high-speed connectivity for the world's technology infrastructure, today announced successful collaboration with Keysight Technologies, a market-leading design, emulation, and test solutions provider, demonstrating interoperability between Alphawave Semi's PCIe 6.0 64 GT/s Subsystem (PHY and Controller) Device and Keysight PCIe 6.0 64 GT/s Protocol Exerciser, negotiating a link to the maximum PCIe 6.0 data rate. Alphawave Semi, already on the PCI-SIG 5.0 Integrators list, is accelerating next-generation PCIe 6.0 Compliance Testing through this collaboration.

Alphawave Semi's leading-edge silicon implementation of the new PCIe 6.0 64 GT/s Flow Control Unit (FLIT)-based protocol enables higher data rates for hyperscale and data infrastructure applications. Keysight and Alphawave Semi achieved another milestone by successfully establishing a CXL 2.0 link setting the stage for future cache coherency in the datacenter.
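As a rough guide to what the negotiated 64 GT/s link represents, the per-direction bandwidth of a PCIe 6.0 x16 link can be sketched as below. The FLIT efficiency figure is an approximation based on the standard 256-byte FLIT layout, not a number published by Alphawave Semi or Keysight:

```python
# Rough per-direction bandwidth of a PCIe 6.0 x16 link (PAM4, FLIT mode).
lanes = 16
rate_per_lane = 64e9                    # 64 GT/s ~ 64 Gb/s raw per lane with PAM4
raw_bytes_per_s = rate_per_lane * lanes / 8
print(f"raw: {raw_bytes_per_s / 1e9:.0f} GB/s")                    # ~128 GB/s

# Approximate FLIT efficiency: 236 payload bytes out of each 256-byte FLIT,
# with the remainder carrying DLP, CRC, and FEC overhead.
flit_efficiency = 236 / 256
print(f"usable: {raw_bytes_per_s * flit_efficiency / 1e9:.0f} GB/s")   # ~118 GB/s
```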

TYAN Upgrades HPC, AI and Data Center Solutions with the Power of 5th Gen Intel Xeon Scalable Processors

TYAN, a leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced upgraded server platforms and motherboards based on the brand-new 5th Gen Intel Xeon Scalable Processors, formerly codenamed Emerald Rapids.

The 5th Gen Intel Xeon processor scales up to 64 cores and features a larger shared cache, higher UPI and DDR5 memory speeds, as well as PCIe 5.0 with 80 lanes. Growing and excelling with workload-optimized performance, 5th Gen Intel Xeon delivers more compute power and faster memory within the same power envelope as the previous generation. "5th Gen Intel Xeon is the second processor offering inside the 2023 Intel Xeon Scalable platform, offering improved performance and power efficiency to accelerate TCO and operational efficiency," said Eric Kuo, Vice President of Server Infrastructure Business Unit, MiTAC Computing Technology Corporation. "By harnessing the capabilities of Intel's new Xeon CPUs, TYAN's 5th-Gen Intel Xeon-supported solutions are designed to handle the intense demands of HPC, data centers, and AI workloads."