News Posts matching #Server


Microsoft Announces its FY25 Q2 Earnings Release

Microsoft Corp. today announced the following results for the quarter ended December 31, 2024, as compared to the corresponding period of last fiscal year:
  • Revenue was $69.6 billion and increased 12%
  • Operating income was $31.7 billion and increased 17% (up 16% in constant currency)
  • Net income was $24.1 billion and increased 10%
  • Diluted earnings per share was $3.23 and increased 10%
"We are innovating across our tech stack and helping customers unlock the full ROI of AI to capture the massive opportunity ahead," said Satya Nadella, chairman and chief executive officer of Microsoft. "Already, our AI business has surpassed an annual revenue run rate of $13 billion, up 175% year-over-year."

NVIDIA Outlines Cost Benefits of Inference Platform

Businesses across every industry are rolling out AI services this year. For Microsoft, Oracle, Perplexity, Snap and hundreds of other leading companies, using the NVIDIA AI inference platform—a full stack comprising world-class silicon, systems and software—is the key to delivering high-throughput and low-latency inference and enabling great user experiences while lowering cost. NVIDIA's advancements in inference software optimization and the NVIDIA Hopper platform are helping industries serve the latest generative AI models, delivering excellent user experiences while optimizing total cost of ownership. The Hopper platform also helps deliver up to 15x more energy efficiency for inference workloads compared to previous generations.

AI inference is notoriously difficult, as it requires many steps to strike the right balance between throughput and user experience. But the underlying goal is simple: generate more tokens at a lower cost. Tokens represent words in a large language model (LLM) system—and with AI inference services typically charging for every million tokens generated, this goal offers the most visible return on AI investments and energy used per task. Full-stack software optimization offers the key to improving AI inference performance and achieving this goal.
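Since inference services typically bill per million tokens generated, the cost arithmetic is straightforward. A minimal sketch, using made-up rates (not NVIDIA's or any provider's actual pricing):

```python
# Hypothetical illustration of per-million-token inference pricing.
# The rates below are invented for illustration only.
def inference_cost(tokens_generated: int, price_per_million_usd: float) -> float:
    """Dollar cost of generating a given number of tokens."""
    return tokens_generated / 1_000_000 * price_per_million_usd

# A service generating 500 million tokens per day at $0.50 per million tokens
# spends $250 per day; generating more tokens per watt or per dollar of
# hardware directly improves this figure.
daily_cost = inference_cost(500_000_000, 0.50)
```

This is why full-stack optimization matters: any gain in tokens per second per GPU lowers the effective cost of every token billed.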

Ultra Accelerator Link Consortium (UALink) Welcomes Alibaba, Apple and Synopsys to Board of Directors

Ultra Accelerator Link Consortium (UALink) has announced the expansion of its Board of Directors with the election of Alibaba Cloud Computing Ltd., Apple Inc., and Synopsys Inc. The new Board members will leverage their industry knowledge to advance development and industry adoption of UALink - a high-speed, scale-up interconnect for next-generation AI cluster performance.

"Alibaba Cloud believes that driving AI computing accelerator scale-up interconnection technology by defining core needs and solutions from the perspective of cloud computing and applications has significant value in building the competitiveness of intelligent computing supernodes," said Qiang Liu, VP of Alibaba Cloud, GM of Alibaba Cloud Server Infrastructure. "The UALink consortium, as a leader in the interconnect field of AI accelerators, has brought together key members from the AI infrastructure industry to work together to define interconnect protocol which is natively designed for AI accelerators, driving innovation in AI infrastructure. This will strongly promote the innovation of AI infrastructure and improve the execution efficiency of AI workloads, contributing to the establishment of an open and innovative industry ecosystem."

NVIDIA's GB200 "Blackwell" Racks Face Overheating Issues

NVIDIA's new GB200 "Blackwell" racks are running into trouble (again). Big cloud companies like Microsoft, Amazon, Google, and Meta Platforms are cutting back their orders because of heat problems, Reuters reports, quoting The Information. The first shipments of racks with Blackwell chips are getting too hot and have connection issues between chips, the report says. These tech hiccups have made some customers who ordered $10 billion or more worth of racks think twice about buying.

Some are putting off their orders until NVIDIA has better versions of the racks. Others are looking at buying older NVIDIA AI chips instead. For example, Microsoft planned to set up GB200 racks with no fewer than 50,000 Blackwell chips at one of its Phoenix sites. However, The Information reports that OpenAI has asked Microsoft to provide NVIDIA's older "Hopper" chips instead, pointing to delays linked to the Blackwell racks. NVIDIA's problems with its Blackwell GPUs housed in high-density racks are not new; in November 2024, Reuters, also referencing The Information, uncovered overheating issues in servers that housed 72 processors. NVIDIA has made several changes to its server rack designs to tackle these problems; however, it seems the issue has not been entirely resolved.

Supermicro Begins Volume Shipments of Max-Performance Servers Optimized for AI, HPC, Virtualization, and Edge Workloads

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is commencing shipments of max-performance servers featuring Intel Xeon 6900 series processors with P-cores. The new systems feature a range of new and upgraded technologies with new architectures optimized for the most demanding high-performance workloads, including large-scale AI, cluster-scale HPC, and environments where a maximum number of GPUs are needed, such as collaborative design and media distribution.

"The systems now shipping in volume promise to unlock new capabilities and levels of performance for our customers around the world, featuring low latency, maximum I/O expansion providing high throughput with 256 performance cores per system, 12 memory channels per CPU with MRDIMM support, and high performance EDSFF storage options," said Charles Liang, president and CEO of Supermicro. "We are able to ship our complete range of servers with these new application-optimized technologies thanks to our Server Building Block Solutions design methodology. With our global capacity to ship solutions at any scale, and in-house developed liquid cooling solutions providing unrivaled cooling efficiency, Supermicro is leading the industry into a new era of maximum performance computing."

InWin Introduces New Server & IPC Equipment at CES 2025

InWin has showcased several new server chassis models at CES—these new introductions form part of the company's efforts to expand regional IPC, server, and systems assembly operations going into 2025. New manufacturing facilities in the USA and Malaysia were brought online last year, and new products have sprung forth. TechPowerUp staffers were impressed by InWin's RG650B model—this cavernous rackmount GPU server has been designed with AI and HPC applications in mind. Its 6.5U dual-chamber design is divided into two sections with optimized and independent heat dissipation systems—GPU accelerators are destined for the 4.5U space, while the motherboard and CPUs go into the 2U chamber.

The RG650B's front section is dominated by nine pre-installed hot-swappable 80 x 30 mm (12,000 RPM max. rated) PWM fans. This array should provide plenty of cooling for any contained hardware; these components will be powered by an 80 Plus Titanium CRPS 3200 W PSU (with four 12V-2x6 pin connectors). InWin's spec sheet states that the RG650B supports 18 FHFL PCI-Express slots via four PCI-Express riser cables, granting plenty of potential for the installation of add-in boards.

HighPoint M.2/E1.S NVMe RAID AICs Deliver Unparalleled Speed and Scalability for GPU Server Applications

HighPoint Technologies, a leader in advanced PCIe Switch and RAID AIC, Adapter and Storage Enclosure solutions, has announced an extensive line of M.2 and E1.S RAID AICs engineered to accommodate high-performance GPU-centric workloads. Designed for enterprise and datacenter class computing environments, HighPoint NVMe RAID AICs deliver class-leading performance and unmatched scalability, enabling modern x86/AMD and Arm platforms to support 4 to 16 NVMe SSDs via a single PCIe Gen 4 or Gen 5 x16 slot. State-of-the-art PCIe Switching Architecture and flexible RAID technology enable administrators to custom tailor M.2 and E1.S storage configurations for a broad range of data-intensive applications, and seamlessly scale or expand existing storage configurations to meet the needs of evolving workflows.

Unprecedented Storage Density
HighPoint NVMe AICs have established a new milestone for M.2 NVMe storage. HighPoint's Dual-Width AIC architecture enables a single PCIe Gen 4 or Gen 5 x16 slot to directly host up to 16 M.2 NVMe SSDs and 128 TB of storage capacity at speeds up to 28 GB/s; a truly unprecedented advancement in compact, single-device storage expansion. State-of-the-art PCIe switching technology and advanced cooling systems maximize transfer throughput and ensure M.2 configurations operate at peak efficiency by stopping the performance-sapping threat of thermal throttling in its tracks.
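As a quick sanity check on the figures quoted above, a back-of-the-envelope sketch; the per-lane bandwidth number is an approximation, not vendor data:

```python
# Rough arithmetic on the quoted density figures (approximate, illustrative).
ssd_count = 16
total_capacity_tb = 128
per_ssd_tb = total_capacity_tb / ssd_count  # implies 8 TB per M.2 SSD

# PCIe Gen 4 delivers roughly 2 GB/s of usable bandwidth per lane, so an
# x16 slot tops out near 32 GB/s; the quoted 28 GB/s sits close to that
# ceiling, which is where Gen 5's doubled per-lane rate adds headroom.
approx_gen4_x16_gbps = 16 * 2
```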

Advantech Introduces Its GPU Server SKY-602E3 With NVIDIA H200 NVL

Advantech, a leading global provider of industrial edge AI solutions, is excited to introduce its GPU server SKY-602E3 equipped with the NVIDIA H200 NVL platform. This powerful combination is set to accelerate offline LLM deployment for manufacturing, providing unprecedented levels of performance and efficiency. The NVIDIA H200 NVL, requiring 600 W passive cooling, is fully supported by the compact and efficient SKY-602E3 GPU server, making it an ideal solution for demanding edge AI applications.

Core of Factory LLM Deployment: AI Vision
The SKY-602E3 GPU server excels in supporting large language models (LLMs) for AI inference and training. It features four PCIe 5.0 x16 slots, delivering high bandwidth for intensive tasks, and four PCIe 5.0 x8 slots, providing enhanced flexibility for GPU and frame grabber card expansion. The half-width design of the SKY-602E3 makes it an excellent choice for workstation environments. Additionally, the server can be equipped with the NVIDIA H200 NVL platform, which offers 1.7x more performance than the NVIDIA H100 NVL, freeing up additional PCIe slots for other expansion needs.

$30,000 Music Streaming Server is the Next Audiophile Dream Device

Taiko Audio, a Dutch high-end audio manufacturer, has unveiled what might be the most over-engineered music server ever created—the Extreme Server. With a starting price of €28,000 (US$29,600), this meticulously crafted device embodies either the pinnacle of audio engineering or the epitome of audiophile excess. The Extreme's most distinctive feature is its unique dual-processor architecture, using two Intel Xeon Scalable 10-core CPUs. This unusual configuration isn't just for show—Taiko claims it solves a specific audiophile dilemma: the impact of Roon's music management interface on sound quality. By dedicating two processors to Roon and Windows 10 Enterprise LTSC 2019 interface, they've made Roon's processing "virtually inaudible", addressing a concern most music listeners probably never knew existed.

Perhaps the most striking technical achievement is the server's cooling system, or rather, its complete absence of conventional cooling. Taiko designed a custom 240 W passive cooling solution with absolutely no fans or moving parts. The company machined the CPU interface to a mind-boggling precision of 5 microns (0.005 mm) and opted for solid copper heat sinks instead of aluminium, claiming this will extend component life by 4 to 12 years. The attention to detail extends to the memory configuration, where Taiko takes an unconventional approach. The server uses twelve 4 GB custom-made industrial memory modules, each factory pre-selected with components matched to within 1% tolerance. According to Taiko, this reduces the refresh rate burst current by almost 50% and allows for lower operating temperatures. The PSU that powers the PC is a custom 400 W linear power supply, an in-house development designed specifically for the Extreme's unique needs. It combines premium Mundorf and Duelund capacitors for sonic neutrality, Lundahl chokes selected by ear, and extensive vibrational damping using Panzerholz (a compressed wood composite) for durability, low temperature operation, longevity, and exceptional sound quality.

Path of Exile 2 Becomes Victim of Its Own Success As 450,000+ Players Overwhelm Servers

Path of Exile 2 today released in Early Access on Steam and consoles, and, despite the game's $29.99 Early Access pricing, it has already managed to amass a peak player count of over 458,920 players on Steam alone. While this is undoubtedly good news for the developer and publisher, the increased server load has apparently already caused problems, resulting in excessive queue times to get into game sessions. At the time of writing, the game has only been available to play for a little over four hours, and the player count is only beginning to plateau now.

According to the Path of Exile X account, the development team has been hard at work trying to stem the bleeding, as it were. So far, the Path of Exile website has been down several times due to the high traffic, preventing players from claiming their Steam keys. Additionally, and somewhat hilariously, this outage has also affected the "Early Access Live Updates" site that was meant to be a resource for gamers to keep track of work the live service team was doing to try and deal with the high launch-day volumes.

Amazon AWS Announces General Availability of Trainium2 Instances, Reveals Details of Next Gen Trainium3 Chip

At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, today announced the general availability of AWS Trainium2-powered Amazon Elastic Compute Cloud (Amazon EC2) instances and introduced new Trn2 UltraServers, enabling customers to train and deploy today's latest AI models as well as future large language models (LLMs) and foundation models (FMs) with exceptional performance and cost efficiency. AWS also unveiled its next-generation Trainium3 chips.

"Trainium2 is purpose built to support the largest, most cutting-edge generative AI workloads, for both training and inference, and to deliver the best price performance on AWS," said David Brown, vice president of Compute and Networking at AWS. "With models approaching trillions of parameters, we understand customers also need a novel approach to train and run these massive workloads. New Trn2 UltraServers offer the fastest training and inference performance on AWS and help organizations of all sizes to train and deploy the world's largest models faster and at a lower cost."

Dell Technologies Delivers Third Quarter Fiscal 2025 Financial Results

Dell Technologies announces financial results for its fiscal 2025 third quarter. Revenue was $24.4 billion, up 10% year over year. Operating income was $1.7 billion and non-GAAP operating income was $2.2 billion, both up 12% year over year. Diluted earnings per share was $1.58, and non-GAAP diluted earnings per share was $2.15, up 16% and 14% year over year, respectively.

"We continued to build on our AI leadership and momentum, delivering combined ISG and CSG revenue of $23.5 billion, up 13% year over year," said Yvonne McGill, chief financial officer, Dell Technologies. "Our continued focus on profitability resulted in EPS growth that outpaced revenue growth, and we again delivered strong cash performance."

Aetina Debuts at SC24 With NVIDIA MGX Server for Enterprise Edge AI

Aetina, a subsidiary of the Innodisk Group and an expert in edge AI solutions, is pleased to announce its debut at Supercomputing (SC24) in Atlanta, Georgia, showcasing the innovative SuperEdge NVIDIA MGX short-depth edge AI server, AEX-2UA1. By integrating an enterprise-class on-premises large language model (LLM) with the advanced retrieval-augmented generation (RAG) technique, the Aetina NVIDIA MGX short-depth server demonstrates exceptional enterprise edge AI performance, setting a new benchmark in edge AI innovation. The server is powered by the latest Intel Xeon 6 processor and dual high-end double-width NVIDIA GPUs, delivering ultimate AI computing power in a compact 2U form factor and accelerating Gen AI at the edge.

The SuperEdge NVIDIA MGX server expands Aetina's product portfolio from specialized edge devices to comprehensive AI server solutions, propelling a key milestone in Innodisk Group's AI roadmap, from sensors and storage to AI software, computing platforms, and now AI edge servers.

MiTAC Unveils New AI/HPC-Optimized Servers With Advanced CPU and GPU Integration

MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), is unveiling its new server lineup at SC24, booth #2543, in Atlanta, Georgia. MiTAC Computing's servers integrate the latest AMD EPYC 9005 Series CPUs, AMD Instinct MI325X GPU accelerators, Intel Xeon 6 processors, and professional GPUs to deliver enhanced performance optimized for HPC and AI workloads.

Leading Performance and Density for AI-Driven Data Center Workloads
MiTAC Computing's new servers, powered by AMD EPYC 9005 Series CPUs, are optimized for high-performance AI workloads. At SC24, MiTAC highlights two standout AI/HPC products: the 8U dual-socket MiTAC G8825Z5, featuring AMD Instinct MI325X GPU accelerators, up to 6 TB of DDR5 6000 memory, and eight hot-swap U.2 drive trays, ideal for large-scale AI/HPC setups; and the 2U dual-socket MiTAC TYAN TN85-B8261, designed for HPC and deep learning applications with support for up to four dual-slot GPUs, twenty-four DDR5 RDIMM slots, and eight hot-swap NVMe U.2 drives. For mainstream cloud applications, MiTAC offers the 1U single-socket MiTAC TYAN GC68C-B8056, with twenty-four DDR5 DIMM slots and twelve tool-less 2.5-inch NVMe U.2 hot-swap bays. Also featured is the 2U single-socket MiTAC TYAN TS70A-B8056, designed for high-IOPS NVMe storage, and the 2U 4-node single-socket MiTAC M2810Z5, supporting up to 3,072 GB of DDR5 6000 RDIMM memory and four easy-swap E1.S drives per node.

Hypertec Introduces the World's Most Advanced Immersion-Born GPU Server

Hypertec proudly announces the launch of its latest breakthrough product, the TRIDENT iG series, an immersion-born GPU server line that brings extreme density, sustainability, and performance to the AI and HPC community. Purpose-built for the most demanding AI applications, this cutting-edge server is optimized for generative AI, machine learning (ML), deep learning (DL), large language model (LLM) training, inference, and beyond. With up to six of the latest NVIDIA GPUs in a 2U form factor, a staggering 8 TB of memory with enhanced RDMA capabilities, and groundbreaking density supporting up to 200 GPUs per immersion tank, the TRIDENT iG server line is a game-changer for AI infrastructure.

Additionally, the server's innovative design features a single or dual root complex, enabling greater flexibility and efficiency for GPU usage in complex workloads.

IBM Expands Its AI Accelerator Offerings; Announces Collaboration With AMD

IBM and AMD have announced a collaboration to deploy AMD Instinct MI300X accelerators as a service on IBM Cloud. This offering, which is expected to be available in the first half of 2025, aims to enhance performance and power efficiency for Gen AI models and high-performance computing (HPC) applications for enterprise clients. This collaboration will also enable support for AMD Instinct MI300X accelerators within IBM's watsonx AI and data platform, as well as Red Hat Enterprise Linux AI inferencing support.

"As enterprises continue adopting larger AI models and datasets, it is critical that the accelerators within the system can process compute-intensive workloads with high performance and flexibility to scale," said Philip Guido, executive vice president and chief commercial officer, AMD. "AMD Instinct accelerators combined with AMD ROCm software offer wide support including IBM watsonx AI, Red Hat Enterprise Linux AI and Red Hat OpenShift AI platforms to build leading frameworks using these powerful open ecosystem tools. Our collaboration with IBM Cloud will aim to allow customers to execute and scale Gen AI inferencing without hindering cost, performance or efficiency."

NVIDIA "Blackwell" NVL72 Servers Reportedly Require Redesign Amid Overheating Problems

According to The Information, NVIDIA's latest "Blackwell" processors are reportedly encountering significant thermal management issues in high-density server configurations, potentially affecting deployment timelines for major tech companies. The challenges emerge specifically in NVL72 GB200 racks housing 72 GB200 processors, which can consume up to 120 kilowatts of power per rack while weighing a "mere" 3,000 pounds (about 1.5 tons). These thermal concerns have prompted NVIDIA to revisit and modify its server rack designs multiple times to prevent performance degradation and potential hardware damage. Hyperscalers like Google, Meta, and Microsoft, who rely heavily on NVIDIA GPUs for training their advanced language models, have allegedly expressed concerns about possible delays in their data center deployment schedules.
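Those rack-level figures imply a hefty per-GPU power draw, which gives a sense of the cooling challenge. A rough, illustrative calculation (the 120 kW figure also covers CPUs, networking, and other rack components, so the per-GPU number is an upper bound):

```python
# Approximate per-GPU power from the rack figures above (illustrative only;
# the rack budget includes CPUs, switches, and other components too).
rack_power_kw = 120
gpu_count = 72
per_gpu_kw = rack_power_kw / gpu_count  # roughly 1.7 kW per GB200 package
```

Dissipating on the order of 1.7 kW per package in a dense rack is well beyond what air cooling comfortably handles, which is why these deployments lean on liquid cooling.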

The thermal management issues follow earlier setbacks related to a design flaw in the Blackwell production process. The problem stemmed from the complex CoWoS-L packaging technology, which connects dual chiplets using an RDL interposer and LSI bridges. Thermal expansion mismatches between various components led to warping, requiring modifications to the GPU's metal layers and bump structures. A company spokesperson characterized these modifications as part of the standard development process, noting that a new photomask resolved the issue. The Information states that mass production of the revised Blackwell GPUs began in late October, with shipments expected to commence in late January. However, these timelines are unconfirmed by NVIDIA, and some server makers like Dell have confirmed that GB200 NVL72 liquid-cooled systems are shipping now, not in January, with GPU cloud provider CoreWeave as a customer. The original report could be based on older information, as Dell is one of NVIDIA's most significant partners and among the first in the supply chain to gain access to new GPU batches.

ASUS Presents All-New Storage-Server Solutions to Unleash AI Potential at SC24

ASUS today announced its groundbreaking next-generation infrastructure solutions at SC24, featuring a comprehensive lineup powered by AMD and Intel, as well as liquid-cooling solutions designed to accelerate the future of AI. By continuously pushing the limits of innovation, ASUS simplifies the complexities of AI and high-performance computing (HPC) through adaptive server solutions paired with expert cooling and software-development services, tailored for the exascale era and beyond. As a total-solution provider with a distinguished history in pioneering AI supercomputing, ASUS is committed to delivering exceptional value to its customers.

Comprehensive line-up for AI and HPC success
To fuel enterprise digital transformation through HPC and AI-driven architecture, ASUS provides a full lineup of server systems powered by AMD and Intel. Startups, research institutions, large enterprises, and government organizations can all find adaptive solutions to unlock value from big data and accelerate business agility.

HPE Expands Direct Liquid-Cooled Supercomputing Solutions With Two AI Systems for Service Providers and Large Enterprises

Today, Hewlett Packard Enterprise announces its new high performance computing (HPC) and artificial intelligence (AI) infrastructure portfolio that includes leadership-class HPE Cray Supercomputing EX solutions and two systems optimized for large language model (LLM) training, natural language processing (NLP) and multi-modal model training. The new supercomputing solutions are designed to help global customers fast-track scientific research and invention.

"Service providers and nations investing in sovereign AI initiatives are increasingly turning to high-performance computing as the critical backbone enabling large-scale AI training that accelerates discovery and innovation," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "Our customers turn to us to fast-track their AI system deployment to realize value faster and more efficiently by leveraging our world-leading HPC solutions and decades of experience in delivering, deploying and servicing fully-integrated systems."

IBM Launches Its Most Advanced Quantum Computers, Fueling New Scientific Value and Progress towards Quantum Advantage

Today at its inaugural IBM Quantum Developer Conference, IBM announced quantum hardware and software advancements to execute complex algorithms on IBM quantum computers with record levels of scale, speed, and accuracy.

IBM Quantum Heron, the company's most performant quantum processor to date and available in IBM's global quantum data centers, can now leverage Qiskit to accurately run certain classes of quantum circuits with up to 5,000 two-qubit gate operations. Users can apply these capabilities to expand explorations of how quantum computers can tackle scientific problems across materials, chemistry, life sciences, high-energy physics, and more.

Innodisk Introduces E1.S Edge Server SSD for Edge Computing and AI Applications

Innodisk, a leading global AI solution provider, has introduced its new E1.S SSD, which is specifically designed to meet the demands of growing edge computing applications. The E1.S edge server SSD offers exceptional performance, reliability, and thermal management capabilities to address the critical needs of modern data-intensive environments and bridge the gap between traditional industrial SSDs and data center SSDs.

As AI and 5G technologies rapidly evolve, the demands on data processing and storage continue to grow. The E1.S SSD addresses the challenges of balancing heat dissipation and performance, which has become a major concern for today's SSDs. Traditional industrial and data center SSDs often struggle to meet the needs of edge applications. Innodisk's E1.S eliminates these bottlenecks with its Enterprise and Data Center Standard Form Factor (EDSFF) design and offers a superior alternative to U.2 and M.2 SSDs.

Arctic Intros Freezer 4U-M Rev. 2 Server CPU Cooler With Support for Ampere Altra Series

Even more versatile in the second revision: Developed on the basis of its proven predecessor model, the new version of the Freezer 4U-M offers optimised cooling performance, not only for powerful server CPUs from AMD and Intel, but also for the ARM processors of the Ampere Altra series.

Multi-compatible with additional flexibility
The 2nd revision of the Freezer 4U-M also impresses with its case and socket compatibility. In addition, it has been specially adapted to support Ampere Altra processors with 32 to 128 cores.

TerraMaster Launches Five New BBS Integrated Backup Servers

In an era where data has become a core asset for modern enterprises, TerraMaster, a global leader in data storage and management solutions, has announced the official launch of five high-performance integrated backup servers: T9-500 Pro, T12-500 Pro, U4-500, U8-500 Plus, and U12-500 Plus. This release not only enriches TerraMaster's enterprise-level product line but also provides enterprise users with an integrated, efficient, and secure data backup solution - from hardware to software - by pairing these devices with the company's proprietary BBS Business Backup Suite.

Key Features of the New Integrated Backup Servers
  • T9-500 Pro & T12-500 Pro: As new members of TerraMaster's high-end series, these products feature compact designs and are easy to manage. With powerful processors, large memory capacities, and dual 10GbE network interfaces, they ensure efficient data backup, catering to the large-scale data storage and backup needs of small and medium-sized enterprises.
  • U4-500: Designed for SOHO, small offices, and remote work scenarios, the U4-500 features a compact 4-bay design and convenient network connectivity, making it an ideal data backup solution. Its user-friendly management interface allows for easy deployment and maintenance.
  • U8-500 Plus & U12-500 Plus: These two rackmount 8-bay and 12-bay upgraded models feature fully optimized designs, high-performance processors, and standard dual 10GbE high-speed interfaces. They not only improve data processing speeds but also enhance data security, making them particularly suitable for small and medium-sized enterprises that need to handle large volumes of data backup and recovery.

AMD Captures 28.7% Desktop Market Share in Q3 2024, Intel Maintains Lead

According to the market research firm Mercury Research, the desktop CPU market has witnessed a remarkable transformation, with AMD seizing a substantial 28.7% market share in Q3 2024, a giant leap since the launch of the original Zen architecture in 2017. This 5.7-percentage-point surge from the previous quarter is a testament to the company's continuous innovation against the long-standing industry leader, Intel. AMD's year-over-year growth of nearly ten percentage points, fueled by the success of its Ryzen 7000 and 9000 series processors, stands in stark contrast to Intel's Raptor Lake processors, which encountered technical hurdles such as stability issues. AMD's revenue share soared by 8.5 percentage points, indicating robust performance in premium processor segments. Intel, whose desktop market share declined to 71.3%, attributes the shift to inventory adjustments rather than competitive pressure and still holds the majority.

AMD's success story extends beyond desktops, with the company claiming 22.3% of the laptop processor market and 24.2% of the server segment. A significant milestone was reached as AMD's data center division generated $3.549 billion in quarterly revenue, a record for a company that had no considerable data center presence just a decade ago. Stemming from strong EPYC processor sales to hyperscalers and cloud providers, along with Instinct MI300X sales for AI applications, AMD's acceleration of data center deployments is massive. Despite these shifts, Intel continues to hold its dominant position in client computing, with 76.1% of the overall PC market, supported by its strong corporate relationships and extensive manufacturing infrastructure. OEM partners like Dell, HP, and Lenovo rely heavily on Intel for their CPU choices, equipping institutions like schools, universities, and government agencies.

Samsung Electronics Announces Results for Third Quarter of 2024, 7 Percent Revenue Increase

Samsung Electronics today reported financial results for the third quarter ended Sept. 30, 2024. The Company posted KRW 79.1 trillion in consolidated revenue, an increase of 7% from the previous quarter, on the back of the launch effects of new smartphone models and increased sales of high-end memory products. Operating profit declined to KRW 9.18 trillion, largely due to one-off costs, including the provision of incentives in the Device Solutions (DS) Division. The strength of the Korean won against the U.S. dollar resulted in a negative impact on company-wide operating profit of about KRW 0.5 trillion compared to the previous quarter.

In the fourth quarter, while memory demand for mobile and PC may encounter softness, growth in AI will keep demand at robust levels. Against this backdrop, the Company will concentrate on driving sales of High Bandwidth Memory (HBM) and high-density products. The Foundry Business aims to increase order volumes by enhancing advanced process technologies. Samsung Display Corporation (SDC) expects demand for flagship products from major customers to continue, while maintaining a conservative outlook on its performance. The Device eXperience (DX) Division will continue to focus on premium products, but sales are expected to decline slightly compared to the previous quarter.