News Posts matching #Cloud


AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed, low-latency communication for scale-up AI systems in data centers.

Called the Ultra Accelerator Link (UALink), this initial group will define and establish an open industry standard that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.

Opera Partners with Google Cloud to Power its Browser AI with Gemini Models

Opera, the browser innovator, today announced a collaboration with Google Cloud to integrate Gemini models into its Aria browser AI. Aria is powered by Opera's multi-LLM Composer AI engine, which allows the Norwegian company to curate the best experiences for its users based on their needs.

Opera's Aria browser AI is unique as it doesn't just utilize one provider or LLM. Opera's Composer AI engine processes the user's intent and can decide which model to use for which task. Google's Gemini model is a modern, powerful, and user-friendly LLM that is the company's most capable model yet. Thanks to this integration, Opera will now be able to provide its users with the most current information, at high performance.
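Opera has not published Composer's internals; as a loose illustration of the idea, a multi-model router can be sketched like this (the model names and keyword rules below are invented for illustration, not Opera's actual implementation):

```python
# Hypothetical sketch of an intent-based model router in the spirit of
# Opera's Composer engine. Model names and routing rules are invented.

def classify_intent(prompt: str) -> str:
    """Crudely bucket a prompt into a task category by keyword."""
    p = prompt.lower()
    if any(k in p for k in ("summarize", "tl;dr")):
        return "summarization"
    if any(k in p for k in ("draw", "image", "picture")):
        return "image"
    return "general"

# Hypothetical routing table: task category -> backing model.
ROUTES = {
    "summarization": "fast-small-model",
    "image": "image-model",
    "general": "gemini-like-general-model",
}

def route(prompt: str) -> str:
    """Pick a backing model for the user's prompt."""
    return ROUTES[classify_intent(prompt)]

print(route("Summarize this article"))  # fast-small-model
```

A production engine would classify intent with a model rather than keywords, but the dispatch structure is the same.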

SonicFi Debuts Small and Medium-Sized Business Network Products

SonicFi Inc., a brand-new company aiming to provide solutions for SMBs, today debuted its Ronto cloud network product series at CommunicAsia 2024 in Singapore, marking a proactive entry into the small and medium-sized enterprise market. Ronto SMB solutions include a Controller, Indoor Wall Mount Access Point, Indoor Ceiling Mount Access Point, Managed Switch, and Outdoor Long Range Fixed Wireless Access Point. Combined with an in-house developed cloud platform and AI-driven optimization features, these offerings provide a cost-effective, cloud-managed network solution for small and medium-sized enterprises, factories, and SI vendors.

Ronto products feature an in-house developed cloud platform, incorporating management functionalities essential for small and medium-sized enterprise networks. Through intuitive mobile applications, network deployment and management are streamlined, providing simple, intelligent, and secure networking solutions for small and medium-sized businesses.

NVIDIA GeForce NOW Gets Men of War II, Palworld, and More Games This Week

Whether looking for new adventures, epic storylines or games to play with a friend, GeForce NOW members are covered. Start off with the much-anticipated sequel to the Men of War franchise or cozy up with some adorable pals in Palworld, both part of five games GeForce NOW is bringing to the cloud this week.

No Guts, No Glory
Get transported to the battlefields of World War II with historical accuracy and attention to detail in Men of War II, the newest entry in the real-time strategy series from Fulqrum Publishing. The game features an extensive roster of units, including tanks, airplanes and infantry. With advanced enemy AI and diverse gameplay modes, Men of War II promises an immersive experience for both history enthusiasts and casual gamers. Gear up, strategize and prepare to rewrite history. Get an extra fighting chance with a GeForce NOW Ultimate membership, which streams at up to 4K resolution and provides longer gaming sessions and faster access to games than a free membership.

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt - putting Frontier at the No. 13 spot on the GREEN500.
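As a quick sanity check on the figures above, dividing the HPL score by the efficiency rating recovers Frontier's approximate power draw during the benchmark run:

```python
# Back-of-envelope check of the quoted Frontier figures: HPL score
# divided by efficiency gives the approximate power draw.
hpl_flops = 1.206e18   # 1.206 EFlop/s
efficiency = 52.93e9   # 52.93 GFlops/Watt
power_watts = hpl_flops / efficiency
print(f"{power_watts / 1e6:.1f} MW")  # roughly 22.8 MW
```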

Report: 3 Out of 4 Laptop PCs Sold in 2027 will be AI Laptop PCs

Personal computers (PCs) have been used as the major productivity device for several decades. But now we are entering a new era of PCs based on artificial intelligence (AI), thanks to the boom witnessed in generative AI (GenAI). We believe the inventory correction and demand weakness in the global PC market have already normalized, with the impacts from COVID-19 largely being factored in. All this has created a comparatively healthy backdrop for reshaping the PC industry. Counterpoint estimates that almost half a billion AI laptop PCs will be sold during the 2023-2027 period, with AI PCs reviving the replacement demand.

Counterpoint separates GenAI laptop PCs into three categories - AI basic, AI-advanced and AI-capable laptops - based on their levels of computational performance, corresponding use cases and computational efficiency. We believe AI basic laptops, which are already on the market, can perform basic AI tasks but not full GenAI workloads. Starting this year, they will be supplanted by AI-advanced and AI-capable models with enough TOPS (tera operations per second), delivered by an NPU (neural processing unit) or GPU (graphics processing unit), to handle advanced GenAI tasks well.
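For context, vendor TOPS ratings are commonly derived as two operations (a multiply plus an accumulate) per MAC unit per clock cycle. The unit count and clock frequency below are hypothetical, chosen only to show the arithmetic:

```python
# TOPS is commonly quoted as 2 ops (multiply + accumulate) per MAC unit
# per clock. The MAC count and clock below are hypothetical, not any
# specific NPU.
def tops(mac_units: int, clock_hz: float) -> float:
    return 2 * mac_units * clock_hz / 1e12

print(tops(16_384, 1.25e9))  # 40.96 TOPS for this hypothetical NPU
```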

Google Launches Axion Arm-based CPU for Data Center and Cloud

Google has officially joined the club of custom Arm-based, in-house-developed CPUs. As of today, Google's in-house semiconductor development team has launched the "Axion" CPU based on Arm instruction set architecture. Using the Arm Neoverse V2 cores, Google claims that the Axion CPU outperforms general-purpose Arm chips by 30% and Intel's processors by a staggering 50% in terms of performance. This custom silicon will fuel various Google Cloud offerings, including Compute Engine, Kubernetes Engine, Dataproc, Dataflow, and Cloud Batch. The Axion CPU, designed from the ground up, will initially support Google's AI-driven services like YouTube ads and Google Earth Engine. According to Mark Lohmeyer, Google Cloud's VP and GM of compute and machine learning infrastructure, Axion will soon be available to cloud customers, enabling them to leverage its performance without overhauling their existing applications.

Google's foray into custom silicon aligns with the strategies of its cloud rivals, Microsoft and Amazon. Microsoft recently unveiled its own AI chip for training large language models and an Arm-based CPU called Cobalt 100 for cloud and AI workloads. Amazon, on the other hand, has been offering Arm-based servers through its custom Graviton CPUs for several years. While Google won't sell these chips directly to customers, it plans to make them available through its cloud services, enabling businesses to rent and leverage their capabilities. As Amin Vahdat, the executive overseeing Google's in-house chip operations, stated, "Becoming a great hardware company is very different from becoming a great cloud company or a great organizer of the world's information."

Intel Unleashes Enterprise AI with Gaudi 3, AI Open Systems Strategy and New Customer Wins

At the Intel Vision 2024 customer and partner conference, Intel introduced the Intel Gaudi 3 accelerator to bring performance, openness and choice to enterprise generative AI (GenAI), and unveiled a suite of new open scalable systems, next-gen products and strategic collaborations to accelerate GenAI adoption. With only 10% of enterprises successfully moving GenAI projects into production last year, Intel's latest offerings address the challenges businesses face in scaling AI initiatives.

"Innovation is advancing at an unprecedented pace, all enabled by silicon - and every company is quickly becoming an AI company," said Intel CEO Pat Gelsinger. "Intel is bringing AI everywhere across the enterprise, from the PC to the data center to the edge. Our latest Gaudi, Xeon and Core Ultra platforms are delivering a cohesive set of flexible solutions tailored to meet the changing needs of our customers and partners and capitalize on the immense opportunities ahead."

Cerebras & G42 Break Ground on Condor Galaxy 3 - an 8 exaFLOPs AI Supercomputer

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the Abu Dhabi-based leading technology holding group, today announced the build of Condor Galaxy 3 (CG-3), the third cluster of their constellation of AI supercomputers, the Condor Galaxy. Featuring 64 of Cerebras' newly announced CS-3 systems - all powered by the industry's fastest AI chip, the Wafer-Scale Engine 3 (WSE-3) - Condor Galaxy 3 will deliver 8 exaFLOPs of AI with 58 million AI-optimized cores. The Cerebras and G42 strategic partnership already delivered 8 exaFLOPs of AI supercomputing performance via Condor Galaxy 1 and Condor Galaxy 2, each amongst the largest AI supercomputers in the world. Located in Dallas, Texas, Condor Galaxy 3 brings the current total of the Condor Galaxy network to 16 exaFLOPs.

"With Condor Galaxy 3, we continue to achieve our joint vision of transforming the worldwide inventory of AI compute through the development of the world's largest and fastest AI supercomputers," said Kiril Evtimov, Group CTO of G42. "The existing Condor Galaxy network has trained some of the leading open-source models in the industry, with tens of thousands of downloads. By doubling the capacity to 16 exaFLOPs, we look forward to seeing the next wave of innovation Condor Galaxy supercomputers can enable." At the heart of Condor Galaxy 3 are 64 Cerebras CS-3 systems. Each CS-3 is powered by the new 4-trillion-transistor, 900,000-AI-core WSE-3. Manufactured by TSMC on the 5-nanometer node, the WSE-3 delivers twice the performance of the previous-generation part at the same power and price. Purpose-built for training the industry's largest AI models, the WSE-3 delivers an astounding 125 petaflops of peak AI performance per chip.
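The quoted cluster totals check out arithmetically:

```python
# Sanity-checking the Condor Galaxy 3 math quoted above.
systems = 64
pflops_per_cs3 = 125      # peak AI petaflops per CS-3
cores_per_wse3 = 900_000  # AI-optimized cores per WSE-3

total_exaflops = systems * pflops_per_cs3 / 1000
total_cores = systems * cores_per_wse3
print(total_exaflops)  # 8.0 exaFLOPs
print(total_cores)     # 57,600,000 cores, i.e. ~58 million
```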

ZOTAC Expands Computing Hardware with GPU Server Product Line for the AI-Bound Future

ZOTAC Technology Limited, a global leader in innovative technology solutions, expands its product portfolio with the GPU Server Series. The first series of products in ZOTAC's Enterprise lineup offers organizations affordable and high-performance computing solutions for a wide range of demanding applications, from core-to-edge inferencing and data visualization to model training, HPC modeling, and simulation.

The ZOTAC series of GPU Servers comes in a diverse range of form factors and configurations, featuring both Tower Workstations and Rack Mount Servers, as well as both Intel and AMD processor configurations. With support for up to 10 GPUs, modular design for easier access to internal hardware, a high space-to-performance ratio, and industry-standard features like redundant power supplies and extensive cooling options, ZOTAC's enterprise solutions can ensure optimal performance and durability, even under sustained intense workloads.

NVIDIA Data Center GPU Business Predicted to Generate $87 Billion in 2024

Omdia, an independent analyst and consultancy firm, has bestowed the title of "Kingmaker" on NVIDIA—thanks to impressive 2023 results in the data server market. The research firm predicts very buoyant numbers for the financial year of 2024—their February Cloud and Datacenter Market snapshot/report guesstimates that Team Green's data center GPU business group has the potential to rake in $87 billion of revenue. Omdia's forecast is based on last year's numbers—Jensen & Co. managed to pull in $34 billion, courtesy of an unmatched/dominant position in the AI GPU industry sector. Analysts have estimated a 150% rise in revenues in 2024—the majority of popular server manufacturers are reliant on NVIDIA's supply of chips. Super Micro Computer Inc. CEO—Charles Liang—disclosed that his business is experiencing strong demand for cutting-edge server equipment, but complications have slowed down production: "once we have more supply from the chip companies, from NVIDIA, we can ship more to customers."

Demand for AI inference in 2023 accounted for 40% of NVIDIA data center GPU revenue—according to Omdia's expert analysis—they predict further growth this year. Team Green's comfortable AI-centric business model could expand to a greater extent—2023 market trends indicated that enterprise customers had spent less on acquiring/upgrading traditional server equipment. Instead, they prioritized the channeling of significant funds into "AI heavyweight hardware." Omdia's report discussed these shifted priorities: "This reaffirms our thesis that end users are prioritizing investment in highly configured server clusters for AI to the detriment of other projects, including delaying the refresh of older server fleets." Late February reports suggest that NVIDIA H100 GPU supply issues are largely resolved—with much improved production timeframes. Insiders at unnamed AI-oriented organizations have admitted that leadership has resorted to selling off excess stock. The Omdia forecast proposes—somewhat surprisingly—that H100 GPUs will continue to be "supply-constrained" throughout 2024.
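The growth math is easy to verify: a 150% rise multiplies revenue by 2.5, which on the cited ~$34 billion base lands close to the $87 billion forecast (the small gap likely reflects rounding of the base figure):

```python
# A 150% rise means revenue multiplies by 2.5. Applied to the roughly
# $34 billion 2023 figure cited above, that lands near Omdia's
# ~$87 billion forecast.
base_revenue_bn = 34
projected_bn = base_revenue_bn * (1 + 1.50)
print(projected_bn)  # 85.0, in the ballpark of the $87 bn forecast
```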

Google: CPUs are Leading AI Inference Workloads, Not GPUs

The AI infrastructure of today is mostly fueled by the expansion of GPU-accelerated servers. Google, one of the world's largest hyperscalers, has noted that CPUs are still a leading compute platform for AI/ML workloads, according to internal analysis of its Google Cloud services. During the TechFieldDay event, a talk by Brandon Royal, product manager at Google Cloud, explained the position of CPUs in today's AI game. The AI lifecycle is divided into two parts: training and inference. During training, massive compute capacity is needed, along with enormous memory capacity, to fit ever-expanding AI models into memory. The latest models, like GPT-4 and Gemini, contain billions of parameters and require thousands of GPUs or other accelerators working in parallel to train efficiently.

On the other hand, inference requires less compute intensity but still benefits from acceleration. The pre-trained model is optimized and deployed during inference to make predictions on new data. While less compute is needed than for training, latency and throughput are essential for real-time inference. Google found that, while GPUs are ideal for the training phase, models are often optimized to run inference on CPUs. This means that there are customers who choose CPUs as their medium of AI inference for a wide variety of reasons.
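To make the training/inference asymmetry concrete, here is a toy forward pass that runs on any CPU with no accelerator; the two-layer network and its weights are invented for illustration, not a real model:

```python
# Minimal illustration of why inference is lighter than training: a
# single forward pass through a toy two-layer network, runnable on any
# CPU. Weights are fixed toy values, not a trained model.
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

# Toy "pre-trained" weights, assumed for illustration.
W1 = [[0.5, -0.2], [0.1, 0.4]]
W2 = [[1.0, -1.0]]

def infer(features):
    hidden = relu(matvec(W1, features))
    return matvec(W2, hidden)

print(infer([1.0, 2.0]))
```

Training would additionally require gradients and many passes over a large dataset; inference is just this cheap forward computation, which is why well-optimized models can serve predictions from CPUs.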

GIGABYTE Highlights its GPU Server Portfolio Ahead of World AI Festival

The World AI Cannes Festival (WAICF) is set to be the epicenter of artificial intelligence innovation, where the globe's top 200 decision-makers and AI innovators will converge for three days of intense discussions on groundbreaking AI strategies and use-cases. Against the backdrop of this premier event, GIGABYTE has strategically chosen to participate, unveiling its exponential growth in the AI and High-Performance Computing (HPC) market segments.

The AI industry has witnessed unprecedented growth, with Cloud Service Providers (CSPs) and data center operators spearheading supercomputing projects. GIGABYTE's decision to promote its GPU server portfolio of over 70 models at WAICF is a testament to the increasing demands from the French market for sovereign AI Cloud solutions. The spotlight will be on GIGABYTE's success stories on enabling GPU Cloud infrastructure, seamlessly powered by NVIDIA GPU technologies, as GIGABYTE aims to engage in meaningful conversations with end-users and firms dependent on GPU computing.

Indian Client Purchases Additional $500 Million Batch of NVIDIA AI GPUs

Indian data center operator Yotta is reportedly set to spend big with another order placed with NVIDIA—a recent Reuters article outlines a $500 million purchase of Team Green AI GPUs. Yotta is in the process of upgrading its AI Cloud infrastructure, and its total tally for this endeavor (involving Hopper and newer Grace Hopper models) is likely to hit $1 billion. An official company statement from December confirmed the existence of an extra procurement of GPUs, but it did not provide any details regarding budget or hardware choices at that point in time. Reuters contacted Sunil Gupta, Yotta's CEO, last week for a comment on the situation. The co-founder elaborated that "the order would comprise nearly 16,000 of NVIDIA's artificial intelligence chips H100 and GH200 and will be placed by March 2025."

Team Green is ramping up its embrace of the Indian data center market, as US sanctions have made it difficult to conduct business with enterprise customers in nearby Chinese territories. Reuters states that Gupta's firm (Yotta) is: "part of Indian billionaire Niranjan Hiranandani's real estate group, (in turn) a partner firm for NVIDIA in India and runs three data centre campuses, in Mumbai, Gujarat and near New Delhi." Microsoft, Google and Amazon are investing heavily in cloud and data centers situated in India. Shankar Trivedi, an NVIDIA executive, recently attended Vibrant Gujarat Global Summit—the article's reporter conducted a brief interview with him. Trivedi stated that Yotta is targeting a March 2024 start for a new NVIDIA-powered AI data center located in the region's tech hub: Gujarat International Finance Tec-City.

Synology Shows Off New Personal Cloud and Surveillance Products at CES 2024

NAS major Synology showcased its latest consumer NAS, personal cloud, and surveillance solutions at the 2024 International CES. Starting things off are the company's latest TC500 and BC500 high-resolution wired cameras, and the CC400W wireless camera, enhanced with AI. This may sound nebulous, but upon detecting movement, the cameras automatically zoom in on faces or vehicle license plates, and automatically dial up video resolution and probably bitrate (image quality). The wired cameras come with 5 MP sensors, while the wireless ones have 4 MP (capable of 1440p @ 30 FPS). Each comes with an illuminator for dark conditions that is effective up to 30 m. They also come with microSD slots, so that in the event of a network failure, recording continues onto a memory card. You don't need a Synology Surveillance Station device license to use these.

Moving on to the stuff Synology is best known for, NAS: the upgraded DiskStation DS1823xs+ is an 8-bay, business-grade NAS with sequential throughput of 3,100 MB/s reads and 2,600 MB/s writes; over 173,000 IOPS random reads, and over 80,800 IOPS random writes. The main networking interface is 10 GbE, with an additional PCIe Gen 3 x4 slot to drop in more NICs. The NAS can pair with DX517 expansion units over USB 3.1 to scale up to 18 drives. The DS423+ is a compact 4-bay NAS powered by a Celeron J4125 quad-core CPU, 2 GB of RAM, and room for two M.2-2280 NVMe SSDs besides the four 3.5-inch HDDs. The maximum rated throughput is still around 226 MB/s over its two 1 GbE network interfaces. The DS224+ is nearly the same device, but with just two 3.5-inch bays, and two 2.5 GbE interfaces.
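Some rough link-speed arithmetic puts those NAS figures in context (these are raw line rates, before protocol overhead):

```python
# Raw line-rate arithmetic for Ethernet links: 1 Gb/s is 125 MB/s, so
# two aggregated 1 GbE links cap at 250 MB/s raw, which is consistent
# with a ~226 MB/s real-world rating after protocol overhead.
def gbe_mbps(links: int, gigabits_per_link: float = 1.0) -> float:
    """Raw aggregate throughput in MB/s for a set of Ethernet links."""
    return links * gigabits_per_link * 1e9 / 8 / 1e6

print(gbe_mbps(2))      # 250.0 MB/s raw for 2 x 1 GbE
print(gbe_mbps(1, 10))  # 1250.0 MB/s raw for a single 10 GbE
```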

Microsoft's Next-Gen Xbox for 2028 to Combine AMD Zen 6 and RDNA5 with a Powerful NPU and Cloud Integration

Microsoft's Xbox Series X/S consoles, their hardware refreshes, and variants will reportedly be the company's mainstay all the way up until 2028, the company disclosed in documents filed as part of its antitrust lawsuit with the FTC. In a presentation slide titled "From Zero Microsoft to Full Microsoft," the company explains how its next-gen Xbox, scheduled for calendar year (CY) 2028, will see a full convergence of Microsoft co-developed hardware, software, and cloud compute services into a powerful entertainment system. It elaborates on this in another slide, titled "Cohesive Hybrid Compute," where it states the company's vision to be the development of "a next generation hybrid game platform capable of leveraging the combined power of the client and cloud to deliver deeper immersion and entirely new classes of game experiences."

From the looks of it, Microsoft fully understands the creator economy that has been built around the gaming industry, and wants its next-gen console to target exactly this—a single device from which people can play, stream, and create content—something traditionally reserved for gaming desktop PCs. Game streamers playing on consoles usually have an entire creator PC setup handling the production and streaming side of things. Keeping this exact use-case in mind, Microsoft plans to "enable new levels of performance beyond the capabilities of the client hardware alone," by which it means that the console will not only rely on its own hardware—which could be jaw-droppingly powerful—but also leverage cloud compute services from Microsoft.

AWS Unveils Next Generation AWS-Designed Graviton4 and Trainium2 Chips

At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), today announced the next generation of two AWS-designed chip families—AWS Graviton4 and AWS Trainium2—delivering advancements in price performance and energy efficiency for a broad range of customer workloads, including machine learning (ML) training and generative artificial intelligence (AI) applications. Graviton4 and Trainium2 mark the latest innovations in chip design from AWS. With each successive generation of chip, AWS delivers better price performance and energy efficiency, giving customers even more options—in addition to chip/instance combinations featuring the latest chips from third parties like AMD, Intel, and NVIDIA—to run virtually any application or workload on Amazon Elastic Compute Cloud (Amazon EC2).

NVIDIA Experiences Strong Cloud AI Demand but Faces Challenges in China, with High-End AI Server Shipments Expected to Be Below 4% in 2024

NVIDIA's most recent FY3Q24 financial reports reveal record-high revenue coming from its data center segment, driven by escalating demand for AI servers from major North American CSPs. However, TrendForce points out that recent US government sanctions targeting China have impacted NVIDIA's business in the region. Despite strong shipments of NVIDIA's high-end GPUs—and the rapid introduction of compliant products such as the H20, L20, and L2—Chinese cloud operators are still in the testing phase, making substantial revenue contributions to NVIDIA unlikely in Q4. Gradual shipment increases are expected from the first quarter of 2024.

The US ban continues to influence China's foundry market as Chinese CSPs' high-end AI server shipments potentially drop below 4% next year
TrendForce reports that North American CSPs like Microsoft, Google, and AWS will remain key drivers of high-end AI server demand (including servers with NVIDIA, AMD, or other high-end ASIC chips) from 2023 to 2024. Their estimated shipment shares for 2024 are 24%, 18.6%, and 16.3%, respectively. Chinese CSPs such as ByteDance, Baidu, Alibaba, and Tencent (BBAT) are projected to have a combined shipment share of approximately 6.3% in 2023. However, this could decrease to less than 4% in 2024, considering the current and potential future impacts of the ban.
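Summing the quoted shares shows how concentrated 2024 demand is expected to be:

```python
# Adding up the shipment shares quoted above for 2024.
na_csp_shares = {"Microsoft": 24.0, "Google": 18.6, "AWS": 16.3}
bbat_share = 4.0  # upper bound: "less than 4%" per TrendForce

na_total = round(sum(na_csp_shares.values()), 1)
others = round(100 - na_total - bbat_share, 1)
print(na_total)  # 58.9% combined for the three North American CSPs
print(others)    # at least 37.1% left for all other buyers
```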

Ansys Collaborates with TSMC and Microsoft to Accelerate Mechanical Stress Simulation for 3D-IC Reliability in the Cloud

Ansys has collaborated with TSMC and Microsoft to validate a joint solution for analyzing mechanical stresses in multi-die 3D-IC systems manufactured with TSMC's 3DFabric advanced packaging technologies. This collaborative solution gives customers added confidence to address novel multiphysics requirements that improve the functional reliability of advanced designs using TSMC's 3DFabric, a comprehensive family of 3D silicon stacking and advanced packaging technologies.

Ansys Mechanical is the industry-leading finite element analysis software used to simulate mechanical stresses caused by thermal gradients in 3D-ICs. The solution flow has been proven to run efficiently on Microsoft Azure, helping to ensure fast turn-around times with today's very large and complex 2.5D/3D-IC systems.
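For a sense of the physics involved, the textbook estimate for a fully constrained layer under uniform heating is sigma = E x alpha x delta-T; tools like Ansys Mechanical resolve the same effect across full 3D package geometries. The material values below are generic silicon figures assumed for illustration, not numbers from Ansys or TSMC:

```python
# Textbook thermal-stress estimate of the kind such FEA tools resolve
# in far greater detail: a fully constrained layer under uniform
# heating develops stress sigma = E * alpha * delta_T.
E_si = 130e9       # Young's modulus of silicon, Pa (approximate)
alpha_si = 2.6e-6  # thermal expansion coefficient, 1/K (approximate)
delta_t = 50       # temperature rise, K (hypothetical)

sigma_mpa = E_si * alpha_si * delta_t / 1e6
print(f"{sigma_mpa:.1f} MPa")  # ~16.9 MPa
```

Real 3D-IC stacks add anisotropy, bonding layers, and non-uniform thermal gradients, which is why finite element analysis rather than a closed-form formula is needed.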

Microsoft Introduces 128-Core Arm CPU for Cloud and Custom AI Accelerator

During its Ignite conference, Microsoft introduced a duo of custom-designed silicon chips made to accelerate AI and excel in cloud workloads. First of the two is Microsoft's Azure Cobalt 100 CPU, a 128-core design that implements the 64-bit Armv9 instruction set in a cloud-native design set to become a part of Microsoft's offerings. While there aren't many details regarding the configuration, the company claims a performance target of up to 40% higher than the current generation of Arm servers running on Azure cloud. The SoC uses Arm's Neoverse CSS platform customized for Microsoft, presumably with Arm Neoverse N2 cores.

The next and hottest topic in the server space is AI acceleration, which is needed for running today's large language models. Microsoft hosts OpenAI's ChatGPT, Microsoft's Copilot, and many other AI services. To help them run as fast as possible, Microsoft's project Athena now has a name: the Maia 100 AI accelerator, manufactured on TSMC's 5 nm process. It features 105 billion transistors and supports various MX data formats, even those smaller than 8 bits, for maximum performance. Currently being tested on GPT-3.5 Turbo, it has yet to produce public performance figures or comparisons with competing hardware from NVIDIA, like the H100/H200, and AMD's MI300X. The Maia 100 has an aggregate bandwidth of 4.8 terabits per accelerator, using a custom Ethernet-based networking protocol for scaling. These chips are expected to appear in Microsoft data centers early next year, and we hope to get some performance numbers soon.
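The MX formats mentioned above are block-scaled: a group of values shares one power-of-two scale so each element can be stored in very few bits. The toy sketch below quantizes to signed integers to show the idea; real MX formats use narrow floating-point elements as defined in the OCP Microscaling specification, not this code:

```python
# Hedged sketch of the idea behind block-scaled "MX" data formats: a
# block of values shares one power-of-two scale, and each element is
# stored in few bits. This toy version uses signed integers; real MX
# formats use narrow floating-point elements per the OCP spec.
import math

def quantize_block(values, element_bits=8):
    qmax = 2 ** (element_bits - 1) - 1  # e.g. 127 for 8-bit elements
    amax = max(abs(v) for v in values)
    # Shared power-of-two scale large enough to fit the block's max.
    scale = 2.0 ** math.ceil(math.log2(amax / qmax)) if amax else 1.0
    elements = [round(v / scale) for v in values]
    return scale, elements

def dequantize_block(scale, elements):
    return [e * scale for e in elements]

scale, elems = quantize_block([0.5, -1.25, 3.0, 0.0])
approx = dequantize_block(scale, elems)
print(scale, elems, approx)
```

Sharing one scale per block keeps most of the dynamic-range benefit of floating point while storing each element in a fraction of the bits, which is why such formats matter for accelerator throughput.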

TOP500 Update: Frontier Remains No. 1 With Aurora Coming in at No. 2

The 62nd edition of the TOP500 reveals that the Frontier system retains its top spot and is still the only exascale machine on the list. However, five new or upgraded systems have shaken up the Top 10.

Housed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, Frontier leads the pack with an HPL score of 1.194 EFlop/s - unchanged from the June 2023 list. Frontier utilizes AMD EPYC 64C 2GHz processors and is based on the latest HPE Cray EX235a architecture. The system has a total of 8,699,904 combined CPU and GPU cores. Additionally, Frontier has an impressive power efficiency rating of 52.59 GFlops/watt and relies on HPE's Slingshot 11 network for data transfer.

Jabil to Take Over Intel Silicon Photonics Business

Jabil Inc., a global leader in design, manufacturing, and supply chain solutions, today announced it will take over the manufacture and sale of Intel's current Silicon Photonics-based pluggable optical transceiver ("module") product lines and the development of future generations of such modules.

"This deal better positions Jabil to cater to the needs of our valued customers in the data center industry, including hyperscale, next-wave clouds, and AI cloud data centers. These complex environments present unique challenges, and we are committed to tackling them head-on and delivering innovative solutions to support the evolving demands of the data center ecosystem," stated Matt Crowley, Senior Vice President of Cloud and Enterprise Infrastructure at Jabil. "This deal enables Jabil to expand its presence in the data center value chain."

Samsung Electronics Announces Third Quarter 2023 Results

Samsung Electronics today reported financial results for the third quarter ended September 30, 2023. Total consolidated revenue was KRW 67.40 trillion, a 12% increase from the previous quarter, mainly due to new smartphone releases and higher sales of premium display products. Operating profit rose sequentially to KRW 2.43 trillion based on strong sales of flagship models in mobile and strong demand for displays, as losses at the Device Solutions (DS) Division narrowed.

The Memory Business reduced losses sequentially as sales of high value-added products and average selling prices somewhat increased. Earnings in system semiconductors were impacted by a delay in demand recovery for major applications, but the Foundry Business posted a new quarterly high for new backlog from design wins. The mobile panel business reported a significant increase in earnings on the back of new flagship model releases by major customers, while the large panel business narrowed losses in the quarter. The Device eXperience (DX) Division achieved solid results due to robust sales of premium smartphones and TVs. Revenue at the Networks Business declined in major overseas markets as mobile operators scaled back investments.

Western Digital Reports Fiscal First Quarter 2024 Financial Results

Western Digital Corp. (Nasdaq: WDC) today reported fiscal first quarter 2024 financial results. "Western Digital's fiscal first quarter results exceeded our expectations as the team's efforts to bolster business agility and develop differentiated and innovative products across a broad range of end-markets have resulted in sequential margin improvement across flash and HDD businesses," said David Goeckeler, Western Digital CEO. "Our Consumer and Client end markets continue to perform well and we now expect our Cloud end market to grow going forward. Our improved cost structure positions Western Digital to capitalize on enhanced earnings power as market conditions continue to improve."
  • First quarter revenue was $2.75 billion, up 3% sequentially (QoQ). Cloud revenue decreased 12% (QoQ), Client revenue increased 11% (QoQ) and Consumer revenue increased 14% (QoQ).
  • First quarter GAAP earnings per share (EPS) was $(2.17) and Non-GAAP EPS was $(1.76), which includes $225 million of underutilization-related charges in Flash and HDD.
  • Expect fiscal second quarter 2024 revenue to be in the range of $2.85 billion to $3.05 billion.
  • Expect Non-GAAP EPS in the range of $(1.35) to $(1.05), which includes $110 to $130 million of underutilization-related charges in Flash and HDD.
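Working the bullet-point numbers through gives the implied prior-quarter base and the growth baked into guidance:

```python
# Working through the sequential numbers in the bullets above.
q1_revenue_bn = 2.75
qoq_growth = 0.03
prior_q_bn = q1_revenue_bn / (1 + qoq_growth)

guide_low, guide_high = 2.85, 3.05
guide_mid = (guide_low + guide_high) / 2
implied_growth = guide_mid / q1_revenue_bn - 1

print(round(prior_q_bn, 2))            # ~2.67: implied prior-quarter revenue, $bn
print(round(implied_growth * 100, 1))  # ~7.3: implied sequential growth, %
```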

Oxide Unveils the World's First Commercial Cloud Computer

Oxide Computer Company today unveiled the world's first commercial Cloud Computer, a true rack-scale system with fully unified hardware and software designed for on-premises cloud computing. The company also announced a $44 million Series A financing round led by Eclipse with participation from Intel Capital, Riot Ventures, Counterpart Ventures, and Rally Ventures to accelerate production for Fortune 1000 enterprises. This brings the company's total financing raised to date to $78 million.

Despite the rapid rise of cloud computing, the vast majority of IT infrastructure today continues to live outside the public cloud in on-premises data centers, where enterprises are forced to cobble together a set of disjointed hardware and software components to run their businesses. Since its inception, Oxide's mission has been to solve this problem, developing the first unified product that delivers the developer experience and operational efficiencies of the public cloud to on-premises environments.
Nov 25th, 2024 12:00 EST
