News Posts matching #Cloud


Microsoft Announces its FY25 Q2 Earnings Release

Microsoft Corp. today announced the following results for the quarter ended December 31, 2024, as compared to the corresponding period of last fiscal year:
  • Revenue was $69.6 billion and increased 12%
  • Operating income was $31.7 billion and increased 17% (up 16% in constant currency)
  • Net income was $24.1 billion and increased 10%
  • Diluted earnings per share was $3.23 and increased 10%
"We are innovating across our tech stack and helping customers unlock the full ROI of AI to capture the massive opportunity ahead," said Satya Nadella, chairman and chief executive officer of Microsoft. "Already, our AI business has surpassed an annual revenue run rate of $13 billion, up 175% year-over-year."

Supermicro Empowers AI-driven Capabilities for Enterprise, Retail, and Edge Server Solutions

Supermicro, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is showcasing the latest solutions for the retail industry in collaboration with NVIDIA at the National Retail Federation (NRF) annual show. As generative AI (GenAI) grows in capability and becomes more easily accessible, retailers are leveraging NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform, for a broad spectrum of applications.

"Supermicro's innovative server, storage, and edge computing solutions improve retail operations, store security, and operational efficiency," said Charles Liang, president and CEO of Supermicro. "At NRF, Supermicro is excited to introduce retailers to AI's transformative potential and to revolutionize the customer's experience. Our systems here will help resolve day-to-day concerns and elevate the overall buying experience."

Qualcomm Pushes for Data Center CPUs, Hires Ex-Intel Chief Xeon Architect

Qualcomm is becoming serious about its server CPU ambitions. Today, we have learned that Sailesh Kottapalli, Intel's former chief architect for Xeon server processors, has joined Qualcomm as Senior Vice President after 28 years at Intel. Kottapalli, who announced his departure on LinkedIn Monday, previously led the development of multiple Xeon and Itanium processors at Intel. Qualcomm's data center team is currently working on reference platforms based on its Snapdragon technology. The company already sells AI accelerator chips under the Qualcomm Cloud AI brand, supported by major providers including AWS, HPE, and Lenovo.

This marks Qualcomm's second attempt at entering the server CPU market, following an unsuccessful Centriq effort that ended in 2018. The company is now leveraging technology from its $1.4 billion Nuvia acquisition in 2021, though this has led to ongoing legal disputes with Arm over licensing terms. While Qualcomm hasn't officially detailed Kottapalli's role, the company confirmed in legal filings its intentions to continue developing data center CPUs, as originally planned by Nuvia.

LG and Xbox Partner to Expand Cloud Gaming Experience on LG Smart TVs

LG Electronics (LG) has announced a partnership with Xbox, providing players access to hundreds of games with the Xbox app on LG Smart TVs. Owners of LG's latest Smart TVs will be able to effortlessly discover and play a wide selection of PC and console games from industry-leading partners, and soon Xbox, through the new Gaming Portal. This versatile, gaming-centric hub is designed as an all-in-one solution for seamless navigation and personalized gaming, both for the latest AAA games and casual webOS app games.

LG Smart TV users will soon be able to explore the Gaming Portal for direct access to hundreds of games with Xbox Game Pass Ultimate, including popular titles like Call of Duty: Black Ops 6 and highly anticipated releases like Avowed. With Game Pass Ultimate, players will also be able to stream a catalog of select Xbox games they own, such as NBA 2K25 or Hogwarts Legacy.

NVIDIA GeForce NOW Gets Indiana Jones and the Great Circle and Twelve More Games

GeForce NOW is wrapping a sleigh-full of gaming gifts this month, stuffing members' cloud gaming stockings with new titles and fresh offers to keep holiday gaming spirits merry and bright. Adventure calls and whip-cracking action awaits in the highly anticipated Indiana Jones and the Great Circle, streaming in the cloud today during the Advanced Access period for those who have preordered the Premium Edition from Steam or the Microsoft Store. The title can only be played with RTX ON, so GeForce NOW is offering gamers without high-performance hardware the ability to play it with 25% off Ultimate and Performance Day Passes. It's like finding that extra-special gift hidden behind the tree.

This GFN Thursday also brings a new limited-time offer: 50% off the first month of new Ultimate or Performance memberships - a gift that can keep on giving. Whether looking to try out GeForce NOW or buckle in for long-term cloud gaming, new members can choose between the Day Pass sale or the new membership offer. There's a perfect gaming gift for everyone this holiday season. GFN Thursday also brings 13 new titles in December, with four available this week to get the festivities going. Plus, the latest update to GeForce NOW - version 2.0.69 - includes expanded support for 10-bit color precision. This feature enhances image quality when streaming on Windows, macOS and NVIDIA SHIELD TVs - and now to Edge and Chrome browsers on Windows devices, as well as to the Chrome browser on Chromebooks, Samsung TVs and LG TVs.
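For context on the 10-bit color feature mentioned above, moving from 8-bit to 10-bit per-channel precision quadruples the number of gradations each color channel can express, which reduces visible banding in smooth gradients when streaming:

```python
# Per-channel gradation counts for 8-bit vs. 10-bit color depth.
levels_8bit = 2 ** 8    # 256 levels per channel
levels_10bit = 2 ** 10  # 1024 levels per channel

print(levels_8bit, levels_10bit)    # 256 1024
print(levels_10bit // levels_8bit)  # 4x more gradations per channel
```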

Amazon AWS Announces General Availability of Trainium2 Instances, Reveals Details of Next Gen Trainium3 Chip

At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, today announced the general availability of AWS Trainium2-powered Amazon Elastic Compute Cloud (Amazon EC2) instances and introduced new Trn2 UltraServers, enabling customers to train and deploy today's latest AI models, as well as future large language models (LLMs) and foundation models (FMs), with exceptional levels of performance and cost efficiency. AWS also unveiled its next-generation Trainium3 chips.

"Trainium2 is purpose built to support the largest, most cutting-edge generative AI workloads, for both training and inference, and to deliver the best price performance on AWS," said David Brown, vice president of Compute and Networking at AWS. "With models approaching trillions of parameters, we understand customers also need a novel approach to train and run these massive workloads. New Trn2 UltraServers offer the fastest training and inference performance on AWS and help organizations of all sizes to train and deploy the world's largest models faster and at a lower cost."

Lenovo Launches ThinkShield Firmware Assurance for Deep Protection Above and Below the Operating System

Today, Lenovo announced the introduction of ThinkShield Firmware Assurance as part of its portfolio of enterprise-grade cybersecurity solutions. ThinkShield Firmware Assurance is one of the few computer OEM solutions to enable deep visibility and protection below the operating system (OS) by embracing Zero Trust Architecture (ZTA) component-level visibility to generate more accurate and actionable risk management insights.

As a security paradigm, ZTA explicitly identifies users and devices to grant appropriate levels of access so a business can operate with less risk and minimal friction. ZTA is a critical framework to reduce risk as organizations endeavor to complete Zero-Trust implementations.
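As a rough illustration of the paradigm described above (hypothetical names and policy, not Lenovo's implementation), a zero-trust check evaluates both the user's identity and the device's posture on every request, rather than trusting network location:

```python
# Minimal zero-trust access check (illustrative sketch only; the field names
# and policy rules here are invented for the example).
from dataclasses import dataclass

@dataclass
class Device:
    firmware_verified: bool  # e.g., attested below the OS
    compliant: bool          # meets policy: patched, encrypted, etc.

def grant_access(user_role: str, device: Device, resource_sensitivity: str) -> bool:
    # An unhealthy device is denied regardless of who the user is.
    if not (device.firmware_verified and device.compliant):
        return False
    # Least privilege: sensitive resources require an elevated role.
    if resource_sensitivity == "high":
        return user_role == "admin"
    return True

laptop = Device(firmware_verified=True, compliant=True)
print(grant_access("admin", laptop, "high"))  # True
print(grant_access("user", laptop, "high"))   # False
```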

HPE Expands Direct Liquid-Cooled Supercomputing Solutions With Two AI Systems for Service Providers and Large Enterprises

Today, Hewlett Packard Enterprise announces its new high performance computing (HPC) and artificial intelligence (AI) infrastructure portfolio that includes leadership-class HPE Cray Supercomputing EX solutions and two systems optimized for large language model (LLM) training, natural language processing (NLP) and multi-modal model training. The new supercomputing solutions are designed to help global customers fast-track scientific research and invention.

"Service providers and nations investing in sovereign AI initiatives are increasingly turning to high-performance computing as the critical backbone enabling large-scale AI training that accelerates discovery and innovation," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "Our customers turn to us to fast-track their AI system deployment to realize value faster and more efficiently by leveraging our world-leading HPC solutions and decades of experience in delivering, deploying and servicing fully-integrated systems."

TerraMaster Launches Five New BBS Integrated Backup Servers

In an era where data has become a core asset for modern enterprises, TerraMaster, a global leader in data storage and management solutions, has announced the official launch of five high-performance integrated backup servers: T9-500 Pro, T12-500 Pro, U4-500, U8-500 Plus, and U12-500 Plus. This product release not only enriches TerraMaster's enterprise-level product line but also provides enterprise users with an integrated, efficient, and secure data backup solution - from hardware to software - by pairing these devices with the company's proprietary BBS Business Backup Suite.

Key Features of the New Integrated Backup Servers
  • T9-500 Pro & T12-500 Pro: As new members of TerraMaster's high-end series, these products feature compact designs and are easy to manage. With powerful processors, large memory capacities, and dual 10GbE network interfaces, they ensure high-efficiency data backup tasks, catering to the large-scale data storage and backup needs of small and medium-sized enterprises.
  • U4-500: Designed for SOHO, small offices, and remote work scenarios, the U4-500 features a compact 4-bay design and convenient network connectivity, making it an ideal data backup solution. Its user-friendly management interface allows for easy deployment and maintenance.
  • U8-500 Plus & U12-500 Plus: These two rackmount 8-bay and 12-bay upgraded models feature fully optimized designs, high-performance processors, and standard dual 10GbE high-speed interfaces. They not only improve data processing speeds but also enhance data security, making them particularly suitable for small and medium-sized enterprises that need to handle large volumes of data backup and recovery.

Microsoft Brings Copilot AI Assistant to Windows Terminal

Microsoft has taken another significant step in its AI integration strategy by introducing "Terminal Chat," an AI assistant now available in Windows Terminal. This latest feature brings conversational AI capabilities directly to the command-line interface, marking a notable advancement in making terminal operations more accessible to users of all skill levels. The new feature, currently available in Windows Terminal (Canary), leverages various AI services, including ChatGPT, GitHub Copilot, and Azure OpenAI, to provide interactive assistance for command-line operations. What sets Terminal Chat apart is its context-aware functionality, which automatically recognizes the specific shell environment being used—whether it's PowerShell, Command Prompt, WSL Ubuntu, or Azure Cloud Shell—and tailors its responses accordingly.

Users can interact with Terminal Chat through a dedicated interface within Windows Terminal, where they can ask questions, troubleshoot errors, and request guidance on specific commands. The system provides shell-specific suggestions, automatically adjusting its recommendations based on whether a user is working in Windows PowerShell, Linux, or other environments. For example, when asked about creating a directory, Terminal Chat will suggest "New-Item -ItemType Directory" for PowerShell users while providing "mkdir" as the appropriate command for Linux environments. This intelligent adaptation helps bridge the knowledge gap between different command-line interfaces, as demonstrated in testing by Windows Latest.
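The shell-aware behavior described above can be pictured as a lookup from (task, shell environment) to an idiomatic command. This is a hypothetical sketch for illustration only; the real Terminal Chat queries an AI service such as GitHub Copilot or Azure OpenAI rather than a static table:

```python
# Illustrative sketch of shell-aware command suggestion (hypothetical;
# Terminal Chat itself is backed by an AI service, not a lookup table).
SUGGESTIONS = {
    ("create directory", "PowerShell"): "New-Item -ItemType Directory",
    ("create directory", "Command Prompt"): "mkdir",
    ("create directory", "WSL Ubuntu"): "mkdir",
}

def suggest(task: str, shell: str) -> str:
    """Return the idiomatic command for the detected shell environment."""
    return SUGGESTIONS.get((task, shell), "No suggestion available")

print(suggest("create directory", "PowerShell"))  # New-Item -ItemType Directory
print(suggest("create directory", "WSL Ubuntu"))  # mkdir
```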

Microsoft Announces its FY25 Q1 Earnings Release

Microsoft Corp. today announced the following results for the quarter ended September 30, 2024, as compared to the corresponding period of last fiscal year:
  • Revenue was $65.6 billion and increased 16%
  • Operating income was $30.6 billion and increased 14%
  • Net income was $24.7 billion and increased 11% (up 10% in constant currency)
  • Diluted earnings per share was $3.30 and increased 10%
"AI-driven transformation is changing work, work artifacts, and workflow across every role, function, and business process," said Satya Nadella, chairman and chief executive officer of Microsoft. "We are expanding our opportunity and winning new customers as we help them apply our AI platforms and tools to drive new growth and operating leverage."

Ultra Accelerator Link Consortium Plans Year-End Launch of UALink v1.0

Ultra Accelerator Link (UALink) Consortium, led by board members from AMD, Amazon Web Services (AWS), Astera Labs, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta, and Microsoft, has announced the incorporation of the Consortium and is extending an invitation for membership to the community. The UALink Promoter Group was founded in May 2024 to define a high-speed, low-latency interconnect for scale-up communications between accelerators and switches in AI pods and clusters. According to the Consortium, "the UALink standard defines high-speed and low latency communication for scale-up AI systems in data centers."

Google Shows Production NVIDIA "Blackwell" GB200 NVL System for Cloud

Last week, we got a preview of Microsoft's Azure production-ready NVIDIA "Blackwell" GB200 system, showing that only a third of the rack that goes in the data center is actually holding the compute elements, with the other two-thirds holding the cooling compartment to cool down the immense heat output from tens of GB200 GPUs. Today, Google is showing off a part of its own infrastructure ahead of the Google Cloud App Dev & Infrastructure Summit, a digital event taking place on October 30. Shown below are two racks standing side by side, connecting NVIDIA "Blackwell" GB200 NVL cards with the rest of the Google infrastructure. Unlike Microsoft Azure, Google Cloud uses a different data center design in its facilities.

There is one rack with power distribution units, networking switches, and cooling distribution units, all connected to the compute rack, which houses power supplies, GPUs, and CPU servers. The networking equipment connects to Google's "global" data center network, Google's own data center fabric. We are not sure which fabric connects these racks; for optimal performance, NVIDIA recommends InfiniBand (from its Mellanox acquisition). However, given that Google's infrastructure is set up differently, Ethernet switches may be present instead. Interestingly, Google's GB200 rack design differs from Azure's, as it uses additional rack space to distribute the coolant to its local heat exchangers, i.e., coolers. We are curious to see whether Google releases more information on this infrastructure; the company is known as the infrastructure king for its ability to scale while keeping everything organized.

Supermicro Adds New Petascale JBOF All-Flash Storage Solution Integrating NVIDIA BlueField-3 DPU for AI Data Pipeline Acceleration

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is launching a new optimized storage system for high-performance AI training, inference, and HPC workloads. This JBOF (Just a Bunch of Flash) system utilizes up to four NVIDIA BlueField-3 data processing units (DPUs) in a 2U form factor to run software-defined storage workloads. Each BlueField-3 DPU features 400 Gb Ethernet or InfiniBand networking and hardware acceleration for compute-intensive storage and networking workloads such as encryption, compression, and erasure coding, as well as AI storage expansion. The state-of-the-art, dual-port JBOF architecture enables active-active clustering, ensuring high availability for scale-up mission-critical storage applications as well as scale-out storage such as object storage and parallel file systems.

"Supermicro's new high performance JBOF Storage System is designed using our Building Block approach which enables support for either E3.S or U.2 form-factor SSDs and the latest PCIe Gen 5 connectivity for the SSDs and the DPU networking and storage platform," said Charles Liang, president and CEO of Supermicro. "Supermicro's system design supports 24 or 36 SSD's enabling up to 1.105PB of raw capacity using 30.71 TB SSDs. Our balanced network and storage I/O design can saturate the full 400 Gb/s BlueField-3 line-rate realizing more than 250 GB/s bandwidth of the Gen 5 SSDs."

Jabil Intros New Servers Powered by AMD 5th Gen EPYC and Intel Xeon 6 Processors

Jabil Inc. announced today that it is expanding its server portfolio with the J421E-S and J422-S servers, powered by AMD 5th Generation EPYC and Intel Xeon 6 processors. These servers are purpose-built for scalability in a variety of cloud data center applications, including AI, high-performance computing (HPC), fintech, networking, storage, databases, and security — representing the latest generation of server innovation from Jabil.

Built with customization and innovation in mind, the design-ready J422-S and J421E-S servers will allow engineering teams to meet customers' specific requirements. By fine-tuning Jabil's custom BIOS and BMC firmware, Jabil can create a competitive advantage for customers by developing the server configuration needed for higher performance, data management, and security. The server platforms are now available for sampling and will be in production by the first half of 2025.

Astera Labs Introduces New Portfolio of Fabric Switches Purpose-Built for AI Infrastructure at Cloud-Scale

Astera Labs, Inc., a global leader in semiconductor-based connectivity solutions for AI and cloud infrastructure, today announced a new portfolio of fabric switches, including the industry's first PCIe 6 switch, built from the ground up for demanding AI workloads in accelerated computing platforms deployed at cloud-scale. The Scorpio Smart Fabric Switch portfolio is optimized for AI dataflows to deliver maximum predictable performance per watt, high reliability, easy cloud-scale deployment, reduced time-to-market, and lower total cost of ownership.

The Scorpio Smart Fabric Switch portfolio features two application-specific product lines with a multi-generational roadmap:
  • Scorpio P-Series for GPU-to-CPU/NIC/SSD PCIe 6 connectivity - architected to support mixed-traffic head-node connectivity across a diverse ecosystem of PCIe hosts and endpoints.
  • Scorpio X-Series for back-end GPU clustering - architected to deliver the highest back-end GPU-to-GPU bandwidth with platform-specific customization.

Supermicro Introduces New Versatile System Design for AI Delivering Optimization and Flexibility at the Edge

Super Micro Computer, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, announces the launch of a new, versatile, high-density infrastructure platform optimized for AI inferencing at the network edge. As companies seek to embrace complex large language models (LLMs) in their daily operations, there is a need for new hardware capable of running inference on high volumes of data in edge locations with minimal latency. Supermicro's innovative system combines versatility, performance, and thermal efficiency to deliver up to 10 double-width GPUs in a single system capable of running in traditional air-cooled environments.

"Owing to the system's optimized thermal design, Supermicro can deliver all this performance in a high-density 3U 20 PCIe system with 256 cores that can be deployed in edge data centers," said Charles Liang, president and CEO of Supermicro. "As the AI market is growing exponentially, customers need a powerful, versatile solution to inference data to run LLM-based applications on-premises, close to where the data is generated. Our new 3U Edge AI system enables them to run innovative solutions with minimal latency."

Supermicro Currently Shipping Over 100,000 GPUs Per Quarter in its Complete Rack Scale Liquid Cooled Servers

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing a complete liquid cooling solution that includes powerful Coolant Distribution Units (CDUs), cold plates, Coolant Distribution Manifolds (CDMs), cooling towers, and end-to-end management software. This complete solution reduces ongoing power costs and Day 0 hardware acquisition and data center cooling infrastructure costs. The entire end-to-end data center scale liquid cooling solution is available directly from Supermicro.

"Supermicro continues to innovate, delivering full data center plug-and-play rack scale liquid cooling solutions," said Charles Liang, CEO and president of Supermicro. "Our complete liquid cooling solutions, including SuperCloud Composer for the entire life-cycle management of all components, are now cooling massive, state-of-the-art AI factories, reducing costs and improving performance. The combination of Supermicro deployment experience and delivering innovative technology is resulting in data center operators coming to Supermicro to meet their technical and financial goals for both the construction of greenfield sites and the modernization of existing data centers. Since Supermicro supplies all the components, the time to deployment and online are measured in weeks, not months."

Supermicro Adds New Max-Performance Intel-Based X14 Servers

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, today adds new maximum-performance GPU, multi-node, and rackmount systems to the X14 portfolio, based on the Intel Xeon 6900 Series Processors with P-Cores (formerly codenamed Granite Rapids-AP). The new industry-leading selection of workload-optimized servers addresses the needs of modern data centers, enterprises, and service providers. Joining the efficiency-optimized X14 servers leveraging the Xeon 6700 Series Processors with E-cores, launched in June 2024, today's additions bring maximum compute density and power to the Supermicro X14 lineup, creating the industry's broadest range of optimized servers and supporting workloads from demanding AI, HPC, media, and virtualization to energy-efficient edge, scale-out cloud-native, and microservices applications.

"Supermicro X14 systems have been completely re-engineered to support the latest technologies including next-generation CPUs, GPUs, highest bandwidth and lowest latency with MRDIMMs, PCIe 5.0, and EDSFF E1.S and E3.S storage," said Charles Liang, president and CEO of Supermicro. "Not only can we now offer more than 15 families, but we can also use these designs to create customized solutions with complete rack integration services and our in-house developed liquid cooling solutions."

Supermicro Announces FlexTwin Multi-Node Liquid Cooled Servers

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is announcing the all-new FlexTwin family of systems, designed to address the needs of scientists, researchers, governments, and enterprises undertaking the world's most complex and demanding computing tasks. Featuring flexible support for the latest CPU, memory, storage, power, and cooling technologies, FlexTwin is purpose-built to support demanding HPC workloads including financial services, scientific research, and complex modeling. These systems are cost-optimized for performance per dollar and can be customized to suit specific HPC applications and customer requirements thanks to Supermicro's modular Building Block Solutions design.

"Supermicro's FlexTwin servers set a new standard of performance density for rack-scale deployments with up to 96 dual processor compute nodes in a standard 48U rack," said Charles Liang, president and CEO of Supermicro. "At Supermicro, we're able to offer a complete one-stop solution that includes servers, racks, networking, liquid cooling components, and liquid cooling towers, speeding up the time to deployment and resulting in higher quality and reliability across the entire infrastructure, enabling customers faster time to results. Up to 90% of the server generated heat is removed with the liquid cooling solution, saving significant amounts of energy and enabling higher compute performance."

SambaNova Launches Fastest AI Platform Based on Its SN40L Chip

SambaNova Systems, provider of the fastest and most efficient chips and AI models, announced SambaNova Cloud, the world's fastest AI inference service enabled by the speed of its SN40L AI chip. Developers can log on for free via an API today — no waiting list — and create their own generative AI applications using both the largest and most capable model, Llama 3.1 405B, and the lightning-fast Llama 3.1 70B. SambaNova Cloud runs Llama 3.1 70B at 461 tokens per second (t/s) and 405B at 132 t/s at full precision.

"SambaNova Cloud is the fastest API service for developers. We deliver world record speed and in full 16-bit precision - all enabled by the world's fastest AI chip," said Rodrigo Liang, CEO of SambaNova Systems. "SambaNova Cloud is bringing the most accurate open source models to the vast developer community at speeds they have never experienced before."

Intel Announces Deployment of Gaudi 3 Accelerators on IBM Cloud

IBM and Intel announced a global collaboration to deploy Intel Gaudi 3 AI accelerators as a service on IBM Cloud. This offering, which is expected to be available in early 2025, aims to help more cost-effectively scale enterprise AI and drive innovation underpinned with security and resiliency. This collaboration will also enable support for Gaudi 3 within IBM's watsonx AI and data platform. IBM Cloud is the first cloud service provider (CSP) to adopt Gaudi 3, and the offering will be available for both hybrid and on-premises environments.

"Unlocking the full potential of AI requires an open and collaborative ecosystem that provides customers with choice and accessible solutions. By integrating Gaudi 3 AI accelerators and Xeon CPUs with IBM Cloud, we are creating new AI capabilities and meeting the demand for affordable, secure and innovative AI computing solutions," said Justin Hotard, Intel executive vice president and general manager of the Data Center and AI Group.

India Targets 2026 for Its First Domestic AI Chip Development

Ola, an Indian automotive company, is venturing into AI chip development with its artificial intelligence branch, Krutrim, planning to launch India's first domestically designed AI chip by 2026. The company is leveraging ARM architecture for this initiative. CEO Bhavish Aggarwal emphasizes the importance of India developing its own AI technology rather than relying on external sources.

While detailed specifications are limited, Ola claims these chips will offer competitive performance and efficiency. For manufacturing, the company plans to partner with a global tier I or II foundry, possibly TSMC or Samsung. "We are still exploring foundries, we will go with a global tier I or II foundry. Taiwan is a global leader, and so is Korea. I visited Taiwan a couple of months back and the ecosystem is keen on partnering with India," Aggarwal said.

Supermicro Launches Plug-and-Play SuperCluster for NVIDIA Omniverse

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing a new addition to its SuperCluster portfolio of plug-and-play AI infrastructure solutions for the NVIDIA Omniverse platform to deliver high-performance, generative AI-enhanced 3D workflows at enterprise scale. This new SuperCluster features the latest Supermicro NVIDIA OVX systems and allows enterprises to easily scale as workloads increase.

"Supermicro has led the industry in developing GPU-optimized products, traditionally for 3D graphics and application acceleration, and now for AI," said Charles Liang, president and CEO of Supermicro. "With the rise of AI, enterprises are seeking computing infrastructure that combines all these capabilities into a single package. Supermicro's SuperCluster features fully interconnected 4U PCIe GPU NVIDIA-Certified Systems for NVIDIA Omniverse, with up to 256 NVIDIA L40S PCIe GPUs per scalable unit. The system helps deliver high performance across the Omniverse platform, including generative AI integrations. By developing this SuperCluster for Omniverse, we're not just offering a product; we're providing a gateway to the future of application development and innovation."

Microsoft's €20m European Cloud Providers Settlement Draws Mixed Reactions

Microsoft has agreed to pay €20 million to settle an antitrust complaint filed by Cloud Infrastructure Services Providers in Europe (CISPE), a European cloud providers association. The deal aims to address concerns about Microsoft's cloud product licensing practices: Microsoft will develop Azure Stack HCI for European cloud providers and compensate CISPE members for recent licensing costs. In return, CISPE will withdraw its EU complaint, cease supporting similar global complaints, and establish an independent European Cloud Observatory to monitor the product's development.

The settlement excludes major providers like AWS, Google Cloud, and AliCloud. While CISPE hails this as a victory, critics argue it's insufficient. AWS spokesperson Alia Ilyas said that Microsoft was only making "limited concessions for some CISPE members that demonstrate there are no technical barriers preventing it from doing what's right for every cloud customer". Google Cloud suggests more action is needed against anti-competitive behavior, and UK-based cloud company Civo's CEO Mark Boost questions the deal's long-term impact on the industry. Boost stated, "However they position it, we cannot shy away from what this deal appears to be: a global powerful company paying for the silence of a trade body, and avoiding having to make fundamental changes to their software licensing practices on a global basis".

Despite resolving the CISPE complaint, Microsoft faces ongoing regulatory scrutiny worldwide. The UK's Competition and Markets Authority launched a cloud computing market investigation in October 2023, while the US Federal Trade Commission is conducting two separate probes involving Microsoft. The first FTC investigation, initiated in January 2024, examines AI services and partnerships of major tech companies, including Microsoft, Amazon, Alphabet, Anthropic, and OpenAI. The second focuses specifically on Microsoft, OpenAI, and NVIDIA, assessing their impact and behavior in the AI sector.