News Posts matching #Azure


Microsoft Flight Simulator 2024 Nosedives into Long Loading Times and Negative Reviews

Microsoft's latest Flight Simulator 2024 has just launched, and it already appears to be riddled with problems. In our internal testing, we ran into long loading times and, eventually, errors that prevented us from reaching the game at all, and many other players have confirmed similar problems. Launched at 8:00 am PT on November 19, the simulator has faced widespread server infrastructure issues affecting player access. Sebastian Wloch, CEO of Asobo Studio, the game's developer, has released a public video statement addressing the widespread technical issues that plagued the release. According to Wloch, while pre-launch testing had successfully simulated concurrent player counts of 200,000 users, the actual launch revealed critical weaknesses in the database cache system that were not apparent during testing.

Additionally, the negative reviews stemming from these issues have piled up. On Steam, the game currently has 2,865 reviews, only 500 of which are positive; the remaining 2,365 are negative, with many users dissatisfied with the gameplay and the quality of the release. The game's infrastructure runs on Microsoft's Azure cloud, which makes the failure doubly awkward for Microsoft, since the Azure platform is supposed to stand for infrastructure scaling and stability. While these issues should clear up in the long run, the short-term consequences are turning the launch into a colossal failure, as gamers expected more from this release. Lastly, the alpha version of the game was notorious for hogging internet bandwidth, generating loads of up to 180 Mbit/s.

AMD Makes Custom CPUs for Azure: 88 "Zen 4" Cores and HBM3 Memory

Microsoft has announced its new Azure HBv5 virtual machines, featuring unique custom hardware made by AMD. CEO Satya Nadella made the announcement during Microsoft Ignite, introducing a custom-designed AMD processor solution that achieves remarkable performance metrics. The new HBv5 virtual machines deliver an extraordinary 6.9 TB/s of memory bandwidth, utilizing four specialized AMD processors equipped with HBM3 technology. This represents an eightfold improvement over existing cloud alternatives and a staggering 20-fold increase compared to previous Azure HBv3 configurations. Each HBv5 virtual machine boasts impressive specifications, including up to 352 AMD EPYC "Zen 4" CPU cores capable of reaching 4 GHz peak frequencies. The system provides users with 400-450 GB of HBM3 RAM and features double the Infinity Fabric bandwidth of any previous AMD EPYC server platform. Given that each VM has four CPUs, this works out to 88 "Zen 4" cores per CPU socket, with 9 GB of memory per core.
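The headline numbers are easy to cross-check. A quick back-of-the-envelope calculation, using only the figures quoted above, recovers the per-socket breakdown:

```python
# Cross-checking Microsoft's HBv5 figures (all inputs quoted in the article above).
total_cores = 352      # AMD EPYC "Zen 4" cores per HBv5 VM
sockets = 4            # custom AMD CPUs per VM
total_bw_tb_s = 6.9    # aggregate HBM3 memory bandwidth, TB/s

cores_per_socket = total_cores // sockets   # 352 / 4 = 88 cores per CPU
bw_per_socket = total_bw_tb_s / sockets     # ~1.73 TB/s of HBM3 bandwidth per CPU
print(f"{cores_per_socket} cores and ~{bw_per_socket:.2f} TB/s per socket")
```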

The architecture includes 800 Gb/s of NVIDIA Quantum-2 InfiniBand connectivity and 14 TB of local NVMe SSD storage. The development marks a strategic shift in addressing memory performance limitations, which Microsoft identifies as a critical bottleneck in HPC applications. This custom design particularly benefits sectors requiring intensive computational resources, including automotive design, aerospace simulation, weather modeling, and energy research. While the CPU appears custom-designed for Microsoft's needs, it bears similarities to previously rumored AMD processors, suggesting a possible connection to the speculated MI300C chip architecture. The system's design choices, including disabled SMT and a single-tenant configuration, clearly focus on optimizing performance for specific HPC workloads. As readers may recall, Intel also made customized Xeons for AWS; such arrangements are normal in the hyperscaler space, given that hyperscalers drive most of the revenue.

NVIDIA and Microsoft Showcase Blackwell Preview, Omniverse Industrial AI and RTX AI PCs at Microsoft Ignite

NVIDIA and Microsoft today unveiled product integrations designed to advance full-stack NVIDIA AI development on Microsoft platforms and applications. At Microsoft Ignite, Microsoft announced the launch of the first cloud private preview of the Azure ND GB200 v6 VM series, based on the NVIDIA Blackwell platform. The Azure ND GB200 v6 is a new AI-optimized virtual machine (VM) series that combines the NVIDIA GB200 NVL72 rack design with NVIDIA Quantum InfiniBand networking.

In addition, Microsoft revealed that Azure Container Apps now supports NVIDIA GPUs, enabling simplified and scalable AI deployment. Plus, the NVIDIA AI platform on Azure includes new reference workflows for industrial AI and an NVIDIA Omniverse Blueprint for creating immersive, AI-powered visuals. At Ignite, NVIDIA also announced multimodal small language models (SLMs) for RTX AI PCs and workstations, enhancing digital human interactions and virtual assistants with greater realism.

Microsoft Brings Copilot AI Assistant to Windows Terminal

Microsoft has taken another significant step in its AI integration strategy by introducing "Terminal Chat," an AI assistant now available in Windows Terminal. This latest feature brings conversational AI capabilities directly to the command-line interface, marking a notable advancement in making terminal operations more accessible to users of all skill levels. The new feature, currently available in Windows Terminal (Canary), leverages various AI services, including ChatGPT, GitHub Copilot, and Azure OpenAI, to provide interactive assistance for command-line operations. What sets Terminal Chat apart is its context-aware functionality, which automatically recognizes the specific shell environment being used—whether it's PowerShell, Command Prompt, WSL Ubuntu, or Azure Cloud Shell—and tailors its responses accordingly.

Users can interact with Terminal Chat through a dedicated interface within Windows Terminal, where they can ask questions, troubleshoot errors, and request guidance on specific commands. The system provides shell-specific suggestions, automatically adjusting its recommendations based on whether a user is working in Windows PowerShell, Linux, or another environment. For example, when asked about creating a directory, Terminal Chat will suggest "New-Item -ItemType Directory" for PowerShell users while providing "mkdir" as the appropriate command for Linux environments. This intelligent adaptation helps bridge the knowledge gap between different command-line interfaces. Windows Latest has published examples from its testing.
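Microsoft has not published Terminal Chat's internals, so the sketch below is purely illustrative: a hypothetical `suggest` function and lookup table showing how an assistant might map one intent to the idiomatic command for whichever shell is active.

```python
# Hypothetical sketch of shell-aware suggestions in the spirit of Terminal Chat.
# The function and table are illustrative only, not Microsoft's implementation.
COMMANDS = {
    "create_directory": {
        "powershell": "New-Item -ItemType Directory -Name {name}",
        "cmd": "mkdir {name}",
        "bash": "mkdir {name}",  # WSL Ubuntu and other Linux environments
    },
    "list_files": {
        "powershell": "Get-ChildItem",
        "cmd": "dir",
        "bash": "ls -la",
    },
}

def suggest(intent: str, shell: str, **kwargs) -> str:
    """Return the idiomatic command for the detected shell."""
    return COMMANDS[intent][shell].format(**kwargs)

print(suggest("create_directory", "powershell", name="logs"))  # New-Item -ItemType Directory -Name logs
print(suggest("create_directory", "bash", name="logs"))        # mkdir logs
```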

Microsoft Announces its FY25 Q1 Earnings Release

Microsoft Corp. today announced the following results for the quarter ended September 30, 2024, as compared to the corresponding period of last fiscal year:
  • Revenue was $65.6 billion and increased 16%
  • Operating income was $30.6 billion and increased 14%
  • Net income was $24.7 billion and increased 11% (up 10% in constant currency)
  • Diluted earnings per share was $3.30 and increased 10%
"AI-driven transformation is changing work, work artifacts, and workflow across every role, function, and business process," said Satya Nadella, chairman and chief executive officer of Microsoft. "We are expanding our opportunity and winning new customers as we help them apply our AI platforms and tools to drive new growth and operating leverage."

NVIDIA "Blackwell" GB200 Server Dedicates Two-Thirds of Space to Cooling at Microsoft Azure

Late Tuesday, Microsoft Azure shared an interesting picture on the social media platform X, showcasing the pinnacle of GPU-accelerated servers: NVIDIA "Blackwell" GB200-powered AI systems. Microsoft is one of NVIDIA's largest customers, and the company often receives products first to integrate into its cloud and company infrastructure. Even NVIDIA listens to feedback from companies like Microsoft about designing future products, especially those like the now-canceled NVL36x2 system. The picture shows a massive cluster in which compute occupies roughly one-third of the space, while a gigantic two-thirds of the system is dedicated to closed-loop liquid cooling.

The entire system is connected using InfiniBand networking, a standard for GPU-accelerated systems thanks to its low-latency packet transfer. While details of the system are scarce, we can see that the integrated closed-loop liquid cooling allows the GPU racks to use a 1U form factor for increased density. Given that these systems will go into the wider Microsoft Azure data centers, they need to be easy to maintain and cool. There are hard limits on the power and heat output that Microsoft's data centers can handle, so systems like these are often built to internal specifications that Microsoft designs. There are more compute-dense systems, of course, like NVIDIA's NVL72, but hyperscalers usually opt for custom solutions that fit their data center specifications. Finally, Microsoft noted that we can expect more details at the upcoming Microsoft Ignite conference in November, where it will share more about its GB200-powered AI systems.

AMD Introduces the Radeon PRO V710 to Microsoft Azure

AMD today introduced the Radeon PRO V710, the newest member of AMD's family of visual cloud GPUs. Available today in private preview on Microsoft Azure, the Radeon PRO V710 brings new capabilities to the public cloud. The AMD Radeon PRO V710's 54 Compute Units, along with 28 GB of VRAM, 448 GB/s of memory bandwidth, and 54 MB of L3 AMD Infinity Cache technology, support small-to-medium ML inference workloads and small-model training using open-source AMD ROCm software.

With support for hardware virtualization implemented in compliance with the PCI Express SR-IOV standard, instances based on the Radeon PRO V710 can provide robust isolation between multiple virtual machines running on the same physical GPU and between the host and guest environments. The efficient RDNA 3 architecture provides excellent performance per watt, enabling a single slot, passively cooled form factor compliant with the PCIe CEM spec.
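On a Linux host, a device's SR-IOV capability is visible through standard sysfs attributes (`sriov_totalvfs` and `sriov_numvfs` are kernel ABI), so a quick scan shows how many virtual functions a device like the V710 could expose to guests. A minimal sketch, assuming a Linux host; whether any given GPU appears depends on its driver:

```python
# List SR-IOV-capable PCI devices via standard Linux sysfs attributes.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    total = dev / "sriov_totalvfs"          # present only on SR-IOV-capable devices
    if total.exists():
        enabled = (dev / "sriov_numvfs").read_text().strip()
        print(f"{dev.name}: {enabled}/{total.read_text().strip()} virtual functions enabled")
```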

Microsoft Unveils New Details on Maia 100, Its First Custom AI Chip

Microsoft provided a detailed view of the Maia 100, its first custom AI chip, at Hot Chips 2024. The chip anchors a vertically integrated system designed to work together from end to end, with the goal of improving performance and reducing costs. It includes custom server boards, purpose-built racks, and a software stack focused on increasing the efficiency and capability of sophisticated AI services such as Azure OpenAI. Microsoft first introduced Maia at Ignite 2023, revealing that it had created its own AI accelerator chip, and shared more information earlier this year at the Build developer event. The Maia 100 is one of the largest processors made using TSMC's 5 nm technology, designed for handling extensive AI workloads on the Azure platform.

Maia 100 SoC architecture features:
  • A high-speed tensor unit (16xRx16) offers rapid processing for training and inferencing while supporting a wide range of data types, including low-precision data types such as the MX data format, first introduced by Microsoft through the MX Consortium in 2023 (a simplified sketch of this block-scaling idea follows the list below).
  • The vector processor is a loosely coupled superscalar engine built with custom instruction set architecture (ISA) to support a wide range of data types, including FP32 and BF16.
  • A Direct Memory Access (DMA) engine supports different tensor sharding schemes.
  • Hardware semaphores enable asynchronous programming on the Maia system.
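The common thread in the MX formats is block scaling: a group of low-precision elements shares a single scale factor, recovering much of the dynamic range that narrow element types give up. The sketch below is a deliberately simplified illustration of that idea (one power-of-two scale per block of 32 values), not the exact MX specification:

```python
import numpy as np

# Toy illustration of MX-style block scaling: each block of 32 values shares
# one power-of-two scale and stores elements at low precision. This is a
# simplified model of the idea, not the actual MX format definition.
def quantize_block_scaled(x: np.ndarray, block: int = 32, elem_bits: int = 8):
    x = x.reshape(-1, block)
    qmax = 2 ** (elem_bits - 1) - 1
    max_abs = np.abs(x).max(axis=1, keepdims=True)
    # Smallest power-of-two scale that fits the largest element in each block.
    scale = 2.0 ** np.ceil(np.log2(max_abs / qmax + 1e-38))
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

x = np.random.randn(128).astype(np.float32)
q, scale = quantize_block_scaled(x)
error = np.abs((q * scale).ravel() - x).max()
print(f"max reconstruction error: {error:.4f}")
```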

Faulty CrowdStrike Update Hits Windows Machines at Banks and Airlines Around the World

A faulty software update pushed to enterprise computers by cybersecurity firm CrowdStrike has taken millions of machines offline, most of them in commercial or enterprise environments or running as Azure deployments. CrowdStrike provides periodic software and security updates to commercial PCs, enterprise PCs, and cloud instances with a high degree of automation. The latest update reportedly breaks the Windows boot process, causing blue screens of death (BSODs) and, where configured, invoking Windows Recovery. Enterprises tend to lock down the bootloaders of their client machines and disable generic Windows Recovery tools from Microsoft, which leaves businesses around the world with large numbers of machines that each require manual fixing. The so-called "Windows CrowdStrike BSOD deluge" has hit critical businesses such as banks, airlines, supermarket chains, and TV broadcasters. Meanwhile, sysadmins on Reddit are wishing each other a happy weekend.

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt - putting Frontier at the No. 13 spot on the GREEN500.
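Those two figures together imply Frontier's power draw during the HPL run, since power is simply performance divided by efficiency:

```python
# Implied power draw of Frontier during the HPL run, from the figures above.
hpl_eflops = 1.206               # sustained HPL performance, EFlop/s
gflops_per_watt = 52.93          # GREEN500 efficiency rating

power_mw = (hpl_eflops * 1e18) / (gflops_per_watt * 1e9) / 1e6
print(f"~{power_mw:.1f} MW")     # roughly 22.8 MW
```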

NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure

NVIDIA today announced a new wave of networking switches, the X800 series, designed for massive-scale AI. The world's first networking platforms capable of end-to-end 800 Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X800 Ethernet push the boundaries of networking performance for computing and AI workloads. They feature software that further accelerates AI, cloud, data processing, and HPC applications in every type of data center, including those that incorporate the newly released NVIDIA Blackwell architecture-based product lineup.

"NVIDIA Networking is central to the scalability of our AI supercomputing infrastructure," said Gilad Shainer, senior vice president of Networking at NVIDIA. "NVIDIA X800 switches are end-to-end networking platforms that enable us to achieve trillion-parameter-scale generative AI essential for new AI infrastructures."

Cerebras & G42 Break Ground on Condor Galaxy 3 - an 8 exaFLOPs AI Supercomputer

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the Abu Dhabi-based leading technology holding group, today announced the build of Condor Galaxy 3 (CG-3), the third cluster of their constellation of AI supercomputers, the Condor Galaxy. Featuring 64 of Cerebras' newly announced CS-3 systems - all powered by the industry's fastest AI chip, the Wafer-Scale Engine 3 (WSE-3) - Condor Galaxy 3 will deliver 8 exaFLOPs of AI with 58 million AI-optimized cores. The Cerebras and G42 strategic partnership already delivered 8 exaFLOPs of AI supercomputing performance via Condor Galaxy 1 and Condor Galaxy 2, each amongst the largest AI supercomputers in the world. Located in Dallas, Texas, Condor Galaxy 3 brings the current total of the Condor Galaxy network to 16 exaFLOPs.

"With Condor Galaxy 3, we continue to achieve our joint vision of transforming the worldwide inventory of AI compute through the development of the world's largest and fastest AI supercomputers," said Kiril Evtimov, Group CTO of G42. "The existing Condor Galaxy network has trained some of the leading open-source models in the industry, with tens of thousands of downloads. By doubling the capacity to 16exaFLOPs, we look forward to seeing the next wave of innovation Condor Galaxy supercomputers can enable." At the heart of Condor Galaxy 3 are 64 Cerebras CS-3 Systems. Each CS-3 is powered by the new 4 trillion transistor, 900,000 AI core WSE-3. Manufactured at TSMC at the 5-nanometer node, the WSE-3 delivers twice the performance at the same power and for the same price as the previous generation part. Purpose built for training the industry's largest AI models, WSE-3 delivers an astounding 125 petaflops of peak AI performance per chip.

Xbox & Microsoft Schedule GDC 2024 Presentations

As GDC, the world's largest game developer conference, returns to San Francisco, Microsoft and Xbox will be there to engage and empower developers, publishers, and technology partners across the industry. We are committed to supporting game developers on any platform, anywhere in the world, at every stage of development. Our message is simple: Microsoft and Xbox are here to help power your games and empower your teams. From March 18 - 22, the Xbox Lobby Lounge in the Moscone Center South can't be missed—an easy meeting point, and a first step toward learning more about the ID@Xbox publishing program, the Developer Acceleration Program (DAP) for underrepresented creators, Azure cloud gaming services, and anything else developers might need.

GDC features dozens of speakers from across Xbox, Activision, Blizzard, King, and ZeniMax who will demonstrate groundbreaking in-game innovations and share community-building strategies. Microsoft technology teams, with support from partners, will also host talks spotlighting new tools, software, and services that increase developer velocity, grow player engagement, and help creators succeed.

Microsoft Investment in Mistral Attracts Possible Investigation by EU Regulators

Tech giant Microsoft and Paris-based startup Mistral AI, an innovator in open-source AI model development, have announced a new multi-year partnership to accelerate AI innovation and expand access to Mistral's state-of-the-art models. The collaboration will leverage Azure's cutting-edge AI infrastructure to propel Mistral's research and bring its innovations to more customers globally. The partnership focuses on three core areas. First, Microsoft will provide Mistral with Azure AI supercomputing infrastructure to power advanced AI training and inference for Mistral's flagship models like Mistral Large. Second, the companies will collaborate on AI research and development to push the boundaries of AI models. And third, Azure's enterprise capabilities will give Mistral additional opportunities to promote, sell, and distribute its models to Microsoft customers worldwide.

However, an investment in a European startup cannot proceed without the watchful eye of European Union authorities and regulators. According to Bloomberg, an EU spokesperson said on Tuesday that regulators will analyze Microsoft's investment in Mistral after receiving a copy of the agreement between the two parties. While no formal investigation has been opened yet, continued probing of Microsoft's deal and intentions could escalate into a full formal investigation, which could ultimately lead to the termination of Microsoft's plans. For now the scrutiny remains preliminary, but investing in EU startups might become unfeasible for American tech giants if EU regulators continue to examine every investment made in companies based on EU soil.

Microsoft Announces Participation in National AI Research Resource Pilot

We are delighted to announce our support for the National AI Research Resource (NAIRR) pilot, a vital initiative highlighted in the President's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This initiative aligns with our commitment to broaden AI research and spur innovation by providing greater computing resources to AI researchers and engineers in academia and non-profit sectors. We look forward to contributing to the pilot and sharing insights that can help inform the envisioned full-scale NAIRR.

The NAIRR's objective is to democratize access to the computational tools essential for advancing AI in critical areas such as safety, reliability, security, privacy, environmental challenges, infrastructure, health care, and education. Advocating for such a resource has been a longstanding goal of ours, one that promises to equalize the field of AI research and stimulate innovation across diverse sectors. As a commissioner on the National Security Commission on AI (NSCAI), I worked with colleagues on the committee to propose an early conception of the NAIRR, underlining our nation's need for this resource as detailed in the NSCAI Final Report. Concurrently, we enthusiastically supported a university-led initiative pursuing a national computing resource. It's rewarding to see these early ideas and endeavors now materialize into a tangible entity.

AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Today at the "Advancing AI" event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen 8040 Series processors with Ryzen AI.

"AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs," said AMD Chair and CEO Dr. Lisa Su. "We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry's top server providers, and the most innovative AI startups ꟷ who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem."

Dell Technologies Delivers Third Quarter Fiscal 2024 Financial Results

Dell Technologies announces financial results for its fiscal 2024 third quarter. Revenue was $22.3 billion, down 10% year-over-year. The company generated operating income of $1.5 billion and non-GAAP operating income of $2 billion, down 16% and 17% year-over-year, respectively. Diluted earnings per share was $1.36, and non-GAAP diluted earnings per share was $1.88. Cash flow from operations for the third quarter was $2.2 billion, driven by profitability and strong working capital performance. The company has generated $9.9 billion of cash flow from operations throughout the last 12 months.

Dell ended the quarter with remaining performance obligations of $39 billion, recurring revenue of $5.6 billion, up 4% year-over-year, and deferred revenue of $29.1 billion, up 7% year-over-year, primarily due to increases in software and hardware maintenance agreements. The company's cash and investment balance was $9.9 billion.

Ansys Collaborates with TSMC and Microsoft to Accelerate Mechanical Stress Simulation for 3D-IC Reliability in the Cloud

Ansys has collaborated with TSMC and Microsoft to validate a joint solution for analyzing mechanical stresses in multi-die 3D-IC systems manufactured with TSMC's 3DFabric advanced packaging technologies. This collaborative solution gives customers added confidence to address novel multiphysics requirements that improve the functional reliability of advanced designs using TSMC's 3DFabric, a comprehensive family of 3D silicon stacking and advanced packaging technologies.

Ansys Mechanical is the industry-leading finite element analysis software used to simulate mechanical stresses caused by thermal gradients in 3D-ICs. The solution flow has been proven to run efficiently on Microsoft Azure, helping to ensure fast turn-around times with today's very large and complex 2.5D/3D-IC systems.
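The underlying physics can be sketched in one line: fully constrained thermal expansion produces a stress of sigma = E x alpha x delta-T. Real 2.5D/3D-IC stacks mix many materials and geometries, which is exactly why finite element tools are needed, but a rough number with typical silicon properties (values assumed here purely for illustration) conveys the scale:

```python
# Back-of-the-envelope thermal stress for fully constrained expansion:
# sigma = E * alpha * delta_T. Material values are typical for silicon and
# assumed here for illustration; real 3D-IC stacks require full FEA.
E = 130e9          # Young's modulus of silicon, Pa (approximate)
alpha = 2.6e-6     # thermal expansion coefficient of silicon, 1/K (approximate)
delta_T = 50.0     # assumed temperature rise across the stack, K

sigma_mpa = E * alpha * delta_T / 1e6
print(f"~{sigma_mpa:.0f} MPa of thermal stress")  # ~17 MPa
```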

Microsoft Introduces 128-Core Arm CPU for Cloud and Custom AI Accelerator

During its Ignite conference, Microsoft introduced a duo of custom-designed silicon made to accelerate AI and excel in cloud workloads. First of the two is Microsoft's Azure Cobalt 100 CPU, a 128-core design that features the 64-bit Armv9 instruction set, implemented in a cloud-native design set to become part of Microsoft's offerings. While there aren't many details regarding the configuration, the company claims a performance uplift of up to 40% over the current generation of Arm servers running in the Azure cloud. The SoC is based on Arm's Neoverse CSS platform customized for Microsoft, presumably with Arm Neoverse N2 cores.

The next and hottest topic in the server space is AI acceleration, which is needed for running today's large language models. Microsoft hosts OpenAI's ChatGPT, Microsoft's Copilot, and many other AI services. To help them run as fast as possible, the chip formerly known as Project Athena is now the Maia 100 AI accelerator, manufactured on TSMC's 5 nm process. It features 105 billion transistors and supports various MX data formats, even those smaller than 8 bits, for maximum performance. The chip is currently being tested on GPT-3.5 Turbo, and we have yet to see performance figures or comparisons with competing hardware from NVIDIA, like the H100/H200, and AMD, with the MI300X. The Maia 100 has an aggregate bandwidth of 4.8 Terabits per accelerator and uses a custom Ethernet-based networking protocol for scaling. These chips are expected to appear in Microsoft data centers early next year, and we hope to get some performance numbers soon.

NVIDIA Introduces Generative AI Foundry Service on Microsoft Azure for Enterprises and Startups Worldwide

NVIDIA today introduced an AI foundry service to supercharge the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.

The NVIDIA AI foundry service pulls together three elements—a collection of NVIDIA AI Foundation Models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services—that give enterprises an end-to-end solution for creating custom generative AI models. Businesses can then deploy their customized models with NVIDIA AI Enterprise software to power generative AI applications, including intelligent search, summarization and content generation.

NVIDIA Reportedly in Talks to Lease Data Center Space for its own Cloud Service

The recent development of AI models that are more capable than ever has led to a massive demand for hardware infrastructure that powers them. As the dominant player in the industry with its GPU and CPU-GPU solutions, NVIDIA has reportedly discussed leasing data center space to power its own cloud service for these AI applications. Called NVIDIA Cloud DGX, it will reportedly put the company right up against its clients, which are cloud service providers (CSPs) as well. Companies like Microsoft Azure, Amazon AWS, Google Cloud, and Oracle actively acquire NVIDIA GPUs to power their GPU-accelerated cloud instances. According to the report, this has been developing for a few years.

Additionally, it is worth noting that NVIDIA already owns parts of its potential data center infrastructure. This includes NVIDIA DGX and HGX units, which can simply be interconnected in a data center, with cloud provisioning so developers can access NVIDIA's instances. A great benefit that would attract end users is price: NVIDIA could potentially undercut its competitors, as it acquires GPUs at cost while CSPs pay the profit margin NVIDIA imposes on top. This could attract potential customers and leave hyperscalers like Amazon, Microsoft, and Google without a moat in the cloud game. Of course, until this project is official, we should take this information with a grain of salt.

NVIDIA H100 Tensor Core GPU Used on New Azure Virtual Machine Series Now Available

Microsoft Azure users can now turn to the latest NVIDIA accelerated computing technology to train and deploy their generative AI applications. Available today, the Microsoft Azure ND H100 v5 VMs, using NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking, enable scaling of generative AI, high-performance computing (HPC), and other applications with a click from a browser. Available to customers across the U.S., the new instance arrives as developers and researchers use large language models (LLMs) and accelerated computing to uncover new consumer and business use cases.

The NVIDIA H100 GPU delivers supercomputing-class performance through architectural innovations, including fourth-generation Tensor Cores, a new Transformer Engine for accelerating LLMs and the latest NVLink technology that lets GPUs talk to each other at 900 GB/s. The inclusion of NVIDIA Quantum-2 CX7 InfiniBand with 3,200 Gbps cross-node bandwidth ensures seamless performance across the GPUs at massive scale, matching the capabilities of top-performing supercomputers globally.
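The two bandwidth figures describe different tiers of the interconnect hierarchy; converting them to common units makes the relationship explicit (900 GB/s is per GPU within a node, while 3,200 Gbps is the cross-node figure per VM):

```python
# Comparing the interconnect tiers quoted above in common units (GB/s).
nvlink_gb_per_s = 900            # per-GPU NVLink bandwidth within a node
ib_gbit_per_s = 3_200            # cross-node InfiniBand bandwidth per VM

ib_gb_per_s = ib_gbit_per_s / 8  # bits to bytes: 400 GB/s
print(f"NVLink: {nvlink_gb_per_s} GB/s per GPU; InfiniBand: {ib_gb_per_s:.0f} GB/s per VM")
```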

Microsoft Releases FY23 Q4 Earnings, Xbox Hardware Revenue Down 13%

Microsoft Corp. today announced the following results for the quarter ended June 30, 2023, as compared to the corresponding period of last fiscal year:
  • Revenue was $56.2 billion and increased 8% (up 10% in constant currency)
  • Operating income was $24.3 billion and increased 18% (up 21% in constant currency)
  • Net income was $20.1 billion and increased 20% (up 23% in constant currency)
  • Diluted earnings per share was $2.69 and increased 21% (up 23% in constant currency)
"Organizations are asking not only how - but how fast - they can apply this next generation of AI to address the biggest opportunities and challenges they face - safely and responsibly," said Satya Nadella, chairman and chief executive officer of Microsoft. "We remain focused on leading the new AI platform shift, helping customers use the Microsoft Cloud to get the most value out of their digital spend, and driving operating leverage."

AMD Details New EPYC CPUs, Next-Generation AMD Instinct Accelerator, and Networking Portfolio for Cloud and Enterprise

Today, at the "Data Center and AI Technology Premiere," AMD announced the products, strategy, and ecosystem partners that will shape the future of computing, highlighting the next phase of data center innovation. AMD was joined on stage by executives from Amazon Web Services (AWS), Citadel, Hugging Face, Meta, Microsoft Azure, and PyTorch to showcase the technological partnerships with industry leaders bringing the next generation of high-performance CPU and AI accelerator solutions to market.

"Today, we took another significant step forward in our data center strategy as we expanded our 4th Gen EPYC processor family with new leadership solutions for cloud and technical computing workloads and announced new public instances and internal deployments with the largest cloud providers," said AMD Chair and CEO Dr. Lisa Su. "AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD. We are laser focused on accelerating the deployment of AMD AI platforms at scale in the data center, led by the launch of our Instinct MI300 accelerators planned for later this year and the growing ecosystem of enterprise-ready AI software optimized for our hardware."

AMD Expands 4th Gen EPYC CPU Portfolio with Processors for Cloud Native and Technical Computing Workloads

Today, at the "Data Center and AI Technology Premiere," AMD announced the addition of two new workload-optimized processors to the 4th Gen EPYC CPU portfolio. By leveraging the new "Zen 4c" core architecture, the AMD EPYC 97X4 cloud-native-optimized data center CPUs further extend the EPYC 9004 Series of processors to deliver the thread density and scale needed for leadership cloud-native computing. Additionally, AMD announced the 4th Gen AMD EPYC processors with AMD 3D V-Cache technology, ideally suited for the most demanding technical computing workloads.

"In an era of workload optimized compute, our new CPUs is pushing the boundaries of what is possible in the data center, delivering new levels of performance, efficiency, and scalability," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. "We closely align our product roadmap to our customers' unique environments and each offering in the 4th Gen AMD EPYC family of processors is tailored to deliver compelling and leadership performance in general purpose, cloud native or technical computing workloads."