News Posts matching #Microsoft Azure

NVIDIA and Microsoft Showcase Blackwell Preview, Omniverse Industrial AI and RTX AI PCs at Microsoft Ignite

NVIDIA and Microsoft today unveiled product integrations designed to advance full-stack NVIDIA AI development on Microsoft platforms and applications. At Microsoft Ignite, Microsoft announced the launch of the first cloud private preview of the Azure ND GB200 v6 VM series, based on the NVIDIA Blackwell platform. The Azure ND GB200 v6 is a new AI-optimized virtual machine (VM) series that combines the NVIDIA GB200 NVL72 rack design with NVIDIA Quantum InfiniBand networking.

In addition, Microsoft revealed that Azure Container Apps now supports NVIDIA GPUs, enabling simplified and scalable AI deployment. Plus, the NVIDIA AI platform on Azure includes new reference workflows for industrial AI and an NVIDIA Omniverse Blueprint for creating immersive, AI-powered visuals. At Ignite, NVIDIA also announced multimodal small language models (SLMs) for RTX AI PCs and workstations, enhancing digital human interactions and virtual assistants with greater realism.

NVIDIA "Blackwell" GB200 Server Dedicates Two-Thirds of Space to Cooling at Microsoft Azure

Late Tuesday, Microsoft Azure shared an interesting picture on its social media platform X, showcasing the pinnacle of GPU-accelerated servers—NVIDIA "Blackwell" GB200-powered AI systems. Microsoft is one of NVIDIA's largest customers, and the company often receives products first to integrate into its cloud and company infrastructure. NVIDIA even listens to feedback from companies like Microsoft when designing future products, as with the now-canceled NVL36x2 system. The picture below shows a massive cluster in which compute occupies roughly one-third of the rack space, while the remaining two-thirds is dedicated to closed-loop liquid cooling.

The entire system is connected using InfiniBand networking, a standard for GPU-accelerated systems thanks to its low latency in packet transfer. While details of the system are scarce, we can see that the integrated closed-loop liquid cooling allows the GPU racks to use a 1U form factor for increased density. Since these systems will go into the wider Microsoft Azure data centers, they need to be easy to maintain and cool. There are limits to the power and heat output that Microsoft's data centers can handle, so systems of this type usually fit within internal specifications that Microsoft designs. More compute-dense systems exist, of course, like NVIDIA's NVL72, but hyperscalers usually opt for custom solutions that fit their data center specifications. Finally, Microsoft noted that we can expect more details about its GB200-powered AI systems at the upcoming Microsoft Ignite conference in November.

AMD Introduces the Radeon PRO V710 to Microsoft Azure

AMD today introduced the Radeon PRO V710, the newest member of AMD's family of visual cloud GPUs. Available today in private preview on Microsoft Azure, the Radeon PRO V710 brings new capabilities to the public cloud. The AMD Radeon PRO V710's 54 Compute Units, along with 28 GB of VRAM, 448 GB/s of memory bandwidth, and 54 MB of L3 AMD Infinity Cache, support small-to-medium ML inference workloads and small-model training using open-source AMD ROCm software.

With support for hardware virtualization implemented in compliance with the PCI Express SR-IOV standard, instances based on the Radeon PRO V710 can provide robust isolation between multiple virtual machines running on the same physical GPU, and between the host and guest environments. The efficient RDNA 3 architecture provides excellent performance per watt, enabling a single-slot, passively cooled form factor compliant with the PCIe CEM spec.
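
For a sense of what SR-IOV looks like from the host side, here is a minimal, hypothetical Python sketch (not AMD or Azure tooling) that lists the virtual functions a physical GPU exposes on a Linux host. Each virtual function appears as its own PCI device that a hypervisor can pass through to a separate VM, which is the mechanism behind the isolation described above; the PCI address used is a placeholder.

```python
# Minimal sketch: list the SR-IOV virtual functions (VFs) behind a physical
# GPU on a Linux host. Each virtfnN entry in sysfs is a symlink to a VF's
# own PCI device node, which a hypervisor can assign to a separate VM.
from pathlib import Path

def list_virtual_functions(pci_bdf: str) -> list[str]:
    """Return PCI addresses of all VFs behind the physical function at pci_bdf."""
    device = Path("/sys/bus/pci/devices") / pci_bdf
    return sorted(link.resolve().name for link in device.glob("virtfn*"))

if __name__ == "__main__":
    # Hypothetical PCI address; substitute the GPU's actual bus/device/function.
    for vf in list_virtual_functions("0000:03:00.0"):
        print("virtual function:", vf)
```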

AMD Instinct MI300X Accelerators Power Microsoft Azure OpenAI Service Workloads and New Azure ND MI300X V5 VMs

Today at Microsoft Build, AMD (NASDAQ: AMD) showcased its latest end-to-end compute and software capabilities for Microsoft customers and developers. By using AMD solutions such as AMD Instinct MI300X accelerators, ROCm open software, Ryzen AI processors and software, and Alveo MA35D media accelerators, Microsoft is able to provide a powerful suite of tools for AI-based deployments across numerous markets. The new Microsoft Azure ND MI300X virtual machines (VMs) are now generally available, giving customers like Hugging Face access to impressive performance and efficiency for their most demanding AI workloads.

"The AMD Instinct MI300X and ROCm software stack is powering the Azure OpenAI Chat GPT 3.5 and 4 services, which are some of the world's most demanding AI workloads," said Victor Peng, president, AMD. "With the general availability of the new VMs from Azure, AI customers have broader access to MI300X to deliver high-performance and efficient solutions for AI applications."

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA, remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt, putting Frontier at the No. 13 spot on the GREEN500.
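
As a quick sanity check on those figures, the quoted HPL score and efficiency together imply Frontier's power draw during the benchmark run; a few lines of Python reproduce the arithmetic:

```python
# Implied power draw from the quoted TOP500 figures.
hpl_flops = 1.206e18   # HPL score: 1.206 EFlop/s
efficiency = 52.93e9   # 52.93 GFlops per watt
power_megawatts = hpl_flops / efficiency / 1e6
print(f"Implied power draw: {power_megawatts:.1f} MW")  # ~22.8 MW
```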

Microsoft and NVIDIA Announce Major Integrations to Accelerate Generative AI for Enterprises Everywhere

At GTC on Monday, Microsoft Corp. and NVIDIA expanded their longstanding collaboration with powerful new integrations that leverage the latest NVIDIA generative AI and Omniverse technologies across Microsoft Azure, Azure AI services, Microsoft Fabric and Microsoft 365.

"Together with NVIDIA, we are making the promise of AI real, helping to drive new benefits and productivity gains for people and organizations everywhere," said Satya Nadella, Chairman and CEO, Microsoft. "From bringing the GB200 Grace Blackwell processor to Azure, to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability."

"AI is transforming our daily lives - opening up a world of new opportunities," said Jensen Huang, founder and CEO of NVIDIA. "Through our collaboration with Microsoft, we're building a future that unlocks the promise of AI for customers, helping them deliver innovative solutions to the world."

NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure

NVIDIA today announced a new wave of networking switches, the X800 series, designed for massive-scale AI. The world's first networking platforms capable of end-to-end 800 Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X800 Ethernet push the boundaries of networking performance for computing and AI workloads. They feature software that further accelerates AI, cloud, data processing and HPC applications in every type of data center, including those that incorporate the newly released NVIDIA Blackwell architecture-based product lineup.

"NVIDIA Networking is central to the scalability of our AI supercomputing infrastructure," said Gilad Shainer, senior vice president of Networking at NVIDIA. "NVIDIA X800 switches are end-to-end networking platforms that enable us to achieve trillion-parameter-scale generative AI essential for new AI infrastructures."

Microsoft Announces Participation in National AI Research Resource Pilot

We are delighted to announce our support for the National AI Research Resource (NAIRR) pilot, a vital initiative highlighted in the President's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This initiative aligns with our commitment to broaden AI research and spur innovation by providing greater computing resources to AI researchers and engineers in academia and non-profit sectors. We look forward to contributing to the pilot and sharing insights that can help inform the envisioned full-scale NAIRR.

The NAIRR's objective is to democratize access to the computational tools essential for advancing AI in critical areas such as safety, reliability, security, privacy, environmental challenges, infrastructure, health care, and education. Advocating for such a resource has been a longstanding goal of ours, one that promises to equalize the field of AI research and stimulate innovation across diverse sectors. As a commissioner on the National Security Commission on AI (NSCAI), I worked with colleagues on the committee to propose an early conception of the NAIRR, underlining our nation's need for this resource as detailed in the NSCAI Final Report. Concurrently, we enthusiastically supported a university-led initiative pursuing a national computing resource. It's rewarding to see these early ideas and endeavors now materialize into a tangible entity.

Dell Technologies Delivers Third Quarter Fiscal 2024 Financial Results

Dell Technologies announces financial results for its fiscal 2024 third quarter. Revenue was $22.3 billion, down 10% year-over-year. The company generated operating income of $1.5 billion and non-GAAP operating income of $2 billion, down 16% and 17% year-over-year, respectively. Diluted earnings per share was $1.36, and non-GAAP diluted earnings per share was $1.88. Cash flow from operations for the third quarter was $2.2 billion, driven by profitability and strong working capital performance. The company has generated $9.9 billion of cash flow from operations throughout the last 12 months.

Dell ended the quarter with remaining performance obligations of $39 billion, recurring revenue of $5.6 billion, up 4% year-over-year, and deferred revenue of $29.1 billion, up 7% year-over-year, primarily due to increases in software and hardware maintenance agreements. The company's cash and investment balance was $9.9 billion.

Ansys Collaborates with TSMC and Microsoft to Accelerate Mechanical Stress Simulation for 3D-IC Reliability in the Cloud

Ansys has collaborated with TSMC and Microsoft to validate a joint solution for analyzing mechanical stresses in multi-die 3D-IC systems manufactured with TSMC's 3DFabric advanced packaging technologies. This collaborative solution gives customers added confidence to address novel multiphysics requirements that improve the functional reliability of advanced designs using TSMC's 3DFabric, a comprehensive family of 3D silicon stacking and advanced packaging technologies.

Ansys Mechanical is the industry-leading finite element analysis software used to simulate mechanical stresses caused by thermal gradients in 3D-ICs. The solution flow has been proven to run efficiently on Microsoft Azure, helping to ensure fast turnaround times with today's very large and complex 2.5D/3D-IC systems.
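
To illustrate why thermal gradients matter for 3D-IC reliability, the textbook first-order relation for a fully constrained layer is stress = E × α × ΔT. The sketch below uses approximate silicon constants purely for illustration; it is a back-of-envelope estimate, not the finite element analysis that Ansys Mechanical performs:

```python
# First-order illustration of thermally induced stress: a fully constrained
# material heated by delta_T develops stress sigma = E * alpha * delta_T.
# Constants below are approximate values for silicon, used only for scale.
E_SILICON_GPA = 130.0    # Young's modulus of silicon, ~130 GPa (approximate)
ALPHA_SILICON = 2.6e-6   # thermal expansion coefficient, ~2.6 ppm/K

def thermal_stress_mpa(delta_t_kelvin: float) -> float:
    """Stress in MPa for a fully constrained silicon layer heated by delta_t."""
    return E_SILICON_GPA * 1e3 * ALPHA_SILICON * delta_t_kelvin

for dt in (10, 30, 60):
    print(f"delta T = {dt:2d} K -> ~{thermal_stress_mpa(dt):.1f} MPa")
```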

NVIDIA Introduces Generative AI Foundry Service on Microsoft Azure for Enterprises and Startups Worldwide

NVIDIA today introduced an AI foundry service to supercharge the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.

The NVIDIA AI foundry service pulls together three elements—a collection of NVIDIA AI Foundation Models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services—that give enterprises an end-to-end solution for creating custom generative AI models. Businesses can then deploy their customized models with NVIDIA AI Enterprise software to power generative AI applications, including intelligent search, summarization and content generation.

NVIDIA Turbocharges Generative AI Training in MLPerf Benchmarks

NVIDIA's AI platform raised the bar for AI training and high performance computing in the latest MLPerf industry benchmarks. Among many new records and milestones, one in generative AI stands out: NVIDIA Eos - an AI supercomputer powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking - completed a training benchmark based on a GPT-3 model with 175 billion parameters trained on one billion tokens in just 3.9 minutes. That's a nearly 3x gain from 10.9 minutes, the record NVIDIA set when the test was introduced less than six months ago.

The benchmark uses a portion of the full GPT-3 data set behind the popular ChatGPT service; by extrapolation, Eos could now train on the full set in just eight days, 73x faster than a prior state-of-the-art system using 512 A100 GPUs. The acceleration in training time reduces costs, saves energy and speeds time-to-market. It's the heavy lifting that makes large language models widely available, so every business can adopt them with tools like NVIDIA NeMo, a framework for customizing LLMs. In a new generative AI test this round, 1,024 NVIDIA Hopper architecture GPUs completed a training benchmark based on the Stable Diffusion text-to-image model in 2.5 minutes, setting a high bar on this new workload. By adopting these two tests, MLPerf reinforces its leadership as the industry standard for measuring AI performance, since generative AI is the most transformative technology of our time.
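
The headline ratios are easy to verify from the quoted figures; a short Python sketch reproduces the arithmetic:

```python
# Speedup and throughput implied by the quoted MLPerf GPT-3 benchmark results.
prev_minutes, new_minutes = 10.9, 3.9
print(f"Training-time speedup: {prev_minutes / new_minutes:.2f}x")  # ~2.79x, "nearly 3x"

tokens = 1e9                              # benchmark portion: one billion tokens
throughput = tokens / (new_minutes * 60)  # tokens per second across 10,752 H100s
print(f"Aggregate throughput: ~{throughput / 1e6:.1f}M tokens/s")   # ~4.3M tokens/s
```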

NVIDIA Reportedly in Talks to Lease Data Center Space for its own Cloud Service

The recent development of AI models that are more capable than ever has led to a massive demand for hardware infrastructure that powers them. As the dominant player in the industry with its GPU and CPU-GPU solutions, NVIDIA has reportedly discussed leasing data center space to power its own cloud service for these AI applications. Called NVIDIA Cloud DGX, it will reportedly put the company right up against its clients, which are cloud service providers (CSPs) as well. Companies like Microsoft Azure, Amazon AWS, Google Cloud, and Oracle actively acquire NVIDIA GPUs to power their GPU-accelerated cloud instances. According to the report, this has been developing for a few years.

Additionally, it is worth noting that NVIDIA already owns much of the hardware for a potential data center build-out. This includes NVIDIA DGX and HGX units, which can simply be interconnected in a data center and paired with cloud provisioning so developers can access NVIDIA's instances. One major draw for end users would be pricing: NVIDIA could potentially undercut its competitors, since it acquires GPUs at cost while CSPs buy them with NVIDIA's profit margin built in. That could pull customers away and leave hyperscalers like Amazon, Microsoft, and Google without a moat in the cloud game. Of course, until this project is official, we should take this information with a grain of salt.

NVIDIA H100 Tensor Core GPU Used on New Azure Virtual Machine Series Now Available

Microsoft Azure users can now turn to the latest NVIDIA accelerated computing technology to train and deploy their generative AI applications. Available today, the Microsoft Azure ND H100 v5 VMs use NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking, enabling generative AI, high-performance computing (HPC) and other applications to scale with a click from a browser. Available to customers across the U.S., the new instance arrives as developers and researchers are using large language models (LLMs) and accelerated computing to uncover new consumer and business use cases.

The NVIDIA H100 GPU delivers supercomputing-class performance through architectural innovations, including fourth-generation Tensor Cores, a new Transformer Engine for accelerating LLMs and the latest NVLink technology that lets GPUs talk to each other at 900 GB/s. The inclusion of NVIDIA Quantum-2 CX7 InfiniBand with 3,200 Gbps cross-node bandwidth ensures seamless performance across the GPUs at massive scale, matching the capabilities of top-performing supercomputers globally.
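
Mind the units when comparing those two links: NVLink is quoted in gigabytes per second, the InfiniBand fabric in gigabits. A quick conversion puts them on the same scale:

```python
# Put the intra-node and cross-node bandwidth figures in the same units.
nvlink_gbytes = 900        # GB/s between GPUs inside a node (NVLink)
ib_gbits = 3200            # Gb/s cross-node per VM (Quantum-2 CX7 InfiniBand)
ib_gbytes = ib_gbits / 8   # 8 bits per byte -> 400 GB/s
print(f"Cross-node InfiniBand: {ib_gbytes:.0f} GB/s")
print(f"Intra-node NVLink is {nvlink_gbytes / ib_gbytes:.2f}x faster")  # 2.25x
```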

Lenovo Unveils New Data Management Solutions to Enable AI Workloads

Today, Lenovo announced its next wave of data management innovation with new ThinkSystem DG Enterprise Storage Arrays and ThinkSystem DM3010H Enterprise Storage Arrays, designed to make it easier for organizations to enable AI workloads and unlock value from their data. Also announced are two new integrated and engineered ThinkAgile SXM Microsoft Azure Stack solutions, enabling a unified hybrid cloud solution for seamless data management. As businesses scale their operations to address growing data, security and sustainability requirements, the new Lenovo flash solutions give customers an accelerated path to deploying AI workloads efficiently, with added security features from edge to cloud. They enable workload consolidation and mobilize faster insights fortified with ransomware protection.

The news is another significant step forward in Lenovo's data management strategy as the #4 global external storage OEM and the #1 storage provider in Price Bands 1-4, expanding its leadership in the mid-range market. "The data management landscape is increasingly complex, and customers need solutions that offer the simplicity and flexibility of cloud with the performance and security of on-premises data management," said Kamran Amini, Vice President and General Manager of Server & Storage, Lenovo Infrastructure Solutions Group. "As we continue our journey to become the world's largest end-to-end infrastructure solutions provider, we are focused on delivering innovation that enables our customers to manage, leverage and protect their data seamlessly, giving every business the ability to bring AI to the source of the data and transform their business."

NVIDIA Collaborates With Microsoft to Accelerate Enterprise-Ready Generative AI

NVIDIA today announced that it is integrating its NVIDIA AI Enterprise software into Microsoft's Azure Machine Learning to help enterprises accelerate their AI initiatives. The integration will create a secure, enterprise-ready platform that enables Azure customers worldwide to quickly build, deploy and manage customized applications using the more than 100 NVIDIA AI frameworks and tools that come fully supported in NVIDIA AI Enterprise, the software layer of NVIDIA's AI platform.

"With the coming wave of generative AI applications, enterprises are seeking secure accelerated tools and services that drive innovation," said Manuvir Das, vice president of enterprise computing at NVIDIA. "The combination of NVIDIA AI Enterprise software and Azure Machine Learning will help enterprises speed up their AI initiatives with a straight, efficient path from development to production."

Ampere Computing Unveils New AmpereOne Processor Family with 192 Custom Cores

Ampere Computing today announced a new AmpereOne Family of processors with up to 192 single-threaded Ampere cores - the highest core count in the industry. This is the first product from Ampere based on the company's new custom core, built from the ground up and leveraging the company's internal IP. CEO Renée James, who founded Ampere Computing to offer the industry a modern alternative with processors designed specifically for both efficiency and performance in the cloud, said there was a fundamental shift happening that required a new approach.

"Every few decades of compute there has emerged a driving application or use of performance that sets a new bar of what is required of performance," James said. "The current driving uses are AI and connected everything combined with our continued use and desire for streaming media. We cannot continue to use power as a proxy for performance in the data center. At Ampere, we design our products to maximize performance at a sustainable power, so we can continue to drive the future of the industry."

Intel and SAP Embark on Strategic Collaboration to Expand Cloud Capabilities

Intel and SAP SE today announced a strategic collaboration to deliver more powerful and sustainable SAP software landscapes in the cloud. Designed to help customers derive greater scalability, agility and consolidation of existing SAP software landscapes, the collaboration deepens Intel's focus on delivering extremely powerful and secure instances for SAP, powered by 4th Gen Intel Xeon Scalable processors.

Using SAP Application Performance Standard benchmarks, Intel's 4th Gen Xeon processors enable significantly higher performance numbers when compared to previous generations of Xeon processors, and these impressive results will be passed along to SAP customers around the globe. Additionally, Intel enables current virtual machine (VM) sizes up to 24 TB with a goal to ramp up to VM sizes of 32 TB with the RISE with SAP solution.

Microsoft Activision Blizzard Merger Blocked by UK Market Regulator Citing "Cloud Gaming Concerns"

The United Kingdom Competition and Markets Authority (UK-CMA) on Wednesday blocked the proposed $68.7 billion merger of Microsoft and Activision-Blizzard. In its press release announcing its final decision in an investigation into how the merger will affect consumer choice and innovation in the market, the CMA says that the merger would alter the future of cloud gaming and lead to "reduced innovation and less choice for United Kingdom gamers over the years to come." Cloud gaming in this context means games rendered in the cloud and consumed on the edge by gamers; NVIDIA's GeForce NOW is one such service.

Microsoft Azure is one of the big-three cloud computing providers (besides AWS and Google Cloud), and the CMA fears that Microsoft's acquisition of Activision-Blizzard IP (besides its control over the Xbox and Windows PC ecosystems), would "strengthen that advantage giving it the ability to undermine new and innovative competitors." The CMA report continues: "Cloud gaming needs a free, competitive market to drive innovation and choice. That is best achieved by allowing the current competitive dynamics in cloud gaming to continue to do their job." Microsoft and Activision-Blizzard are unsurprisingly unhappy with the verdict.

Microsoft Azure Announces New Scalable Generative AI VMs Featuring NVIDIA H100

Microsoft Azure announced their new ND H100 v5 virtual machine, which packs Intel's Sapphire Rapids Xeon Scalable processors with NVIDIA's Hopper H100 GPUs, as well as NVIDIA's Quantum-2 CX7 interconnect. Inside each physical machine sit eight H100s—presumably the SXM5 variant packing a whopping 132 SMs and 528 fourth-generation Tensor Cores—interconnected by NVLink 4.0, which ties them all together with 3.6 TB/s of bisection bandwidth. Outside each local machine is a network of thousands more H100s connected with 400 Gb/s Quantum-2 CX7 InfiniBand links, which Microsoft says allows 3.2 Tb/s per VM for on-demand scaling to accelerate the largest AI training workloads.
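
The per-VM fabric figure follows directly from the per-GPU links, as a two-line check shows:

```python
# Aggregate InfiniBand bandwidth per ND H100 v5 VM.
gpus_per_vm, link_gbits = 8, 400                             # eight H100s, 400 Gb/s each
print(f"{gpus_per_vm * link_gbits / 1000:.1f} Tb/s per VM")  # 3.2 Tb/s
```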

Generative AI solutions like ChatGPT have accelerated demand for multi-ExaOP cloud services that can handle the large training sets and utilize the latest development tools. Azure's new ND H100 v5 VMs offer that capability to organizations of any size, whether you're a smaller startup or a larger company looking to implement large-scale AI training deployments. While Microsoft is not making any direct claims for performance, NVIDIA has advertised H100 as running up to 30x faster than the preceding Ampere architecture that is currently offered with the ND A100 v4 VMs.

Shipments of AI Servers Will Climb at CAGR of 10.8% from 2022 to 2026

According to TrendForce's latest survey of the server market, many cloud service providers (CSPs) have begun large-scale investments in the kinds of equipment that support artificial intelligence (AI) technologies. This development is in response to the emergence of new applications such as self-driving cars, artificial intelligence of things (AIoT), and edge computing since 2018. TrendForce estimates that in 2022, AI servers that are equipped with general-purpose GPUs (GPGPUs) accounted for almost 1% of annual global server shipments. Moving into 2023, shipments of AI servers are projected to grow by 8% YoY thanks to ChatBot and similar applications generating demand across AI-related fields. Furthermore, shipments of AI servers are forecasted to increase at a CAGR of 10.8% from 2022 to 2026.
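
For context, compounding that growth rate over the forecast window implies roughly one and a half times the 2022 shipment volume by 2026; a tiny Python sketch makes the arithmetic explicit:

```python
# Cumulative growth implied by a 10.8% CAGR from 2022 to 2026.
cagr, years = 0.108, 2026 - 2022
print(f"Shipment growth over {years} years: {(1 + cagr) ** years:.2f}x")  # ~1.51x
```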

NVIDIA to Put DGX Computers in the Cloud, Becomes AI-as-a-Service Provider

NVIDIA has recently reported its Q4 earnings, and the earnings call following the report contains exciting details about the company and its plans to open up new possibilities. NVIDIA's CEO Jensen Huang stated that the company is on track to become an AI-as-a-Service (AIaaS) provider, which technically makes it a cloud service provider (CSP). "Today, I want to share with you the next level of our business model to help put AI within reach of every enterprise customer. We are partnering with major cloud service providers to offer NVIDIA AI cloud services, offered directly by NVIDIA and through our network of go-to-market partners, and hosted within the world's largest clouds," said Mr. Huang, adding that "NVIDIA AI as a service offers enterprises easy access to the world's most advanced AI platform, while remaining close to the storage, networking, security and cloud services offered by the world's most advanced clouds. Customers can engage NVIDIA AI cloud services at the AI supercomputer, acceleration library software, or pretrained AI model layers."

In addition to enrolling other CSPs into the race, NVIDIA is also going to offer DGX machines on demand in the cloud. Using select CSPs, you can get access to an entire DGX and harness the computing power for AI research purposes. Mr. Huang noted "NVIDIA DGX is an AI supercomputer, and the blueprint of AI factories being built around the world. AI supercomputers are hard and time-consuming to build. Today, we are announcing the NVIDIA DGX Cloud, the fastest and easiest way to have your own DGX AI supercomputer, just open your browser. NVIDIA DGX Cloud is already available through Oracle Cloud Infrastructure and Microsoft Azure, Google GCP, and others on the way."

NVIDIA Announces Financial Results for Fourth Quarter and Fiscal 2023

NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 29, 2023, of $6.05 billion, down 21% from a year ago and up 2% from the previous quarter. GAAP earnings per diluted share for the quarter were $0.57, down 52% from a year ago and up 111% from the previous quarter. Non-GAAP earnings per diluted share were $0.88, down 33% from a year ago and up 52% from the previous quarter.

For fiscal 2023, revenue was $26.97 billion, flat from a year ago. GAAP earnings per diluted share were $1.74, down 55% from a year ago. Non-GAAP earnings per diluted share were $3.34, down 25% from a year ago. "AI is at an inflection point, setting up for broad adoption reaching into every industry," said Jensen Huang, founder and CEO of NVIDIA. "From startups to major enterprises, we are seeing accelerated interest in the versatility and capabilities of generative AI. We are set to help customers take advantage of breakthroughs in generative AI and large language models. Our new AI supercomputer, with H100 and its Transformer Engine and Quantum-2 networking fabric, is in full production."

Microsoft Extends ESU Support for Windows Server 2008 and 2008 R2 on Azure

Microsoft's Windows Server 2008 and 2008 R2 customers still represent a large group, and Microsoft has announced an additional year of Extended Security Updates (ESU) with a caveat: it is only available to Microsoft Azure customers. The ESU program will allow Windows Server 2008 and R2 users on Azure cloud to get security updates until January 9, 2024. This is by no means a free program, and Microsoft will bill for it, though it is available internationally. Many customers are forced to join the ESU program for their Windows Server 2008 and R2 systems, as upgrading the OS to the latest version is not always possible without significant downtime or a hardware update.

The following customers are eligible for the fourth year of the ESU program:
  • Windows Server 2008 R2 Service Pack 1 (SP1)
  • Windows Server 2008 Service Pack 2 (SP2)
  • Windows Embedded POSReady 7
  • Windows Embedded Standard 7
  • All Azure virtual machines (VMs) running Windows Server 2008 R2 and Windows Server 2008 operating systems on Azure, Azure Stack, Azure VMware Solutions, or Azure Nutanix Solution.

AMD Pensando Distributed Services Card to Support VMware vSphere 8

AMD announced that the AMD Pensando Distributed Services Card, powered by the industry's most advanced data processing unit (DPU), will be one of the first DPU solutions to support VMware vSphere 8, available from leading server vendors including Dell Technologies, HPE and Lenovo.

As data center applications grow in scale and sophistication, the resulting workloads increase the demand on infrastructure services as well as crucial CPU resources. VMware vSphere 8 aims to reimagine IT infrastructure as a composable architecture with a goal of offloading infrastructure workloads such as networking, storage, and security from the CPU by leveraging the new vSphere Distributed Services Engine, freeing up valuable CPU cycles to be used for business functions and revenue generating applications.