Apr 11th, 2025 14:47 EDT

News Posts matching #BlueField

NVMe Hard Drives: Seagate's Answer to Growing AI Storage Demands

Seagate is advancing NVMe technology for large-capacity hard drives as the storage needs of AI systems and applications continue to grow. Legacy and current storage setups struggle to handle the huge datasets required for machine learning, which now reach petabyte and even exabyte scale. Today's storage options carry significant drawbacks for AI tasks: SSD-based systems offer the needed speed but are not cost-effective for storing large AI training datasets, while SAS/SATA hard drives are cheaper but rely on proprietary silicon, host bus adapters (HBAs), and controller architectures that struggle with the high-throughput, low-latency demands of AI's data flows.

Seagate's answer is to bring NVMe technology to large-capacity hard drives. This creates a solution that keeps hard drives cost-effective and dense while boosting performance for AI applications. NVMe hard drives eliminate the need for HBAs, protocol bridges, and additional SAS infrastructure, allowing seamless scalability by integrating high-density hard drive storage with high-speed SSD caching in a unified NVMe architecture.
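The cost argument behind this approach can be made concrete with a back-of-the-envelope sketch. The prices below are purely illustrative assumptions (not Seagate or market figures), chosen only to show why a petabyte-scale training corpus pushes operators toward hard-drive media:

```python
# Back-of-the-envelope storage-tier economics for AI training data.
# All prices are illustrative assumptions, not published market figures.

def cost_per_petabyte(price_per_tb_usd: float) -> float:
    """Raw media cost for one petabyte (1,000 TB) of capacity."""
    return price_per_tb_usd * 1000

ssd_price_per_tb = 80.0   # assumed enterprise NVMe SSD price, USD/TB
hdd_price_per_tb = 15.0   # assumed high-capacity hard drive price, USD/TB

print(f"SSD tier: ${cost_per_petabyte(ssd_price_per_tb):,.0f} per PB")
print(f"HDD tier: ${cost_per_petabyte(hdd_price_per_tb):,.0f} per PB")
print(f"HDD raw capacity is ~{ssd_price_per_tb / hdd_price_per_tb:.1f}x cheaper")
```

Even if the assumed prices are off by a factor of two, the gap between the tiers is what motivates keeping bulk training data on dense hard drives behind an SSD cache.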

NVIDIA & Storage Industry Leaders Unveil New Class of Enterprise Infrastructure for the AI Era

At GTC 2025, NVIDIA announced the NVIDIA AI Data Platform, a customizable reference design that leading providers are using to build a new class of AI infrastructure for demanding AI inference workloads: enterprise storage platforms with AI query agents fueled by NVIDIA accelerated computing, networking and software. Using the NVIDIA AI Data Platform, NVIDIA-Certified Storage providers can build infrastructure to speed AI reasoning workloads with specialized AI query agents. These agents help businesses generate insights from data in near real time, using NVIDIA AI Enterprise software—including NVIDIA NIM microservices for the new NVIDIA Llama Nemotron models with reasoning capabilities—as well as the new NVIDIA AI-Q Blueprint.

Storage providers can optimize their infrastructure to power these agents with NVIDIA Blackwell GPUs, NVIDIA BlueField DPUs, NVIDIA Spectrum-X networking and the NVIDIA Dynamo open-source inference library. Leading data platform and storage providers—including DDN, Dell Technologies, Hewlett Packard Enterprise, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data and WEKA—are collaborating with NVIDIA to create customized AI data platforms that can harness enterprise data to reason and respond to complex queries. "Data is the raw material powering industries in the age of AI," said Jensen Huang, founder and CEO of NVIDIA. "With the world's storage leaders, we're building a new class of enterprise infrastructure that companies need to deploy and scale agentic AI across hybrid data centers."

MiTAC Computing Unveils Advanced AI Server Solutions Accelerated by NVIDIA at GTC 2025

MiTAC Computing Technology Corporation, a leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation, will present its latest innovations in AI infrastructure at GTC 2025. At booth #1505, MiTAC Computing will showcase its cutting-edge AI server platforms, fully optimized for NVIDIA MGX architecture, including the G4527G6, which supports the NVIDIA H200 NVL platform and the NVIDIA RTX PRO 6000 Blackwell Server Edition to address the evolving demands of enterprise AI workloads.

Enabling Next-Generation AI with High-Performance Computing
With the increasing adoption of generative AI and accelerated computing, MiTAC Computing introduces its latest NVIDIA MGX-based server solution, the MiTAC G4527G6, designed to support complex AI and high-performance computing (HPC) workloads. Powered by Intel Xeon 6 processors, the G4527G6 accommodates up to eight NVIDIA GPUs, 8 TB of DDR5-6400 memory, sixteen hot-swappable E1.S drives, four NVIDIA ConnectX-7 NICs for high-speed east-west data transfer, and an NVIDIA BlueField-3 DPU for efficient north-south connectivity. The G4527G6 further enhances workload scalability with the NVIDIA NVLink interconnect, ensuring seamless performance for enterprise AI and HPC applications.

Lenovo Announces Hybrid AI Advantage with NVIDIA Blackwell Support

Today, at NVIDIA GTC, Lenovo unveiled new Lenovo Hybrid AI Advantage with NVIDIA solutions designed to accelerate AI adoption and boost business productivity by fast-tracking agentic AI that can reason, plan and take action to reach goals faster. The validated, full-stack AI solutions enable enterprises to quickly build and deploy AI agents for a broad range of high-demand use cases, increasing productivity, agility and trust while accelerating the next wave of AI reasoning for the new era of agentic AI.

New global IDC research commissioned by Lenovo reveals that ROI remains the greatest AI adoption barrier, despite a three-fold spend increase. AI agents are revolutionizing enterprise workflows and lowering barriers to ROI by supporting employees with complex problem-solving, coding, and multistep planning that drives speed, innovation and productivity. As CIOs and business leaders seek tangible return on AI investment, Lenovo is delivering hybrid AI solutions that unleash and customize agentic AI at every scale.

Supermicro Expands Enterprise AI Portfolio With Support for Upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA H200 NVL Platform

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, today announced support for the new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on a range of workload-optimized GPU servers and workstations. Specifically optimized for the NVIDIA Blackwell generation of PCIe GPUs, the broad range of Supermicro servers will enable more enterprises to leverage accelerated computing for LLM inference and fine-tuning, agentic AI, visualization, graphics & rendering, and virtualization. Many Supermicro GPU-optimized systems are NVIDIA Certified, guaranteeing compatibility and support for NVIDIA AI Enterprise to simplify the process of developing and deploying production AI.

"Supermicro leads the industry with its broad portfolio of application optimized GPU servers that can be deployed in a wide range of enterprise environments with very short lead times," said Charles Liang, president and CEO of Supermicro. "Our support for the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU adds yet another dimension of performance and flexibility for customers looking to deploy the latest in accelerated computing capabilities from the data center to the intelligent edge. Supermicro's broad range of PCIe GPU-optimized products also support NVIDIA H200 NVL in 2-way and 4-way NVIDIA NVLink configurations to maximize inference performance for today's state-of-the-art AI models, as well as accelerating HPC workloads."

NVIDIA Announces Blackwell Ultra Platform for Next-Gen AI

NVIDIA today announced the next evolution of the NVIDIA Blackwell AI factory platform, NVIDIA Blackwell Ultra—paving the way for the age of AI reasoning. NVIDIA Blackwell Ultra boosts training and test-time scaling inference—the art of applying more compute during inference to improve accuracy—to enable organizations everywhere to accelerate applications such as AI reasoning, agentic AI and physical AI.

Built on the groundbreaking Blackwell architecture introduced a year ago, Blackwell Ultra includes the NVIDIA GB300 NVL72 rack-scale solution and the NVIDIA HGX B300 NVL16 system. The GB300 NVL72 delivers 1.5x more AI performance than the NVIDIA GB200 NVL72, as well as increases Blackwell's revenue opportunity by 50x for AI factories, compared with those built with NVIDIA Hopper.

MiTAC Computing Showcases Cutting-Edge AI and HPC Servers at Supercomputing Asia 2025

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings Corp. and a global leader in server design and manufacturing, will showcase its latest AI and HPC innovations at Supercomputing Asia 2025, taking place from March 11 at Booth #B10. The event highlights MiTAC's commitment to delivering cutting-edge technology with the introduction of the G4520G6 AI server and the TN85-B8261 HPC server—both engineered to meet the growing demands of artificial intelligence, machine learning, and high-performance computing (HPC) applications.

G4520G6 AI Server: Performance, Scalability, and Efficiency Redefined
The G4520G6 AI server redefines computing performance with an advanced architecture tailored for intensive workloads. Key features include:
  • Exceptional Compute Power - Supports dual Intel Xeon 6 processors with TDP up to 350 W, delivering high-performance multicore processing for AI-driven applications.
  • Enhanced Memory Performance - Equipped with 32 DDR5 DIMM slots (16 per CPU) and 8 memory channels per CPU, supporting up to 8,192 GB of DDR5 RDIMM/3DS RDIMM at 6400 MT/s for superior memory bandwidth.
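As a sanity check, the capacity and bandwidth implied by those memory figures line up. A rough sketch, assuming 8 channels per CPU (16 total) and 256 GB modules to reach the stated maximum; DDR5 moves 8 bytes per channel per transfer:

```python
# Sanity-check the G4520G6 memory figures quoted above.
# Assumptions: 8 DDR5 channels per CPU (16 across both sockets) and
# 256 GB RDIMMs to reach the stated maximum capacity.

dimm_slots = 32                 # 16 per CPU, 2 CPUs
dimm_size_gb = 256              # assumed module size
max_capacity_gb = dimm_slots * dimm_size_gb
print(max_capacity_gb)          # 8192, matching the quoted 8,192 GB

# Peak bandwidth: DDR5 transfers 8 bytes per channel per transfer.
per_channel_gb_s = 6400 * 8 / 1000   # 51.2 GB/s per channel at 6400 MT/s
channels = 8 * 2                     # per-CPU channels times two sockets
print(per_channel_gb_s)              # 51.2
print(per_channel_gb_s * channels)   # 819.2 GB/s aggregate peak
```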

NVIDIA's Leading Partners Adopt Cybersecurity AI to Safeguard Critical Infrastructure

The rapid evolution of generative AI has created countless opportunities for innovation across industry and research. As is often the case with state-of-the-art technology, this evolution has also shifted the landscape of cybersecurity threats, creating new security requirements. Critical infrastructure cybersecurity is advancing to thwart the next wave of emerging threats in the AI era. Leading operational technology (OT) providers today showcased at the S4 conference for industrial control systems (ICS) and OT cybersecurity how they're adopting the NVIDIA cybersecurity AI platform to deliver real-time threat detection and critical infrastructure protection.

Armis, Check Point, CrowdStrike, Deloitte and World Wide Technology (WWT) are integrating the platform to help customers bolster critical infrastructure, such as energy, utilities and manufacturing facilities, against cyber threats. Critical infrastructure operates in highly complex environments, where the convergence of IT and OT, often accelerated by digital transformation, creates a perfect storm of vulnerabilities. Traditional cybersecurity measures are no longer sufficient to address these emerging threats. By harnessing NVIDIA's cybersecurity AI platform, these partners can provide exceptional visibility into critical infrastructure environments, achieving robust and adaptive security while delivering operational continuity.

Aetina Debuts at SC24 With NVIDIA MGX Server for Enterprise Edge AI

Aetina, a subsidiary of the Innodisk Group and an expert in edge AI solutions, is pleased to announce its debut at Supercomputing (SC24) in Atlanta, Georgia, showcasing the innovative SuperEdge NVIDIA MGX short-depth edge AI server, AEX-2UA1. By integrating an enterprise-class on-premises large language model (LLM) with the advanced retrieval-augmented generation (RAG) technique, the Aetina NVIDIA MGX short-depth server demonstrates exceptional enterprise edge AI performance, setting a new benchmark in edge AI innovation. The server is powered by the latest Intel Xeon 6 processor and dual high-end double-width NVIDIA GPUs, delivering ultimate AI computing power in a compact 2U form factor, accelerating generative AI at the edge.

The SuperEdge NVIDIA MGX server expands Aetina's product portfolio from specialized edge devices to comprehensive AI server solutions, propelling a key milestone in Innodisk Group's AI roadmap, from sensors and storage to AI software, computing platforms, and now AI edge servers.

Supermicro Delivers Direct-Liquid-Optimized NVIDIA Blackwell Solutions

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing the highest-performing SuperCluster, an end-to-end AI data center solution featuring the NVIDIA Blackwell platform for the era of trillion-parameter-scale generative AI. The new SuperCluster will significantly increase the number of NVIDIA HGX B200 8-GPU systems in a liquid-cooled rack, resulting in a large increase in GPU compute density compared to Supermicro's current industry-leading liquid-cooled NVIDIA HGX H100 and H200-based SuperClusters. In addition, Supermicro is enhancing the portfolio of its NVIDIA Hopper systems to address the rapid adoption of accelerated computing for HPC applications and mainstream enterprise AI.

"Supermicro has the expertise, delivery speed, and capacity to deploy the largest liquid-cooled AI data center projects in the world, containing 100,000 GPUs, which Supermicro and NVIDIA contributed to and recently deployed," said Charles Liang, president and CEO of Supermicro. "These Supermicro SuperClusters reduce power needs due to DLC efficiencies. We now have solutions that use the NVIDIA Blackwell platform. Using our Building Block approach allows us to quickly design servers with NVIDIA HGX B200 8-GPU, which can be either liquid-cooled or air-cooled. Our SuperClusters provide unprecedented density, performance, and efficiency, and pave the way toward even more dense AI computing solutions in the future. The Supermicro clusters use direct liquid cooling, resulting in higher performance, lower power consumption for the entire data center, and reduced operational expenses."

ASUS Presents Next-Gen Infrastructure Solutions With Advanced Cooling Portfolio at SC24

ASUS today announced its next-generation infrastructure solutions at SC24, unveiling an extensive server lineup and advanced cooling solutions, all designed to propel the future of AI. The product showcase will reveal how ASUS is working with NVIDIA and Ubitus/Ubilink to prove the immense computational power of supercomputers, using AI-powered avatar and robot demonstrations that leverage the newly-inaugurated data center. It is Taiwan's largest supercomputing facility, constructed by ASUS, and is also notable for offering flexible green-energy options to customers that desire them. As a total solution provider with a proven track record in pioneering AI supercomputing, ASUS continuously drives maximized value for customers.

To fuel digital transformation in enterprise through high-performance computing (HPC) and AI-driven architecture, ASUS provides a full line-up of server systems—ready for every scenario. ASUS AI POD, a complete rack solution equipped with NVIDIA GB200 NVL72 platform, integrates GPUs, CPUs and switches in seamless, high-speed direct communication, enhancing the training of trillion-parameter LLMs and enabling real-time inference. It features the NVIDIA GB200 Grace Blackwell Superchip and fifth-generation NVIDIA NVLink technology, while offering both liquid-to-air and liquid-to-liquid cooling options to maximize AI computing performance.

NVIDIA Ethernet Networking Accelerates World's Largest AI Supercomputer, Built by xAI

NVIDIA today announced that xAI's Colossus supercomputer cluster in Memphis, Tennessee, comprising 100,000 NVIDIA Hopper GPUs, achieved this massive scale by using the NVIDIA Spectrum-X Ethernet networking platform for its Remote Direct Memory Access (RDMA) network. Spectrum-X is designed to deliver superior performance to multi-tenant, hyperscale AI factories over standards-based Ethernet.

Colossus, the world's largest AI supercomputer, is being used to train xAI's Grok family of large language models, with chatbots offered as a feature for X Premium subscribers. xAI is in the process of doubling the size of Colossus to a combined total of 200,000 NVIDIA Hopper GPUs.

Supermicro Adds New Petascale JBOF All-Flash Storage Solution Integrating NVIDIA BlueField-3 DPU for AI Data Pipeline Acceleration

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is launching a new optimized storage system for high-performance AI training, inference, and HPC workloads. This JBOF (Just a Bunch of Flash) system utilizes up to four NVIDIA BlueField-3 data processing units (DPUs) in a 2U form factor to run software-defined storage workloads. Each BlueField-3 DPU features 400 Gb Ethernet or InfiniBand networking and hardware acceleration for compute-intensive storage and networking workloads such as encryption, compression, and erasure coding, as well as AI storage expansion. The state-of-the-art, dual-port JBOF architecture enables active-active clustering, ensuring high availability for scale-up mission-critical storage applications as well as scale-out storage such as object storage and parallel file systems.

"Supermicro's new high-performance JBOF storage system is designed using our Building Block approach, which enables support for either E3.S or U.2 form-factor SSDs and the latest PCIe Gen 5 connectivity for the SSDs and the DPU networking and storage platform," said Charles Liang, president and CEO of Supermicro. "Supermicro's system design supports 24 or 36 SSDs, enabling up to 1.105 PB of raw capacity using 30.71 TB SSDs. Our balanced network and storage I/O design can saturate the full 400 Gb/s BlueField-3 line rate, realizing more than 250 GB/s of bandwidth from the Gen 5 SSDs."
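The raw-capacity figure in the quote checks out arithmetically. A quick sketch, using the drive counts and 30.71 TB capacity point from the quote and the usual decimal storage convention (1 PB = 1000 TB):

```python
# Verify the raw-capacity math in the Supermicro quote above.

ssd_tb = 30.71            # per-drive capacity from the quote
for drives in (24, 36):
    raw_tb = drives * ssd_tb
    print(f"{drives} SSDs -> {raw_tb:.2f} TB ({raw_tb / 1000:.3f} PB)")
# The 36-drive configuration lands at ~1.106 PB, in line with the quoted 1.105 PB.

# A single 400 Gb/s BlueField-3 link, converted to bytes:
print(400 / 8)            # 50.0 GB/s per link
```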

ASRock Rack Unveils GPU Servers, Offers AI GPU Choices from All Three Brands

ASRock Rack sells the entire stack of servers a data-center could possibly want, and at Computex 2024, the company showed us their servers meant for AI GPUs. The 6U8M-GENOA2, as its name suggests, is a 6U server based on 2P AMD EPYC 9004 series "Genoa" processors in the SP5 package. You can configure it with even the variants of "Genoa" that come with 3D V-cache, for superior compute performance from the large cache. Each of the two SP5 sockets is wired to 12 DDR5 RDIMM slots, for a total of 24 memory channels. The server supports eight AMD Instinct MI300X or MI325X AI GPUs, which it wires out using Infinity Fabric links and PCIe Gen 5 x16 individually. A 3 kW 80 Plus Titanium PSU keeps the server fed. There are vacant Gen 5 x16 slots left even after connecting the GPUs, so you could give it a DPU-based 40 GbE NIC.

The 6U8X-EGS2 B100 is a 6U AI GPU server modeled along the lines of the 6U8M-GENOA2, with a couple of big changes. To begin with, the EPYC "Genoa" chips make way for a 2P Intel Socket E (LGA4677) setup with two 5th Gen Xeon Scalable "Emerald Rapids" processors. Each socket is wired to 16 DDR5 DIMM slots (the processor itself has 8-channel DDR5, but this is a 2 DIMM-per-channel setup). The server integrates an NVIDIA NVSwitch that wires out NVLinks to eight NVIDIA B100 "Blackwell" AI GPUs. The server features eight HHHL and five FHHL PCIe Gen 5 x16 connectors. There are vacant x16 slots for your DPU/NIC; you can even use an AIC NVIDIA BlueField card. The same 3 kW PSU as the "Genoa" system is also featured here.

NVIDIA Supercharges Ethernet Networking for Generative AI

NVIDIA today announced widespread adoption of the NVIDIA Spectrum-X Ethernet networking platform as well as an accelerated product release schedule. CoreWeave, GMO Internet Group, Lambda, Scaleway, STPX Global and Yotta are among the first AI cloud service providers embracing NVIDIA Spectrum-X to bring extreme networking performance to their AI infrastructures. Additionally, several NVIDIA partners have announced Spectrum-based products, including ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Wistron and Wiwynn, which join Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro in incorporating the platform into their offerings.

"Rapid advancements in groundbreaking technologies like generative AI underscore the necessity for every business to prioritize networking innovation to gain a competitive edge," said Gilad Shainer, senior vice president of networking at NVIDIA. "NVIDIA Spectrum-X revolutionizes Ethernet networking to let businesses fully harness the power of their AI infrastructures to transform their operations and their industries."

ASRock Rack Unveils GPU Servers Supporting NVIDIA Blackwell GB200

ASRock Rack Inc., a leading innovative server company, is announcing its 6U8X-EGS2 series at booth 1617 during the NVIDIA GTC global AI conference in San Jose, USA. The 6U8X-EGS2 NVIDIA H100 and 6U8X-EGS2 NVIDIA H200 are ASRock Rack's most powerful AI training systems, capable of accommodating NVIDIA HGX H200 8-GPUs. The 6U rack mounts are able to provide airflow for the highest CPU and GPU performance. In addition to the eight-way configuration, the 6U8X-EGS2 series offers 12 PCIe Gen 5 NVMe drive bays and multiple PCIe 5.0 x16 slots, as well as a 4+4 PSU configuration for full redundancy.

ASRock Rack is also developing servers that support the new NVIDIA HGX B200 8-GPU to handle the most demanding generative AI applications, accelerate large language models, and cater to data analytics and high-performance computing workloads. "At GTC, NVIDIA announced its new NVIDIA Blackwell platform, and we are glad to contribute to the new era of computing by providing a wide range of server hardware products that will support it," said Hunter Chen, Vice President at ASRock Rack. "Our products provide organizations with the foundation to transform their businesses and leverage the advancements of accelerated computing."

NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure

NVIDIA today announced a new wave of networking switches, the X800 series, designed for massive-scale AI. The world's first networking platforms capable of end-to-end 800 Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X800 Ethernet push the boundaries of networking performance for computing and AI workloads. They feature software that further accelerates AI, cloud, data processing and HPC applications in every type of data center, including those that incorporate the newly released NVIDIA Blackwell architecture-based product lineup.

"NVIDIA Networking is central to the scalability of our AI supercomputing infrastructure," said Gilad Shainer, senior vice president of Networking at NVIDIA. "NVIDIA X800 switches are end-to-end networking platforms that enable us to achieve trillion-parameter-scale generative AI essential for new AI infrastructures."

NVIDIA Grace Hopper Systems Gather at GTC

The spirit of software pioneer Grace Hopper will live on at NVIDIA GTC. Accelerated systems using powerful processors named in her honor will be on display at the global AI conference running March 18-21, ready to take computing to the next level. System makers will show more than 500 servers in multiple configurations across 18 racks, all packing NVIDIA GH200 Grace Hopper Superchips. They'll form the largest display at NVIDIA's booth in the San Jose Convention Center, filling the MGX Pavilion.

MGX Speeds Time to Market
NVIDIA MGX is a blueprint for building accelerated servers with any combination of GPUs, CPUs and data processing units (DPUs) for a wide range of AI, high performance computing and NVIDIA Omniverse applications. It's a modular reference architecture for use across multiple product generations and workloads. GTC attendees can get an up-close look at MGX models tailored for enterprise, cloud and telco-edge uses, such as generative AI inference, recommenders and data analytics. The pavilion will showcase accelerated systems packing single and dual GH200 Superchips in 1U and 2U chassis, linked via NVIDIA BlueField-3 DPUs and NVIDIA Quantum-2 400 Gb/s InfiniBand networks over LinkX cables and transceivers. The systems support industry standards for 19- and 21-inch rack enclosures, and many provide E1.S bays for nonvolatile storage.

Supermicro Accelerates Performance of 5G and Telco Cloud Workloads with New and Expanded Portfolio of Infrastructure Solutions

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, delivers an expanded portfolio of purpose-built infrastructure solutions to accelerate performance and increase efficiency in 5G and telecom workloads. With one of the industry's most diverse offerings, Supermicro enables customers to expand public and private 5G infrastructures with improved performance per watt and support for new and innovative AI applications. As a long-term advocate of open networking platforms and a member of the O-RAN Alliance, Supermicro's portfolio incorporates systems featuring 5th Gen Intel Xeon processors, AMD EPYC 8004 Series processors, and the NVIDIA Grace Hopper Superchip.

"Supermicro is expanding our broad portfolio of sustainable and state-of-the-art servers to address the demanding requirements of 5G and telco markets and Edge AI," said Charles Liang, president and CEO of Supermicro. "Our products are not just about technology, they are about delivering tangible customer benefits. We quickly bring data center AI capabilities to the network's edge using our Building Block architecture. Our products enable operators to offer new capabilities to their customers with improved performance and lower energy consumption. Our edge servers contain up to 2 TB of high-speed DDR5 memory, 6 PCIe slots, and a range of networking options. These systems are designed for increased power efficiency and performance-per-watt, enabling operators to create high-performance, customized solutions for their unique requirements. This reassures our customers that they are investing in reliable and efficient solutions."

Supermicro Starts Shipments of NVIDIA GH200 Grace Hopper Superchip-Based Servers

Supermicro, Inc., a Total IT Solution manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing ultimate flexibility and expansion ability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures Plug-and-Play compatibility.

"Supermicro is a recognized leader in driving today's AI revolution, transforming data centers to deliver the promise of AI to many workloads," said Charles Liang, president and CEO of Supermicro. "It is crucial for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block strategy enables us to bring the latest systems to market quickly and are the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises to develop new AI-enabled applications, simplifying deployment and reducing environmental impact. The range of new servers incorporates the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, BlueField, and PCIe 5.0 EDSFF slots."

NVIDIA AI-Ready Servers From World's Leading System Manufacturers to Supercharge Generative AI for Enterprises

NVIDIA today announced the world's leading system manufacturers will deliver AI-ready servers that support VMware Private AI Foundation with NVIDIA, announced separately today, to help companies customize and deploy generative AI applications using their proprietary business data. NVIDIA AI-ready servers will include NVIDIA L40S GPUs, NVIDIA BlueField-3 DPUs and NVIDIA AI Enterprise software to enable enterprises to fine-tune generative AI foundation models and deploy generative AI applications like intelligent chatbots, search and summarization tools. These servers also provide NVIDIA-accelerated infrastructure and software to power VMware Private AI Foundation with NVIDIA.

NVIDIA L40S-powered servers from leading global system manufacturers - Dell Technologies, Hewlett Packard Enterprise and Lenovo - will be available by year-end to accelerate enterprise AI. "A new computing era has begun," said Jensen Huang, founder and CEO of NVIDIA. "Companies in every industry are racing to adopt generative AI. With our ecosystem of world-leading software and system partners, we are bringing generative AI to the world's enterprises."

NVIDIA Ada Lovelace Successor Set for 2025

According to the NVIDIA roadmap that was spotted in the recently published MLCommons training results, the Ada Lovelace successor is set to come in 2025. The roadmap also reveals the schedule for Hopper Next and Grace Next GPUs, as well as the BlueField-4 DPU.

While the roadmap does not provide many details, it does give a general idea of when to expect NVIDIA's next GeForce architecture. Since NVIDIA usually launches a new GeForce architecture every two years or so, the latest schedule might represent a small delay, at least if NVIDIA plans to launch the Ada Lovelace successor in early 2025 rather than later. NVIDIA Pascal launched in May 2016, Turing in September 2018, Ampere in May 2020, and Ada Lovelace in October 2022.
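The two-year cadence claim is easy to check against the dates listed above. A quick sketch (the day of month is a placeholder, since only months are cited):

```python
from datetime import date

# Launch dates cited in the paragraph above (day set to 1; only months matter).
launches = [
    ("Pascal", date(2016, 5, 1)),
    ("Turing", date(2018, 9, 1)),
    ("Ampere", date(2020, 5, 1)),
    ("Ada Lovelace", date(2022, 10, 1)),
]

gaps = []
for (prev_name, prev), (next_name, nxt) in zip(launches, launches[1:]):
    months = (nxt.year - prev.year) * 12 + (nxt.month - prev.month)
    gaps.append(months)
    print(f"{prev_name} -> {next_name}: {months} months")

avg_months = sum(gaps) / len(gaps)
print(f"average gap: {avg_months:.1f} months")  # ~25.7, a bit over two years
# Ada (Oct 2022) plus ~26 months lands around the end of 2024, so a 2025
# successor would be only a modest stretch of the historical cadence.
```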

BBT.live Software-defined Connectivity to Accelerate Secure Access Service Edge Transformation with NVIDIA BlueField DPU Platforms

BBT.live, the Tel Aviv-based startup that has developed an all-in-one, tech-agnostic, software-defined connectivity solution, has announced a new technology innovation powered by NVIDIA. As a result, BBT.live, the software-defined connectivity platform, will run on NVIDIA BlueField data processing units (DPUs) to unlock the benefits of cloud-based connectivity solutions to businesses at every scale.

Modern workloads have an ever-growing need for network efficiency, privacy, and security. Businesses and enterprises that depend on such connectivity solutions typically require additional hardware and integration work, which introduces complexity and points of failure. BBT.live's proprietary technology, recognized by the Israel Innovation Authority, is device-agnostic: it integrates with a variety of hardware platforms (uCPE) without the need for time-consuming customization.

NVIDIA & Dell Deliver New Data Center Solution for Zero-Trust Security and the Era of AI

NVIDIA today announced a new data center solution with Dell Technologies designed for the era of AI, bringing state-of-the-art AI training, AI inference, data processing, data science and zero-trust security capabilities to enterprises worldwide. The solution combines Dell PowerEdge servers with NVIDIA BlueField DPUs, NVIDIA GPUs and NVIDIA AI Enterprise software, and is optimized for VMware vSphere 8 enterprise workload platform, also announced today.

"AI and zero-trust security are powerful forces driving the world's enterprises to rearchitect their data centers as computing and networking workloads are skyrocketing," said Manuvir Das, head of Enterprise Computing at NVIDIA. "VMware vSphere 8 offloads, accelerates, isolates and better secures data center infrastructure services onto the NVIDIA BlueField DPU, and frees the computing resources to process the intelligence factories of the world's enterprises."

NVIDIA Quantum-2 Takes Supercomputing to New Heights, Into the Cloud

NVIDIA today announced NVIDIA Quantum-2, the next generation of its InfiniBand networking platform, which offers the extreme performance, broad accessibility and strong security needed by cloud computing providers and supercomputing centers.

The most advanced end-to-end networking platform ever built, NVIDIA Quantum-2 is a 400 Gbps InfiniBand networking platform that consists of the NVIDIA Quantum-2 switch, the ConnectX-7 network adapter, the BlueField-3 data processing unit (DPU) and all the software that supports the new architecture.