News Posts matching #Cloud


Lenovo launches ThinkShield Firmware Assurance for Deep Protection Above and Below the Operating System

Today, Lenovo announced the introduction of ThinkShield Firmware Assurance as part of its portfolio of enterprise-grade cybersecurity solutions. ThinkShield Firmware Assurance is one of the few computer OEM solutions to enable deep visibility and protection below the operating system (OS), embracing Zero Trust Architecture (ZTA) with component-level visibility to generate more accurate and actionable risk-management insights.

As a security paradigm, ZTA explicitly identifies users and devices to grant appropriate levels of access, so a business can operate with less risk and minimal friction. ZTA is a critical framework for reducing risk as organizations work to complete Zero Trust implementations.

HPE Expands Direct Liquid-Cooled Supercomputing Solutions With Two AI Systems for Service Providers and Large Enterprises

Today, Hewlett Packard Enterprise announces its new high performance computing (HPC) and artificial intelligence (AI) infrastructure portfolio that includes leadership-class HPE Cray Supercomputing EX solutions and two systems optimized for large language model (LLM) training, natural language processing (NLP) and multi-modal model training. The new supercomputing solutions are designed to help global customers fast-track scientific research and invention.

"Service providers and nations investing in sovereign AI initiatives are increasingly turning to high-performance computing as the critical backbone enabling large-scale AI training that accelerates discovery and innovation," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "Our customers turn to us to fast-track their AI system deployment to realize value faster and more efficiently by leveraging our world-leading HPC solutions and decades of experience in delivering, deploying and servicing fully-integrated systems."

TerraMaster Launches Five New BBS Integrated Backup Servers

In an era where data has become a core asset for modern enterprises, TerraMaster, a global leader in data storage and management solutions, has announced the official launch of five high-performance integrated backup servers: T9-500 Pro, T12-500 Pro, U4-500, U8-500 Plus, and U12-500 Plus. This release not only enriches TerraMaster's enterprise-level product line but also provides enterprise users with an integrated, efficient, and secure data backup solution, from hardware to software, by pairing these devices with the company's proprietary BBS Business Backup Suite.

Key Features of the New Integrated Backup Servers
  • T9-500 Pro & T12-500 Pro: As new members of TerraMaster's high-end series, these products feature compact designs and are easy to manage. With powerful processors, large memory capacities, and dual 10GbE network interfaces, they ensure high-efficiency data backup, catering to the large-scale data storage and backup needs of small and medium-sized enterprises.
  • U4-500: Designed for SOHO, small offices, and remote work scenarios, the U4-500 features a compact 4-bay design and convenient network connectivity, making it an ideal data backup solution. Its user-friendly management interface allows for easy deployment and maintenance.
  • U8-500 Plus & U12-500 Plus: These two rackmount 8-bay and 12-bay upgraded models feature fully optimized designs, high-performance processors, and standard dual 10GbE high-speed interfaces. They not only improve data processing speeds but also enhance data security, making them particularly suitable for small and medium-sized enterprises that need to handle large volumes of data backup and recovery.

Microsoft Brings Copilot AI Assistant to Windows Terminal

Microsoft has taken another significant step in its AI integration strategy by introducing "Terminal Chat," an AI assistant now available in Windows Terminal. This latest feature brings conversational AI capabilities directly to the command-line interface, marking a notable advancement in making terminal operations more accessible to users of all skill levels. The new feature, currently available in Windows Terminal (Canary), leverages various AI services, including ChatGPT, GitHub Copilot, and Azure OpenAI, to provide interactive assistance for command-line operations. What sets Terminal Chat apart is its context-aware functionality, which automatically recognizes the specific shell environment being used—whether it's PowerShell, Command Prompt, WSL Ubuntu, or Azure Cloud Shell—and tailors its responses accordingly.

Users can interact with Terminal Chat through a dedicated interface within Windows Terminal, where they can ask questions, troubleshoot errors, and request guidance on specific commands. The system provides shell-specific suggestions, automatically adjusting its recommendations based on whether a user is working in Windows PowerShell, Linux, or other environments. For example, when asked about creating a directory, Terminal Chat will suggest "New-Item -ItemType Directory" for PowerShell users while providing "mkdir" as the appropriate command for Linux environments. This intelligent adaptation helps bridge the knowledge gap between different command-line interfaces. Examples of this behavior were demonstrated in testing by Windows Latest.
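The shell-aware adaptation described above can be sketched as a simple lookup from intent and shell to a command template. This is purely illustrative; the function and mapping below are hypothetical and not Microsoft's actual Terminal Chat implementation:

```python
# Hypothetical sketch of shell-aware command suggestion, not Microsoft's code.
# The same user intent maps to different commands per detected shell.
SUGGESTIONS = {
    "create directory": {
        "powershell": "New-Item -ItemType Directory -Name {name}",
        "cmd": "mkdir {name}",
        "bash": "mkdir {name}",  # WSL Ubuntu and other Linux shells
    },
}

def suggest(intent: str, shell: str, name: str) -> str:
    """Return a command string appropriate for the detected shell."""
    return SUGGESTIONS[intent][shell].format(name=name)

print(suggest("create directory", "powershell", "logs"))
# New-Item -ItemType Directory -Name logs
print(suggest("create directory", "bash", "logs"))
# mkdir logs
```

A real assistant of course generates these responses with an LLM rather than a static table, but the table makes the context-aware dispatch easy to see.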

Microsoft Announces its FY25 Q1 Earnings Release

Microsoft Corp. today announced the following results for the quarter ended September 30, 2024, as compared to the corresponding period of last fiscal year:
  • Revenue was $65.6 billion and increased 16%
  • Operating income was $30.6 billion and increased 14%
  • Net income was $24.7 billion and increased 11% (up 10% in constant currency)
  • Diluted earnings per share was $3.30 and increased 10%
"AI-driven transformation is changing work, work artifacts, and workflow across every role, function, and business process," said Satya Nadella, chairman and chief executive officer of Microsoft. "We are expanding our opportunity and winning new customers as we help them apply our AI platforms and tools to drive new growth and operating leverage."

Ultra Accelerator Link Consortium Plans Year-End Launch of UALink v1.0

The Ultra Accelerator Link (UALink) Consortium, led by board members from AMD, Amazon Web Services (AWS), Astera Labs, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta, and Microsoft, has announced the incorporation of the Consortium and is extending an invitation for membership to the community. The UALink Promoter Group was founded in May 2024 to define a high-speed, low-latency interconnect for scale-up communications between accelerators and switches in AI pods and clusters. "The UALink standard defines high-speed and low-latency communication for scale-up AI systems in data centers."

Google Shows Production NVIDIA "Blackwell" GB200 NVL System for Cloud

Last week, we got a preview of Microsoft's Azure production-ready NVIDIA "Blackwell" GB200 system, showing that only a third of the rack that goes in the data center actually holds the compute elements, with the other two-thirds holding the cooling compartment that carries away the immense heat output of tens of GB200 GPUs. Today, Google is showing off part of its own infrastructure ahead of the Google Cloud App Dev & Infrastructure Summit, taking place digitally on October 30. Shown below are two racks standing side by side, connecting NVIDIA "Blackwell" GB200 NVL cards with the rest of the Google infrastructure. Unlike Microsoft Azure, Google Cloud uses a different data center design in its facilities.

There is one rack with power distribution units, networking switches, and cooling distribution units, all connected to the compute rack, which houses power supplies, GPUs, and CPU servers. Networking equipment is present, and it connects to Google's "global" data center network, Google's own data center fabric. We are not sure which fabric connects these racks; for optimal performance, NVIDIA recommends InfiniBand (from its Mellanox acquisition). However, given that Google's infrastructure is set up differently, there may be Ethernet switches instead. Interestingly, Google's GB200 rack design differs from Azure's, as it uses additional rack space to distribute coolant to its local heat exchangers, i.e., coolers. We are curious to see whether Google releases more information about this infrastructure; the company is known as the infrastructure king for its ability to scale while keeping everything organized.

Supermicro Adds New Petascale JBOF All-Flash Storage Solution Integrating NVIDIA BlueField-3 DPU for AI Data Pipeline Acceleration

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is launching a new optimized storage system for high-performance AI training, inference, and HPC workloads. This JBOF (Just a Bunch of Flash) system utilizes up to four NVIDIA BlueField-3 data processing units (DPUs) in a 2U form factor to run software-defined storage workloads. Each BlueField-3 DPU features 400 Gb Ethernet or InfiniBand networking and hardware acceleration for computationally intensive storage and networking workloads such as encryption, compression, and erasure coding, as well as AI storage expansion. The state-of-the-art, dual-port JBOF architecture enables active-active clustering, ensuring high availability for scale-up mission-critical storage applications as well as scale-out storage such as object storage and parallel file systems.

"Supermicro's new high performance JBOF Storage System is designed using our Building Block approach which enables support for either E3.S or U.2 form-factor SSDs and the latest PCIe Gen 5 connectivity for the SSDs and the DPU networking and storage platform," said Charles Liang, president and CEO of Supermicro. "Supermicro's system design supports 24 or 36 SSDs, enabling up to 1.105 PB of raw capacity using 30.71 TB SSDs. Our balanced network and storage I/O design can saturate the full 400 Gb/s BlueField-3 line rate, realizing more than 250 GB/s bandwidth of the Gen 5 SSDs."
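The quoted raw-capacity figure checks out with quick arithmetic, assuming the decimal units (1 PB = 1000 TB) that SSD vendors use:

```python
# Sanity-check the quoted raw capacity: up to 36 SSDs at 30.71 TB each.
# Supermicro quotes the result, truncated, as 1.105 PB (decimal units).
ssd_tb = 30.71
max_ssds = 36
raw_tb = ssd_tb * max_ssds   # 1105.56 TB, i.e. just over 1.105 PB
print(f"{raw_tb:.2f} TB")    # 1105.56 TB
```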

Jabil Intros New Servers Powered by AMD 5th Gen EPYC and Intel Xeon 6 Processors

Jabil Inc. announced today that it is expanding its server portfolio with the J421E-S and J422-S servers, powered by AMD 5th Generation EPYC and Intel Xeon 6 processors. These servers are purpose-built for scalability in a variety of cloud data center applications, including AI, high-performance computing (HPC), fintech, networking, storage, databases, and security — representing the latest generation of server innovation from Jabil.

Built with customization and innovation in mind, the design-ready J422-S and J421E-S servers will allow engineering teams to meet customers' specific requirements. By fine-tuning Jabil's custom BIOS and BMC firmware, Jabil can create a competitive advantage for customers by developing the server configuration needed for higher performance, data management, and security. The server platforms are now available for sampling and will be in production by the first half of 2025.

Astera Labs Introduces New Portfolio of Fabric Switches Purpose-Built for AI Infrastructure at Cloud-Scale

Astera Labs, Inc., a global leader in semiconductor-based connectivity solutions for AI and cloud infrastructure, today announced a new portfolio of fabric switches, including the industry's first PCIe 6 switch, built from the ground up for demanding AI workloads in accelerated computing platforms deployed at cloud-scale. The Scorpio Smart Fabric Switch portfolio is optimized for AI dataflows to deliver maximum predictable performance per watt, high reliability, easy cloud-scale deployment, reduced time-to-market, and lower total cost of ownership.

The Scorpio Smart Fabric Switch portfolio features two application-specific product lines with a multi-generational roadmap:
  • Scorpio P-Series for GPU-to-CPU/NIC/SSD PCIe 6 connectivity: architected to support mixed-traffic head-node connectivity across a diverse ecosystem of PCIe hosts and endpoints.
  • Scorpio X-Series for back-end GPU clustering: architected to deliver the highest back-end GPU-to-GPU bandwidth with platform-specific customization.

Supermicro Introduces New Versatile System Design for AI Delivering Optimization and Flexibility at the Edge

Super Micro Computer, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, announces the launch of a new, versatile, high-density infrastructure platform optimized for AI inferencing at the network edge. As companies seek to embrace complex large language models (LLM) in their daily operations, there is a need for new hardware capable of inferencing high volumes of data in edge locations with minimal latency. Supermicro's innovative system combines versatility, performance, and thermal efficiency to deliver up to 10 double-width GPUs in a single system capable of running in traditional air-cooled environments.

"Owing to the system's optimized thermal design, Supermicro can deliver all this performance in a high-density 3U 20 PCIe system with 256 cores that can be deployed in edge data centers," said Charles Liang, president and CEO of Supermicro. "As the AI market is growing exponentially, customers need a powerful, versatile solution to inference data to run LLM-based applications on-premises, close to where the data is generated. Our new 3U Edge AI system enables them to run innovative solutions with minimal latency."

Supermicro Currently Shipping Over 100,000 GPUs Per Quarter in its Complete Rack Scale Liquid Cooled Servers

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing a complete liquid cooling solution that includes powerful Coolant Distribution Units (CDUs), cold plates, Coolant Distribution Manifolds (CDMs), cooling towers, and end-to-end management software. This complete solution reduces ongoing power costs as well as Day 0 hardware acquisition and data center cooling infrastructure costs. The entire end-to-end data center scale liquid cooling solution is available directly from Supermicro.

"Supermicro continues to innovate, delivering full data center plug-and-play rack scale liquid cooling solutions," said Charles Liang, CEO and president of Supermicro. "Our complete liquid cooling solutions, including SuperCloud Composer for the entire life-cycle management of all components, are now cooling massive, state-of-the-art AI factories, reducing costs and improving performance. The combination of Supermicro deployment experience and delivering innovative technology is resulting in data center operators coming to Supermicro to meet their technical and financial goals for both the construction of greenfield sites and the modernization of existing data centers. Since Supermicro supplies all the components, the time to deployment and online are measured in weeks, not months."

Supermicro Adds New Max-Performance Intel-Based X14 Servers

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, today adds new maximum-performance GPU, multi-node, and rackmount systems to the X14 portfolio, based on the Intel Xeon 6900 Series Processors with P-cores (formerly codenamed Granite Rapids-AP). The new industry-leading selection of workload-optimized servers addresses the needs of modern data centers, enterprises, and service providers. Joining the efficiency-optimized X14 servers leveraging the Xeon 6700 Series Processors with E-cores, launched in June 2024, today's additions bring maximum compute density and power to the Supermicro X14 lineup. The result is the industry's broadest range of optimized servers, supporting workloads from demanding AI, HPC, media, and virtualization to energy-efficient edge, scale-out cloud-native, and microservices applications.

"Supermicro X14 systems have been completely re-engineered to support the latest technologies including next-generation CPUs, GPUs, highest bandwidth and lowest latency with MRDIMMs, PCIe 5.0, and EDSFF E1.S and E3.S storage," said Charles Liang, president and CEO of Supermicro. "Not only can we now offer more than 15 families, but we can also use these designs to create customized solutions with complete rack integration services and our in-house developed liquid cooling solutions."

Supermicro Announces FlexTwin Multi-Node Liquid Cooled Servers

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is announcing the all-new FlexTwin family of systems, designed to address the needs of scientists, researchers, governments, and enterprises undertaking the world's most complex and demanding computing tasks. Featuring flexible support for the latest CPU, memory, storage, power, and cooling technologies, FlexTwin is purpose-built to support demanding HPC workloads including financial services, scientific research, and complex modeling. These systems are cost-optimized for performance per dollar and can be customized to suit specific HPC applications and customer requirements thanks to Supermicro's modular Building Block Solutions design.

"Supermicro's FlexTwin servers set a new standard of performance density for rack-scale deployments with up to 96 dual processor compute nodes in a standard 48U rack," said Charles Liang, president and CEO of Supermicro. "At Supermicro, we're able to offer a complete one-stop solution that includes servers, racks, networking, liquid cooling components, and liquid cooling towers, speeding up the time to deployment and resulting in higher quality and reliability across the entire infrastructure, enabling customers faster time to results. Up to 90% of the server generated heat is removed with the liquid cooling solution, saving significant amounts of energy and enabling higher compute performance."

SambaNova Launches Fastest AI Platform Based on Its SN40L Chip

SambaNova Systems, provider of the fastest and most efficient chips and AI models, announced SambaNova Cloud, the world's fastest AI inference service enabled by the speed of its SN40L AI chip. Developers can log on for free via an API today — no waiting list — and create their own generative AI applications using both the largest and most capable model, Llama 3.1 405B, and the lightning-fast Llama 3.1 70B. SambaNova Cloud runs Llama 3.1 70B at 461 tokens per second (t/s) and 405B at 132 t/s at full precision.

"SambaNova Cloud is the fastest API service for developers. We deliver world record speed and in full 16-bit precision - all enabled by the world's fastest AI chip," said Rodrigo Liang, CEO of SambaNova Systems. "SambaNova Cloud is bringing the most accurate open source models to the vast developer community at speeds they have never experienced before."

Intel Announces Deployment of Gaudi 3 Accelerators on IBM Cloud

IBM and Intel announced a global collaboration to deploy Intel Gaudi 3 AI accelerators as a service on IBM Cloud. This offering, which is expected to be available in early 2025, aims to help more cost-effectively scale enterprise AI and drive innovation underpinned with security and resiliency. This collaboration will also enable support for Gaudi 3 within IBM's watsonx AI and data platform. IBM Cloud is the first cloud service provider (CSP) to adopt Gaudi 3, and the offering will be available for both hybrid and on-premise environments.

"Unlocking the full potential of AI requires an open and collaborative ecosystem that provides customers with choice and accessible solutions. By integrating Gaudi 3 AI accelerators and Xeon CPUs with IBM Cloud, we are creating new AI capabilities and meeting the demand for affordable, secure and innovative AI computing solutions," said Justin Hotard, Intel executive vice president and general manager of the Data Center and AI Group.

India Targets 2026 for Its First Domestic AI Chip Development

Ola, an Indian automotive company, is venturing into AI chip development with its artificial intelligence branch, Krutrim, planning to launch India's first domestically designed AI chip by 2026. The company is leveraging ARM architecture for this initiative. CEO Bhavish Aggarwal emphasizes the importance of India developing its own AI technology rather than relying on external sources.

While detailed specifications are limited, Ola claims these chips will offer competitive performance and efficiency. For manufacturing, the company plans to partner with a global tier I or II foundry, possibly TSMC or Samsung. "We are still exploring foundries, we will go with a global tier I or II foundry. Taiwan is a global leader, and so is Korea. I visited Taiwan a couple of months back and the ecosystem is keen on partnering with India," Aggarwal said.

Supermicro Launches Plug-and-Play SuperCluster for NVIDIA Omniverse

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing a new addition to its SuperCluster portfolio of plug-and-play AI infrastructure solutions for the NVIDIA Omniverse platform, delivering high-performance, generative AI-enhanced 3D workflows at enterprise scale. This new SuperCluster features the latest Supermicro NVIDIA OVX systems and allows enterprises to easily scale as workloads increase.

"Supermicro has led the industry in developing GPU-optimized products, traditionally for 3D graphics and application acceleration, and now for AI," said Charles Liang, president and CEO of Supermicro. "With the rise of AI, enterprises are seeking computing infrastructure that combines all these capabilities into a single package. Supermicro's SuperCluster features fully interconnected 4U PCIe GPU NVIDIA-Certified Systems for NVIDIA Omniverse, with up to 256 NVIDIA L40S PCIe GPUs per scalable unit. The system helps deliver high performance across the Omniverse platform, including generative AI integrations. By developing this SuperCluster for Omniverse, we're not just offering a product; we're providing a gateway to the future of application development and innovation."

Microsoft's €20m European Cloud Providers Settlement Draws Mixed Reactions

Microsoft has agreed to pay €20 million to settle an antitrust complaint filed by Cloud Infrastructure Services Providers in Europe (CISPE), a European cloud providers association. The deal aims to address concerns about Microsoft's cloud product licensing practices; Microsoft will also develop Azure Stack HCI for European cloud providers and compensate CISPE members for recent licensing costs. In return, CISPE will withdraw its EU complaint, cease supporting similar global complaints, and establish an independent European Cloud Observatory to monitor the product's development.

The settlement excludes major providers like AWS, Google Cloud, and AliCloud. While CISPE hails the deal as a victory, critics argue it is insufficient. AWS spokesperson Alia Ilyas said that Microsoft was only making "limited concessions for some CISPE members that demonstrate there are no technical barriers preventing it from doing what's right for every cloud customer". Google Cloud suggests more action is needed against anti-competitive behavior, and Mark Boost, CEO of UK-based cloud company Civo, questions the deal's long-term impact on the industry. Boost stated, "However they position it, we cannot shy away from what this deal appears to be: a global powerful company paying for the silence of a trade body, and avoiding having to make fundamental changes to their software licensing practices on a global basis".

Despite resolving the CISPE complaint, Microsoft faces ongoing regulatory scrutiny worldwide. The UK's Competition and Markets Authority launched a cloud computing market investigation in October 2023, while the US Federal Trade Commission is conducting two separate probes involving Microsoft. The first FTC investigation, initiated in January 2024, examines the AI services and partnerships of major tech companies, including Microsoft, Amazon, Alphabet, Anthropic, and OpenAI. The second focuses specifically on Microsoft, OpenAI, and Nvidia, assessing their impact and behavior in the AI sector.

Broadcom Unveils Newest Innovations for VMware Cloud Foundation

Broadcom Inc. today unveiled the latest updates to VMware Cloud Foundation (VCF), the company's flagship private cloud platform. The latest advancements in VCF support customers' digital innovation with faster infrastructure modernization, improved developer productivity, and better cyber resiliency and security with low total cost of ownership.

"VMware Cloud Foundation is the industry's first private-cloud platform to offer the combined power of public and private clouds with unmatched operational simplicity and proven total cost of ownership value," said Paul Turner, Vice President of Products, VMware Cloud Foundation Division, Broadcom. "With our latest release, VCF is delivering on key requirements driven by customer input. The new VCF Import functionality will be a game changer in accelerating VCF adoption and improving time to value. We are also delivering a set of new capabilities that helps IT more quickly meet the needs of developers without increasing business risk. This latest release of VCF puts us squarely on the path to delivering on the full promise of VCF for our customers."

AWS Launches 896-Core Instance, Double What Competitors Offer

Liftr Insights, a pioneer in market intelligence driven by unique data, revealed today that it detected AWS's recent launch of an 896-core instance type, surpassing the previous highest core count offered by any cloud provider. This matters to companies looking to improve performance: if they are not using these instances, their competitors might be.

Liftr data show the previous AWS high core-count instance had 448 cores and first appeared in May 2021. Prior to that, the largest instance available in the six largest cloud providers (representing over 75% of the public cloud space) was a 384-core instance first offered by Azure in 2019.

GIGABYTE Rolls Out Cloud-scale Solutions for Intel Xeon 6 Processor

Giga Computing, a subsidiary of GIGABYTE and an industry leader in servers and industrial motherboards, today launched products supporting the Intel Xeon 6 processor (LGA 4710) with Efficient-cores in single- or dual-socket configurations. GIGABYTE's new products span its entire enterprise portfolio, from GPU servers to liquid-cooling servers to motherboards. With Intel's new processor roadmap addressing a broad range of compute needs in the server market, Efficient-cores (E-cores) and Performance-cores (P-cores) each have unique advantages, and both are supported on this platform. E-core products are available today, with P-core products to follow. These solutions have distinct advantages for cloud, networking and media, and data services workloads that value consistent performance, greater energy efficiency, and high rack and core density.

GIGABYTE's enterprise portfolio for the Intel Xeon 6 processor spans GPU servers, direct liquid cooling (DLC) servers, multi-node servers, general-purpose motherboards and servers, and edge servers. With this wide range of servers, data centers can quickly integrate them to achieve dynamic goals, as in GIGA POD, GIGABYTE's rack-scale supercomputing AI infrastructure. The new Xeon-based servers are also highly compatible with cloud-native software stacks, providing portability and consistency for containerized applications in distributed systems.

MiTAC, TYAN Unveil Intel Xeon 6 Servers for AI, HPC, Cloud, and Enterprise at Computex 2024

MiTAC Computing Technology and its server brand TYAN, the leading manufacturer in server platform design worldwide, unveil their new server systems and motherboards optimized for today's AI, HPC, cloud, and enterprise workloads at COMPUTEX 2024, booth #M1120 in Taipei, Taiwan, from June 4 to June 7. Harnessing the power of the latest Intel Xeon 6 processor and 4th and 5th Gen Intel Xeon Scalable processors, these solutions deliver cutting-edge performance.

"For over a decade, MiTAC has worked with Intel at the forefront of server technology innovation, consistently delivering cutting-edge solutions tailored for AI and high-performance computing (HPC). The integration of Intel's latest Xeon 6 processors into our MiTAC and TYAN server platforms transforms computational capabilities, significantly enhancing AI performance, boosting efficiency, and scaling cloud operations. These advancements empower our customers with a competitive edge through superior performance and optimized total cost of ownership," said Rick Hwang, President of MiTAC Computing Technology Corporation.

MSI Unveils New AI and Computing Platforms with 4th Gen AMD EPYC Processors at Computex 2024

MSI, a leading global server provider, will introduce its latest server platforms based on the 4th Gen AMD EPYC processors at Computex 2024, booth #M0806 in Taipei, Taiwan, from June 4-7. These new platforms, designed for growing cloud-native environments, deliver a combination of performance and efficiency for data centers.

"Leveraging the advantages of 4th Gen AMD EPYC processors, MSI's latest server platforms feature scalability and flexibility with new adoption of CXL technology and DC-MHS architecture, helping data centers achieve the most scalable cloud applications while delivering leading performance," said Danny Hsu, General Manager of Enterprise Platform Solutions.

GIGABYTE Joins COMPUTEX to Unveil Energy Efficiency and AI Acceleration Solutions

Giga Computing, a subsidiary of GIGABYTE and an industry leader in AI servers and green computing, today announced its participation in COMPUTEX and unveiling of solutions tackling complex AI workloads at scale, as well as advanced cooling infrastructure that will lead to greater energy efficiency. Additionally, to support innovations in accelerated computing and generative AI, GIGABYTE will have NVIDIA GB200 NVL72 systems available in Q1 2025. Discussions around GIGABYTE products will be held in booth #K0116 in Hall 1 at the Taipei Nangang Exhibition Center. As an NVIDIA-Certified System provider, GIGABYTE servers also support NVIDIA NIM inference microservices, part of the NVIDIA AI Enterprise software platform.

Redefining AI Servers and Future Data Centers
All new and upcoming CPU and accelerated computing technologies are being showcased at the GIGABYTE booth alongside GIGA POD, a rack-scale AI solution by GIGABYTE. The flexibility of GIGA POD is demonstrated with the latest solutions such as the NVIDIA HGX B100, NVIDIA HGX H200, NVIDIA GH200 Grace Hopper Superchip, and other OAM baseboard GPU systems. As a turnkey solution, GIGA POD is designed to support baseboard accelerators at scale with switches, networking, compute nodes, and more, including support for NVIDIA Spectrum-X to deliver powerful networking capabilities for generative AI infrastructures.
Nov 22nd, 2024 00:02 EST
