News Posts matching #AI


NVIDIA Releases GeForce Studio 565.90 WHQL Graphics Drivers

NVIDIA today released the latest version of its GeForce Studio drivers, which are optimized for creators and creative professionals who use GeForce GPUs at work. The new NVIDIA GeForce Studio 565.90 WHQL release is based on the GeForce Game Ready 565.90 WHQL drivers the company released a couple of weeks ago. Version 565.90 WHQL of the Studio drivers comes with support for the AI-accelerated features of the Adobe Creative Cloud suite announced at Adobe MAX 2024. NVIDIA has been working with Adobe to have its creative applications benefit from the AI acceleration capabilities of GeForce RTX GPUs; apps such as Premiere Pro, After Effects, and Substance 3D should benefit the most. The drivers also add support for CUDA 12.7 and include some of the game-specific fixes introduced in the Game Ready version of GeForce 565.90 WHQL. Grab the drivers from the link below.

DOWNLOAD: NVIDIA Studio Drivers 565.90 WHQL (October 2024 Update)

Flex Announces Liquid-Cooled Rack and Power Solutions for AI Data Centers at 2024 OCP Global Summit

Flex today announced new reference platforms for liquid-cooled servers, rack, and power products that will enable customers to sustainably accelerate data center growth. These innovations build on Flex's ability to address technical challenges associated with power, heat generation, and scale to support artificial intelligence (AI) and high-performance computing (HPC) workloads.

"Flex delivers integrated data center IT and power infrastructure solutions that address the growing power and compute demands in the AI era," said Michael Hartung, president and chief commercial officer, Flex. "We are expanding our unique portfolio of advanced manufacturing capabilities, innovative products, and lifecycle services, enabling customers to deploy IT and power infrastructure at scale and drive AI data center expansion."

Rittal Unveils Modular Cooling Distribution Unit With Over 1 MW Capacity

In close cooperation with hyperscalers and server OEMs, Rittal has developed a modular cooling distribution unit (CDU) that delivers a cooling capacity of over 1 MW. It will be the centerpiece exhibit at Rittal's booth A24 at the 2024 OCP Global Summit. The CDU uses water-based direct liquid cooling, making it an example of the new IT infrastructure technologies that enable AI applications.

New technology, familiar handling?
"To put the technology into practice, it is not enough to simply provide the cooling capacity and integrate the solution into the facility - which also still poses challenges," says Lars Platzhoff, Head of Rittal's Business Unit Cooling Solutions: "Despite the new technology, the solutions must remain manageable by the data center team as part of the usual service. At best, this should be taken into account already at the design stage."

Jabil Intros New Servers Powered by AMD 5th Gen EPYC and Intel Xeon 6 Processors

Jabil Inc. announced today that it is expanding its server portfolio with the J421E-S and J422-S servers, powered by AMD 5th Generation EPYC and Intel Xeon 6 processors. These servers are purpose-built for scalability in a variety of cloud data center applications, including AI, high-performance computing (HPC), fintech, networking, storage, databases, and security — representing the latest generation of server innovation from Jabil.

Built with customization and innovation in mind, the design-ready J422-S and J421E-S servers will allow engineering teams to meet customers' specific requirements. By fine-tuning Jabil's custom BIOS and BMC firmware, Jabil can create a competitive advantage for customers by developing the server configuration needed for higher performance, data management, and security. The server platforms are now available for sampling and will be in production by the first half of 2025.

Minisforum Unveils the EliteMini AI370 With AMD Ryzen AI 9 HX 370 CPU

Minisforum is thrilled to announce the launch of the EliteMini AI370, a powerful mini PC that combines groundbreaking CPU performance with a sleek, compact design. This innovative device is designed to meet the demands of gamers, creators, and tech enthusiasts who need high performance in a small footprint.

The Next-Level Performance
At the core of the EliteMini AI370 is the new AMD Ryzen AI 9 HX 370 processor. Featuring 12 cores and 24 threads, this advanced chip delivers effortless power for gaming and creative applications. It expertly balances high performance and energy efficiency, with integrated AI enhancements that optimize workflows and daily tasks like never before.

Eliyan Delivers Highest Performing Chiplet Interconnect PHY at 64Gbps in 3nm Process

Eliyan Corporation, credited for the invention of the semiconductor industry's highest-performance and most efficient chiplet interconnect, today revealed the successful delivery of first silicon for its NuLink-2.0 PHY, manufactured in a 3 nm process. The device achieves 64 Gbps/bump, the industry's highest performance for a die-to-die PHY solution for multi-die architectures. While compatible with the UCIe standard, the milestone further confirms Eliyan's ability to extend die-to-die connectivity to 2x higher bandwidth, on standard as well as advanced packaging, at unprecedented power, area, and latency.

The NuLink-2.0 is a multi-mode PHY solution that also supports UMI (Universal Memory Interconnect), a novel chiplet interconnect technology that improves Die-to-Memory bandwidth efficiency by more than 2x. UMI leverages a dynamic bidirectional PHY whose specifications are currently being finalized with the Open Compute Project (OCP) as BoW 2.1.
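For a sense of scale, the sketch below shows how a per-bump data rate translates into aggregate die-to-die bandwidth. The 64 Gbps/bump figure comes from the announcement; the bump count is a purely hypothetical number chosen for illustration and is not an Eliyan specification.

```python
# Aggregate die-to-die bandwidth from a per-bump data rate.
# 64 Gbps/bump is quoted in the announcement; the bump count below is a
# hypothetical illustration, not an Eliyan specification.
gbps_per_bump = 64
data_bumps = 1024                        # assumed number of data-carrying bumps
aggregate_tbps = gbps_per_bump * data_bumps / 1000
print(f"~{aggregate_tbps:.1f} Tb/s per interface with these assumptions")
```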

Lenovo Accelerates Business Transformation with New ThinkSystem Servers Engineered for Optimal AI and Powered by AMD

Today, Lenovo announced its industry-leading ThinkSystem infrastructure solutions powered by AMD EPYC 9005 Series processors, as well as AMD Instinct MI325X accelerators. Backed by 225 of AMD's world-record performance benchmarks, the Lenovo ThinkSystem servers deliver an unparalleled combination of AMD technology-based performance and efficiency to tackle today's most demanding edge-to-cloud workloads, including AI training, inferencing and modeling.

"Lenovo is helping organizations of all sizes and across various industries achieve AI-powered business transformations," said Vlad Rozanovich, Senior Vice President, Lenovo Infrastructure Solutions Group. "Not only do we deliver unmatched performance, we offer the right mix of solutions to change the economics of AI and give customers faster time-to-value and improved total value of ownership."

HPE Launches ProLiant Compute XD685 Servers Powered by 5th Gen AMD EPYC Processors and AMD Instinct MI325X Accelerators

Hewlett Packard Enterprise today announced the HPE ProLiant Compute XD685 for complex AI model training tasks, powered by 5th Gen AMD EPYC processors and AMD Instinct MI325X accelerators. The new HPE system is optimized to quickly deploy high-performing, secure and energy-efficient AI clusters for use in large language model training, natural language processing and multi-modal training.

The race is on to unlock the promise of AI and its potential to dramatically advance outcomes in workforce productivity, healthcare, climate sciences and much more. To capture this potential, AI service providers, governments and large model builders require flexible, high-performance solutions that can be brought to market quickly.

Edifier Proudly Announces New True Wireless Earbuds - NeoDots

Edifier International, the award-winning audio electronics designer, announces the NeoDots True Wireless earbuds featuring a hybrid driver unit combined with a digital signal processor for the optimum in sound quality.

The combination of dynamic drivers and balanced armature drivers in audio equipment allows a broad frequency response, where dynamic drivers effectively manage low frequencies, delivering deep and rich bass, whilst the balanced armature drivers handle mid and high frequencies ensuring clear and detailed treble. This synergy results in a well-rounded sound profile that enhances the listening experience across different music genres and audio content.
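As a rough illustration of that frequency split, the Python sketch below routes a test signal through a simple two-way crossover: lows toward the dynamic driver, mids and highs toward the balanced armature. The 2.5 kHz crossover point and fourth-order Butterworth filters are assumptions for the example, not Edifier's actual tuning.

```python
# Minimal two-way crossover sketch: lows go to the dynamic driver,
# mids/highs to the balanced armature. Crossover frequency and filter
# order are illustrative assumptions, not Edifier's tuning.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000          # sample rate in Hz
crossover_hz = 2500  # assumed crossover frequency

low_sos = butter(4, crossover_hz, btype="lowpass", fs=fs, output="sos")
high_sos = butter(4, crossover_hz, btype="highpass", fs=fs, output="sos")

t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 8000 * t)  # bass + treble test tone

to_dynamic_driver = sosfilt(low_sos, signal)       # deep bass content
to_balanced_armature = sosfilt(high_sos, signal)   # mid/high content
print(to_dynamic_driver[:3], to_balanced_armature[:3])
```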

MiTAC Announces New Servers Featuring AMD EPYC 9005 Series CPUs and AMD Instinct MI325X GPUs

MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), today announced the launch of its new high-performance servers, featuring the latest AMD EPYC 9005 Series CPUs and AMD Instinct MI325X accelerators.

"AMD is the trusted data center solutions provider of choice for leading enterprises worldwide, whether they are enabling corporate AI initiatives, building large-scale cloud deployments, or hosting critical business applications on-premises," said Ravi Kuppuswamy, senior vice president, Server Business Unit, AMD. "Our latest 5th Gen AMD EPYC CPUs provide the performance, flexibility and reliability - with compatibility across the x86 data center ecosystem - to deliver tailored solutions that meet the diverse demands of the modern data center."

ASRock Rack Unveils New Server Platforms Supporting AMD EPYC 9005 Series Processors and AMD Instinct MI325X Accelerators at AMD Advancing AI 2024

ASRock Rack Inc., a leading innovative server company, announced upgrades to its extensive lineup to support AMD EPYC 9005 Series processors. Among these updates is the introduction of the new 6U8M-TURIN2 GPU server. This advanced platform features AMD Instinct MI325X accelerators, specifically optimized for intensive enterprise AI applications, and will be showcased at AMD Advancing AI 2024.

ASRock Rack Introduces GPU Servers Powered by AMD EPYC 9005 Series Processors
AMD today revealed the 5th Generation AMD EPYC processors, offering a wide range of core counts (up to 192 cores), frequencies (up to 5 GHz), and expansive cache capacities. Select high-frequency processors, such as the AMD EPYC 9575F, are optimized for use as host CPUs in GPU-enabled systems. Additionally, the just-launched AMD Instinct MI325X accelerators feature substantial HBM3E memory and 6 TB/s of memory bandwidth, enabling quick access and efficient handling of large datasets and complex computations.

AMD Launches New Ryzen AI PRO 300 Series Processors to Power Next Generation of AI PCs

Today, AMD (NASDAQ: AMD) announced its third generation of commercial AI mobile processors, designed specifically to transform business productivity with Copilot+ features, including live captioning and language translation in conference calls and advanced AI image generation. The new Ryzen AI PRO 300 Series processors deliver industry-leading AI compute, with up to three times the AI performance of the previous generation, and offer uncompromising performance for everyday workloads. Enabled with AMD PRO Technologies, the Ryzen AI PRO 300 Series processors offer world-class security and manageability features designed to streamline IT operations and ensure exceptional ROI for businesses.

Ryzen AI PRO 300 Series processors feature the new AMD "Zen 5" architecture, delivering outstanding CPU performance, and are the world's best lineup of commercial processors for Copilot+ enterprise PCs. Laptops equipped with Ryzen AI PRO 300 Series processors are designed to tackle businesses' toughest workloads, with the top-of-stack Ryzen AI 9 HX PRO 375 offering up to 40% higher performance and up to 14% faster productivity performance compared to Intel's Core Ultra 7 165U. With the addition of the XDNA 2 architecture powering the integrated NPU, AMD Ryzen AI PRO 300 Series processors offer a cutting-edge 50+ NPU TOPS (trillions of operations per second) of AI processing power, exceeding Microsoft's Copilot+ AI PC requirements and delivering exceptional AI compute and productivity capabilities for the modern business. Built on a 4 nm process and with innovative power management, the new processors deliver extended battery life, ideal for sustained performance and productivity on the go.
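As a back-of-the-envelope illustration of what an NPU TOPS rating means, the sketch below multiplies operations per clock by clock speed. The MAC count and frequency are hypothetical round numbers chosen to land near 50 TOPS; they are not AMD's published XDNA 2 configuration.

```python
# Illustrative relationship between an NPU's TOPS rating and its hardware:
# TOPS ~= ops_per_cycle * clock. The MAC count and clock below are
# hypothetical values, not AMD's published XDNA 2 configuration.
macs_per_cycle = 16_384              # hypothetical INT8 MAC units
clock_hz = 1.6e9                     # hypothetical NPU clock
ops_per_cycle = macs_per_cycle * 2   # each MAC = 1 multiply + 1 add
tops = ops_per_cycle * clock_hz / 1e12
print(f"{tops:.1f} TOPS")            # ~52 TOPS, in the "50+ TOPS" range
```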

AMD Launches Instinct MI325X Accelerator for AI Workloads: 256 GB HBM3E Memory and 2.6 PetaFLOPS FP8 Compute

During its "Advancing AI" conference today, AMD has updated its AI accelerator portfolio with the Instinct MI325X accelerator, designed to succeed its MI300X predecessor. Built on the CDNA 3 architecture, Instinct MI325X brings a suite of improvements over the old SKU. Now, the MI325X features 256 GB of HBM3E memory running at 6 TB/s bandwidth. The capacity memory alone is a 1.8x improvement over the old MI300 SKU, which features 192 GB of regular HBM3 memory. Providing more memory capacity is crucial as upcoming AI workloads are training models with parameter counts measured in trillions, as opposed to billions with current models we have today. When it comes to compute resources, the Instinct MI325X provides 1.3 PetaFLOPS at FP16 and 2.6 PetaFLOPS at FP8 training and inference. This represents a 1.3x improvement over the Instinct MI300.

A chip alone is worthless without a good platform, and AMD has made the Instinct MI325X OAM modules a drop-in replacement for the current MI300X platform, as the two are pin-compatible. A system packing eight MI325X accelerators offers 2 TB of HBM3E memory with 48 TB/s of aggregate memory bandwidth, and achieves 10.4 PetaFLOPS of FP16 and 20.8 PetaFLOPS of FP8 compute performance. AMD uses NVIDIA's H200 HGX as the reference for its competitive claims, stating that the Instinct MI325X platform outperforms the H200 HGX system by 1.3x across the board in memory bandwidth and FP16/FP8 compute performance, and by 1.8x in memory capacity.
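The platform-level figures follow directly from multiplying the per-accelerator specs by eight, as the short sketch below illustrates; the per-GPU numbers are taken from the article, while the helper function itself is only for illustration.

```python
# Back-of-the-envelope check of the 8-GPU MI325X platform figures quoted
# above, scaling the per-accelerator specs by the GPU count. Per-GPU numbers
# come from the article; the helper is just a sketch.

def platform_totals(gpus: int, hbm_gb: float, bw_tbs: float,
                    fp16_pflops: float, fp8_pflops: float) -> dict:
    """Aggregate per-accelerator specs across a multi-GPU platform."""
    return {
        "memory_tb": gpus * hbm_gb / 1024,   # 8 x 256 GB  -> 2 TB
        "bandwidth_tbs": gpus * bw_tbs,      # 8 x 6 TB/s  -> 48 TB/s
        "fp16_pflops": gpus * fp16_pflops,   # 8 x 1.3 PF  -> 10.4 PF
        "fp8_pflops": gpus * fp8_pflops,     # 8 x 2.6 PF  -> 20.8 PF
    }

print(platform_totals(gpus=8, hbm_gb=256, bw_tbs=6.0,
                      fp16_pflops=1.3, fp8_pflops=2.6))
```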

MSI Unveils Next-Generation AI Gaming Desktops Powered by Intel's Arrow Lake-S

MSI unveils its new lineup of AI gaming desktops, following the introduction of Intel's Arrow Lake-S desktop processors. The lineup features two advanced models: the MPG Infinite X3 AI and the MEG Vision X. These desktops harness the power of Intel Core Ultra processors (Series 2) with built-in Neural Processing Units (NPUs), coupled with NVIDIA GeForce RTX graphics cards. This powerful combination delivers enhanced performance in both AI-accelerated gaming and complex processing tasks, aiming to optimize the gaming experience.

These new desktops are equipped with up to Intel Core Ultra 9 processor 285K, boasting 8 P-Cores, 16 E-Cores, and an integrated 13 trillion operations per second (TOPS) NPU. When paired with up to a GeForce RTX 4090 graphics card, the systems achieve an impressive total of over 1300 TOPS, enabling them to handle advanced AI tasks effortlessly while enhancing gaming performance and AI-generated content (AIGC) efficiency. The integrated NPU significantly enhances processing capabilities, particularly for AI-related tasks. In applications like DIGIME software, it boosts AI inference efficiency while simultaneously reducing GPU load. This optimization not only improves performance in specific applications but also benefits overall AI computation across various tasks.
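The combined figure is essentially the sum of NPU and GPU throughput, as the sketch below illustrates. The 13 NPU TOPS come from the article, while the RTX 4090 figure is an assumption based on NVIDIA's public marketing numbers rather than anything stated here.

```python
# Rough illustration of how the ">1300 TOPS" platform figure is reached:
# NPU TOPS (13, from the article) plus the GPU's tensor-core throughput.
# The RTX 4090 figure below is an assumption based on NVIDIA's public
# marketing numbers, not a value taken from this article.
npu_tops = 13
gpu_tops = 1321                    # assumed RTX 4090 AI TOPS (sparse INT8)
print(npu_tops + gpu_tops)         # ~1334 TOPS, consistent with "over 1300 TOPS"
```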

ScaleFlux Announces Two New SSD Controllers and One CXL Controller

In the past 13 years, global data production has surged, increasing an estimated 74 times. (1) Looking forward, McKinsey projects AI to spur 35% annual growth in enterprise SSD capacity demand, from 181 Exabytes (EB) in 2024 to 1,078EB in 2030. (2) To address this growing demand, ScaleFlux, a leader in data storage and memory technology, is announcing a significant expansion of its product portfolio. The company is introducing cutting-edge controllers for both NVMe SSDs and Compute Express Link (CXL) modules, reinforcing its leadership in innovative technology for the data pipeline. "With the release of three new ASIC controllers and key updates to its existing lineup, ScaleFlux continues to push the boundaries of SSD and memory performance, power efficiency, and data integrity," points out Hao Zhong, CEO and Co-Founder of the company.
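The projection is consistent with simple compound growth, as the quick check below shows; the starting capacity, growth rate, and time span are the figures quoted above.

```python
# Sanity check of the quoted projection: 35% annual growth from 181 EB in 2024.
base_eb, annual_growth, years = 181, 0.35, 2030 - 2024
projected_eb = base_eb * (1 + annual_growth) ** years
print(f"~{projected_eb:.0f} EB by 2030")   # ~1,096 EB, close to the cited 1,078 EB
```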

Three New SoC Controllers to Transform Data Center Storage
ScaleFlux is proud to unveil three new SoC controllers designed to enhance data center, AI and enterprise infrastructure:

Micron Updates Corporate Logo with "Ahead of The Curve" Design

Today, Micron updated its corporate logo with new symbolism. The redesign comes as Micron celebrates over four decades of technological advancement in the semiconductor industry. The new logo features a distinctive silicon color, paying homage to the wafers at the core of Micron's products. Its curved lettering represents the company's ability to stay ahead of industry trends and adapt to rapid technological changes. The design also incorporates vibrant gradient colors inspired by light reflections on the wafers that form the foundation of Micron's memory and storage products.

This rebranding effort coincides with Micron's expanding role in AI, where memory and storage innovations are increasingly crucial. The company has positioned itself beyond a commodity memory supplier, now offering leadership in solutions for AI data centers, high-performance computing, and AI-enabled devices. The company has come far from its original 64K DRAM in 1981 to HBM3E DRAM today. Micron offers different HBM memory products, graphics memory powering consumer GPUs, CXL memory modules, and DRAM components and modules.

Gigabyte Unveils Groundbreaking Z890 Motherboards

GIGABYTE Technology, one of the top global manufacturers of motherboards, graphics cards, and hardware solutions, today announced the launch of its revolutionary Z890 motherboards. These next-generation motherboards are set to redefine the standards in performance, AI integration, and user experience for enthusiasts and professionals alike. Powered by state-of-the-art artificial intelligence, these motherboards push the boundaries of what's possible in computing.

Infinite Memory Performance
The GIGABYTE Z890 lineup features D5 Bionic Corsa technology, pushing memory performance to new peaks of DDR5 XMP 9500 and above. A marvel of AI-enhanced overclocking for DDR5 memory, D5 Bionic Corsa combines four key technologies spanning software, firmware, and hardware. AORUS AI SNATCH and the AI SNATCH Engine provide AI overclocking for ultimate performance, while AI-Driven PCB Design and HyperTune BIOS deliver AI-designed signal enhancement on the motherboard. AORUS AI SNATCH is AI-model-based auto-overclocking software that lets users unleash maximum performance with one-click activation. The AI SNATCH Engine is the AI model at the core of the AORUS AI SNATCH software, trained by AI TOP on diverse overclocking datasets to improve precision and optimize performance with stability. AI-Driven PCB Design employs AI algorithms to optimize vias, routing, and stackups, while the HyperTune BIOS uses AI to optimize the MRC and adapt to signal conditions for peak efficiency, significant memory clock boosts, and enhanced overall performance.

Fibocom Unveils 5G AI FWA Solution Based on Snapdragon X75 Modem-RF System

Fibocom, a leading global provider of IoT (Internet of Things) wireless solutions and wireless communication modules, unveils a pioneering AI-powered FWA solution based on the Snapdragon X75 5G Modem-RF System during Network X 2024. The solution aims to simplify configuration, enhance the user experience, and foster service personalization. The integration of AI into 5G FWA devices fundamentally changes how users interact with end devices and improves the overall experience.

The 5G AI FWA solution is designed to use AI to understand and respond to users' requests and streamline task management, accepting voice input (transcribed via OpenAI's Whisper service) or text messages; conversations can also be carried out through the web interface. This seamless, intuitive communication system significantly expands CPE capability and improves the user experience. The AI-powered FWA solution also extends its intelligence to network optimization and latency reduction, serving as a hub for unified task management.
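For context, the hedged sketch below shows what the voice-to-text step could look like using OpenAI's public Python SDK. Fibocom's actual on-device integration is not detailed in the announcement, and the file name is a placeholder.

```python
# Hedged sketch of the voice-request step described above: a recorded voice
# clip is sent to OpenAI's Whisper transcription endpoint, and the resulting
# text could then drive task management. Uses OpenAI's public Python SDK;
# Fibocom's actual integration path is not described in the article, and the
# file path is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("voice_request.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)  # the user's spoken request as text
```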

NVIDIA "Blackwell" GB200 Server Dedicates Two-Thirds of Space to Cooling at Microsoft Azure

Late Tuesday, Microsoft Azure shared an interesting picture on its social media platform X, showcasing the pinnacle of GPU-accelerated servers: NVIDIA "Blackwell" GB200-powered AI systems. Microsoft is one of NVIDIA's largest customers, and the company often receives products first to integrate into its cloud and internal infrastructure. NVIDIA even takes feedback from companies like Microsoft when designing future products, especially ones like the now-canceled NVL36x2 system. The picture below shows a massive cluster in which the compute hardware occupies roughly one-third of the entire system, while the remaining two-thirds is dedicated to closed-loop liquid cooling.

The entire system is connected using InfiniBand networking, a standard for GPU-accelerated systems thanks to its low-latency packet transfer. While details of the system are scarce, we can see that the integrated closed-loop liquid cooling allows the GPU racks to use a 1U form factor for increased density. Given that these systems will go into the wider Microsoft Azure data centers, they need to be easy to maintain and cool. There are limits to the power and heat output that Microsoft's data centers can handle, so these systems are built to fit the internal specifications Microsoft designs. More compute-dense systems exist, of course, such as NVIDIA's NVL72, but hyperscalers usually opt for custom solutions that fit their data center specifications. Finally, Microsoft noted that more details about its GB200-powered AI systems will be shared at the Microsoft Ignite conference in November.

Global PC Shipments Dip Slightly Despite Recovering Economy, But AI Integration is the Key to Future Market Success

Even though the global economy shows signs of recovery, worldwide shipments of traditional PCs dipped 2.4% year-over-year (YoY) to 68.8 million units, during the third quarter of 2024 (3Q24), according to preliminary results from the International Data Corporation (IDC) Worldwide Quarterly Personal Computing Device Tracker. Factors including rising costs and inventory replenishment led to a surge in shipments in the previous quarter, resulting in a slightly slower sales cycle.
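For reference, a 2.4% year-over-year dip to 68.8 million units implies roughly 70.5 million units shipped a year earlier, as the quick back-calculation below shows.

```python
# Back-calculate the prior-year quarter from the quoted YoY decline.
q3_2024_units_m = 68.8
yoy_decline = 0.024
q3_2023_units_m = q3_2024_units_m / (1 - yoy_decline)
print(f"~{q3_2023_units_m:.1f} million units in 3Q23")   # ~70.5 million
```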

"Demand, without a doubt, has returned for PCs amongst consumers and commercial buyers," said Jitesh Ubrani, research manager with IDC's Worldwide Mobile Device Trackers. "However, much of the demand was still concentrated at the entry-level thanks to a recovering economy and the back-to-school season in North America. That said, newer AI PCs such as Copilot+ PCs from Qualcomm along with Intel and AMD's equivalent chips as well as Apple's expected M4-based Macs are expected to drive the premium segment in coming months."

NVIDIA cuLitho Computational Lithography Platform is Moving to Production at TSMC

TSMC, the world leader in semiconductor manufacturing, is moving to production with NVIDIA's computational lithography platform, called cuLitho, to accelerate manufacturing and push the limits of physics for the next generation of advanced semiconductor chips. A critical step in the manufacture of computer chips, computational lithography is involved in the transfer of circuitry onto silicon. It requires complex computation - involving electromagnetic physics, photochemistry, computational geometry, iterative optimization and distributed computing. A typical foundry dedicates massive data centers for this computation, and yet this step has traditionally been a bottleneck in bringing new technology nodes and computer architectures to market.

Computational lithography is also the most compute-intensive workload in the entire semiconductor design and manufacturing process. It consumes tens of billions of hours per year on CPUs in the leading-edge foundries. A typical mask set for a chip can take 30 million or more hours of CPU compute time, necessitating large data centers within semiconductor foundries. With accelerated computing, 350 NVIDIA H100 Tensor Core GPU-based systems can now replace 40,000 CPU systems, accelerating production time, while reducing costs, space and power.
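Taken at face value, those figures imply a better-than-100x reduction in system count, as the quick calculation below shows; the per-system workload split at the end is a hypothetical illustration for scale, not an NVIDIA or TSMC statement.

```python
# Ratios implied by the quoted figures: 350 H100-based systems vs. 40,000 CPU
# systems, and a 30-million-CPU-hour mask set.
cpu_systems, gpu_systems = 40_000, 350
print(f"System-count reduction: ~{cpu_systems / gpu_systems:.0f}x")   # ~114x

mask_cpu_hours = 30_000_000
# Hypothetical even split across the 40,000 CPU systems, for scale only.
hours_per_system = mask_cpu_hours / cpu_systems
print(f"~{hours_per_system:.0f} CPU hours (~{hours_per_system / 24:.0f} days) per system")
```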

Supermicro Introduces New Versatile System Design for AI Delivering Optimization and Flexibility at the Edge

Super Micro Computer, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, announces the launch of a new, versatile, high-density infrastructure platform optimized for AI inferencing at the network edge. As companies seek to embrace complex large language models (LLM) in their daily operations, there is a need for new hardware capable of inferencing high volumes of data in edge locations with minimal latency. Supermicro's innovative system combines versatility, performance, and thermal efficiency to deliver up to 10 double-width GPUs in a single system capable of running in traditional air-cooled environments.

"Owing to the system's optimized thermal design, Supermicro can deliver all this performance in a high-density 3U 20 PCIe system with 256 cores that can be deployed in edge data centers," said Charles Liang, president and CEO of Supermicro. "As the AI market is growing exponentially, customers need a powerful, versatile solution to inference data to run LLM-based applications on-premises, close to where the data is generated. Our new 3U Edge AI system enables them to run innovative solutions with minimal latency."

ASUS Republic of Gamers and ASUS Metaverse Announce the Release of the SL@SH206 Virtual Experience

ASUS Republic of Gamers (ROG) and ASUS Metaverse today announced the release of the SL@SH206 Web3 project, which creates a gamified experience and Web3 community where players can explore the virtual world and unleash their creativity. The AI-generated stories are supported by cloud technology from Taiwan Web Service Corporation (TWSC), a subsidiary of ASUS. SL@SH206 is targeting users worldwide and is expected to support both Chinese and English languages.

Innovative Web3 gaming ecosystem
SL@SH206 is the first step in the collaboration on virtual experiences between ASUS Metaverse and the ROG brand; it is built using Web3 technology that immerses players in a cyberpunk world. Players take on the role of an amnesiac robot, ZEI-6, in a society where humanity has sought refuge in the virtual world. Through AI-generated content and a gamified experience, they will gradually uncover the secrets of the SL@SH206 world. Along the way, players can discover various virtual objects, trigger game events, and earn SL@SH206 digital collectibles by completing missions. Additionally, players can join the official Discord community, where they can interact and participate in mini-games to accumulate points and earn valuable rewards.

Three Cutting-Edge MSI Laptops Recognized as "Featured Finalists" at the International Design Excellence Awards (IDEA) 2024

The 2024 International Design Excellence Awards (IDEA) has named three groundbreaking laptops—Cyborg 14 AI, Stealth 18 AI Studio, and Titan 18 HX—as "Featured Finalists" for their outstanding innovation and design. These laptops, each representing the cutting edge of gaming technology, push the boundaries of user experience with their unique aesthetics and high-performance capabilities.

The Cyborg 14 AI offers a visually striking fusion of biological and mechanical elements, while its compact, lightweight 14-inch body underscores the convenience of gaming on the go. The Stealth 18 AI Studio, modeled after stealth combat aircraft, executes its mission precisely, pairing a thin chassis with top-tier gaming performance. The Titan 18 HX, MSI's flagship gaming laptop, shares a kindred spirit with hardcore gamers, satisfying their voracious gaming appetites with uncompromising PC specifications and high-grade thermal efficiency that set the bar extraordinarily high.

Inflection AI and Intel Launch Enterprise AI System

Today, Inflection AI and Intel announced a collaboration to accelerate the adoption and impact of AI for enterprises as well as developers. Inflection AI is launching Inflection for Enterprise, an industry-first, enterprise-grade AI system powered by Intel Gaudi and Intel Tiber AI Cloud (AI Cloud), to deliver empathetic, conversational, employee-friendly AI capabilities and provide the control, customization and scalability required for complex, large-scale deployments. This system is available presently through the AI Cloud and will be shipping to customers as an industry-first AI appliance powered by Gaudi 3 in Q1 2025.

"Through this strategic collaboration with Inflection AI, we are setting a new standard with AI solutions that deliver immediate, high-impact results. With support for open-source models, tools, and competitive performance per watt, Intel Gaudi 3 solutions make deploying GenAI accessible, affordable, and efficient for enterprises of any size." -Justin Hotard, Intel executive vice president and general manager of the Data Center and AI Group