News Posts matching #AI

Ventana Introduces Veyron V2 - World's Highest Performance Data Center-Class RISC-V Processor and Platform

Ventana Micro Systems Inc. today announced the second generation of its Veyron family of RISC-V processors. The new Veyron V2 is the highest performance RISC-V processor available today and is offered in the form of chiplets and IP. Ventana Founder and CEO Balaji Baktha will share the details of Veyron V2 today during his keynote speech at the RISC-V Summit North America 2023 in Santa Clara, California.

"Veyron V2 represents a leap forward in our quest to lead the industry in high-performance RISC-V CPUs that are ready for rapid customer adoption," said Balaji Baktha, Founder and CEO of Ventana. "It substantiates our commitment to customer innovation, workload acceleration, and overall optimization to achieve best in class performance per Watt per dollar. V2 enhancements unleash innovation across data center, automotive, 5G, AI, and client applications."

AMD Extends 3rd Gen EPYC CPU Lineup to Deliver New Levels of Value for Mainstream Applications

Today, AMD announced the extension of its 3rd Gen AMD EPYC processor family with six new offerings providing a robust suite of data center CPUs to meet the needs of general IT and mainstream computing for businesses seeking to leverage the economics of established platforms. The complete family of 3rd Gen AMD EPYC CPUs complements the leadership performance and efficiency of the latest 4th Gen AMD EPYC processors with impressive price-performance, modern security features and energy efficiency for less technically demanding business critical workloads.

The race to deliver AI and high performance computing is creating a technology gap for IT decision-makers seeking mainstream performance. To meet the growing demand for widely deployed, cost effective and proven mainstream solutions in the mid-market and in the channel, AMD is extending the 3rd Gen EPYC CPU offering to provide excellent value, performance, energy efficiency and security features for business-critical applications. The 3rd Gen AMD EPYC CPU portfolio enables a wide array of broadly deployed enterprise server solutions, supported by trusted channel sellers and OEMs such as Cisco, Dell Technologies, Gigabyte, HPE, Lenovo and Supermicro.

Alibaba Readies PCIe 5.0 SSD Controller Based on RISC-V ISA

Alibaba's T-Head unit, responsible for the company's in-house IC design, has announced the first domestic SSD controller based on the PCIe 5.0 specification. Called the Zhenyue 510, the SSD controller is aimed at enterprise SSDs. Interestingly, the Zhenyue 510 is powered by T-Head's custom Xuantie C910 cores based on the RISC-V instruction set architecture (ISA). Alongside its PCIe 5.0 host interface, the controller uses DDR5 memory as a cache buffer. There are no official performance figures yet, but the company claims 30% lower input/output latencies than competing offerings. T-Head claims the SSD has an IO processing capability of "3400 Kilo IOs per second, a data bandwidth of 14 Gbytes/s, and an extremely high energy efficiency of 420 Kilo IO per second for every Watt".
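For a rough sense of what those claims mean in practice, the sketch below is a minimal, illustrative calculation derived from the quoted figures; the operating points and test conditions are assumptions, not vendor-published details.

```python
# Back-of-the-envelope check of T-Head's claimed Zhenyue 510 figures
# (illustrative arithmetic only; the vendor has not published test conditions).

iops = 3_400_000               # claimed 3400 KIOPS
bandwidth_bytes_s = 14e9       # claimed 14 GB/s data bandwidth
efficiency_kiops_per_w = 420   # claimed 420 KIOPS per watt

# Implied controller power at peak IOPS, assuming the efficiency figure
# was measured at the same operating point (an assumption, not a vendor claim).
implied_power_w = (iops / 1000) / efficiency_kiops_per_w
print(f"Implied power draw: {implied_power_w:.1f} W")        # ~8.1 W

# Average transfer size if both peaks were hit simultaneously (they usually
# are not: IOPS is measured with small blocks, bandwidth with large ones).
avg_io_kib = bandwidth_bytes_s / iops / 1024
print(f"Hypothetical average IO size: {avg_io_kib:.1f} KiB")  # ~4 KiB
```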

This is an essential step towards Chinese self-sufficiency: T-Head has already designed various ICs for different processing tasks, and Alibaba's chip design unit now has a domestic design for storage as well. With its claimed low latency figures, the Zhenyue 510 is suitable for enterprise workloads such as big data analysis, as well as AI inference/training systems. Development of the Zhenyue 510 started in 1H 2021, and it took the company more than two years to complete the design and validation of the chip and prepare it for deployment. This is the second Chinese-made PCIe 5.0 SSD controller, after Yingren Technology (InnoGrit) announced its chip in September.

AMD Instinct MI300X Could Become Company's Fastest Product to Rake $1 Billion in Sales

AMD, in its post-Q3 2023 financial results call, stated that it expects the Instinct MI300X accelerator to be the fastest product in AMD history to rake in $1 billion in sales, meaning the shortest time any AMD product has taken over its lifecycle to register $1 billion in sales. With the MI300 series, the company hopes to finally break into the AI-driven HPC accelerator market that NVIDIA dominates, and at scale. This growth is attributable to two distinct factors. The first is that NVIDIA is supply bottlenecked, and customers looking for alternatives have finally found a suitable one in the MI300 series; the second is that, with the MI300 series, AMD has finally ironed out the software ecosystem backing hardware that looks incredible on paper.

It's also worth noting here that AMD is rumored to be sacrificing its market presence in the enthusiast-class gaming GPU segment with its next generation, with the goal of maximizing its foundry allocation for HPC accelerators such as the MI300X. HPC accelerators are a significantly higher-margin class of products than gaming GPUs such as the Radeon RX 7900 XTX. The RX 7900 XTX and its refresh under the RX 7950 series are not expected to have a successor in the RDNA4 generation. "We now expect datacenter GPU revenue to be approximately $400 million in the fourth quarter and exceed $2 billion in 2024 as revenue ramps throughout the year," said Dr. Lisa Su, CEO of AMD, at the company's earnings call with analysts and investors. "This growth would make MI300 the fastest product to ramp to $1 billion in sales in AMD history."

Microsoft Windows 11 23H2 Major Update Begins Rolling Out, Bets Big on Generative AI

Microsoft on Tuesday began rolling out Windows 11 23H2, the year's major update to its PC operating system. This release sees a major integration of AI into several features across the OS. To begin with, Microsoft Copilot, which made its debut with 365 and Office, is being integrated with Windows. Powered by Bing Chat, Copilot is a GPT-based chatbot that not only gathers information from web search, but can also be made to perform a variety of OS-level functions. For example, you can ask it to pair a Bluetooth device, or to find something on your machine, including content within your files. The WIN+C key combination now brings up Copilot. Next up, Microsoft Paint gets its biggest feature update yet, with the generative AI-based Paint Cocreator feature. Not only will Paint assist your brush strokes in getting shapes and contents right, but much like Stable Diffusion and Midjourney, Paint now has a prompt-based image generation feature. For now, Paint Cocreator is being released as a preview feature.

Microsoft Clipchamp, the video editor included with Windows, gets a set of generative AI enhancements of its own, with tools such as Auto Compose, which assists in building a movie with scenes by getting the sequence of clips, transitions, effects, and filters right; and audio features such as narration and background score. Clipchamp also integrates with social platforms including TikTok, YouTube, and LinkedIn. Snipping Tool, the screengrab application of Windows, gets a couple of AI enhancements too, such as scanning an image to extract and redact information. Photos gets AI-accelerated image recognition and categorization; much like Google Photos, you can look for a picture by describing what you're looking for. As with each such annual major release, Microsoft will be releasing 23H2 in a phased manner through Windows Update, but if you're impatient and want to update immediately, or perform a clean installation, visit the link below.

DOWNLOAD: Windows 11 23H2 (Installation Assistant, Media Creator, ISOs)

AMD Reports Third Quarter 2023 Financial Results, Revenue Up 4% YoY

AMD (NASDAQ:AMD) today announced revenue for the third quarter of 2023 of $5.8 billion, gross margin of 47%, operating income of $224 million, net income of $299 million and diluted earnings per share of $0.18. On a non-GAAP basis, gross margin was 51%, operating income was $1.3 billion, net income was $1.1 billion and diluted earnings per share was $0.70.

"We delivered strong revenue and earnings growth driven by demand for our Ryzen 7000 series PC processors and record server processor sales," said AMD Chair and CEO Dr. Lisa Su. "Our data center business is on a significant growth trajectory based on the strength of our EPYC CPU portfolio and the ramp of Instinct MI300 accelerator shipments to support multiple deployments with hyperscale, enterprise and AI customers."

NVIDIA Might be Forced to Cancel US$5 Billion Worth of Orders from China

The U.S. Commerce Department seems to have thrown a big spanner into the NVIDIA machinery by informing the company that some US$5 billion worth of AI chip orders for China fall under the latest US export restrictions. The orders are said to have been headed for Alibaba, ByteDance, and Baidu, as well as possibly other major tech companies in China. This caused NVIDIA's shares to drop sharply, by close to five percent, when the market opened in the US earlier today, pushing NVIDIA's market cap below the US$1 trillion mark. The share price recovered somewhat in the afternoon, putting NVIDIA back in the trillion-dollar club.

Based on a statement to Reuters, NVIDIA doesn't seem overly concerned despite what appears to be a huge loss in sales, with a company spokesperson saying: "These new export controls will not have a meaningful impact in the near term." The US government will implement the new export restrictions from November, which obviously didn't give NVIDIA much of a chance to avoid them, and it looks as if the company will have to find new customers for the AI chips. Considering the current demand for NVIDIA's chips, this might not be too much of a challenge for the company, though.

IBM Unleashes the Potential of Data and AI with its Next-Generation IBM Storage Scale System 6000

Today, IBM introduced the new IBM Storage Scale System 6000, a cloud-scale global data platform designed to meet today's data intensive and AI workload demands, and the latest offering in the IBM Storage for Data and AI portfolio.

For the seventh consecutive year and counting, IBM is a Leader in the 2022 Gartner Magic Quadrant for Distributed File Systems and Object Storage, recognized for its vision and execution. The new IBM Storage Scale System 6000 seeks to build on IBM's leadership position with an enhanced high-performance parallel file system designed for data-intensive use cases. It provides up to 7M IOPS and up to 256 GB/s throughput for read-only workloads per system in a 4U (four rack units) footprint.

NVIDIA NeMo: Designers Tap Generative AI for a Chip Assist

A research paper released this week describes ways generative AI can assist one of the most complex engineering efforts: designing semiconductors. The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.

Few pursuits are as challenging as semiconductor design. Under a microscope, a state-of-the-art chip like an NVIDIA H100 Tensor Core GPU (above) looks like a well-planned metropolis, built with tens of billions of transistors, connected on streets 10,000x thinner than a human hair. Multiple engineering teams coordinate for as long as two years to construct one of these digital mega cities. Some groups define the chip's overall architecture, some craft and place a variety of ultra-small circuits, and others test their work. Each job requires specialized methods, software programs and computer languages.

Jabil to Take Over Intel Silicon Photonics Business

Jabil Inc., a global leader in design, manufacturing, and supply chain solutions, today announced it will take over the manufacture and sale of Intel's current Silicon Photonics-based pluggable optical transceiver ("module") product lines and the development of future generations of such modules.

"This deal better positions Jabil to cater to the needs of our valued customers in the data center industry, including hyperscale, next-wave clouds, and AI cloud data centers. These complex environments present unique challenges, and we are committed to tackling them head-on and delivering innovative solutions to support the evolving demands of the data center ecosystem," stated Matt Crowley, Senior Vice President of Cloud and Enterprise Infrastructure at Jabil. "This deal enables Jabil to expand its presence in the data center value chain."

Intel Joins the MLCommons AI Safety Working Group

Today, Intel announced it is joining the new MLCommons AI Safety (AIS) working group alongside artificial intelligence experts from industry and academia. As a founding member, Intel will contribute its expertise and knowledge to help create a flexible platform for benchmarks that measure the safety and risk factors of AI tools and models. As testing matures, the standard AI safety benchmarks developed by the working group will become a vital element of our society's approach to AI deployment and safety.

"Intel is committed to advancing AI responsibly and making it accessible to everyone. We approach safety concerns holistically and develop innovations across hardware and software to enable the ecosystem to build trustworthy AI. Due to the ubiquity and pervasiveness of large language models, it is crucial to work across the ecosystem to address safety concerns in the development and deployment of AI. To this end, we're pleased to join the industry in defining the new processes, methods and benchmarks to improve AI everywhere," said Deepak Patil, Intel corporate vice president and general manager, Data Center AI Solutions.

Rambus Boosts AI Performance with 9.6 Gbps HBM3 Memory Controller IP

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, today announced that the Rambus HBM3 Memory Controller IP now delivers up to 9.6 Gigabits per second (Gbps) performance supporting the continued evolution of the HBM3 standard. With a 50% increase over the HBM3 Gen 1 data rate of 6.4 Gbps, the Rambus HBM3 Memory Controller can enable a total memory throughput of over 1.2 Terabytes per second (TB/s) for training of recommender systems, generative AI and other demanding data center workloads.
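The headline throughput figure can be reconstructed from the per-pin data rate; the short sketch below assumes the standard 1024-bit HBM3 stack interface, which is part of the HBM3 specification rather than something stated in the announcement.

```python
# How the "over 1.2 TB/s" figure follows from a 9.6 Gbps HBM3 data rate,
# assuming a standard 1024-bit-wide HBM3 stack interface (16 x 64-bit channels).

data_rate_gbps_per_pin = 9.6
interface_width_bits = 1024

throughput_gb_s = data_rate_gbps_per_pin * interface_width_bits / 8
print(f"Per-stack throughput: {throughput_gb_s:.1f} GB/s")   # 1228.8 GB/s, i.e. over 1.2 TB/s

# The quoted 50% uplift over the HBM3 Gen 1 data rate of 6.4 Gbps:
gen1_rate_gbps = 6.4
print(f"Uplift over 6.4 Gbps: {(data_rate_gbps_per_pin / gen1_rate_gbps - 1) * 100:.0f}%")  # 50%
```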

"HBM3 is the memory of choice for AI/ML training, with large language models requiring the constant advancement of high-performance memory technologies," said Neeraj Paliwal, general manager of Silicon IP at Rambus. "Thanks to Rambus innovation and engineering excellence, we're delivering the industry's leading-edge performance of 9.6 Gbps in our HBM3 Memory Controller IP."

Cisco Partners with NVIDIA to Unleash the Power of Hybrid Workspaces

Cisco today unveiled the next milestone of its ongoing work with NVIDIA to deliver AI-powered meetings for hybrid workers. Cisco announced the launch of Room Kit EQX and the expansion of its Cinematic Meetings capabilities—both powered by NVIDIA's AI engine—with the goal of enhancing collaboration experiences with audio and video intelligence and enabling more equitable hybrid meetings.

"In order for people to want to come to the office, companies must fundamentally reimagine and reconfigure workspaces to provide seamless and immersive collaboration experience," said Jeetu Patel, Executive Vice President and General Manager, Cisco Security and Collaboration. "Our collaboration with NVIDIA helps make this possible as we expand our portfolio of AI-powered solutions that unlock the potential of hybrid workers."

SiFive to Lay Off Hundreds of Staff Amid Changing RISC-V Market Dynamics

SiFive was founded by some of the pioneering engineers who helped create the RISC-V instruction set architecture (ISA) and has helped the ecosystem grow. The company has been an active member of the RISC-V community and has contributed its guidance on various RISC-V extensions. However, according to sources close to More Than Moore, the company is reportedly downsizing its team, and layoffs are imminent. The downsizing affects about 20% of the workforce, equal to around 120-130 staff. However, that is only part of the story. SiFive is reportedly also canceling its pre-designed core portfolio and shifting focus to custom-designed core IP that it would sell to customers. This is in line with slowing demand for its pre-designed offerings and growing demand for AI-enhanced custom silicon. The company issued the following statement to More Than Moore.
SiFive PR for More Than Moore: "As we adjust to the rapidly changing semiconductor end markets, SiFive is realigning across all of our teams and geographies to better take advantage of the opportunities ahead, reduce operational complexities and increase our ability to respond quickly to customer product requirements. Unfortunately, as a result some positions were eliminated last week. The employees are being offered severance and outplacement assistance. SiFive continues to be excited about the momentum and long-term outlook for our business and RISC-V."
Additionally, there was another statement for More Than Moore, which you can see in its entirety below.

SK Hynix's LPDDR5T, World's Fastest Mobile DRAM, Completes Compatibility Validation with Qualcomm

SK hynix Inc. announced today that it has started commercialization of LPDDR5T (Low Power Double Data Rate 5 Turbo), the world's fastest mobile DRAM with a speed of 9.6 Gbps. The company said it has obtained validation that LPDDR5T is compatible with Qualcomm Technologies' new Snapdragon 8 Gen 3 Mobile Platform, marking the industry's first case of such a product being verified by the U.S. company.

SK hynix has proceeded with the compatibility validation of LPDDR5T, following the completion of its development in January, with support from Qualcomm Technologies. The completion of the process means that the memory is compatible with Snapdragon 8 Gen 3. With the validation process with Qualcomm Technologies, a leader in wireless telecommunication products and services, and other major mobile AP (Application Processor) providers successfully completed, SK hynix expects the range of LPDDR5T adoption to grow rapidly.
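For context on what 9.6 Gbps per pin translates to at the package level, here is a minimal sketch; the 64-bit total bus width is an assumption typical of flagship phones and is not stated in the announcement.

```python
# Rough peak-bandwidth figure for LPDDR5T at 9.6 Gbps per pin.
# The 64-bit total bus width is an assumption (common in flagship phones),
# not something SK hynix or Qualcomm specified here.

pin_rate_gbps = 9.6
bus_width_bits = 64

peak_gb_s = pin_rate_gbps * bus_width_bits / 8
print(f"Peak package bandwidth: {peak_gb_s:.1f} GB/s")   # 76.8 GB/s
```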

Qualcomm Launches Premium Snapdragon 8 Gen 3 to Bring Generative AI to the Next Wave of Flagship Smartphones

At Snapdragon Summit, Qualcomm Technologies, Inc. today announced its latest premium mobile platform, the Snapdragon 8 Gen 3—a true titan of on-device intelligence, premium-tier performance, and power efficiency. As the premium Android smartphone SoC leader, Qualcomm Technologies' latest processor will be adopted for flagship devices by global OEMs and smartphone brands including ASUS, Honor, iQOO, MEIZU, NIO, Nubia, OnePlus, OPPO, realme, Redmi, RedMagic, Sony, vivo, Xiaomi, and ZTE.

"Snapdragon 8 Gen 3 infuses high-performance AI across the entire system to deliver premium-level performance and extraordinary experiences to consumers. This platform unlocks a new era of generative AI enabling users to generate unique content, help with productivity, and other breakthrough use cases." said Chris Patrick, senior vice president and general manager of mobile handsets, Qualcomm Technologies, Inc. "Each year, we set out to design leading features and technologies that will power our latest Snapdragon 8-series mobile platform and the next generation of flagship Android devices. The Snapdragon 8 Gen 3 delivers."

Qualcomm Unleashes Snapdragon X Elite: The AI Super-Charged Platform to Revolutionize the PC

At Snapdragon Summit, Qualcomm Technologies, Inc. today announced the most powerful computing processor it has ever created for the PC: Snapdragon X Elite. This groundbreaking platform ushers in a new era of premium computing by delivering a massive leap forward with best-in-class CPU performance, leading on-device AI inferencing, and one of the most efficient processors in a PC with up to multiple days of battery life. As AI transforms how we interact with our PCs, Snapdragon X Elite is designed to support the intelligent and power-intensive tasks of the future that will enable powerful productivity, rich creativity, and immersive entertainment experiences from anywhere.

"Snapdragon X Elite represents a dramatic leap in innovation for computing as we deliver our new, custom Qualcomm Oryon CPU for super-charged performance that will delight consumers with incredible power efficiency and take their creativity and productivity to the next level," said Kedar Kondap, Senior Vice President & General Manager of Compute & Gaming, Qualcomm Technologies, Inc. "Powerful on-device AI experiences will enable seamless multitasking and new intuitive user experiences, empowering consumers and businesses alike to create and accomplish more." PCs powered by Snapdragon X Elite are expected mid-2024.

Qualcomm Snapdragon Elite X SoC for Laptop Leaks: 12 Cores, LPDDR5X Memory, and WiFi7

Thanks to information from Windows Report, we have received numerous details regarding Qualcomm's upcoming Snapdragon Elite X chip for laptops. The Snapdragon Elite X SoC is built around Nuvia-derived Oryon cores, of which Qualcomm put 12 in the SoC. While we don't know their base frequencies, the all-core boost reaches 3.8 GHz, and the SoC can reach up to 4.3 GHz with single- and dual-core boosting. The slide notes that this is a pure "big" core configuration, so there is no big.LITTLE design. The GPU in Snapdragon Elite X is still based on Qualcomm's Adreno IP; however, the performance figures are up significantly, reaching 4.6 TeraFLOPS of supposedly FP32 single-precision compute. Accompanying the CPU and GPU are dedicated AI and image-processing accelerators, such as the Hexagon Neural Processing Unit (NPU), which can process 45 trillion operations per second (TOPS). For the camera, the Spectra Image Signal Processor (ISP) supports up to 4K HDR video capture on a dual 36 MP or a single 64 MP camera setup.

The SoC supports LPDDR5X memory running at 8533 MT/s with a maximum capacity of 64 GB. The memory controller is apparently an 8-channel design with a 16-bit width per channel and a maximum bandwidth of 136 GB/s. Snapdragon Elite X has PCIe 4.0 and supports UFS 4.0 for storage. All of this is packed on a die manufactured by TSMC on a 4 nm node. In addition to advertising excellent performance compared to x86 solutions, Qualcomm also advertises the SoC as power efficient: the slide notes that it uses 1/3 of the power at the same peak PC performance as x86 offerings. It is also interesting to note that the package will support WiFi7 and Bluetooth 5.4. Officially coming in 2024, the Snapdragon Elite X will have to compete with Intel's Meteor Lake and/or Arrow Lake, in addition to AMD's Strix Point.
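The 136 GB/s figure is consistent with the listed memory configuration, as the short check below shows; decimal gigabytes are assumed, which is the usual convention on such slides.

```python
# Checking the leaked 136 GB/s memory bandwidth figure against the
# reported 8-channel, 16-bit-per-channel LPDDR5X-8533 configuration.

channels = 8
bits_per_channel = 16
transfer_rate_mt_s = 8533

bandwidth_gb_s = channels * bits_per_channel * transfer_rate_mt_s / 8 / 1000
print(f"Theoretical peak bandwidth: {bandwidth_gb_s:.1f} GB/s")   # ~136.5 GB/s
```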

NVIDIA AI Now Available in Oracle Cloud Marketplace

Training generative AI models just got easier. The NVIDIA DGX Cloud AI supercomputing platform and NVIDIA AI Enterprise software are now available in Oracle Cloud Marketplace, making it possible for Oracle Cloud Infrastructure (OCI) customers to access high-performance accelerated computing and software to run secure, stable and supported production AI in just a few clicks. The addition - an industry first - brings new capabilities for end-to-end development and deployment on Oracle Cloud. Enterprises can get started from the Oracle Cloud Marketplace to train models on DGX Cloud, and then deploy their applications on OCI with NVIDIA AI Enterprise.

Oracle Cloud and NVIDIA Lift Industries Into Era of AI
Thousands of enterprises around the world rely on OCI to power the applications that drive their businesses. Its customers include leaders across industries such as healthcare, scientific research, financial services, telecommunications and more. Oracle Cloud Marketplace is a catalog of solutions that offers customers flexible consumption models and simple billing. Its addition of DGX Cloud and NVIDIA AI Enterprise lets OCI customers use their existing cloud credits to integrate NVIDIA's leading AI supercomputing platform and software into their development and deployment pipelines. With DGX Cloud, OCI customers can train models for generative AI applications like intelligent chatbots, search, summarization and content generation.

SK hynix Displays Next-Gen Solutions Set to Unlock AI and More at OCP Global Summit 2023

SK hynix showcased its next-generation memory semiconductor technologies and solutions at the OCP Global Summit 2023 held in San Jose, California from October 17-19. The OCP Global Summit is an annual event hosted by the world's largest data center technology community, the Open Compute Project (OCP), where industry experts gather to share various technologies and visions. This year, SK hynix and its subsidiary Solidigm showcased advanced semiconductor memory products that will lead the AI era under the slogan "United Through Technology".

SK hynix presented a broad range of its solutions at the summit, including its leading HBM (HBM3/3E), CXL, and AiM products for generative AI. The company also unveiled some of the latest additions to its product portfolio, including its DDR5 RDIMM, MCR DIMM, enterprise SSD (eSSD), and LPDDR CAMM devices. Visitors to the HBM exhibit could see HBM3, which is utilized in NVIDIA's H100, a high-performance GPU for AI, and also check out the next-generation HBM3E. Due to their low power consumption and ultra-high performance, these HBM solutions are more eco-friendly and are particularly suitable for power-hungry AI server systems.

Intel Launches Industry's First AI PC Acceleration Program

Building on the AI PC use cases shared at Innovation 2023, Intel today launched the AI PC Acceleration Program, a global innovation initiative designed to accelerate the pace of AI development across the PC industry.

The program aims to connect independent hardware vendors (IHVs) and independent software vendors (ISVs) with Intel resources that include AI toolchains, co-engineering, hardware, design resources, technical expertise and co-marketing opportunities. These resources will help the ecosystem take full advantage of Intel Core Ultra processor technologies and corresponding hardware to maximize AI and machine learning (ML) application performance, accelerate new use cases and connect the wider PC industry to the solutions emerging in the AI PC ecosystem. More information is available on the AI PC Acceleration Program website.

Lenovo Opens New Global Innovation Centre in Budapest

Lenovo today announced that its Europe-based Innovation Centre, specializing in HPC and AI, will now operate with an enhanced customer experience from the company's in-house manufacturing facility in Budapest.

Running the new Innovation Center operations from the Budapest factory, with onsite inventory stock, allows Lenovo customers to access the most advanced power and cooling infrastructure solutions and the latest-generation technology within the supply chain. This access ensures that workloads are tested on accurate representations of the final purchased solutions, enabling customers to know with certainty that the installed Lenovo solution will perform successfully for the intended workload.

AMD, Arm, Intel, Meta, Microsoft, NVIDIA, and Qualcomm Standardize Next-Generation Narrow Precision Data Formats for AI

Realizing the full potential of next-generation deep learning requires highly efficient AI infrastructure. For a computing platform to be scalable and cost efficient, optimizing every layer of the AI stack, from algorithms to hardware, is essential. Advances in narrow-precision AI data formats and associated optimized algorithms have been pivotal to this journey, allowing the industry to transition from traditional 32-bit floating point precision to presently only 8 bits of precision (i.e. OCP FP8).

Narrower formats allow silicon to execute more efficient AI calculations per clock cycle, which accelerates model training and inference times. AI models take up less space, which means they require fewer data fetches from memory, and can run with better performance and efficiency. Additionally, fewer bit transfers reduce data movement over the interconnect, which can enhance application performance or cut network costs.
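As a concrete illustration of the footprint argument, the minimal sketch below compares weight storage at different precisions for a hypothetical 7-billion-parameter model; the model size is an assumption for illustration, and the OCP FP8 encoding details (E4M3/E5M2) are deliberately omitted.

```python
# Why narrower formats cut memory footprint and data movement:
# the same parameter count stored at lower precision occupies fewer bytes,
# so every fetch of those weights moves proportionally fewer bits.
# The 7B-parameter model is a hypothetical example, not a figure from the announcement.

params = 7_000_000_000
bytes_per_element = {"FP32": 4, "FP16/BF16": 2, "FP8": 1}

for fmt, nbytes in bytes_per_element.items():
    size_gb = params * nbytes / 1e9
    print(f"{fmt:>9}: {size_gb:5.1f} GB of weights")
# FP32: 28.0 GB, FP16/BF16: 14.0 GB, FP8: 7.0 GB
```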

Gigabyte Announces AI Strategy for Consumer Products to Map the Future of AI

GIGABYTE, a leader in cloud computing and AI server markets, announced a new strategic framework for AI outlining a blueprint for the company's direction in the AI-driven future of the consumer PC market. The framework features three fundamental pillars: offering a comprehensive AI operating platform, implementing AI-based product design, and engaging in the AI ecosystem with the goal of introducing consumers to a new AI-driven experience.

Providing a comprehensive AI operating platform to meet all-end computing applications
GIGABYTE's AI operating platform caters to all-end computing applications, spanning from the cloud to the edge. In the cloud, GIGABYTE's AI servers deliver robust computing power for demanding AI workloads, encompassing generative AI services and machine learning applications like ChatGPT. At the edge, GIGABYTE's consumer products, such as high-performance graphics cards and gaming laptops, furnish users with instant and reliable AI computing power for a diverse array of applications, ranging from real-time video processing to AI-driven gaming. In scenarios involving AI collaboration systems like Microsoft Copilot, GIGABYTE offers a power-saving, secure, and user-friendly AI operating platform explicitly engineered for the next-generation AI processors like NPUs.

NVIDIA Partners With Foxconn to Build Factories and Systems for the AI Industrial Revolution

NVIDIA today announced that it is collaborating with Hon Hai Technology Group (Foxconn) to accelerate the AI industrial revolution. Foxconn will integrate NVIDIA technology to develop a new class of data centers powering a wide range of applications—including digitalization of manufacturing and inspection workflows, development of AI-powered electric vehicle and robotics platforms, and a growing number of language-based generative AI services.

Announced in a fireside chat with NVIDIA founder and CEO Jensen Huang and Foxconn Chairman and CEO Young Liu at Hon Hai Tech Day, in Taipei, the collaboration starts with the creation of AI factories—an NVIDIA GPU computing infrastructure specially built for processing, refining and transforming vast amounts of data into valuable AI models and tokens—based on the NVIDIA accelerated computing platform, including the latest NVIDIA GH200 Grace Hopper Superchip and NVIDIA AI Enterprise software.