News Posts matching #Meta


Ultra Accelerator Link Consortium Plans Year-End Launch of UALink v1.0

The Ultra Accelerator Link (UALink) Consortium, led by board members from AMD, Amazon Web Services (AWS), Astera Labs, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft, has announced the incorporation of the Consortium and is extending an invitation for membership to the community. The UALink Promoter Group was founded in May 2024 to define a high-speed, low-latency interconnect for scale-up communications between accelerators and switches in AI pods and clusters. "The UALink standard defines high-speed and low-latency communication for scale-up AI systems in data centers."

Meta Shows Open-Architecture NVIDIA "Blackwell" GB200 System for Data Center

During the Open Compute Project (OCP) Summit 2024, Meta, one of the prime members of the OCP, showed its NVIDIA "Blackwell" GB200 systems for its massive data centers. We previously covered Microsoft's Azure server rack with GB200 GPUs featuring one-third of the rack space for computing and two-thirds for cooling. A few days later, Google showed off its smaller GB200 system, and today, Meta is showing off its GB200 system—the smallest of the bunch. To train a dense transformer large language model with 405B parameters and a context window of up to 128k tokens, like the Llama 3.1 405B, Meta must redesign its data center infrastructure to run a distributed training job on two 24,000-GPU clusters. That is 48,000 GPUs used for training a single AI model.

Called "Catalina," it is built on the NVIDIA Blackwell platform, emphasizing modularity and adaptability while incorporating the latest NVIDIA GB200 Grace Blackwell Superchip. To address the escalating power requirements of GPUs, Catalina introduces the Orv3, a high-power rack capable of delivering up to 140kW. The comprehensive liquid-cooled setup encompasses a power shelf supporting various components, including a compute tray, switch tray, the Orv3 HPR, Wedge 400 fabric switch with 12.8 Tbps switching capacity, management switch, battery backup, and a rack management controller. Interestingly, Meta also upgraded its "Grand Teton" system for internal usage, such as deep learning recommendation models (DLRMs) and content understanding with AMD Instinct MI300X. Those are used to inference internal models, and MI300X appears to provide the best performance per Dollar for inference. According to Meta, the computational demand stemming from AI will continue to increase exponentially, so more NVIDIA and AMD GPUs is needed, and we can't wait to see what the company builds.

Marvell Collaborates with Meta for Custom Ethernet Network Interface Controller Solution

Marvell Technology, Inc. (NASDAQ: MRVL), a leader in data infrastructure semiconductor solutions, today announced the development of FBNIC, a custom 5 nm network interface controller (NIC) ASIC in collaboration with Meta to meet the company's infrastructure and use case requirements. The FBNIC board design will also be contributed by Marvell to the Open Compute Project (OCP) community. FBNIC combines a customized network controller designed by Marvell and Meta, a co-designed board, and Meta's ASIC, firmware and software. This custom design delivers innovative capabilities, optimizes performance, increases efficiencies, and reduces the average time needed to resolve potential network and server issues.

"The future of large-scale, data center computing will increasingly revolve around optimizing semiconductors and other components for specific applications and cloud infrastructure architectures," said Raghib Hussain, President of Products and Technologies at Marvell. "It's been exciting to partner with Meta on developing their custom FBNIC on our industry-leading 5 nm accelerated infrastructure silicon platform. We look forward to the OCP community leveraging the board design for future innovations."

Microsoft Discontinues HoloLens 2, Shifts Mixed-Reality Strategy

Microsoft has officially ended production of its HoloLens 2 mixed-reality headset, according to a report from The Register. The tech giant recently notified its partners that the HoloLens 2, introduced in 2019 as an enterprise-focused augmented reality device, is no longer available for purchase. This marks a significant shift in Microsoft's AR strategy, with the company stating, "Support for HoloLens 2, including security updates, will end on December 31, 2027." Despite aggressive marketing efforts, the HoloLens 2 struggled to gain widespread adoption, reflecting broader challenges in the AR/VR market, where high-end headsets like the HoloLens 2 and Apple Vision Pro retail for around $3,500, limiting their appeal. Some Microsoft employees reportedly expressed surprise that the project continued as long as it did, suggesting internal doubts about its viability.

Rather than continuing as a hardware provider, Microsoft plans to pivot its role in the mixed reality space, focusing on "first-party software solutions and services, partnering with the broader mobile phone and mixed reality hardware ecosystem." This decision aligns with the current state of the AR/VR industry, where the ecosystem is still in its early stages and companies like Meta are investing heavily in its development. Microsoft's shift from hardware production to ecosystem investment mirrors trends in the broader tech industry and could position the company for future opportunities as the mixed-reality market matures. As the ecosystem develops and more use cases emerge, Microsoft's investment in software and services could prove valuable, despite the current difficulty of justifying investments in a field that is still searching for compelling, widespread applications.

Logitech Releases MX Ink Mixed Reality Stylus for Meta Quest

Logitech announced the availability of MX Ink, the first Mixed Reality (MR) stylus specifically designed for Meta Quest. A precision tool with a familiar pen-like feel, MX Ink allows users to navigate, annotate and create freely across 2D spaces like papers, desks, or whiteboards, as well as immersive 3D environments. The pressure-sensitive tip of MX Ink enables natural writing and gaming motions, merging the tactile sensation of a physical tool with the limitless possibilities of the virtual creative space.

MX Ink is currently supported by a wide range of applications across the creativity and productivity landscape, as well as in industries such as medicine, architecture, and education, with new applications being added regularly.

Meta Announces the Quest 3S, its Most Affordable Mixed Reality Headset to Date Starting at US$300

Today at Connect, we unveiled Meta Quest 3S, a headset with the same mixed reality capabilities and fast performance as Meta Quest 3, but at a lower price point. Starting at just $299.99 USD, Quest 3S is the best headset for those new to mixed reality and immersive experiences, or who might have been waiting for a low-cost upgrade from Quest and Quest 2.

From watching your favorite TV shows on a cinema-sized screen to a personal trainer you can take anywhere you go, plus multitasking capabilities, gaming and more, there's no better mixed reality device on the market at this price.

VR/MR Device Shipments to Reach 37 Million Units by 2030, with OLEDoS and LCD Dominating High-End and Mainstream Markets

TrendForce's latest report reveals that shipments of near-eye displays are expected to increase year-by-year over the next few years following inventory clearance. It is anticipated that OLEDoS will dominate the high-end VR/MR market, with its technological share rising to 23% by 2030, while LCD will continue to occupy the mainstream market, holding a 63% share in near-eye displays.

TrendForce defines VR/MR devices as near-eye displays that achieve an immersive experience through a single display. Devices emphasizing transparency and the integration of virtual and real-world applications are classified as AR devices.

Global AI Server Demand Surge Expected to Drive 2024 Market Value to US$187 Billion; Represents 65% of Server Market

TrendForce's latest industry report on AI servers reveals that high demand for advanced AI servers from major CSPs and brand clients is expected to continue in 2024. Meanwhile, TSMC, SK hynix, Samsung, and Micron's gradual production expansion has significantly eased shortages in 2Q24. Consequently, the lead time for NVIDIA's flagship H100 solution has decreased from the previous 40-50 weeks to less than 16 weeks.

TrendForce estimates that AI server shipments in the second quarter will increase by nearly 20% QoQ, and has revised the annual shipment forecast up to 1.67 million units—marking a 41.5% YoY growth.
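As a quick sanity check on those figures, here is a minimal sketch in plain Python that backs out the 2023 shipment baseline implied by the 1.67 million-unit forecast and the 41.5% YoY growth rate; the derived number is our own arithmetic, not a TrendForce figure.

```python
# Back out the 2023 AI server shipment baseline implied by the figures above.
forecast_2024_units = 1.67e6   # revised 2024 AI server shipment forecast
yoy_growth = 0.415             # 41.5% year-over-year growth

# 2024 forecast = 2023 baseline * (1 + growth), so divide to recover the baseline.
implied_2023_units = forecast_2024_units / (1 + yoy_growth)
print(f"Implied 2023 AI server shipments: ~{implied_2023_units / 1e6:.2f} million units")
# -> roughly 1.18 million units
```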

AI Startup Etched Unveils Transformer ASIC Claiming 20x Speed-up Over NVIDIA H100

A new startup emerged out of stealth mode today to power the next generation of generative AI. Etched is a company that makes an application-specific integrated circuit (ASIC) to process "Transformers." The transformer is an architecture for designing deep learning models developed by Google and is now the powerhouse behind models like OpenAI's GPT-4o in ChatGPT, Anthropic Claude, Google Gemini, and Meta's Llama family. Etched wanted to create an ASIC dedicated to processing only transformer models, making a chip called Sohu. The claim is that Sohu outperforms NVIDIA's latest and greatest by an entire order of magnitude. Where a server with eight NVIDIA H100 GPUs pushes Llama-3 70B at 25,000 tokens per second, and the latest eight-GPU B200 "Blackwell" configuration pushes 43,000 tokens per second, an eight-chip Sohu server manages to output 500,000 tokens per second.

Why is this important? Not only does the ASIC outperform Hopper by 20x and Blackwell by 10x, but it also serves so many tokens per second that it enables an entirely new fleet of AI applications requiring real-time output. The Sohu architecture is so efficient that 90% of its FLOPS can be used, while traditional GPUs achieve only a 30-40% FLOPS utilization rate, which translates into inefficiency and wasted power. Etched hopes to solve this by building an accelerator dedicated to powering transformers (the "T" in GPT) at massive scale. Given that frontier model development costs more than one billion US dollars, and hardware costs are measured in tens of billions of US dollars, having an accelerator dedicated to powering a specific application can help advance AI faster. AI researchers often say that "scale is all you need" (echoing the legendary "attention is all you need" paper), and Etched wants to build on that.
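To put those numbers in context, here is a minimal back-of-the-envelope sketch in plain Python that recomputes the claimed speedup ratios from the quoted tokens-per-second figures and compares the cited utilization rates; the inputs are Etched's own claims, not independently verified benchmarks.

```python
# Recompute the claimed speedups from the quoted throughput figures (all Etched's claims).
h100_8x_tps = 25_000    # eight NVIDIA H100 GPUs, Llama-3 70B
b200_8x_tps = 43_000    # eight NVIDIA B200 "Blackwell" GPUs
sohu_8x_tps = 500_000   # eight Etched Sohu chips

print(f"Sohu vs. H100: {sohu_8x_tps / h100_8x_tps:.1f}x")   # 20.0x ("20x over Hopper")
print(f"Sohu vs. B200: {sohu_8x_tps / b200_8x_tps:.1f}x")   # ~11.6x (rounded to ~10x above)

# How much of the gap the cited FLOPS utilization figures alone would explain.
gpu_utilization = 0.35    # midpoint of the quoted 30-40% range for GPUs
sohu_utilization = 0.90   # Etched's claim for a transformer-only ASIC
print(f"Utilization advantage alone: {sohu_utilization / gpu_utilization:.1f}x")  # ~2.6x
```

The remainder of the claimed gap would have to come from the fixed-function transformer pipeline itself rather than utilization alone.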

CSPs to Expand into Edge AI, Driving Average NB DRAM Capacity Growth by at Least 7% in 2025

TrendForce has observed that in 2024, major CSPs such as Microsoft, Google, Meta, and AWS will continue to be the primary buyers of high-end AI servers, which are crucial for LLMs and AI modeling. After establishing significant AI training server infrastructure in 2024, these CSPs are expected to actively expand into edge AI in 2025. This expansion will include the development of smaller LLMs and the deployment of edge AI servers to facilitate AI applications across various sectors, such as manufacturing, finance, healthcare, and business.

Moreover, AI PCs or notebooks share a similar architecture to AI servers, offering substantial computational power and the ability to run smaller LLM and generative AI applications. These devices are anticipated to serve as the final bridge between cloud AI infrastructure and edge AI for small-scale training or inference applications.

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed and low-latency communication for scale-up AI system links in data centers.

Called the Ultra Accelerator Link (UALink), this initial group will define and establish an open industry standard that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.

RISC-V Adoption to Grow 50% Yearly Due to AI Processor Demand

The open-source RISC-V instruction set architecture is shaping up for explosive growth over the next several years, primarily fueled by the increasing demand for artificial intelligence (AI) across industries. A new forecast from tech research firm Omdia predicts that shipments of RISC-V-based chips will grow at an astonishing 50% annual rate between 2024 and 2030, reaching a staggering 17 billion units in 2030. The automotive sector is expected to see the most significant growth in RISC-V adoption, with a forecasted annual increase of 66%. This growth is largely attributed to the unique benefits RISC-V offers in this industry, including its flexibility and customizability.
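To illustrate how that forecast compounds year over year, here is a minimal sketch in plain Python; the implied 2024 baseline is our own arithmetic derived from the quoted 2030 target and growth rate, not a figure published by Omdia.

```python
# Compound-growth illustration of the Omdia forecast quoted above.
target_2030_units = 17e9   # forecast RISC-V chip shipments in 2030
annual_growth = 0.50       # 50% year-over-year growth
years = 2030 - 2024        # six compounding steps

# Implied 2024 baseline consistent with the 2030 target at constant 50% growth.
base_2024 = target_2030_units / (1 + annual_growth) ** years
print(f"Implied 2024 shipments: ~{base_2024 / 1e9:.2f} billion units")  # ~1.49 billion

# Year-by-year trajectory at a constant 50% growth rate.
units = base_2024
for year in range(2024, 2031):
    print(f"{year}: {units / 1e9:5.2f} billion units")
    units *= 1 + annual_growth
```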

The rise of AI in the automotive sector, particularly in applications such as autonomous driving and advanced driver assistance systems (ADAS), is also expected to contribute to RISC-V's success. Industrial applications will continue to be the largest domain for RISC-V, accounting for approximately 46% of sales. However, growth in the automotive sector is expected to outpace other industries, driven by the increasing demand for AI-enabled technologies. The forecast from Omdia is based on current trends and the growing adoption of RISC-V by major players in the tech industry, including Google and Meta, which are investing in RISC-V to power their custom solutions. Additionally, chip producers like Qualcomm are creating their own RISC-V chips for consumer use, further solidifying the technology's future position in the market.

Microsoft Prepares MAI-1 In-House AI Model with 500B Parameters

According to The Information, Microsoft is developing a new AI model, internally named MAI-1, designed to compete with the leading models from Google, Anthropic, and OpenAI. This significant step forward in the tech giant's AI capabilities is spearheaded by Mustafa Suleyman, the former Google AI leader who previously served as CEO of Inflection AI before Microsoft acquired the majority of its staff and intellectual property for $650 million in March. MAI-1 is a custom Microsoft creation that utilizes training data and technology from Inflection but is not a transferred model. It is also distinct from Inflection's previously released Pi models, as confirmed by two Microsoft insiders familiar with the project. With approximately 500 billion parameters, MAI-1 will be significantly larger than its predecessors, surpassing the capabilities of Microsoft's smaller, open-source models.

For comparison, OpenAI's GPT-4 boasts 1.8 trillion parameters in a sparse Mixture of Experts design, while dense open-source models from Meta and Mistral feature around 70 billion parameters. Microsoft's investment in MAI-1 highlights its commitment to staying competitive in the rapidly evolving AI landscape. The development of this large-scale model represents a significant step forward for the tech giant as it seeks to challenge industry leaders in the field. The increased computing power, training data, and financial resources required for MAI-1 demonstrate Microsoft's dedication to pushing the boundaries of AI capabilities and its intention to compete on its own. With the involvement of Mustafa Suleyman, a renowned expert in AI, the company is well-positioned to make significant strides in this field.

Razer Introduces New Meta Quest 3 Accessories

As the Product Evangelist for Razer's VR accessories, it's my absolute pleasure to introduce an exciting leap forward in virtual reality gaming: the launch of our new Razer Facial Interface and Razer Adjustable Head Strap System for Meta Quest 3. In developing these products, we aimed to merge Razer's cutting-edge technology with the needs of the modern VR gamer, creating a truly immersive and comfortable experience. These join our current line of products for Meta Quest, including the Razer Hammerhead HyperSpeed for Meta Quest 3.

Crafted for Comfort, Designed for Gamers
Our journey began with a vision to redefine what gamers can expect from their VR equipment. Partnering with ResMed, human-factors experts with over three decades of experience, we drew upon that expertise to ensure our accessories not only push the envelope in terms of design but also set a new standard for comfort. Our previous generation of VR accessories for the Meta Quest platform was recognized with the Australian Good Design Award, a testament to our commitment to innovation.

Meta Opens OS Powering Meta Quest Devices to Third-Party Hardware Makers, ASUS ROG Gaming Headset Incoming

Today we're taking the next step toward our vision for a more open computing platform for the metaverse. We're opening up the operating system powering our Meta Quest devices to third-party hardware makers, giving more choice to consumers and a larger ecosystem for developers to build for. We're working with leading global technology companies to bring this new ecosystem to life and making it even easier for developers to build apps and reach their audiences on the platform.

Introducing Meta Horizon OS
This new hardware ecosystem will run on Meta Horizon OS, the mixed reality operating system that powers our Meta Quest headsets. We chose this name to reflect our vision of a computing platform built around people and connection—and the shared social fabric that makes this possible. Meta Horizon OS combines the core technologies powering today's mixed reality experiences with a suite of features that put social presence at the center of the platform.

Meta Announces New MTIA AI Accelerator with Improved Performance to Ease NVIDIA's Grip

Meta has announced the next generation of its Meta Training and Inference Accelerator (MTIA) chip, which is designed to train and infer AI models at scale. The newest MTIA chip is a second-generation design of Meta's custom silicon for AI, and it is being built on TSMC's 5 nm technology. Running at 1.35 GHz, the new chip gets a boost to 90 W of TDP per package, compared to just 25 W for the first-generation design. Basic Linear Algebra Subprograms (BLAS) processing is where the chip shines, including matrix multiplication and vector/SIMD processing. In GEMM matrix processing, each chip can deliver 708 TeraFLOPS at INT8 (presumably meant as FP8 in the spec) with sparsity, 354 TeraFLOPS without, 354 TeraFLOPS at FP16/BF16 with sparsity, and 177 TeraFLOPS without.

Classical vector processing is a bit slower, at 11.06 TeraFLOPS at INT8 (FP8), 5.53 TeraFLOPS at FP16/BF16, and 2.76 TeraFLOPS at single-precision FP32. The MTIA chip is specifically designed to run AI training and inference on Meta's PyTorch AI framework, with an open-source Triton backend that produces compiler code for optimal performance. Meta uses this for all its Llama models, and with Llama 3 just around the corner, it could be trained on these chips. To package it into a system, Meta puts two of these chips onto a board and pairs them with 128 GB of LPDDR5 memory. The board is connected via PCIe Gen 5 to a system where 12 boards are stacked densely. Six such systems fill a single rack, for 72 boards and 144 chips and a total of 101.95 PetaFLOPS, assuming linear scaling at INT8 (FP8) precision. Of course, linear scaling is not quite possible in scale-out systems, which could bring it down to under 100 PetaFLOPS per rack.
Below, you can see images of the chip floorplan, specifications compared to the prior version, as well as the system.
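As a cross-check of that rack-level figure, here is a minimal sketch in plain Python that reproduces the arithmetic from the per-chip specification, under the same linear-scaling assumption noted above.

```python
# Reproduce the quoted rack-level MTIA throughput from the per-chip specs.
chip_tflops_int8_sparse = 708      # TeraFLOPS per chip at INT8 (FP8) with sparsity
chips_per_board = 2
boards_per_system = 12
systems_per_rack = 6

boards_per_rack = boards_per_system * systems_per_rack   # 72 boards
chips_per_rack = boards_per_rack * chips_per_board       # 144 chips

# Ideal (linear-scaling) rack throughput; real scale-out efficiency is lower.
rack_pflops = chips_per_rack * chip_tflops_int8_sparse / 1000
print(f"{chips_per_rack} chips x {chip_tflops_int8_sparse} TFLOPS each "
      f"= {rack_pflops:.2f} PetaFLOPS per rack (ideal)")
# -> 101.95 PetaFLOPS, matching the figure quoted above
```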

Homeworld Franchise Comes to Virtual Reality for the First Time With 'Homeworld: Vast Reaches', a New Game Arriving in 2024

FarBridge, Inc., a leading game development studio, in partnership with Gearbox Entertainment, is excited to announce Homeworld: Vast Reaches, a bold new story in the beloved Homeworld saga that reimagines strategic space battles for Virtual Reality and Mixed Reality. This new game in the Homeworld universe is launching on the Meta Quest 2 and Meta Quest 3 headsets later this year. Players can now wishlist the game at HomeworldVastReaches.com.

In the award-winning Homeworld games for PC, you play as Fleet Command, a human commander who controls a fleet of spaceships. Players will take on the same role in Homeworld: Vast Reaches in vicious combat against a mysterious new foe.

Jensen Huang Will Discuss AI's Future at NVIDIA GTC 2024

NVIDIA's GTC 2024 AI conference will set the stage for another leap forward in AI. At the heart of this highly anticipated event: the opening keynote by Jensen Huang, NVIDIA's visionary founder and CEO, who speaks on Monday, March 18, at 1 p.m. Pacific, at the SAP Center in San Jose, California.

Planning Your GTC Experience
There are two ways to watch. Register to attend GTC in person to secure a spot for an immersive experience at the SAP Center. The center is a short walk from the San Jose Convention Center, where the rest of the conference takes place. Doors open at 11 a.m., and badge pickup starts at 10:30 a.m. The keynote will also be livestreamed at www.nvidia.com/gtc/keynote/.

Meta to Delete Oculus Accounts This Month, Forcing Holdouts to Switch to Meta Accounts

When Meta acquired Oculus, it created a problem: Meta wanted to integrate the software ecosystem of the VR headsets with the common Meta accounts system that spans Facebook, Instagram, and Threads, whereas some users from the pre-acquisition days held out on their older Oculus accounts, continuing to use them for cloud saves, purchase information, acquired software, DLCs, and more. Meta set up a system to port these Oculus accounts over to a Meta account, which would let you retain all your digital assets, but apparently not everyone switched. Such holdouts need to switch by March 29, 2024, because Meta has decided to drop the hammer on the Oculus accounts system. All Oculus accounts will simply be deleted, and any purchases, DLCs, or other digital assets will be lost. If you're one of the holdouts, check your e-mail for a message from Meta with a unique link to port your account.

IBM Announces Availability of Open-Source Mistral AI Model on watsonx

IBM announced the availability of the popular open-source Mixtral-8x7B large language model (LLM), developed by Mistral AI, on its watsonx AI and data platform, as it continues to expand capabilities to help clients innovate with IBM's own foundation models and those from a range of open-source providers. IBM offers an optimized version of Mixtral-8x7B that, in internal testing, was able to increase throughput—or the amount of data that can be processed in a given time period—by 50 percent when compared to the regular model. This could potentially cut latency by 35-75 percent, depending on batch size—speeding time to insights. This is achieved through a process called quantization, which reduces model size and memory requirements for LLMs and, in turn, can speed up processing to help lower costs and energy consumption.
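As a rough illustration of why quantization helps, the sketch below estimates the weight-memory footprint of a Mixtral-class model at several precisions in plain Python; the roughly 46.7 billion total parameter count for Mixtral-8x7B is our assumption based on the publicly described model, not a figure from IBM's announcement.

```python
# Rough weight-memory estimate for an LLM at different numeric precisions.
# The ~46.7B total parameter count for Mixtral-8x7B is an assumption based on
# the publicly described model, not a figure from IBM's announcement.
params = 46.7e9

bytes_per_param = {
    "FP32": 4.0,   # full precision
    "FP16": 2.0,   # half precision, a common serving default
    "INT8": 1.0,   # 8-bit quantization
    "INT4": 0.5,   # 4-bit quantization
}

for fmt, nbytes in bytes_per_param.items():
    gib = params * nbytes / 2**30
    print(f"{fmt:>5}: ~{gib:,.0f} GiB of weights")

# Smaller weights mean fewer memory fetches per token, which is the mechanism
# behind the throughput and latency gains described above.
```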

The addition of Mixtral-8x7B expands IBM's open, multi-model strategy to meet clients where they are and give them choice and flexibility to scale enterprise AI solutions across their businesses. Through decades-long AI research and development, open collaboration with Meta and Hugging Face, and partnerships with model leaders, IBM is expanding its watsonx.ai model catalog and bringing in new capabilities, languages, and modalities. IBM's enterprise-ready foundation model choices and its watsonx AI and data platform can empower clients to use generative AI to gain new insights and efficiencies, and create new business models based on principles of trust. IBM enables clients to select the right model for the right use cases and price-performance goals for targeted business domains like finance.

Global Server Shipments Expected to Increase by 2.05% in 2024, with AI Servers Accounting For Around 12.1%

TrendForce underscores that the primary momentum for server shipments this year remains with American CSPs. However, due to persistently high inflation and elevated corporate financing costs curtailing capital expenditures, overall demand has not yet returned to pre-pandemic growth levels. Global server shipments are estimated to reach approximately 13.654 million units in 2024, an increase of about 2.05% YoY. Meanwhile, the market continues to focus on the deployment of AI servers, with their shipment share estimated at around 12.1%.

Foxconn is expected to see the highest growth rate, with an estimated annual increase of about 5-7%. This growth includes significant orders such as Dell's 16G platform, AWS Graviton 3 and 4, Google Genoa, and Microsoft Gen9. In terms of AI server orders, Foxconn has made notable inroads with Oracle and has also secured some AWS ASIC orders.

LG and Meta Forge Collaboration to Accelerate XR Business

LG Electronics (LG) is ramping up its strategic collaboration with the global tech powerhouse, Meta Platforms, Inc. (Meta), aiming to expedite its extended reality (XR) ventures. The aim is to combine the strengths of both companies across products, content, services and platforms to drive innovation in customer experiences within the burgeoning virtual space.

Forging an XR Collaboration With Meta
On February 28, LG's top management, including CEO William Cho and Park Hyoung-sei, president of the Home Entertainment Company, met with Meta Founder and CEO Mark Zuckerberg at LG Twin Towers in Yeouido, Seoul. The meeting coincided with Zuckerberg's tour of Asia. The two-hour session saw discussions on business strategies and considerations for next-gen XR device development. CEO Cho, while experiencing the Meta Quest 3 headset and Ray-Ban Meta smart glasses, expressed a keen interest in Meta's advanced technology demonstrations, notably Meta's large language models and their potential for on-device AI integration.

Meta Anticipating Apple Vision Pro Launch - AR/VR Could Become Mainstream

Apple's Vision Pro mixed reality headset is due to launch on February 2—many rival companies in the AR/VR market space will be taking notes once the slickly designed device (with a $3,499 starting price) reaches customers. The Wall Street Journal claims that the executive team at Meta is hopeful that Apple's headset carves out a larger space within a niche segment. Apple's "more experimental" products sometimes have surprising reach, although it may take a second (i.e., cheaper) iteration of the Vision Pro to reach a mainstream audience. Meta is reported to have invested around $50 billion into its Quest hardware and software development push—industry experts reckon that this product line generates only ~1% of the social media giant's total revenue.

Insider sources suggest that CEO Mark Zuckerberg and his leadership team are keen to see their big money "gamble" finally pay off—Apple's next release could boost global interest in mixed reality headsets. The Wall Street Journal states that Meta staffers "see the Quest and its software ecosystem emerging as a primary alternative to Apple in the space, filling the role played by Google's Android in smartphones." They hope that the Quest's relatively reasonable cost-of-entry will look a lot more attractive when compared to the premium Vision Pro. The report also shines a light on Meta's alleged push to focus more on mixed reality applications, since taking "inspiration" from Apple's WWDC23 presentation: "In addition, some developers are simplifying their apps and favor Apple's design that allows wearers to use their eyes and fingers to control or manipulate what they see. Meta's Quest primarily relies on the use of controllers for games or applications, although it can work with finger gestures."

Microsoft Pulls the Plug on Windows Mixed Reality, Reportedly Downsizing VR Division

Microsoft is discontinuing Windows Mixed Reality. This was discovered when the company added it to a list of deprecated Windows features. The Windows Mixed Reality platform, along with its accompanying Mixed Reality Portal app and Mixed Reality for SteamVR, is on the list. For now it is merely deprecated, and Microsoft says that it will be removed in a future release of Windows. Mixed Reality was released in 2017, in the thick of the VR craze in the tech industry, at a time when Facebook, having acquired Oculus, was betting big on the Metaverse, an endeavor that has since cost the company over $20 billion. Mixed Reality served as a gateway to games and apps in the VR space. The company developed its own HoloLens Mixed Reality headset rivaling the Oculus Rift, and got its OEM partners, such as Acer, Dell, Lenovo, ASUS, and HP, to invest in headsets of their own. Despite all this, it doesn't look like Microsoft is winding down its enterprise-focused HoloLens 2 headset just yet.

AMD, Arm, Intel, Meta, Microsoft, NVIDIA, and Qualcomm Standardize Next-Generation Narrow Precision Data Formats for AI

Realizing the full potential of next-generation deep learning requires highly efficient AI infrastructure. For a computing platform to be scalable and cost-efficient, optimizing every layer of the AI stack, from algorithms to hardware, is essential. Advances in narrow-precision AI data formats and associated optimized algorithms have been pivotal to this journey, allowing the industry to transition from traditional 32-bit floating point precision to just 8 bits of precision today (i.e., OCP FP8).

Narrower formats allow silicon to execute more efficient AI calculations per clock cycle, which accelerates model training and inference times. AI models take up less space, which means they require fewer data fetches from memory and can run with better performance and efficiency. Additionally, fewer bit transfers reduce data movement over the interconnect, which can enhance application performance or cut network costs.
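To make the savings concrete, here is a minimal sketch in plain Python comparing per-tensor memory and interconnect traffic at FP32, FP16/BF16, and 8-bit precision; the one-billion-element tensor size is an arbitrary example, not a figure from the announcement.

```python
# Illustrate how narrower data formats shrink memory footprint and interconnect traffic.
# The tensor size is an arbitrary example, not a figure from the announcement.
elements = 1_000_000_000   # one billion values (weights, activations, or gradients)

formats = {"FP32": 32, "FP16/BF16": 16, "FP8 (OCP)": 8}

for name, bits in formats.items():
    gigabytes = bits * elements / 8 / 1e9
    reduction = formats["FP32"] / bits
    print(f"{name:>10}: {gigabytes:4.1f} GB per transfer "
          f"({reduction:.0f}x reduction vs. FP32)")
```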