News Posts matching #AI

Acer Launches New Under 1kg TravelMate P6 14 AI Laptop

Acer today unveiled the new TravelMate P6 14 AI, leading the market with Copilot+ PCs under 1 kg. This new Windows 11 Pro laptop delivers exceptional performance, mobility, and AI capabilities for businesses and institutions. It is powered by Intel Core Ultra processors (Series 2) with a built-in NPU delivering up to 120 total platform TOPS of AI performance in a compact carbon-fiber chassis.

This latest TravelMate boasts a stunning 14-inch WQXGA+ (2880x1800) 16:10 IPS display or a WUXGA (1920x1200) panel, featuring high 400-nit brightness and 100% sRGB color gamut for outstanding picture quality. Thin bezels and a high 82% screen-to-body ratio maximize the screen area for a more immersive viewing experience.

Acer Debuts Its First Handheld Gaming PC - the Nitro Blaze 7

Acer today announced its entry into the handheld gaming space with the launch of the new Acer Nitro Blaze 7 (GN771). The device combines cutting-edge technology and a compact design to always bring next-level gaming and entertainment within reach. Acer's first-generation handheld AI gaming PC features an AMD Ryzen 7 8840HS processor, with Ryzen AI that optimizes performance and responsiveness across a wide range of games and applications.

The design allows users to easily slip the device into their bags or pockets for instant playing time on the go. It features a 7-inch Full HD (FHD) IPS display with a touch interface, plus AMD FreeSync Premium technology, and a blazing-fast 144 Hz refresh rate. This allows players to experience enhanced visuals and responsive controls while playing their favorite AAA titles. The system runs on Windows 11 and features the new Acer Game Space application which supports the addition of games from multiple platforms.

Qualcomm Announces Snapdragon X Plus 8-core Processors

Ahead of IFA 2024, Qualcomm Technologies, Inc. announced the expansion of its Snapdragon X Series portfolio with the introduction of Snapdragon X Plus 8-core, a breakthrough platform that unleashes multiday battery life, unprecedented performance and AI-powered Copilot+ experiences to even more people.

The 8-core Qualcomm Oryon CPU powering this Snapdragon X Plus platform enables lightning-fast responsiveness and efficiency, delivering up to 61% faster CPU performance than competing platforms, whose peak performance requires 179% more power. An integrated GPU and support for up to three external monitors ensure exceptional graphics and immersive visual experiences. At the heart of the Snapdragon X Plus 8-core is a powerful NPU delivering 45 TOPS of AI processing power with leading performance per watt, which, paired with the platform's significant advancements in connectivity, will push productivity to new heights in ultra-portable designs with incredible battery life. Whether creating presentations on the go or videoconferencing, the versatile functionality of this platform will enable transformative experiences.
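Taken at face value, the power claim implies a performance-per-watt advantage that can be worked out directly. A minimal sketch of that arithmetic (how Qualcomm's two marketing figures compose is not specified, so this reads the 179% claim as the power the competitor needs at matched performance):

```python
# Qualcomm's claim, read as: the competitor draws 179% more power
# to deliver the same peak performance as the Snapdragon X Plus 8-core.
snapdragon_power = 1.0             # normalized power at peak performance
competitor_power = 1.0 + 1.79      # "179% more power" for the same work

# At matched performance, the perf/W ratio is just the inverse power ratio.
perf_per_watt_advantage = competitor_power / snapdragon_power
print(f"Implied perf/W advantage at matched performance: {perf_per_watt_advantage:.2f}x")
```

Under that reading, the figures imply roughly a 2.8x efficiency edge; other readings of the claim would give different numbers.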

Innodisk Unveils Advanced CXL Memory Module to Power AI Servers

Innodisk, a leading global AI solution provider, continues to push the boundaries of innovation with the launch of its cutting-edge Compute Express Link (CXL) Memory Module, which is designed to meet the rapid growth demands of AI servers and cloud data centers. As one of the few module manufacturers offering this technology, Innodisk is at the forefront of AI and high-performance computing.

The demand for AI servers is rising quickly, with these systems expected to account for approximately 65% of the server market by 2024, according to TrendForce. This growth has created an urgent need for greater memory bandwidth and capacity, as AI servers now require at least 1.2 TB of memory to operate effectively. Traditional DDR memory solutions are increasingly struggling to meet these demands, especially as the number of CPU cores continues to multiply, leading to challenges such as underutilized CPU resources and increasing latency between different protocols.
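A back-of-the-envelope capacity check shows why the 1.2 TB figure strains direct-attached DDR alone. This sketch uses illustrative, assumed numbers (channel count and DIMM size are not Innodisk's figures):

```python
# Assumed per-socket DDR5 configuration (illustrative, not Innodisk's figures).
channels_per_socket = 12     # channels on a typical current server CPU
dimms_per_channel = 1        # one DIMM per channel for full speed
gib_per_dimm = 64            # a common RDIMM capacity

ddr_capacity_gib = channels_per_socket * dimms_per_channel * gib_per_dimm
target_gib = 1.2 * 1024      # the ~1.2 TB AI-server working set cited above

shortfall_gib = target_gib - ddr_capacity_gib
print(f"Direct-attached DDR5: {ddr_capacity_gib} GiB")
print(f"Shortfall a CXL memory module could cover: {shortfall_gib:.0f} GiB")
```

With these assumptions, direct-attached DIMMs cover well under half of the cited working set, which is the gap CXL memory expansion targets.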

LG gram Ready to Define the Next-Gen AI Laptop With New Intel Core Ultra Processors

LG Electronics (LG) is excited to announce that its newest LG gram laptop featuring the Intel Core Ultra processor (Series 2) will be showcased at the Intel Core Ultra Global Launch Event from September 3-8. Renowned for its powerful performance and ultra-lightweight design, the LG gram series now integrates advanced AI capabilities powered by the latest Intel Core Ultra processor. The LG gram 16 Pro, the first model to feature these new Intel processors, will be unveiled before its release at the end of 2024.

As the first on-device AI laptop in the LG gram series, it offers up to an impressive 48 tera operations per second (TOPS) from its neural processing unit (NPU), setting a new standard for AI PCs and providing the exceptional performance required for Copilot experiences. Powered by the latest Intel Core Ultra processor, the LG gram 16 Pro is now more efficient thanks to advanced AI functionalities such as productivity assistants, text and image creation, and collaboration tools. What's more, its extended battery life helps users handle tasks without worry.

Samsung Announces New Galaxy Book5 Pro 360

Samsung Electronics today announced the Galaxy Book5 Pro 360, a Copilot+ PC and the first in the all-new Galaxy Book5 series. Performance upgrades made possible by the Intel Core Ultra processors (Series 2) bring next-level computing power, with up to 47 total NPU TOPS - and more than 300 AI-accelerated features across 100+ creativity, productivity, gaming and entertainment apps. Microsoft Phone Link provides access to your Galaxy phone screen on a larger, more immersive PC display, enabling use of fan-favorite Galaxy AI features like Circle to Search with Google, Chat Assist, Live Translate and more. And with the Intel Arc GPU, graphics performance is improved by 17%. When paired with stunning features like the Dynamic AMOLED 2X display with Vision Booster and a 10-point multi-touchscreen, the Galaxy Book5 Pro 360 allows creation anytime, anywhere.

"The Galaxy Book5 series brings even more cutting-edge AI experiences to Galaxy users around the world who want to enhance and simplify their everyday tasks - a vision made possible by our continued collaboration with longtime industry partners," said Dr. Hark-Sang Kim, EVP & Head of New Computing R&D Team, Mobile eXperience Business at Samsung Electronics. "As one of our most powerful PCs, Galaxy Book5 Pro 360 brings together top-tier performance with Galaxy's expansive mobile AI ecosystem for the ultimate AI PC experience."

MSI Launches Next-Gen AI+ Gaming and Business and Productivity Laptops

MSI, a leading brand in gaming, content creation, and business & productivity laptops, proudly launched several next-gen AI+ gaming and business productivity laptops featuring the new Intel Core Ultra processors (Series 2) and AMD Ryzen AI 300 Series at IFA 2024. These laptops offer more AI computing power, making them a robust platform for AI PC development, with more AI models, frameworks, and runtimes enabled. Additionally, MSI officially launched the new Claw 8 AI+ Windows 11 gaming handheld, powered by Intel Core Ultra processors (Series 2) and equipped with an 8-inch screen, providing a smoother and broader mobile gaming experience. MSI also announced the all-new Venture series laptops, redefining the combination of thin, light, and powerful. They are equipped with Intel Core Ultra processors (Series 2) and come in a variety of sizes: 14, 15.6, 16, and 17 inches.

"MSI not only brings the industry's most comprehensive AI+ PC lineup but also introduces multiple new laptops and handheld devices designed for gamers worldwide," said Eric Kuo, MSI's Executive Vice President and General Manager of NB Business Unit. "We welcome global guests to visit the MSI booth to experience next-gen AI computing and exciting gaming products."

Intel Announces New Mobile Lunar Lake Core Ultra 200V Series Processors

Intel today launched its most efficient family of x86 processors ever, the Intel Core Ultra 200V series processors. They deliver exceptional performance, breakthrough x86 power efficiency, a massive leap in graphics performance, no-compromise application compatibility, enhanced security and unmatched AI compute. The technology will power the industry's most complete and capable AI PCs with more than 80 consumer designs from more than 20 of the world's top manufacturing partners, including Acer, ASUS, Dell Technologies, HP, Lenovo, LG, MSI and Samsung. Pre-orders begin today with systems available globally on-shelf and online at over 30 global retailers starting Sept. 24. All designs featuring Intel Core Ultra 200V series processors and running the latest version of Windows are eligible to receive Copilot+ PC features as a free update starting in November.

"Intel's newest Core Ultra processors set the industry standard for mobile AI and graphics performance, and smash misconceptions about x86 efficiency. Only Intel has the scale through our partnerships with ISVs and OEMs, and the broader technology ecosystem, to provide consumers with a no-compromise AI PC experience."
--Michelle Johnston Holthaus, Intel executive vice president and general manager of the Client Computing Group

Microsoft Unveils New Details on Maia 100, Its First Custom AI Chip

Microsoft provided a detailed view of Maia 100, its first custom AI chip, at Hot Chips 2024. The system is designed to work seamlessly from end to end, with the goal of improving performance and reducing expenses. It includes specially made server boards, custom racks, and a software stack focused on increasing the effectiveness and robustness of sophisticated AI services such as Azure OpenAI. Microsoft introduced Maia at Ignite 2023, sharing that it had created its own AI accelerator chip, and more information was provided earlier this year at the Build developer event. The Maia 100 is one of the largest processors made using TSMC's 5 nm technology, and is designed for handling extensive AI tasks on the Azure platform.

Maia 100 SoC architecture features:
  • A high-speed tensor unit (16xRx16) offers rapid processing for training and inferencing while supporting a wide range of data types, including low precision data types such as the MX data format, first introduced by Microsoft through the MX Consortium in 2023.
  • The vector processor is a loosely coupled superscalar engine built with custom instruction set architecture (ISA) to support a wide range of data types, including FP32 and BF16.
  • A Direct Memory Access (DMA) engine supports different tensor sharding schemes.
  • Hardware semaphores enable asynchronous programming on the Maia system.
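The MX formats referenced in the first bullet share a single power-of-two scale across a small block of low-precision elements. The toy NumPy sketch below illustrates that block-scaling idea only; the block size, element grid, and scale rule are simplified stand-ins, not Microsoft's or the MX Consortium's specification:

```python
import numpy as np

def mx_like_quantize(x, block=32, elem_max=6.0, elem_bits=8):
    """Toy block-scaled quantization in the spirit of MX formats:
    each block of `block` values shares one power-of-two scale, and
    elements are stored on a coarse grid relative to that scale."""
    x = x.reshape(-1, block)
    # Pick the per-block power-of-two scale so the block's max fits elem_max.
    amax = np.abs(x).max(axis=1, keepdims=True)
    scale = 2.0 ** np.ceil(np.log2(np.maximum(amax, 1e-30) / elem_max))
    # Quantize elements to a small signed grid (stand-in for FP8/FP6 elements).
    levels = 2 ** (elem_bits - 1) - 1
    q = np.clip(np.round(x / scale / elem_max * levels), -levels, levels)
    return (q / levels * elem_max * scale).reshape(-1)  # dequantized values

rng = np.random.default_rng(0)
x = rng.normal(size=64)
xq = mx_like_quantize(x)
print("max abs quantization error:", np.abs(x - xq).max())
```

Sharing one scale per block is what lets the element storage drop to a few bits while keeping per-block dynamic range, which is the property that makes such formats attractive for training and inference hardware.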

Intel Announces Deployment of Gaudi 3 Accelerators on IBM Cloud

IBM and Intel announced a global collaboration to deploy Intel Gaudi 3 AI accelerators as a service on IBM Cloud. This offering, which is expected to be available in early 2025, aims to help more cost-effectively scale enterprise AI and drive innovation underpinned with security and resiliency. This collaboration will also enable support for Gaudi 3 within IBM's watsonx AI and data platform. IBM Cloud is the first cloud service provider (CSP) to adopt Gaudi 3, and the offering will be available for both hybrid and on-premises environments.

"Unlocking the full potential of AI requires an open and collaborative ecosystem that provides customers with choice and accessible solutions. By integrating Gaudi 3 AI accelerators and Xeon CPUs with IBM Cloud, we are creating new AI capabilities and meeting the demand for affordable, secure and innovative AI computing solutions," said Justin Hotard, Intel executive vice president and general manager of the Data Center and AI Group.

Spot Market for Memory Struggles in First Half of 2024; Price Challenges Loom in Second Half

TrendForce reports that memory module makers have been aggressively increasing their DRAM inventories since 3Q23, with inventory levels rising to 11-17 weeks by 2Q24. However, demand for consumer electronics has not rebounded as expected. For instance, smartphone inventories in China have reached excessive levels, and notebook purchases have been delayed as consumers await new AI-powered PCs, leading to continued market contraction.

This has led to a weakening in spot prices for memory products primarily used in consumer electronics, with Q2 prices dropping over 30% compared to Q1. Although spot prices remained disconnected from contract prices through August, this divergence may signal potential future trends for contract pricing.

Cerebras Launches the World's Fastest AI Inference

Today, Cerebras Systems, the pioneer in high-performance AI compute, announced Cerebras Inference, the fastest AI inference solution in the world. Delivering 1,800 tokens per second for Llama 3.1 8B and 450 tokens per second for Llama 3.1 70B, Cerebras Inference is 20 times faster than NVIDIA GPU-based solutions in hyperscale clouds. Starting at just 10 cents per million tokens, Cerebras Inference is priced at a fraction of GPU solutions, providing 100x higher price-performance for AI workloads.

Unlike alternative approaches that compromise accuracy for performance, Cerebras offers the fastest performance while maintaining state of the art accuracy by staying in the 16-bit domain for the entire inference run. Cerebras Inference is priced at a fraction of GPU-based competitors, with pay-as-you-go pricing of 10 cents per million tokens for Llama 3.1 8B and 60 cents per million tokens for Llama 3.1 70B.
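The quoted throughput and pricing translate directly into latency and cost for a given completion. A minimal sketch using only the numbers in the announcement (the 2,000-token workload is an illustrative assumption, and input-token billing is ignored):

```python
# Cerebras Inference figures quoted above (pay-as-you-go tier).
TOKENS_PER_SEC = {"llama3.1-8b": 1800, "llama3.1-70b": 450}
USD_PER_MILLION_TOKENS = {"llama3.1-8b": 0.10, "llama3.1-70b": 0.60}

def estimate(model: str, output_tokens: int):
    """Rough generation time (s) and cost ($) for `output_tokens` output tokens."""
    seconds = output_tokens / TOKENS_PER_SEC[model]
    cost = output_tokens / 1e6 * USD_PER_MILLION_TOKENS[model]
    return seconds, cost

for model in TOKENS_PER_SEC:
    s, c = estimate(model, 2000)   # e.g., a 2,000-token completion
    print(f"{model}: ~{s:.2f} s, ~${c:.4f}")
```

At these rates a 2,000-token completion costs fractions of a cent on either model; the practical difference users would notice is the generation time.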

Japan Unveils Plans for Zettascale Supercomputer: 100 PFLOPs of AI Compute per Node

The zettascale era is officially on the map, as Japan has announced plans to develop a successor to its renowned Fugaku supercomputer. The Ministry of Education, Culture, Sports, Science and Technology (MEXT) has set its sights on creating a machine capable of unprecedented processing power, aiming for 50 ExaFLOPS of peak AI performance with zettascale capabilities. The ambitious "Fugaku Next" project, slated to begin development next year, will be headed by RIKEN, one of Japan's leading research institutions, in collaboration with tech giant Fujitsu. With a target completion date of 2030, the new supercomputer aims to surpass current technological boundaries, potentially becoming the world's fastest once again. MEXT's vision for the "Fugaku Next" includes groundbreaking specifications for each computational node.

The ministry anticipates peak performance of several hundred FP64 TFLOPS for double-precision computations, around 50 FP16 PFLOPS for AI-oriented half-precision calculations, and approximately 100 PFLOPS for AI-oriented 8-bit precision calculations. These figures represent a major leap from Fugaku's current capabilities. The project's initial funding is set at ¥4.2 billion ($29.06 million) for the first year, with total government investment expected to exceed ¥110 billion ($761 million). While the specific architecture remains undecided, MEXT suggests the use of CPUs with special-purpose accelerators or a CPU-GPU combination. The semiconductor node of choice will likely be a 1 nm node or even more advanced nodes available at the time, with advanced packaging also used. The supercomputer will also feature an advanced storage system to handle traditional HPC and AI workloads efficiently. We already have an insight into Monaka, Fujitsu's upcoming CPU design with 150 Armv9 cores. However, Fugaku Next will be powered by the Monaka Next design, which will likely be much more capable.
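MEXT's per-node and system targets imply the rough machine scale involved. A minimal sketch of that arithmetic (the node counts are our own back-of-the-envelope derivation, not an announced configuration, and assume the 50 EFLOPS headline is counted at a single precision):

```python
# MEXT's stated targets for "Fugaku Next".
NODE_FP16_PFLOPS = 50       # AI-oriented half precision, per node
NODE_FP8_PFLOPS = 100       # AI-oriented 8-bit precision, per node
SYSTEM_AI_EFLOPS = 50       # headline peak AI performance

PFLOPS_PER_EFLOPS = 1000
nodes_at_fp8 = SYSTEM_AI_EFLOPS * PFLOPS_PER_EFLOPS / NODE_FP8_PFLOPS
nodes_at_fp16 = SYSTEM_AI_EFLOPS * PFLOPS_PER_EFLOPS / NODE_FP16_PFLOPS
print(f"~{nodes_at_fp8:.0f} nodes if counted at FP8, ~{nodes_at_fp16:.0f} at FP16")
```

Either way the targets point to a machine of hundreds to around a thousand very dense nodes, rather than Fugaku's current ~158,000 comparatively lightweight nodes.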

FuriosaAI Unveils RNGD Power-Efficient AI Processor at Hot Chips 2024

Today at Hot Chips 2024, FuriosaAI is pulling back the curtain on RNGD (pronounced "Renegade"), our new AI accelerator designed for high-performance, highly efficient large language model (LLM) and multimodal model inference in data centers. As part of his Hot Chips presentation, Furiosa co-founder and CEO June Paik is sharing technical details and providing the first hands-on look at the fully functioning RNGD card.

With a TDP of 150 watts, a novel chip architecture, and advanced memory technology like HBM3, RNGD is optimized for inference with demanding LLMs and multimodal models. It's built to deliver high performance, power efficiency, and programmability all in a single product - a trifecta that the industry has struggled to achieve in GPUs and other AI chips.

Intel Dives Deep into Lunar Lake, Xeon 6, and Gaudi 3 at Hot Chips 2024

Demonstrating the depth and breadth of its technologies at Hot Chips 2024, Intel showcased advancements across AI use cases - from the data center, cloud and network to the edge and PC - while covering the industry's most advanced and first-ever fully integrated optical compute interconnect (OCI) chiplet for high-speed AI data processing. The company also unveiled new details about the Intel Xeon 6 SoC (code-named Granite Rapids-D), scheduled to launch during the first half of 2025.

"Across consumer and enterprise AI usages, Intel continuously delivers the platforms, systems and technologies necessary to redefine what's possible. As AI workloads intensify, Intel's broad industry experience enables us to understand what our customers need to drive innovation, creativity and ideal business outcomes. While more performant silicon and increased platform bandwidth are essential, Intel also knows that every workload has unique challenges: A system designed for the data center can no longer simply be repurposed for the edge. With proven expertise in systems architecture across the compute continuum, Intel is well-positioned to power the next generation of AI innovation." -Pere Monclus, chief technology officer, Network and Edge Group at Intel.

ASUS Launches AMD X870E and X870 Chipset Motherboards Across its Motherboard Brands

AMD's next-gen Ryzen 9000 Series CPUs have arrived, setting a new bar for gaming performance. For Gamescom 2024, we're introducing our X870E and X870 motherboard family. These boards unleash the full power of your new AMD CPU with upgraded connectivity, a host of smart features, and an arsenal of performance-boosting refinements.

Your most feature-rich, high-end options for an AMD Ryzen 9000 Series CPU use the X870E chipset. The ROG Crosshair X870E Hero sits at the top of the stack. Premium metallic textures, nickel-plated surfaces, and second-gen Polymo Lighting II make this a true showcase motherboard. But this board doesn't just look the part—it's fully prepped to take your gaming to the next level with the power of advanced AI.

NVIDIA ACE Brings AI-Powered Interactions To Mecha BREAK

NVIDIA ACE is a revolutionary suite of digital human technologies that brings digital humans to life with generative AI. Since its debut at Computex 2023 in the Ramen Shop tech demo, ACE's capabilities have evolved rapidly.

At Gamescom 2024, we announced our first digital human technology, an on-device small language model that improves the conversation abilities of game characters. We also announced that the first game to showcase these ACE and digital human technologies is Amazing Seasun Games' Mecha BREAK, bringing its characters to life and providing a more dynamic and immersive gameplay experience on GeForce RTX AI PCs.

AMD Acquires Hyperscale Solutions Provider ZT Systems

AMD today announced the signing of a definitive agreement to acquire ZT Systems, a leading provider of AI infrastructure for the world's largest hyperscale computing companies. The strategic transaction marks the next major step in AMD's AI strategy to deliver leadership AI training and inferencing solutions based on innovating across silicon, software and systems. ZT Systems' extensive experience designing and optimizing cloud computing solutions will also help cloud and enterprise customers significantly accelerate the deployment of AMD-powered AI infrastructure at scale. AMD has agreed to acquire ZT Systems in a cash and stock transaction valued at $4.9 billion, inclusive of a contingent payment of up to $400 million based on certain post-closing milestones. AMD expects the transaction to be accretive on a non-GAAP basis by the end of 2025.

"Our acquisition of ZT Systems is the next major step in our long-term AI strategy to deliver leadership training and inferencing solutions that can be rapidly deployed at scale across cloud and enterprise customers," said AMD Chair and CEO Dr. Lisa Su. "ZT adds world-class systems design and rack-scale solutions expertise that will significantly strengthen our data center AI systems and customer enablement capabilities. This acquisition also builds on the investments we have made to accelerate our AI hardware and software roadmaps. Combining our high-performance Instinct AI accelerator, EPYC CPU, and networking product portfolios with ZT Systems' industry-leading data center systems expertise will enable AMD to deliver end-to-end data center AI infrastructure at scale with our ecosystem of OEM and ODM partners."

Arm to Dip its Fingers into Discrete GPU Game, Plans on Competing with Intel, AMD, and NVIDIA

According to a recent report from Globes, Arm, the chip design giant and maker of the Arm ISA, is reportedly developing a new discrete GPU at its Ra'anana development center in Israel. This development signals Arm's intention to compete directly with industry leaders like Intel, AMD, and NVIDIA in the massive discrete GPU market. Sources close to the matter reveal that Arm has assembled a team of approximately 100 skilled chip and software development engineers at its Israeli facility. The team is focused on creating GPUs primarily aimed at the video game market. However, industry insiders speculate that this technology could potentially be adapted for AI processing in the future, mirroring the trajectory of NVIDIA, which slowly integrated AI hardware accelerators into its lineup.

The Israeli development center is playing a crucial role in this initiative. The hardware teams are overseeing the development of key components for these GPUs, including the flagship Immortalis and Mali GPUs. Meanwhile, the software teams are creating interfaces for external graphics engine developers, working with both established game developers and startups. Arm is already entering the PC market through partners like Qualcomm with its Snapdragon X chips. However, those chips use an integrated GPU, and Arm wants to provide discrete GPUs to compete in that segment as well. While details are still scarce, Arm could make GPUs to accompany Arm-based Copilot+ PCs and some desktop builds. The final execution plan has yet to be revealed, and it remains to be seen which stage Arm's discrete GPU project has reached.

India Targets 2026 for Its First Domestic AI Chip Development

Ola, an Indian automotive company, is venturing into AI chip development with its artificial intelligence branch, Krutrim, planning to launch India's first domestically designed AI chip by 2026. The company is leveraging the Arm architecture for this initiative. CEO Bhavish Aggarwal emphasizes the importance of India developing its own AI technology rather than relying on external sources.

While detailed specifications are limited, Ola claims these chips will offer competitive performance and efficiency. For manufacturing, the company plans to partner with a global tier I or II foundry, possibly TSMC or Samsung. "We are still exploring foundries, we will go with a global tier I or II foundry. Taiwan is a global leader, and so is Korea. I visited Taiwan a couple of months back and the ecosystem is keen on partnering with India," Aggarwal said.

TSMC Reportedly to Manufacture SoftBank's AI Chips, Replacing Intel

SoftBank has reportedly decided against using Intel's foundry for its ambitious AI venture, Project Izanagi, and is opting for TSMC instead. The conglomerate aims to challenge NVIDIA in the AI accelerator market by developing its own AI processors. This decision marks another setback for Intel, which has faced several challenges recently. In February 2024, reports emerged that SoftBank's CEO, Masayoshi Son, planned to invest up to $100 billion to create a company similar to NVIDIA, focused on selling AI accelerators. Although SoftBank initially worked with Intel, it recently switched to TSMC, citing concerns about Intel's ability to meet demands for "volume and speed."

The decision, reported by the Financial Times, raises questions about Intel's future involvement and how SoftBank's ownership of Arm Holdings will factor into the project. While TSMC is now SoftBank's choice, the foundry is already operating at full capacity, making it uncertain how it will accommodate this new venture. Neither SoftBank, Intel nor TSMC has commented on the situation, but given the complexities involved, it will likely take time for this plan to materialize. SoftBank will need to replicate NVIDIA's entire ecosystem, from chip design to data centers and a software stack rivaling CUDA, a bold and ambitious goal.

Geekbench AI Hits 1.0 Release: CPUs, GPUs, and NPUs Finally Get AI Benchmarking Solution

Primate Labs, the developer behind the popular Geekbench benchmarking suite, has launched Geekbench AI—a comprehensive benchmark tool designed to measure the artificial intelligence capabilities of various devices. Geekbench AI, previously known as Geekbench ML during its preview phase, has now reached version 1.0. The benchmark is available on multiple operating systems, including Windows, Linux, macOS, Android, and iOS, making it accessible to many users and developers. One of Geekbench AI's key features is its multifaceted approach to scoring. The benchmark utilizes three distinct precision levels: single-precision, half-precision, and quantized data. This evaluation aims to provide a more accurate representation of AI performance across different hardware designs.

In addition to speed, Geekbench AI places a strong emphasis on accuracy. The benchmark assesses how closely each test's output matches the expected results, offering insights into the trade-offs between performance and precision. The release of Geekbench AI 1.0 brings support for new frameworks, including OpenVINO, ONNX, and Qualcomm QNN, expanding its compatibility across various platforms. Primate Labs has also implemented measures to ensure fair comparisons, such as enforcing minimum runtime durations for each workload. The company noted that Samsung and NVIDIA are already utilizing the software to measure their chip performance in-house, showing that adoption is already strong. While the benchmark provides valuable insights, real-world AI applications are still limited, and reliance on a few benchmarks may paint only a partial picture. Nevertheless, Geekbench AI represents a significant step forward in standardizing AI performance measurement, potentially influencing future consumer choices in the AI-driven tech market.
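The speed-versus-accuracy trade-off Geekbench AI scores can be illustrated by comparing one workload's output across precision levels. The NumPy sketch below runs a matrix-vector product at full, half, and int8-quantized precision and compares each against the reference with cosine similarity; the metric and workload are illustrative stand-ins, not Geekbench's actual methodology:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
w = rng.normal(size=(256, 256))
x = rng.normal(size=256)

ref = w @ x  # full-precision reference output
# Half-precision path: cast inputs down, compute, compare against the reference.
half = (w.astype(np.float16) @ x.astype(np.float16)).astype(np.float64)
# Crude "quantized" path: per-tensor symmetric int8 quantization of the weights.
s = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / s), -127, 127)
quant = (w_q * s) @ x

for name, y in [("half-precision", half), ("quantized", quant)]:
    print(f"{name}: similarity to reference = {cosine_similarity(ref, y):.6f}")
```

Both reduced-precision paths stay close to the reference on this toy workload, which is why a benchmark that reports speed alone would miss the accuracy cost of more aggressive quantization schemes.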

Huawei Reportedly Developing New Ascend 910C AI Chip to Rival NVIDIA's H100 GPU

Amidst escalating tensions in the U.S.-China semiconductor industry, Huawei is reportedly working on a new AI chip called the Ascend 910C. This development appears to be the Chinese tech giant's attempt to compete with NVIDIA's AI processors in the Chinese market. According to a Wall Street Journal report, Huawei has begun testing the Ascend 910C with various Chinese internet and telecom companies to evaluate its performance and capabilities. Notable firms such as ByteDance, Baidu, and China Mobile are said to have received samples of the chip.

Huawei has reportedly informed its clients that the Ascend 910C can match the performance of NVIDIA's H100 chip. The company has been conducting tests for several weeks, suggesting that the new processor is nearing completion. The Wall Street Journal indicates that Huawei could start shipping the chip as early as October 2024. The report also mentions that Huawei and potential customers have discussed orders for over 70,000 chips, potentially worth $2 billion.
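Dividing the reported totals gives a rough implied average price per chip. A minimal sketch (a straight division of the report's figures; the actual order mix and per-unit pricing are unknown):

```python
reported_chips = 70_000        # discussed orders, per the report
reported_value_usd = 2e9       # roughly $2 billion in total

implied_asp_usd = reported_value_usd / reported_chips
print(f"Implied average selling price per Ascend 910C: ~${implied_asp_usd:,.0f}")
```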

Tachyum Builds Last Batch of FPGA Prototypes Ahead of Tape-Out

Tachyum today announced the final build of its Prodigy FPGA emulation system in advance of chip production and general availability next year. As part of the announcement, the company is also ending its purchase program for prototype systems that was previously offered to commercial and federal customers.

These last hardware FPGA prototype units will ensure Tachyum hits its extreme-reliability test targets of more than 10 quadrillion cycles prior to tape-out and before the first Prodigy chips hit the market. Tachyum's software emulation system - and access to it - is expanding with additional availability of open-source software ported ahead of Prodigy's upstreaming.

SiFive Announces Performance P870-D RISC-V Datacenter Processor

Today SiFive, Inc., the gold standard for RISC-V computing, announced its new SiFive Performance P870-D datacenter processor to meet customer requirements for highly parallelizable infrastructure workloads including video streaming, storage, and web appliances. When used in combination with products from the SiFive Intelligence product family, datacenter architects can also build an extremely high-performance, energy efficient compute subsystem for AI-powered applications.

Building on the success of the P870, the P870-D supports the open AMBA CHI protocol so customers have more flexibility to scale the number of clusters. This scalability allows customers to boost performance while minimizing power consumption. By harnessing a standard CHI bus, the P870-D enables SiFive's customers to scale up to 256 cores while harnessing industry-standard protocols, including Compute Express Link (CXL) and CHI chip to chip (C2C), to enable coherent high core count heterogeneous SoCs and chiplet configurations.