News Posts matching #AI

Sam Altman to Return as OpenAI CEO, Days After Board's Decision for Removal

Over the past few days, the OpenAI drama has continued to reveal more details about the relationship between OpenAI's board, employees, and even the executive layer of the company. As we have covered previously, the OpenAI board fired the company's CEO, Sam Altman, last Friday, November 17. Over the weekend, Mr. Altman was approached by Microsoft CEO Satya Nadella and offered a position leading an AI unit within the Redmond giant; however, that employment was never finalized. Today, we learned that Sam Altman has reached an agreement with the board to return to OpenAI, along with Greg Brockman and many other OpenAI employees.

After starting a wave of posts on the X/Twitter platform saying, "OpenAI is nothing without its people," the employees of OpenAI signed a letter requesting the board to bring back Sam Altman. With the deal now in place, employees are expected to continue working for OpenAI under Sam Altman's leadership. The new initial board of OpenAI is composed of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo. Sam Altman said in a post on X: "i love openai, and everything i've done over the past few days has been in service of keeping this team and its mission together. when i decided to join msft on sun evening, it was clear that was the best path for me and the team. with the new board and w satya's support, i'm looking forward to returning to openai, and building on our strong partnership with msft." Microsoft CEO Satya Nadella added: "We are encouraged by the changes to the OpenAI board. We believe this is a first essential step on a path to more stable, well-informed, and effective governance. Sam, Greg, and I have talked and agreed they have a key role to play along with the OAI leadership team in ensuring OAI continues to thrive and build on its mission. We look forward to building on our strong partnership and delivering the value of this next generation of AI to our customers and partners."

MAINGEAR Unveils Powerful Workstation PCs Designed for Creatives and Professionals

MAINGEAR, the leader in premium-quality, high-performance, custom PCs, today announced the launch of its latest lineup of Pro Series Workstation PCs, meticulously engineered and configurable with the industry's most powerful components, to cater to the diverse needs of professionals across multiple industries.

Ideal for game developers, photo editors, graphic designers, videographers, 3D rendering artists, music producers, CAD engineers, data scientists, and AI/machine learning developers, the MAINGEAR ProWS Series introduces a range of desktop workstations crafted to crush the most intensive tasks, elevate productivity, and streamline workflows.

MediaTek's New Dimensity 8300 Chipset Redefines Premium Experiences in 5G Smartphones

MediaTek today announced the Dimensity 8300, a power-efficient chipset designed for premium 5G smartphones. As the newest SoC in the Dimensity 8000 lineup, this chipset combines generative AI capabilities, low power consumption, adaptive gaming technology, and fast connectivity to bring flagship-level experiences to the premium 5G smartphone segment.

Based on TSMC's second-generation 4 nm process, the Dimensity 8300 has an octa-core CPU with four Arm Cortex-A715 cores and four Cortex-A510 cores built on Arm's latest Armv9 architecture. With this powerful core configuration, the Dimensity 8300 boasts 20% faster CPU performance and up to 30% better power efficiency compared to the previous-generation chipset. Additionally, the Dimensity 8300's upgraded Mali-G615 MC6 GPU provides up to 60% greater performance and 55% better power efficiency. Plus, the chipset's impressive memory and storage speeds ensure users can enjoy smooth and dynamic experiences in gaming, lifestyle applications, photography, and more.

NVIDIA's New Ethernet Networking Platform for AI Available Soon From Dell Technologies, Hewlett Packard Enterprise, Lenovo

NVIDIA today announced that Dell Technologies, Hewlett Packard Enterprise and Lenovo will be the first to integrate NVIDIA Spectrum-X Ethernet networking technologies for AI into their server lineups to help enterprise customers speed up generative AI workloads. Purpose-built for generative AI, Spectrum-X offers enterprises a new class of Ethernet networking that can achieve 1.6x higher networking performance for AI communication versus traditional Ethernet offerings. The new systems coming from three of the top system makers bring together Spectrum-X with NVIDIA Tensor Core GPUs, NVIDIA AI Enterprise software and NVIDIA AI Workbench software to provide enterprises the building blocks to transform their businesses with generative AI.

"Generative AI and accelerated computing are driving a generational transition as enterprises upgrade their data centers to serve these workloads," said Jensen Huang, founder and CEO of NVIDIA. "Accelerated networking is the catalyst for a new wave of systems from NVIDIA's leading server manufacturer partners to speed the shift to the era of generative AI."

SK hynix Showcases Next-Gen AI and HPC Solutions at SC23

SK hynix presented its leading AI and high-performance computing (HPC) solutions at Supercomputing 2023 (SC23), held in Denver, Colorado, from November 12 to 17. Organized by the Association for Computing Machinery and IEEE Computer Society since 1988, the annual SC conference showcases the latest advancements in HPC, networking, storage, and data analysis. SK hynix marked its first appearance at the conference by introducing its groundbreaking memory solutions to the HPC community. During the six-day event, several SK hynix employees also made presentations revealing the impact of the company's memory solutions on AI and HPC.

Displaying Advanced HPC & AI Products
At SC23, SK hynix showcased its products tailored for AI and HPC to underline its leadership in the AI memory field. Among these next-generation products, HBM3E attracted attention as an HBM solution that meets the industry's highest standards of speed, capacity, heat dissipation, and power efficiency. These capabilities make it particularly suitable for data-intensive AI server systems. HBM3E was presented alongside NVIDIA's H100, a high-performance GPU for AI that uses HBM3 for its memory.

Rapidus and Tenstorrent Partner to Accelerate Development of AI Edge Device Domain Based on 2 nm Logic

Rapidus Corporation, a company involved in the research, development, design, manufacture, and sales of advanced logic semiconductors, today announced an agreement with Tenstorrent Inc., a next-generation computing company building computers for AI, to jointly develop semiconductor IP (design assets) in the field of AI edge devices based on 2 nm logic semiconductors.

In addition to its AI processors and servers, Tenstorrent built and owns the world's most performant RISC-V CPU IP and licenses that technology to its customers around the world. Through this technological partnership with Rapidus, Tenstorrent will accelerate the development of cutting-edge devices to meet the needs of the ever-evolving digital society.

Dropbox and NVIDIA Team to Bring Personalized Generative AI to Millions of Customers

Today, Dropbox, Inc. and NVIDIA announced a collaboration to supercharge knowledge work and improve productivity for millions of Dropbox customers through the power of AI. The companies' collaboration will expand Dropbox's extensive AI functionality with new uses for personalized generative AI to improve search accuracy, provide better organization, and simplify workflows for its customers across their cloud content.

Dropbox plans to leverage NVIDIA's AI foundry consisting of NVIDIA AI Foundation Models, NVIDIA AI Enterprise software and NVIDIA accelerated computing to enhance its latest AI-powered product experiences. These include Dropbox Dash, universal search that connects apps, tools, and content in a single search bar to help customers find what they need; Dropbox AI, a tool that allows customers to ask questions and get summaries on large files across their entire Dropbox; among other AI capabilities in Dropbox.

ASRock Launches AI QuickSet Software Tool to Experience AI in One Click

ASRock, the leading global motherboard manufacturer, today launched its AI QuickSet software tool, which helps users quickly download, install, and set up artificial intelligence (AI) applications. The first version is based on the Microsoft Windows 11 64-bit platform and leverages the computing performance of ASRock's AMD Radeon RX 7000 series graphics cards to optimize two well-known open-source AI image generation applications, Shark and Stable Diffusion web UI, so that interested users can quickly experience the fun of AI at their fingertips.

You Can Now Create a Digital Clone of Yourself with Eternity.AC, an AI Startup Paving a Path to Immortality

Science fiction is coming to life with eternity.ac, a new startup offering personal digital cloning: anyone can transcend physical limitations with an affordable artificial intelligence that looks, talks, and converses just like you. The new venture empowers individuals to preserve their unique appearance, thoughts, experiences, and memories with a simple three-step clone creation process.

The innovation opens up a new spectrum of meaningful AI uses, such as allowing future generations to interact with loved ones, enabling fans and followers to engage with their favorite public figures, and helping people understand the viewpoints and experiences of others. Once created, people can interact with the clone via written chat or through vocal conversations.

Semiconductor Market to Grow 20.2% in 2024 to $633 Billion, According to IDC

International Data Corporation (IDC) has upgraded its Semiconductor Market Outlook, calling a bottom and a return to growth that accelerates next year. In its new forecast, IDC raised its 2023 revenue outlook from $518.8 billion (the September 2023 figure) to $526.5 billion. Revenue expectations for 2024 were also raised from $625.9 billion to $632.8 billion, as IDC believes the U.S. market will remain resilient from a demand standpoint and China will begin recovering by the second half of 2024 (2H24).
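The headline 20.2% growth figure follows directly from the forecast numbers quoted above; a minimal arithmetic check in Python (all figures taken from IDC's forecast as reported here):

```python
# Sanity-check the implied 2024 growth rate from IDC's revised forecast.
rev_2023 = 526.5  # 2023 semiconductor revenue outlook, billions USD
rev_2024 = 632.8  # 2024 semiconductor revenue outlook, billions USD

growth_pct = (rev_2024 / rev_2023 - 1) * 100
print(f"Implied 2024 growth: {growth_pct:.1f}%")  # ~20.2%, matching the headline
```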

IDC sees better semiconductor growth visibility as the long inventory correction subsides in two of the largest market segments: PCs and smartphones. Elevated inventory levels in the automotive and industrial segments are expected to return to normal in 2H24 as electrification continues to drive semiconductor content over the next decade. Technology and large flagship product introductions will drive more semiconductor content and value across market segments from 2024 through 2026, including the introduction of AI PCs and AI smartphones next year and a much-needed improvement in memory ASPs and DRAM bit volume.

AMD Brings New AI and Compute Capabilities to Microsoft Customers

Today at Microsoft Ignite, AMD and Microsoft featured how AMD products, including the upcoming AMD Instinct MI300X accelerator, AMD EPYC CPUs and AMD Ryzen CPUs with AI engines, are enabling new services and compute capabilities across cloud and generative AI, Confidential Computing, Cloud Computing and smarter, more intelligent PCs.

"AMD is fostering AI everywhere - from the cloud, to the enterprise and end point devices - all powered by our CPUs, GPUs, accelerators and AI engines," said Vamsi Boppana, Senior Vice President, AI, AMD. "Together with Microsoft and a rapidly growing ecosystem of software and hardware partners, AMD is accelerating innovation to bring the benefits of AI to a broad portfolio of compute engines, with expanding software capabilities."

Microsoft Introduces 128-Core Arm CPU for Cloud and Custom AI Accelerator

During its Ignite conference, Microsoft introduced a duo of custom-designed silicon made to accelerate AI and excel in cloud workloads. First of the two is Microsoft's Azure Cobalt 100 CPU, a 128-core design based on the 64-bit Armv9 instruction set, implemented in a cloud-native design that is set to become part of Microsoft's offerings. While there aren't many details regarding the configuration, the company claims up to 40% higher performance compared to the current generation of Arm servers running on Azure cloud. The SoC uses Arm's Neoverse CSS platform customized for Microsoft, presumably with Arm Neoverse N2 cores.

The next and hottest topic in the server space is AI acceleration, which is needed for running today's large language models. Microsoft hosts OpenAI's ChatGPT, Microsoft's Copilot, and many other AI services. To help make them run as fast as possible, Microsoft's Project Athena now carries the name Maia 100 AI accelerator, which is manufactured on TSMC's 5 nm process. It features 105 billion transistors and supports various MX data formats, even those smaller than 8 bits, for maximum performance. The chip is currently being tested on GPT-3.5 Turbo, and we have yet to see performance figures and comparisons with competing hardware such as NVIDIA's H100/H200 and AMD's MI300X. The Maia 100 has an aggregate bandwidth of 4.8 Terabits per accelerator and uses a custom Ethernet-based networking protocol for scaling. These chips are expected to appear in Microsoft data centers early next year, and we hope to get some performance numbers soon.

NVIDIA Introduces Generative AI Foundry Service on Microsoft Azure for Enterprises and Startups Worldwide

NVIDIA today introduced an AI foundry service to supercharge the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.

The NVIDIA AI foundry service pulls together three elements—a collection of NVIDIA AI Foundation Models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services—that give enterprises an end-to-end solution for creating custom generative AI models. Businesses can then deploy their customized models with NVIDIA AI Enterprise software to power generative AI applications, including intelligent search, summarization and content generation.

NVIDIA Announces up to 5x Faster TensorRT-LLM for Windows, and ChatGPT API-like Interface

Even as CPU vendors work to mainstream accelerated AI for client PCs, and Microsoft sets the pace for more AI in everyday applications with the Windows 11 23H2 Update, NVIDIA is out there reminding you that every GeForce RTX GPU is an AI accelerator. This is thanks to its Tensor cores and the SIMD muscle of the ubiquitous CUDA cores. NVIDIA has been making these for over 5 years and has an install base of over 100 million GPUs. The company is hence focusing on bringing generative AI acceleration to more client- and enthusiast-relevant use cases, such as large language models.

NVIDIA at the Microsoft Ignite event announced new optimizations, models, and resources to bring accelerated AI to everyone with an NVIDIA GPU that meets the hardware requirements. To begin with, the company introduced an update to TensorRT-LLM for Windows, a library that leverages the NVIDIA RTX architecture for accelerating large language models (LLMs). The new TensorRT-LLM version 0.6.0 will be released later this month and will improve LLM inference performance by up to 5x in terms of tokens per second, compared to the initial release of TensorRT-LLM from October 2023. In addition, TensorRT-LLM 0.6.0 will introduce support for popular LLMs, including Mistral 7B and Nemotron-3 8B. Accelerating these two will require a GeForce RTX 30-series "Ampere" or 40-series "Ada" GPU with at least 8 GB of video memory.

Opal Launches Tadpole, the First Webcam Designed for Laptops

Today Opal Camera Inc. announced Tadpole, the first webcam built exclusively for laptops and tablets. Opal's newest product delivers ultra-portability with the best image quality on the market: at just one-eighth the size of an average webcam, Tadpole is the world's smallest webcam ever made. Equipped with the first directional mic to be used on any webcam, along with AI noise isolation and a capacitive touch sensor for easy tap-to-mute functionality, Tadpole offers unprecedented audio quality designed for work anywhere.

"The way we work has changed. Modern work is fluid today. It doesn't just exist in an office or in a home. It happens on couches, in coffee shops, over poor hotel room WiFi," said Veeraj Chugh, CEO and Co-Founder of Opal Camera. "We wanted to build a great product specifically for the way people work today. Tadpole is the first product of its kind: it's tiny, works on any laptop, and comes with a huge sensor, really cool audio features and reliable stability."

TYAN Unveils its Robust Immersion Cooling Solution Delivering Significant PUE Enhancement at SC23

TYAN, an industry leader in server platform design and a subsidiary of MiTAC Computing Technology Corporation, unveils an immersion cooling solution that delivers a significant PUE (Power Usage Effectiveness) improvement and showcases its latest server platforms powered by 4th Gen Intel Xeon Scalable Processors targeting HPC, AI, and Cloud Computing applications at SC23, Booth #1917.

Significant PUE Improvement in an Immersion Cooling Tank vs. a Conventional Air-Cooled Cabinet
The immersion cooling system demonstrated live at the TYAN booth during SC23 is a 4U hybrid single-phase tank enclosure equipped with four TYAN GC68A-B7136 cloud computing servers. Compared to a conventional air-cooled cabinet, this hybrid immersion cooling system offers a major PUE improvement, making it an ideal mission-critical solution for users focused on energy savings and green products.

ASRock Rack Announces Support of NVIDIA H200 GPUs and GH200 Superchips and Highlights HPC and AI Server Platforms at SC 23

ASRock Rack Inc., the leading innovative server company, will showcase a comprehensive range of servers for diverse AI workloads, catering to scenarios from the edge and on-premises to the cloud, at booth #1737 at SC 23, held at the Colorado Convention Center in Denver, USA, from November 13 to 16. ASRock Rack will feature the following significant highlights:

At SC 23, ASRock Rack will demonstrate the NVIDIA-Qualified 2U4G-GENOA/M3 and 4U8G series GPU server solutions along with the NVIDIA H100 PCIe. The ASRock Rack 4U8G and 4U10G series GPU servers can accommodate eight to ten 400 W dual-slot GPU cards and 24 hot-swappable 2.5" drives, and are designed to deliver exceptional performance for demanding AI workloads deployed in cloud environments. The 2U4G-GENOA/M3, tailored for lighter workloads, is powered by a single AMD EPYC 9004 series processor and supports four 400 W dual-slot GPUs while providing additional PCIe and OCP NIC 3.0 slots for expansion.

CyberLink and Intel Work Together to Lead the Gen-AI Era, Enhancing the AI Content Creation Experience

CyberLink, a leader in digital creative editing software and artificial intelligence (AI), attended Intel Innovation Taipei 2023. As a long-standing Intel independent software vendor (ISV) partner, CyberLink demonstrated how its latest generative AI technology is used to easily create amazing photo and video content with tools such as AI Business Outfits, AI Product Background, and AI Video to Anime. During the forum, CyberLink Chairman and CEO Jau Huang shared how Intel's upcoming AI PC is expected to benefit content creators by bringing generative AI creativity from cloud computing to personal computers. This shift not only reduces the cost of AI computing but also eliminates users' privacy concerns, fostering an entirely new AI content creation experience in which it is even easier to unleash creativity with generative AI.

Intel Innovation Taipei was kicked off by Intel CEO Pat Gelsinger. The event highlighted four major themes: artificial intelligence, edge to cloud, next-generation systems and platforms, and advanced technologies, as well as the latest results of cooperation with Taiwan ecosystem partners, including the latest AI PCs.

Lenovo Announces the ThinkStation P8 Powered by AMD Ryzen Threadripper PRO 7000 WX-Series and NVIDIA RTX Graphics

Today, Lenovo announced the new ThinkStation P8 tower workstation powered by AMD Ryzen Threadripper PRO 7000 WX-Series processors and NVIDIA RTX GPUs. Designed to deliver unparalleled performance, reliability and flexibility for professionals who demand the best from their workstations, the bold new ThinkStation P8 builds on the success of the award-winning P620, the world's first workstation powered by AMD Ryzen Threadripper PRO processors. Featuring an optimized thermal design in a versatile Aston Martin inspired chassis, the ThinkStation P8 combines Lenovo's legendary reliability, customer experience and innovation with breakthrough compute architecture courtesy of AMD and NVIDIA. ThinkStation P8 raises the bar for intense workloads across multiple segments focused on outcome-based workflow agility.

"At Lenovo, we understand that our customers need high-quality workstations that can adapt to their changing and diverse needs. That's why we collaborated with AMD and NVIDIA to create the ThinkStation P8, a workstation that combines power, flexibility and enterprise-grade features," said Rob Herman, vice president and general manager, Workstation and Client AI Business Unit, Lenovo. "Designed to offer unparalleled performance and scalability, whether to run complex simulations, render stunning visuals, or develop cutting-edge AI applications, the ThinkStation P8 can handle it all. And with Lenovo's certifications, security and support, you can trust that the ThinkStation P8 will exceed expectations."

Supermicro Expands AI Solutions with the Upcoming NVIDIA HGX H200 and MGX Grace Hopper Platforms Featuring HBM3e Memory

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is expanding its AI reach with upcoming support for the new NVIDIA HGX H200 built with H200 Tensor Core GPUs. Supermicro's industry-leading AI platforms, including 8U and 4U Universal GPU Systems, are drop-in ready for the HGX H200 in 8-GPU and 4-GPU configurations, featuring HBM3e memory with nearly 2x the capacity and 1.4x higher bandwidth compared to the NVIDIA H100 Tensor Core GPU. In addition, the broadest portfolio of Supermicro NVIDIA MGX systems supports the upcoming NVIDIA Grace Hopper Superchip with HBM3e memory. With unprecedented performance, scalability, and reliability, Supermicro's rack-scale AI solutions accelerate the performance of computationally intensive generative AI, large language model (LLM) training, and HPC applications while meeting the evolving demands of growing model sizes. Using its building block architecture, Supermicro can quickly bring new technology to market, enabling customers to become more productive sooner.
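The "nearly 2x capacity" and "1.4x bandwidth" claims can be illustrated with a quick calculation. Note the H100/H200 capacity and bandwidth figures below are assumptions taken from NVIDIA's publicly listed spec sheets, not from this announcement:

```python
# Comparing H200 (HBM3e) to H100 (HBM3) memory, using publicly listed specs
# (these figures are assumptions from NVIDIA spec sheets, not from the article).
h100_capacity_gb, h100_bw_tbs = 80, 3.35   # H100 SXM: 80 GB HBM3, 3.35 TB/s
h200_capacity_gb, h200_bw_tbs = 141, 4.8   # H200: 141 GB HBM3e, 4.8 TB/s

print(f"Capacity ratio:  {h200_capacity_gb / h100_capacity_gb:.2f}x")  # ~1.76x ("nearly 2x")
print(f"Bandwidth ratio: {h200_bw_tbs / h100_bw_tbs:.2f}x")            # ~1.43x ("1.4x higher")
```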

Supermicro is also introducing the industry's highest-density server with NVIDIA HGX H100 8-GPU systems in a liquid-cooled 4U chassis, utilizing the latest Supermicro liquid cooling solution. The industry's most compact high-performance GPU server enables data center operators to reduce footprints and energy costs while offering the highest-performance AI training capacity available in a single rack. With the highest-density GPU systems, organizations can reduce their TCO by leveraging cutting-edge liquid cooling solutions.

Intel Advances Scientific Research and Performance for New Wave of Supercomputers

At SC23, Intel showcased AI-accelerated high performance computing (HPC) with leadership performance for HPC and AI workloads across Intel Data Center GPU Max Series, Intel Gaudi 2 AI accelerators and Intel Xeon processors. In partnership with Argonne National Laboratory, Intel shared progress on the Aurora generative AI (genAI) project, including an update on the 1 trillion parameter GPT-3 LLM on the Aurora supercomputer that is made possible by the unique architecture of the Max Series GPU and the system capabilities of the Aurora supercomputer. Intel and Argonne demonstrated the acceleration of science with applications from the Aurora Early Science Program (ESP) and the Exascale Computing Project. The company also showed the path to Intel Gaudi 3 AI accelerators and Falcon Shores.

"Intel has always been committed to delivering innovative technology solutions to meet the needs of the HPC and AI community. The great performance of our Xeon CPUs along with our Max GPUs and CPUs help propel research and science. That coupled with our Gaudi accelerators demonstrate our full breadth of technology to provide our customers with compelling choices to suit their diverse workloads," said Deepak Patil, Intel corporate vice president and general manager of Data Center AI Solutions.

Leaked Flyer Hints at Possible AMD Ryzen 9000 Series Powered by Zen 5

A curious piece of marketing material on the Chiphell forum has sent ripples through the tech community, featuring what appears to be an Alienware desktop equipped with an unannounced AMD Ryzen 9000-series processor. The authenticity of this flyer is up for debate, with possibilities ranging from a simple typo by Alienware to a fabricated image, or it could even suggest that AMD is on the cusp of unveiling its next-generation Ryzen CPUs for desktop PCs. While intrigue is high, it's important to approach such revelations cautiously, with a big grain of salt. AMD's existing roadmap points toward a 2024 release for its Zen 5-based Ryzen desktop processors and EPYC server CPUs, which casts further doubt on the Ryzen 9000 series appearing ahead of schedule.

We have to wait for AMD's major upcoming events, including the "Advancing AI" event on December 6, where the company will showcase how it and its partners use AI across applications. Beyond that, we hope to hear more from AMD at upcoming events such as CES in January and Computex in May, but we have no official information on product launches in the near term. If the company is preparing anything, the Alienware flyer pictured below may be an early indication, provided the source is genuine. For now, doubts remain, and the flyer should be treated with skepticism.

Master & Dynamic Announces the MW09 TWS Earbuds with up to 12 Hour Battery Life with ANC Enabled

We are thrilled to announce the all-new MW09 Active Noise-Cancelling True Wireless Earphones, the latest launch in our line of True Wireless Earphones, offering the ultimate listening experience in a streamlined, powerful package. The MW09 is optimized from the inside out and features updated acoustic architecture, a refined ergonomic body, up to 16 hours of listening time, and aluminium and Kevlar cases with wireless charging.

"For the MW09, we focused our efforts on producing significant performance enhancements without compromising our obsession with design and materials. We're most excited about our proprietary AI-enhanced talk solution and adaptive ANC. With all these improved features, I'm confident the MW09 will sound as good as it looks," says Master & Dynamic Founder & CEO Jonathan Levine.

NVIDIA Turbocharges Generative AI Training in MLPerf Benchmarks

NVIDIA's AI platform raised the bar for AI training and high performance computing in the latest MLPerf industry benchmarks. Among many new records and milestones, one in generative AI stands out: NVIDIA Eos - an AI supercomputer powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking - completed a training benchmark based on a GPT-3 model with 175 billion parameters trained on one billion tokens in just 3.9 minutes. That's a nearly 3x gain from 10.9 minutes, the record NVIDIA set when the test was introduced less than six months ago.

The benchmark uses a portion of the full GPT-3 data set behind the popular ChatGPT service that, by extrapolation, Eos could now train in just eight days, 73x faster than a prior state-of-the-art system using 512 A100 GPUs. The acceleration in training time reduces costs, saves energy and speeds time-to-market. It's heavy lifting that makes large language models widely available so every business can adopt them with tools like NVIDIA NeMo, a framework for customizing LLMs. In a new generative AI test this round, 1,024 NVIDIA Hopper architecture GPUs completed a training benchmark based on the Stable Diffusion text-to-image model in 2.5 minutes, setting a high bar on this new workload. By adopting these two tests, MLPerf reinforces its leadership as the industry standard for measuring AI performance, since generative AI is the most transformative technology of our time.
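The "nearly 3x" gain quoted above follows directly from the two benchmark times; a quick arithmetic check (both times taken from the MLPerf results as reported here):

```python
# Verify the speedup between NVIDIA's two MLPerf GPT-3 175B training results.
old_minutes = 10.9  # record set when the benchmark was introduced ~6 months prior
new_minutes = 3.9   # NVIDIA Eos (10,752 H100 GPUs) this round

speedup = old_minutes / new_minutes
print(f"Speedup: {speedup:.2f}x")  # ~2.79x, i.e. the "nearly 3x" gain cited
```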

Synopsys Expands Its ARC Processor IP Portfolio with New RISC-V Family

Synopsys, Inc. (Nasdaq: SNPS) today announced it has extended its ARC Processor IP portfolio to include new RISC-V ARC-V Processor IP, enabling customers to choose from a broad range of flexible, extensible processor options that deliver optimal power-performance efficiency for their target applications. Synopsys leveraged decades of processor IP and software development toolkit experience to develop the new ARC-V Processor IP that is built on the proven microarchitecture of Synopsys' existing ARC Processors, with the added benefit of the expanding RISC-V software ecosystem.

Synopsys ARC-V Processor IP includes high-performance, mid-range, and ultra-low power options, as well as functional safety versions, to address a broad range of application workloads. To accelerate software development, the Synopsys ARC-V Processor IP is supported by the robust and proven Synopsys MetaWare Development Toolkit that generates highly efficient code. In addition, the Synopsys.ai full-stack AI-driven EDA suite is co-optimized with ARC-V Processor IP to provide an out-of-the-box development and verification environment that helps boost productivity and quality-of-results for ARC-V-based SoCs.