News Posts matching #SDK


Mobilint Debuts New AI Chips at Silicon Valley Summit

Mobilint, an edge AI chip company led by CEO Dongjoo Shin, is set to make waves at the upcoming AI Hardware & Edge AI Summit 2024 in Silicon Valley. The three-day event, starting on September 10th, will showcase Mobilint's latest innovations in AI chip technology. The company will give live demonstrations of its high-efficiency SoC 'REGULUS' for on-device AI and its high-performance acceleration chip 'ARIES' for on-premises AI.

The AI Hardware Summit is an annual event where global IT giants such as Microsoft, NVIDIA, Google, Meta, and AMD, along with prominent startups, gather to share their developments in AI and machine learning. This year's summit features world-renowned AI experts as speakers, including Andrew Ng, CEO of Landing AI, and Mark Russinovich, CTO of Microsoft Azure.

AMD Releases FidelityFX SDK v1.1 to GPUOpen, Includes FSR 3.1 Source Code

AMD today released the FidelityFX SDK 1.1 to the public through its GPUOpen initiative. This update includes the source code to FSR 3.1, which should make it easier for game developers to understand the technology, and integrate it with their games. FSR 3.1 requires an AMD Radeon RX 5000 series (or later) GPU, or an NVIDIA GeForce RTX 20-series (or later) GPU, although the company recommends at least an RX 6000 series or RTX 30-series GPU, regardless of model. You get the full upscaling and frame-generation capabilities of FSR 3.1 on all supported GPUs, across AMD and NVIDIA, which is the main pull for the tech, as the rival DLSS 3 Frame Generation technology only works on RTX 40-series (or later) GPUs.

AMD FSR 3.1 builds on top of FSR 3 by introducing updates to the upscaler. If you recall, the star attraction of FSR 3 was frame generation, while the underlying upscaling tech was carried over from FSR 2.2. FSR 3.1 introduces some much-needed improvements to upscaling quality, and adds new upscaler quality presets, including a native AA mode analogous to NVIDIA's DLAA. These improvements in upscaler quality let you trade image quality for performance more effectively. You can find all the resources you need on FSR 3.1 here.
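The quality presets mentioned above map to fixed per-axis scale factors between output and internal render resolution. As a rough sketch, the values below are the publicly documented FSR 2/3 ratios; assume FSR 3.1 keeps them, and treat the exact numbers as subject to change between SDK releases:

```python
# Per-mode scale factors as documented for FSR 2/3 quality presets
# (illustrative; check the GPUOpen docs for the current values).
FSR_SCALE = {
    "NativeAA": 1.0,          # DLAA-like: render at full output resolution
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "UltraPerformance": 3.0,
}

def render_resolution(output_w: int, output_h: int, mode: str) -> tuple:
    """Internal render resolution the upscaler starts from."""
    scale = FSR_SCALE[mode]
    return round(output_w / scale), round(output_h / scale)

# For a 4K output target, "Quality" mode renders internally at 1440p.
print(render_resolution(3840, 2160, "Quality"))
```

This is the trade the article describes: a lower scale factor means more rendered pixels (better quality), a higher one means fewer (better performance), with the new native AA mode running the upscaler pass at 1.0x purely for anti-aliasing.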

AMD Releases Adrenalin Edition 23.40.14.01 for Agility SDK Support

AMD today released its first drivers to implement Microsoft's DirectX Agility SDK version 1.613, which introduces the new DirectX 12 Work Graphs 1.0 API. AMD has worked extensively on implementing the new technology, which, among other things, significantly reduces the CPU's role in most common shader graphics workloads and improves GPU shader thread saturation, as the GPU spends less time waiting on the CPU's share of shader workloads. The new AMD Software Adrenalin 23.40.14.01 drivers are off the main driver update channel, and are intended for developers and enthusiasts to start exploring GPU Work Graphs, GPU Upload Heaps, and certain features of Shader Model 6.8 on supported AMD Radeon GPUs. There are some known issues with the driver specific to AMD's implementation of GPU Work Graphs, and with the latest version of the Agility SDK in general, which are listed below.

DOWNLOAD: AMD Software Adrenalin 23.40.14.01 for Agility SDK Support

Microsoft's Latest Agility SDK Released with Cutting-edge Work Graphs API

Microsoft's DirectX department is scheduled to show off several innovations at this month's Game Developers Conference (GDC), although a late February preview already revealed its DirectSR Super Resolution API. Today, retail support for Shader Model 6.8 and Work Graphs has been introduced with an updated version of the company's Agility Software Development Kit. Program manager Joshua Tucker stated that these technologies will be showcased on stage at GDC 2024—Shader Model 6.8 arrives with a "host of new features for shader developers, including Start Vertex/Instance Location, Wave Size Range, and Expanded Comparison Sampling." A linked supplementary article—D3D12 Work Graphs—provides an in-depth look at the cutting-edge API's underpinnings, best consumed if you have an hour or two to spare.

Tucker summarized the Work Graphs API: "(it) utilizes the full potential of your GPU. It's not just an upgrade to the existing models, but a whole new paradigm that enables more efficient, flexible, and creative game development. With Work Graphs, you can generate and schedule GPU work on the fly, without relying on the host. This means you can achieve higher performance, lower latency, and greater scalability for your games with tasks such as culling, binning, chaining of compute work, and much more." AMD and NVIDIA are offering driver support on day one. Team Red has discussed the launch of "Microsoft DirectX 12 Work Graphs 1.0 API" in a GPUOpen blog—they confirm that "a deep dive" into the API will happen during their Advanced Graphics Summit presentation. NVIDIA's Wessam Bahnassi has also discussed the significance of Work Graphs—check out his "Advancing GPU-driven rendering" article. Graham Wihlidal—of Epic Games—is excited about the latest development: "we have been advocating for something like this for a number of years, and it is very exciting to finally see the release of Work Graphs."
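The "generate and schedule GPU work on the fly, without relying on the host" idea can be modeled on the CPU as a graph of nodes where each node consumes a record and emits records for downstream nodes directly, with no host round trip between stages. The sketch below is purely conceptual: the node names are invented, and the real API lives in D3D12/HLSL, not Python:

```python
from collections import deque

def run_work_graph(nodes, entry, initial_records):
    """Toy model of the Work Graphs idea: nodes enqueue work for other
    nodes themselves, instead of the CPU issuing a dispatch per stage."""
    queue = deque((entry, rec) for rec in initial_records)
    trace = []
    while queue:
        name, rec = queue.popleft()
        trace.append(name)
        queue.extend(nodes[name](rec))  # children scheduled "on the GPU"
    return trace

# Hypothetical two-stage graph: a culling node feeds only visible
# items onward to a shading node.
nodes = {
    "cull": lambda item: [("shade", item)] if item["visible"] else [],
    "shade": lambda item: [],
}
work = [{"visible": True}, {"visible": False}, {"visible": True}]
trace = run_work_graph(nodes, "cull", work)
```

In the toy run, the shading stage only ever sees the two visible items, and at no point did the "host" decide how much shading work to launch; that amplification happened inside the graph, which is exactly the culling/binning/compute-chaining use case Tucker describes.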

Microsoft DirectSR Super Resolution API Brings Together DLSS, FSR and XeSS

Microsoft has just announced that their new DirectSR Super Resolution API for DirectX will provide a unified interface for developers to implement super resolution in their games. This means that game studios no longer have to choose between DLSS, FSR, XeSS, or spend additional resources to implement, bug-test and support multiple upscalers. For gamers this is huge news, too, because they will be able to run upscaling in all DirectSR games—no matter the hardware they own. While AMD FSR and Intel XeSS run on all GPUs from all vendors, NVIDIA DLSS is exclusive to Team Green's hardware. With their post, Microsoft also confirms that DirectSR will not replace FSR/DLSS/XeSS with a new upscaler by Microsoft, rather that it builds on existing technologies that are already available, unifying access to them.

While we have to wait until March 21 for more details to be revealed at GDC 2024, Microsoft's Joshua Tucker stated in a blog post: "We're thrilled to announce DirectSR, our new API designed in partnership with GPU hardware vendors to enable seamless integration of Super Resolution (SR) into the next generation of games. Super Resolution is a cutting-edge technique that increases the resolution and visual quality in games. DirectSR is the missing link developers have been waiting for when approaching SR integration, providing a smoother, more efficient experience that scales across hardware. This API enables multi-vendor SR through a common set of inputs and outputs, allowing a single code path to activate a variety of solutions including NVIDIA DLSS Super Resolution, AMD FidelityFX Super Resolution, and Intel XeSS. DirectSR will be available soon in the Agility SDK as a public preview, which will enable developers to test it out and provide feedback. Don't miss our DirectX State of the Union at GDC to catch a sneak peek at how DirectSR can be used with your games!"
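The promise of "a single code path" with "a common set of inputs and outputs" boils down to dispatch: the game supplies one set of resources, and whichever vendor solution the driver exposes gets used. The sketch below invents all its names purely to illustrate that shape; the real DirectSR interface ships with the Agility SDK and will look different:

```python
# Hypothetical preference order for picking a super-resolution backend.
UPSCALER_PREFERENCE = ("DLSS", "FSR", "XeSS")

def pick_upscaler(exposed_by_driver):
    """Single code path: choose whichever vendor solution is available."""
    for name in UPSCALER_PREFERENCE:
        if name in exposed_by_driver:
            return name
    return None

def upscale(frame, exposed_by_driver):
    # Common inputs (color, depth, motion vectors) go in; the selected
    # backend produces the high-resolution output.
    backend = pick_upscaler(exposed_by_driver)
    if backend is None:
        raise RuntimeError("no super-resolution backend available")
    return {"backend": backend, "output_size": frame["target_size"]}
```

The point for developers is that `upscale` is written once: on an RTX system it might resolve to DLSS, on anything else to FSR or XeSS, with no per-vendor integration work in the game itself.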

Khronos Publishes Vulkan Roadmap 2024, Highlights Expanded 3D Features

Today, The Khronos Group, an open consortium of industry-leading companies creating advanced interoperability standards, announced the latest roadmap milestone for Vulkan, the cross-platform 3D graphics and compute API. The Vulkan roadmap targets the "immersive graphics" market, made up of mid- to high-end smartphones, tablets, laptops, consoles, and desktop devices. The Vulkan Roadmap 2024 milestone captures a set of capabilities that are expected to be supported in new products for that market, beginning in 2024. The roadmap specification provides a significant increase in functionality for the targeted devices and sets the evolutionary direction of the API, including both new hardware capabilities and improvements to the programming model for Vulkan developers.

Vulkan Roadmap 2024 is the second milestone release on the Vulkan Roadmap. Products that support it must be Vulkan 1.3 conformant and support the extensions and capabilities defined in both the 2022 and 2024 Roadmap specifications. Vulkan roadmap specifications use the Vulkan Profile mechanism to help developers build portable Vulkan applications; roadmap requirements are expressed in machine-readable JSON files, and tooling in the Vulkan SDK auto-generates code that makes it easy for developers to query for and enable profile support in their applications.
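The machine-readable-JSON approach can be pictured as a requirements blob checked against what a device reports. The sketch below is a deliberately reduced, hypothetical version of that idea: the real profile schema and the generated query helpers in the Vulkan SDK's Profiles library are considerably richer, and the extension name here is only an example:

```python
# Illustrative stand-in for a roadmap profile definition (not the real
# schema; actual profiles are JSON files shipped with the Vulkan SDK).
ROADMAP_SKETCH = {
    "minApiVersion": (1, 3),
    "requiredExtensions": ["VK_KHR_example_extension"],  # hypothetical
}

def profile_supported(device, profile):
    """Check a device's reported capabilities against a profile."""
    if tuple(device["apiVersion"]) < tuple(profile["minApiVersion"]):
        return False
    return all(ext in device["extensions"]
               for ext in profile["requiredExtensions"])

conformant = {"apiVersion": (1, 3), "extensions": ["VK_KHR_example_extension"]}
older = {"apiVersion": (1, 2), "extensions": ["VK_KHR_example_extension"]}
```

An application would make one such query at startup and either enable the profile's full feature set or fall back, instead of probing dozens of individual extensions and limits by hand, which is the portability win the roadmap mechanism is after.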

Pixelworks & NetEase Partner Up on Entirely New Visual Mobile Experiences

Pixelworks, Inc., a leading provider of visual processing solutions, today announced a partnership around Revelation, a flagship game IP developed by the Thunderfire Business Group, a game studio of leading game developer NetEase Games, providing visual processing solutions to optimize the display quality of the mobile version of the game. Revelation Mobile integrates Pixelworks' Rendering Accelerator SDK to significantly enhance visual quality for mobile gaming. Coupled with a Pixelworks X7 visual processor, the Rendering Accelerator works as a bridge, delivering an exceptional 120 FPS visual experience on mobile devices with low power consumption, enabling a more immersive experience for mobile gamers.

Revelation Mobile is a flagship mobile game developed by NetEase Games. Centered around the theme of oriental fantasy, the game creates a 3D fantasy world where characters can soar to the heights of the heavens and plunge into the depths of the seas. It faithfully recreates the vast and beautiful Cloud Song Continent. The game boasts stunning, eye-catching visuals, from the boundless cloudscapes to the enchanting underwater scenes, all meticulously crafted to bring the world to life. Rich colors, together with impressive lighting effects, make the Cloud Song Continent, a realm where diverse human landscapes intertwine with natural beauty, even more vivid and vibrant. Players follow their characters on a journey of growth, where navigating vast expanses of distant skies and deep oceans lets them embrace an innate sense of freedom, while taking on the challenges of pioneering exploration and gathering unique life experiences along the way.

KIOXIA Donates Command Set Specification to Software-Enabled Flash Project

KIOXIA America, Inc. today announced that it has donated a command set specification to the Linux Foundation's vendor-neutral Software-Enabled Flash™ Project. Built to deliver on the promise of software-defined flash, Software-Enabled Flash technology gives storage developers control over their data placement, latency outcomes, and workload isolation requirements. Through its open API and SDKs, hyperscale environments can optimize their own flash protocols, such as Flexible Data Placement (FDP) or Zoned Namespace (ZNS), while accelerating adoption of new flash technologies. This unique combination of open source software and purpose-built hardware can help data centers maximize the value of flash memory. KIOXIA has developed working samples of hardware modules for hyperscalers, storage developers, and application developers.

"We are delighted to provide command set specifications to the Software-Enabled Flash Project," said Eric Ries, senior vice president and general manager of the Memory and Storage Strategy Division for KIOXIA America, Inc. "This is an important step that allows the ecosystem to bring products to market, and enables customers to extract the maximum value from flash memory."

OpenHW Group Announces Tape Out of RISC-V-based CORE-V MCU Development Kit

OpenHW Group today announced that the industry's most comprehensive Development Kit for an open-source RISC-V MCU is now available to be ordered. The OpenHW CORE-V MCU DevKit includes an open-source printed circuit board (PCB) which integrates OpenHW's CORE-V MCU and various peripherals, a software development kit (SDK) with a full-featured Eclipse-based integrated development environment (IDE), as well as connectivity to Amazon Web Services (AWS) via AWS IoT ExpressLink for secure and reliable connectivity between IoT devices and AWS cloud services.

The comprehensive open-source CORE-V MCU DevKit enables software development for embedded, internet-of-things (IoT), and artificial intelligence (AI)-driven applications. The CORE-V MCU is based on the open-source CV32E40P embedded-class processor, a small, efficient, 32-bit, in-order open-source RISC-V core with a four-stage pipeline that implements the RV32IM[F]C RISC-V instruction extensions.

DEEPX Announces State-of-the-Art AI Chip Product Lineup

DEEPX, a leading AI semiconductor technology company, aims to drive innovation in the rapidly evolving edge AI landscape with its state-of-the-art, low-power, high-performance AI chip product lineup. With a focus on revolutionizing application areas such as smart cities, surveillance, smart factories, and other industries, DEEPX unveiled its latest AI semiconductor solutions at the 2023 Samsung Foundry Forum (SFF), under the theme of "For AI Everywhere."

Recognizing the importance of collaboration and technological partnerships, DEEPX leveraged Samsung Electronics' foundry processes, harnessing the power of 5 nm, 14 nm, and 28 nm technologies for its semiconductor chip designs. As a result, the company has developed a suite of four high-performance, energy-efficient AI semiconductor products: DX-L1, DX-L2, DX-M1, and DX-H1. Each product has been specifically engineered to cater to the unique demands of various market segments, from ultra-compact sensors with minimal data processing requirements to AI-intensive applications such as robotics, computer vision, autonomous vehicles, and many others.

AMD FidelityFX SDK 1.0 Available Now

We are happy to share that our AMD FidelityFX Software Development Kit (SDK) is now available for download!

What is the AMD FidelityFX SDK?
The AMD FidelityFX SDK is our new, easy-to-integrate solution for developers looking to include AMD FidelityFX technologies in their games without any of the hassle of complicated porting procedures. In a nutshell, it is AMD FidelityFX graphics middleware.

Half-Life 2 Path Tracing Mod Highly Praised for Adding Extra Visual Flair

A Half-Life 2 modder, Igor Zdrowowicz, has managed to integrate path tracing into the game - with striking results, even at an early stage of development. His project - codenamed "HL2RTX" - has been in progress for a handful of months, and the modder has integrated a ray tracing system into Valve's classic shooter (sort of) via NVIDIA's RTX Remix SDK. That package only recently became open source, so Zdrowowicz stated that a Portal RTX-derived binary was the tool of choice.

NVIDIA's Lightspeed Studios has already created an impressive RTX conversion of Valve's fan-favorite puzzle-platform game Portal (2007), and modders have been poring over the underlying technology and update techniques. A number of old favorites, running on past iterations of the Source Engine (DirectX 8 and 9), are getting the sprucing-up treatment from the amateur mod community. Zdrowowicz is expected to have an easier time with his current conversion project, thanks to an official release of NVIDIA's RTX Remix toolkit (albeit in early access).

AMD Brings ROCm to Consumer GPUs on Windows OS

AMD has published an exciting development for its Radeon Open Compute Ecosystem (ROCm) users today. ROCm is now coming to the Windows operating system, and the company has extended ROCm support to consumer graphics cards instead of only professional-grade GPUs. This milestone is essential for making AMD's GPU family more competitive with NVIDIA and its CUDA-accelerated GPUs. For those unaware, AMD ROCm is a software stack designed for GPU programming. Similar to NVIDIA's CUDA, ROCm is designed for AMD GPUs and was historically limited to Linux-based OSes and GFX9, CDNA, and professional-grade RDNA GPUs.

However, according to documents obtained by Tom's Hardware (which are behind a login wall), AMD has brought ROCm support to the Radeon RX 6900 XT, Radeon RX 6600, and R9 Fury GPUs. What is interesting is not the inclusion of the RX 6900 XT and RX 6600, but the support for the R9 Fury, an eight-year-old graphics card. Curiously, of these three GPUs, only the R9 Fury has full ROCm support, while the RX 6900 XT has HIP SDK support and the RX 6600 has only HIP runtime support. To make matters even more complicated, the consumer-grade R9 Fury has full ROCm support only on Linux, not Windows. The reason for this strange selection of support has yet to be discovered. Still, it is a step in the right direction, as AMD has yet to enable more functionality on Windows and on more consumer GPUs to compete with NVIDIA.

AMD Introduces Alveo MA35D Media Accelerator

AMD today announced the AMD Alveo MA35D media accelerator featuring two 5 nm, ASIC-based video processing units (VPUs) supporting the AV1 compression standard and purpose-built to power a new era of live interactive streaming services at scale. With over 70% of the global video market being dominated by live content, a new class of low-latency, high-volume interactive streaming applications are emerging such as watch parties, live shopping, online auctions, and social streaming.

The Alveo MA35D media accelerator delivers the high channel density (up to 32x 1080p60 streams per card), power efficiency, and ultra-low-latency performance critical to reducing the skyrocketing infrastructure costs of scaling such compute-intensive content delivery. Compared to the previous-generation Alveo U30 media accelerator, the Alveo MA35D delivers up to 4x higher channel density, up to 4x lower latency in 4K, and 1.8x greater compression efficiency at the same VMAF score—a common video quality metric.

DirectX 12 API New Feature Set Introduces GPU Upload Heaps, Enables Simultaneous Access to VRAM for CPU and GPU

Microsoft has implemented two new features in its DirectX 12 API - GPU Upload Heaps and Non-Normalized Sampling have been added via the latest Agility SDK 1.710.0 preview, and the former looks to be the more intriguing of the pair. The SDK preview, officially introduced on Friday, March 31, is only accessible to developers at the present time. Support has also been enabled via the latest graphics drivers issued by NVIDIA, Intel, and AMD. The Microsoft team has this to say about the preview version of the GPU Upload Heaps feature in DirectX 12: "Historically a GPU's VRAM was inaccessible to the CPU, forcing programs to have to copy large amounts of data to the GPU via the PCI bus. Most modern GPUs have introduced VRAM resizable base address register (BAR) enabling Windows to manage the GPU VRAM in WDDM 2.0 or later."

They continue to describe how the update allows the CPU to gain access to the pool of VRAM on the connected graphics card: "With the VRAM being managed by Windows, D3D now exposes the heap memory access directly to the CPU! This allows both the CPU and GPU to directly access the memory simultaneously, removing the need to copy data from the CPU to the GPU increasing performance in certain scenarios." This GPU optimization could offer many benefits in the context of computer games, since memory requirements continue to grow in line with an increase in visual sophistication and complexity.
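The "removing the need to copy" claim can be sketched with a deliberately simplified cost model. In the classic D3D12 path, the CPU writes a payload into a system-memory upload heap and a GPU copy then moves it into a default heap in VRAM, so the payload is moved twice; with a CPU-visible VRAM heap, the staging copy disappears. This is an illustrative model only, ignoring bus bandwidth, caching, and write-combining effects:

```python
MIB = 1024 * 1024

def bytes_moved(payload_bytes: int, gpu_upload_heap: bool) -> int:
    """Simplified model: staging path moves the payload twice
    (CPU -> upload heap, then GPU copy -> VRAM); a GPU Upload Heap
    lets the CPU write straight into VRAM, moving it once."""
    return payload_bytes if gpu_upload_heap else 2 * payload_bytes

texture = 64 * MIB  # a hypothetical 64 MiB texture upload
saved = bytes_moved(texture, False) - bytes_moved(texture, True)
```

Under this model, every upload saves one full copy of its own size, which is where the "increasing performance in certain scenarios" comes from: scenarios dominated by CPU-to-GPU data movement benefit most, while GPU-bound rendering sees little change.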

NVIDIA Previews and Releases Path Tracing SDK at GDC 2023

As promised and detailed earlier, NVIDIA has now released the first SDK for Path Tracing. Path tracing, which accurately recreates the physics of all light sources in a scene, has been around for a while, and we have already seen it in demos such as Quake II RTX and Portal RTX. Now, thanks to some previously available NVIDIA tools and features, it could finally be coming to more games, not just tech demos.

According to NVIDIA, what makes path tracing possible now, and more accessible to developers, is the combination of previously available NVIDIA technologies with some new ones, including the new performance multiplier in DLSS 3 called DLSS Frame Generation. DLSS Frame Generation, which works on GeForce RTX 40-series cards and uses the Optical Flow Accelerator, is what made real-time path tracing possible. The RTX Path Tracing SDK, according to NVIDIA, recreates the physics of all light sources in a scene to reproduce what the eye sees in real life. It lets developers build reference path-traced images to ensure that lighting during production is true to life while accelerating the iteration process, or build high-quality photo modes for RT-capable GPUs and real-time, ultra-quality modes that take advantage of the Ada Lovelace architecture.
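At its core, path tracing estimates the light arriving at each point by averaging many random light-path samples rather than solving the rendering equation exactly, which is why it is so expensive and why frame generation and denoising matter. A minimal sketch of that idea, reduced to a one-dimensional toy where the integrand is cos(theta) over a hemisphere (exact integral: pi):

```python
import math
import random

def hemisphere_light_estimate(samples: int, seed: int = 0) -> float:
    """Monte Carlo estimate of the integral of cos(theta) over a
    hemisphere - the averaging trick at the heart of path tracing.
    By Archimedes' hat-box theorem, a uniform z in [0, 1] gives a
    uniform direction on the hemisphere, with pdf 1 / (2*pi)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        cos_theta = rng.random()          # z-component of a random direction
        total += cos_theta * 2 * math.pi  # divide sample by its pdf
    return total / samples
```

With few samples the estimate is noisy, and it converges toward pi only slowly as samples grow; a real path tracer faces the same variance per pixel per bounce, which is exactly the gap that RT hardware, denoisers, and DLSS are used to close.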

Razer Introduces Universal Haptics SDK and Directional Haptics at GDC 2023

Razer, the leading global lifestyle brand for gamers, today announced the release of the Interhaptics universal HD haptic SDK and directional haptics at the Game Developers Conference (GDC) 2023 in San Francisco. This free SDK release focuses on enabling a heightened immersive gaming experience, bringing audio and visual effects to life with HD haptic feedback that can now be completely customized through the Interhaptics SDK.

With today's announcement, Interhaptics, the leading haptic technology platform, has expanded its support to include PlayStation 5*, PlayStation 4, Meta Quest 2, X-input controllers, iOS, and Android devices for game engines such as Unity and Unreal Engine. Additionally, the haptic composer software has been upgraded to include in-app testing for DualSense wireless controllers for PS5 and select Razer HyperSense headsets. Interhaptics can now deploy HD haptics on over 5 billion devices across multiple ecosystems. Developers can sign up for the waiting list for the Razer Kraken V3 HyperSense Dev Kit with programmable directional HD haptics at the Interhaptics website.

Lattice Extends Low Power Leadership with New Lattice Avant FPGA Platform

Lattice Semiconductor, the low power programmable leader, today unveiled Lattice Avant, a new FPGA platform purpose-built to bring the company's power efficient architecture, small size, and performance leadership to mid-range FPGAs. Lattice Avant offers best-in-class power efficiency, advanced connectivity, and optimized compute that enable Lattice to address an expanded set of customer applications across the Communications, Computing, Industrial, and Automotive markets.

"With Lattice Avant, we extend our low power leadership position in the FPGA industry and are poised to continue our rapid pace of innovation, while also doubling the addressable market for our product portfolio," said Jim Anderson, President and CEO, Lattice Semiconductor. "We created Avant to address our customers' need for compelling mid-range FPGA solutions, and we're excited to help them accelerate their designs with new levels of power efficiency and performance."

Innodisk Proves AI Prowess with Launch of FPGA Machine Vision Platform

Innodisk, a leading global provider of industrial-grade flash storage, DRAM memory and embedded peripherals, has announced its latest step into the AI market, with the launch of EXMU-X261, an FPGA Machine Vision Platform. Powered by AMD's Xilinx Kria K26 SOM, which was designed to enable smart city and smart factory applications, Innodisk's FPGA Machine Vision Platform is set to lead the way for industrial system integrators looking to develop machine vision applications.

Automated defect inspection, a key machine vision application, is an essential technology in modern manufacturing. Automated visual inspection guarantees that the product works as expected and meets specifications. In these cases, it is vital that a fast and highly accurate inspection system is used. Without AI, operators must manually inspect each product, taking an average of three seconds per item. Now, with the help of AI solutions such as Innodisk's FPGA Machine Vision Platform, product inspection in factories can be automated, and the end result is not only faster and cheaper, but can be completely free of human error.

HaptX Introduces Industry's Most Advanced Haptic Gloves, Priced for Scalable Deployment

HaptX Inc., the leading provider of realistic haptic technology, today announced the availability of pre-orders of the company's new HaptX Gloves G1, a ground-breaking haptic device optimized for the enterprise metaverse. HaptX has engineered HaptX Gloves G1 with the features most requested by HaptX customers, including improved ergonomics, multiple glove sizes, wireless mobility, new and improved haptic functionality, and multiplayer collaboration, all priced as low as $4,500 per pair - a fraction of the cost of the award-winning HaptX Gloves DK2.

"With HaptX Gloves G1, we're making it possible for all organizations to leverage our lifelike haptics," said Jake Rubin, Founder and CEO of HaptX. "Touch is the cornerstone of the next generation of human-machine interface technologies, and the opportunities are endless." HaptX Gloves G1 leverages advances in materials science and the latest manufacturing techniques to deliver the first haptic gloves that fit like a conventional glove. The Gloves' digits, palm, and wrist are soft and flexible for uninhibited dexterity and comfort. Available in four sizes (Small, Medium, Large, and Extra Large), these Gloves offer the best fit and performance for all adult hands. Inside the Gloves are hundreds of microfluidic actuators that physically displace your skin, so when you touch and interact with virtual objects, the objects feel real.

Tap Systems Launches TapXR - A Wrist Worn Keyboard/Controller For AR and VR

Tap Systems is excited to announce the release of the new TapXR wearable keyboard and controller. The TapXR is the first wrist-worn device that allows users to type, input commands, and navigate menus. It allows fast, accurate, discreet, and eyes-free texting and control with any Bluetooth device, including phones, tablets, smart TVs, and virtual and augmented reality headsets. TapXR works by sensing the user's finger taps on any surface and decoding them into digital signals.

While conventional hand gestures are slow, error-prone, and fatiguing, tapping is fast, accurate, and does not cause visual or physical fatigue. Tap users have achieved typing speeds of over 70 words per minute using one hand. While hand tracking supports relatively few gestures and has no haptic feedback, TapXR has over 100 unique commands and is inherently tactile.

Intel Accelerates Developer Innovation with Open, Software-First Approach

On Day 2 of Intel Innovation, Intel illustrated how its efforts and investments to foster an open ecosystem catalyze community innovation, from silicon to systems to apps and across all levels of the software stack. Through an expanding array of platforms, tools and solutions, Intel is focused on helping developers become more productive and more capable of realizing their potential for positive social good. The company introduced new tools to support developers in artificial intelligence, security and quantum computing, and announced the first customers of its new Project Amber attestation service.

"We are making good on our software-first strategy by empowering an open ecosystem that will enable us to collectively and continuously innovate," said Intel Chief Technology Officer Greg Lavender. "We are committed members of the developer community and our breadth and depth of hardware and software assets facilitate the scaling of opportunities for all through co-innovation and collaboration."

NVIDIA Jetson AGX Orin 32GB Production Modules Now Available

Bringing new AI and robotics applications and products to market, or supporting existing ones, can be challenging for developers and enterprises. The NVIDIA Jetson AGX Orin 32 GB production module—available now—is here to help. Nearly three dozen technology providers in the NVIDIA Partner Network worldwide are offering commercially available products powered by the new module, which provides up to a 6x performance leap over the previous generation.

With a wide range of offerings from Jetson partners, developers can build and deploy feature-packed Orin-powered systems sporting cameras, sensors, software and connectivity suited for edge AI, robotics, AIoT and embedded applications. Production-ready systems with options for peripherals enable customers to tackle challenges in industries from manufacturing, retail and construction to agriculture, logistics, healthcare, smart cities, last-mile delivery and more.

Mobileye Launches EyeQ Kit: New SDK for Advanced Safety and Driver-Assistance Systems

Mobileye, an Intel company, has launched the EyeQ Kit - its first software development kit (SDK) for the EyeQ system-on-chip that powers driver-assistance and future autonomous technologies for automakers worldwide. Built to leverage the powerful and highly power-efficient architecture of the upcoming EyeQ 6 High and EyeQ Ultra processors, EyeQ Kit allows automakers to utilize Mobileye's proven core technology, while deploying their own differentiated code and human-machine interface tools on the EyeQ platform.

"EyeQ Kit allows our customers to benefit from the best of both worlds — Mobileye's proven and validated core technologies, along with their own expertise in delivering unique driver experiences and interfaces. As more core functions of vehicles are defined in software, we know our customers will want the flexibility and capacity they need to differentiate and define their brands through code."
- Prof. Amnon Shashua, Mobileye president and chief executive officer

Synopsys Introduces Industry's Highest Performance Neural Processor IP

Addressing increasing performance requirements for artificial intelligence (AI) systems on chip (SoCs), Synopsys, Inc. today announced its new neural processing unit (NPU) IP and toolchain that delivers the industry's highest performance and support for the latest, most complex neural network models. Synopsys DesignWare ARC NPX6 and NPX6FS NPU IP address the demands of real-time compute with ultra-low power consumption for AI applications. To accelerate application software development for the ARC NPX6 NPU IP, the new DesignWare ARC MetaWare MX Development Toolkit provides a comprehensive compilation environment with automatic neural network algorithm partitioning to maximize resource utilization.

"Based on our seamless experience integrating the Synopsys DesignWare ARC EV Processor IP into our successful NU4000 multi-core SoC, we have selected the new ARC NPX6 NPU IP to further strengthen the AI processing capabilities and efficiency of our products when executing the latest neural network models," said Dor Zepeniuk, CTO at Inuitive, a designer of powerful 3D and vision processors for advanced robotics, drones, augmented reality/virtual reality (AR/VR) devices and other edge AI and embedded vision applications. "In addition, the easy-to-use ARC MetaWare tools help us take maximum advantage of the processor hardware resources, ultimately helping us to meet our performance and time-to-market targets."