News Posts matching #API

KIOXIA Announces the First Samples of Hardware that Supports the Linux Foundation's Software-Enabled Flash Community Project

KIOXIA America, Inc. today announced the availability of the first hardware samples that support the Linux Foundation's vendor-neutral Software-Enabled Flash Community Project, which is making flash software-defined. The company is expecting to deliver customer samples in August 2023. Built for the demanding needs of hyperscale environments, Software-Enabled Flash technology helps hyperscale cloud providers and storage developers maximize the value of flash memory. The hardware from KIOXIA is the first step to putting this working technology in the hands of developers.

The first running units will be showcased in live demonstrations in the KIOXIA booth (#307) next week at Flash Memory Summit 2023 (FMS 2023). This new class of drive consists of purpose-built, media-centric flash hardware focused on hyperscale requirements that work with an open source API and libraries to provide the needed functionality. By unlocking the power of flash, this technology breaks free from legacy hard disk drive (HDD) protocols and creates a platform specific to flash media in a hyperscale environment.

Leading Cloud Service, Semiconductor, and System Providers Unite to Form Ultra Ethernet Consortium

Announced today, Ultra Ethernet Consortium (UEC) is bringing together leading companies for industry-wide cooperation to build a complete Ethernet-based communication stack architecture for high-performance networking. Artificial Intelligence (AI) and High-Performance Computing (HPC) workloads are rapidly evolving and require best-in-class functionality, performance, interoperability and total cost of ownership, without sacrificing developer and end-user friendliness. The Ultra Ethernet solution stack will capitalize on Ethernet's ubiquity and flexibility for handling a wide variety of workloads while being scalable and cost-effective.

Ultra Ethernet Consortium is founded by companies with a long-standing history and experience in high-performance solutions, each contributing significantly to the broader high-performance ecosystem in an egalitarian manner. The founding members are AMD, Arista, Broadcom, Cisco, Eviden (an Atos Business), HPE, Intel, Meta and Microsoft, who collectively bring decades of experience in networking, AI, cloud and high-performance computing-at-scale deployments.

AMD FidelityFX SDK 1.0 Available Now

We are happy to share that our AMD FidelityFX Software Development Kit (SDK) is now available for download!

What is the AMD FidelityFX SDK?
The AMD FidelityFX SDK is our new and easy to integrate solution for developers looking to include AMD FidelityFX technologies into their games without any of the hassle of complicated porting procedures. In a nutshell, it is AMD FidelityFX graphics middleware.

NVIDIA Collaborates With Microsoft to Accelerate Enterprise-Ready Generative AI

NVIDIA today announced that it is integrating its NVIDIA AI Enterprise software into Microsoft's Azure Machine Learning to help enterprises accelerate their AI initiatives. The integration will create a secure, enterprise-ready platform that enables Azure customers worldwide to quickly build, deploy and manage customized applications using the more than 100 NVIDIA AI frameworks and tools that come fully supported in NVIDIA AI Enterprise, the software layer of NVIDIA's AI platform.

"With the coming wave of generative AI applications, enterprises are seeking secure accelerated tools and services that drive innovation," said Manuvir Das, vice president of enterprise computing at NVIDIA. "The combination of NVIDIA AI Enterprise software and Azure Machine Learning will help enterprises speed up their AI initiatives with a straight, efficient path from development to production."

Imagination Technologies Launches the IMG CXM GPU

Imagination Technologies is bringing seamless visual experiences to cost-sensitive consumer devices with the new IMG CXM GPU range, which includes the smallest GPU to support HDR user interfaces natively.

Consumers are looking for visuals on their smart home platforms that are as detailed, smooth, and responsive as the experience they are accustomed to on mobile devices. At the same time, ambitious content providers are aligning the look and feel of their applications' user interfaces with their cinematic content, by integrating advanced features such as 4K and HDR.

DirectX 12 API New Feature Set Introduces GPU Upload Heaps, Enables Simultaneous Access to VRAM for CPU and GPU

Microsoft has added two new features to its DirectX 12 API - GPU Upload Heaps and Non-Normalized Sampling - via the latest Agility SDK 1.710.0 preview, and the former looks to be the more intriguing of the pair. The SDK preview is accessible only to developers at present, since its official introduction on Friday, March 31. Support has also been enabled in the latest graphics drivers issued by NVIDIA, Intel, and AMD. The Microsoft team has this to say about the preview version of the GPU Upload Heaps feature in DirectX 12: "Historically a GPU's VRAM was inaccessible to the CPU, forcing programs to have to copy large amounts of data to the GPU via the PCI bus. Most modern GPUs have introduced VRAM resizable base address register (BAR) enabling Windows to manage the GPU VRAM in WDDM 2.0 or later."

They continue to describe how the update allows the CPU to gain access to the pool of VRAM on the connected graphics card: "With the VRAM being managed by Windows, D3D now exposes the heap memory access directly to the CPU! This allows both the CPU and GPU to directly access the memory simultaneously, removing the need to copy data from the CPU to the GPU increasing performance in certain scenarios." This GPU optimization could offer many benefits in the context of computer games, since memory requirements continue to grow in line with an increase in visual sophistication and complexity.
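The difference between the legacy upload path and a GPU upload heap comes down to how many times each byte is written. The following is a conceptual sketch in plain Python, not the actual D3D12 API: the buffer objects and function names are invented for illustration, with `bytearray`s standing in for CPU-visible and GPU-visible allocations.

```python
# Conceptual model of D3D12 upload paths (plain Python, NOT the D3D12 API).
# Buffers are modeled as bytearrays; names here are illustrative only.

def legacy_upload(data: bytes) -> bytearray:
    """Legacy path: the CPU fills a staging buffer in an upload heap,
    then a copy engine transfers it into VRAM (every byte written twice)."""
    staging = bytearray(data)      # CPU-visible staging buffer
    vram = bytearray(len(staging))
    vram[:] = staging              # extra copy across the PCIe bus
    return vram

def gpu_upload_heap(data: bytes) -> bytearray:
    """GPU upload heap (enabled by resizable BAR): the CPU writes the
    GPU-visible allocation directly, with no intermediate staging copy."""
    vram = bytearray(len(data))    # VRAM-backed heap mapped for CPU access
    vram[:] = data                 # single write, no staging buffer
    return vram

payload = bytes(range(256))
# Both paths end with identical VRAM contents; the upload heap just
# skips the intermediate copy.
assert legacy_upload(payload) == gpu_upload_heap(payload)
```

The saved copy is exactly the scenario Microsoft describes: with Windows managing VRAM exposed through resizable BAR, the staging step can be dropped entirely in supporting titles.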

Adobe Unveils Firefly, a Family of new Creative Generative AI

Today, Adobe introduced Firefly, a new family of creative generative AI models, first focused on the generation of images and text effects. Firefly will bring even more precision, power, speed and ease directly into Creative Cloud, Document Cloud, Experience Cloud and Adobe Express workflows where content is created and modified. Firefly will be part of a series of new Adobe Sensei generative AI services across Adobe's clouds.

Adobe has a more than decade-long history of AI innovation, delivering hundreds of intelligent capabilities through Adobe Sensei into applications that hundreds of millions of people rely upon. Features like Neural Filters in Photoshop, Content Aware Fill in After Effects, Attribution AI in Adobe Experience Platform and Liquid Mode in Acrobat empower Adobe customers to create, edit, measure, optimize and review billions of pieces of content with power, precision, speed and ease. These innovations are developed and deployed in alignment with Adobe's AI ethics principles of accountability, responsibility and transparency.

"Generative AI is the next evolution of AI-driven creativity and productivity, transforming the conversation between creator and computer into something more natural, intuitive and powerful," said David Wadhwani, president, Digital Media Business, Adobe. "With Firefly, Adobe will bring generative AI-powered 'creative ingredients' directly into customers' workflows, increasing productivity and creative expression for all creators from high-end creative professionals to the long tail of the creator economy."

AMD Software Adrenalin 23.2.1 Released: Finally Updates for RX 6000 Series and Older

AMD today released its first driver update in more than two months for the Radeon RX 6000 series and RX 5000 series, Adrenalin 23.2.1 WHQL. The drivers introduce optimization for "Forspoken," with up to 7% performance improvement on offer at 4K with an RX 6950 XT GPU, and optimization for "Dead Space" (2023). They also introduce the IREE compiler using the MLIR interface on Vulkan, along with a handful of new Vulkan API extensions. The drivers also catch up on the past two months of optimizations that were previously released only for the RX 7000 series, covering Marvel's Spider-Man Remastered, Hogwarts Legacy, and performance improvements to several games released in the past six months.

Issues fixed with this release include the "Delayed Write Failed" error noticed in Windows 11 22H2; lower-than-expected performance in SpaceEngine; display corruption in the Points Shop section of Steam; YouTube playback issues with Enhanced Sync on some RX 6000 series GPUs; and game crashes noticed with Door Kickers 2, Baldur's Gate (Vulkan), Emergency 4, and Sea of Thieves. These drivers unify the driver release trunk, spanning the RX 400, RX 500, RX Vega, RX 5000, RX 6000, and RX 7000 series.

DOWNLOAD: AMD Software Adrenalin 23.2.1 WHQL

Forspoken Simply Doesn't Work with AMD Radeon RX 400 and RX 500 "Polaris" GPUs

AMD Radeon RX 400 series and RX 500 series graphics cards based on the "Polaris" graphics architecture are simply unable to run "Forspoken," as users on Reddit report. The game has certain DirectX 12 feature-level 12_1 API requirements that the architecture does not meet. Interestingly, NVIDIA's "Maxwell" graphics architecture, which predates AMD "Polaris," supports FL 12_1 and is able to play the game. Popular GPUs from the "Maxwell" generation include the GeForce GTX 970 and GTX 960. Making matters worse, AMD has yet to release an update to its Adrenalin graphics drivers for the RX Vega, RX 5000, and RX 6000 series that includes "Forspoken" optimization. Its latest 23.1.2 beta drivers, which include these optimizations, only support the RX 7000 series RDNA3 graphics cards. It has now been over 50 days since the vast majority of AMD discrete GPUs last received a driver update.

Microsoft and OpenAI Extend Partnership with Additional Investment

Today, we are announcing the third phase of our long-term partnership with OpenAI through a multiyear, multibillion dollar investment to accelerate AI breakthroughs and ensure their benefits are broadly shared with the world.

This agreement follows our previous investments in 2019 and 2021. It extends our ongoing collaboration across AI supercomputing and research and enables each of us to independently commercialize the resulting advanced AI technologies.

HaptX Introduces Industry's Most Advanced Haptic Gloves, Priced for Scalable Deployment

HaptX Inc., the leading provider of realistic haptic technology, today announced the availability of pre-orders of the company's new HaptX Gloves G1, a ground-breaking haptic device optimized for the enterprise metaverse. HaptX has engineered HaptX Gloves G1 with the features most requested by HaptX customers, including improved ergonomics, multiple glove sizes, wireless mobility, new and improved haptic functionality, and multiplayer collaboration, all priced as low as $4,500 per pair - a fraction of the cost of the award-winning HaptX Gloves DK2.

"With HaptX Gloves G1, we're making it possible for all organizations to leverage our lifelike haptics," said Jake Rubin, Founder and CEO of HaptX. "Touch is the cornerstone of the next generation of human-machine interface technologies, and the opportunities are endless." HaptX Gloves G1 leverages advances in materials science and the latest manufacturing techniques to deliver the first haptic gloves that fit like a conventional glove. The Gloves' digits, palm, and wrist are soft and flexible for uninhibited dexterity and comfort. Available in four sizes (Small, Medium, Large, and Extra Large), these Gloves offer the best fit and performance for all adult hands. Inside the Gloves are hundreds of microfluidic actuators that physically displace your skin, so when you touch and interact with virtual objects, the objects feel real.

Intel Accelerates Developer Innovation with Open, Software-First Approach

On Day 2 of Intel Innovation, Intel illustrated how its efforts and investments to foster an open ecosystem catalyze community innovation, from silicon to systems to apps and across all levels of the software stack. Through an expanding array of platforms, tools and solutions, Intel is focused on helping developers become more productive and more capable of realizing their potential for positive social good. The company introduced new tools to support developers in artificial intelligence, security and quantum computing, and announced the first customers of its new Project Amber attestation service.

"We are making good on our software-first strategy by empowering an open ecosystem that will enable us to collectively and continuously innovate," said Intel Chief Technology Officer Greg Lavender. "We are committed members of the developer community and our breadth and depth of hardware and software assets facilitate the scaling of opportunities for all through co-innovation and collaboration."

Intel Data-Center GPU Flex Series "Arctic Sound-M" Launched: Visual Processing, Media, and Inference top Applications

Intel today launched its Arctic Sound-M line of data-center GPUs. These are not positioned as HPC processors like "Ponte Vecchio," but as GPUs targeting cloud-compute providers, with their main applications in the realm of visual processing, media, and AI inferencing. Their most interesting aspect has to be the silicon: the same 6 nm "ACM-G11" and "ACM-G10" chips that power the Arc "Alchemist" client graphics cards, based on the Xe-HPG architecture. Even more interesting are their typical board power values, ranging from 75 W to 150 W. The cards are built in the PCI-Express add-on card form-factor, with cooling solutions optimized for rack airflow.

The marketing name for these cards is simply Intel Data Center GPU Flex, with two models on offer: the Data Center GPU Flex 140 and Flex 170. The Flex 170 is a full-sized add-on card based on the larger ACM-G10 silicon, which has 32 Xe Cores (4,096 unified shaders), whereas the Flex 140, interestingly, is a low-profile dual-GPU card with two smaller ACM-G11 chips, each with 8 Xe Cores (1,024 unified shaders). The two chips appear to share a PCIe bridge chip in the renders. Both models come with four Xe Media Engines featuring hardware-accelerated AV1 encode, as well as XMX AI acceleration, real-time ray tracing, and GDDR6 memory.

Intel Xe iGPUs and Arc Graphics Lack DirectX 9 Support, Rely on API Translation to Play Older Games

So you thought your Arc A380 graphics card, or the Gen12 Xe iGPU in your 12th Gen Core processor, was good enough to munch through your older games from the 2000s and early 2010s? Not so fast. Intel Graphics states that the Xe-LP and Xe-HPG graphics architectures, which power the Gen12 Iris Xe iGPUs and the new Arc "Alchemist" graphics cards, lack native support for the DirectX 9 graphics API. The two rely on API translation layers such as Microsoft's D3D9On12, which attempts to translate D3D9 API commands into D3D12 calls that the drivers can recognize.

Older graphics architectures, such as Gen11 powering "Ice Lake" and Gen9.5 found in all "Skylake" derivatives, feature native support for DirectX 9. However, when paired with Arc "Alchemist" graphics cards, the drivers are designed to engage D3D9On12 to accommodate the discrete GPU, unless the dGPU is disabled. API translation can be unreliable and buggy, and Intel points you to Microsoft and the game developers for support; Intel Graphics won't be providing any.

ÆPIC Leak is an Architectural CPU Bug Affecting 10th, 11th, and 12th Gen Intel Core Processors

The x86 CPU family has been vulnerable to many attacks in recent years. With the arrival of Spectre and Meltdown, we have seen side-channel attacks affect both AMD and Intel designs. However, today we find out that researchers are capable of exploiting Intel's latest 10th, 11th, and 12th generation Core processors with a new CPU bug called ÆPIC Leak. Named after the Advanced Programmable Interrupt Controller (APIC) that handles interrupt requests to regulate multiprocessing, the leak is claimed to be the first "CPU bug able to architecturally disclose sensitive data." Researchers Pietro Borrello (Sapienza University of Rome), Andreas Kogler (Graz University of Technology), Martin Schwarzl (Graz University of Technology), Moritz Lipp (Amazon Web Services), Daniel Gruss (Graz University of Technology), and Michael Schwarz (CISPA Helmholtz Center for Information Security) discovered this flaw in Intel processors.
ÆPIC Leak is the first CPU bug able to architecturally disclose sensitive data. It leverages a vulnerability in recent Intel CPUs to leak secrets from the processor itself: on most 10th, 11th and 12th generation Intel CPUs the APIC MMIO undefined range incorrectly returns stale data from the cache hierarchy. In contrast to transient execution attacks like Meltdown and Spectre, ÆPIC Leak is an architectural bug: the sensitive data gets directly disclosed without relying on any (noisy) side channel. ÆPIC Leak is like an uninitialized memory read in the CPU itself.

A privileged attacker (Administrator or root) is required to access APIC MMIO. Thus, most systems are safe from ÆPIC Leak. However, systems relying on SGX to protect data from privileged attackers would be at risk, thus, have to be patched.

Intel Teams Up with Aible to Fast-Track Enterprise Analytics and AI

Intel's collaboration with Aible enables teams across key industries to leverage artificial intelligence and deliver rapid and measurable business impact. This deep collaboration, which includes engineering optimizations and an innovative benchmarking program, enhances Aible's ability to deliver rapid results to its enterprise customers. When paired with Intel processors, Aible's technology provides a serverless-first approach, allowing developers to build and run applications without having to manage servers, and build modern applications with increased agility and lower total cost of ownership (TCO).

"Today's enterprise IT infrastructure leaders face significant challenges building a foundation that is designed to help business teams drive value from AI initiatives in the data center. We've moved past talking about the potential of AI, as business teams across key industries are experiencing measurable business impact within days, using Intel Xeon Scalable processors with built-in Intel software optimizations with Aible," said Kavitha Prasad, Intel vice president and general manager of Datacenter, AI and Cloud Execution and Strategy.

Intel Unveils Arc Pro Graphics Cards for Workstations and Professional Software

Intel has today unveiled another addition to its discrete Arc "Alchemist" graphics card lineup, this time aimed at the professional market. Intel has prepared three models, called Intel Arc Pro graphics cards, for creators and entry-level pro-vis solutions. All of these GPUs feature hardware-accelerated AV1 encoding and ray tracing support, and are designed to handle AI acceleration inside applications like Adobe Premiere Pro. At the entry point is the small Arc Pro A30M mobile GPU aimed at laptop designs. It delivers 3.5 TeraFLOPs of FP32 compute inside a configurable 35-50 Watt TDP envelope, has eight ray tracing cores, and carries 4 GB of GDDR6 memory. Its display output connectors depend on the OEM's laptop design.

Next, we have the Arc Pro A40, a discrete single-slot GPU. It offers 3.5 TeraFLOPs of FP32 single-precision performance, eight ray tracing cores, and 6 GB of GDDR6 memory, within a listed maximum TDP of 50 Watts. It has four Mini DisplayPort connectors for video output and can drive two monitors at 8K 60 Hz, one at 5K 240 Hz, two at 5K 120 Hz, or four at 4K 60 Hz. Its bigger brother, the Arc Pro A50, is a dual-slot design with 4.8 TeraFLOPs of single-precision FP32 compute, eight ray tracing cores, and 6 GB of GDDR6 memory as well. It has the same video output capability as the A40, with a beefier cooling setup to handle its 75 Watt TDP. All software developed using the oneAPI toolkit can be accelerated on these GPUs, and Intel is working with the industry to adapt professional software for Arc Pro graphics.

KIOXIA Introduces Sample PCIe NVMe Technology-Based Flash Hardware for SEF

Supporting the Linux Foundation's Software-Enabled Flash open-source project, KIOXIA America, Inc. today announced innovative new software-defined technology and sample hardware based on PCIe and NVMe technology. This technology fully uncouples flash storage from legacy HDD protocols, allowing flash to realize its full capability and potential as a storage media. KIOXIA will highlight Software-Enabled Flash at this week's Flash Memory Summit Conference & Expo at its booth #307 on the show floor and present the session, "NVMe Software-Enabled Flash Storage for Hyperscale Data Centers," at the Santa Clara Convention Center.

To reach efficiency at scale, hyperscale cloud storage needs more from flash storage devices that are currently based on hard disk drive protocols created decades ago. To resolve this, the Linux Foundation's Software-Enabled Flash Community Project will enable industry adoption of a software-defined flash API, giving developers the ability to customize flash storage specific to data center, application and workload requirements. The project was created to benefit the storage developer community with a vendor-agnostic, flexible solution that meets the evolving requirements of the modern data center.

Supermicro Launches Multi-GPU Cloud Gaming Solutions Based on Intel Arctic Sound-M

Super Micro Computer, Inc., a global leader in enterprise computing, storage, networking, and green computing technology, is announcing future Total IT Solutions for Android Cloud Gaming and Media Processing & Delivery. These new solutions will incorporate the Intel Data Center GPU, codenamed Arctic Sound-M, and will be supported on several Supermicro servers. Supermicro systems that will contain the new GPUs include: the 4U 10x GPU server for transcoding and media delivery; the Supermicro BigTwin system with up to eight of the GPUs in 2U for media processing applications; the Supermicro CloudDC server for edge AI inferencing; and the Supermicro 2U 2-Node server with three of the GPUs per node, optimized for cloud gaming. Additional systems will be made available later this year.

"Supermicro will extend our media processing solutions by incorporating the Intel Data Center GPU," said Charles Liang, President and CEO, Supermicro. "The new solutions will increase video stream rates and enable lower latency Android cloud gaming. As a result, Android cloud gaming performance and interactivity will increase dramatically with the Supermicro BigTwin systems, while media delivery and transcoding will show dramatic improvements with the new Intel Data Center GPUs. The solutions will expand our market-leading accelerated computing offerings, including everything from Media Processing & Delivery to Collaboration, and HPC."

AMD Software Adrenalin 22.7.1 Released, Includes OpenGL Performance Boost and AI Noise-Suppression

AMD on Tuesday released the AMD Software Adrenalin 22.7.1 drivers, which include several major updates to the feature-set. To begin with, AMD has significantly updated its OpenGL ICD (installable client driver), which can deliver an incredible 79 percent increase in frame-rates at 4K with "Fabulous" settings, as measured on the flagship RX 6950 XT, and up to 75 percent, as measured on the entry-level RX 6400. Also debuting is AMD Noise Suppression, a new feature that cleans up your voice-calls and in-game voice-chats. The software leverages AI to filter out background noises that aren't identified as the prominent foreground speech. Radeon Super Resolution support has been extended to RX 5000 series and RX 6000 series GPUs running on Ryzen processor notebooks with Hybrid graphics setups.

Besides these, Adrenalin 22.7.1 adds optimization for "Swordsman Remake," plus support for Radeon Boost with VRS in "Elden Ring," "Resident Evil Village," and "Valorant." The drivers improve support for the Windows 11 22H2 Update, and for Agility SDK 1.602 and 1.607. A few more Vulkan API extensions are added with this release. Among the handful of issues fixed are lower-than-expected Folding@Home performance on the RX 6000 series; Auto Undervolt disabling idle-fan-stop; "Hitman 3" freezing when switching between windows in exclusive fullscreen mode; blurry web video upscaling on certain RX 6000 series cards; and Enhanced Sync locking framerates to 15 FPS during video playback on extended monitors.

DOWNLOAD: AMD Software Adrenalin 22.7.1

Intel Releases Open Source AI Reference Kits

Intel has released the first set of open source AI reference kits specifically designed to make AI more accessible to organizations in on-prem, cloud and edge environments. First introduced at Intel Vision, the reference kits include AI model code, end-to-end machine learning pipeline instructions, libraries and Intel oneAPI components for cross-architecture performance. These kits enable data scientists and developers to learn how to deploy AI faster and more easily across healthcare, manufacturing, retail and other industries with higher accuracy, better performance and lower total cost of implementation.

"Innovation thrives in an open, democratized environment. The Intel accelerated open AI software ecosystem including optimized popular frameworks and Intel's AI tools are built on the foundation of an open, standards-based, unified oneAPI programming model. These reference kits, built with components of Intel's end-to-end AI software portfolio, will enable millions of developers and data scientists to introduce AI quickly and easily into their applications or boost their existing intelligent solutions."

AMD WMMA Instruction is Direct Response to NVIDIA Tensor Cores

AMD's RDNA3 graphics IP is just around the corner, and we are hearing more information about the upcoming architecture. Historically, as GPUs advance, it is not unusual for companies to add dedicated hardware blocks to accelerate a specific task. Today, AMD engineers updated the backend of the LLVM compiler to include a new instruction called Wave Matrix Multiply-Accumulate (WMMA). This instruction will be present on GFX11, which is the RDNA3 GPU architecture. With WMMA, AMD will offer support for processing 16x16x16-size tensors in FP16 and BF16 precision formats, adding new hardware paths for matrix multiply-accumulate operations. This closely mimics the work NVIDIA is doing with Tensor Cores.

The AMD ROCm 5.2 API update lists the use case for this type of instruction, which you can see below:
rocWMMA provides a C++ API to facilitate breaking down matrix multiply accumulate problems into fragments and using them in block-wise operations that are distributed in parallel across GPU wavefronts. The API is a header library of GPU device code, meaning matrix core acceleration may be compiled directly into your kernel device code. This can benefit from compiler optimization in the generation of kernel assembly and does not incur additional overhead costs of linking to external runtime libraries or having to launch separate kernels.

rocWMMA is released as a header library and includes test and sample projects to validate and illustrate example usages of the C++ API. GEMM matrix multiplication is used as primary validation given the heavy precedent for the library. However, the usage portfolio is growing significantly and demonstrates different ways rocWMMA may be consumed.
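To make the operation concrete: a single WMMA step computes D = A x B + C over fixed-size fragments. The sketch below is plain Python illustrating only the 16x16x16 matrix semantics, not the rocWMMA C++ API or how the work is distributed across GPU wavefront lanes; the function name `wmma` is illustrative.

```python
# Illustrative sketch of WMMA semantics: D = A x B + C on 16x16 fragments.
# Plain Python for clarity only; on hardware this runs across a wavefront.
N = 16  # fragment dimension (16x16x16 multiply-accumulate)

def wmma(a, b, c):
    """Multiply two NxN fragments and accumulate into fragment c."""
    d = [row[:] for row in c]          # copy the accumulator fragment
    for i in range(N):
        for j in range(N):
            acc = d[i][j]
            for k in range(N):         # the "16-deep" K dimension
                acc += a[i][k] * b[k][j]
            d[i][j] = acc
    return d

identity = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
a = [[float(i + j) for j in range(N)] for i in range(N)]
zeros = [[0.0] * N for _ in range(N)]

# Multiplying by the identity with a zero accumulator returns A unchanged.
assert wmma(a, identity, zeros) == a
```

In FP16/BF16 hardware the inputs would be half-precision fragments with a wider accumulator; GEMM kernels are built by tiling many such fragment operations, which is exactly the decomposition rocWMMA's fragment API exposes.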

Intel Arc A370M Graphics Card Tested in Various Graphics Rendering Scenarios

Intel's Arc "Alchemist" graphics cards launched first in the laptop/mobile space, and everyone is wondering just how well this first generation of discrete graphics performs in actual GPU-accelerated workloads. Tellusim Technologies, a software company based in San Diego, managed to get hold of a laptop featuring an Intel Arc A370M mobile graphics card and benchmark it against competing solutions. Instead of the Vulkan API, the team used the D3D12 API for its tests, as Vulkan usually produces lower results on the new Gen12 graphics. With driver version 30.0.101.1736, the GPU was tested in standard rendering workloads involving triangles and batches. Meshlet size is set to 69/169, and the job is as big as 262K meshlets. The total amount of geometry is 20 million vertices and 40 million triangles per frame.

Using tests such as Single DIP (drawing 81 instances with u32 indices without going to the meshlet level), Mesh Indexing (Mesh Shader emulation), MDI/ICB (Multi-Draw Indirect or Indirect Command Buffer), Mesh Shader (Mesh Shaders rendering mode), and Compute Shader (Compute Shader rasterization), the Arc GPU produced some exciting numbers, measured in millions or billions of triangles. Below, you can see the results of these tests.

Intel Arc Alchemist GPUs Get Vulkan 1.3 Compatibility

Part of the process of building a graphics card is designing compatibility with the latest graphics APIs, such as DirectX, OpenGL, and Vulkan. Today, we have confirmation that Intel's Arc "Alchemist" discrete graphics cards will be compatible with Vulkan's latest iteration, version 1.3. In January, the Khronos Group, the consortium behind the Vulkan API, released its regular two-year update to the standard. Graphics card vendors like NVIDIA and AMD announced support immediately with their drivers. Today, the Khronos website officially lists Intel Arc "Alchemist" mobile graphics cards as compatible with Vulkan 1.3, covering the Arc A770M, A730M, A550M, A370M, and A350M GPUs.

At the time of writing, there is no official announcement for the desktop cards yet. However, given that the mobile SKUs support the latest standard, it is extremely likely that the desktop variants will carry the same level of support.

Intel Releases OpenVINO 2022.1 to Advance AI Inferencing for Developers

Since OpenVINO launched in 2018, Intel has enabled hundreds of thousands of developers to dramatically accelerate AI inferencing performance, starting at the edge and extending to the enterprise and the client. Today, ahead of MWC Barcelona 2022, the company launched a new version of the Intel Distribution of OpenVINO Toolkit. New features are built upon three-and-a-half years of developer feedback and include a greater selection of deep learning models, more device portability choices and higher inferencing performance with fewer code changes.

"The latest release of OpenVINO 2022.1 builds on more than three years of learnings from hundreds of thousands of developers to simplify and automate optimizations. The latest upgrade adds hardware auto-discovery and automatic optimization, so software developers can achieve optimal performance on every platform. This software plus Intel silicon enables a significant AI ROI advantage and is deployed easily into the Intel-based solutions in your network," said Adam Burns, vice president, OpenVINO Developer Tools in the Network and Edge Group.
Dec 18th, 2024 10:31 EST